

Title:
COMPUTERIZED SYSTEM AND METHOD FOR OBTAINING INFORMATION ABOUT A REGION OF AN OBJECT
Document Type and Number:
WIPO Patent Application WO/2020/150271
Kind Code:
A1
Abstract:
A method, system and computer readable medium for providing information about a region of a sample. The method includes (i) obtaining, by an imager, multiple images of the region, wherein the multiple images differ from each other by at least one parameter; (ii) receiving or generating multiple reference images; (iii) generating multiple difference images that represent differences between the multiple images and the multiple reference images; (iv) calculating a set of region pixel attributes; (v) calculating a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and (vi) determining, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

Inventors:
FELDMAN HAIM (IL)
NEISTEIN EYAL (IL)
ILAN HAREL (IL)
ARAD SHAHAR (IL)
ALMOG IDO (IL)
Application Number:
PCT/US2020/013555
Publication Date:
July 23, 2020
Filing Date:
January 14, 2020
Assignee:
APPLIED MATERIALS ISRAEL LTD (IL)
PORTNOVA MARINA (US)
International Classes:
G06T7/11; G06T7/00; G06T7/174
Domestic Patent References:
WO2013155008A1, 2013-10-17
Foreign References:
US9880107B2, 2018-01-30
KR20180102117A, 2018-09-14
US9625726B2, 2017-04-18
US20170292918A1, 2017-10-12
Attorney, Agent or Firm:
PORTNOVA, Marina et al. (US)
Claims:
WE CLAIM

1. A method for obtaining information about a region of a sample, the method comprises: obtaining, by an imager, multiple images of the region; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, angle of collection, and sensing type; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels;

receiving or generating multiple reference images;

generating, by an image processor, multiple difference images that represent differences between the multiple images and the multiple reference images;

calculating a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the calculating is based on pixels of the multiple difference images;

calculating a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and

determining for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

2. The method according to claim 1 wherein the determining, whether the region pixel represents a defect, is also responsive to a set of attributes of an actual defect.

3. The method according to claim 1 wherein the determining, whether the region pixel represents a defect, is also responsive to a set of attributes of an estimated defect.

4. The method according to claim 1 comprising calculating the set of noise attributes by calculating a covariance matrix.

5. The method according to claim 4 wherein the calculating of the covariance matrix comprises: calculating, for each region pixel, a set of covariance values that represent the covariance between different attributes of the set of region pixel attributes of the region pixel; and calculating the covariance matrix based on multiple sets of covariance values of the multiple region pixels.

6. The method according to claim 4 comprising determining, for each region pixel, whether the region pixel represents a defect by comparing, to a threshold, a product of a multiplication between (i) a set of attributes of the region pixel, (ii) the covariance matrix, and (iii) a set of attributes of an actual noise or of an estimated noise.

7. The method according to claim 1 wherein the set of pixel attributes of a region pixel comprises data regarding the region pixel and neighbouring region pixels of the region pixel.

8. The method according to claim 1, wherein the imager comprises multiple detectors for generating the multiple images, and wherein the method comprises allocating different detectors to detect radiation from different pupil segments of the multiple pupil segments.

9. The method according to claim 8, wherein the different pupil segments of the multiple pupil segments exceed four pupil segments.

10. The method according to claim 1, wherein the imager comprises multiple detectors for generating the multiple images, and wherein the method comprises allocating different detectors to detect radiation from different combinations of (a) polarization and (b) different pupil segments of the multiple pupil segments.

11. The method according to claim 1 comprising obtaining the multiple images at a same point in time.

12. The method according to claim 1 comprising obtaining the multiple images at different points in time.

13. The method according to claim 1 comprising classifying the defect.

14. The method according to claim 1 comprising determining whether the defect is a defect of interest or not a defect of interest.

15. A computerized system for obtaining information about a region of a sample, the system comprises:

an imager that comprises optics and an image processor;

wherein the imager is configured to obtain multiple images of the region; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, and angle of collection; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels; wherein the computerized system is configured to receive or generate multiple reference images;

wherein the image processor is configured to:

generate multiple difference images that represent differences between the multiple images and the multiple reference images;

calculate a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the set of region pixel attributes are calculated based on pixels of the multiple difference images;

calculate a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and

determine, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

16. A non-transitory computer-readable medium that stores instructions that cause a computerized system to:

obtain, by an imager of the computerized system, multiple images of a region of an object; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, angle of collection, and sensing type; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels;

receive or generate multiple reference images;

generate, by an image processor of the computerized system, multiple difference images that represent differences between the multiple images and the multiple reference images;

calculate a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the calculating is based on pixels of the multiple difference images;

calculate a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and

determine, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

Description:
COMPUTERIZED SYSTEM AND METHOD FOR OBTAINING

INFORMATION ABOUT A REGION OF AN OBJECT

BACKGROUND OF THE INVENTION

[0001] As the design rule shrinks, wafer inspection tools are required to detect ever smaller defects. Previously, defect detection was limited mainly by laser power and detector noise. In present-day (and probably future) tools, detection is frequently limited by light scattered by the roughness of the patterns printed on the wafer, and in particular by line edge roughness. These irregularities scatter light with random phases, which can combine to generate bright spots on the detector (speckles) that are almost indistinguishable from a real defect. Current filtering techniques are not effective in removing these speckles, as their shapes in the standard bright field or dark field channels are very similar to those of real defects.

[0002] There is a growing need to provide a reliable defect detection system.

SUMMARY

[0003] There may be provided a method for obtaining information about a region of a sample, the method may include (a) obtaining, by an imager, multiple images of the region; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, angle of collection, and sensing type; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels; (b) receiving or generating multiple reference images; (c) generating, by an image processor, multiple difference images that represent differences between the multiple images and the multiple reference images; (d) calculating a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the calculating is based on pixels of the multiple difference images; (e) calculating a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and (f) determining, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

[0004] There may be provided a computerized system for obtaining information about a region of a sample, the system may include an imager that comprises optics and an image processor; wherein the imager is configured to obtain multiple images of the region; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, and angle of collection; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels; wherein the computerized system is configured to receive or generate multiple reference images; wherein the image processor is configured to: generate multiple difference images that represent differences between the multiple images and the multiple reference images; calculate a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the set of region pixel attributes are calculated based on pixels of the multiple difference images; calculate a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and determine, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

[0005] There may be provided a non-transitory computer-readable medium that may store instructions that cause a computerized system to obtain, by an imager of the computerized system, multiple images of a region of an object; wherein the multiple images differ from each other by at least one parameter selected out of illumination spectrum, collection spectrum, illumination polarization, collection polarization, angle of illumination, angle of collection, and sensing type; wherein the obtaining of the multiple images comprises illuminating the region and collecting radiation from the region; wherein the region comprises multiple region pixels; receive or generate multiple reference images; generate, by an image processor of the computerized system, multiple difference images that represent differences between the multiple images and the multiple reference images; calculate a set of region pixel attributes for each region pixel of the multiple region pixels; wherein the calculating is based on pixels of the multiple difference images; calculate a set of noise attributes, based on multiple sets of region pixel attributes of the multiple region pixels; and determine, for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:

[0007] FIG. 1 illustrates an example of a method;

[0008] FIG. 2 illustrates an example of a system and a sample;

[0009] FIG. 3 illustrates an example of a scanning of a region of an object;

[0010] FIG. 4 illustrates an example of a system and a sample;

[0011] FIG. 5 illustrates an example of a system and a sample;

[0012] FIG. 6 illustrates an example of various images;

[0013] FIG. 7 illustrates an example of various images; and

[0014] FIG. 8 illustrates an example of various images.

[0015] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

DETAILED DESCRIPTION OF THE DRAWINGS

[0016] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.

[0017] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings.

[0018] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

[0019] Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.

[0020] Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer-readable medium that stores instructions that once executed by a computer result in the execution of the method.

[0021] Any reference in the specification to a system should be applied mutatis mutandis to a method that can be executed by the system and should be applied mutatis mutandis to a non-transitory computer-readable medium that stores instructions that can be executed by the system.

[0022] Any reference in the specification to a non-transitory computer-readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer-readable medium and should be applied mutatis mutandis to a method that can be executed by a computer that reads the instructions stored in the non-transitory computer-readable medium.

[0023] There may be provided a system, method and a computer-readable medium for using multiple detection channels in combination with a high-dimensional analysis of data acquired from the multiple detection channels. The suggested method takes into account significantly more information about the interaction of the electromagnetic field with the wafer, with a significant benefit to detection. An improvement of at least seventy percent was achieved using the method described below.

[0024] Figure 1 illustrates an example of method 300.

[0025] Method 300 may include steps 310, 320, 330, 340, 350 and 360.

[0026] Method 300 may start by step 310 of obtaining, by an imager, multiple images of the region. The region may include multiple region pixels.

[0027] An imager is a module or unit that is configured to acquire the multiple images.

[0028] The imager may be configured to illuminate the region with radiation, collect radiation from the region and detect the collected radiation from the sample. The imager may include optics, an image processor, and multiple detectors.

[0029] Step 310 may include illuminating a region of a sample, collecting radiation from the region and detecting the collected radiation from the sample.

[0030] The radiation may be ultraviolet (UV) radiation, deep UV radiation, extreme UV radiation or any other type of radiation.

[0031] It is assumed that a single radiation beam may scan the region - but it should be noted that multiple radiation beams may scan multiple regions simultaneously.

[0032] The multiple images differ from each other by at least one parameter selected out of illumination spectrum (which is the spectral response of an illumination portion of the imager), collection spectrum (which is the spectral response of a collection portion of the imager), illumination polarization (which is the polarization imposed by the illumination portion of the imager), collection polarization (which is the polarization imposed by the collection portion of the imager), angle of illumination (angle of illumination of the region by the illumination portion of the imager), angle of collection, and a detection type (for example - detecting amplitude and/or detecting phase).

[0033] The imager may include multiple detectors for generating the multiple images. Different detectors may be allocated to detect radiation from different pupil segments of the multiple pupil segments - one detector per pupil segment. Each one of the multiple detectors may be located in a plane that is conjugate to the pupil plane. See, for example, detectors 70 that are located in a plane that is conjugate to pupil plane 26 of system 10 of figure 2.

[0034] The different pupil segments may be non-overlapping or may only partially overlap. The pupil segments can be of equal shape and size, but at least two pupil segments may differ from each other by shape and, additionally or alternatively, by size and/or position on the exit pupil plane.

[0035] There may be more than four pupil segments.

[0036] The imager may include multiple detectors for generating the multiple images. Different detectors may be allocated to detect radiation of different combinations of (a) polarization and (b) different pupil segments of the multiple pupil segments. See, for example, detectors 70, which are allocated to detect radiation of a first polarization from different pupil segments, and detectors 70’, which are allocated to detect radiation of a second polarization from different pupil segments.

[0037] It should be noted that the pupil may not be segmented and each of the multiple detectors may be allocated to the entire pupil.

[0038] Step 310 may include obtaining the multiple images at the same point in time.

[0039] Alternatively, step 310 may include obtaining two or more of the multiple images at different points in time.

[0040] Step 320 may include receiving or generating multiple reference images. The reference images may be images obtained by the imager of an area of the sample that differs from the region of the sample. The area and the region may not overlap (for example when performing die to die comparison), or may partially overlap (for example when performing cell to cell comparison). The reference images may be calculated in various manners, for example by processing computer-aided design (CAD) information of the region, by generating a golden reference, and the like.

[0041] It should be noted that one image of the multiple images may be used as a reference image of another image of the multiple images.

[0042] Using a reference image that is not another image of the multiple images may be beneficial - as it may provide more information about the region.

[0043] Step 330 may include generating, by the image processor, multiple difference images that represent differences between the multiple images and the multiple reference images.

[0044] Step 340 may include calculating a set of region pixel attributes for each region pixel of the multiple region pixels. The calculating may be based on pixels of the multiple difference images.
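By way of illustration only, the sketch below shows one way step 330 could be realized, assuming each acquired image and its reference are aligned, equally sized 2-D arrays; the function name, the use of NumPy and the signature are assumptions of this sketch and not part of the disclosure.

```python
import numpy as np

def compute_difference_images(images, references):
    """Illustrative sketch of step 330: produce one difference image per
    (acquired image, reference image) pair, assuming the images are
    aligned and equally sized 2-D arrays."""
    return [np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)
            for img, ref in zip(images, references)]
```

The difference images produced this way then feed the per-pixel attribute calculation of step 340.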

[0045] The set of pixel attributes of a region pixel may include data regarding the region pixel and neighbouring region pixels of the region pixel.

[0046] Step 350 may include calculating a set of noise attributes, based on multiple sets of region pixels attributes of the multiple region pixels.

[0047] Step 350 may include calculating a covariance matrix.

[0048] Step 350 may include:

a. Calculating for each region pixel, a set of covariance values that represent the covariance between different attributes of the set of region pixel attributes of the region pixel.

b. Calculating the covariance matrix based on multiple sets of covariance values of the multiple region pixels.

[0049] Step 360 may include determining for each region pixel, whether the region pixel represents a defect based on a relationship between the set of noise attributes and the set of region pixel attributes of the pixel.

[0050] Step 360 may also be responsive to a set of attributes of an actual defect, or to a set of attributes of an estimated defect. The estimated defect may be a model of the defect.

[0051] Step 360 may include:

a. Comparing, to a threshold, a product of a multiplication between (i) a set of attributes of the region pixel, (ii) the covariance matrix, and (iii) a set of attributes of an actual noise or of an estimated noise.

[0052] Figure 2 illustrates an example of system 10 and sample 100.

[0053] Figure 2 illustrates an allocation of nine detectors to nine pupil segments - one detector per pupil segment.

[0054] Figure 2 also illustrates the pupil plane 26.

[0055] System 10 includes radiation source 30, optics such as first beam splitter 40 and objective lens 50, detectors 70 and image processor 90.

[0056] The optics may include any combination of optical elements that may determine one or more optical properties (such as shape, size, polarization) of the radiation beam from radiation source 30, may determine the path of the radiation beam from radiation source 30, may determine one or more optical properties (such as shape, size, polarization) of one or more radiation beams scattered and/or reflected by the sample, may determine the path of the one or more radiation beams, and may direct the one or more radiation beams towards the detectors 70.

[0057] The optics may include lenses, grids, telescopes, beam splitters, polarizers, reflectors, deflectors, apertures, and the like.

[0058] In figure 2 the radiation beam from radiation source 30 passes through first beam splitter 40 and is focused by objective lens 50 onto a region of sample 22. The radiation beam from the region is collected by objective lens 50 and reflected by first beam splitter 40 towards detectors 70.

[0059] Figure 2 illustrates a pupil 60 that is segmented to nine segments - first pupil segment 61, second pupil segment 62, third pupil segment 63, fourth pupil segment 64, fifth pupil segment 65, sixth pupil segment 66, seventh pupil segment 67, eighth pupil segment 68 and ninth pupil segment 69.

[0060] Figure 2 illustrates detectors 70 as including nine detectors that are arranged in a 3x3 grid - the nine detectors include first detector 71, second detector 72, third detector 73, fourth detector 74, fifth detector 75, sixth detector 76, seventh detector 77, eighth detector 78 and ninth detector 79 - one detector per pupil segment.

[0061] The nine detectors generate nine images that differ from each other - the images include first image 91, second image 92, third image 93, fourth image 94, fifth image 95, sixth image 96, seventh image 97, eighth image 98 and ninth image 99 - one image per detector.

[0062] Figure 3 illustrates a scanning of the region of the object along the y axis. It is noted that the scanning can occur along any other axis.

[0063] Figure 4 illustrates system 12 that implements an allocation of eighteen detectors to nine pupil segments and to two polarizations - one detector per combination of pupil segment and polarization.

[0064] First detector 71 till ninth detector 79 (collectively denoted 70) are allocated for a first polarization and for the nine pupil segments.

[0065] Tenth detector till eighteenth detector 71’, 72’, 73’, 74’, 75’, 76’, 77’, 78’ and 79’ (collectively denoted 70’) are allocated for a second polarization and for the nine pupil segments.

[0066] The difference in the polarization directed to detectors 70 and to detectors 70’ may be introduced using polarizing beam splitters and/or by inserting polarizing elements before the detectors. It should be noted that full characterization of polarization may require applying at least three different polarizations (and not just two); thus additional polarizing elements and additional detectors should be added.

[0067] Figure 4 also illustrates that system 12 includes a first polarizing beam splitter 81, a second polarizing beam splitter 82 and a third polarizing beam splitter 83.

[0068] Figure 5 illustrates a system 13 in which different detectors receive reflected radiation at different polarizations - due to the lack of a polarizer before third detector 73, the positioning of a first polarizer 89 before first detector 71, and the positioning of a second polarizer 88 before second detector 72. Each detector out of first detector 71, second detector 72 and third detector 73 receives radiation from the entire pupil.

[0069] For simplicity of explanation the radiation source is not shown in figure 5.

[0070] The optics of system 13 include first beam splitter 40 and additional beam splitters (such as second beam splitter 86 and third beam splitter 87) for splitting the beam from the object between the first, second and third detectors.

[0071] In the following text it is assumed that there are nine different pupil segments, that nine difference images are calculated, and that the neighbourhood of each pixel includes eight pixels - so that the pixel and its neighbours together amount to nine pixels. The number of pupil segments may differ from nine, the number of difference images may differ from nine, and the number of pixel neighbours may differ from eight.

[0072] Under these assumptions, each region pixel is represented by a vector of eighty-one elements and the covariance matrix includes 81 x 81 elements. The number of elements per vector may differ from eighty-one, and the number of elements of the covariance matrix may differ from 81 x 81. For example, the number of pixel neighbours may differ by more than one from the number of difference images.

[0073] Figure 6 illustrates first difference image 111, second difference image 112, third difference image 113, fourth difference image 114, fifth difference image 115, sixth difference image 116, seventh difference image 117, eighth difference image 118 and ninth difference image 119.

[0074] Figure 6 also illustrates the (m,n)’th pixel of each of the nine difference images and its eight neighboring pixels - 111(m-1,n-1) till 111(m+1,n+1), 112(m-1,n-1) till 112(m+1,n+1), 113(m-1,n-1) till 113(m+1,n+1), 114(m-1,n-1) till 114(m+1,n+1), 115(m-1,n-1) till 115(m+1,n+1), 116(m-1,n-1) till 116(m+1,n+1), 117(m-1,n-1) till 117(m+1,n+1), 118(m-1,n-1) till 118(m+1,n+1), and 119(m-1,n-1) till 119(m+1,n+1).

[0075] The nine difference images represent the same region of the wafer. Each location on the region (also referred to as a region pixel) is associated with nine pixels of the nine difference images - that are located at the same location within the respective difference images.

[0076] An (m,n)’th region pixel may be represented by a vector (Vdata(m,n)) that includes values (such as intensity) related to the (m,n)’th pixel of each of the nine difference images and values related to the neighboring pixels of the (m,n)’th pixel of each of the nine difference images.

[0077] For example, for the (m,n)’th region pixel the vector (Vdata(m,n)) may include the following eighty-one vector elements - I[111(m-1,n-1)] ... I[111(m+1,n+1)], I[112(m-1,n-1)] ... I[112(m+1,n+1)], I[113(m-1,n-1)] ... I[113(m+1,n+1)], I[114(m-1,n-1)] ... I[114(m+1,n+1)], I[115(m-1,n-1)] ... I[115(m+1,n+1)], I[116(m-1,n-1)] ... I[116(m+1,n+1)], I[117(m-1,n-1)] ... I[117(m+1,n+1)], I[118(m-1,n-1)] ... I[118(m+1,n+1)], and I[119(m-1,n-1)] ... I[119(m+1,n+1)].
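As a minimal, non-authoritative sketch of the vector described above, assuming the nine difference images are stored as NumPy arrays and that (m,n) addresses an interior pixel (border pixels would require padding), the eighty-one element vector Vdata(m,n) could be assembled as follows; the helper name is an assumption of this sketch.

```python
import numpy as np

def region_pixel_vector(diff_images, m, n):
    """Illustrative sketch: build Vdata(m,n) from the (m,n)'th pixel and its
    eight neighbours in each of the nine difference images
    (9 images x 9 pixels = 81 intensity values).

    Assumes (m, n) is an interior pixel; border pixels would need padding."""
    values = []
    for diff in diff_images:                     # difference images 111..119
        window = diff[m - 1:m + 2, n - 1:n + 2]  # 3x3 neighbourhood
        values.extend(window.ravel())
    return np.asarray(values)                    # shape (81,)
```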

[0078] The noise may be estimated by a covariance matrix. The covariance matrix may be calculated by: (a) for each region pixel, calculating all the possible multiplications between pairs of vector elements - thus for eighty-one vector elements of each vector there are 81 x 81 multiplications, (b) summing corresponding products for all of the region pixels, and (c) normalizing the sums to provide the covariance matrix.

[0079] The normalizing may include averaging the sums.

[0080] For example, the first eighty-one multiplications of step (a) may include multiplying I[111(m-1,n-1)] by all elements of Vdata(m,n), and the last eighty-one multiplications of step (a) may include multiplying I[119(m+1,n+1)] by all elements of Vdata(m,n).

[0081] Assuming that there are M x N region pixels (and M x N pixels per difference image), step (a) includes calculating M x N x 81 x 81 products. Step (b) includes generating 81 x 81 sums, each sum being of M x N elements, and step (c) includes normalizing the 81 x 81 sums - for example by calculating an average - dividing each sum by nine.
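One possible reading of steps (a)-(c), offered only as a sketch and reusing the region_pixel_vector helper sketched earlier, accumulates the outer products of the 81-element vectors over the region pixels and then normalizes the accumulated sums; the exact normalization constant is an implementation choice left open here.

```python
import numpy as np

def covariance_matrix(diff_images):
    """Illustrative sketch of steps (a)-(c): accumulate the outer products of
    the 81-element vectors over the interior region pixels, then normalize
    the accumulated sums (here, by the number of region pixels visited)."""
    rows, cols = diff_images[0].shape
    cov = np.zeros((81, 81))
    count = 0
    for m in range(1, rows - 1):
        for n in range(1, cols - 1):
            v = region_pixel_vector(diff_images, m, n)
            cov += np.outer(v, v)   # step (a): all pairwise products of vector elements
            count += 1
    return cov / count              # steps (b) and (c): sum and normalize
```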

[0082] Assuming that a defect is known or estimated, the defect may be represented by a defect vector (Vdefect) of eighty-one elements.

[0083] The decision of whether a region pixel includes a defect may include calculating the relationship (such as a ratio) between the probability that the region pixel (represented by vector Vdata) was obtained due to a defect and the probability that the region pixel (represented by vector Vdata) was not obtained due to a defect.

[0084] A decision, per region pixel, of whether the region pixel is defective may include comparing a product of Vdefect^T x Cov x Vdata to a threshold TH. If the product exceeds TH then it is determined that the region pixel represents a defect; else it is assumed that the region pixel does not include a defect.
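Expressed as code, and assuming the Vdefect vector, covariance matrix and Vdata vector from the earlier sketches, the per-pixel decision might look as follows; the threshold TH is supplied by the caller and, as noted in the next paragraph, need not be the same for every region pixel.

```python
def is_defect(v_defect, cov, v_data, threshold):
    """Illustrative sketch of the per-pixel decision: compare the product
    Vdefect^T x Cov x Vdata against a threshold TH.  Inputs are NumPy arrays
    of shapes (81,), (81, 81) and (81,) respectively."""
    score = float(v_defect @ cov @ v_data)   # Vdefect^T x Cov x Vdata
    return score > threshold                 # defect if the product exceeds TH
```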

[0085] Other decisions can be made; the threshold may be calculated in any manner, may be fixed, may change over time, and the like. The same threshold may be applied to all region pixels - but this is not necessarily so, and different thresholds may be calculated for different region pixels. Differences in thresholds may result, for example, from non-uniformity in the optics, aberrations, and the like; different parts of the die may have different properties too and may require different thresholds.

[0086] Figure 7 illustrates first difference image 111, second difference image 112, third difference image 113, fourth difference image 114, fifth difference image 115, sixth difference image 116, seventh difference image 117, eighth difference image 118 and ninth difference image 119.

[0087] Figure 7 also illustrates first reference image 101, second reference image 102, third reference image 103, fourth reference image 104, fifth reference image 105, sixth reference image 106, seventh reference image 107, eighth reference image 108 and ninth reference image 109.

[0088] Figure 7 further illustrates nine images acquired by nine detectors - the nine images (also referred to as acquired images) include first image 91, second image 92, third image 93, fourth image 94, fifth image 95, sixth image 96, seventh image 97, eighth image 98 and ninth image 99.

[0089] First difference image 111 represents the difference between first image 91 and first reference image 101.

[0090] Second difference image 112 represents the difference between second image 92 and second reference image 102.

[0091] Third difference image 113 represents the difference between third image 93 and third reference image 103.

[0092] Fourth difference image 114 represents the difference between fourth image 94 and fourth reference image 104.

[0093] Fifth difference image 115 represents the difference between fifth image 95 and fifth reference image 105.

[0094] Sixth difference image 116 represents the difference between sixth image 96 and sixth reference image 106.

[0095] Seventh difference image 117 represents the difference between seventh image 97 and seventh reference image 107.

[0096] Eighth difference image 118 represents the difference between eighth image 98 and eighth reference image 108.

[0097] Ninth difference image 119 represents the difference between ninth image 99 and ninth reference image 109.

[0098] Figure 8 is an example of first difference image 111, second difference image 112, third difference image 113, fourth difference image 114, fifth difference image 115, sixth difference image 116, seventh difference image 117, eighth difference image 118, ninth difference image 119, first image 91, second image 92, third image 93, fourth image 94, fifth image 95, sixth image 96, seventh image 97, eighth image 98 and ninth image 99.

[0099] The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.

[0100] A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.

[0101] The computer program may be stored internally on a non-transitory computer-readable medium. All or some of the computer program may be provided on computer-readable media permanently, removably or remotely coupled to an information processing system. The computer- readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.

[0102] A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.

[0103] The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.

[0104] In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.

[0105] Moreover, the terms “front,” “back,” “top,” “bottom,” “over,” “under” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.

[0106] The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.

[0107] Although specific conductivity types or polarity of potentials have been described in the examples, it will be appreciated that conductivity types and polarities of potentials may be reversed.

[0108] Each signal described herein may be designed as positive or negative logic. In the case of a negative logic signal, the signal is active low where the logically true state corresponds to a logic level zero. In the case of a positive logic signal, the signal is active high where the logically true state corresponds to a logic level one. Note that any of the signals described herein can be designed as either negative or positive logic signals. Therefore, in alternate embodiments, those signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.

[0109] Furthermore, the terms “assert” or “set” and “negate” (or “deassert” or “clear”) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically false state is a logic level one.

[0110] Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality.

[0111] Any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.

[0112] Furthermore, those skilled in the art will recognize that the boundaries between the above-described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed over additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.

[0113] Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner.

[0114] Also for example, the examples, or portions thereof, may be implemented as software or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.

[0115] Also, the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices, commonly denoted in this application as ‘computer systems’.

[0116] However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

[0117] In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.

[0118] While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.