


Title:
OPTICAL MEASUREMENT OF BUMP HEIGHT
Document Type and Number:
WIPO Patent Application WO/2018/031574
Kind Code:
A1
Abstract:
A method of generating 3D information including: varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps; capturing an image at each pre-determined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across a first portion of pixels in the captured image; comparing the greatest characteristic value for each captured image to determine if a surface of the sample is present at each pre-determined step; determining a first captured image that is focused on an apex of a bump of the sample; determining a second captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and determining a first distance between the apex of the bump and the first surface.

Inventors:
SOETARMAN RONNY (US)
XU JAMES JIANGUO (US)
Application Number:
PCT/US2017/045950
Publication Date:
February 15, 2018
Filing Date:
August 08, 2017
Assignee:
KLA TENCOR CORP (US)
International Classes:
G01B11/02; G01B9/04; G01B11/24; G06T1/00; G06T7/60
Domestic Patent References:
WO2012094175A2 (2012-07-12)
Foreign References:
US20080291532A1 (2008-11-27)
US20120019626A1 (2012-01-26)
US20060192075A1 (2006-08-31)
US20150248991A1 (2015-09-03)
Attorney, Agent or Firm:
MCANDREWS, Kevin et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising:

varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps;

capturing an image at each pre-determined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images;

determining a characteristic value of each pixel in each captured image;

determining, for each captured image, the greatest characteristic value across a first portion of pixels in the captured image;

comparing the greatest characteristic value for each captured image to determine if a surface of the sample is present at each pre-determined step;

determining a first captured image that is focused on an apex of a bump of the sample;

determining a second captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and

determining a first distance between the apex of the bump and the first surface.

2. The method of Claim 1, wherein the optical microscope includes a stage, wherein the sample is supported by the stage, wherein the optical microscope is adapted to communicate with a computer system, wherein the computer system includes a memory device that is adapted to store each captured image, and wherein the optical microscope is selected from the group consisting of: a confocal microscope, a structured illumination microscope, and an interferometer.

3. The method of Claim 1, wherein the determining of the first captured image further comprises: determining a maximum characteristic value for each x-y pixel location, within a second portion of x-y pixel locations, across all captured images, wherein the second portion of x-y pixel locations includes at least some of the x-y pixel locations included in each captured image;

determining a subset of the captured images, wherein only captured images that include an x-y pixel location maximum characteristic value are included in the subset; and determining that, of all captured images within the subset of captured images, the first captured image is focused on a highest z-position compared to all other captured images within the subset of captured images.

4. The method of Claim 1, wherein the first portion of pixels includes all pixels included in the captured image, and wherein the characteristic value of each pixel is selected from the group consisting of: intensity, contrast and fringe contrast.

5. The method of Claim 1, wherein the first portion of pixels includes less than all pixels included in the captured image.

6. The method of Claim 3, wherein the second portion of pixels includes all pixels included in the captured image.

7. The method of Claim 3, wherein the second portion of pixels includes less than all pixels included in the captured image.

8. The method of Claim 1, wherein the first portion of pixels does not receive reflected light from the bump.

9. The method of Claim 3, wherein the second portion of pixels receives reflected light from the apex of the bump.

10. The method of Claim 3, wherein the spatial relationship between the first portion of pixels and the second portion of pixels is fixed.

11. The method of Claim 3, wherein the second portion of pixels is contiguous and centered on the apex of the bump.

12. The method of Claim 1, wherein the bump is a metal bump and wherein the first surface is a top surface of a passivation layer.

13. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising:

varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps;

capturing an image at each pre-determined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images;

determining a characteristic value of each pixel in each captured image;

determining, for each captured image, a count of pixels that have a characteristic value within a first range across a first portion of pixels, wherein all pixels that do not have a characteristic value within the first range are not included in the count of pixels;

determining if a surface of the sample is present at each pre-determined step based on the count of pixels for each captured image;

determining a first captured image that is focused on an apex of a bump of the sample;

determining a second captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and

determining a first distance between the apex of the bump and the first surface.

14. The method of Claim 13, wherein the optical microscope includes a stage, wherein the sample is supported by the stage, wherein the optical microscope is adapted to communicate with a computer system, wherein the computer system includes a memory device that is adapted to store each captured image, and wherein the optical microscope is selected from the group consisting of: a confocal microscope, a structured illumination microscope, and an interferometer.

15. The method of Claim 13, wherein the determining of the first captured image further comprises:

determining a maximum characteristic value for each x-y pixel location, within a second portion of x-y pixel locations, across all captured images, wherein the second portion of x-y pixel locations includes at least some of the x-y pixel locations included in each captured image;

determining a subset of the captured images, wherein only captured images that include an x-y pixel location maximum characteristic value are included in the subset; and determining that, of all captured images within the subset of captured images, the first captured image is focused on the highest z-position compared to all other captured images within the subset of captured images.

16. The method of Claim 13, wherein the first portion of pixels includes all pixels included in the captured image, and wherein the characteristic value of each pixel is selected from the group consisting of: intensity, contrast and fringe contrast.

17. The method of Claim 13, wherein the first portion of pixels includes less than all pixels included in the captured image.

18. The method of Claim 15, wherein the second portion of pixels includes all pixels included in the captured image.

19. The method of Claim 15, wherein the spatial relationship between the first portion of pixels and the second portion of pixels is fixed.

20. The method of Claim 15, wherein the second portion of pixels is contiguous and centered on the apex of the bump.

21. A method of generating three-dimensional (3-D) information of a sample using an optical microscope, the method comprising:

varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps;

capturing an image at each pre-determined step, wherein a first surface of the sample and a second surface of the sample are within a field of view of each of the captured images;

determining a characteristic value of each pixel in each captured image;

determining a z-position of an apex of a bump of the sample;

determining a first captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and

determining a first distance between the apex of the bump and the first surface.

22. The method of Claim 21, wherein the determining of the z-position of the apex comprises:

identifying a plurality of x,y,z pixel locations across all captured images, wherein the plurality of x,y,z pixel locations are associated with a top surface of the bump;

applying a best fit algorithm to generate a continuous 3-D estimate of the top surface of the bump; and

determining a maximum height of the continuous 3-D estimate.

Description:
OPTICAL MEASUREMENT OF BUMP HEIGHT

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application is a continuation-in-part of, and claims priority under 35 U.S.C. § 120 from, nonprovisional U.S. patent application serial number 15/338,838 entitled "OPTICAL MEASUREMENT OF OPENING DIMENSIONS IN A WAFER," filed on October 31, 2016, the disclosure of which is incorporated herein by reference in its entirety. Application 15/338,838 is a continuation-in-part of, and claims priority under 35 U.S.C. § 120 from, nonprovisional U.S. patent application serial number 15/233,812 entitled "AUTOMATED 3-D MEASUREMENT," filed on August 10, 2016, the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The described embodiments relate generally to measuring 3-D information of a sample and more particularly to automatically measuring 3-D information in a fast and reliable fashion.

BACKGROUND INFORMATION

[0003] Three-dimensional (3-D) measurement of various objects or samples is useful in many different applications. One such application is during wafer level package processing. 3-D measurement information of a wafer during different steps of wafer level fabrication can provide insight as to the presence of wafer processing defects that may be present on the wafer. 3-D measurement information of the wafer during wafer level fabrication can provide insight as to the absence of defects before additional capital is expended to continue processing the wafer. 3-D measurement information of a sample is currently gathered by human manipulation of a microscope. The human user focuses the microscope using their eyes to determine when the microscope is focused on a surface of the sample. An improved method of gathering 3-D measurement information is needed.

SUMMARY

[0004] In a first novel aspect, three-dimensional (3-D) information of a sample using an optical microscope is generated by varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps; capturing an image at each pre-determined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across all pixels in the captured image; comparing the greatest characteristic value for each captured image to determine if a surface of the sample is present at each pre-determined step; determining a first captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; determining a second captured image that is focused on a second surface of the sample based on the characteristic value of each pixel in each captured image; and determining a first distance between the first surface and the second surface.

[0005] In a second novel aspect, a three-dimensional (3-D) measurement includes determining a thickness of a semi-transparent layer of the sample and determining a thickness of a metal layer of the sample, where the thickness of the metal layer is equal to the difference between the thickness of the semi-transparent layer and the first distance, where the first surface is a top surface of a photoresist layer, and where the second surface is a top surface of a metal layer.

[0006] In a third novel aspect, three-dimensional (3-D) information of a sample using an optical microscope is generated by varying the distance between the sample and an objective lens of the optical microscope at pre-determined steps; capturing an image at each pre-determined step; determining a characteristic value of each pixel in each captured image; determining, for each captured image, the greatest characteristic value across a first portion of pixels in the captured image; comparing the greatest characteristic value for each captured image to determine if a surface of the sample is present at each pre-determined step; determining a first captured image that is focused on an apex of a bump of the sample; determining a second captured image that is focused on a first surface of the sample based on the characteristic value of each pixel in each captured image; and determining a first distance between the apex of the bump and the first surface.

[0007] In a fourth novel aspect, the determining of the first captured image includes: determining a maximum characteristic value for each x-y pixel location, within a second portion of x-y pixel locations, across all captured images, where the second portion of x-y pixel locations includes at least some of the x-y pixel locations included in each captured image; determining a subset of the captured images, where only captured images that include an x-y pixel location maximum characteristic value are included in the subset; and determining that, of all captured images within the subset of captured images, the first captured image is focused on a highest z-position compared to all other captured images within the subset of captured images.

[0008] Further details and embodiments and techniques are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.

[0010] FIG. 1 is a diagram of a semi-automated 3-D metrology system 1 that performs automated 3-D measurement of a sample.

[0011] FIG. 2 is a diagram of a 3-D imaging microscope 10 including adjustable objective lenses 11 and an adjustable stage 12.

[0012] FIG. 3 is a diagram of a 3-D metrology system 20 including a 3-D microscope, a sample handler, a computer, a display, and input devices.

[0013] FIG. 4 is a diagram illustrating a method of capturing images as the distance between the objective lens of the optical microscope and the stage is varied.

[0014] FIG. 5 is a chart illustrating the distance between the objective lens of the optical microscope and the sample surface for which each x-y coordinate had the maximum characteristic value.

[0015] FIG. 6 is a 3-D diagram of an image rendered using the maximum characteristic value for each x-y coordinate shown in FIG. 5.

[0016] FIG. 7 is a diagram illustrating peak mode operation using images captured at various distances.

[0017] FIG. 8 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope.

[0018] FIG. 9 is a chart illustrating the 3-D information resulting from the peak mode operation.

[0019] FIG. 10 is a diagram illustrating summation mode operation using images captured at various distances.

[0020] FIG. 11 is a diagram illustrating erroneous surface detection when using summation mode operation.

[0021] FIG. 12 is a chart illustrating the 3-D information resulting from the summation mode operation.

[0022] FIG. 13 is a diagram illustrating range mode operation using images captured at various distances.

[0023] FIG. 14 is a chart illustrating the 3-D information resulting from the range mode operation.

[0024] FIG. 15 is a chart illustrating only the count of pixels that have a characteristic value within a first range.

[0025] FIG. 16 is a chart illustrating only the count of pixels that have a characteristic value within a second range.

[0026] FIG. 17 is a flowchart illustrating the various steps included in peak mode operation.

[0027] FIG. 18 is a flowchart illustrating the various steps included in range mode operation.

[0028] FIG. 19 is a diagram of a captured image, including a single feature, focused on the top surface of a photoresist layer.

[0029] FIG. 20 is a diagram illustrating a first method of generating an intensity threshold.

[0030] FIG. 21 is a diagram illustrating a second method of generating an intensity threshold.

[0031] FIG. 22 is a diagram illustrating a third method of generating an intensity threshold.

[0032] FIG. 23 is a 3-D diagram of a photoresist opening in a sample.

[0033] FIG. 24 is a 2-D diagram of the top surface opening of the photoresist shown in FIG. 23.

[0034] FIG. 25 is a 2-D diagram of the bottom surface opening of the photoresist shown in FIG. 23.

[0035] FIG. 26 is a captured image focused on a top surface of a photoresist layer.

[0036] FIG. 27 is a diagram illustrating the detection of a border of the photoresist layer illustrated in FIG. 26.

[0037] FIG. 28 is a captured image focused on a bottom surface of a photoresist layer.

[0038] FIG. 29 is a diagram illustrating the detection of a border of the photoresist layer illustrated in FIG. 28.

[0039] FIG. 30 is a captured image focused on a top surface of a photoresist layer in a trench structure.

[0040] FIG. 31 is a diagram illustrating the detection of a border of the photoresist layer illustrated in FIG. 30.

[0041] FIG. 32 is a 3-D diagram of photoresist openings partially filled with plated metal.

[0042] FIG. 33 is a cross-sectional diagram of a photoresist opening partially filled with plated metal.

[0043] FIG. 34 is a 3-D diagram of a photoresist opening with plated metal.

[0044] FIG. 35 is a cross-sectional diagram of a photoresist opening with plated metal.

[0045] FIG. 36 is a 3-D diagram of a metal pillar over passivation.

[0046] FIG. 37 is a cross-sectional diagram of a metal pillar over passivation.

[0047] FIG. 38 is a 3-D diagram of metal over passivation.

[0048] FIG. 39 is a cross-sectional diagram of metal over passivation.

[0049] FIG. 40 is a cross-sectional drawing illustrating the measurement of a semi-transparent material in proximity to a plated metal surface.

[0050] FIG. 41 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope.

[0051] FIG. 42 is a chart illustrating the 3-D information resulting from the peak mode operation illustrated in FIG. 41.

[0052] FIG. 43 is a diagram of a captured image focused on a top surface of a photoresist layer in a trench structure including an outline of a first analysis region A and a second analysis region B.

[0053] FIG. 44 is a 3-D diagram of a bump over passivation structure.

[0054] FIG. 45 is a top-down diagram of the bump over passivation structure including an outline of a first analysis region A and a second analysis region B.

[0055] FIG. 46 is a top-down diagram illustrating adjustment of analysis region A and analysis region B when the entire bump is not located in the original analysis region A.

[0056] FIG. 47 is a cross-sectional diagram of the bump over passivation structure.

[0057] FIG. 48 is a diagram illustrating peak mode operation using images captured at various distances when only a photoresist layer is within region B of the field of view of the optical microscope.

[0058] FIG. 49 is a chart illustrating the 3-D information resulting from the peak mode operation of FIG. 48.

DETAILED DESCRIPTION

[0059] Reference will now be made in detail to background examples and some embodiments of the invention, examples of which are illustrated in the accompanying drawings. In the description and claims below, relational terms such as "top", "down", "upper", "lower", "bottom", "left" and "right" may be used to describe relative orientations between different parts of a structure being described, and it is to be understood that the overall structure being described can actually be oriented in any way in three-dimensional space.

[0060] FIG. 1 is a diagram of a semi-automated 3-D metrology system 1. Semi-automated 3-D metrology system 1 includes an optical microscope (not shown), an ON/OFF button 5, a computer 4 and a stage 2. In operation, a wafer 3 is placed on the stage 2. The function of the semi-automated 3-D metrology system 1 is to capture multiple images of an object and generate 3-D information describing various surfaces of the object automatically. This is also referred to as a "scan" of an object. Wafer 3 is an example of an object that is analyzed by the semi-automated 3-D metrology system 1. An object may also be referred to as a sample. In operation, the wafer 3 is placed on stage 2 and the semi-automated 3-D metrology system 1 begins the process of automatically generating 3-D information describing the surfaces of the wafer 3. In one example, the semi-automated 3-D metrology system 1 is started by pressing a designated key on a keyboard (not shown) that is connected to computer 4. In another example, the semi-automated 3-D metrology system 1 is started by sending a start command to the computer 4 across a network (not shown). Semi-automated 3-D metrology system 1 may also be configured to mate with an automated wafer handling system (not shown) that automatically removes a wafer once a scan of the wafer is completed and inserts a new wafer for scanning.

[0061] A fully automated 3-D metrology system (not shown) is similar to the semi-automated 3-D metrology system of FIG. 1; however, a fully automated 3-D metrology system also includes a robotic handler that can automatically pick up a wafer and place the wafer onto the stage without human intervention. In a similar fashion, a fully automated 3-D metrology system can also use the robotic handler to automatically pick up a wafer from the stage and remove the wafer from the stage. A fully automated 3-D metrology system is desirable during the production of many wafers because it avoids possible contamination by a human operator and improves time efficiency and overall cost. Alternatively, the semi-automated 3-D metrology system 1 is desirable during research and development activities when only a small number of wafers need to be measured.

[0062] FIG. 2 is a diagram of a 3-D imaging microscope 10 including multiple objective lenses 11 and an adjustable stage 12. The 3-D imaging microscope 10 may be a confocal microscope, a structured illumination microscope, an interferometer microscope or any other type of microscope well known in the art. A confocal microscope will measure intensity. A structured illumination microscope will measure contrast of a projected structure. An interferometer microscope will measure interference fringe contrast.

[0063] In operation, a wafer is placed on adjustable stage 12 and an objective lens is selected. The 3-D imaging microscope 10 captures multiple images of the wafer as the height of the stage, on which the wafer rests, is adjusted. This results in multiple images of the wafer being captured while the wafer is located at various distances away from the selected lens. In one alternate example, the wafer is placed on a fixed stage and the position of the objective lens is adjusted, thereby varying the distance between the objective lens and the sample without moving the stage. In another example, the stage is adjustable in the x-y direction and the objective lens is adjustable in the z-direction.

[0064] The captured images may be stored locally in a memory included in 3-D imaging microscope 10. Alternatively, the captured images may be stored in a data storage device included in a computer system, where the 3-D microscope 10 communicates the captured images to the computer system across a data communication link. Examples of a data communication link include: a Universal Serial Bus (USB) interface, an Ethernet connection, a FireWire bus interface, and a wireless network such as WiFi.

[0065] FIG. 3 is a diagram of a 3-D metrology system 20 including a 3-D microscope 21, a sample handler 22, a computer 23, a display 27 (optional), and input devices 28. 3-D metrology system 20 is an example of a system that is included in semi-automated 3-D metrology system 1. Computer 23 includes a processor 24, a storage device 25, and a network device 26 (optional). The computer outputs information to a user via display 27. The display 27 may be used as an input device as well if the display is a touch screen device. Input devices 28 may include a keyboard and a mouse. The computer 23 controls the operation of 3-D microscope 21 and sample handler/stage 22. When a start scan command is received by the computer 23, the computer sends one or more commands to configure the 3-D microscope for image capturing ("scope control data"). For example, the correct objective lens needs to be selected, the resolution of the images to be captured needs to be selected, and the mode of storing captured images needs to be selected. When a start scan command is received by the computer 23, the computer sends one or more commands to configure the sample handler/stage 22 ("handler control data"). For example, the correct height (z-direction) adjustment needs to be selected and the correct horizontal (x-y dimension) alignment needs to be selected.

[0066] During operation, the computer 23 causes sample handler/stage 22 to be adjusted to the proper position. Once the sample handler/stage 22 is properly positioned, the computer 23 will cause the 3-D microscope to focus on a focal plane and capture at least one image. The computer 23 will then cause the stage to move in the z-direction such that the distance between the sample and the objective lens of the optical microscope is changed. Once the stage is moved to the new position, the computer 23 will cause the optical microscope to capture a second image. This process continues until an image is captured at each desired distance between the objective lens of the optical microscope and the sample. The images captured at each distance are communicated from 3-D microscope 21 to computer 23 ("image data"). The captured images are stored in storage device 25 included in computer 23. In one example, the computer 23 analyzes the captured images and outputs 3-D information to display 27. In another example, computer 23 analyzes the captured images and outputs 3-D information to a remote device via network 29. In yet another example, computer 23 does not analyze the captured images, but rather sends the captured images to another device via network 29 for processing. 3-D information may include a 3-D image rendered based on the captured images. 3-D information may not include any images, but rather may include data based on various characteristics of each captured image.

[0067] FIG. 4 is a diagram illustrating a method of capturing images as the distance between the objective lens of the optical microscope and the sample is varied. In the embodiment illustrated in FIG. 4, each image includes one-thousand by one-thousand pixels. In other embodiments, the image may include various configurations of pixels. In one example, the spacing between consecutive distances is fixed to be a predetermined amount. In another example, the spacing between consecutive distances may not be fixed. This non-fixed spacing between images in the z-direction may be advantageous in the event that additional z-direction resolution is required for only a portion of the z-direction scan of the sample. The z-direction resolution is based on the number of images captured per unit length in the z-direction; therefore, capturing additional images per unit length in the z-direction will increase the z-direction resolution measured. Conversely, capturing fewer images per unit length in the z-direction will decrease the z-direction resolution measured.

[0068] As discussed above, the optical microscope is first adjusted to be focused on a focal plane located at distance 1 away from an objective lens of the optical microscope. The optical microscope then captures an image that is stored in a storage device (i.e. "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 3. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 4. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 5. The optical microscope then captures an image that is stored in the storage device. The process is continued for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.
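As an illustrative sketch only, the capture loop of this paragraph could be automated along the following lines. The `stage` and `camera` objects, their method names, and the micron units are assumptions made for illustration; they are not interfaces described in this disclosure.

```python
import numpy as np

def acquire_z_stack(stage, camera, start_z_um, step_um, num_steps):
    """Capture one image per pre-determined step and record the
    objective-to-sample distance associated with each image."""
    images = []
    z_positions = []
    for i in range(num_steps):
        z = start_z_um + i * step_um     # pre-determined step spacing
        stage.move_to_z(z)               # vary objective-to-sample distance
        images.append(camera.capture())  # store the image for this step
        z_positions.append(z)            # remember image-to-distance mapping
    return np.stack(images), np.array(z_positions)
```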

[0069] In an alternative embodiment, the distance between the objective lens of the optical microscope and the sample is fixed. Instead, the optical microscope includes a zoom lens that allows the optical microscope to vary the focal plane of the optical microscope. In this fashion, the focal plane of the optical microscope is varied across N different focal planes while the stage, and the sample supported by the stage, is stationary. An image is captured for each focal plane and stored in a storage device. The captured images across all the various focal planes are then processed to determine 3-D information of the sample. This embodiment requires a zoom lens that can provide sufficient resolution across all focal planes and that introduces minimal image distortion. Additionally, calibration between each zoom position and the resulting focal length of the zoom lens is required.

[0070] FIG. 5 is a chart illustrating the distance between the objective lens of the optical microscope and the sample for which each x-y coordinate had the maximum characteristic value. Once images are captured and stored for each distance, characteristics of each pixel of each image can be analyzed. For example, the intensity of the light of each pixel of each image can be analyzed. In another example, the contrast of each pixel of each image can be analyzed. In yet another example, the fringe contrast of each pixel of each image can be analyzed. The contrast of a pixel may be determined by comparing the intensity of a pixel with that of a preset number of surrounding pixels. For additional description regarding how to generate contrast information, see U.S. Patent Application Serial Number 12/699,824, entitled "3-D Optical Microscope", filed February 3, 2010, by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference).
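As a hedged illustration of one such per-pixel characteristic, the contrast comparison described above might be computed as follows. The window size and the absolute-deviation measure are assumptions; the disclosure (and the referenced application) leave the exact comparison open.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(image, window=3):
    """Per-pixel contrast: how far a pixel's intensity deviates from the
    mean intensity of a preset window of surrounding pixels."""
    img = image.astype(float)
    local_mean = uniform_filter(img, size=window)  # mean over the window
    return np.abs(img - local_mean)
```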

[0071] FIG. 6 is a 3-D diagram of a 3-D image rendered using the maximum characteristic value for each x-y coordinate shown in FIG. 5. All pixels with an X location between 1 and 19 have a maximum characteristic value at z-direction distance 7. All pixels with an X location between 20 and 29 have a maximum characteristic value at z-direction distance 2. All pixels with an X location between 30 and 49 have a maximum characteristic value at z-direction distance 7. All pixels with an X location between 50 and 59 have a maximum characteristic value at z-direction distance 2. All pixels with an X location between 60 and 79 have a maximum characteristic value at z-direction distance 7. In this fashion, the 3-D image illustrated in FIG. 6 can be created using the maximum characteristic value per x-y pixel across all captured images. Additionally, given that distance 2 is known and that distance 7 is known, the depth of the well illustrated in FIG. 6 can be calculated by subtracting distance 7 from distance 2.
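A minimal sketch of this per-pixel maximum operation, assuming the z-stack layout from the earlier acquisition sketch (an array of shape steps x height x width):

```python
import numpy as np

def height_map(stack, z_positions):
    """For each x-y pixel, pick the step whose image has the maximum
    characteristic value, then map that step index to its z-distance."""
    best_step = np.argmax(stack, axis=0)       # (H, W) array of step indices
    return np.asarray(z_positions)[best_step]  # (H, W) height map

# The well depth of FIG. 6 is then the difference between the two surface
# distances, e.g. abs(z_positions[6] - z_positions[1]) for surfaces at
# distances 7 and 2 (0-based indices, assuming one image per distance).
```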

[0072] PEAK MODE OPERATION

[0073] FIG. 7 is a diagram illustrating peak mode operation using images captured at various distances. As discussed regarding FIG. 4 above, the optical microscope is first adjusted to be focused on a plane located at distance 1 away from an objective lens of the optical microscope. The optical microscope then captures an image that is stored in a storage device (i.e. "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 3. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 4. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 5. The optical microscope then captures an image that is stored in the storage device. The process is continued for N different distances between the objective lens of the optical microscope and the stage. Information indicating which image is associated with each distance is also stored in the storage device for processing.

[0074] Instead of determining the maximum characteristic value for each x-y location across all captured images at various z-distances, the maximum characteristic value across all x-y locations in a single captured image at one z-distance is determined in peak mode operation. Said another way, for each captured image the maximum characteristic value across all pixels included in the captured image is selected. As illustrated in FIG. 7, the pixel location with the maximum characteristic value will likely vary between different captured images. The characteristic may be intensity, contrast, or fringe contrast.
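Under the same assumed stack representation, the per-image maximum of peak mode reduces to a one-line sketch:

```python
def peak_mode_curve(stack):
    """Peak mode: the greatest characteristic value across all pixels of
    each captured image (stack shape: num_steps x H x W)."""
    return stack.reshape(stack.shape[0], -1).max(axis=1)
```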

[0075] FIG. 8 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist (PR) opening is within the field of view of the optical microscope. The top-down view of the object shows the cross-sectional area of the PR opening in the x-y plane. The PR opening also has a specific depth in the z-direction. The images captured at various distances are shown below the top-down view in FIG. 8. At distance 1, the optical microscope is not focused on the top surface of the wafer or the bottom surface of the PR opening. At distance 2, the optical microscope is focused on the bottom surface of the PR opening, but is not focused on the top surface of the wafer. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflecting from the bottom surface of the PR opening compared to the pixels that receive reflected light from other surfaces that are out of focus (top surface of the wafer). At distance 3, the optical microscope is not focused on the top surface of the wafer or the bottom surface of the PR opening. Therefore, at distance 3 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, due to the difference of the index of refraction of air and the index of refraction of the photoresist layer an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. FIG. 11 and the accompanying text describe this phenomenon in greater detail. At distance 6, the optical microscope is focused on the top surface of the wafer, but is not focused on the bottom surface of the PR opening. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the wafer compared to the pixels that receive reflected light from other surfaces that are out of focus (bottom surface of the PR opening). Once the maximum characteristic value from each captured image is determined, the results can be utilized to determine at which distances a surface of the wafer is located.

[0076] FIG. 9 is a chart illustrating the 3-D information resulting from the peak mode operation. As discussed regarding FIG. 8, the images captured at distances 1, 3 and 5 have a lower maximum characteristic value compared to the images captured at distances 2, 4 and 6. The curve of the maximum characteristic values at various z-distances may contain noise due to environmental effects, such as vibration. To minimize such noise, a standard smoothing method, such as Gaussian filtering with a certain kernel size, can be applied before further data analysis.

[0077] One method of comparing the maximum characteristic values is performed by a peak finding algorithm. In one example, a derivative method is used to locate zero crossing points along the z-axis to determine the distance at which each "peak" is present. The maximum characteristic value at each distance where a peak is found is then compared to determine the distance where the greatest characteristic value was measured. In the case of FIG. 9, a peak will be found at distance 2, which is used as an indication that a surface of the wafer is located at distance 2.
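A sketch of this smoothing-plus-derivative approach, under the same stack assumptions; the sigma value and the exact zero-crossing test are illustrative choices, not values taken from the disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def find_surface_peaks(max_values, sigma=1.0):
    """Smooth the per-step maximum-characteristic curve, then locate peaks
    where the first derivative crosses from positive to non-positive."""
    smoothed = gaussian_filter1d(np.asarray(max_values, dtype=float), sigma)
    d = np.diff(smoothed)
    peaks = [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] <= 0]
    # the strongest peak marks the distance with the greatest value
    best = max(peaks, key=lambda i: smoothed[i]) if peaks else None
    return peaks, best
```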

[0078] Another method of comparing the maximum characteristic values is performed by comparing each maximum characteristic value with a preset threshold value. The threshold value may be calculated based on the wafer materials, distances, and the specification of the optical microscope. Alternatively, the threshold value may be determined by empirical testing before automated processing. In either case, the maximum characteristic value for each captured image is compared to the threshold value. If the maximum characteristic value is greater than the threshold, then it is determined that the maximum characteristic value indicates the presence of a surface of the wafer. If the maximum characteristic value is not greater than the threshold, then it is determined that the maximum characteristic value does not indicate a surface of the wafer.

[0079] SUMMATION MODE OPERATION

[0080] FIG. 10 is a diagram illustrating summation mode operation using images captured at various distances. As discussed regarding FIG. 4 above, the optical microscope is first adjusted to be focused on a plane located at distance 1 away from an objective lens of the optical microscope. The optical microscope then captures an image that is stored in a storage device (i.e. "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 3. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 4. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 5. The optical microscope then captures an image that is stored in the storage device. The process is continued for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

[0081] Instead of determining the maximum characteristic value across all x-y locations in a single captured image at one z-distance, the characteristic values of all x-y locations of each captured image are added together. Said another way, for each captured image the characteristic values for all pixels included in the captured image are summed together. The characteristic may be intensity, contrast, or fringe contrast. A summed characteristic value that is substantially greater than the average summed characteristic value of neighboring z-distances indicates that a surface of the wafer is present at the distance. However, this method can also result in false positives as described in FIG. 11.
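Under the same assumed stack layout, summation mode reduces to:

```python
def summation_mode_curve(stack):
    """Summation mode: sum the characteristic values of all pixels in each
    captured image; a sum well above its neighbors suggests a surface,
    subject to the false positive discussed with FIG. 11."""
    return stack.reshape(stack.shape[0], -1).sum(axis=1)
```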

[0082] FIG. 11 is a diagram illustrating erroneous surface detection when using summation mode operation. The wafer illustrated in FIG. 11 includes a silicon substrate 30 and a photoresist layer 31 deposited on top of the silicon substrate 30. The top surface of the silicon substrate 30 is located at distance 2. The top surface of the photoresist layer 31 is located at distance 6. The image captured at distance 2 will result in a summation of characteristic values that is substantially greater than other images captured at distances where a surface of the wafer is not present. The image captured at distance 6 will result in a summation of characteristic values that is substantially greater than other images captured at distances where a surface of the wafer is not present. At this point, the summation mode operation seems to be a valid indicator of the presence of a surface of the wafer. However, the image captured at distance 4 will also result in a summation of characteristic values that is substantially greater than other images captured at distances where a surface of the wafer is not present. This is a problem because, as is clearly shown in FIG. 11, a surface of the wafer is not located at distance 4. Rather, the increase in the summation of characteristic values at distance 4 is an artifact of the surfaces located at distances 2 and 6. A major portion of the light that irradiates the photoresist layer does not reflect, but rather travels into the photoresist layer. The angle at which this light travels is changed due to the difference of the index of refraction of air and photoresist. The new angle is closer to normal than the angle of the light irradiating the top surface of the photoresist. The light travels to the top surface of the silicon substrate beneath the photoresist layer. The light is then reflected by the highly reflective silicon substrate layer. The angle of the reflected light is changed again as the reflected light leaves the photoresist layer and enters the air, due to the difference in the index of refraction between air and the photoresist layer. This first redirecting, reflecting, and second redirecting of the irradiating light causes the optical microscope to observe an increase in characteristic values (intensity/contrast/fringe contrast) at distance 4. This example illustrates that whenever a sample includes a transparent material, the summation mode operation will detect surfaces that are not present on the sample.

[0083] FIG. 12 is a chart illustrating the 3-D information resulting from the summation mode operation. This chart illustrates the result of the phenomenon illustrated in FIG. 11. The large value of summed characteristic values at distance 4 erroneously indicates the presence of a surface at distance 4. A method that does not result in false positive indications of the presence of a surface of the wafer is needed.

[0084] RANGE MODE OPERATION

[0085] FIG. 13 is a diagram illustrating range mode operation using images captured at various distances. As discussed regarding FIG. 4 above, the optical microscope is first adjusted to be focused on a plane located at distance 1 away from an objective lens of the optical microscope. The optical microscope then captures an image that is stored in a storage device (i.e. "memory"). The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 2. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 3. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 4. The optical microscope then captures an image that is stored in the storage device. The stage is then adjusted such that the distance between the objective lens of the optical microscope and the sample is distance 5. The optical microscope then captures an image that is stored in the storage device. The process is continued for N different distances between the objective lens of the optical microscope and the sample. Information indicating which image is associated with each distance is also stored in the storage device for processing.

[0086] Instead of determining the summation of all characteristic values across all x-y locations in a single captured image at one z-distance, a count of pixels that have a characteristic value within a specific range that are included in the single captured image is determined. Said another way, for each captured image a count of pixels that have a characteristic value within a specific range is determined. The characteristic may be intensity, contrast, or fringe contrast. A count of pixels at one particular z-distance that is substantially greater than the average count of pixels at neighboring z-distances indicates that a surface of the wafer is present at the distance. This method reduces the false positives described in FIG. 11.
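A sketch of range mode under the same stack assumptions; the inclusive bounds are an illustrative choice:

```python
def range_mode_curve(stack, lo, hi):
    """Range mode: per captured image, count only the pixels whose
    characteristic value falls within [lo, hi]; all other pixels are
    excluded from the count."""
    in_range = (stack >= lo) & (stack <= hi)
    return in_range.reshape(stack.shape[0], -1).sum(axis=1)
```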

[0087] FIG. 14 is a chart illustrating the 3-D information resulting from the range mode operation. Given knowledge of the different types of material that are present on the wafer and the optical microscope configuration, an expected range of characteristic values can be determined for each material type. For example, a photoresist layer will reflect a relatively small amount of the light that irradiates the top surface of the photoresist layer (i.e. 4%). A silicon layer will reflect a larger amount of the light that irradiates the top surface of the silicon layer (i.e. 37%). The redirected reflections observed at distance 4 (i.e. 21%) will be substantially greater than the reflections observed at distance 6 from the top surface of the photoresist layer; however, the redirected reflections observed at distance 4 (i.e. 21%) will be substantially less than the reflection observed at distance 2 from the top surface of the silicon substrate. Therefore, when looking for the top surface of the photoresist layer, a first range that is centered on the expected characteristic value for photoresist can be used to filter out pixels that have characteristic values outside of the first range, thereby filtering out pixels that have characteristic values not resulting from reflections from the top surface of the photoresist layer. The pixel count across all distances generated by applying the first range of characteristic values is illustrated in FIG. 15. As shown in FIG. 15, some but not necessarily all pixels from other distances (surfaces) are filtered out by applying the first range. This occurs when the characteristic values measured at multiple distances fall within the first range. Nevertheless, application of the first range before counting pixels still functions to make the pixel count at the desired surface more prominent in comparison to other pixel counts at other distances. This is illustrated in FIG. 15. The pixel count at distance 6 is greater than the pixel count at distances 2 and 4 after the first range is applied, whereas before the first range was applied the pixel count at distance 6 was less than the pixel count at distances 2 and 4 (as shown in FIG. 14).

[0088] In a similar fashion, when looking for the top surface of the silicon substrate layer, a second range that is centered on the expected characteristic value for the silicon substrate layer can be used to filter out pixels that have characteristic values outside of the second range, thereby filtering out pixels that have characteristic values not resulting from reflections from the top surface of the silicon substrate layer. The pixel count across all distances generated by applying the second range of characteristic values is illustrated in FIG. 16. This application of ranges reduces the false indication of a wafer surface located at distance 4 by virtue of the knowledge of what characteristic values are expected from all the material present on the wafer being scanned. As discussed regarding FIG. 15, some but not necessarily all pixels from other distances (surfaces) are filtered out by applying a range. However, when the characteristic values measured at multiple distances do not fall within the same range, then the result of applying the range will eliminate all pixel counts from other distances (surfaces). FIG. 16 illustrates this scenario. In FIG. 16, the second range is applied before generating the pixel count at each distance. The result of applying the second range is that only pixels at distance 2 are counted. This creates a very clear indication that the surface of the silicon substrate is located at distance 2.

[0089] It is noted that, to reduce the impact caused by potential noise such as environmental vibration, a standard smoothing operation such as Gaussian filtering can be applied to the total pixel count along the z-distances before carrying out any peak searching operations.

[0090] FIG. 17 is a flowchart 200 illustrating the various steps included in peak mode operation. In step 201, the distance between the sample and the objective lens of an optical microscope is varied at pre-determined steps. In step 202, an image is captured at each pre-determined step. In step 203, a characteristic of each pixel in each captured image is determined. In step 204, for each captured image, the greatest characteristic across all pixels in the captured image is determined. In step 205, the greatest characteristic for each captured image is compared to determine if a surface of the sample is present at each pre-determined step.

[0091] FIG. 18 is a flowchart 300 illustrating the various steps included in range mode operation. In step 301, the distance between the sample and the objective lens of an optical microscope is varied at pre-determined steps. In step 302, an image is captured at each pre-determined step. In step 303, a characteristic of each pixel in each captured image is determined. In step 304, for each captured image, a count of pixels that have a characteristic value within a first range is determined. In step 305, it is determined if a surface of the sample is present at each pre-determined step based on the count of pixels for each captured image.

[0092] FIG. 19 is a diagram of a captured image including a single feature. One example of a feature is an opening in the photoresist layer in the shape of a circle. Another example of a feature is an opening in the photoresist layer in the shape of a trench, such as an unplated redistribution line (RDL) structure. During the wafer fabrication process the ability to measure various features of a photoresist opening in a wafer layer is advantageous. Measurement of a photoresist opening provides detection of flaws in the structure before metals are plated into the hole. For example, if a photoresist opening does not have the correct size, the plated RDL width will be wrong. Detection of this type of defect can prevent the further fabrication of a defective wafer. Preventing further fabrication of a defective wafer saves material and processing expenses. FIG. 19 illustrates that the measured intensity of light reflected from the top surface of the photoresist layer is greater than the measured intensity of light reflected from the opening in the photoresist layer when the captured image is focused on the top surface of the photoresist layer. As discussed in greater detail below, the information associated with each pixel in the captured image can be used to generate an intensity value for each pixel in the captured image. The intensity value of each pixel can then be compared with an intensity threshold to determine if each pixel is associated with a first region of the captured image, such as the top surface of the photoresist layer, or is associated with a second region of the captured image, such as the photoresist opening area. This can be done by (i) first applying an intensity threshold to the measured intensity of each pixel in the captured image, (ii) categorizing all pixels with an intensity value below the intensity threshold as being associated with a first region of the captured image, (iii) categorizing all pixels with an intensity value above the intensity threshold as being associated with a second region of the captured image, and (iv) defining a feature to be a group of pixels within the same region that are contiguous with other pixels associated with the same region.
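A hedged sketch of steps (i) through (iv), assuming pixels below the threshold belong to the darker opening region of FIG. 19 and using connected-component labeling as one plausible way to group contiguous pixels:

```python
import numpy as np
from scipy.ndimage import label

def segment_features(intensity_image, threshold):
    """Split a captured image into two regions by an intensity threshold,
    then group contiguous below-threshold pixels into labeled features."""
    opening_mask = intensity_image < threshold     # first region (opening)
    features, num_features = label(opening_mask)   # contiguous pixel groups
    return features, num_features
```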

[0093] The captured image shown in FIG. 19 may be a color image. Each pixel of the color image includes red, green and blue (RGB) channel values. Each of these color values can be combined to generate a single intensity value for each pixel. The various methods for converting the RGB values for each pixel to a single intensity value are described below.

[0094] A first method is to use three weighted values to convert the three color channels to an intensity value. Said another way, each color channel has its own weighted value or conversion factor. One can either use a default set of three conversion factors defined in a system recipe or modify them based on one's sample measurement needs. A second method is to subtract the color channels for each pixel from a default color channel value for each color channel, the result of which is then converted into intensity values using the conversion factors discussed in the first method. A third method is to use a "color difference" scheme to convert the colors to intensity values. In a color difference scheme, the resultant pixel intensity is defined by how close a pixel's color is to a predefined fixed Red, Green and Blue color value. One example of color difference is the weighted vector distance between a pixel's color value and the fixed color value. Yet another method is a "color difference" method with a fixed color value automatically derived from the image. In one example, the border area of an image is known to be of the background color, and the weighted average of the border area pixels' color can be used as the fixed color value for the color difference scheme.
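The first and third conversion methods might look as follows. The default weights shown are common luma coefficients used only as placeholders; the disclosure specifies no particular conversion factors.

```python
import numpy as np

def rgb_to_intensity(image_rgb, weights=(0.299, 0.587, 0.114)):
    """First method: weighted sum of the three color channels, one
    conversion factor per channel (image_rgb shape: H x W x 3)."""
    return image_rgb.astype(float) @ np.asarray(weights)

def color_difference_intensity(image_rgb, ref_rgb, weights=(1.0, 1.0, 1.0)):
    """Third method: intensity defined by the weighted vector distance
    between each pixel's color and a fixed reference color."""
    diff = image_rgb.astype(float) - np.asarray(ref_rgb, dtype=float)
    return np.sqrt((np.asarray(weights) * diff ** 2).sum(axis=-1))
```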

[ 0095 ] Once the color image has been converted to an intensity image, an intensity threshold can be compared to the intensity of each pixel to determine the region of the image to which the pixel belongs. Said another way, a pixel with an intensity value above the intensity threshold indicates that the pixel received light reflected from a first surface of the sample, and a pixel with an intensity value below the intensity threshold indicates that the pixel did not receive light reflected from the first surface of the sample. Once each pixel in the image is mapped to a region, the approximate shape of the feature that is in focus in the image can be determined.

[0096] FIG. 20, FIG. 21 and FIG. 22 illustrate three different methods of generating an intensity threshold value that can be used to differentiate pixels that measure light reflecting from the top surface of the photoresist layer from pixels that measure light not reflecting from the top surface of the photoresist layer.

[0097] FIG. 20 illustrates a first method of generating an intensity threshold value used for analyzing the captured image. In this first method, a count of pixels is generated for each measured intensity value. This type of graph is also referred to as a histogram. Once the count of pixels per intensity value is generated, the intensity range between the peak count of pixels resulting from measured light reflecting from the photoresist layer and the peak count of pixels resulting from measured light not reflecting from the photoresist layer can be determined. An intensity value within that range of intensity is selected to be the intensity threshold. In one example, the midpoint between the two peak counts is selected to be the threshold intensity. Other intensity values between the two peak counts may be used in other examples that fall within the disclosure of the present invention.
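
A minimal sketch of this first, histogram-based method follows. It assumes an 8-bit intensity image, and the exclusion window used to find the second peak is an assumption for illustration; a production implementation would use a more robust peak finder.

```python
import numpy as np

def histogram_midpoint_threshold(intensity):
    """First method: midpoint between the two dominant histogram peaks."""
    counts, _ = np.histogram(intensity, bins=256, range=(0, 256))
    # Strongest peak (e.g., light reflecting from the photoresist layer).
    first_peak = int(np.argmax(counts))
    # Strongest peak outside a fixed neighborhood of the first peak
    # (e.g., light not reflecting from the photoresist layer).
    masked = counts.copy()
    masked[max(first_peak - 32, 0):min(first_peak + 32, 256)] = 0
    second_peak = int(np.argmax(masked))
    # Midpoint of the two peak intensities serves as the threshold.
    return (first_peak + second_peak) / 2.0
```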

[0098] FIG. 21 illustrates a second method of generating an intensity threshold value used for analyzing the captured image. In step 311, a determination is made as to a first percentage of the captured image that represents the photoresist region. This determination can be made by physical measurement, optical inspection, or based on production specifications. In step 312, a determination is made as to a second percentage of the captured image that represents the photoresist opening area. This determination can be made by physical measurement, optical inspection, or based on production specifications. In step 313, all pixels in the captured image are sorted according to the intensity measured by each pixel. In step 314, all pixels with intensities in the bottom second percentage of all pixel intensities are selected. In step 315, all selected pixels are analyzed.
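
This second method amounts to a percentile cut over the sorted intensities. The sketch below is illustrative; the function name and the convention of returning the brightest selected pixel as the threshold are assumptions.

```python
import numpy as np

def percentile_threshold(intensity, opening_fraction):
    """Second method: the opening area occupies a known fraction of the
    image, so select the darkest `opening_fraction` of all pixels."""
    flat = np.sort(intensity.ravel())           # step 313: sort by intensity
    cutoff = int(len(flat) * opening_fraction)  # step 314: bottom percentage
    selected = flat[:cutoff]                    # pixels assumed in the opening
    # The brightest selected pixel bounds the opening region and can serve
    # as the intensity threshold for subsequent analysis (step 315).
    return float(selected.max())
```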

[0099] FIG. 22 illustrates a third method of determining an intensity threshold value. In step 321, a predetermined intensity threshold is stored into memory. In step 322, the intensity of each pixel is compared against the stored intensity threshold. In step 323, all pixels that have an intensity value less than the intensity threshold are selected. In step 324, the selected pixels are analyzed.

[00100] Regardless of how the intensity threshold is generated, the threshold intensity value is used to determine roughly where the border of the feature in the captured image is located. The rough border of the feature will then be used to determine a much more accurate measurement of the border of the feature, as discussed below.

[00101] FIG. 23 is a 3-D diagram of the photoresist opening shown in FIG. 19.

Various photoresist opening measurements are of interest during the fabrication process, such as the area of the top and bottom openings, the diameter of the top and bottom openings, the circumference of the top and bottom openings, the cross-sectional width of the top and bottom openings, and the depth of the opening. A first measurement is the top surface opening area. FIG. 8 (and the accompanying text) describes how an image focused on the top surface of the photoresist opening, and an image focused on the bottom surface of the photoresist opening, are selected from a plurality of images taken at different distances from the sample. Once the image focused on the top surface is selected, it may be used to determine the above-mentioned top opening measurements. Likewise, once the image focused on the bottom surface of the photoresist opening is selected, it may be used to determine the above-mentioned bottom opening measurements. As discussed above and in U.S. Patent Application Serial Number 12/699,824, entitled "3-D Optical Microscope", filed February 3, 2010, by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference), a pattern or grid may be projected onto the surface of the sample while multiple images are captured. In one example, an image including the projected pattern or grid is used to determine the photoresist opening measurements. In another example, a new image not including the pattern or grid, captured at the same z-distance, is used to determine the photoresist opening measurements. In the latter example, the new image without a projected pattern or grid on the sample provides a "cleaner" image, which allows easier detection of the border of the photoresist opening.

[00102] FIG. 24 is a 2-D diagram of the top surface opening shown in FIG. 23. The 2-D diagram clearly shows the border of the top surface opening (solid line 40). The border is traced using a best fit line (dashed line 41). Once the best fit line trace is created, the diameter, area and circumference of the best fit line 41 can be calculated.

[00103] FIG. 25 is a 2-D diagram of the bottom surface opening shown in FIG. 23. The 2-D diagram clearly shows the border of the bottom surface opening (solid line 42). The border is traced using a best fit line (dashed line 43). Once the best fit line trace is created, the bottom surface opening diameter, area and circumference of the best fit line can be calculated.

[00104] In the present example, the best fit line is automatically generated by a computer system in communication with the optical microscope. The best fit line can be generated by analyzing transitions between dark and bright portions of the selected image, as is discussed in greater detail below.

[00105] FIG. 26 is a 2-D image of an opening in a photoresist layer. The image is focused on the top surface of the photoresist layer. In this example, the light reflecting from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light intensity measured from the photoresist opening is dark because there is no reflecting surface in the photoresist opening. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or the opening in the photoresist. The change in intensity across the transition between the top surface of the photoresist and the opening in the photoresist may span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is needed to determine the exact pixel locations of the border of the photoresist. To determine the pixel location of a single surface transition point, an intensity average is taken within a neighboring bright area outside of the transition area, and an intensity average is taken within the neighboring dark area outside the transition area. The middle intensity value between the average of the neighboring bright area and the average of the neighboring dark area is used as the intensity threshold value that distinguishes whether a pixel belongs to the top surface of the photoresist or the opening in the photoresist. This intensity threshold may be different from the earlier discussed intensity threshold used to select the feature within a single captured image. Once the middle intensity threshold is determined, it is compared to all pixels to differentiate pixels that belong to the top surface of the photoresist from pixels that belong to the opening in the photoresist. If the pixel intensity is above the intensity threshold, then the pixel is determined to be a photoresist pixel. If the pixel intensity is below the intensity threshold, then the pixel is determined to be an opening area pixel. Multiple border points can be determined in this fashion and used to fit a shape. The fitted shape is then used to calculate all desired dimensions of the top opening of the photoresist. In one example, the fitted shape may be selected from the group of: a circle, a square, a rectangle, a triangle, an oval, a hexagon, a pentagon, etc.
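
The mid-intensity threshold for a single transition point can be sketched as follows, assuming a 1-D intensity profile sampled along a line crossing the border; the indices bracketing the transition span are illustrative inputs that a real system would derive from the rough border found earlier.

```python
import numpy as np

def local_border_threshold(profile, t_lo, t_hi):
    """Mid-intensity threshold and border point for one transition.

    profile: 1-D intensities along a line crossing the border (bright -> dark);
    t_lo, t_hi: indices bracketing the multi-pixel transition span (assumed known).
    """
    bright_avg = profile[:t_lo].mean()  # neighboring bright area, outside transition
    dark_avg = profile[t_hi:].mean()    # neighboring dark area, outside transition
    threshold = (bright_avg + dark_avg) / 2.0
    # The border point is where the profile first drops below the threshold.
    border_index = int(np.argmax(profile < threshold))
    return threshold, border_index
```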

[00106] FIG. 27 illustrates the variation of measured intensity across the neighboring area around the brightness transition of FIG. 26. At the left-most portion of the neighboring area the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases through the brightness transition of the neighboring area. The measured light intensity falls to a minimum range at the right-most portion of the neighboring area because the top surface of the photoresist layer is not present in the right-most portion of the neighboring area. FIG. 27 graphs this variation of measured intensity across the neighboring area. The border point indicating where the top surface of the photoresist layer ends can then be determined by application of a threshold intensity. The border point where the top surface of the photoresist ends is located at the intersection of the measured intensity and the threshold intensity. This process is repeated at different neighboring areas located along the brightness transition. For each neighboring area, a border point is determined. The border point for each neighboring area is then used to determine the size and shape of the top surface border.

[00107] FIG. 28 is a 2-D image of an opening in a photoresist layer. The image is focused on the bottom surface of the photoresist opening. In this example, the light reflecting from the bottom surface of the photoresist opening area is bright because the microscope is focused on the bottom surface of the photoresist opening. The light reflecting from the photoresist area is also relatively bright because the substrate is either silicon or a metal seed layer, both of which have high reflectivity. The light reflecting from the border of the photoresist layer is dark due to light scattering caused by the photoresist border. The measured intensity of each pixel is used to determine whether the pixel belongs to the bottom surface of the photoresist opening or not. The change in intensity across the transition between the bottom surface of the photoresist opening and the photoresist area may span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is needed to determine the exact pixel locations of the photoresist opening. To determine the pixel location of a border point, the location of the pixel with minimum intensity among neighboring pixels is determined. Multiple border points can be determined in this fashion, and are used to fit a shape. The fitted shape is then used to calculate the desired dimensions of the bottom opening.
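
For the bottom-surface image the border point is simply the intensity minimum along a profile crossing the dark border. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def bottom_border_point(profile):
    """Border point for the bottom-focused image: the photoresist border
    scatters light, so the border lies at the minimum of the local profile."""
    return int(np.argmin(profile))
```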

[00108] FIG. 29 illustrates the variation of measured intensity across the neighboring area around the brightness transition of FIG. 28. At the left-most portion of the neighboring area the measured intensity is high because the microscope is focused on the bottom surface of the photoresist opening. The measured light intensity decreases to a minimum intensity and then increases through the brightness transition of the neighboring area. The measured light intensity rises to a relatively high intensity range at the right-most portion of the neighboring area due to light reflections from the substrate surface. FIG. 29 graphs this variation of measured intensity across the neighboring area. The border point indicating where the border of the photoresist opening is located can then be determined by finding the location of the minimum measured intensity. The border point is located where the minimum measured intensity is located. The process is repeated at different neighboring areas located along the brightness transition. For each neighboring area, a border point is determined. The border point for each neighboring area is then used to determine the size and shape of the bottom surface border.

[00109] FIG. 30 is a 2-D image of a trench structure in a photoresist layer, such as an unplated redistribution line (RDL) structure. The image is focused on the top surface of the photoresist layer. In this example, the light reflecting from the top surface of the photoresist layer is bright because the microscope is focused on the top surface of the photoresist layer. The light reflecting from the opening in the photoresist layer is darker because less light is reflected from the open trench area. The intensity of each pixel is used to determine whether the pixel belongs to the top surface of the photoresist or the opening area in the photoresist. The change in intensity across the transition between the top surface of the photoresist and the opening area in the photoresist may span multiple pixels and multiple intensity levels. The image background intensity may also not be uniform. Therefore, further analysis is needed to determine the exact pixel locations of the border of the photoresist. To determine the pixel location of a single surface transition point, an intensity average is taken within a neighboring bright area outside of the transition area, and an intensity average is taken within the neighboring dark area outside the transition area. The middle intensity value between the average of the neighboring bright area and the average of the neighboring dark area is used as the intensity threshold value to distinguish top surface photoresist reflections from non-top surface photoresist reflections. Once the middle intensity threshold is determined, it is compared to all neighboring pixels to determine a border between the top surface pixels and the photoresist opening area. If the pixel intensity is above the intensity threshold, then the pixel is determined to be a top surface photoresist pixel. If the pixel intensity is below the intensity threshold, then the pixel is determined to be a photoresist opening area pixel. Multiple border points can be determined in this fashion and used to fit a shape. The fitted shape is then used to calculate all desired dimensions of the photoresist opening of the trench, such as the trench width.

[00110] FIG. 31 illustrates the variation of measured intensity across the neighboring area around the brightness transition of FIG. 30. At the left-most portion of the neighboring area the measured intensity is high because the microscope is focused on the top surface of the photoresist layer. The measured light intensity decreases through the brightness transition of the neighboring area. The measured light intensity falls to a minimum range at the right-most portion of the neighboring area because the top surface of the photoresist layer is not present in the right-most portion of the neighboring area. FIG. 31 graphs this variation of measured intensity across the neighboring area. The border point indicating where the top surface of the photoresist layer ends can then be determined by application of a threshold intensity. The border point where the top surface of the photoresist ends is located at the intersection of the measured intensity and the threshold intensity. This process is repeated at different neighboring areas located along the brightness transition. For each neighboring area, a border point is determined. The border point for each neighboring area is then used to determine the size and shape of the top surface border.

[00111] With respect to FIG. 26 through FIG. 31, pixel intensity is only one example of a pixel characteristic that can be used to distinguish pixels of different regions in an image. For example, the wavelengths, or colors, of each pixel may also be used to distinguish pixels from different regions in an image in a similar fashion. Once the border between each region is accurately defined, it is then used to determine the critical dimensions (CD) of a PR opening, such as its diameter or width. Oftentimes, the measured CD values are then compared with those measured on other types of tools, such as a critical-dimension scanning electron microscope (CD-SEM). This kind of cross-calibration is necessary to ensure measurement precision among production monitoring tools.

[00112] FIG. 32 is a 3-D diagram of photoresist openings partially filled with plated metal. The openings in the photoresist layer are in the shape of a trench, such as a plated redistribution line (RDL) structure. During the wafer fabrication process, the ability to measure various features of plated metal deposited into the photoresist openings while the photoresist is still intact is advantageous. For example, if the metal is not thick enough, one can always plate additional metal as long as the photoresist has not been stripped. The ability to catch potential problems while the wafer is still at a reworkable stage prevents further fabrication of a defective wafer and saves material and processing expenses.

[00113] FIG. 33 is a cross-sectional diagram of a photoresist opening partially filled with plated metal. FIG. 33 clearly shows that the height of the top surface of the photoresist ("PR") region is greater than the height of the top surface of the plated metal. The width of the top surface of the plated metal is also illustrated in FIG. 33. Using the various methods described above, the z-position of the top surface of the photoresist region and the z-position of the top surface of the plated metal can be determined. The distance between the top surface of the photoresist region and the top surface of the plated metal (also referred to as "step height") is equal to the difference between the height of the top surface of the photoresist region and the height of the top surface of the plated metal. To determine the thickness of the plated metal, another measurement, the thickness of the photoresist region, is required. As discussed above regarding FIG. 11, the photoresist region is semi-transparent and has an index of refraction that is different from the index of refraction of open air. Therefore, the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the photoresist region is not actually located at the bottom surface of the photoresist region. However, the goal at this point is different: rather than filtering out the erroneous surface measurement, the thickness of the photoresist region is now desired. FIG. 40 illustrates how a portion of incident light that does not reflect from the top surface of the photoresist region travels through the photoresist region at a different angle than the incident light due to the index of refraction of the photoresist material. If this error is not accounted for, the measured thickness of the photoresist region is D' (the measured z-position of the captured image focused on light reflecting from the top surface of the photoresist region minus the measured z-position of the captured image focused on light reflecting from the bottom surface of the photoresist region), which FIG. 40 clearly illustrates is not close to the actual thickness of the photoresist region D. The error introduced by the index of refraction of the photoresist region, however, can be removed by applying a correction calculation to the measured thickness of the photoresist region. A first correction calculation is shown in FIG. 40, where the actual thickness of the photoresist region (D) is equal to the measured thickness of the photoresist region (D') times the index of refraction of the photoresist region. A second correction calculation is shown in FIG. 40, where the actual thickness of the photoresist region (D) is equal to the measured thickness of the photoresist region (D') times the index of refraction of the photoresist region plus an offset value. The second correction calculation is more general and takes into account the fact that the index of refraction of photoresist is a function of wavelength and that, when imaging through a transparent medium, spherical aberration of an objective lens may affect the z-position measurement. Therefore, as long as a proper calibration procedure is followed, the z-position of the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the photoresist region can be used to calculate the actual thickness of the photoresist region.
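
Both correction calculations reduce to one line each. The sketch below assumes z increases away from the sample so that the apparent thickness D' is the difference of the two focal-plane z-positions; an offset of zero recovers the first correction. The function and argument names are illustrative.

```python
def corrected_thickness(z_top_focus, z_bottom_focus, n_layer, offset=0.0):
    """Refraction correction for a semi-transparent layer.

    D' (apparent thickness) is the z-position of the image focused on the
    top surface minus the z-position of the image focused on the apparent
    bottom surface. First correction: D = D' * n. Second, more general
    correction: D = D' * n + offset, where the offset comes from calibration
    and absorbs the wavelength dependence of n and objective spherical
    aberration."""
    d_apparent = z_top_focus - z_bottom_focus
    return d_apparent * n_layer + offset
```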

[00114] Once the correction equation is applied to the measured thickness of the photoresist region, the true thickness of the photoresist region can be attained. Referring back to FIG. 33, the thickness of the plated metal can now be calculated. The thickness of the plated metal is equal to the thickness of the photoresist region minus the difference between the z-position of the top surface of the photoresist region and the z-position of the top surface of the plated metal.
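
The plated-metal thickness then follows from two subtractions (a sketch; the argument names are illustrative):

```python
def plated_metal_thickness(pr_thickness, z_pr_top, z_metal_top):
    """Metal thickness = corrected photoresist thickness minus the step
    height, where step height = z(top of photoresist) - z(top of metal)."""
    step_height = z_pr_top - z_metal_top
    return pr_thickness - step_height
```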

[00115] FIG. 34 is a 3-D diagram of a circular photoresist opening with plated metal. FIG. 35 is a cross-sectional diagram of the circular photoresist opening with plated metal shown in FIG. 34. The cross-sectional diagram of FIG. 35 is similar to the cross-sectional diagram of FIG. 33. FIG. 35 clearly shows that the height of the top surface of the photoresist ("PR") region is greater than the height of the top surface of the plated metal. Using the various methods described above, the z-position of the top surface of the photoresist region and the z-position of the top surface of the plated metal can be determined. The distance between the top surface of the photoresist region and the top surface of the plated metal (also referred to as "step height") is equal to the difference between the height of the top surface of the photoresist region and the height of the top surface of the plated metal. To determine the thickness of the plated metal, another measurement, the thickness of the photoresist region, is required. As discussed above regarding FIG. 11, the photoresist region is semi-transparent and has an index of refraction that is different from the index of refraction of open air. Therefore, the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the photoresist region is not actually located at the bottom surface of the photoresist region. However, the goal at this point is different: the thickness of the photoresist region is now desired. FIG. 40 illustrates how a portion of incident light that does not reflect from the top surface of the photoresist region travels through the photoresist region at a different angle than the incident light due to the index of refraction of the photoresist material. If this error is not accounted for, the measured thickness of the photoresist region is D' (the measured z-position of the captured image focused on light reflecting from the top surface of the photoresist region minus the measured z-position of the captured image focused on light reflecting from the bottom surface of the photoresist region), which FIG. 40 clearly illustrates is not close to the actual thickness of the photoresist region D. The error introduced by the index of refraction of the photoresist region, however, can be removed by applying a correction calculation to the measured thickness of the photoresist region. A first correction calculation is shown in FIG. 40, where the actual thickness of the photoresist region (D) is equal to the measured thickness of the photoresist region (D') times the index of refraction of the photoresist region. A second correction calculation is shown in FIG. 40, where the actual thickness of the photoresist region (D) is equal to the measured thickness of the photoresist region (D') times the index of refraction of the photoresist region plus an offset value. The second correction calculation is more general and takes into account the fact that the index of refraction of photoresist is a function of wavelength and that, when imaging through a transparent medium, spherical aberration of an objective lens may affect the z-position measurement. Therefore, as long as a proper calibration procedure is followed, the z-position of the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the photoresist region can be used to calculate the actual thickness of the photoresist region.

[00116] Once the correction equation is applied to the measured thickness of the photoresist region, the true thickness of the photoresist region can be attained. Referring back to FIG. 35, the thickness of the plated metal can now be calculated. The thickness of the plated metal is equal to the thickness of the photoresist region minus the difference between the z-position of the top surface of the photoresist region and the z-position of the top surface of the plated metal.

[00117] FIG. 36 is a 3-D diagram of a metal pillar over passivation. FIG. 37 is a cross-sectional diagram of the metal pillar over passivation shown in FIG. 36. FIG. 37 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer. The diameter of the top surface of the plated metal is also illustrated in FIG. 37. Using the various methods described above, the z-position of the top surface of the passivation layer and the z-position of the top surface of the metal layer can be determined. The distance between the top surface of the passivation layer and the top surface of the metal layer (also referred to as "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, another measurement, the thickness of the passivation layer, is required. As discussed above regarding FIG. 11, semi-transparent materials, such as a photoresist region or a passivation layer, have an index of refraction that is different from the index of refraction of open air. Therefore, the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the passivation layer is not actually located at the bottom surface of the passivation layer. However, the goal at this point is different: the thickness of the passivation layer is now desired. FIG. 40 illustrates how a portion of incident light that does not reflect from the top surface of the passivation layer travels through the passivation layer at a different angle than the incident light due to the index of refraction of the passivation material. If this error is not accounted for, the measured thickness of the passivation layer is D' (the measured z-position of the captured image focused on light reflecting from the top surface of the passivation layer minus the measured z-position of the captured image focused on light reflecting from the bottom surface of the passivation layer), which FIG. 40 clearly illustrates is not close to the actual thickness of the passivation layer D. The error introduced by the index of refraction of the passivation layer, however, can be removed by applying a correction calculation to the measured thickness of the passivation layer. A first correction calculation is shown in FIG. 40, where the actual thickness of the passivation layer (D) is equal to the measured thickness of the passivation layer (D') times the index of refraction of the passivation layer. A second correction calculation is shown in FIG. 40, where the actual thickness of the passivation layer (D) is equal to the measured thickness of the passivation layer (D') times the index of refraction of the passivation layer plus an offset value. The second correction calculation is more general and takes into account the fact that the index of refraction of the passivation layer is a function of wavelength and that, when imaging through a transparent medium, spherical aberration of an objective lens may affect the z-position measurement. Therefore, as long as a proper calibration procedure is followed, the z-position of the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the passivation layer can be used to calculate the actual thickness of the passivation layer.

[00118] Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer can be attained. Referring back to FIG. 37, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the sum of the thickness of the passivation layer and the difference between the z-position of the top surface of the passivation layer and the z-position of the top surface of the metal layer.

[00119] FIG. 38 is a 3-D diagram of metal over passivation. In this particular case, the metal structure shown is redistribution lines (RDL). FIG. 39 is a cross-sectional diagram of the metal over passivation shown in FIG. 38. FIG. 39 clearly shows that the height of the top surface of the passivation layer is less than the height of the top surface of the metal layer. Using the various methods described above, the z-position of the top surface of the passivation layer and the z-position of the top surface of the metal layer can be determined. The distance between the top surface of the passivation layer and the top surface of the metal layer (also referred to as "step height") is equal to the difference between the height of the top surface of the metal layer and the height of the top surface of the passivation layer. To determine the thickness of the metal layer, another measurement, the thickness of the passivation layer, is required. As discussed above regarding FIG. 11, semi-transparent materials, such as a photoresist region or a passivation layer, have an index of refraction that is different from the index of refraction of open air. Therefore, the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the passivation layer is not actually located at the bottom surface of the passivation layer. However, the goal at this point is different: the thickness of the passivation layer is now desired. FIG. 40 illustrates how a portion of incident light that does not reflect from the top surface of the passivation layer travels through the passivation layer at a different angle than the incident light due to the index of refraction of the passivation material. If this error is not accounted for, the measured thickness of the passivation layer is D' (the measured z-position of the captured image focused on light reflecting from the top surface of the passivation layer minus the measured z-position of the captured image focused on light reflecting from the bottom surface of the passivation layer), which FIG. 40 clearly illustrates is not close to the actual thickness of the passivation layer D. The error introduced by the index of refraction of the passivation layer, however, can be removed by applying a correction calculation to the measured thickness of the passivation layer. A first correction calculation is shown in FIG. 40, where the actual thickness of the passivation layer (D) is equal to the measured thickness of the passivation layer (D') times the index of refraction of the passivation layer. A second correction calculation is shown in FIG. 40, where the actual thickness of the passivation layer (D) is equal to the measured thickness of the passivation layer (D') times the index of refraction of the passivation layer plus an offset value. The second correction calculation is more general and takes into account the fact that the index of refraction of the passivation layer is a function of wavelength and that, when imaging through a transparent medium, spherical aberration of an objective lens may affect the z-position measurement. Therefore, as long as a proper calibration procedure is followed, the z-position of the focal plane of the captured image that is focused on the light reflecting from the bottom surface of the passivation layer can be used to calculate the actual thickness of the passivation layer.

[00120] Once the correction equation is applied to the measured thickness of the passivation layer, the true thickness of the passivation layer can be attained. Referring back to FIG. 39, the thickness of the metal layer can now be calculated. The thickness of the metal layer is equal to the sum of the thickness of the passivation layer and the difference between the z-position of the top surface of the passivation layer and the z-position of the top surface of the metal layer.

[00121] FIG. 41 is a diagram illustrating peak mode operation using images captured at various distances when a photoresist opening is within the field of view of the optical microscope. The captured images illustrated in FIG. 41 are taken from a sample similar to the structure of the sample shown in FIG. 32. This structure is a metal plated trench structure. The top-down view of the sample shows the area of the photoresist opening, containing plated metal, in the x-y plane. The PR opening also has a specific depth in the z-direction (above the plated metal). The images captured at various distances are shown below the top-down view in FIG. 41. At distance 1, the optical microscope is not focused on the top surface of the photoresist region or the top surface of the plated metal. At distance 2, the optical microscope is focused on the top surface of the plated metal, but is not focused on the top surface of the photoresist region. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflecting from the top surface of the plated metal compared to the pixels that receive reflected light from other surfaces that are out of focus (the top surface of the photoresist region). At distance 3, the optical microscope is not focused on the top surface of the photoresist region or the top surface of the plated metal. Therefore, at distance 3 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distance 2. At distance 4, the optical microscope is not focused on any surface of the sample; however, due to the difference between the index of refraction of air and the index of refraction of the photoresist region, an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. FIG. 11, FIG. 40, and the accompanying text describe this phenomenon in greater detail. At distance 6, the optical microscope is focused on the top surface of the photoresist region, but is not focused on the top surface of the plated metal. This results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the photoresist region compared to the pixels that receive reflected light from other surfaces that are out of focus (the top surface of the plated metal). Once the maximum characteristic value from each captured image is determined, the results can be utilized to determine at which distances each surface of the wafer is located.

[00122] FIG. 42 is a chart illustrating the 3-D information resulting from the peak mode operation illustrated in FIG. 41. As discussed regarding FIG. 41, the images captured at distances 1, 3 and 5 have a lower maximum characteristic value compared to the images captured at distances 2, 4 and 6. The curve of the maximum characteristic values at various z-distances may contain noise due to environmental effects, such as vibration. To minimize such noise, a standard smoothing method, such as Gaussian filtering with a certain kernel size, can be applied before further data analysis.

[00123] One method of comparing the maximum characteristic values is performed by a peak finding algorithm. In one example, a derivative method is used to locate zero-crossing points along the z-axis to determine the distance at which each "peak" is present. The maximum characteristic value at each distance where a peak is found is then compared to determine the distance where the greatest characteristic value was measured. In the case shown in FIG. 42, a peak will be found at distance 2, which is used as an indication that a surface of the sample is located at distance 2.
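
A minimal sketch of the smoothing and derivative zero-crossing approach follows; the Gaussian sigma and the minimum peak height used to reject noise peaks are assumed parameters, not values from the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def find_surface_peaks(max_char_values, min_height, sigma=1.0):
    """Locate candidate surface z-steps from per-image maxima.

    max_char_values[i] is the greatest characteristic value of the image
    captured at predetermined step i."""
    smoothed = gaussian_filter1d(np.asarray(max_char_values, dtype=float), sigma)
    d = np.diff(smoothed)  # discrete derivative along the z-axis
    # A positive-to-negative zero crossing of the derivative marks a peak.
    peaks = [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] <= 0]
    return [p for p in peaks if smoothed[p] >= min_height]
```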

[00124] Another method of comparing the maximum characteristic values is performed by comparing each maximum characteristic value with a preset threshold value. The threshold value may be calculated based on the sample materials, distances, and the specification of the optical microscope. Alternatively, the threshold value may be determined by empirical testing before automated processing. In either case, the maximum characteristic value for each captured image is compared to the threshold value. If the maximum characteristic value is greater than the threshold, then it is determined that the maximum characteristic value indicates the presence of a surface of the sample. If the maximum characteristic value is not greater than the threshold, then it is determined that the maximum characteristic value does not indicate a surface of the sample.

[00125] As an alternative to the peak mode methods described above, the range mode method, described in FIG. 13 and the related text, can be used to determine the z-position of different surfaces of a sample.

[00126] FIG. 43 is a diagram of a captured image focused on a top surface of a photoresist layer in a trench structure, including an outline of a first analysis region A and a second analysis region B. As discussed above, the entire field of view of each captured image can be used to generate 3-D information. However, it is advantageous to have the option to use only a selectable portion (region A or region B) of the field of view to generate 3-D information. In one example, the region is selected by a user using a mouse or touch screen device in communication with a computer that processes the captured images. Once selected, different threshold values can be applied to each region to more effectively single out a particular surface peak, as shown in FIG. 42. This scenario is illustrated in FIG. 43. When acquisition of 3-D information regarding the top surface of plated metal is desired, the selectable portion of the field of view (region A) is set to include multiple regions of plated metal. Because the characteristic values associated with a metal surface are usually much greater than the characteristic values associated with the photoresist, a high threshold value can be applied to region A to filter out the characteristic values associated with the photoresist and improve detection of the metal surface peak. Alternatively, when acquisition of 3-D information regarding the top surface of a photoresist region is desired, the selectable portion of the field of view (region B) is set to a small area located in the center of the image. The characteristic value associated with a photoresist surface is usually relatively weak compared to the characteristic value associated with the metal surface. The quality of the raw signal used in the characteristic value calculation may be best around the center of the field of view enclosed within region B. By setting an appropriate threshold for region B, one can more effectively detect a weak characteristic value peak of the photoresist surface. Region A and region B, and the threshold to be used within each region, can be set and adjusted by the user via a graphical interface that displays a top-down image of the sample, and saved in a recipe for automated measurements.
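
Restricting the peak-mode analysis to a selected region and applying a region-specific threshold can be sketched as below; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def region_surface_steps(char_stack, region, threshold):
    """Peak-mode analysis restricted to one rectangular analysis region.

    char_stack: characteristic values with shape (n_steps, height, width);
    region: (row_slice, col_slice), e.g. (slice(200, 260), slice(300, 360));
    threshold: region-specific threshold (high for metal in region A,
    lower for the weaker photoresist peak in region B)."""
    rows, cols = region
    maxima = char_stack[:, rows, cols].reshape(len(char_stack), -1).max(axis=1)
    return np.nonzero(maxima > threshold)[0]  # steps indicating a surface
```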

[00127] FIG. 44 is a 3-D diagram of a bump over passivation structure. FIG. 45 is a top-down diagram of the bump over passivation structure shown in FIG. 44, including an outline of a first analysis region A and a second analysis region B. Region A may be set so that region A will always include the apex of the metal bump during an automated measurement sequence. Region B does not enclose any portion of the metal bump and only encloses a portion of the passivation layer. Analysis of only region A of all captured images provides pixel filtering such that the majority of the pixels analyzed include information about the metal bump. Analysis of region B of all captured images provides pixel filtering such that all of the pixels analyzed include information about the passivation layer. The application of user selectable analysis regions provides pixel filtering based on location rather than on pixel value. For example, when the location of the top surface of the passivation layer is desired, region B can be applied and all effects caused by the metal bump can be instantly eliminated from the analysis. In another example, where the location of the apex of the metal bump is desired, region A can be applied and all effects caused by the large passivation layer area can be instantly eliminated from the analysis.

[00128] In some examples, it is also useful to fix the spatial relationship between region A and region B. When measuring a metal bump of a known size, such as illustrated in FIG. 44 and FIG. 45, it is useful to fix the spatial relationship between region A and region B to provide consistent measurements because region A is always used to measure 3-D information of the metal bump and region B is always used to measure 3-D information of the passivation layer. Moreover, when region A and region B have a fixed spatial relationship, the adjustment of one region automatically causes an adjustment of the other region. This situation is illustrated in FIG. 46. FIG. 46 is a top-down diagram illustrating adjustment of analysis region A and analysis region B when the entire bump is not located in the original analysis region A. This can occur for multiple reasons, such as an inaccurate placement of the sample by the handler or process variation during the fabrication of the sample. Regardless of the cause, region A needs to be adjusted to properly center around the apex of the metal bump. Region B also needs to be adjusted to ensure that region B does not include any portion of the metal bump. When the spatial relationship between region A and region B is fixed, then adjustment of region A automatically causes realignment of region B.

[00129] FIG. 47 is a cross-sectional diagram of the bump over passivation structure illustrated in FIG. 44. When the thickness of the passivation layer is substantially larger than the distance between the predetermined steps of the optical microscope during image acquisition, the z-position of the top surface of the passivation layer can be easily detected as discussed above. However, when the thickness of the passivation layer is not substantially larger than the distance between the predetermined steps of the optical microscope (i.e., the passivation layer is relatively thin), the z-position of the top surface of the passivation layer may not be easily detected and measured. The difficulty arises due to the small percentage of light that reflects from the top surface of the passivation layer compared to the large percentage of light that reflects from the bottom surface of the passivation layer. In other words, the characteristic value peak associated with the top surface of the passivation layer is very weak compared to that of the bottom surface of the passivation layer. When the captured image focused on the high-intensity reflection from the bottom surface of the passivation layer is less than a few predetermined steps away from the captured image focused on the low-intensity reflection from the top surface of the passivation layer, it is not possible to differentiate the reflections received from the bottom surface of the passivation layer from the reflections received from the top surface of the passivation layer. This problem can be addressed by several different methods.

[00130] In a first method, the total number of predetermined steps across the scan can be increased so as to provide additional resolution across the entire scan. For example, the number of predetermined steps across the same scan distance can be doubled, which would double the z-resolution of the scan. This method would also double the number of images that are captured during a single scan. The resolution of the scan can be increased until the characteristic peak measured from top surface reflections can be differentiated from bottom surface reflections. FIG. 49 illustrates a situation where sufficient resolution is provided in the scan to differentiate reflections from the top surface and the bottom surface of the passivation layer.

[00131] In a second method, the total number of predetermined steps is also increased; however, only a portion of the steps are used to capture images and the remainder are skipped.

[00132] In a third method, the distance between predetermined steps can be varied such that the distance between steps is smaller in the vicinity of the passivation layer and greater outside of the vicinity of the passivation layer. This method provides greater resolution in the vicinity of the passivation layer and less resolution outside the vicinity of the passivation layer. This method does not require additional predetermined steps to be added to the scan, but rather redistributes the predetermined steps in a non-linear fashion to provide additional resolution where needed, at the expense of lower resolution where high resolution is not needed.
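
The non-linear redistribution of steps in this third method might look like the following sketch, where the coarse and fine spacings and the z-window around the passivation layer would come from the scan recipe; all names and values here are illustrative.

```python
import numpy as np

def nonuniform_steps(z_start, z_end, layer_lo, layer_hi, coarse, fine):
    """Predetermined z-steps: fine spacing within [layer_lo, layer_hi]
    (the vicinity of the passivation layer), coarse spacing elsewhere."""
    below = np.arange(z_start, layer_lo, coarse)
    within = np.arange(layer_lo, layer_hi, fine)
    above = np.arange(layer_hi, z_end + coarse, coarse)
    return np.concatenate([below, within, above])
```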

[00133] For additional description regarding how to improve scan resolution, see U.S. Patent Application Serial Number 13/333,938, entitled "3D Microscope Including Insertable Components To Provide Multiple Imaging and Measurement Capabilities", filed December 21, 2011, by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference).

[00134] Using any of the methods discussed above, the z-position of the top surface of the passivation layer can be determined.

[00135] The height of the apex of the metal bump with respect to the top surface of the passivation layer ("bump height over passivation layer") is also a measure of interest. The bump height over passivation layer is equal to the z-position of the apex of the bump minus the z-position of the top surface of the passivation layer. Determination of the z-position of the top surface of the passivation layer is described above. Determination of the z-position of the apex of the bump can be performed using different methods.

[00136] In a first method, the z-position of the apex of the bump is determined by determining the z-position of the peak characteristic value for each x-y pixel location across all captured images. Said another way, for each x-y pixel location the measured characteristic value is compared across all captured images at every z-position, and the z-position containing the maximum characteristic value is stored in an array. The result of performing this process across all x-y pixel locations is an array of all x-y pixel locations and the associated peak z-position for every x-y pixel location. The greatest z-position in the array is the measured z-position of the apex of the bump. For additional description regarding how to generate 3-D information, see U.S. Patent Application Serial Number 12/699,824 and U.S. Patent No. 8,174,762, entitled "3-D Optical Microscope", filed February 3, 2010, by James Jianguo Xu et al. (the subject matter of which is incorporated herein by reference).
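
This first method is a per-pixel argmax over the image stack. A minimal sketch (the array names are illustrative):

```python
import numpy as np

def apex_z(char_stack, z_positions):
    """char_stack: characteristic values, shape (n_steps, height, width);
    z_positions: z-position of each predetermined step, length n_steps."""
    peak_index = char_stack.argmax(axis=0)       # step of peak value per x-y pixel
    z_map = np.asarray(z_positions)[peak_index]  # peak z per x-y location
    return z_map.max()                           # measured z-position of the apex
```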

[00137] In a second method, the z-position of the apex of the bump is determined by creating a fitted 3-D model of the surface of the bump and then calculating the peak of the surface of the bump using the 3-D model. In one example, this can be done by generating the same array described above regarding the first method; however, once the array is completed, the array is used to generate the 3-D model. The 3-D model can be generated using a second order polynomial function fit to the data. Once the 3-D model is created, the slope (derivative) of the surface of the bump is determined. The apex of the bump is calculated to be located where the slope of the fitted surface is equal to zero.
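
This second method can be sketched as a least-squares fit of a second order polynomial surface followed by solving for the stationary point where both partial derivatives are zero. The pixel-coordinate parameterization below is an assumption; the stationary point is the apex only when the fitted surface is concave, as it is for a bump-shaped z-map.

```python
import numpy as np

def fitted_apex_z(z_map):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to the per-pixel
    peak-z array near the bump, then evaluate z at the stationary point."""
    h, w = z_map.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    A = np.column_stack([x.ravel()**2, y.ravel()**2, (x * y).ravel(),
                         x.ravel(), y.ravel(), np.ones(x.size)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z_map.ravel(), rcond=None)[0]
    # dz/dx = 2a*x + c*y + d = 0 and dz/dy = c*x + 2b*y + e = 0
    x0, y0 = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return a * x0**2 + b * y0**2 + c * x0 * y0 + d * x0 + e * y0 + f
```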

[00138] Once the z-position of the apex of the bump is determined, the bump height over passivation layer can be calculated by subtracting the z-position of the top surface of the passivation layer from the z-position of the apex of the bump.

[00139] FIG. 48 is a diagram illustrating peak mode operation using images captured at various distances when only a passivation layer is within region B of the field of view of the optical microscope. By only analyzing pixels within region B (shown in FIG. 45), all pixel information related to the metal bump is excluded. Therefore, the 3-D information generated by analyzing the pixels within region B will only be influenced by the passivation layer present in region B. The captured images illustrated in FIG. 48 are taken from a sample similar to the structure of the sample shown in FIG. 44. This structure is a metal bump over passivation structure. The top-down view of the sample shows the area of the passivation layer in the x-y plane. Given that only pixels within region B are selected, the metal bump is not visible in the top-down view. The images captured at various distances are shown below the top-down view in FIG. 48. At distance 1, the optical microscope is not focused on the top surface of the passivation layer or the bottom surface of the passivation layer. At distance 2, the optical microscope is not focused on any surface of the sample; however, due to the difference between the index of refraction of air and the index of refraction of the passivation layer, an increase in the maximum characteristic value (intensity/contrast/fringe contrast) is measured. FIG. 11, FIG. 40, and the accompanying text describe this phenomenon in greater detail. At distance 3, the optical microscope is not focused on the top surface of the passivation layer or the bottom surface of the passivation layer. Therefore, at distance 3 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distance 2. At distance 4, the optical microscope is focused on the top surface of the passivation layer, which results in an increased characteristic value (intensity/contrast/fringe contrast) in the pixels that receive light reflected from the top surface of the passivation layer compared to the pixels that receive reflected light from other surfaces that are out of focus. At distances 5, 6, and 7, the optical microscope is not focused on the top surface of the passivation layer or the bottom surface of the passivation layer. Therefore, at distances 5, 6, and 7 the maximum characteristic value will be substantially lower than the maximum characteristic value measured at distances 2 and 4. Once the maximum characteristic value from each captured image is determined, the results can be utilized to determine at which distances each surface of the sample is located.

[00140] FIG. 49 is a chart illustrating the 3-D information resulting from the peak mode operation of FIG. 48. Due to the pixel filtering provided by only analyzing pixels within region B of all captured images, the peak mode operation only provides an indication of a surface of the passivation layer at two z-positions: two and four. The top surface of the passivation layer is located at the higher of the two indicated z-position locations. The lower of the two indicated z-position locations is an erroneous "ghost surface" where the light reflected from the bottom surface of the passivation layer was measured due to the index of refraction of the passivation layer. Measuring the z-position of the top surface of the passivation layer using only pixels located within region B simplifies the peak mode operation and reduces the potential for erroneous measurements due to light reflections from the metal bump located on the same sample.

[00141] As an alternative to the peak mode methods described above, the range mode method, described in FIG. 13 and the related text, can be used to determine the z-position of different surfaces of a sample.

[00142] Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.