Title:
IMAGING SYSTEM FOR DETECTING SURFACE DEFECTS AND EVALUATING OPTICAL DENSITY
Document Type and Number:
WIPO Patent Application WO/2001/040780
Kind Code:
A1
Abstract:
An image capturing and processing system and method for superficial defect research and tone evaluation of industrial products are provided, wherein the method comprises the steps of: capturing digitalized images of the product; performing a processing of the digitalized images to detect one or more defective areas of the product; performing a first classification of the product based upon the detected defective areas; assigning a tone to the product; and performing a second classification of the product based upon the assigned tone.

Inventors:
FALESSI ROBERTO (IT)
MOROLI VALERIO (IT)
Application Number:
PCT/IT2000/000504
Publication Date:
June 07, 2001
Filing Date:
December 06, 2000
Assignee:
CT SVILUPPO MATERIALI SPA (IT)
FALESSI ROBERTO (IT)
MOROLI VALERIO (IT)
International Classes:
G01N21/88; G06T1/00; (IPC1-7): G01N21/88
Domestic Patent References:
WO1998001746A1, 1998-01-15
Foreign References:
US5301129A, 1994-04-05
US5857119A, 1999-01-05
FR2345717A1, 1977-10-21
Attorney, Agent or Firm:
Leone, Mario (Piazza di Pietra 39, Roma, IT)
Claims:
CLAIMS
1. An image capturing and processing method of an industrial product, for superficial defect research and tone evaluation of said industrial product, characterized in that it comprises the steps of: capturing digitalized images of said product; performing a processing of said digitalized images to detect one or more defective areas of said product; performing a first classification of said product based upon said detected defective areas; assigning a tone to said product; and performing a second classification of said product based upon the assigned tone.
2. The image capturing and processing method according to claim 1, characterized in that said processing of said digitalized images comprises the steps of: calculating a plurality of decreasing resolution images starting from an initial image; performing, on each of said decreasing resolution images, a derivative-type convolution operation so as to obtain respective filtered images; determining, for each resolution, a set of lower threshold points and a set of upper threshold points; comparing, for each resolution, each point of said filtered images to said lower and upper threshold points, so as to detect out-of-threshold points representing defective points; and grouping said defective points in defective areas.
3. The image capturing and processing method according to claim 1 or 2, characterized in that said first classification of said product based upon detected defective areas is such as to divide said areas into Frame defective areas and Inner Area defective areas.
4. The image capturing and processing method according to claim 3, characterized in that said first classification of said product is performed based upon cavity-type, spot-type and long-print-type defects for Inner Area defective areas, and based upon chipping-type and cleft-type defects for Frame defective areas.
5. The image capturing and processing method according to any of the preceding claims, characterized in that it further comprises the step of performing a third classification of said product based upon other products having a similar number and similar sizes of defective areas.
6. The image capturing and processing method according to any of the preceding claims, characterized in that it comprises a step of generating a product grouping code apt to be utilized by an automatic machine.
7. An image capturing and processing system for superficial defect research and tone evaluation of industrial products, comprising: a linear lighting device (7); a first frame (6) apt to rotate around a frame rotation axis, whereon said linear lighting device is placed; a first image transducer (3); a second frame (2) apt to rotate around said frame rotation axis, whereon said first image transducer is mounted; a second image transducer (4); a third frame (5) apt to rotate around said frame rotation axis, whereon said second image transducer is mounted; means for capturing and digitalizing images (11, 12) detected by said first and second image transducers; a processing unit (13), apt to process the images captured and digitalized by said means for capturing and digitalizing images; and control means, said first image transducer being positioned so as to be sensitive to the intensity of light diffused by said industrial products, and said second image transducer being positioned so as to be sensitive to the intensity of light reflected by said industrial products.
8. The image capturing and processing system for superficial defect research and tone evaluation of industrial products according to claim 7, characterized in that it further comprises a proximity sensor (14), connected to said means for capturing and digitalizing, apt to control capture start and end based upon the presence of said industrial products in the apparatus operating field.
9. The image capturing and processing system for superficial defect research and tone evaluation of industrial products according to claim 7 or 8, characterized in that said control means are such as: to perform an image processing to detect one or more defective areas of said product; to perform a first classification of said product based upon said detected defective areas; to assign a tone to said product; and to perform a second classification of said product based upon the assigned tone.
10. The image capturing and processing system for superficial defect research and tone evaluation of industrial products according to claim 9, characterized in that said processing of said images comprises the steps of: calculating a plurality of decreasing resolution images starting from an initial image; performing, on each of said decreasing resolution images, a derivative-type convolution operation so as to obtain respective filtered images; determining, for each resolution, a set of lower threshold points and a set of upper threshold points; comparing, for each resolution, each point of said filtered images to said lower and upper threshold points, so as to detect out-of-threshold points representing defective points; and grouping said defective points in defective areas.
11. An image capturing and processing system for superficial defect research and tone evaluation of industrial products according to claim 9 or 10, characterized in that said first classification of said product based upon detected defective areas is such as to divide said areas into Frame defective areas and Inner Area defective areas.
12. An image capturing and processing system for superficial defect research and tone evaluation of industrial products according to claim 11, characterized in that said first classification of said product is performed based upon cavity-type, spot-type and long-print-type defects for Inner Area defective areas, and based upon chipping-type and cleft-type defects for Frame defective areas.
13. An image capturing and processing system for superficial defect research and tone evaluation of industrial products according to any of the claims 9 to 12, characterized in that said control means are such as to perform a third classification of said product based upon other products having a similar number and similar sizes of defective areas.
14. An image capturing and processing system for superficial defect research and tone evaluation of industrial products according to any of the claims 9 to 13, characterized in that said control means are such as to generate a product grouping code apt to be utilized in an automatic machine.
Description:
IMAGING SYSTEM FOR DETECTING SURFACE DEFECTS AND EVALUATING OPTICAL DENSITY

DESCRIPTION

The present invention refers to a real-time image capturing and processing system and method. These images generally come from image transducers such as linear videocameras with digital output, and are related to continuous-type plane industrial products such as rolled metallic sections or fabrics, or to discrete products with a plane surface such as metallic, ceramic or plastic pieces. The system and method according to the present invention enable evaluating the quality of the inspected surface, by classifying it both in terms of the detected defects and in terms of the measured tone variations, where tone indicates the gradation of the grey intensity.

Systems for inspecting surfaces by means of recording and processing images related to the product to be examined are known.

The main disadvantage of these methods is that they detect only a particular defect typology on a particular product, such as for example dents or polishing defects on body parts. Furthermore, the gravity evaluation of the detected defects is usually performed by a specialized operator, with consequent errors due to natural human sensory limitations.

The present invention overcomes these problems of the prior art since it provides an image capturing and processing method of an industrial product, for superficial defect research and tone evaluation of said industrial product, characterized in that it comprises the following steps: capturing digitalized images of said product; performing a processing of said digitalized images to detect one or more defective areas of said product;

performing a first classification of said product based upon said detected defective areas; assigning a tone to said product; and performing a second classification of said product based upon the assigned tone.

Furthermore, an image capturing and processing system for superficial defect research and tone evaluation of industrial products is provided, comprising: a linear lighting device; a first frame apt to rotate around a frame rotation axis, whereon said linear lighting device is placed; a first image transducer; a second frame apt to rotate around said frame rotation axis, whereon said first image transducer is mounted; a second image transducer; a third frame apt to rotate around said frame rotation axis, whereon said second image transducer is mounted; means for capturing and digitalizing images detected by said first and second image transducers; a processing unit, apt to process the images captured and digitalized by said means for capturing and digitalizing images; and control means, said first image transducer being positioned so as to be sensitive to the intensity of light diffused by said industrial products, and said second image transducer being positioned so as to be sensitive to the intensity of light reflected by said industrial products.

A first advantage of the present invention is that it enables recognizing defects belonging to a large number of typologies on any type of product.

A second advantage is that it enables, at the same time, both a classification based upon the detected defects and a classification based upon a tone measure.

It is thus possible to automatically decide whether a product satisfies predetermined quality criteria, which combine evaluations of the defect extent with evaluations of the tone.

Furthermore, an additional advantage is that by means of the present invention an additional classification of the examined products is performed, expressed by a code apt to be provided as input to an automatic machine for sorting the products, if connected to the system.

Other advantages, features and application modes of the present invention will be evident from the following detailed description of a preferred embodiment, given by way of example and not for limiting purposes, by referring to the figures of the enclosed drawings, wherein: figure 1 schematically shows a perspective view of the system according to the present invention; figure 2 shows the connections of the image transducers of Fig. 1 with a computer; and figure 3 shows a block diagram of the image capturing and processing method according to the present invention.

By firstly referring to figure 1, a base 1 is present comprising two supports 50, vertically placed. On these supports 50 a first frame 2 is rotatably mounted, which may rotate around the X-axis. On the frame 2 a support 9 is mounted, whereon a first image transducer 3 (hereinafter designated as videocamera 3) is installed, rotatable around both the A-axis and the B-axis of the figure. On the supports 50 a second frame 5 is further mounted, also rotatable around the X-axis. On the frame 5 a support 10 is mounted, whereon a second image transducer 4 (hereinafter designated as videocamera 4) is installed, rotatable around both the C-axis and the D-axis of the figure. On the supports 50, at last, a third frame 6 is mounted, also rotatable around the X-axis. On the frame 6 a linear lighting device 7 is fastened.

Both videocameras 3 and 4 are focused on an inspection line 8, contained in the XY-plane. The XY-plane also represents the inspection plane whereon the surface of the industrial product to be examined flows.

To make the exposition easier, this surface will be considered plane. As can be noticed from the figure, the frames 2 and 5 are rotated by angles α and β, respectively, with respect to the Z-axis, that is, with respect to the normal to the inspection plane. The frame 6, instead, is rotated by an angle −β with respect to the Z-axis.

By means of the construction described up to now, the videocamera 3 is sensitive to the intensity of the light diffused by the surface to be inspected, thereby generating diffusion images, whereas the videocamera 4 generates reflection images, since it is specularly placed with respect to the lighting source 7.

This videocamera arrangement is particularly advantageous since it allows detecting different defect typologies. In fact, a defect consisting for example in a surface deformation will cause a change in the direction of the light reflected by the surface itself. This defect will be highlighted by the image captured by the videocamera 4. On the contrary, a defect consisting for example in a stain or in a tone variation will cause a different light absorption by the surface, therefore a different light diffusion at the defective area will be obtained. This defect, instead, will be highlighted by the image captured by the videocamera 3.

The value of the angles α and β varies according to the type of surface to be analyzed and to the type of defects to be detected.

Figure 2 illustrates the connection of the videocameras 3 and 4 with a pair of capture cards 11 and 12 installed on a computer 13, apt to perform the subsequent procedures which implement the processing method according to the present invention.

The capture start and end are driven by a signal coming from a sensor 14 apt to detect the presence or absence of the material to be inspected.

During capture, real-time analysis procedures on data acquired by the videocameras are performed by means of the computer 13. These data are stored into the cards 11 and 12, or in the computer memory.

The image capturing and processing method advantageously provides some service procedures, apt both to set the parameters necessary for the capture and to calculate the equalization coefficients and the lower and upper threshold values to be utilized later in the defect and tone evaluation of the industrial product analyzed each time.

By referring to the block diagram of figure 3, on the right side of the figure the service procedures P1 to P5 can be noted, which will be hereinafter described in detail.

A first procedure P1 is of interactive type and enables setting, as operating parameters, both a configuration file in the capture cards and the exposure time for the videocameras. The setting of the configuration file is performed according to parameters set by the manufacturer.

Furthermore, for each connected videocamera, the procedure P1 allows setting the following four operating parameters: 1. The position of the image points which detect a first grey reference, hereinafter designated with the term grey reference 1, the produced field, and a second grey reference, hereinafter designated with the term grey reference 2.

From the practical point of view, the detection of the grey references 1 and 2 is performed by positioning the corresponding cursors so as to detect the initial image point of each corresponding interval. A third common cursor will then be positioned so as to define the width of each of these intervals.

The produced field, representing the whole area taken into consideration, is instead localized by two cursors apt to detect the start and end image points of this field, respectively.

2. The opening of the videocamera lens diaphragm, which may be adjusted for example by visualizing a videocamera signal on the computer monitor.

3. The light intensity of the lighting source, adjusted for example by controlling the power supply voltage of this source.

4. The videocamera focusing. The adjustment of this parameter is performed by visualizing a bar on the monitor, whose length is proportional to the variance of the grey tones of the image points of a line.

The greater the length of the visualized bar, the better the focusing.
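By way of illustration only, the following minimal Python sketch reproduces such a focusing indicator (the function name and the bar-length scaling are assumptions, not part of the patent):

```python
import numpy as np

def focus_bar(line: np.ndarray, max_len: int = 80) -> str:
    """Text bar whose length grows with the variance of one image line.

    A well-focused image has higher local contrast, hence a larger
    variance of the grey tones along the inspected line.
    """
    variance = float(np.var(line.astype(np.float64)))
    # Assumed scaling: full bar at the variance of an alternating
    # black/white line, the maximum contrast an 8-bit line can show.
    full_scale = (255.0 / 2.0) ** 2
    return "#" * int(round(max_len * min(variance / full_scale, 1.0)))

# A uniform (defocused) line gives an empty bar; a sharp pattern a full one.
print(len(focus_bar(np.full(512, 128, dtype=np.uint8))))        # 0
print(len(focus_bar(np.tile([0, 255], 256).astype(np.uint8))))  # 80
```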

A subsequent service procedure P2, of interactive type too, is instead apt to calculate static equalization coefficients. Such coefficients are necessary for a subsequent correction, by means of an equalization procedure described later, of each image point within a line inside the produced field, in order to make the capture dynamics uniform. In this way, possible systematic errors due to the non-uniformity of the intensity of the light emitted by the lighting source, lens aberrations, and so on, are corrected.

The procedure P2 is of interactive type since it indicates to the operator the sequence of actions to be performed, by showing a series of messages on the monitor to which the operator, each time, has to give confirmation once the action has been performed. Such a procedure, in order to obtain a significant calibration, may advantageously be performed a predetermined number of times on products of the same type.

For each of these products, the procedure P2 provides the following six steps of: 1. Capturing and digitalizing the product image.

2. Calculating for each row the average of the corresponding image points according to the following formula:

M_i = (1/m) · Σ_{j=1..m} x_ij, 1 ≤ i ≤ n

wherein: x_ij represents the image point in position (i, j); n is the number of image rows; and m is the number of image columns.

3. Normalizing the image points of each column with respect to the previously calculated row average, and summing the so-obtained values according to the following formula:

N_j = Σ_{i=1..n} (x_ij / M_i), 1 ≤ j ≤ m

4. Calculating for each column the average of the normalized values according to the following formula:

N_j = N_j / n, 1 ≤ j ≤ m

5. Storing these normalized average values in order to utilize them later as coefficients of local equalization of the grey levels during the image processing related to the product to be examined.

6. Calculating and storing the lighting average values R1 and R2 of one or more samples related to the Grey Reference 1 and to the Grey Reference 2 already defined by the preceding procedure P1:

R_k = (1/(n · L_r)) · Σ_{i=1..n} Σ_{j=r_k..r_k+L_r−1} x_ij, k = 1, 2

wherein: L_r is the width of the Grey Reference 1 and 2 fields; and r_1, r_2 are the initial points of the Grey Reference 1 and 2 fields.
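The six steps above translate into a short NumPy sketch; it assumes the reference image is an n×m array of grey levels and that the Grey Reference fields are column intervals, with illustrative function names:

```python
import numpy as np

def equalization_coefficients(image: np.ndarray) -> np.ndarray:
    """Steps 2-4 of procedure P2: per-column equalization coefficients N_j."""
    img = image.astype(np.float64)
    m_i = img.mean(axis=1)                    # step 2: row averages M_i
    n_j = (img / m_i[:, None]).sum(axis=0)    # step 3: sum of x_ij / M_i
    return n_j / img.shape[0]                 # step 4: N_j = N_j / n

def grey_reference_averages(image: np.ndarray, r1: int, r2: int, lr: int):
    """Step 6 of procedure P2: average lighting values R1 and R2 of the
    Grey Reference 1 and 2 fields (initial columns r1, r2, width lr)."""
    img = image.astype(np.float64)
    return img[:, r1:r1 + lr].mean(), img[:, r2:r2 + lr].mean()
```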

The capturing and processing method that is the object of the present invention provides, as will be better illustrated hereinafter, the generation of a predetermined number of decreasing resolution images of the product to be examined, starting from a first product image captured at the maximum resolution.

First of all, a service procedure P3 will be described hereinafter, aimed at determining the threshold values Sinf (nxn) and Ssup (nxn) for each n×n resolution used.

These values define, for each resolution, a range of gradient values apt to detect the normality interval of the image points after a derivative filtering procedure, which will be better described by referring to the procedure P11.

The procedure P3 provides, for each image generated at n×n resolution, a first initialization step at a minimum value and at a maximum value of the two thresholds Sinf (nxn) and Ssup (nxn). Starting from the image at the lowest resolution, the subsequent steps of the procedure P3 are the following: 1. To perform the derivative filtering of the corresponding image according to the modes which will be described in detail by referring to the procedure P11.

2. To generate a list, in decreasing order, of the points at maximum derivative.

3. To highlight on the operator's monitor the area at maximum derivative, together with a request for confirmation as to whether this area is to be considered defective.

In case of an affirmative reply by the operator, the involved threshold Sinf (nxn) or Ssup (nxn) is updated to the highlighted value and the area with the immediately lower gradient is shown, repeating the procedure from step 3 until the list is exhausted.

In case of a negative reply, instead, the current values of Sinf (nxn) or Ssup (nxn) are left unchanged and the procedure starts again from step 1 by examining the image at the immediately higher resolution.

Furthermore, at the beginning of each iteration of the procedure P3, the operator is asked about the presence of three particular defects on an area of the image called Frame or on the profiles of an area of the image called Inner Area. As far as the Frame and Inner Area definition is concerned, it will be described in a subsequent edge-searching procedure P9.

Based upon the operator reply, the current threshold values designated with Ss, Sc and Sil, respectively related to the three defect typologies called "chipping", "cleft" and "long print", are updated. Such threshold values start from a null initial value and are updated at the maximum value of the linear convolutions on the Frame or of the linear convolution on the profiles of the Inner Area, according to what is provided by the defect-classifying procedure P17, to be described hereinafter.

Similarly to what has been described by referring to the procedure P2, the procedure P3 too may be repeated a prefixed number of times on products representative of a particular product typology, so as to make the reached threshold values statistically significant and reliable.

All obtained values, both those related to the thresholds Sinf (nxn) and Ssup (nxn) and those related to the thresholds Ss, Sc and Sil, may optionally be multiplied by a safety percentage factor, predefined by the operator.

At the end of the step described up to now, the procedure P3 continues with an optimization of the Inner Area and of the Frame. This optimization provides the following steps of: 1. Setting a constant A equal to the initial width of the Frame.

2. Performing the hereinafter described procedures of derivative filtering and of defective area detection, by utilizing in the latter procedure the final thresholds already detected by the present procedure.

3. Reducing the value of this constant A.

4. Repeating steps 2 and 3 until a first defective area is signalled. At this point, the A value used in the immediately preceding step may be considered optimum.

Even in this case the optimization step may be repeated a prefixed number of times on products representative of a particular product typology, updating the A value each time the current value exceeds the value obtained in the preceding iterations.

A procedure P4 for calculating the tone thresholds will be hereinafter described, in order to determine division thresholds among tone classes.

It is an interactive-type learning procedure which comprises the following two steps of: 1. Capturing images related to a pre-established number of products.

2. Identifying each tone class according to a method chosen from the following two: a) The k-average method. This first method provides the identification of each tone class with a value equal to the average value of a reference sample population of this class. Each tone class is characterized by a vector x of statistical parameters comprising the average value of the grey tone, the variance of the grey tone, the median of the grey tone and the average values of the gradients of each image at resolution n×n. At last, even the covariance matrices of said parameters related to each tone class are filed in a feature file. b) The method of the nearest k points. In this second method, the identification of a class is calculated from the distribution of the distances between class samples, by choosing as significant value the appropriate percentile, for example the median value.
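As an illustration of the k-average learning step, the following sketch builds the statistics of one tone class from its reference samples; a reduced feature vector is assumed (the gradient features of the n×n images are omitted for brevity, and the function name is illustrative):

```python
import numpy as np

def tone_class_statistics(sample_images):
    """Build the average feature vector and covariance matrix of one
    tone class from its reference sample images (k-average method)."""
    features = []
    for img in sample_images:
        img = np.asarray(img, dtype=np.float64)
        # Statistical parameters: mean, variance and median of the grey tone.
        features.append([img.mean(), img.var(), np.median(img)])
    features = np.array(features)
    # The class is identified by the average vector; the covariance
    # matrix is filed for the Mahalanobis distance of procedure P18.
    return features.mean(axis=0), np.cov(features, rowvar=False)
```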

A subsequent procedure P5 is then such as to calculate the defect gravity thresholds. This procedure visualizes a choice menu which allows the operator to select a prearrangement mask for each defect class. These classes will be defined in a defect-classifying procedure P17.

With regard to the selected mask, the user inputs the limit values, corresponding to each gravity class, which define the dimensional and frequency ranges of the particular defect. Furthermore, the user, for each gravity class, may input a code which could advantageously be utilized to drive product selecting and sorting machines, if connected to the described system.

Still referring to Figure 3, the capturing and processing method of the images related to the product whose surface is to be inspected will be hereinafter described by referring to the procedures shown on the left side of the figure.

Particularly, the presence of a first procedure P6 can be noted, whose task is to capture, digitalize and store an image of the product to be examined coming from each of the provided image transducers. Each of these operations is performed in a known manner.

In the described embodiment, and conforming to what is indicated in figures 1 and 2, the presence of two image transducers is provided, in particular two linear videocameras with digital output. The first videocamera is positioned so as to be sensitive to the intensity of light diffused by the surface of the product under examination, whereas the second videocamera is positioned so as to be sensitive to the intensity of light reflected by the surface of the product under examination, being placed specularly to a linear lighting device.

After the capturing procedure P6, a procedure P7 is performed, hereinafter described, apt to normalize the value associated to each image point with respect to a parameter dependent on the grey references, in order to eliminate drift errors of the capturing and lighting system calibration. The procedure P7, therefore, normalizes the current values of grey level, by utilizing the parameters calculated by the already described procedure P2, according to the following linear transformation:

y_ij = G · x_ij + O, 1 ≤ i ≤ number of rows per image, 1 ≤ j ≤ number of points per image row

G = (R2 − R1) / (X2 − X1)

O = R1 − G · X1

wherein: x_ij is the current value of the image point with coordinate (i, j); y_ij is the normalized value of the point with coordinate (i, j); R1 and R2 are the calibration values of the Grey References 1 and 2; and X1 and X2 are the current values of the Grey References 1 and 2.
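This linear transformation translates directly into code; a minimal sketch with illustrative names:

```python
import numpy as np

def normalize_to_grey_references(image, r1, r2, x1, x2):
    """Procedure P7: correct calibration drift via the grey references.

    r1, r2: calibration values R1, R2 stored by procedure P2.
    x1, x2: current average values of the same reference fields.
    """
    g = (r2 - r1) / (x2 - x1)                 # gain   G = (R2 - R1) / (X2 - X1)
    o = r1 - g * x1                           # offset O = R1 - G * X1
    return g * image.astype(np.float64) + o   # y_ij = G * x_ij + O
```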

After that, an equalization procedure P8 is performed, whose task is to eliminate systematic errors of the capturing and lighting system. To this purpose the procedure P8 performs a linear transformation of the grey levels measured for each image point, according to the following equation:

z_ij = y_ij / N_j, 1 ≤ i ≤ n, 1 ≤ j ≤ m

wherein: n is the number of rows of the image; z_ij is the equalized value of the image point with coordinate (i, j); N_j is the coefficient of local equalization of the j-th column already calculated by the procedure P2; and y_ij is the normalized value of the image point with coordinate (i, j).
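A one-line sketch of this correction, assuming (as reconstructed above) that the coefficients N_j act as divisors so that columns that measured brighter on the reference product are attenuated and dimmer ones boosted:

```python
import numpy as np

def equalize_columns(normalized_image: np.ndarray, n_j: np.ndarray) -> np.ndarray:
    """Procedure P8: flatten the systematic column profile of the
    capturing and lighting system using the P2 coefficients N_j."""
    return normalized_image.astype(np.float64) / n_j[None, :]
```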

After this equalization procedure, a procedure P9 is performed to research the edges of the product under examination inside the produced field already detected by the service procedure P1. For example, when processing images related to discrete products, this procedure researches the edges of an object wholly contained in the captured image.

The edges are researched by scanning the rows and columns of the captured image, so as to define for each row and each column a start product coordinate and an end product coordinate. Such coordinates are then stored into vectors exactly representing the product punctual edges.

Starting from these vectors, the product average edges are calculated, obtained by digitally filtering the coordinates of the punctual edges by means of the following low-pass filter:

Av(k) = (1 − α) · Av(k − 1) + α · Loc(k)

wherein: Av(k) is the average edge at the k position; Loc(k) is the punctual edge at the k position; and α is a predefined constant.
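A sketch of this edge smoothing follows; the recursive filter form and the constant α are assumptions, since the original formula is not legible in the source:

```python
import numpy as np

def average_edge(punctual_edge: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Low-pass filter the punctual edge coordinates Loc(k) of procedure P9."""
    avg = np.empty(len(punctual_edge), dtype=np.float64)
    avg[0] = punctual_edge[0]
    for k in range(1, len(punctual_edge)):
        # Av(k) = (1 - alpha) * Av(k - 1) + alpha * Loc(k)
        avg[k] = (1.0 - alpha) * avg[k - 1] + alpha * punctual_edge[k]
    return avg
```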

The intersection points between the so-calculated average edges define the product vertexes. Starting from these vertexes, the product Inner Area and Frame zones are defined as follows:

InnerArea = {(i, j) : i ∈ [ObiUPP, ObiLOW], j ∈ [AbiLF, AbiRG]}

Frame = {(i, j) : i ∈ [Loc(i, j), ObiUPP] or i ∈ [ObiLOW, Loc(i, j)], j ∈ [AbiLF, AbiRG]} ∪ {(i, j) : i ∈ [ObiUPP, ObiLOW], j ∈ [Loc(i, j), AbiLF] or j ∈ [AbiRG, Loc(i, j)]}

ObiUPP = max{Yv sup LF, Yv sup RG} + Δ
ObiLOW = min{Yv inf LF, Yv inf RG} − Δ
AbiLF = max{Xv sup LF, Xv inf LF} + Δ
AbiRG = min{Xv sup RG, Xv inf RG} − Δ

wherein: AbiLF is the left abscissa of the product edge; AbiRG is the right abscissa of the product edge; ObiUPP is the upper ordinate of the product edge; ObiLOW is the lower ordinate of the product edge; and Δ is the Frame width constant A optimized by the procedure P3.

A subsequent procedure P10, described herebelow, is such as to generate, as previously mentioned, a predetermined number of images of the product to be examined. Such images are obtained starting from a first product image captured (by means of the procedure P6) at the maximum resolution, by means of subsequent resolution reductions.

Starting from a first image at the maximum resolution, an image at lower resolution may be derived from this first image by performing an average operation on groups of n×m image points. By supposing, for simplicity, that n = m and that both are equal to a power of 2, each image at lower resolution may be generated from the one at the immediately higher resolution by averaging groups of 2×2 points at a time, obtaining an image having a resolution equal, for each coordinate, to half the resolution of the image from which it derives. In this particular case, the calculation algorithm is fixed, that is:

x_2n×2n(h, k) = (1/4) · Σ_{i=2h..2h+1} Σ_{j=2k..2k+1} x_n×n(i, j), 0 ≤ h, k < imagewidth / 2

wherein x_2n×2n and x_n×n are points of the images at 2n×2n and n×n resolution, respectively.

After that, a procedure P11 of derivative filtering is performed, wherein each point of each image produced by the preceding procedure P10 is subjected to derivative filtering by means of the following formula:

d_ij = 4 · x_ij − (x_i,j+1 + x_i,j−1 + x_i−1,j + x_i+1,j), ObiUPP < i < ObiLOW, AbiLF < j < AbiRG

wherein: x_ij is the image point with coordinate (i, j); AbiLF and AbiRG are the left and right abscissae of the product edge; and ObiUPP and ObiLOW are the upper and lower ordinates of the product edge.

The result of this filtering procedure is provided to the subsequent procedure P12, whose task is to detect the defective areas of the product under examination. This procedure, for each image at each resolution, compares the filtering result of the preceding procedure P11 to the corresponding thresholds previously calculated by the procedure P3. In fact, as already seen, at each resolution (n×n) two thresholds Sinf (nxn) and Ssup (nxn) have been calculated, which define an interval within which the filtering result is considered conforming to a good-quality product.

If the filtering result for an image point falls outside this interval, the image point is considered as belonging to a defective area and as such is marked in the corresponding memory position wherein the image is stored.
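The resolution pyramid (P10), the derivative filtering (P11) and the threshold comparison (P12) lend themselves to a compact NumPy sketch; border handling is simplified and the product edge limits are assumed already applied by cropping:

```python
import numpy as np

def halve_resolution(image: np.ndarray) -> np.ndarray:
    """Procedure P10: average 2x2 groups of points, halving each axis."""
    img = image.astype(np.float64)
    n, m = img.shape[0] & ~1, img.shape[1] & ~1   # even-sized crop
    img = img[:n, :m]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def derivative_filter(image: np.ndarray) -> np.ndarray:
    """Procedure P11: d_ij = 4*x_ij - (x_i,j+1 + x_i,j-1 + x_i-1,j + x_i+1,j)."""
    x = image.astype(np.float64)
    d = np.zeros_like(x)
    d[1:-1, 1:-1] = (4.0 * x[1:-1, 1:-1] - x[1:-1, 2:] - x[1:-1, :-2]
                     - x[:-2, 1:-1] - x[2:, 1:-1])
    return d

def out_of_threshold(d: np.ndarray, s_inf: float, s_sup: float) -> np.ndarray:
    """Procedure P12: mark points whose filtered value leaves [Sinf, Ssup]."""
    return (d < s_inf) | (d > s_sup)
```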

A procedure P13 of "thresholding" the defective areas is then performed wherein, for each produced image, at each defective point, a local threshold is calculated, as defined by the following equation:

s_ij = x_ij + k · d_ij, 0 < k ≤ 1

wherein: s_ij is the local threshold of the defective image point with coordinate (i, j); x_ij is the grey level of the defective image point with coordinate (i, j); d_ij is the derivative filtering value of the defective image point with coordinate (i, j); and k is a predefined constant.

Then, on the image at maximum resolution, an area of (2n×2n) points is determined, centered on the defective area detected by the point with coordinate (i, j) of the image at resolution (n×n). Inside this area each point of the image at the maximum resolution is compared to the local threshold s_ij calculated as described above.

The points over this threshold are considered defective.
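A minimal sketch of this refinement step, under the assumption stated above for the local threshold form (the exact expression is garbled in the source):

```python
import numpy as np

def refine_at_full_resolution(full_image, i, j, n, x_def, d_def, k=0.5):
    """Procedure P13 sketch: around a defect found at (i, j) in the n x n
    resolution image, test a 2n x 2n window of the maximum-resolution
    image against the local threshold.

    x_def, d_def: grey level and derivative value of the defective
    low-resolution point; s = x + k*d (0 < k <= 1) is a reconstruction.
    """
    s_local = x_def + k * d_def
    ci, cj = i * n + n // 2, j * n + n // 2      # window centre at full res
    win = full_image[max(ci - n, 0):ci + n, max(cj - n, 0):cj + n]
    return win.astype(np.float64) > s_local      # defective points
```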

The points adjacent to one another will then be grouped into disjoint objects by means of a joining procedure P14 hereinafter described.

By referring to the image at the maximum resolution, and designating with x(i, j) the current defective point and with x(i, j−1), x(i−1, j−1), x(i−1, j), x(i−1, j+1) the corresponding adjacent points, which define a set I of the points adjacent to x(i, j), the procedure P14 scans the whole image point by point, deciding each time the operation to be performed based upon the following conditions:

1. To create a new object if no point of I belongs to an existing object.

2. To add the point with coordinate (i, j) to the current object A if:

[(x(i, j−1) ∈ A) + (x(i−1, j−1) ∈ A) + (x(i−1, j) ∈ A) + (x(i−1, j+1) ∈ A)] = TRUE

3. To join two objects A and B if:

[(x(i, j−1) ∈ A) × (x(i−1, j+1) ∈ B)] + [(x(i−1, j−1) ∈ A) × (x(i−1, j+1) ∈ B)] = TRUE

wherein the symbols + and × mean logic OR and AND, respectively.

The joining operation of the defective adjacent points is performed by taking into account the sign of the corresponding gradient. This means that the adjacent points are grouped only if they have the same sign of the gradient.

In this way two lists of disjoint objects are generated, the first of which contains points with positive gradient and the second of which contains points with negative gradient.
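A compact sketch of this grouping follows; the single-pass labelling and the object bookkeeping are illustrative, but the three joining conditions and the gradient-sign constraint mirror the ones just described:

```python
import numpy as np

def group_defects(mask: np.ndarray, grad_sign: np.ndarray):
    """Procedure P14 sketch: group adjacent defective points into disjoint
    objects, joining only points with the same gradient sign; return the
    lists of positive-gradient and negative-gradient objects."""
    n, m = mask.shape
    labels = np.zeros((n, m), dtype=int)
    objects, next_label = {}, 1
    for i in range(n):
        for j in range(m):
            if not mask[i, j]:
                continue
            # Set I: previously scanned neighbours of the current point.
            neigh = [(i, j - 1), (i - 1, j - 1), (i - 1, j), (i - 1, j + 1)]
            hits = {labels[p] for p in neigh
                    if 0 <= p[0] < n and 0 <= p[1] < m and labels[p] != 0
                    and grad_sign[p] == grad_sign[i, j]}
            if not hits:                        # condition 1: new object
                labels[i, j] = next_label
                objects[next_label] = [(i, j)]
                next_label += 1
                continue
            keep = min(hits)                    # condition 2: add the point
            labels[i, j] = keep
            objects[keep].append((i, j))
            for other in hits - {keep}:         # condition 3: join objects
                pts = objects.pop(other)
                for p in pts:
                    labels[p] = keep
                objects[keep].extend(pts)
    pos = [pts for pts in objects.values() if grad_sign[pts[0]] > 0]
    neg = [pts for pts in objects.values() if grad_sign[pts[0]] < 0]
    return pos, neg
```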

The following procedure P15 is such as to calculate the following parameters of the objects: normalized position, area, perimeter, involved resolutions, distance from the defect threshold, capturing angle.

The normalized position is given by the four coordinates which detect the extreme points of the diagonal of the rectangle surrounding the object, divided by the maximum sizes of the product along the two reference axes.

The area is given by the number of image points constituting the object.

The perimeter is calculated as the number of boundary image points of the object.

The involved resolutions are represented by a binary variable whose bits, starting from the least significant one, indicate that the object has been detected by the derivative operator at the corresponding resolution n×n.

The distance from the defect threshold represents the difference between the maximum value of the defect grey intensity and the value of the corresponding local threshold.

The capturing angle is represented by a binary variable, whose first four bits indicate the contrast type and the view angle of the object, that is: white in reflection, white in diffusion, black in reflection, black in diffusion, and the combinations thereof.

Once these two lists of disjoint objects have been generated, an aggregation procedure of the nearby objects is performed, designated with P16. Such procedure joins objects near one another into macro-objects composed of several disjoint objects deriving from both the lists generated by the preceding procedure P14. Such lists are fused into a single list grouping objects adjacent to one another.

The procedure, at last, also performs a grouping of macro-objects deriving from images captured by videocameras placed at different angles with respect to the product, but referring to the same defect on the same product area.

The procedure P16 comprises the following steps of: 1. Collecting in a single list both the objects with positive gradient and those with negative gradient.

2. Normalizing the positions of the objects with respect to the product. The normalization is performed by dividing the coordinates which detect the position of an object by the cross and longitudinal size of the product.

3. Joining the objects into macro-objects if the following condition holds:

d(O_i, O_j) ≤ s_a, O_i, O_j ∈ list of objects

wherein: O_i, O_j are objects of the list; s_a is a preset aggregation threshold; and d() is the distance between the sides of the rectangles surrounding the objects O_i, O_j.

4. Recalculating the parameters of the macro-objects starting from the ones previously calculated by the procedure P15. In this case, the parameters are obtained in the following way.

The normalized position is represented by the coordinates of the extremes of the rectangle surrounding the macro-object.

The area and the perimeter are calculated as the sums of the areas and of the perimeters of the component objects, respectively.

The involved resolutions are given as logic OR of the involved resolutions of the starting objects.

The distance from the defect threshold is given by the maximum of the respective starting variables.

The capturing angle is given as logic OR of the capturing angles of the starting objects.

After having detected all the areas to be considered defective, the method at issue provides a classification of the detected defects. This classification is performed by a procedure designated with P17 and is diversified depending on whether the defects are localized in the Inner Area or in the Frame.

As far as the Inner Area is concerned, the areas are classified according to three general classes of defects, and more precisely: the "cavity" defect: such cavity can be positive or negative according to the direction of the local deformation of the product surface with respect to the inspection plane; the "spot" defect: the defect does not produce a deformation of the product surface; and the "long print" defect: the area involved by the defect extends along a longitudinal or cross strip of the surface for the whole extension of the product.

The assignment of a class to each area considered defective is performed based upon the occurrence of one of the following conditions: if an area results defective both during processing of the image in reflection and during processing of the image in diffusion, this area will be assigned the cavity defect class; if an area results defective during processing of the image in diffusion only, it will be assigned the spot defect class; and the area detected by the following calculation will be assigned the long-print defect class. Such calculation provides the following steps of:

1) Calculating, starting from the image at the maximum resolution and considering only the points belonging to the Inner Area, the row and column average profiles, by projecting the related image points onto the image axes, accumulating the values of the grey levels and dividing the total sum of each row and each column by the number of points accumulated for each row and each column:

P_r(i) = (1/n_r) · Σ_j x_ij, P_c(j) = (1/n_c) · Σ_i x_ij, (i, j) ∈ InnerArea

wherein: P_r is the row profile; P_c is the column profile; n_r is the number of image points on the i-th row; n_c is the number of image points on the j-th column; and x_ij is the image point with coordinate (i, j).

2) Calculating on these average profiles a convolution with a one-dimensional 3×m kernel according to the formula:

P_d(k) = Σ_{i=1..m/2} [p(k − i) + p(k + i)] − m · p(k), m = 2, 4, 6, 8, ..., k ∈ AverageProfile

wherein: m is the size of the convolution cell; p is the average profile point of the row or column; P_d is the profile point after convolution; and n, m are values presettable by the operator.

3) Comparing the obtained convolution value to the threshold value Sil defined in the procedure P3. Should the convolution value be greater than this threshold, the corresponding average profile point will be marked as defective.

4) Grouping the adjacent defective points of the average profile and positioning them on the respective axis of the image at the maximum resolution, in order to highlight a rectangle whose sides have sizes equal to each grouping and to the whole length of the Inner Area, respectively. The area of this rectangle is considered as wholly belonging to the long-print defect class.
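A sketch of this long-print search follows; the centre-surround form of the profile convolution is an assumption, as the original formula is not legible in the source:

```python
import numpy as np

def long_print_candidates(inner_area: np.ndarray, s_il: float, m: int = 4):
    """Long-print detection sketch: average row/column profiles of the
    Inner Area, centre-surround convolution, comparison with Sil."""
    img = inner_area.astype(np.float64)
    results = []
    for profile in (img.mean(axis=1), img.mean(axis=0)):  # Pr, Pc
        d = np.zeros_like(profile)
        h = m // 2
        for k in range(h, len(profile) - h):
            surround = sum(profile[k - i] + profile[k + i]
                           for i in range(1, h + 1))
            d[k] = surround - m * profile[k]
        results.append(d > s_il)    # defective profile points
    return results                  # masks for the row and column profiles
```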

As far as the Frame is concerned, the classification of the defective areas is performed with respect to two general categories of defects, detected by means of specific calculation processes, in particular:

The "chipping" defect: in this case the process provides the following steps of:

a) Accumulating, for each row and each column of the image at the maximum resolution, the numerical values of the image points lying in the interval that starts from the average edge and reaches the Inner Area, belonging to the row i or to the column j, dividing the obtained result by the numerosity of each interval and storing the result in specific vectors:

S_row(i, 1) = (Σ_j x_ij) / |interval j|, LFAverageEdge < j < LFSideInnerArea
S_row(i, 2) = (Σ_j x_ij) / |interval j|, RGAverageEdge ≥ j ≥ RGSideInnerArea
S_column(j, 1) = (Σ_i x_ij) / |interval i|, UPPAverageEdge < i < UPPSideInnerArea
S_column(j, 2) = (Σ_i x_ij) / |interval i|, LOWAverageEdge ≥ i ≥ LOWSideInnerArea

b) Making a convolution of said vectors of segment average values with a one-dimensional 3×m or m×3 kernel, depending on whether it is a horizontal or a vertical side of the Frame, by means of the formula of point 2) of the preceding case.

c) Comparing the so-obtained convolution value to the threshold Ss value defined in the procedure P3.

Should this convolution value be greater than this threshold, the corresponding point will be marked as defective. d) Grouping the adjacent defective points in order to detect the area characterized by the chipping defect.

-"cleft"defect : in this case the process provides the following steps of : a) Calculating, starting from the image with 2x2 resolution, the convolution with a monodimensional 5x1 or lx5 nucleous, depending on whether it is a horizontal or a vertical side of the Frame, by applying the formula :

Xd (i, j) = x (i, j-2) + x(i,j-1) + x(i,j + 1) + x(i,j + 2) - 4x(i,j) horizontal side xd (i, j) =x (i-2, j) +x (i-l, j) +x (i+l, j) +x (i+2, j)-4x (i, j) vertical side x (i, j) eFrame wherein : x (i, j) image 2x2 point of the frame Xd (i, j) image point after convolution b) Comparing the obtained convolution value to the threshold Sc value defined in the procedure P3. Should this convolution value be greater than this threshold, the corresponding point will be marked as defective. c) Grouping the adjacent defective points in order to detect the area characterized by the cleft defect.
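The cleft convolution on a horizontal Frame side translates directly into NumPy; the vertical case is the transposed analogue:

```python
import numpy as np

def cleft_convolution_horizontal(side: np.ndarray) -> np.ndarray:
    """x_d(i,j) = x(i,j-2) + x(i,j-1) + x(i,j+1) + x(i,j+2) - 4*x(i,j),
    computed on the 2x2-resolution points of a horizontal Frame side."""
    x = side.astype(np.float64)
    d = np.zeros_like(x)
    d[:, 2:-2] = (x[:, :-4] + x[:, 1:-3] + x[:, 3:-1] + x[:, 4:]
                  - 4.0 * x[:, 2:-2])
    return d

# Points whose convolution value exceeds the threshold Sc of procedure P3
# are marked defective:
# defective = cleft_convolution_horizontal(side) > s_c
```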

Apart from the defect typologies, it is necessary to classify the product based upon the detected tone too.

Hereinafter a procedure P18 of tone assignment will be described, aiming at performing this classification.

A specific configuration mask allows the operator to decide which method is to be adopted for the calculation, to be chosen between the "average k method" and the "nearest k points method". These methods will be hereinafter described; more particularly:

i) The average k method provides the calculation of the distance of an unknown sample from the average values representative of each tone class, by means of the covariance matrices stored by the procedure P4. The metric used to calculate the distance is the Mahalanobis one, defined as:

d = x^T · V^-1 · x

wherein: x is the vector of the statistical parameters; x^T is the transposed vector of the statistical parameters; and V^-1 is the inverted covariance matrix.

The value of minimum distance is further compared to a pre-established threshold value and, should it be exceeded, the unknown sample is not assigned any tone class. This lack of assignment may be defined as a "not-conforming-tone" defect and be used later as an additional classification code. Should this lack of assignment repeat for a prefixed number k of the last n products under examination, a "possible tone change" is signalled to the operator. The values k and n may advantageously be preset by means of an interactive mask.

ii) The method of the nearest k points consists, instead, in calculating the distances of an unknown sample from each reference sample used during the procedure P4 and then in arranging the obtained distances in increasing order.

In this case the distance is calculated, contrary to the preceding case, according to a Euclidean metric; the covariance matrix V^-1 is therefore the identity matrix.

Designating with k the value of a parameter presettable by the operator, the method provides: selecting the samples corresponding to the first k distances among the previously calculated ones and calculating the frequency of appearance of the tone classes among the selected samples. The class having the maximum frequency will be assigned to the unknown sample.

Furthermore, the distance between the unknown sample and the most distant point chosen among the reference samples of the assigned class is compared to the threshold calculated by the procedure P4. Should this threshold be exceeded, the unknown sample is not assigned any tone class. The lack of class assignment may be used as in the preceding case.
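Both assignment methods can be sketched as follows; signatures and names are illustrative, and the feature vectors are assumed to be those filed by the procedure P4:

```python
import numpy as np

def assign_tone_k_average(x, class_means, class_covs, d_max):
    """Average-k method: Mahalanobis distance to each class average;
    returns None ("not-conforming-tone") if the minimum exceeds d_max."""
    d2 = [float((x - mu) @ np.linalg.inv(v) @ (x - mu))   # x^T V^-1 x
          for mu, v in zip(class_means, class_covs)]
    best = int(np.argmin(d2))
    return best if d2[best] <= d_max ** 2 else None

def assign_tone_nearest_k(x, ref_vectors, ref_classes, k, d_max):
    """Nearest-k-points method: Euclidean distances to every reference
    sample; the most frequent class among the k nearest wins, unless the
    farthest selected point of that class exceeds the P4 threshold."""
    dists = np.linalg.norm(ref_vectors - x, axis=1)
    nearest = np.argsort(dists)[:k]
    classes, counts = np.unique(ref_classes[nearest], return_counts=True)
    winner = classes[np.argmax(counts)]
    chosen = nearest[ref_classes[nearest] == winner]
    return winner if dists[chosen].max() <= d_max else None
```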

In a subsequent procedure P19 the gravity of the detected defects is then evaluated by taking into consideration the following parameters: a) defect classification; b) defect size (area, length or width); and c) the number of defects belonging to the same class detected on the product under examination.

For each product and each class of defects, membership in one of the intervals defined by the procedure P5 is then determined, and the product is consequently assigned the corresponding gravity class defined by the procedure P5.

Furthermore, a sorting code is automatically associated with the product under examination, according to the configuration performed by the procedure P5, apt to be provided to a possible automatic selection machine.

At last, a procedure P20 called "Output of the results" will be hereinafter described. This procedure is performed at the end of the inspection of each product and makes available, on a communication port of the computer, the following data: a list of the defects detected on the product, each defect being described by the following parameters: a) the parameters calculated by the procedure P15, that is: normalized position, area, perimeter, involved resolutions, distance from the defect threshold, capturing angle; b) the defect classification code calculated by the procedure P17; and c) the tone classification code calculated by the procedure P18; the gravity class calculated by the procedure P19; and the code for a selection machine calculated by the procedure P19.

The present invention has been so far described according to a preferred embodiment thereof, shown by way of example and not for limiting purposes.

It is to be understood that other embodiments may be provided, all falling within the protective scope of the present invention.