


Title:
IMAGE RECOGNITION OR RECONSTRUCTION WITH PRINCIPAL COMPONENT ANALYSIS AND WITH EXCLUSIONS OF OCCLUSIONS AND NOISE
Document Type and Number:
WIPO Patent Application WO/1999/027486
Kind Code:
A1
Abstract:
An image is processed for recognition or verification purposes (such as the image of a person's face), or in order to reconstruct the image. Processing means (5) serve to identify picture elements representing occlusions or noise and exclude them from the comparison process (10) or from the reconstruction. This is achieved by reference to stored data (8) defining for each particular picture element position an acceptable range of values, these stored data being derived from a reference population of images. The verification or reconstruction may use the method of principal component analysis by reference to stored eigenpictures. The range data can be estimated from the eigenpictures and corresponding eigenvalues rather than calculated from the original reference images.

Inventors:
RAMOS SANCHEZ MARCIAL ULISES (NL)
Application Number:
PCT/GB1998/003517
Publication Date:
June 03, 1999
Filing Date:
November 25, 1998
Assignee:
BRITISH TELECOMM (GB)
RAMOS SANCHEZ MARCIAL ULISES (NL)
International Classes:
G06V10/42; (IPC1-7): G06K9/00; G06K9/46; G06K9/52
Other References:
LEONARDIS A ET AL: "Dealing with occlusions in the eigenspace approach", PROCEEDINGS 1996 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CAT. NO.96CB35909), PROCEEDINGS OF IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, SAN FRANCISCO, CA, USA, 18-20 JUNE 1996, ISBN 0-8186-7258-7, 1996, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC. PRESS, USA, pages 453 - 458, XP000640265
HUTTENLOCHER D P ET AL: "Object recognition using subspace methods", COMPUTER VISION - ECCV 96. 4TH EUROPEAN CONFERENCE ON COMPUTER VISION PROCEEDINGS, PROCEEDINGS OF FOURTH EUROPEAN CONFERENCE ON COMPUTER VISION. ECCV 96, CAMBRIDGE, UK, 14-18 APRIL 1996, ISBN 3-540-61122-3, 1996, BERLIN, GERMANY, SPRINGER-VERLAG, GERMANY, pages 536 - 545 vol.1, XP002052100
Attorney, Agent or Firm:
Lloyd, Barry George William (Holborn Centre 120 Holborn London EC1N 2TE, GB)
Claims:
CLAIMS
1. A method of processing a digital image of an object comprising the steps of: extracting, from the image, data representing a plurality of pixels of the image; for each pixel for which data is extracted, comparing the extracted data with a predetermined value of the data for the associated pixel to form a comparison result; reconstructing an image of the object from those pixels for which the comparison result is less than a predetermined threshold, which threshold is dependent upon the expectation value of the data for the associated pixel.
2. A method according to claim 1 in which the predetermined value of the data for the associated pixel is the average value of that pixel in a reference data set.
3. A method according to claim 1 or 2 in which the reconstructed image and reference data in the form of a plurality of eigenvectors of a reference set of like objects are used to estimate eigencoefficients for the image.
4. A method of calculating a measure of an expectation value of data associated with a pixel of an image of an object, the method comprising: from a plurality of images of similar objects within a reference data set, calculating a plurality of eigenvalues λ_j and eigenvectors e_j for the data set; and calculating, for each of a plurality of pixels r_i, the measure Σ_{j=1..M} λ_j e_{j,r_i}², where λ_j = the eigenvalue of the j-th eigenvector, e_{j,r_i} = the r_i-th pixel of the j-th eigenvector and M is the number of eigenvectors.
5. A method according to claim 4 in which the measure also depends on the eigenvalue of the (M+1)-th eigenvector.
6. Apparatus for processing an image of an object comprising: means (8) for storing reference data derived from a set of examples of images similar to the object, the reference data including, for each of a plurality of picture element positions, predetermined range information for defining a permitted range of values therefor; means (5) for comparing each of a plurality of picture elements of the image to be processed with the predetermined range information for the corresponding picture element position to form a comparison result indicating whether or not the picture element falls within the permitted range; and means (9,14) for reconstructing an image of the object from only those pixels for which the comparison result indicates that the picture element falls within the permitted range.
7. Apparatus according to claim 6 in which the reference data further comprises a set of eigenvectors derived from the set of example images and the reconstructing means comprises means (9) to resolve the image against the eigenvectors, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, into a set of coefficients and means (14) to form a reconstruction of the image using the coefficients and the eigenvectors.
8. Apparatus for verifying or recognising an image of an object comprising: means (8) for storing reference data derived from a set of examples of images similar to the object, the reference data including, for each of a plurality of picture element positions, predetermined range information for defining a permitted range of values therefor; comparing means (5) for comparing each of a plurality of picture elements of the image to be verified or recognised with the predetermined range information for the corresponding picture element position to form a comparison result indicating whether or not the picture element falls within the permitted range; and means (10) for comparing the image of the object, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, with comparison data defining at least one image to determine whether the image of the object resembles the, or an, image defined by the comparison data.
9. Apparatus according to claim 8 in which the reference data further comprises a set of eigenvectors derived from the set of example images and the comparing means comprises means to resolve the image against the eigenvectors, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, into a set of coefficients and means (10) to compare the coefficients with coefficients which form said comparison data.
10. Apparatus according to claim 6,7,8 or 9 in which each item of the predetermined range information comprises the average value of the picture elements in the set of example images and an expected deviation range, and the comparing means (5) is operable to determine whether each picture element lies within that deviation range of said average value.
11. An apparatus according to claim 10 in which each expected deviation range is proportional to the standard deviation within the set of example images at the respective picture element position.
12. An apparatus according to claim 10 when dependent on claim 7 or on claim 9 in which each expected deviation range is proportional to an estimate of the standard deviation within the set of example images at the respective picture element position, the estimate being determined by calculating, for each of a plurality of pixels r_i, the quantity Σ_{j=1..M} λ_j e_{j,r_i}², where λ_j = the eigenvalue corresponding to the j-th eigenvector, e_{j,r_i} = the r_i-th pixel of the j-th eigenvector and M is the number of eigenvectors in the set.
Description:
IMAGE RECOGNITION OR RECONSTRUCTION WITH PRINCIPAL COMPONENT ANALYSIS AND WITH EXCLUSIONS OF OCCLUSIONS AND NOISE

This invention relates to image processing. One aspect of particular interest is the reconstruction of occluded images. Another is image recognition, and in particular face recognition, which is potentially useful as an automatic identity verification technique for restricted access to buildings, computer systems, bank accounts etc.

However, the task, for a machine, is a formidable one since in many respects a person's facial appearance can vary widely over time.

In many verification applications, a user carries a card including machine- readable data (stored magnetically, electrically or optically). One particular application of face recognition would be to prevent use of such cards by unauthorised personnel.

For this purpose, face identifying data of the correct user of the card is stored on the card, the data on the card is read out and a facial image of the person seeking to use the card is captured. The image is analysed and the results of the analysis are compared with the stored data on the card. If a good match is found, the person is allowed access to the system. However, the storage capacity of such a card is typically only a few hundred bytes, which is very much lower than the space needed to store a recognisable image as a frame of pixels.

Several encoding analysis techniques are known to reduce the amount of data necessary to store an image. The Karhunen-Loeve Transform (KLT) is well known in the signal processing art for various applications. This is also known as principal component analysis (PCA). This appearance-based method represents images in a compact way by exploiting their statistical variability, as originally described in Sirovitch and Kirby,"Low dimensional procedure for the characterisation of human faces"J. Opt. Soc. Am., Volume 4 number 3 pages 519 to 524, March 1987.

Eigenpictures are generated to obtain a low-dimensional yet accurate representation of human faces. In this paper, images of substantially the whole face of members of a reference population were processed to derive a set of M eigenvectors, each having a picture-like appearance (these are also known as eigenpictures). In subsequent recognition, a given test face (which need not belong to the reference population) is characterised by its position in the M-dimensional space defined by these Eigenvectors.

Generally, all the images for the members of a reference population have the same general appearance, i.e. for face recognition, all the images of a reference population are captured without the person having any occlusion to the face, e.g. no spectacles or facial hair. If an allowed user wearing glasses or having a moustache tries to access the system, it is likely that they will be rejected, since the glasses or moustache do not form part of the reference data set and could result in significant errors.

In accordance with the invention there is provided a method of processing a digital image of an object comprising the steps of: extracting, from the image, data representing a plurality of pixels of the image; for each pixel for which data is extracted, comparing the extracted data with a predetermined value of the data for the associated pixel to form a comparison result; reconstructing an image of the object from those pixels for which the comparison result is less than a predetermined threshold, which threshold is dependent upon the expectation value of the data for the associated pixel.

Thus, for reference data representing examples of the object, an image is reconstructed without those pixels which are widely separated from the main cluster of values for the corresponding pixel in the reference data.

Preferably the predetermined value of the data for the associated pixel is the average value of that pixel in a reference data set. Eigencoefficients for the image may be estimated from the reconstructed image and reference data in the form of a plurality of eigenvectors representing a reference set of images of like objects.

According to another aspect of the invention there is provided apparatus for processing an image of an object comprising: means for storing reference data derived from a set of examples of images similar to the object, the reference data including, for each of a plurality of picture element positions, predetermined range information for defining a permitted range of values therefor; means for comparing each of a plurality of picture elements of the image to be processed with the predetermined range information for the corresponding picture element position to form a comparison result indicating whether or not the picture element falls within the permitted range; and

means for reconstructing an image of the object from only those pixels for which the comparison result indicates that the picture element falls within the permitted range.

Preferably the reference data further comprises a set of eigenvectors derived from the set of example images and the reconstructing means comprises means to resolve the image against the eigenvectors, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, into a set of coefficients and means to form a reconstruction of the image using the coefficients and the eigenvectors.

In another aspect the invention provides an apparatus for verifying or recognising an image of an object comprising: means for storing reference data derived from a set of examples of images similar to the object, the reference data including, for each of a plurality of picture element positions, predetermined range information for defining a permitted range of values therefor; means for comparing each of a plurality of picture elements of the image to be processed with the predetermined range information for the corresponding picture element position to form a comparison result indicating whether or not the picture element falls within the permitted range; and means for comparing the image of the object, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, with comparison data defining at least one image to determine whether the image resembles the, or an, image defined by the comparison data.

Preferably the reference data further comprises a set of eigenvectors derived from the set of example images and the comparing means comprises means to resolve the image against the eigenvectors, using only those pixels for which the comparison result indicates that the picture element falls within the permitted range, into a set of coefficients and means to compare the coefficients with coefficients which form said comparison data.

Each item of the predetermined range information may comprise the average value of the picture elements in the set of example images and an expected deviation range, the comparing means being operable to determine whether each picture element lies within that deviation range of said average value. Each expected deviation range can be proportional to the standard deviation within the set of example images at the respective picture element position; or may be proportional to an estimate of the standard deviation within the set of example images at the respective picture element position, the estimate being determined by calculating, for each of a plurality of pixels r_i, the quantity Σ_{j=1..M} λ_j e_{j,r_i}², where λ_j = the eigenvalue corresponding to the j-th eigenvector, e_{j,r_i} = the r_i-th pixel of the j-th eigenvector and M is the number of eigenvectors in the set.

The invention will now be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 shows an embodiment of a system incorporating the invention; Figure 2 shows a second embodiment of a system incorporating the invention; Figures 2a and 2b show examples of outlying pixels identified according to the invention; and Figure 3 shows examples of images reconstructed using prior art techniques and the technique of the invention.

Figure 1 shows an embodiment of the invention suitable for credit card verification of a user of a terminal. A video camera 1 receives an image of a prospective user of the terminal. Upon entry of the card to a card entry device 2, the analogue output of the video camera is digitised by an A/D converter 3, and, if desired after preprocessing 12 such as noise filtering (spatial or temporal) and brightness or contrast normalisation, is sequentially clocked into a frame store 4. The image stored in the frame store may be considered as a column vector θ consisting of individual pixel values θ_k (k = 1...K), where K is the total number of picture elements (pixels) in the image. A video processor 5 (for example a suitably programmed digital signal processing chip) is connected to access the frame store 4 and process the digital image therein.

More particularly, the video processor serves to identify those pixels of the image which are considered likely to represent occlusions or errors ("outliers") which do not lie within the normal range of pixel values, so that these may be excluded from further consideration. In the interests of reducing the amount of computation required, this and subsequent processing may be based on, rather than the full set of K pixels, a sub-sample of this set, defined as those pixels θ_k for which k = r_i, where R = [r_i] (i = 1...I) is a set of indices, although this is not of course essential.

A reference data store 8 contains, as is conventional for PCA systems, a plurality (e.g. 100) of eigenpictures (discussed in further detail below) which have previously been derived in known manner from a representative population of faces, and an average image θ̄, that is, the average of the pixel values of the images from which the database has been generated. Note that it is neither necessary nor indeed usual that the population from which the database is derived includes the actual individuals to be recognised, provided that the population is representative; typically the database might have been generated from 10,000 images.

The video processor 5 is connected to receive the average image θ̄ from the reference data store 8, and the first step which the processor performs is to subtract this average image from the image θ stored in the frame store to form a difference image, or caricature, φ; i.e.

φ_k = θ_k − θ̄_k

or, for the sub-sampled case,

φ_{r_i} = θ_{r_i} − θ̄_{r_i}

The second function of the video processor 5 is to compare each element of the caricature φ (or, in the case of a subsampled picture, each element defined by R) with the corresponding one of a set of threshold values t_k (k = 1...K): if the modulus of the element exceeds the threshold (i.e. if |φ_k| > t_k) then it generates a validity signal indicating that this element is deemed to be an outlier and therefore to be excluded from subsequent processing. The values t_k are fixed, and are stored in the reference data store 8; the determination of these values is discussed below. The values of the caricature elements and the validity signals are transferred to an image buffer 7.

A transform processor 9 (which may in practice be realised as processor 5 acting under suitable stored instructions) computes co-ordinates or components of the image (disregarding those elements signalled as outliers) with reference to each eigenpicture, to give a vector of M, typically 100, coefficients a_i (i = 1...M) using the principal component analysis method. The card entry device 2 reads from an inserted credit card the 100 eigencoefficients which characterise the face of the authorised user of that card, and these are input, via a buffer 13, to a comparator 10 (which may again in practice be realised as part of a single processing device) which measures the distance between the two sets of coefficients. The preferred metric is the Mahalanobis distance, although other distance metrics could easily be used. If this distance is less than a predetermined threshold, correct recognition is indicated at an output 11. Otherwise recognition failure is signalled. This apparatus could be modified for recognition, as opposed to verification, by replacing the card reader 2 by a database in which are stored a plurality of sets of coefficients, each set corresponding to a respective face (or other object) to be recognised; the task of the comparator 10 then becomes that of determining which set has the smallest distance from the set of coefficients output by the processor 9.
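The comparison performed by the comparator 10 can be sketched briefly (Python/NumPy here is purely illustrative; the patent prescribes no implementation language, and the function names and threshold value are assumptions). Because the eigencoefficients are decorrelated and the variance associated with the j-th eigenvector is the eigenvalue λ_j, the Mahalanobis distance in eigenspace reduces to a variance-weighted Euclidean distance:

```python
import numpy as np

def mahalanobis_distance(a_live, a_card, eigenvalues):
    # In the decorrelated eigenspace the covariance matrix is diagonal,
    # with the eigenvalues on the diagonal, so the Mahalanobis distance
    # reduces to a per-coefficient, variance-weighted Euclidean distance.
    d = a_live - a_card
    return float(np.sqrt(np.sum(d * d / eigenvalues)))

def verify(a_live, a_card, eigenvalues, threshold):
    # Correct recognition is indicated when the distance falls below a
    # predetermined threshold (the threshold value is hypothetical).
    return mahalanobis_distance(a_live, a_card, eigenvalues) < threshold
```

For recognition rather than verification, the same distance would simply be evaluated against every stored coefficient set and the minimum taken.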

The values to be used for the threshold values t_k require explanation. It is important to appreciate that these are fixed; i.e. they do not depend on the image under consideration. However there is a separate threshold value t_k for each pixel, or, in the case of subsampling, for each pixel of the set R. The threshold value serves to represent a typical "normal" range of variation of that particular pixel position within the image, and may be some small multiple (e.g. √3 ≈ 1.73) of the standard deviation of the luminance values of that pixel over the reference population (that is, the same reference population from which the mean values θ̄_k were obtained). In the event that it is not possible or desirable to calculate this, an alternative method will be described.

Firstly, however, the method of principal component analysis will be described in more detail, which will serve also to explain details of the generation of the data stored in the reference data store 8, operation of the transform processor 9, the

derivation of the coefficients to be stored on the credit card, and the operation of aspects of Figure 2.

In a training phase, images are captured of objects within the desired data set.

In the example discussed herein the data set comprises the faces of clean-shaven Caucasian males; however, clearly the invention is not limited to this data set. The images may be stored as grey level measurements for each pixel (or the Fourier transform thereof).

Let the images be represented by θ^(j), which is a column vector whose elements are the pixel values of the image taken in raster-scan order, where J is the total number of images and j (j = 1,...,J) is an index number identifying the particular image.

An average face image θ̄ is then formed:

θ̄ = (1/J) Σ_{j=1..J} θ^(j)    (1)

and the deviation φ from the average, or caricature, is formed for each face:

φ^(j) = θ^(j) − θ̄    (2)

The covariance matrix C is then calculated:

C = (1/J) Σ_{j=1..J} φ^(j) φ^(j)T    (3)

Matrix C is symmetrical and non-negative and its eigenvalues λ and orthonormal eigenvectors e satisfy:

C e^(n) = λ^(n) e^(n)    (4)

where λ^(n) is the n-th biggest eigenvalue and e^(n) is the corresponding eigenvector. The eigenvector is also known as an eigenpicture or eigenface.

Solving equation 4, that is (C − λ^(n) I) e^(n) = 0, enables the eigenvalues λ^(n) and hence the eigenpictures e^(n) to be determined.

The faces of the data set are therefore represented by a set of eigenpictures.

If the data set represents, say, 10,000 images, it is usual for only the first M, say 100, eigenpictures to be calculated and stored. This is usually sufficient to distinguish the individual faces.

For any member φ^(j) of the data set, the best approximation φ̂^(j) in the M-dimensional space will be given by:

φ̂^(j) = Σ_{n=1..M} a_n^(j) e^(n)    (5)

where a_n^(j) is the n-th eigencoefficient of the j-th image, typically obtained by projection of the input face onto the corresponding eigenface, i.e.:

a_n^(j) = ⟨φ^(j), e^(n)⟩    (6)

where ⟨x, y⟩ is the scalar product of vectors x and y.

Therefore the reference data for the population comprises a set of eigenpictures and eigenvalues representing the data set (or population) and a set of eigencoefficients for each member of the set. An approximation φ̂ of a member of the set can be reconstructed by substituting the eigencoefficients and eigenpictures into equation 5.
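The training steps of equations 1 to 6 can be sketched as follows. This is a minimal illustration in Python/NumPy (an assumption; the patent prescribes no language), and it solves the K×K covariance eigenproblem directly; for realistic image sizes one would normally use the smaller J×J inner-product matrix or an SVD instead:

```python
import numpy as np

def train_pca(faces, M):
    # faces: (J, K) array, one raster-scanned image per row.
    avg = faces.mean(axis=0)                   # average face (equation 1)
    carics = faces - avg                       # caricatures phi(j) (equation 2)
    C = carics.T @ carics / len(faces)         # covariance matrix (equation 3)
    lams, vecs = np.linalg.eigh(C)             # solve C e = lambda e (equation 4)
    order = np.argsort(lams)[::-1][:M]         # keep the M largest eigenvalues
    return avg, lams[order], vecs[:, order].T  # eigenpictures as rows, (M, K)

def project(caric, eigenpics):
    # Eigencoefficients by projection onto the eigenfaces (equation 6).
    return eigenpics @ caric
```

The returned eigenpicture rows are orthonormal, and the eigenvalues come out in the descending order the text assumes.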

Thus far the use of PCA is conventional. However, according to one embodiment of the invention, further reference data are also generated which represent, for each pixel of an image, a mathematical expectation derived from the eigencoefficients a^(n).

This reference data is also stored in the data reference store 8.

For an unknown image, there are no eigencoefficients available. It is therefore conventional to obtain the coefficients through projection of the unknown image onto each of the existing eigenfaces, as in equation 6 above. This method however is not robust, since it cannot handle problems relating to occlusion or so-called outliers, e.g. samples of an image which lie outside the normal range of pixel values.

As mentioned above, an approximation φ̂ of an image can be reconstructed using equation 5, i.e.

φ̂ = Σ_{n=1..M} a_n e^(n)    (8)

A reconstruction error E(r) may then be calculated:

E(r) = Σ_{i=1..I} ( φ_{r_i} − Σ_{j=1..M} a_j e_{j,r_i} )²    (9)

where r = (r_1,...,r_I) is a set of sampled image points, so that φ_{r_i} is the r_i-th pixel of image φ and, likewise, e_{j,r_i} stands for the corresponding pixel in the j-th eigenface.

The coefficients a_j are thus calculated by the transform processor 9 so as to minimise the error E, by a least-squares method or by a hill-climbing method, described below.

The same methods may be used to generate the coefficients to be stored on the credit card.

However, the minimisation of Equation 9 will only produce correct a_j values if the set of points r does not contain outliers: noisy pixels or points belonging to occluded areas. E(r) is therefore minimised in a robust manner by taking into account the pixelwise error distribution to prune out outliers, as discussed earlier.
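One way to realise the least-squares variant of this minimisation, assuming the outliers have already been flagged, is to solve equation 9 over the retained pixels only. The following is an illustrative Python/NumPy sketch (the mask is assumed to come from the thresholding step described elsewhere; none of these names appear in the patent):

```python
import numpy as np

def fit_coefficients(caric, eigenpics, inlier_mask):
    # caric:       (K,) caricature, i.e. image minus the average face.
    # eigenpics:   (M, K) eigenpictures, one per row.
    # inlier_mask: (K,) boolean; False marks pixels flagged as outliers.
    A = eigenpics[:, inlier_mask].T            # design matrix over inliers only
    b = caric[inlier_mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs                              # minimises E(r) of equation 9
```

Because occluded pixels never enter the design matrix, an occlusion that would dominate a plain projection leaves the fitted coefficients unaffected.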

Consider now the estimation of the standard deviation of the reference population (or, rather, its square, the variance).

As [e_1,...,e_N], where N is the total number of eigenvectors, is a complete basis of the reference population, the r_i-th pixel of the caricature φ can be perfectly represented as:

φ_{r_i} = Σ_{j=1..M} a_j e_{j,r_i} + Σ_{j=M+1..N} a_j e_{j,r_i}

where the first sum on the right-hand side extends over the eigenspace and corresponds to the so-called DIFS (discrepancy in the face space), and the second one stands for the residual error due to face appearance variability outside the face space, which we will refer to as DOFS (for discrepancy outside the face space).

An expectation of the pixelwise quadratic reconstruction error can be obtained (taking into account the decorrelation properties of the KLT expansion) as

⟨φ_{r_i}²⟩ = Σ_{j=1..M} ⟨a_j²⟩ e_{j,r_i}² + Σ_{j=M+1..N} ⟨a_j²⟩ e_{j,r_i}²    (16)

Now the expected second order moment (i.e. the variance) associated with the j-th eigenvector is just the j-th eigenvalue, i.e.

⟨a_j²⟩ = λ_j

so that the first term of (16) becomes:

DIFS²(r_i) = Σ_{j=1..M} λ_j e_{j,r_i}²

DIFS is based on the mathematical expectation (not the actual value) of the a_j coefficients. Expectation is defined as follows: let x be a random variable with a probability density function f_x(x). Its mathematical expectation ⟨x⟩ is defined as:

⟨x⟩ = ∫ x f_x(x) dx

Analogously, for the second order moment (quadratic terms):

⟨x²⟩ = ∫ x² f_x(x) dx

DIFS² has no dependence on the image under consideration. It depends only on the actual pixel r_i of the eigenpictures being sampled. This is because it is based on a mathematical expectation, which involves removing the dependence on the a_j through integration (see the definitions of the mathematical expectation above). The last equality holds because of the properties of the a_j coefficients.

The process followed to compute DIFS²(r_i) for a given pixel r_i consists of the following steps: for each eigenface e_j (for j between 1 and M), take the value of the r_i-th pixel and square it; multiply the above value by its corresponding eigenvalue λ_j; and sum for all j between 1 and M. As far as DOFS is concerned, the process is exactly the same and, thus, we have:

DOFS²(r_i) = Σ_{j=M+1..N} λ_j e_{j,r_i}²

However, neither the λ_j nor the e_j are usually available for j > M. Therefore, it is necessary to estimate this term. The assumption made is that of the decay and stabilisation of the higher order eigenvalues. We shall assume that they are approximately equal, i.e. λ_j ≈ σ̃ for j > M, where σ̃ is a constant. As the eigenvalues are ordered in descending order, λ_{M+1} is an upper bound of the higher order eigenvalues (j > M) and, assuming the distribution of eigenvalues is flat enough beyond M, λ_{M+1} can also be considered a reasonable estimate of σ̃. As the λ_j are now supposed to be approximately constant, they can be factored out from the summation:

DOFS²(r_i) ≈ λ_{M+1} Σ_{j=M+1..N} e_{j,r_i}²

The eigenfaces are orthonormal vectors, i.e. they are orthogonal and their norm is unity, which can be summarised as:

⟨e_k, e_l⟩ = 1 if k = l, and 0 otherwise

where ⟨e_k, e_l⟩ stands for the scalar product of vectors e_k and e_l. Hence, if we create a square matrix E where the columns are the N eigenfaces, that matrix will be orthogonal, i.e. E Eᵀ = I, where I is the identity matrix, so that the transpose of E is also its inverse matrix. Because of the uniqueness of the inverse matrix, the opposite also holds and we will have Eᵀ E = I, i.e. Eᵀ is also an orthogonal matrix. This implies that, if we take any column of Eᵀ, square all the elements, and sum them up, the value of that sum will be just one. Consequently:

Σ_{j=1..N} e_{j,r_i}² = 1

and, thus,

Σ_{j=M+1..N} e_{j,r_i}² = 1 − Σ_{j=1..M} e_{j,r_i}²

So, in order to estimate DOFS², we just need λ_{M+1}. If, however, λ_{M+1} is not available, λ_M could be used instead with very little additional error.

Therefore,

DOFS²(r_i) ≈ λ_{M+1} ( 1 − Σ_{j=1..M} e_{j,r_i}² )
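The offline computation of DIFS²(r_i), together with the λ_{M+1} estimate of DOFS²(r_i), can be sketched as follows (illustrative Python/NumPy; the array shapes and function name are assumptions, not taken from the patent):

```python
import numpy as np

def difs_dofs_squared(eigenvalues, eigenpics, M):
    # eigenvalues: at least M+1 values, sorted in descending order.
    # eigenpics:   (M, K) array holding the first M eigenpictures as rows.
    e2 = eigenpics[:M] ** 2                          # e_{j,ri}^2 for j <= M
    difs2 = eigenvalues[:M] @ e2                     # sum_j lambda_j e_{j,ri}^2
    dofs2 = eigenvalues[M] * (1.0 - e2.sum(axis=0))  # lambda_{M+1}(1 - sum e^2)
    return difs2, dofs2
```

When the eigenvalue spectrum really is flat beyond M, this estimate coincides with the exact tail sum, as the derivation above indicates.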

DIFS and DOFS allow the estimation of outliers or pixel errors when there is no knowledge about the actual a_j coefficients for a face. Computing the coefficients in a robust manner is a computationally expensive process, and the deployment of DIFS allows the estimation of whether a pixel is or is not an outlier (noise, occlusion, etc.) by taking into account the statistical variability of the grey-scale values and the magnitude of its discrepancy with reference to its expected value.

Once the values of the DIFS and DOFS have been calculated, they are stored in the data reference store 8.

In use, the DIFS and DOFS values are used to determine whether a pixel of an unknown image is an outlier. Thus φ_{r_i} is compared by the comparator 10 with a threshold dependent on (DIFS² + DOFS²), e.g. the test φ_{r_i}² > 3 (DIFS²(r_i) + DOFS²(r_i)) or, equivalently, |φ_{r_i}| > √3 √(DIFS²(r_i) + DOFS²(r_i)). Of course, the actual threshold value could be stored in the store 8, rather than the DIFS and DOFS values. If φ_{r_i}² is greater than this threshold, the pixel is considered an outlier and is discarded. All of the pixels may be considered, or the selection R thereof, which may be random.
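The outlier test itself then reduces to a per-pixel comparison; a minimal sketch (illustrative Python/NumPy, with the factor 3 taken from the inequality above):

```python
import numpy as np

def outlier_mask(caric, difs2, dofs2):
    # A pixel is an outlier when phi^2 exceeds 3 * (DIFS^2 + DOFS^2).
    return caric ** 2 > 3.0 * (difs2 + dofs2)
```

The complement of this mask plays the role of the validity signals: only pixels for which the test fails are passed on to the coefficient estimation.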

Outlier rejection as shown can be used to provide an initial estimate of the a_j coefficients using equation 5, without taking into account the pixels which were discarded in the outlier determination step. A Dynamic Hill Climbing (DHC) algorithm generates random estimates of the a_n coefficients. An example of such a DHC can be found in the paper "Dynamic Hill Climbing" by M. De La Maza and D. Yuret, AI Expert, pages 26-32, March 1994. The algorithm proceeds by measuring the direction in which the error function is minimised and altering the coefficient estimates in that direction. A number of restarts is used, and the solution with the minimum value of the error function is selected as the set of a_j coefficients representing the face.
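A much-simplified stand-in for this search can be sketched as a random-restart hill climb with an adaptive step size. This is not the DHC algorithm of De La Maza and Yuret, merely an illustration (in assumed Python/NumPy) of minimising E(r) over the retained pixels by stochastic search with restarts:

```python
import numpy as np

def hill_climb_fit(caric, eigenpics, inlier_mask, restarts=5, iters=300, seed=0):
    # Minimise the masked reconstruction error E(r) of equation 9 by a
    # random-restart hill climb (simplified illustration, not full DHC).
    rng = np.random.default_rng(seed)
    A = eigenpics[:, inlier_mask].T
    b = caric[inlier_mask]

    def error(a):
        resid = A @ a - b
        return float(resid @ resid)

    best, best_err = None, np.inf
    for _ in range(restarts):
        a = rng.normal(size=eigenpics.shape[0])   # random initial estimate
        step, err = 1.0, error(a)
        for _ in range(iters):
            cand = a + step * rng.normal(size=a.size)
            cand_err = error(cand)
            if cand_err < err:                    # downhill: accept, widen step
                a, err, step = cand, cand_err, step * 2.0
            else:                                 # uphill: shrink the step
                step *= 0.5
                if step < 1e-12:
                    break
        if err < best_err:                        # keep the best restart
            best, best_err = a.copy(), err
    return best
```

In practice a least-squares solve is cheaper for this quadratic error; the stochastic search becomes interesting when the error function is made robust (non-quadratic) as the text describes.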

Referring now to Figure 2, this shows an apparatus which serves not for identity verification but, rather, for the enhancement of an image of a face. Items 1, 3, 12, 4, 5, 7, 8 and 9 are identical both in construction and operation to those items of Figure 1 having the same reference numerals. Here, however, the object is to regenerate an enhanced image which is free from, or at least less affected by, the presence of outliers. The coefficients a_n output by the transform processor 9 are received by an inverse transform processor 14, which uses them to form a weighted sum of the eigenvectors e from the reference data store 8, in accordance with equation 8. The face is thus reproduced using the a_n coefficients and the eigenpictures.

DIFS²(r_i) can be computed offline for a given sample of image pixels and used together with DOFS² to determine the validity of the reconstruction error associated with a particular pixel of the random sample. An outlier map can be generated by picking out those pixels exceeding a threshold based on the previously computed expectations.

Figure 2a shows an image in which sections over the eye and chin have been occluded and Figure 2b shows an image of a face having spectacles and a moustache. In Figures 2a and 2b, the solid black sections of the face represent the occluded areas and the white pixels represent those pixels of the face exceeding the threshold.

We can see how in both cases outlier detection is successful and occluding areas are clearly identified.

Other data, e.g. head measurements, may also be incorporated into the recognition process, and recognition results may be combined as is well known.

Figure 3 shows the effect of the invention in image reconstruction compared with projection. Figure 3a is the original image. Figure 3b shows the image reconstruction of the original image using conventional projection techniques. Figure 3c shows reconstruction of the original image using the apparatus of Figure 2. Figure 3d is an image with heavy occlusion. Figure 3e shows a reconstruction of an image using conventional projection techniques when the occluded image of Figure 3d is received.

Figure 3f shows a reconstructed image of the occluded image using the apparatus of Figure 2. As can be seen from Figures 3c and 3f the technique enables a good reconstruction of an image even if the test image (Figure 3d) is occluded. The first 100 coefficients were used in each case. As can be seen in Figure 3e, when conventional projection techniques are used the signal error owing to the presence of outliers is scattered across and distorts the whole face. In Figure 3f the face occlusion effects are nearly completely eliminated and face appearance is preserved. On the other hand, Figures 3b and 3c show that the method also provides good results in the absence of occlusion.