


Title:
OBJECT SEPARATION IN MULTISPECTRAL IMAGE PROCESSING
Document Type and Number:
WIPO Patent Application WO/1996/002043
Kind Code:
A1
Abstract:
Processing multispectral images to separate artificial objects from natural objects by background discriminant transformation. The processing involves identifying on the image an object of the natural (or background) class, identifying an object of the artificial class, calculating a mean vector and co-variance matrix of both areas, calculating transformation vectors for maximising the co-variance of the artificial class relative to the natural class, and applying the transformation vectors to the image to produce a transformed image.

Inventors:
SHETTIGARA VITTALA K (AU)
Application Number:
PCT/AU1995/000412
Publication Date:
January 25, 1996
Filing Date:
July 07, 1995
Assignee:
COMMW OF AUSTRALIA (AU)
SHETTIGARA VITTALA K (AU)
International Classes:
G06T5/00; G06V20/13; (IPC1-7): G06T5/00
Other References:
INTERNATIONAL JOURNAL OF REMOTE SENSING, 1991, Vol. 12, No. 10, pages 2153-2167, K.V. SHETTIGARA, "Image Enhancement Using Background Discriminant Transformation".
JAPIO, JPAT ONLINE ABSTRACT, Accession No. 93-342348; & JP,A,05 342 348 (TSUBAKIMOTO CHAIN CO.) 24 December 1993.
JAPIO, JPAT ONLINE ABSTRACT, Accession No. 93-135172; & JP,A,05 135 172 (OLYMPUS OPTICAL CO. LTD.), 1 June 1993.
JAPIO, JPAT ONLINE ABSTRACT, Accession No. 90-224466; & JP,A,02 224 466 (MINOLTA CAMERA CO. LTD.) 6 September 1990.
JAPIO, JPAT ONLINE ABSTRACT, Accession No. 89-200357; & JP,A,01 200 357 (DAINIPPON PRINTING CO. LTD.), 11 August 1989.
PROCEEDING OF THE 1994 INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, Vol. 4, IEEE, PISCATAWAY, NJ, USA, pages 2372-2374, SMITH M. et al., "A New Approach to Quantitative Abundance of Materials in Multispectral Images".
Claims:
CLAIMS.
1. A method of processing multispectral images to separate artificial objects from natural objects by Background Discriminant Transformation (BDT) including the steps of: applying a linear transformation to an image containing objects in a natural class (being the background class) and objects in an artificial class to produce a transformed image in which artificial class variance is maximised relative to natural class variance; and displaying the transformed image on a colour display means.
2. The method of claim 1 wherein the step of applying the linear transformation to the image further includes the steps of: selecting a first area of the image that contains primarily objects in the natural class; selecting a second area of the image containing objects of both classes in which artificial objects are to be enhanced relative to natural objects; calculating a mean vector and co-variance matrix of the first area, which is the mean vector and co-variance matrix of the natural class; calculating a mean vector and co-variance matrix of the second area, which is the mean vector and co-variance matrix of the area of the image to be enhanced; calculating a mean vector and co-variance matrix of the artificial class from the mean vector and co-variance matrix of the natural class and the mean vector and co-variance matrix of the area of the image to be enhanced; calculating transformation vectors for maximising the variance of the artificial class relative to the natural class; and applying the transformation vectors to the image to produce a transformed image.
3. The method as in either claim 1 or claim 2 wherein the images are recorded from a multispectral sensor in the form of multiple data bands.
4. The method as in claim 3 wherein there are preferably three or more spectral bands.
5. The method as in claim 3 wherein the step of applying the transformation vectors to the image involves applying the transformation vectors to the multiple data bands comprising the image to produce transformed data bands.
6. The method as in claim 3 wherein the method includes the step of displaying the transformed image on a colour display means.
7. The method as in claim 6 wherein the same number of new bands as original bands are produced in the transformed multispectral image, except that in two transformed bands the information from the natural class and the artificial class respectively dominates, and in the other bands the ratio of information content of these two classes decreases, said two bands and one other band forming an effective display on a colour monitor for separating the natural and artificial classes.
8. The method as in claim 7 wherein the step of displaying the transformed image involves displaying transformed data bands and includes the steps of: selecting a first transformed data band containing maximum artificial class information; selecting a second transformed data band containing maximum natural class information; selecting a third transformed data band that has the second highest ratio of variances of artificial class information to natural class information; and applying the first, second and third transformed data bands to colour guns of the colour display means.
9. The method as in claim 7 wherein the step of displaying the transformed image on a colour monitor involves displaying transformed data bands and includes the steps of: selecting a first transformed band containing maximum artificial class information for red colour guns of the colour display means; selecting a second transformed data band containing maximum natural class information for green colour guns of the colour display means; and selecting a third transformed data band that has the second highest ratio of variances of artificial class information to natural class information for blue colour guns of the colour display means.
10. The method as in claim 8 wherein the colour display means is an RGB monitor and the data of the three selected bands are applied to the three colour guns.
11. The method as in claim 10 wherein the transformed data bands have been respectively transformed by three precomputed filters; the three filters transform each of the data bands and feed data to the three colour guns of the RGB monitor; each filter has a number of stored floating point coefficients, the number of coefficients being the same as the number of input bands of the image; and each filter modulates each input band with a filter coefficient and then adds them to produce the new transformed band.
12. The method as in claim 11 wherein the transformed bands created by the first filter, the last filter and the second filter are displayed in a colour monitor in red, green and blue colour respectively, such that artificial objects are made to appear in reddish-pinkish colour and natural objects will appear greenish or bluish.
13. The method as in claim 12 wherein pixels of artificial objects in enhanced images are labelled by providing seed pixels for reddish-pinkish objects and natural objects are labelled by providing seed pixels in greenish-bluish areas.
14. The method as in claim 13 wherein the seed pixels are applied using the clustering program ISODATA.
15. The method as in claim 14 wherein the method is followed by the step of drawing the boundaries of the objects.
16. The method as in claim 15 wherein the images are put into a GIS.
17. The method as in any of the preceding claims wherein the step of displaying the transformed data sets may include further processing such as stretching and inverting so as to achieve optimal display of the transformed image.
18. An apparatus for the separation of artificial objects from natural objects by processing multispectral images in the form of multiple data bands containing objects in a natural class and objects in an artificial class, comprising: image display means adapted to display colour images of a selected area; background selection means associated with the image display means and adapted to delineate selected sub-areas of the image comprising primarily the natural class; matrix calculation means adapted to calculate mean vectors and co-variance matrices for selected images or sub-images and to calculate a mean vector and co-variance matrix for the artificial class; transformation vector calculation means adapted to calculate transformation vectors by maximising the variance of the artificial class relative to the variance of the natural class; and transformed data band generating means adapted to generate transformed data bands from the transformation vectors and the original data bands, said transformed data bands being displayed on the image display means.
Description:
OBJECT SEPARATION IN MULTISPECTRAL IMAGE PROCESSING

This invention relates to the general field of image processing and in particular to a method of processing images to separate artificial objects from natural objects. The method can be applied in real time or near-real time and can operate semi-automatically.

BACKGROUND ART

Processing of remotely obtained images has received considerable attention in recent years, particularly since the general availability of satellite images such as those known as Landsat. One aspect of the processing is the separation of artificial (man-made) objects from natural objects (surroundings). For mapping, terrain analysis and environmental monitoring the image processing can be done off-line. In the case of surveillance or reconnaissance it may be necessary to perform processing and analysis in real time or near real time.

A pixel-by-pixel method of image analysis for image segmentation, target recognition and image interpretation has been described in British Patent Number 2264205 assigned to Thomson CSF. The method relies on characterisation of texture by determining the modulus and orientation of the gradient of the luminance of each element. The method is not suitable for colour object separation.

An adaptive image segmentation process for object recognition is described in United States Patent Number 5408095 assigned to Honeywell Inc. Operation of this process requires the input of segmentation control parameters and external variables which require extensive prior knowledge of the imaged area. This process is not suitable for real time analysis because of the heavy reliance on prior knowledge.

McKeown (Philosophical Transactions of the Royal Society of London, 324, 1988) has described a knowledge based system for detecting and analysing man-made structures in remotely sensed images. The procedure requires the integration of spatial knowledge with image analysis. Ormsby ( International Journal of Remote Sensing, 13, 1992) has described a process using statistical measures of separation (divergence) between natural and artificial objects to recommend different Landsat TM bands to use for separating objects.

The inventor is aware of other image processing methods based on models in which objects are represented by chains of straight lines and arcs. These methods all require extensive knowledge of the area being investigated and are computationally expensive.

They are useful for post-mission analyses but are not suitable for real time or near-real time image processing.

It is known that natural and artificial objects exhibit different spectral characteristics in different wavelength bands. This suggests that a differencing technique can be used to differentiate artificial objects from natural objects. Such an approach is described below with reference to FIG 1 and FIG 2. The problem with such an approach is that the processed image will lack the detail of the original image because of the image differencing. Furthermore, the process only considers two bands out of three or more bands at a time.

OBJECT OF THE INVENTION

One object of the present invention is to provide an image processing method for separating artificial objects from natural objects. The method is applicable to multispectral data consisting of optical and/or infrared bands and/or synthetic aperture radar.

A further object of the invention is to provide an object separation method which can separate objects in real time or near-real time.

A still further object of the invention is to provide an image processing method which is useful for detecting small objects in images.

A yet further object of the invention is to provide a method of image processing which is suitable for fusing dissimilar images.

Yet another object is to provide the public with a useful alternative to existing image processing techniques.

DISCLOSURE OF THE INVENTION

In one form, although not necessarily the only or indeed the broadest form, the invention resides in a method of processing multispectral images to separate artificial objects from natural objects by background discriminant transformation including the steps of: applying a linear transformation to an image containing objects in a natural class, being the background class, and objects in an artificial class to produce a transformed image in which artificial class variance is maximised relative to natural class variance; and displaying the transformed image on a colour display means.


The step of applying a transformation to the image preferably further includes the steps of: selecting a first area of the image that contains primarily objects in the natural class; selecting a second area of the image containing objects of both classes in which artificial objects are to be enhanced relative to natural objects; calculating a mean vector and co-variance matrix of the first area, which is the mean vector and co-variance matrix of the natural class; calculating a mean vector and co-variance matrix of the second area, which is the mean vector and co-variance matrix of the area of the image to be enhanced; calculating a mean vector and co-variance matrix of the artificial class from the mean vector and co-variance matrix of the natural class and the mean vector and co-variance matrix of the area of the image to be enhanced; calculating transformation vectors for maximising the variance of the artificial class relative to the natural class; and applying the transformation vectors to the image to produce a transformed image.
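As a minimal sketch of the statistics steps above (pure Python with hypothetical helper names; the patent does not prescribe an implementation), the mean vector and co-variance matrix of a selected area of pixels can be computed as:

```python
def mean_vector(pixels):
    """Mean band vector of a list of equal-length pixel band vectors."""
    n = len(pixels)
    nb = len(pixels[0])
    return [sum(p[b] for p in pixels) / n for b in range(nb)]

def covariance_matrix(pixels):
    """Sample co-variance matrix (divisor n-1) of a list of pixels."""
    n = len(pixels)
    m = mean_vector(pixels)
    nb = len(m)
    cov = [[0.0] * nb for _ in range(nb)]
    for p in pixels:
        d = [p[b] - m[b] for b in range(nb)]
        for i in range(nb):
            for j in range(nb):
                cov[i][j] += d[i] * d[j] / (n - 1)
    return cov

# Hypothetical 2-band pixels from a selected area of the image
area = [[10.0, 20.0], [12.0, 21.0], [14.0, 25.0]]
```

In practice these statistics would be accumulated over the user-delineated training areas for the background class and over the area to be enhanced.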

The step of selecting the second area of the image may be simplified by including the step of selecting the whole image.

The method is preferably implemented on a system with known image display means which incorporate means for the selection of areas of an image. Typically, selection of an area is done by drawing a region on a screen under control of a cursor control device such as a mouse or trackball.

The basis of the invention is the idea that an image of an area can be modelled as having two classes, namely background class and non-background class. The bulk of the image is background class and the objects to be enhanced are non-background class.

The preferred method requires the user to identify the background class that needs to be suppressed in order to enhance the useful information. The background class is visually chosen by identifying a few training areas. The method requires mean vectors and co-variance matrices of the background class and of the whole image. The procedure automatically computes the percentage coverage of the background class in the image and also computes the mean vector and co-variance matrix for the non-background class. The mathematical theory behind the invention is described in "Image enhancement using background discriminant transformation", published by the author in International Journal of Remote Sensing, 1991, Vol. 12, No. 10, pp. 2153-2167, which is incorporated herein by reference.
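Assuming the image is a two-class mixture with background fraction p, the non-background mean and co-variance can be recovered from the whole-image and background statistics by standard mixture-moment identities. The sketch below uses hypothetical names and stands in for the exact formulae, which the text defers to the cited 1991 paper:

```python
def nonbackground_stats(m_w, c_w, m_b, c_b, p):
    """Recover non-background mean/co-variance from whole-image (w)
    and background (b) statistics, assuming a two-class mixture in
    which the background covers fraction p of the image."""
    q = 1.0 - p
    nb = len(m_w)
    # mixture of means: m_w = p*m_b + (1-p)*m_n
    m_n = [(m_w[i] - p * m_b[i]) / q for i in range(nb)]
    c_n = [[0.0] * nb for _ in range(nb)]
    for i in range(nb):
        for j in range(nb):
            ex_w = c_w[i][j] + m_w[i] * m_w[j]   # whole second moment
            ex_b = c_b[i][j] + m_b[i] * m_b[j]   # background second moment
            # mixture of second moments, then subtract m_n*m_n'
            c_n[i][j] = (ex_w - p * ex_b) / q - m_n[i] * m_n[j]
    return m_n, c_n
```

For example, a 50/50 mixture of a background class at mean [0, 0] (unit co-variance) with a non-background class at mean [2, 2] produces whole-image statistics from which the non-background statistics can be recovered exactly.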

To separate artificial objects from natural objects in the preferred method, the natural objects are chosen to comprise the background class and artificial objects are non-background. In order to enhance the detectability of objects in the artificial class, the coordinate axes are rotated so as to reduce the natural class variability and to increase the artificial class variability. In practical terms this means maximising the variance of the artificial class relative to the natural class.
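This maximisation is a generalized (Rayleigh-quotient) eigenvalue problem: find a direction v maximising (v'·Ca·v)/(v'·Cn·v) for the artificial and natural co-variance matrices. For two bands it has a closed form; the sketch below (a hypothetical helper, not the patent's code) solves det(Ca - lam*Cn) = 0 for the largest root and returns the corresponding direction:

```python
import math

def bdt_vector_2band(c_art, c_nat):
    """Direction v maximising (v' C_art v)/(v' C_nat v) for 2x2
    symmetric matrices: the generalized eigenvector belonging to
    the largest root of det(C_art - lam*C_nat) = 0."""
    a, b, d = c_art[0][0], c_art[0][1], c_art[1][1]
    p, q, r = c_nat[0][0], c_nat[0][1], c_nat[1][1]
    # expand det((a-lam*p, b-lam*q), (b-lam*q, d-lam*r)) = 0
    A = p * r - q * q
    B = -(a * r + d * p - 2 * b * q)
    C = a * d - b * b
    lam = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
    # eigenvector from the first row: (a-lam*p)*v0 + (b-lam*q)*v1 = 0
    v = (-(b - lam * q), a - lam * p)
    n = math.hypot(v[0], v[1]) or 1.0
    return lam, (v[0] / n, v[1] / n)
```

The returned ratio lam is the variance of the artificial class along v divided by the variance of the natural class along v, which is exactly the quantity the transformation seeks to maximise.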

In preference the images are recorded from a multispectral sensor in the form of multiple data bands. There are preferably three or more spectral bands.

The step of applying the transformation vectors to the image preferably involves applying the transformation vectors to the multiple data bands comprising the image to produce transformed data bands.

The procedure preferably computes the same number of new bands (axes) as the original bands in the multispectral image. However, the information content of artificial objects decreases, in relation to the natural (background) class, as the sequence number of the axes increases. That is, in the first and the second new bands the artificial objects dominate and in the last new band natural objects dominate. This applies where the original image has three or more bands.

In preference the step of displaying the transformed image on a monitor involves displaying transformed bands and includes the steps of: selecting the first transformed band containing maximum artificial class information; selecting the last transformed band containing maximum natural class information; selecting a second transformed band that has the second highest ratio of variances of artificial class information to natural class information; and applying the first, second and third selected bands to colour guns of the colour display means.

The colour display means is preferably an RGB (Red, Green, Blue) monitor and the data of the three selected bands are applied to the three colour guns.

In preference therefore the step of displaying the transformed image in a colour monitor involves displaying transformed bands and includes the steps of: selecting the first transformed band containing maximum artificial class information for red colour guns of the colour display means; selecting the last transformed band containing maximum natural class information for green colour guns of the colour display means; and selecting a second transformed band that has the second highest ratio of variances of artificial class information to natural class information for blue colour guns of the colour display means.

Displaying the transformed data sets may include further processing such as stretching and inverting so as to achieve optimal display of the transformed image.
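For instance, a simple linear stretch (with optional inversion) of a transformed band into the 0-255 display range might look like the following sketch; both are generic display operations, not code from the patent:

```python
def stretch_band(band, invert=False):
    """Linear contrast stretch of one band onto the 0..255 display
    range; optionally invert the band after stretching."""
    lo, hi = min(band), max(band)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    out = [(v - lo) * scale for v in band]
    if invert:
        out = [255.0 - v for v in out]
    return out
```

Stretching spreads the transformed band over the full display range; inversion can make a band in which the class of interest is dark easier to read.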

One advantage of the preferred form of the present invention is that the background is chosen by the user depending on the application. This provides control on the type of enhancement the user requires.

Another advantage of the preferred embodiment is that the method is scale invariant. The gain and offset of the sensor system do not affect the quality of the transformed image obtained from the method. This means that images of the same area taken by different instruments or images taken of the same area at different times can be merged without affecting the analysis significantly. Furthermore, data from dissimilar sensors can be fused.

In a further form, the invention resides in an apparatus for the separation of artificial objects from natural objects by processing multispectral images in the form of multiple data bands containing objects in a natural class and objects in an artificial class comprising: image display means adapted to display colour images of a selected area; background selection means associated with the image display means and adapted to delineate selected sub-areas of the image comprising primarily natural class; matrix calculation means adapted to calculate mean vectors and co-variance matrices for selected images or sub-images and to calculate a mean vector and co-variance matrix for the artificial class; transformation vector calculation means adapted to calculate transformation vectors by maximising the variance of the artificial class relative to the variance of the natural class; and transformed data band generating means adapted to generate transformed data bands from the transformation vectors and the original data bands, said transformed data bands being displayed on the image display means.

The apparatus may consist of a number of purpose-built modules designed for fast matrix processing such as array processors. Alternatively the apparatus may be a general purpose computer programmed so as to perform each of the tasks of the individual means.

The preferred method involves the application of three pre-computed filters (the transforming vectors discussed earlier) to an incoming stream of image data. The three filters feed data to the three colour guns of a colour display means. Each filter has a number of stored floating point coefficients and the number of coefficients is the same as the number of input bands of the image. Each filter modulates each input band with a filter coefficient and then adds them to produce a new synthetic band or image. Similarly, for each filter a new image is created. Images created by the first filter, the last filter and the second filter are displayed in a colour monitor in red, green and blue colour respectively. If the images are displayed in this way artificial objects can be made to always appear in reddish-pinkish colour. The natural objects will appear greenish or bluish. Because this process is not computationally intensive it can be implemented in hardware and operate in real time.
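The filter step described above is a per-pixel weighted sum across all input bands, one sum per filter (and hence per colour gun). A minimal sketch with made-up coefficients:

```python
def apply_filters(pixel_bands, filters):
    """Apply precomputed filter coefficient vectors to one pixel:
    each output value is a coefficient-weighted sum of all input
    bands (one output per filter / colour gun)."""
    return [sum(c * b for c, b in zip(coeffs, pixel_bands))
            for coeffs in filters]

# Hypothetical three filters over a 4-band pixel stream
filters = [[0.5, -0.25, 0.25, 0.0],    # red-gun filter
           [0.0, 0.25, 0.25, 0.5],     # green-gun filter
           [-0.125, 0.0, 0.25, 0.5]]   # blue-gun filter
rgb = apply_filters([100.0, 80.0, 60.0, 40.0], filters)
```

Because each output pixel needs only one multiply-add per input band per filter, the operation maps naturally onto hardware for real-time use.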

As the artificial objects appear in a distinctly different colour compared to natural objects, the pixels of artificial objects in images enhanced by the present invention can be labelled by providing seed pixels for reddish-pinkish objects. The natural objects can be labelled by providing seed pixels in greenish-bluish areas. Well-known procedures such as the ISODATA clustering program can be used for this purpose.
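ISODATA itself is beyond the scope of a short sketch; the simplified stand-in below labels each pixel by its nearest seed-class mean, just to illustrate the seeded-labelling idea (hypothetical values, not the patent's procedure):

```python
def label_pixels(pixels, seed_means):
    """Label each pixel with the 1-based index of the nearest
    seed-class mean - a simplified stand-in for ISODATA clustering."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(seed_means)), key=lambda k: d2(p, seed_means[k])) + 1
            for p in pixels]

# Seeds: class 1 = reddish-pinkish (artificial), class 2 = greenish-bluish
seeds = [[220.0, 120.0, 140.0], [60.0, 180.0, 160.0]]
labels = label_pixels([[230.0, 110.0, 150.0], [70.0, 190.0, 150.0]], seeds)
```

Real ISODATA additionally splits and merges clusters iteratively; the nearest-mean assignment shown here is only the labelling core.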

Once the pixels of artificial objects are labelled, the boundaries of the objects can be drawn by any one of a number of image processing programs, which process is called vectorising the image. These vectors form the objects which are then ready to be put into a GIS (Geographic Information System).

BRIEF DESCRIPTION OF THE DRAWINGS

To further assist in understanding the invention reference will be made to the following drawings in which:

FIG 1 shows the typical spectral profiles of vegetation and artificial objects,

FIG 2 is a plot of mean spectral differences of natural and artificial objects,

FIG 3 shows a scenario for real time object separation and detection,

FIG 4 shows a comparison of a satellite image before and after BDT, and

FIG 5 shows the result of extracting artificial objects from an aerial photographic image; it also shows the extraction of artificial objects as vectors for data entry into GIS.

In FIG 2 of the drawings the labels refer to:-

A Asbestos surface

B Concrete Surface

C Open fields and grasslands

D Dark Bitumen

E Light Bitumen

F Sugar Gums

G Dull metallic surfaces

H Bright metallic surfaces

I Ovals

J Aleppo pines

DETAILED DESCRIPTION OF THE DRAWINGS

It is to be understood that the following description is of the preferred embodiments of the invention and is merely illustrative and does not limit the scope of the invention.

The invention is based on the spectral characteristics of objects and the discovery that artificial and natural objects have different spectral characteristics. The different spectral characteristics can be exploited as a basis for object separation during image processing. Vegetation generally comprises the bulk of the natural component of images. A characteristic spectral profile of vegetation is shown in FIG 1 with a characteristic spectral profile of artificial objects. The vegetation profile shows a slight decline in reflectance from the green to the red followed by a sharp rise from the red to near infrared. In contrast the spectral profile of an artificial object displays a steady decline in reflectance as the wavelength increases.

From FIG 1 it is evident that natural objects are good reflectors in the infrared and artificial objects are good absorbers. This suggests that a differencing technique may be used to separate natural and artificial objects. FIG 2 shows a plot of a variety of natural and artificial objects which are classified by differencing the infrared band with the red band and the green band. The equations used for this purpose are :

(infrared - red) + 128 and (infrared - green) + 128.
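A small worked example of these differencing equations (illustrative numbers only): a vegetation-like pixel with high infrared reflectance yields large differences, while an artificial-surface pixel with low infrared reflectance yields small ones. The offset of 128 keeps negative differences representable in an 8-bit band:

```python
def difference_bands(infrared, red, green):
    """Per-pixel band differencing with a +128 offset so that
    negative differences remain displayable in an 8-bit image."""
    ir_red = [ir - r + 128 for ir, r in zip(infrared, red)]
    ir_green = [ir - g + 128 for ir, g in zip(infrared, green)]
    return ir_red, ir_green

# First pixel: vegetation-like (bright IR); second: artificial-like
ir_red, ir_green = difference_bands([180, 40], [60, 70], [80, 90])
```

As the description notes, this prior-art approach loses image detail and considers only two bands at a time, which motivates the transformation-based method of the invention.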

As discussed above this method of image processing is evident from the prior art but does not meet the objects of the present invention.

The invention employs background discriminant transformation to suppress the dominance of the background in multispectral image space, thereby enhancing non-background objects. Multispectral images are obtained from sensors such as the Système Probatoire d'Observation de la Terre (SPOT) satellite and Landsat. The method assumes that images obtained from these sources consist of two main classes: background and non-background. The multispectral images are linearly transformed with the linear transformation coefficients being computed to maximise the variance (information content) of the non-background objects relative to the background objects.

FIG 3 depicts the implementation of the invention in a surveillance scenario. Images from one or more scanners are received at a ground station. The received images are in bands (channels) of data (a Daedalus scanner provides 11 bands of data). In conventional processing an operator selects three of the 11 bands for display on a colour monitor. The number of possible combinations is 990 (11 x 10 x 9 ordered assignments of bands to the three colour guns) and it is impractical to scan every combination for the best object separation. By applying a background discriminant transformation three optimal bands for object separation are computed and displayed using linear combinations of all of the original 11 bands. Because the image transformation is not computationally intensive it can be done in real time.

In order to best exemplify the operation of the invention a computer program listing which calculates the vectors for background discriminant analysis is appended hereto.

The invention has been applied to a SPOT image of a harbour. FIG 4a shows the original SPOT image and FIG 4b shows the enhanced image. The original image includes features such as the main harbour, a series of islands and some ships outside the harbour in the ocean. The presence of small ships is difficult to detect.

The ocean was selected as the natural class for application of the invention with everything else being considered as artificial class. As can be seen in FIG 4b a large number of ships are visible that were not detectable in the unenhanced image. A number of pontoons and piers are also visible which were not visible in the unenhanced image.

The invention can also be used on colour aerial photographs, as shown in Figure 5. In figure 5a the red band of a colour photograph of an urban area is shown in black and white. The colour aerial photograph was transformed using this invention. All artificial objects appeared in pinkish colour in the transformed image.

Some samples of artificial objects (pinkish pixels) and some samples of natural objects (greenish-bluish pixels) were taken from the transformed image and processed through the ISODATA clustering algorithm (described in Decision Estimation and Classification - An Introduction to Pattern Recognition and Related Topics, C. W. Therrien, J. Wiley & Sons, 1989). This algorithm labelled all artificial object pixels as 1, which are shown in white in figure 5b, and all natural object pixels as 2, which are shown in grey in figure 5b.

After this stage, boundary lines (vectors) are drawn around the artificial (white) objects using vectorising software, as shown in black in figure 5b. These lines are sent to a GIS as a line drawing. This exercise may be considered a semi-automatic digitising of artificial objects.

As can be seen, an enhanced image with a highly differentiated representation of the two classes of objects can be achieved.

ANNEX

C NAME BDAMAN.F

C PURPOSE To άeteπnine vectors for Background discriminant c analysis. The Qudratic term (v'*Cn*v)/(v'*Cb*v) is c maximised. c Provision is also made to determine orthogonal c directions to the first BDA axis. This is because c the BDA axes are not orthogonal and do not account c for all the variance

C LANGUAGE FORTRAN

C AUTHOR K.V.SHETTIGARA

C DATE 10-04-88 (first created as transf.ftn) c Modification 18-2-89; bdamani created 29/6/92ie c COMPATIBILITY Made compatible to HYBMAN programmes Standard c including common blocks

C MAJOR CHANGES One of the major changes from TRANSF.FTN is byte c array for files is replaced by character array and c new I O routines in IOROUTS.FOR σeated c signature file The signature file is INTERGRAPH stat file of training c areas. Whole area statistics is classl and c background class stat is class2 c NEED TO DO No error trapping is incorporated (i.e IER not used )

C-

C

C DECLARATIONS

C

PARAMETER (MAXF=16,MASQ=256)

CHARACTER*i iNSIGF,OUTSff,OUTFII_≡,COMMENTS,CLASS(MAXF) COMMON /BLK4/ IPIX LIN,IOPT,IOPTP

COMMON /BLK5/ SM1(MAXF),SM(MAXF),SAMP(MAXF)

COMMON /BLK6/ SC1(MAXF,MAXF),SC2(MAXF,MAXF),SI1(MAXF,MAXF),

1 STDEVl(MAXF,MAXF),STDEV2(MAXFJvlAXF)

COMMON BLK7/ VECTM(MAXF),VECT01(MAXF,MAXF),AMBDA(MAXF),

1 GAIN(MAXF),OFFSET(MAXF),VECTO(MAXF)

COMMON /BLK8/ AAA(MAXF,MAXF),\ΕCT02(TvlAXF,MAXF),AMBDAT(MAXF),

1 NUMV(MAXF),AAD(MAXFJMAXF)

COMMON BLK9/ NF.NF1 J^F2,IOPTSTR,IOPTRK,NCOMPl,NCOMP2,MODVEC

*

* following common blocks are special to this programme....

COMMON /BLKA/ PERC0,PERC1,PERC2

COMMON /BLKB/ iTASK,ICOMP,IFΗ-E,rmKIN,ITRKO,NF,MAXFIL,NCOMPO

COMMON /BLKC/ IOPTlA,IOPT2AIOPT3A,IOPT3B,INVERT,IDECOR,NEGLEC

* bi-SsϊO VECΪΪi αv A ' sQ)!vΕ ' c α^ASQ),VΕCTl( IAXF)

DIMENSION VECTOR(MAXF,MAXF),RMATRX(MAXF,MAXF,MAXF)

INTEGER NCLASSLP,LO,IPOS

EQUIVALENCE (VECT11,VΕCT01),(VECT22,VECTO2)

LOGICAL EX

CHARACTER*1 YES,YES1,YES2,YESN0,N01,N02 DATA YES l,YES2,N01,N02/'Y7y'N, , n7 DATA NR.LO.LP /21.3 32/ DATA _NSIGF(1:12),INSIGF(13:13) /'12clihs.stat','0V DATA OUTSIF(l:8),OUTSIF(9:9) ΕDA.OUTl'.OV DATA OUTFILE(l:8),OUTFILE(9:9) / , BDA.OUT2',O7

C- C

C PROCESS

C

30

EXIST *** ' 35

38 IPOS*=38

OPEN (UNΓΓ-=NR^AME=INSIGF,TYPE=OLD * RR--7777) REWIND (MR)

WRITE(6,*) ' GIVE THE APPROXIMATE BACKGROUND AREAL COVER IN % ' READ(5,*) PERCl

PERC2 = lOO.-PERCl PERCO = PERC1 PERC2 PERC5 = PERCO c ! just needs to be intiatated to keep compiler happy

40 IPOS=40

WPJTE(6,'(A)') ' Give the name of output vector file '

READ(5, , (A)MOSTAT=IOS) OUTSIF

INQUIRE(FILE=0UTSIF XIST*=EX) IF (EX) THEN

WRΠΕ(6,'(A)') ' *** FILE EXISTS *** ' 45 WRΠE(6,'(A)') ' Give new file name or Λ z to exit '

READ(5 ) , (A)',IOSTAT=IOS) OUTSIF

IF (IOS.EQ.-1) GOTO 6666

GOTO 40

END IF 48 IPOS=48

OPEN (UN * TT=LO^AME= UTSIF,TYPE='NEW' RR--7777)

50 IPOS=50

WRITE(6,'(A)') ' Give the name of output LOG file '

READ(5,'(A)') OUTFILE

INQUIRE(FILE--OUTFπ_E£XIST_-EX) IF (EX) THEN

WRrTE(6,'(A)') ' *** FILE EXISTS *** ' 55 WRΠΕ(6,'(A)') ' Give new file name or Λ z to exit '

READ(5,'(A)MOSTAT=IOS) OUTFILE

IF (IOS.EQ.-1) GOTO 6666

GOTO 50

END IF 58 IPOS=58

OPEN (TJNTr=LP^AME=OUTFILE,TYPE='NEW , £RR--7777) C

C Read the signature file. RMATRX is a 3D matrix containing covariance c matrices of NCLASS. VECTOR is a 2D matrix containing MEAN

C vector of each NCALSS. SAMP is a vector containing sample c sizes of each class.

CALL SIGREAD(NR,CLASS,MAXF,NF,SAMP,VECTOR,RMATRX > NCLASS,IER) c IF (IER.NE.0) GO TO 7777

C STORE BACKGROUND AND WHOLE AREA COVARIANCE MATRICES CALL TRPOSE(MAXF,MAXF,RMATRX(l,l,l),l,SCl,XXX,IER) CALL TRPOSE(MAXF,MAXF,RMATRX(l,l,2),l,SC2,XXX,IER)

JFILE = -FILE + 1

WRITE(LP,'(A,I3,A,14A1)') ' INPUT FTLE MFTLE,' MNSIGF

WRΠΈ(LP;(AJ 7 IO.2)')

1 ' PERCENTAGE AREA COVER OF BACKGROUND = \PERC1

      WRITE(LP,'(/A/)') ' TOTAL AREA SAMPLE SIZE & CLASS MEANS'

      WRITE(LP,'(I10,16F10.2)') SAMP(1),(VECTOR(I1,1),I1=1,NF)

      WRITE(LP,'(/A/)') ' TOTAL AREA COVARIANCE MATRIX'

      CALL PRINTO(LP,NF,NF,SC1,XXX,IER)

      WRITE(LP,'(/A/)') ' BACKGROUND SAMPLE SIZE & CLASS MEANS '

      WRITE(LP,'(I10,16F10.2)') SAMP(2),(VECTOR(I1,2),I1=1,NF)
      WRITE(LP,'(/A/)') ' BACKGROUND COVARIANCE MATRIX'
      CALL PRINTO(LP,NF,NF,SC2,XXX,IER)

      DO 42 I1=1,NF
c        VECT1(I1) = VECTOR(I1,1)
         VECTO(I1) = VECTOR(I1,2)-VECTOR(I1,1)
42    CONTINUE
      IREENT = 0

C deduct the component due to between group cov. from numerator cov.
44    CONTINUE                  ! re-entry with change of PERCO

      DO 43 I1=1,NF
      DO 43 I2=I1,NF
         TWEEN1 = 0.0
c        ! compensate for earlier deduction
         IF (IREENT.GE.1) TWEEN1 = PERC5*VECTO(I1)*VECTO(I2)
         TWEEN = PERCO*VECTO(I1)*VECTO(I2)-TWEEN1
         SC1(I1,I2) = SC1(I1,I2)-TWEEN
         SC1(I2,I1) = SC1(I1,I2)
43    CONTINUE
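The DO 43 loop deducts the between-group (mean-difference) component, scaled by the background/non-background area ratio PERCO, from the whole-area covariance SC1. A minimal NumPy sketch of the same deduction follows; the helper name and argument names are illustrative, not from the patent:

```python
import numpy as np

def reduced_covariance(cov_total, mean_total, mean_bg, perc_bg):
    # PERCO = PERC1/PERC2: ratio of background to non-background cover
    p = perc_bg / (100.0 - perc_bg)
    # VECTO(I1) = VECTOR(I1,2) - VECTOR(I1,1): background mean minus whole-area mean
    d = mean_bg - mean_total
    # SC1(I1,I2) - PERCO*VECTO(I1)*VECTO(I2), symmetric by construction
    return cov_total - p * np.outer(d, d)
```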

      WRITE(LP,*) ' REDUCED COVARIANCE MATRIX '
      CALL PRINTO(LP,NF,NF,SC1,XXX,IER)

C DETERMINE -1/2 TH POWER OF THE DIVIDING COVARIANCE MATRIX BY

C SINGULAR VALUE DECOMPOSITION I.E. A=VDV' ETC. WHERE D IS THE

C DIAGONAL MATRIX CONTAINING EIGEN VALUES. -1/2 OF EIGEN VALUES

C ARE DETERMINED

CALL EIGSYM(NF,SC2,AMBDA,VECT01,XXX,IER)

C DETERMINE INVERSE SQUARE ROOT OF COVARIANCE MATRIX

      CALL SVDMAT(NF,VECT01,1,AMBDA,VECT01,2,VECT02,-0.5,XXX,IER)


      CALL SVDMAT(NF,VECT01,1,AMBDA,VECT01,2,AAA,1.,XXX,IER)
C
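The EIGSYM/SVDMAT pair forms W**-0.5 of the background covariance from its eigen-decomposition W = V D V'. A sketch of the same operation with NumPy (`inv_sqrt_sym` is an illustrative name, not the patent's SVDMAT routine):

```python
import numpy as np

def inv_sqrt_sym(w):
    # eigen-decomposition of the symmetric matrix: W = V diag(d) V'
    d, v = np.linalg.eigh(w)
    # raise the eigen values to the -1/2 power and recompose
    return v @ np.diag(d ** -0.5) @ v.T
```

Recomposing with power +1 instead of -0.5 reproduces the original matrix, which is what the second SVDMAT call above checks.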

      IF (IREENT.EQ.0) THEN
         WRITE(LP,'(/,A,/)') ' INVERSE SQRT. BACKGROUND COV. MATRIX'

         CALL PRINTO(LP,NF,NF,VECT02,XXX,IER)

         WRITE(LP,'(/,A,/)') ' ORIGINAL BACKGROUND COV. MATRIX RECOMPUTED'
         CALL PRINTO(LP,NF,NF,AAA,XXX,IER)

END IF

C store the non background covariance matrix in AAA
      CALL TRPOSE(NF,NF,SC1,1,AAA,XXX,IER)

C below SC1 matrix is the non-background matrix

      CALL MAMULT(NF,NF,NF,VECT02,SC1,VECT01,XXX,IER)

      CALL MAMULT(NF,NF,NF,VECT01,VECT02,STDEV1,XXX,IER)
C

C NOW STDEV1 IS THE RATIO OF MATRICES SECOND/FIRST; WEIGHT MATRIX
      IF (IREENT.EQ.0) THEN
         WRITE(LP,*) ' [NON-BACKGROUND]/[BACKGROUND] MATRIX'

         CALL PRINTO(LP,NF,NF,STDEV1,XXX,IER)
      END IF

C DETERMINE THE EIGEN VALUES AND VECTORS

      CALL EIGSYM(NF,STDEV1,AMBDA,VECT01,XXX,IER)

C sort out the eigen values in descending order

      CALL SRTEIG(NF,AMBDA,AMBDAT,NUMV,NEGAT,IER)

      DO 135 I1=1,NF
         AMBDA(I1) = AMBDAT(I1)
135   CONTINUE
C
C rearrange the eigen vectors in the order determined above

      CALL SRTVEC(NF,NF,VECT01,STDEV1,NUMV,IER)

      CALL TRPOSE(NF,NF,STDEV1,1,VECT01,XXX,IER)

C

C modify the vector V to [W**-0.5]V

C
c     IF (ITASK.EQ.2.AND.IOPT2A.NE.1) THEN

      CALL MAMULT(NF,NF,NF,VECT02,VECT01,SI1,XXX,IER)
      CALL TRPOSE(NF,NF,SI1,1,VECT01,XXX,IER)
c     END IF
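Taken together, the EIGSYM, SRTEIG/SRTVEC and MAMULT calls compute the background discriminant vectors: eigenvectors of [W**-0.5][A][W**-0.5] (A the non-background covariance, W the background covariance), sorted by descending eigenvalue and mapped back by [W**-0.5]. A condensed NumPy sketch of that chain (the function name is illustrative):

```python
import numpy as np

def bdt_vectors(cov_nonbg, cov_bg):
    # [W**-0.5] via eigen-decomposition of the background covariance
    dw, vw = np.linalg.eigh(cov_bg)
    w_inv_sqrt = vw @ np.diag(dw ** -0.5) @ vw.T
    # symmetric ratio matrix [W**-0.5][A][W**-0.5]
    m = w_inv_sqrt @ cov_nonbg @ w_inv_sqrt
    lam, u = np.linalg.eigh(m)
    # sort eigen values (and matching vectors) in descending order, cf. SRTEIG/SRTVEC
    order = np.argsort(lam)[::-1]
    lam, u = lam[order], u[:, order]
    # modify the vectors V to [W**-0.5]V
    return lam, w_inv_sqrt @ u
```

The returned columns satisfy V'AV=[LAMBDA] and V'WV=[I], the same check the listing performs next.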

C

C check if the transforming vectors are correct by using the
C relationship V'AV=[LAMBDA] AND V'WV=[I]

      CALL TRPOSE(NF,NF,VECT01,2,VECT02,XXX,IER)

CALL MAMULT(NF,NF,NF,VECT02,SC1,SI1,XXX,IER)

CALL MAMULT(NF,NF,NF,SI1,VECT01,STDEV1,XXX,IER)

      IF (IREENT.EQ.0) THEN

         WRITE(LP,'(/,A)') ' V''AV MATRIX'

         CALL PRINTO(LP,NF,NF,STDEV1,XXX,IER)
      END IF

      CALL MAMULT(NF,NF,NF,VECT02,AAA,SI1,XXX,IER)
      CALL MAMULT(NF,NF,NF,SI1,VECT01,STDEV1,XXX,IER)
      IF (IREENT.EQ.0) THEN

         WRITE(LP,'(/,A)') ' V''WV MATRIX'
         CALL PRINTO(LP,NF,NF,STDEV1,XXX,IER)

END IF

      IF (IREENT.EQ.0) NEGAT0 = NEGAT

      IF (IREENT.EQ.2) GOTO 168        ! get out of the iterative loop

      IF (NEGAT0.GT.0.AND.NEGAT.GT.0) THEN
         WRITE(LP,'(A,I4,A)') ' *** WARNING ',NEGAT,' EIGEN VALUES
     1 NEGATIVE **'
         WRITE(LP,'(A)') ' ** SOME TRANSFORMATION WOULD BE MEANINGLESS **'
         WRITE(6,'(A,I4,A)') ' *** WARNING ',NEGAT,' EIGEN VALUES
     1 NEGATIVE **'
c        WRITE(6,'(A)') ' **SOME TRANSFORMATION WOULD BE MEANINGLESS** '
c        WRITE(6,'(A)') ' WISH TO CHANGE THEME 1 PERCENTAGE ? [Y/N] (N> '

         WRITE(6,'(A)') ' ** DECREASING % OF BACKGROUND BY 1 % **'

PERC1 = PERC1 - 1.0

         IREENT = 1

PERC2 = 100.-PERC1

         PERC5 = PERCO
         PERCO = PERC1/PERC2
         write(6,'(a,F10.3)') ' modified background % ',PERC1
         write(LP,'(a,F10.3)') ' modified background % ',PERC1
         GOTO 44
      END IF

      IF (NEGAT0.EQ.0.AND.NEGAT.EQ.0) THEN
         WRITE(6,'(A)') ' ** INCREASING % OF BACKGROUND AREA BY 1 % **'
         PERC1 = PERC1+1.0
         IREENT = 1
         PERC2 = 100.-PERC1
         PERC5 = PERCO
         PERCO = PERC1/PERC2
         write(6,'(a,F10.3)') ' modified background % ',PERC1
         write(LP,'(a,F10.3)') ' modified background % ',PERC1
         GOTO 44
      END IF
C we have the final solution
      IREENT = 2                       ! Exit condition
      IF (NEGAT0.EQ.0.AND.NEGAT.GT.0) THEN       ! trace back one step
         PERC1 = PERC1 - 1.0
         PERC2 = 100.-PERC1
         PERC5 = PERCO
         PERCO = PERC1/PERC2
         write(6,'(a,F10.3)') ' modified background percent ',PERC1
         write(LP,'(a,F10.3)') ' modified background percent ',PERC1
         GOTO 44
      END IF
168   IPOS=168
c normalise the vectors

      IPOS = 161
      DO 161 I1=1,NF
         IADDR = (I1-1)*MAXF + 1

         CALL NORMA1(NF,VECT11(IADDR),VECT22(IADDR),NEGAT,IER)
         IF (NEGAT.EQ.NF) WRITE(LP,*)
     1   ' *** VECTOR ',I1,' IS DEPOLARISED *** '

161 CONTINUE

      CALL TRPOSE(NF,NF,VECT02,1,VECT01,XXX,IER)

C

163   WRITE(LP,'(/,A,/)') ' * BACKGROUND DISCRIMINANT VECTORS IN COLUMNS *'
      CALL PRINTO(LP,NF,NF,VECT01,XXX,IER)

      WRITE(LO,'(/,A,/)') ' EIGEN VALUES ARE:'
      WRITE(LO,'(7X,16F10.4)') (AMBDA(I1),I1=1,NF)
      WRITE(LO,*) ' '

      WRITE(LO,'(/A)') ' * BACKGROUND DISCRIMINANT VECTORS IN COLUMNS *'
      CALL PRINTO(LO,NF,NF,VECT01,XXX,IER)

      DO IJ=1,NF
C        covariance of the transformed axis AMBDA=SM1'*SC2*SM1
         DO IK=1,NF
            SM1(IK) = VECT01(IK,IJ)
            VECTM(IK) = 0.
            SM(IK) = 0.
         END DO
         DO IK=1,NF
         DO IL=1,NF
            VECTM(IK) = VECTM(IK)+SC2(IL,IK)*SM1(IL)
            SM(IK) = SM(IK)+SC1(IL,IK)*SM1(IL)
         END DO
         END DO
         AMBDA(IJ) = 0.
         AMBDAT(IJ) = 0.
         DO IL=1,NF
            AMBDA(IJ) = AMBDA(IJ)+VECTM(IL)*SM1(IL)
            AMBDAT(IJ) = AMBDAT(IJ)+SM(IL)*SM1(IL)
         END DO

         IF (AMBDA(IJ).LT.0.OR.AMBDAT(IJ).LT.0)
     1   WRITE(LP,'(A)',ERR=6666) ' * WARNING VARIANCE IS NEGATIVE *'

         WRITE(LP,'(A,I4,A,F10.2)',ERR=6666)
     1   ' NON-BACKGROUND VARIANCE ALONG BDC ',IJ,' = ',AMBDAT(IJ)

         WRITE(LP,'(A,I4,A,F10.2)',ERR=6666)
     1   ' BACKGROUND VARIANCE ALONG BDC ',IJ,' = ',AMBDA(IJ)

      END DO
      GOTO 9999
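The verification loop above accumulates the quadratic forms v'[SC2]v and v'[SC1]v to report the background and non-background variance along each background discriminant component (BDC). With the vectors as matrix columns this is one expression (the helper name is illustrative):

```python
import numpy as np

def variance_along(vectors, cov):
    # v_j' C v_j for each column v_j: the variance of cov along that axis
    return np.einsum('ij,ik,kj->j', vectors, cov, vectors)
```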

6666  WRITE(6,*) ' INPUT PARAMETER ERROR - EXITING '

      GOTO 9999
C
C GET FORTRAN ERROR NO. AND CLOSE FILE
C
7777  CALL ERRSNS(IER)
c 9999 RETURN

C

9999 CLOSE(NR)

      CLOSE(LO)

      CLOSE(LP)

EXIT

STOP

c

C

C OUTPUT FORMATS

C

1000  FORMAT(' ',80A1,////,5X,'FILE NAME : ',9A1,6X,'FEATURES : ',I5,
     1 4X,'NO. OF SAMPLES :',F7.0,/////,5X,'FEATURE NO. NAME',10X,
     2 'MEAN',15X,'NO. NAME',12X,'MEAN',/)

1001 FORMAT(13X,I2,2X,4A1,2X,E20.13,7X,I2,2X,4A1,2X,E20.13)

1002  FORMAT(//,21X,'MODIFIED',/,
     1 19X,'COVARIANCE',16X,'INVERSE',/,5X,'ROW COL')

1003  FORMAT(5X,I3,2X,I3,2X,E20.13,5X,E20.13)

1004  FORMAT(////,5X,'DETERMINANT : ',E20.13,4X,'PROBABILITY : ',
     1 E20.13,//)

1005  FORMAT(///,14X,'CORRELATION MATRIX',/,14X,' ')

1006  FORMAT(//,7X,16(7X,4A1),/)

1007  FORMAT(5X,4A1,16(3X,F8.5))

1101 FORMAT(/)

1102 FORMAT(//)

1200  FORMAT(' ',8A1,' *TRF* CREATING FILE ',9A1)

C INPUT FORMATS
      END