

Title:
METHOD OF ESTIMATING PROPERTY AND/OR COMPOSITION DATA OF A TEST SAMPLE
Document Type and Number:
WIPO Patent Application WO/1992/007326
Kind Code:
A1
Abstract:
A method of operating a spectrometer to determine property (13) and/or composition data of a sample comprises an on-line spectral measurement (1) of the sample using a computer controlled spectrometer, statistical analysis of the sample data based upon a statistical model (2, 3, 4) using sample calibration data, and automatic identification of a sample, if necessary, based upon statistical and expert system (rule-based) criteria. Sample identification based upon statistical criteria consists of performing a statistical or rule-based check against the model to identify measurement data which are indicative of chemical species not present in the samples already stored in the model. Samples identified either by statistical criteria or by expert system (rule-based) decisions are preferably isolated using a computer controlled automatic sampling system. The samples can be saved in removable sample containers for subsequent laboratory analysis. The results of the laboratory analysis, together with the on-line spectroscopic measurements, are preferably used to update the model being used for the analysis.

Inventors:
GETHNER JON STEVEN (US)
TODD TERRY RAY (US)
BROWN JAMES MILTON (US)
Application Number:
PCT/US1991/007583
Publication Date:
April 30, 1992
Filing Date:
October 09, 1991
Assignee:
EXXON RESEARCH ENGINEERING CO (US)
International Classes:
G01N21/35; G01N21/27; G01R23/16; G01R35/00; G01N33/22; (IPC1-7): G06F15/20
Foreign References:
US 5014217 A (1991-05-07)
US 4766551 A (1988-08-23)
US 4975581 A (1990-12-04)
US 4719582 A (1988-01-12)
Other References:
See also references of EP 0552291A4
Claims:
CLAIMS
1. A method of estimating property and/or composition data of a test sample, comprising performing a spectral measurement on the test sample and estimating property and/or composition data of the test sample from its measured spectrum on the basis of a predictive model correlating calibration sample spectra to known property and/or composition data of those calibration samples, wherein a determination is made, on the basis of a check of the measured spectrum against the predictive model, as to whether or not the measured spectrum is within the range of the calibration sample spectra in the model, and a response is generated if the result of the check is negative.
2. A method as claimed in claim 1, wherein said response is to isolate the test sample.
3. A method as claimed in claim 2, wherein, following isolation of the test sample, the sample is analyzed by a separate method to ascertain its property and/or composition data and the predictive model is updated with this data and with the spectral measurement data obtained by performing the spectral measurement on the test sample.
4. A method as claimed in claim 1, 2 or 3, wherein the spectral measurement of the test sample is performed in the infrared region.
5. A method as claimed in any preceding claim for which the predictive model is eigenvector— based, wherein a simulated spectrum for the test sample is determined by deriving the coefficients for the measured test spectrum from the dot products of the measured test spectrum with each of the model eigenspectra and by adding together the model eigenspectra scaled by the corresponding coefficients, and wherein a comparison is made between the simulated spectrum and the measured spectrum as an estimation of whether or not the measured spectrum is within the range of the calibration sample spectra in the model.
6. A method as claimed in claim 5, wherein the comparison between the simulated spectrum and the measured spectrum is made by determining a residual spectrum as the difference between the simulated spectrum and the measured spectrum, by calculating the Euclidean norm by summing the squares of the magnitudes of the residual spectrum at discrete frequencies, and by evaluating the magnitude of the Euclidean norm.
7. A method as claimed in claim 5, wherein the Mahalanobis distance is determined for the measured spectrum and the test sample is isolated if the magnitude of the determined Mahalanobis distance is indicative that the estimate of property and/or composition data of the test sample is an extrapolation from the range of data covered by the calibration samples.
8. A method as claimed in claim 7, further comprising calculating the Euclidean norm derived for each test sample/calibration sample pair and comparing the calculated Euclidean norms with a predetermined threshold value so as to isolate the test sample if said threshold value is exceeded.
9. A method as claimed in any preceding claim, wherein data in the calibration sample spectra due to the measurement process itself is removed therefrom prior to defining the predictive model by orthogonalizing the calibration sample spectra to one or more spectra modeling the measurement process data.
10. A method as claimed in any preceding claim, wherein said sample is a hydrocarbon/water mixture and the estimate is an estimate of the hydrocarbon content or water content of said mixture.
11. Apparatus for estimating property and/or composition data of a hydrocarbon test sample, comprising: — spectrometer means for performing a spectral measurement on a test sample; and — computer means (i) for estimating the property and/or composition data of the test sample from its measured spectrum on the basis of a predictive model correlating calibration sample spectra to known property and/or composition data for those calibration samples; (ii) for determining, on the basis of a check of the measured spectrum against the predictive model, whether or not the measured spectrum is within the range of the calibration sample spectra in the model; and (iii) for generating a response if the result of the check is negative.
12. Apparatus as claimed in claim 11, wherein the computer means is arranged to determine the predictive model according to all the calibration sample spectra data and all the known property and/or composition data of the calibration samples in its database, the computer means being further arranged to respond to further such data inputted to its database for storage therein, so that the predictive model thereby becomes updated according to said further such data.
Description:
METHOD OF ESTIMATING PROPERTY AND/OR COMPOSITION DATA OF A TEST SAMPLE

FIELD OF THE INVENTION

This invention relates to a method of estimating unknown property and/or composition data (also referred to herein as "parameters") of a sample. Examples of property and composition data are chemical composition measurements (such as the concentration of individual chemical components as, for example, benzene, toluene, xylene, or the concentrations of a class of compounds as, for example, paraffins), physical property measurements (such as density, index of refraction, hardness, viscosity, flash point, pour point, vapor pressure), performance property measurement (such as octane number, cetane number, combustibility), and perception (smell/odor, color).

BACKGROUND OF THE INVENTION

The infrared (12500-400 cm⁻¹) spectrum of a substance contains absorption features due to the molecular vibrations of the constituent molecules. The absorptions arise from both fundamentals (single quantum transitions occurring in the mid-infrared region from 4000-400 cm⁻¹) and combination bands and overtones (multiple quanta transitions occurring in the mid- and the near-infrared region from 12500-4000 cm⁻¹). The position (frequency or wavelength) of these absorptions contains information as to the types of molecular structures that are present in the material, and the intensity of the absorptions contains information about the amounts of the molecular types that are present. To use the information in the spectra for the purpose of identifying and quantifying either components or properties requires that a calibration be performed to establish the relationship between the absorbances and the component or property that is to be estimated. For complex mixtures, where considerable overlap between the absorptions of individual constituents occurs, such calibrations must be accomplished using multivariate data analysis methods.

In complex mixtures, each constituent generally gives rise to multiple absorption features corresponding to different vibrational
motions. The intensities of these absorptions will all vary together in a linear fashion as the concentration of the constituent varies. Such features are said to have intensities which are correlated in the frequency (or wavelength) domain. This correlation allows these absorptions to be mathematically distinguished from random spectral measurement noise which shows no such correlation. The linear algebra computations which separate the correlated absorbance signals from the spectral noise form the basis for techniques such as Principal Components Regression (PCR) and Partial Least Squares (PLS). As is well known in the art, PCR is essentially the analytical mathematical procedure of Principal Components Analysis (PCA) followed by regression analysis. Reference is directed to "An Introduction to Multivariate Calibration and Analysis", Analytical Chemistry, Vol. 59, No. 17, September 1, 1987, pages 1007 to 1017, for an introduction to multiple linear regression (MLR), PCR and PLS.

PCR and PLS have been used to estimate elemental and chemical compositions and, to a lesser extent, physical or thermodynamic properties of solids and liquids based on their mid- or near-infrared spectra. These methods involve: [1] the collection of mid- or near-infrared spectra of a set of representative samples; [2] mathematical treatment of the spectral data to extract the Principal Components or latent variables (e.g. the correlated absorbance signals described above); and [3] regression of these spectral variables against composition and/or property data to build a multivariate model. The analysis of new samples then involves the collection of their spectra, the decomposition of the spectra in terms of the spectral variables, and the application of the regression equation to calculate the composition/properties.
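
As an illustration of steps [1]-[3], the following is a minimal numpy sketch of a PCR calibration and prediction. The function names, the use of numpy's SVD, and the least-squares regression call are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fit_pcr(X, y, k):
    """Fit a Principal Components Regression model.

    X : (f, n) array of calibration spectra, one spectrum per column.
    y : (n,) array of measured property/composition values.
    k : number of principal components (spectral variables) to retain.
    """
    # Step [2]: extract the correlated spectral variables via an SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T        # rank-k truncation
    # Step [3]: regress the property against the sample scores.
    b, *_ = np.linalg.lstsq(Vk, y, rcond=None)
    return Uk, sk, b

def predict_pcr(Uk, sk, b, x_new):
    """Decompose a new spectrum into the same variables and apply the regression."""
    v = (x_new @ Uk) / sk      # scores of the new spectrum
    return v @ b
```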

Providing the components of the sample under test are included in the calibration samples used to build the predictive model, then, within the limits of the inherent accuracy of the predictions obtainable from the model, an accurate estimate of the property and/or composition data of the test sample will be obtained from its measured spectrum. However, if one or more of the components of the test sample are not included in the calibration samples on which the model is based, then prediction of the property and/or composition data will be inaccurate, because the predictive model produces a "best fit" of the calibration data
to the test sample where some of the calibration data is inappropriate for that test sample. The present invention addresses, and seeks to overcome, this problem.

SUMMARY OF THE INVENTION

The method of the present invention is for estimating property and/or composition data of a test sample. An application of particular practical importance is the analysis of hydrocarbon test samples or for ascertaining the hydrocarbon content of a hydrocarbon/water mixture, whether in phase separated or emulsion form. The inventive method involves a number of steps. Firstly, a spectral measurement is performed on the test sample. Then, property and/or composition data of the test sample can be estimated from its measured spectrum on the basis of a predictive model correlating calibration sample spectra to known property and/or composition data of those calibration samples. In the present method, a determination is made, on the basis of a check of the measured spectrum against the predictive model, as to whether or not the measured spectrum is within the range of calibration sample spectra in the model. If the result of the check is negative, a response is generated, accordingly.

In this way, if the result of the check is affirmative (i.e., the measured spectrum is indicative of chemical compositions encompassed by the samples included in the predictive model), then the person performing the analysis can be satisfied that the corresponding property and/or composition prediction is likely to be accurate (of course within the limits of the inherent accuracy of the predictive model). However, even then, further tests may be made to optimize the effectiveness of the checking of the sample under test, in order to increase the confidence level of the prediction made for each test sample which passes all the tests. This preferred way of performing the invention will be described in more detail hereinbelow. Correspondingly, if the result of the check is negative, then the analyst knows that any corresponding prediction is likely to provide unreliable results.

The response to a negative result of the check can take one of many forms. For example, it could be simply to provide a warning or alarm to the operator. The prediction of property and/or composition
data can be made even when a warning or alarm is generated, but the warning or alarm indicates to the analyst that the prediction is likely to be unreliable. Preferably, any test sample for which the result of the check is negative (i.e. the measured spectrum is not within the range of calibration sample spectra in the model) is physically isolated. It can then be taken to the laboratory for independent analysis of its property and/or composition (determined by the standard analytical technique used in generating the property and/or composition data for the initial model). Preferably, the model is adapted to allow the data separately determined in this way, together with the corresponding measured spectral data, to be entered into the model database, so that the model thereby becomes updated with this additional data, so as to enlarge the predictive capability of the model. In this way, the model "learns" from identification of a test sample for which it cannot perform a reliable prediction, so that next time a similar sample is tested containing chemical species of the earlier sample (and assuming any other chemical species it contains correspond to those of other calibration samples in the model), a reliable prediction of property and/or composition data for that similar sample will be made.

Various forms of predictive model are possible for determining the correlation between the spectra of the calibration samples and their known property and/or composition data. Thus, the predictive model can be based on principal components analysis or partial least squares analysis of the calibration sample spectra. In any eigenvector— based predictive model such as the foregoing, whether or not the measured spectrum of the test sample is within the range of the calibration sample spectra in the model can be determined in the following manner. A simulated spectrum for the test sample is determined by deriving coefficients for the measured test spectrum from the dot (scalar) products of the measured test spectrum with each of the model eigenspectra and by adding together the model eigenspectra scaled by the corresponding coefficient. Then, a comparison is made between the simulated spectrum calculated in this way and the measured spectrum as an estimate of whether or not the measured spectrum is within the range of the calibration sample spectra in the model. This comparison, according to a preferred way of performing the invention, can be made by determining a residual spectrum as the difference between the simulated spectrum and
the measured spectrum, by calculating the Euclidean norm by summing the squares of the magnitudes of the residual spectrum at discrete frequencies, and by evaluating the magnitude of the Euclidean norm. A large value, determined with reference to a preselected threshold distance, is indicative that the required data prediction of the test sample cannot accurately be made, while a Euclidean norm lower than the threshold indicates an accurate prediction can be made.
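
A minimal sketch of this eigenspectrum-reconstruction check, assuming the retained model eigenspectra are available as the columns of a numpy array; the function name and the way the threshold is handled are illustrative.

```python
import numpy as np

def residual_check(x_meas, U, threshold):
    """Flag a measured spectrum whose shape is outside the calibration set.

    x_meas    : (f,) measured test spectrum.
    U         : (f, k) matrix of retained model eigenspectra (columns).
    threshold : preselected limit on the residual Euclidean norm.
    """
    coeffs = U.T @ x_meas            # dot products with each eigenspectrum
    x_sim = U @ coeffs               # simulated spectrum from the scaled eigenspectra
    residual = x_meas - x_sim        # residual spectrum
    norm = np.sqrt(np.sum(residual ** 2))   # Euclidean norm over the discrete frequencies
    return norm, norm > threshold    # True -> generate the response (e.g. isolate the sample)
```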

The preferred way of performing the invention described above employs a statistical validity check against the predictive model. However, as an alternative to a statistical check, a rule— based check may be made. Examples of rule— based checks are pattern recognition techniques and/or comparison with spectra of computerized spectral libraries.

The calibration sample spectra may contain spectral data due to the measurement process itself e.g. due to baseline variations and/or ex— sample interferences (such as due to water vapor or carbon dioxide). This measurement process spectral data can be removed from the calibration sample spectra prior to defining the predictive model by orthogonalizing the calibration sample spectra to one or more spectra modeling the measurement process data. This will be described in further detail herein below under the heading "CONSTRAINED PRINCIPAL SPECTRA ANALYSIS (CPSA)".

Even though a test sample has passed the above— described validity check, further checking may be desirable. For example, despite passing the validity check, the property and/or compositions data prediction may be an extrapolation from the range of data covered by the calibration samples used for forming the predictive model. It is therefore preferred that the Mahalanobis distance is determined for the measured spectrum and the test sample "accepted" from this further test if the magnitude of the Mahalanobis distance is below an appropriate predetermined amount selected by the analyst. If the calculated Mahalanobis distance is above the appropriate predetermined amount, a similar response as described hereinabove for a negative check is initiated.
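
One common way to compute such a Mahalanobis distance is in the score space of the model, using the covariance of the calibration scores; the exact statistic used in the patent may differ, so the following is only an illustrative sketch.

```python
import numpy as np

def mahalanobis_distance(v_new, V_cal):
    """Mahalanobis distance of a test spectrum in the model's score space.

    v_new : (k,) scores of the test spectrum (e.g. v = (x @ U) / s).
    V_cal : (n, k) scores of the n calibration samples.
    """
    mean = V_cal.mean(axis=0)
    cov = np.cov(V_cal, rowvar=False)        # (k, k) covariance of the calibration scores
    d = v_new - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# If the distance exceeds the limit chosen by the analyst, the estimate is treated as
# an extrapolation and the same response as for a failed spectral check is initiated.
```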

Another statistical check is to ascertain whether the test sample is lying in a region in which the number of calibration samples in the predictive model is sparse. This check can be made by calculating the Euclidean norm derived for each test sample/calibration sample pair and comparing the calculated Euclidean norms with a threshold value which, if exceeded, indicates that the sample has failed to pass this additional statistical check, in which case a similar response as described hereinabove for a negative check is initiated.
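
A hedged sketch of this sparseness check; one reading is that the test spectrum is flagged when even its nearest calibration spectrum is farther away than the chosen threshold. The vectorized distance computation and the use of the minimum distance are assumptions for illustration.

```python
import numpy as np

def nearest_neighbour_check(x_meas, X_cal, threshold):
    """Flag a test spectrum lying in a sparsely calibrated region.

    x_meas    : (f,) measured test spectrum.
    X_cal     : (f, n) calibration spectra, one per column.
    threshold : limit on the distance to the closest calibration spectrum.
    """
    # Euclidean norm for every test-sample/calibration-sample pair.
    dists = np.sqrt(((X_cal - x_meas[:, None]) ** 2).sum(axis=0))
    nearest = dists.min()
    return nearest, nearest > threshold    # True -> respond as for a failed check
```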

The method disclosed herein finds particular application to on-line estimation of property and/or composition data of hydrocarbon test samples. Conveniently and suitably, all or most of the above— described steps are performed by a computer system of one or more computers with minimal or no operator interaction required.

It has been remarked above that the prediction can be based on principal components analysis, and also that spectral data in the calibration sample spectra due to the measurement process itself can be removed by an orthogonalization procedure. The combination of principal components analysis and the aforesaid orthogonalization procedure is referred to herein as Constrained Principal Spectra Analysis, abbreviated to "CPSA". The present invention can employ any numerical analysis technique (such as PCR, PLS or MLR) through which the predictive model can be obtained to provide an estimation of unknown property and/or composition data. It is preferred that the selected numerical analysis technique be CPSA. CPSA is described in detail in the present assignee's copending U.S. patent application 597,910 of James M. Brown, filed on the same day as the present case, namely October 15, 1990, and with case reference C-2527, the contents of which are expressly incorporated herein by reference. The relevant disclosure of this patent application of James M. Brown will be described below.

In another aspect, the invention provides apparatus for estimating property and/or composition data of a hydrocarbon test sample. The apparatus comprises spectrometer means for performing a spectral measurement on a test sample, and also computer means. The computer means serves three main purposes. The first is for estimating the property and/or composition data of the test sample from its
measured spectrum on the basis of a predictive model correlating calibration sample spectra to known property and/or composition data for those calibration samples. The second is for determining, on the basis of a check (such as described hereinabove) of the measured spectrum against the predictive model, whether or not the measured spectrum is within the range of the calibration sample spectra in the model. The third function of the computer means is for generating a response (the nature of which has been described hereinabove in detail with reference to the inventive method) if the result of the check is negative.

The computer means is generally arranged to determine the predictive model according to all the calibration sample spectra data and all the known property and/or composition data of the calibration samples in its database. The computer means may be further arranged to respond to further such data inputted to its database for storage therein, so that the predictive model thereby becomes updated according to the further such data. The inputted property and/or composition data is derived by a separate method, such as by laboratory analysis.

The Constrained Principal Spectra Analysis (CPSA), being a preferred implementation of the inventive method and apparatus, will now be described in detail.

CONSTRAINED PRINCIPAL SPECTRA ANALYSIS (CPSA)

In CPSA, the spectral data of a number (n) of calibration samples is corrected for the effects of data arising from the measurement process itself (rather than from the sample components). The spectral data for the n calibration samples is quantified at f discrete frequencies to produce a matrix X (of dimension f by n) of calibration data. The first step in the method involves producing a correction matrix UB of dimension f by m comprising m digitized correction spectra at the discrete frequencies f, the correction spectra simulating data arising from the measurement process itself. The other step involves orthogonalizing X with respect to UB to produce a corrected spectra matrix Xc whose spectra are orthogonal to all the spectra in UB. Due to this orthogonality, the spectra in matrix Xc are statistically independent of spectra arising from the measurement process itself. If (as would
normally be the case) the samples are calibration samples used to build a predictive model interrelating known property and composition data of the n samples and their measured spectra so that the model can be used to estimate unknown property and/or composition data of a sample under consideration from its measured spectrum, the estimated property and/or composition data will be unaffected by the measurement process itself. In particular, neither baseline variations nor spectra due for example to water vapor or carbon dioxide vapor in the atmosphere of the spectrometer will introduce any error into the estimates. It is also remarked that the spectra can be absorption spectra and the preferred embodiments described below all involve measuring absorption spectra. However, this is to be considered as exemplary and not limiting on the scope of the invention as defined by the appended claims, since the method disclosed herein can be applied to other types of spectra such as reflection spectra and scattering spectra (such as Raman scattering). Although the description given herein relates to NIR (near— infrared) and MIR (mid— infrared), nevertheless, it will be understood that the method finds applications in other spectral measurement wavelength ranges including, for example, ultraviolet, visible spectroscopy and Nuclear Magnetic Resonance (NMR) spectroscopy.

Generally, the data arising from the measurement process itself are due to two effects. The first is due to baseline variations in the spectra. The baseline variations arise from a number of causes such as light source temperature variations during the measurement, reflectance, scattering or absorbances from the cell windows, and changes in the temperature (and thus the sensitivity) of the instrument detector. These baseline variations generally exhibit spectral features which are broad (correlate over a wide frequency range). The second type of measurement process signal is due to ex— sample chemical compounds present during the measurement process, which give rise to sharper line features in the spectrum. For current applications, this type of correction generally includes absorptions due to water vapor and/or carbon dioxide in the atmosphere in the spectrometer. Absorptions due to hydroxyl groups in optical fibers could also be treated in this fashion. Corrections for contaminants present in the samples can also be made, but generally only in cases where the concentration of the contaminant is sufficiently low as to not significantly dilute the concentrations of the sample components,
and where no significant interactions between the contaminant and sample component occurs. It is important to recognize that these corrections are for signals that are not due to components in the sample. In this context, "sample" refers to that material upon which property and/or component concentration measurements are conducted for the purpose of providing data for the model development. By "contaminant," we refer to any material which is physically added to the sample after the property/component measurement but before or during the spectral measurement.

The present corrective method can be applied to correct only for the effect of baseline variations, in which case these variations can be modeled by a set of preferably orthogonal, frequency (or wavelength) dependent polynomials which form the matrix UB of dimension f by m, where m is the order of the polynomials and the columns of UB are preferably orthogonal polynomials, such as Legendre polynomials. Alternatively the corrective method can be applied to correct only for the effect of ex-sample chemical compounds (e.g. due to the presence in the atmosphere of carbon dioxide and/or water vapor). In this case, the spectra that form the columns of UB are preferably orthogonal vectors that are representative of the spectral interferences produced by such chemical compounds. It is preferred, however, that both baseline variations and ex-sample chemical compounds are modeled in the manner described to form two correction matrices Up of dimension f by p and Xs, respectively. These matrices are then combined into the single matrix UB, whose columns are the columns of Up and Xs arranged side-by-side.

In a preferred way of performing the invention, in addition to matrix X of spectral data being orthogonalized relative to the correction matrix UB, the spectra or columns of UB are all mutually orthogonal. The production of the matrix UB having mutually orthogonal spectra or columns can be achieved by firstly modeling the baseline variations by a set of orthogonal frequency (or wavelength) dependent polynomials which are computer generated simulations of the baseline variations and form the matrix Up, and then at least one, and usually a plurality, of spectra of ex-sample chemical compounds (e.g. carbon dioxide and water vapor), which are actual spectra collected on the instrument, are supplied to form the matrix Xs. Next the columns of Xs are orthogonalized with respect to Up to form a new matrix Xs'. This removes baseline effects from ex-sample chemical compound corrections. Then, the columns of Xs' are orthogonalized with respect to one another to form a new matrix Us, and lastly Up and Us are combined to form the correction matrix UB, whose columns are the columns of Up and Us arranged side-by-side. It would be possible to change the order of the steps such that firstly the columns of Xs are orthogonalized to form a new matrix of vectors and then the (mutually orthogonal) polynomials forming the matrix Up are orthogonalized relative to these vectors and then combined with them to form the correction matrix UB. However, this is less preferred because it defeats the advantage of generating the polynomials as being orthogonal in the first place, and it will also mix the baseline variations in with the spectral variations due to ex-sample chemical compounds and make them less useful as diagnostics of instrument performance.
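
The construction of UB in the preferred order described above might be sketched as follows; the use of numpy's Legendre routines and a QR factorization to obtain orthonormal polynomials is an implementation shortcut for illustration, not the patent's own recursion.

```python
import numpy as np
from numpy.polynomial import legendre

def build_correction_matrix(f_axis, poly_order, X_s, n_keep):
    """Assemble the correction matrix UB = [Up | Us] in the preferred order.

    f_axis     : (f,) frequency (or wavelength) axis of the spectra.
    poly_order : number of Legendre polynomial terms p used to model baselines.
    X_s        : (f, s) measured ex-sample correction spectra (e.g. water vapor, CO2).
    n_keep     : number of correction eigenspectra to retain after the SVD.
    """
    # Up: orthonormal Legendre polynomials over the spectral range.
    t = 2 * (f_axis - f_axis.min()) / (f_axis.max() - f_axis.min()) - 1   # map to [-1, 1]
    Up = legendre.legvander(t, poly_order - 1)
    Up, _ = np.linalg.qr(Up)                       # orthonormalize the columns

    # Orthogonalize the ex-sample spectra to the polynomials (Gram-Schmidt step).
    Xs_prime = X_s - Up @ (Up.T @ X_s)

    # Orthogonalize the corrected ex-sample spectra among themselves via an SVD.
    Us = np.linalg.svd(Xs_prime, full_matrices=False)[0][:, :n_keep]

    return np.hstack([Up, Us])                     # UB, columns mutually orthogonal
```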

In a real situation, the sample spectral data in the matrix X will include not only spectral data due to the measurement process itself but also data due to noise. Therefore, once the matrix X (dimension f by n) has been orthogonalized with respect to the correction matrix UB (dimension f by m), the resulting corrected spectral matrix Xc will still contain noise data. This can be removed in the following way. Firstly, a singular value decomposition is performed on matrix Xc in the form Xc = UΣVᵗ, where U is a matrix of dimension f by n and contains the principal component spectra as columns, Σ is a diagonal matrix of dimension n by n and contains the singular values, and V is a matrix of dimension n by n and contains the principal component scores, Vᵗ being the transpose of V. In general, the principal components that correspond to noise in the spectral measurements in the original n samples will have singular values which are small in magnitude relative to those due to the wanted spectral data, and can therefore be distinguished from the principal components due to real sample components. Accordingly, the next step in the method involves removing from U, Σ and V the k + 1 through n principal components that correspond to the noise, to form the new matrices U', Σ' and V' of dimensions f by k, k by k and n by k, respectively. When these matrices are multiplied together, the resulting matrix, corresponding with the earlier corrected spectra matrix Xc, is free of spectral data due to noise.

For the selection of the number (k) of principal components to keep in the model, a variety of statistical tests suggested in the literature could be used, but the following steps have been found to give the best results. Generally, the spectral noise level is known from experience with the instrument. From a visual inspection of the eigenspectra (the columns of matrix U resulting from the singular value decomposition), a trained spectroscopist can generally recognize when the signal levels in the eigenspectra are comparable with the noise level. By visual inspection of the eigenspectra, an approximate number of terms, k, to retain can be selected. Models can then be built with, for example, k - 2, k - 1, k, k + 1, k + 2 terms in them and the standard errors and PRESS (Predictive Residual Error Sum of Squares) values are inspected. The smallest number of terms needed to obtain the desired precision in the model or the number of terms that give the minimum PRESS value is then selected. This choice is made by the spectroscopist, and is not automated. A Predicted Residual Error Sum of Squares is calculated by applying a predictive model for the estimation of property and/or component values for a test set of samples which were not used in the calibration but for which the true value of the property or component concentration is known. The difference between the estimated and true values is squared, and summed for all the samples in the test set (the square root of the quotient of the sum of squares and the number of test samples is sometimes calculated to express the PRESS value on a per sample basis). A PRESS value can be calculated using a cross validation procedure in which one or more of the calibration samples are left out of the data matrix during the calibration, and then analyzed with the resultant model, and the procedure is repeated until each sample has been left out once.
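
A leave-one-out PRESS calculation of the kind described might look like the following sketch, reusing the fit_pcr/predict_pcr helpers from the earlier example; looping over candidate term counts (k - 2 through k + 2) and comparing the resulting PRESS values is then straightforward.

```python
import numpy as np

def press_loo(X, y, k):
    """Leave-one-out PRESS for a k-term PCR model (X is f x n, y has n entries)."""
    n = X.shape[1]
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        Uk, sk, b = fit_pcr(X[:, keep], y[keep], k)     # fit on all samples but one
        y_hat = predict_pcr(Uk, sk, b, X[:, i])          # predict the left-out sample
        press += (y_hat - y[i]) ** 2
    return press
```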

The polynomials that are used to model background variations are merely one type of correction spectrum. The difference between the polynomials and the other "correction spectra" modeling ex— sample chemical compounds is twofold. First, the polynomials may conveniently be computer— generated simulations of the background (although this is not essential and they could instead be simple mathematical expressions or even actual spectra of background variations) and can be generated by the computer to be orthogonal. The polynomials may be Legendre polynomials which are used in the actual implementation of the correction
method since they save computation time. There is a well— known recursive algorithm to generate the Legendre polynomials (see, for example, G. Arfken, Mathematical Methods for Physicists, Academic Press, New York, N.Y., 1971, Chapter 12). Generally, each row of the U p matrix corresponds to a given frequency (or wavelength) in the spectrum. The columns of the U p matrix will be related to this frequency. The elements of the first column would be a constant, the elements of the second column would depend linearly on the frequency, the elements of the third column would depend on the square of the frequency, etc. The exact relationship is somewhat more complicated than that if the columns are to be orthogonal. The Legendre polynomials are generated to be orthonormal, so that it is not necessary to effect a singular value decomposition or a Gram— Schmidt orthogonalization to make them orthogonal. Alternatively, any set of suitable polynomial terms could be used, which are then orthogonalized using singular value decomposition or a Gram— Schmidt orthogonalization. Alternatively, actual spectra collected on the instrument to simulate background variation can be used and orthogonalized via one of these procedures. The other "correction spectra" are usually actual spectra collected on the instrument to simulate interferences due to ex— sample chemical compounds, e.g. the spectrum of water vapor, the spectrum of carbon dioxide vapor, or the spectrum of the optical fiber of the instrument. Computer generated spectra could be used here if the spectra of water vapor, carbon dioxide, etc. can be simulated. The other difference for the implementation of the correction method is that these "correction spectra" are not orthogonal initially, and therefore it is preferred that they be orthogonalized as part of the procedure. The polynomials and the ex— sample chemical compound "correction spectra" could be combined into one matrix, and orthogonalized in one step to produce the correction vectors. In practice, however, this is not the best procedure, since the results would be sensitive to the scaling of the polynomials relative to the ex— sample chemical compound "correction spectra". If the ex— sample chemical compound "correction spectra" are collected spectra, they will include some noise. If the scaling of the polynomials is too small, the contribution of the noise in these "correction spectra" to the total variance in the correction matrix U B would be larger than that of the polynomials, and noise vectors would end up being included in the ex— sample chemical compound correction vectors. To avoid this,
preferably the polynomials are generated first, the ex— sample chemical compound "correction spectra" are orthogonalized to the polynomials, and then the correction vectors are generated by performing a singular value decomposition (described below) on the orthogonalized "correction spectra".

As indicated above, a preferred way of performing the correction for measurement process spectral data is firstly to generate the orthogonal set of polynomials which model background variations, then to orthogonalize any "correction spectra" due to ex-sample chemical compounds (e.g. carbon dioxide and/or water vapor) to this set to produce a set of "correction vectors", and finally to orthogonalize the resultant "correction vectors" among themselves using singular value decomposition. If multiple examples of "correction spectra", e.g. several spectra of water vapor, are used, the final number of "correction vectors" will be less than the number of initial "correction spectra". The ones eliminated correspond with the measurement noise. Essentially, principal components analysis (PCA) is being performed on the orthogonalized "correction spectra" to separate the real measurement process data being modeled from the random measurement noise.

It is remarked that the columns of the correction matrix U B do not have to be mutually orthogonal for the correction method to work, as long as the columns of the data matrix X are orthogonalized to those of the correction matrix U B . However, the steps for generating the U B matrix to have orthogonal columns is performed to simplify the computations required in the orthogonalization of the spectral data X of the samples relative to the correction matrix U B , and to provide a set of statistically independent correction terms that can be used to monitor the measurement process. By initially orthogonalizing the correction spectra X s due to ex— sample chemical compounds to U p which models background variations, any background contribution to the resulting correction spectra is removed prior to the orthogonalization of these correction spectra among themselves. This procedure effectively achieves a separation of the effects of background variations from those of ex— sample chemical compound variations, allowing these corrections to be used as quality control features in monitoring the performance of an instrument during the measurement of spectra of unknown materials, as
will be discussed hereinbelow.

When applying the technique for correcting for the effects of measurement process spectral data in the development of a method of estimating unknown property and/or composition data of a sample under consideration, the following steps are performed. Firstly, respective spectra of n calibration samples are collected, the spectra being quantified at f discrete frequencies (or wavelengths) and forming a matrix X of dimension f by n. Then, in the manner described above, a correction matrix UB of dimension f by m is produced. This matrix comprises m digitized correction spectra at the discrete frequencies f, the correction spectra simulating data arising from the measurement process itself. The next step is to orthogonalize X with respect to UB to produce a corrected spectra matrix Xc whose spectra are each orthogonal to all the spectra in UB. The method further requires that c property and/or composition data are collected for each of the n calibration samples to form a matrix Y of dimension n by c (c > 1). Then, a predictive model is determined correlating the elements of matrix Y to matrix Xc. Different predictive models can be used, as will be explained below. The property and/or composition estimating method further requires measuring the spectrum of the sample under consideration at the f discrete frequencies to form a matrix of dimension f by 1. The unknown property and/or composition data of the sample is then estimated from its measured spectrum using the predictive model. Generally, each property and/or component is treated separately for building models and produces a separate f by 1 prediction vector. The prediction is just the dot product of the unknown spectrum and the prediction vector. By combining all the prediction vectors into a matrix P of dimension f by c, the prediction involves multiplying the spectrum matrix (a vector of dimension f can be considered as a 1 by f matrix) by the prediction matrix to produce a 1 by c vector of predictions for the c properties and components.

As mentioned in the preceding paragraph, various forms of predictive model are possible. The predictive model can be determined from a mathematical solution to the equation Y = XcᵗP + E, where Xcᵗ is the transpose of the corrected spectra matrix Xc, P is the predictive matrix of dimension f by c, and E is a matrix of residual errors from the model and is of dimension n by c. The validity of the equation Y = XcᵗP + E follows from the inverse statement of Beer's law, which itself can be expressed in the form that the radiation absorbance of a sample is proportional to the optical pathlength through the sample and the concentration of the radiation-absorbing species in that sample. Then, for determining the vector yu of dimension 1 by c containing the estimates of the c property and/or composition data for the sample under consideration, the spectrum xu of the sample under consideration, xu being of dimension f by 1, is measured and yu is determined from the relationship yu = xuᵗP, xuᵗ being the transpose of matrix xu.

Although, in a preferred implementation of this invention, the equation Y = XcᵗP + E is solved to determine the predictive model, the invention could also be used with models whose equation is represented (by essentially the statement of Beer's law) as Xc = AYᵗ + E, where A is an f by c matrix. In this case, the matrix A would first be estimated as A = XcY(YᵗY)⁻¹. The estimation of the vector yu of dimension 1 by c containing the c property and/or composition data for the sample under consideration from the spectrum xu of the sample under consideration would then involve using the relationship yu = xuᵗA(AᵗA)⁻¹. This calculation, which is a constrained form of the K-matrix method, is more restricted in application, since the required inversion of YᵗY requires that Y contain concentration values for all sample components, and not contain property data.
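
A short sketch of this constrained K-matrix alternative, directly transcribing the two relationships above; the matrix shapes are assumed as stated in the text and the function names are illustrative.

```python
import numpy as np

def fit_k_matrix(Xc, Y):
    """Constrained K-matrix calibration: Xc = A Yᵗ + E, so A = Xc Y (YᵗY)⁻¹.

    Xc : (f, n) corrected calibration spectra.
    Y  : (n, c) concentrations of ALL sample components (no property data).
    """
    return Xc @ Y @ np.linalg.inv(Y.T @ Y)

def predict_k_matrix(A, x_u):
    """Estimate the c concentrations for an unknown spectrum x_u of length f."""
    return x_u @ A @ np.linalg.inv(A.T @ A)
```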

The mathematical solution to the equation Y = XcᵗP + E (or Xc = AYᵗ + E) can be obtained by any one of a number of mathematical techniques which are known per se, such as linear least squares regression, sometimes otherwise known as multiple linear regression (MLR), principal components analysis/regression (PCA/PCR) and partial least squares (PLS). As mentioned above, an introduction to these mathematical techniques is given in "An Introduction to Multivariate Calibration and Analysis", Analytical Chemistry, Vol. 59, No. 17, September 1, 1987, pages 1007 to 1017.

The purpose of generating correction matrix U B and in orthogonalizing the spectral data matrix X to U B is twofold: Firstly, predictive models based on the resultant corrected data matrix X c are insensitive to the effects of background variations and ex— sample
chemical components modeled in U B , as explained above. Secondly, the dot (scalar) products generated between the columns of U B and those of X contain information about the magnitude of the background and ex— sample chemical component interferences that are present in the calibration spectra, and as such, provide a measure of the range of values for the magnitude of these interferences that were present during the collection of the calibration spectral data. During the analysis of a spectrum of a material having unknown properties and/or composition, similar dot products can be formed between the unknown spectrum, x u , and the columns of U B , and these values can be compared with those obtained during the calibration as a means of checking that the measurement process has not changed significantly between the time the calibration is accomplished and the time the predictive model is applied for the estimation of properties and components for the sample under test. These dot products thus provide a means of performing a quality control assessment on the measurement process.

The dot products of the columns of U B with those of the spectral data matrix X contain information about the degree to which the measurement process data contribute to the individual calibration spectra. This information is generally mixed with information about the calibration sample components. For example, the dot product of a constant vector (a first order polynomial) will contain information about the total spectral integral, which is the sum of the integral of the sample absorptions, and the integral of the background. The information about calibration sample components is, however, also contained in the eigenspectra produced by the singular value decomposition of X c . It is therefore possible to remove that portion of the information which is correlated to the sample components from the dot products so as to recover values that are uncorrelated to the sample components, i.e. values that represent the true magnitude of the contributions of the measurement process signals to the calibration spectra. This is accomplished by the following steps:

(1) A matrix VB of dimension n by m is formed as the product XᵗUB, the individual elements of VB being the dot products of the columns of X with those of UB;

(2) The corrected data matrix Xc is formed, and its singular value decomposition is computed as UΣVᵗ;

(3) A regression of the form VB = VZ + R is calculated to establish the correlation between the dot products and the scores of the principal components: VZ represents the portion of the dot products which is correlated to the sample components and the regression residuals R represent the portion of the dot products that are uncorrelated to the sample components, which are in fact the measurement process signals for the calibration samples;

(4) In the analysis of a sample under test, the dot products of the unknown spectrum with each of the correction spectra (columns of UB) are calculated to form a vector vB, the corrected spectrum xc is calculated, the scores for the corrected spectrum are calculated as v = xcᵗUΣ⁻¹, and the uncorrelated measurement process signal values are calculated as r = vB - vZ. The magnitude of these values is then compared to the range of values in R as a means of comparing the measurement process during the analysis of the unknown to that during the calibration.
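
The four steps above might be sketched as follows; the variable names and the use of least squares for the regression in step (3) are illustrative assumptions.

```python
import numpy as np

def calibration_qc(X, Xc, UB, k):
    """Steps (1)-(3): isolate the measurement-process part of the calibration dot products."""
    VB = X.T @ UB                                       # (1) n x m dot products
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # (2) SVD of the corrected spectra
    U, s, V = U[:, :k], s[:k], Vt[:k].T
    Z, *_ = np.linalg.lstsq(V, VB, rcond=None)          # (3) regression VB = V Z + R
    R = VB - V @ Z                                      # measurement-process signals
    return U, s, Z, R

def unknown_qc(x_u, xc_u, UB, U, s, Z):
    """Step (4): the same quantities for an unknown spectrum and its corrected form."""
    vB = x_u @ UB                                       # dot products with the correction spectra
    v = (xc_u @ U) / s                                  # scores of the corrected spectrum
    return vB - v @ Z                                   # compare against the range of R
```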

It will be appreciated that the performance of the above disclosed correction method and method of estimating the unknown property and/or composition data of the sample under consideration involves extensive mathematical computations to be performed. In practice, such computations are made by computer means comprising a computer or computers, which is connected to the instrument. In a measurement mode, the computer means receives the measured output spectrum of the calibration sample, ex— sample chemical compound or test sample. In a correction mode in conjunction with the operator, the computer means stores the calibration spectra to form the matrix X, calculates the correction matrix U B , and orthogonalizes X with respect to the correction matrix U B . In addition, the computer means operates in a storing mode to store the c known property and/or composition data for the n calibration samples to form the matrix Y of dimension n by c (c > 1). In a model building mode, the computer means determines, in conjunction with the operator, a predictive model correlating the elements of matrix Y to those of matrix Xc. Lastly, the computer means is arranged to operate in a prediction mode in which it estimates the unknown property and/or compositional data of the sample under consideration from its
measured spectrum using the determined predictive model correlating the elements of matrix Y to those of matrix Xc.

In more detail, the steps involved according to a preferred way of making a prediction of property and/or composition data of a sample under consideration can be set out as follows. Firstly, a selection of samples for the calibration is made by the operator or a laboratory technician. Then, in either order, the spectra and properties/composition of these samples need to be measured, collected and stored in the computer means by the operator and/or laboratory technician, together with spectra of ex— sample chemical compounds to be used as corrections. In addition, the operator selects the computer— generated polynomial corrections used to model baseline variations. The computer means generates the correction matrix U B and then orthogonalizes the calibration sample spectra (matrix X) to produce the corrected spectral matrix X c and, if PCR is used, performs the singular value decomposition on matrix X c . The operator has to select (in PCR) how many of the principal components to retain as correlated data and how many to discard as representative of (uncorrelated) noise. Alternatively, if the PLS technique is employed, the operator has to select the number of latent variables to use. If MLR is used to determine the correlation between the corrected spectral matrix X c and the measured property and/or composition data Y, then a selection of frequencies needs to be made such that the number of frequencies at which the measured spectra are quantized is less than the number of calibration samples. Whichever technique is used to determine the correlation (i.e. the predictive model) interrelating X c and Y, having completed the calibration, the laboratory technician measures the spectrum of the sample under consideration which is used by the computer means to compute predicted property and/or composition data based on the predictive model.

Mathematical Basis for CPSA

The object of Principal Components Analysis (PCA) is to isolate the true number of independent variables in the spectral data so as to allow for a regression of these variables against the dependent property/composition variables. The spectral data matrix, X, contains the spectra of the n samples to be used in the calibration as columns of length f, where f is the number of data points (frequencies or wavelengths) per spectrum. The object of PCA is to decompose the f by n X matrix into the product of several matrices. This decomposition can be accomplished via a Singular Value Decomposition:

X = UΣVᵗ (1)

where U (the left eigenvector matrix) is of dimension f by n, Σ (the diagonal matrix containing the singular values σ) is of dimension n by n, and Vᵗ is the transpose of V (the right eigenvector matrix) which is of dimension n by n. Since some versions of PCA perform the Singular Value Decomposition on the transpose of the data matrix, Xᵗ, and decompose it as VΣUᵗ, the use of the terms left and right eigenvectors is somewhat arbitrary. To avoid confusion, U will be referred to as the eigenspectrum matrix since the individual column vectors of U (the eigenspectra) are of the same length, f, as the original calibration spectra. The term eigenvectors will only be used to refer to the V matrix. The matrices in the singular value decomposition have the following properties:

UᵗU = In (2)

VVᵗ = VᵗV = In (3)

XᵗX = VΛVᵗ and XXᵗ = UΛUᵗ (4)

where In is the n by n identity matrix, and Λ is the matrix containing the eigenvalues, λ (the squares of the singular values), on the diagonal and zeros off the diagonal. Note that the product UUᵗ does not yield an identity matrix for n less than f. Equations 2 and 3 imply that both the eigenspectra and eigenvectors are orthonormal. In some versions of PCA, the U and Σ matrices are combined into a single matrix. In this case, the eigenspectra are orthogonal but are normalized to the singular values.

The object of the variable reduction is to provide a set of independent variables (the Principal Components) against which the dependent variables (the properties or compositions) can be regressed. The basic regression equation for direct calibration is

Y = XᵗP (5)

where Y is the n by c matrix containing the property/composition data for the n samples and c properties/components, and P is the f by c matrix of regression coefficients which relate the property/composition data to the spectral data. We will refer to the c columns of P as prediction vectors, since during the analysis of a spectrum x (dimension f by 1), the prediction of the properties/components (y of dimension 1 by c) for the sample is obtained by

y = xᵗP (6)

Note that for a single property/component, the prediction is obtained as the dot product of the spectrum of the unknown and the prediction vector. The solution to equation 5 is

[Xᵗ]⁻¹Y = [Xᵗ]⁻¹XᵗP = P (7)

where [Xᵗ]⁻¹ is the inverse of the Xᵗ matrix. The matrix Xᵗ is of course non-square and rank deficient (f > n), and cannot be directly inverted. Using the singular value decomposition, however, the inverse can be approximated as

[Xᵗ]⁻¹ = UΣ⁻¹Vᵗ (8)

where Σ⁻¹ is the inverse of the square singular value matrix and contains 1/σ on the diagonal. Using equations 7 and 8, the prediction vector matrix becomes

P = UΣ⁻¹VᵗY (9)

As was noted previously, the objective of the PCA is to separate systematic (frequency correlated) signal from random noise. The eigenspectra corresponding to the larger singular values represent the systematic signal, while those corresponding to the smaller singular values represent the noise. In general, in developing a stable model, these noise components will be eliminated from the analysis before the prediction vectors are calculated. If the first k < n eigenspectra are retained, the matrices in equation 1 become U' (dimension f by k), Σ' (dimension k by k) and V' (dimension n by k).

X = U'Σ'V'ᵗ + E (10)

where E is an f by n error matrix. Ideally, if all the variations in the data due to sample components are accounted for in the first k eigenspectra, E contains only random noise. It should be noted that the product V'V'ᵗ no longer yields an identity matrix. To simplify notation the ' will be dropped, and U, Σ and V will henceforth refer to the rank reduced matrices. The choice of k, the number of eigenspectra to be used in the calibration, is based on statistical tests and some prior knowledge of the spectral noise level.

Although the prediction of a property/component requires only a single prediction vector, the calculation of uncertainties on the prediction requires the full rank reduced V matrix. In practice, a two step, indirect calibration method is employed in which the singular value decomposition of the X matrix is calculated (equation 1), and then the properties/compositions are separately regressed against the eigenvectors

Y = VB + E (11)

B = VᵗY (12)

During the analysis, the eigenvector for the unknown spectrum is obtained

v = xᵗUΣ⁻¹ (13)

and the predictions are made as

y = vB (14)

The indirect method is mathematically equivalent to the direct method of equation 10, but readily provides the values needed for estimating uncertainties on the prediction.
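
A small numerical check of this equivalence, using random matrices purely for illustration (the dimensions are arbitrary and the random data stands in for real calibration spectra):

```python
import numpy as np

# Equivalence of the direct and indirect calibrations (rank-reduced U, s, V as above):
# direct:   P = U diag(1/s) Vᵗ Y   and   y = xᵗP          (equations 6 and 9)
# indirect: v = xᵗ U diag(1/s)     and   y = v B,  B = VᵗY (equations 12 to 14)
rng = np.random.default_rng(0)
f, n, c, k = 200, 30, 2, 5
X = rng.normal(size=(f, n)); Y = rng.normal(size=(n, c)); x = rng.normal(size=f)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U, s, V = U[:, :k], s[:k], Vt[:k].T

B = V.T @ Y
P = U @ np.diag(1.0 / s) @ V.T @ Y
y_direct = x @ P
y_indirect = ((x @ U) / s) @ B
assert np.allclose(y_direct, y_indirect)
```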

Equation 6 shows how the prediction vector, P, is used in the analysis of an unknown spectrum. We assume that the unknown spectrum can be separated as the sum of two terms, the spectrum due to the components in the unknown, xc, and the measurement process related signals for which we want to develop constraints, xs. The prediction then becomes

y = xᵗP = xcᵗP + xsᵗP (15)

If the prediction is to be insensitive to the measurement process signals, the second term in equation 15 must be zero. This implies that the prediction vector must be orthogonal to the measurement process signal spectra. From equation 10, the prediction vector is a linear combination of the eigenspectra, which in turn are themselves linear combinations of the original calibration spectra (U = XVΣ⁻¹). If the original calibration spectra are all orthogonalized to a specific measurement process signal, the resulting prediction vector will also be orthogonal, and the prediction will be insensitive to the measurement process signal. This orthogonalization procedure serves as the basis for the Constrained Principal Spectra Analysis algorithm.

In the Constrained Principal Spectra Analysis (CPSA) program, two types of measurement process signals are considered. The program internally generates a set of orthonormal, frequency dependent polynomials, Up. Up is a matrix of dimension f by p where p is the maximum order (degree minus one) of the polynomials, and it contains columns which are orthonormal Legendre polynomials defined over the spectral range used in the analysis. The polynomials are intended to provide constraints for spectral baseline effects. In addition, the user may supply spectra representative of other measurement process signals (e.g. water vapor spectra). These correction spectra (a matrix Xs of dimension f by s where s is the number of correction spectra), which may include multiple examples of a specific type of measurement process signal, are first orthogonalized relative to the polynomials via a Gram-Schmidt orthogonalization procedure

Xs' = Xs - Up(UpᵗXs) (16)

A Singular Value Decomposition of the resultant correction spectra is then performed,

Xs' = UsΣsVsᵗ (17)

to generate a set of orthonormal correction eigenspectra, Us. The user selects the first s' terms corresponding to the number of measurement related signals being modeled, and generates the full set of correction terms, UB, which includes both the polynomials and the selected correction eigenspectra. These correction terms are then removed from the calibration data, again using a Gram-Schmidt orthogonalization procedure

Xc = X - UB(UBᵗX) (18)

The Principal Components Analysis of the corrected spectra, Xc, then proceeds via the Singular Value Decomposition

Xc = UcΣcVcᵗ

and the predictive model is developed using the regression

Y = VcB (19)

The resultant prediction vector, P = UcΣc⁻¹B, is orthogonal to the polynomial and correction eigenspectra, UB. The resulting predictive model is thus insensitive to the modeled measurement process signals. In the analysis of an unknown, the contributions of the measurement process signals to the spectrum can be calculated as

vB = ΣB⁻¹UBᵗxu

and these values can be compared against the values for the calibration, VB, to provide a diagnostic as to whether the measurement process has changed relative to the calibration.

The results of the procedure described above are mathematically equivalent to including the polynomial and correction terms as spectra in the data matrix, and using a constrained least-squares regression to calculate the B matrix in equation 12. The constrained least-squares procedure is more sensitive to the scaling of the correction spectra, since they must account for sufficient variance in the data matrix to be sorted into the k eigenspectra that are retained in the regression step. By orthogonalizing the calibration spectra to the correction spectra before calculating the singular value decomposition, we eliminate the scaling sensitivity.
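
The orthogonalization sequence of equations 16 through 19 can likewise be sketched numerically. The sketch below is illustrative only and assumes the correction spectra are supplied as the columns of Xs; the use of NumPy's Legendre routines and all function names are assumptions of the example, not requirements of the method.

    import numpy as np
    from numpy.polynomial import legendre

    def legendre_basis(f, p):
        # Up: f by p matrix whose columns are Legendre polynomials of degree 0..p-1,
        # re-orthonormalized on the discrete frequency grid
        x = np.linspace(-1.0, 1.0, f)
        Up = legendre.legvander(x, p - 1)
        Up, _ = np.linalg.qr(Up)
        return Up

    def cpsa_constrain(X, Xs, p, s_prime, k):
        # X: f by n calibration spectra; Xs: f by s correction spectra
        f, n = X.shape
        Up = legendre_basis(f, p)
        Xs1 = Xs - Up @ (Up.T @ Xs)                 # equation 16: orthogonalize corrections to the polynomials
        Us, _, _ = np.linalg.svd(Xs1, full_matrices=False)   # equation 17: correction eigenspectra
        UB = np.hstack([Up, Us[:, :s_prime]])       # polynomials plus selected correction eigenspectra
        Xc = X - UB @ (UB.T @ X)                    # equation 18: remove correction terms from the calibration data
        Uc, sc, Vct = np.linalg.svd(Xc, full_matrices=False)
        Uc, Sc, Vc = Uc[:, :k], np.diag(sc[:k]), Vct[:k, :].T
        return UB, Uc, Sc, Vc

    def cpsa_regression(Vc, Y):
        # equation 19: Y = Vc B, with orthonormal columns of Vc
        return Vc.T @ Y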

Development of Empirical Model in CPSA

The Constrained Principal Spectra Analysis method allows measurement process signals which are present in the spectra of the calibration samples, or might be present in the spectra of samples which are later analyzed, to be modeled and removed from the data (via a Gram-Schmidt orthogonalization procedure) prior to the extraction of the spectral variables, which is performed via a Singular Value Decomposition (16). The spectral variables thus obtained are first regressed against the pathlengths for the calibration spectra to develop a model for independent estimation of pathlength. The spectral variables are rescaled to a common pathlength based on the results of the regression and then further regressed against the composition/property data to build the empirical models for the estimation of these parameters. During the analysis of new samples, the spectra are collected and decomposed into the constrained spectral variables, the pathlength is calculated and the data are scaled to the appropriate pathlength, and then the regression models are applied to calculate the composition/property data for the new materials. The orthogonalization procedure ensures that the resultant measurements are constrained so as to be insensitive (orthogonal) to the modeled measurement process signals. The internal pathlength calculation and renormalization automatically corrects for pathlength or flow variations, thus minimizing errors due to data scaling.

The development of the empirical model consists of the following steps:

(1.1) The properties and/or component concentrations for which empirical models are to be developed are independently determined for a set of representative samples, e.g. the calibration set. The independent measurements are made by standard analytical tests including, but not limited to: elemental compositional analysis (combustion analysis, X-ray fluorescence, broad line NMR); component analysis (gas chromatography, mass spectroscopy); other spectral measurements (IR, UV/visible, NMR, color); physical property measurements (API or specific gravity, refractive index, viscosity or viscosity index); and performance property measurements (octane number, cetane number, combustibility). For chemicals applications where the number of sample components is limited, the compositional data may reflect weights or volumes used in preparing calibration blends.

(1.2) Absorption spectra of the calibration samples are collected over a region or regions of the infrared, the data being digitized at discrete frequencies (or wavelengths) whose separation is less than the width of the absorption features exhibited by the samples.

(2.0) The Constrained Principal Spectra Analysis (CPSA) algorithm is applied to generate the empirical model. The algorithm consists of the following 12 steps:

(2.1) The infrared spectral data for the calibration spectra is loaded into the columns of a matrix X, which is of dimension f by n, where f is the number of frequencies or wavelengths in the spectra and n is the number of calibration samples.

(2.2) Frequency dependent polynomials, Up (a matrix whose columns are orthonormal Legendre polynomials, having dimension f by p where p is the maximum order of the polynomials), are generated to model possible variations in the spectral baseline over the spectral range used in the analysis.

(2.3) Spectra representative of other types of measurement process signals (e.g. water vapor spectra, carbon dioxide, etc.) are loaded into a matrix Xs of dimension f by s, where s is the number of correction spectra used.

(2.4) The correction spectra are orthogonalized relative to the polynomials via a Gram— Schmidt orthogonalization procedure

Xs' = Xs - Up(UpᵗXs) (2.4)

(2.5) A Singular Value Decomposition of the correction spectra is then performed,

Xs' = UsΣsVsᵗ (2.5)

to generate a set of orthonormal correction eigenspectra, Us. Σs are the corresponding singular values and Vs are the corresponding right eigenvectors, the superscript t indicating the matrix transpose.

(2.6) The full set of correction terms, UB = Up + Us, which includes both the polynomials and correction eigenspectra, is then removed from the calibration data, again using a Gram-Schmidt orthogonalization procedure

Xc = X - UB(UBᵗX) (2.6)

(2.7) The Singular Value Decomposition of the corrected spectra, Xc, is then performed

Xc = UcΣcVcᵗ (2.7)

(2.8) The eigenspectra from step (2.7) are examined and a subset consisting of the first k eigenspectra, which correspond to the larger singular values in Σc, is retained. The k+1 through n eigenspectra, which correspond to spectral noise, are discarded.

Xc = UkΣkVkᵗ + Ek (2.8)

(2.9) The k right eigenvectors from the singular value decomposition, Vk, are regressed against the pathlength values for the calibration spectra, Yp (an n by 1 column vector),

Yp = VkBp + Ep (2.9a)

where Ep is the regression error. The regression coefficients, Bp, are calculated as

Bp = (VkᵗVk)⁻¹VkᵗYp = VkᵗYp (2.9b)

(2.10) An estimation of the pathlengths for the calibration spectra is calculated as

Ŷp = VkBp (2.10)

An n by n diagonal matrix N is then formed, the i th diagonal element of N being the ratio of the average pathlength for the calibration spectra, ȳp, divided by the estimated pathlength value for the i th calibration sample (the i th element of Ŷp).

(2.11) The right eigenvector matrix is then renormalized as

Vk' = NVk (2.11)

(2.12) The renormalized matrix is regressed against the properties and/or concentrations, Y (an n by c matrix containing the values for the n calibration samples and the c properties/concentrations), to obtain the regression coefficients for the models,

Y = Vk'B + E (2.12a)
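
A minimal numerical sketch of steps (2.9) through (2.12), again illustrative only and in the same assumed Python/NumPy form, is given below; it presumes that Vk and the measured pathlengths Yp are already available from the preceding steps.

    import numpy as np

    def pathlength_model_and_regression(Vk, Yp, Y):
        # Vk: n by k right eigenvectors from step (2.8)
        # Yp: length-n vector of measured pathlengths; Y: n by c property/composition data
        Bp = Vk.T @ Yp                    # equation 2.9b (columns of Vk are orthonormal)
        Yp_hat = Vk @ Bp                  # equation 2.10: estimated pathlengths
        N = np.diag(Yp.mean() / Yp_hat)   # n by n diagonal renormalization matrix
        Vk_prime = N @ Vk                 # equation 2.11: renormalized eigenvectors
        B, *_ = np.linalg.lstsq(Vk_prime, Y, rcond=None)   # equation 2.12a: regression coefficients
        return Bp, Vk_prime, B

A least-squares solution is used for equation 2.12a because the renormalized matrix is no longer orthonormal.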

(3.0) The analysis of a new sample with unknown properties/components proceeds by the following steps:

(3.1) The absorption spectrum of the unknown is obtained under the same conditions used in the collection of the calibration spectra.

(3.2) The absorption spectrum, xu, is decomposed into the constrained variables,

xu = UkΣkvu (3.2a)

vu = Σk⁻¹Ukᵗxu (3.2b)

(3.3) The pathlength for the unknown spectrum is estimated as

yp = vuBp (3.3)

(3.4) The eigenvector for the unknown is rescaled as

vu' = (ȳp/yp)vu (3.4)

where ȳp is the average pathlength for the calibration spectra in (2.10).

(3.5) The properties/concentrations are estimated as

yu = vu'B (3.5)
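
The analysis steps (3.2) through (3.5) reduce to a few matrix operations. The following sketch is illustrative only; Uk, Sk, Bp and B are assumed to come from the calibration above, and yp_bar denotes the average calibration pathlength of step (2.10).

    import numpy as np

    def analyze_unknown(xu, Uk, Sk, Bp, B, yp_bar):
        # xu: length-f absorption spectrum of the unknown
        vu = np.linalg.inv(Sk) @ (Uk.T @ xu)   # equation 3.2b: constrained variables (scores)
        yp = vu @ Bp                           # equation 3.3: estimated pathlength for the unknown
        vu_prime = (yp_bar / yp) * vu          # equation 3.4: rescale to the average calibration pathlength
        yu = vu_prime @ B                      # equation 3.5: estimated properties/concentrations
        return yu, vu, yp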

(4.1) The spectral region used in the calibration and analysis may be limited to subregions so as to avoid intense absorptions which may be outside the linear response range of the spectrometer, or to avoid regions of low signal content and high noise.

(5.1) The samples used in the calibration may be restricted by excluding any samples which are identified as multivariate outliers by statistical testing.

(6.1) The regression in steps (2.9) and (2.12) may be accomplished via a step-wise regression (17) or PRESS-based variable selection (18), so as to limit the number of variables retained in the empirical model to a subset of the first k variables, thereby eliminating variables which do not show a statistically significant correlation to the parameters being estimated.

(7.1) The Mahalanobis statistic for the unknown, Du², given by

Du² = vuᵗ(VkᵗVk)⁻¹vu (7.1)

can be used to determine if the estimation is based on an interpolation or extrapolation of the model by comparing the value for the unknown to the average of similar values calculated for the calibration samples.

(7.2) The uncertainty on the estimated value can also be derived from the standard error of the regression in (2.12) and the Mahalanobis statistic calculated for the unknown.

(8.1) In the analysis of an unknown with spectrum xu, the contributions of the measurement process signals to the spectrum can be calculated as

vB = ΣB⁻¹UBᵗxu (8.1)

These values can be compared against the values for the calibration, VB, to provide diagnostics as to whether the measurement process has changed relative to the calibration.

(9.1) In the analysis of an unknown with spectrum xu, a simulated spectrum x̂u is calculated as:

x̂u = UBΣBvB + UkΣkvu (9.1a)

A comparison of the simulated and actual spectra is then made by calculating the Euclidean Norm, ||xu - x̂u||,

||xu - x̂u|| = √((xu - x̂u)ᵗ(xu - x̂u)) (9.1b)

The Euclidean Norm is then compared to a threshold value to determine if the unknown is within the range of the calibration spectra, i.e. ||xu - x̂u|| < threshold. The threshold value is determined by treating each of the individual calibration spectra xi (columns of the data matrix X) as unknowns, calculating the Euclidean Norms for these n calibration samples, and setting the threshold based on the maximum Euclidean Norm for the calibration set.

(10.1) In the analysis of an unknown with spectrum xu, the distance between the unknown and each of the calibration spectra xi (columns of X) is calculated as

||xu - xi|| = √((xu - xi)ᵗ(xu - xi)) (10.1a)

and the distance is compared to a threshold to determine if the unknown is in a region where the calibration samples in the predictive model are sparse. Alternatively, the distance in the Principal Components scores space is used for the calculations

||vu - vi|| = √((vu - vi)ᵗ(vu - vi)) (10.1b)

where vi is the vector of scores from Vc corresponding to the i th calibration sample. The threshold value is determined by treating each of the individual calibration spectra xi (columns of the data matrix X) as unknowns, calculating the distances for these n calibration samples using equation 10.1a or 10.1b, and setting the threshold based on the maximum distance for the calibration set.
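
The outlier tests of paragraphs (9.1) and (10.1) can be sketched as shown below. The sketch is illustrative only; the simulated spectrum xu_hat is assumed to have been formed as in equation 9.1a, Vc holds the calibration scores as rows, and the thresholds follow the maximum-over-the-calibration-set convention described above.

    import numpy as np

    def euclidean_norm_check(xu, xu_hat, threshold):
        # Paragraph (9.1): compare measured and simulated spectra (equation 9.1b)
        norm = np.sqrt((xu - xu_hat) @ (xu - xu_hat))
        return norm, norm < threshold          # True if the unknown lies within the calibration range

    def sparse_region_check(vu, Vc, threshold):
        # Paragraph (10.1): score-space distance to each calibration sample (equation 10.1b)
        d = np.sqrt(((Vc - vu) ** 2).sum(axis=1))
        return d.min(), d.min() < threshold    # False indicates a sparsely populated region

    def threshold_from_calibration(values):
        # Thresholds are set from the maximum value observed when each calibration
        # spectrum is treated in turn as an unknown
        return float(np.max(values))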

These and other features and advantages of the invention will now be described, by way of example, with reference to the accompanying single drawing.

BRIEF DESCRIPTION OF THE DRAWING

The single drawing is a flow chart indicating one preferred way of performing the method of this invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The flow chart of the single drawing gives an overview of the steps involved in a preferred way of carrying out the inventive method. The reference numerals used in the drawing relate to the method operations identified below.

1), 2), 3), and 4)

These deal with updating the estimation model and will be described later.

5) Perform On-line Measurement

An infrared absorption spectrum of the sample under consideration is measured. However, the method is applicable to any absorption spectrum. The methodology is also applicable to a wide variety of other spectroscopic measurement techniques, including ultraviolet, visible light, Nuclear Magnetic Resonance (NMR), reflection, and photoacoustic spectroscopy.

The spectrum obtained from performing the on-line measurement is stored on a computer used to control the analyzer operation, and will hereafter be referred to as the test spectrum of the test sample.

6) Data Collection Operation Valid

The spectrum and any spectrometer status information that is available are examined to verify that the spectrum collected is valid from the standpoint of spectrometer operations (not statistical comparison to estimation models). The principal criterion for the validity checks is that there are no apparent invalid data which may have resulted from mechanical or electronic failure of the spectrometer. Such failures can most readily be identified by examining the spectrum for unusual features including, but not limited to, severe baseline errors, zero data or infinite (over-range) data.

If the data collection operation is deemed to be valid, processing continues with the analysis of the data which has been collected. If the data collection operation is deemed to be invalid, diagnostic routines are executed in order to perform spectrometer and measurement system diagnostics [16] (numbers in [ ] refer to operation numbers in the attached figure). These may consist of specially written diagnostics or of internal diagnostics contained in the spectrometer system. In either event, the results of the diagnostics are stored on the operational computer and process operations are notified that there is a potential malfunction of the spectrometer [17]. Control returns to operation [5] in order to perform the on-line measurement again, since some malfunctions may be intermittent and collection of valid data may be successfully resumed upon retry.

The objective of the diagnostics performed is to isolate the cause of failure to a system module or component for easy maintenance. Therefore, as part of the diagnostic procedure, it may be necessary to introduce calibration and/or standard reference samples into the sample cell in order to perform measurements under a known set of conditions which can be compared to an historical database stored on the computer. The automatic sampling system is able to introduce such samples into the sample cell upon demand from the analyzer computer.

7) Calculate Coefficients and Inferred Spectrum from Model and Measured Spectrum

The measured spectrum of the sample under test, the measured spectrum being the spectral magnitudes at several discrete measurement frequencies (or wavelengths) across the frequency band of the spectrum, is used with the model to calculate several model estimation parameters which are intermediate to the calculation of the property and/or composition data parameter estimates. In the case where the model is an eigenvector-based model, such as is the case when PCA, PLS, CPSA or similar methods are used, the dot (scalar) products of the measured test spectrum with the model eigenspectra yield coefficients which are a measure of the degree to which the eigenspectra can be used to represent the test spectrum. If, in CPSA and PCR, the coefficients are further scaled by 1/σ, the result would be the scores, vu, defined in equation 3.2b of the foregoing section of the specification describing CPSA. Such scaling is not required in the generation of the simulated spectrum. Back calculation of the simulated test sample spectrum is performed by adding together the model eigenspectra scaled by the corresponding coefficients. For models which are not eigenvector-based, calculations can be defined which yield the simulated spectrum of the test sample corresponding to the parameter estimation model.

The residual between the measured test sample spectrum and the simulated test sample spectrum is calculated at each measurement wavelength or frequency. The resulting residual spectrum is used in operation [8].
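
For an eigenvector-based model, the calculations of operation [7] reduce to dot products with the model eigenspectra. The fragment below is a minimal sketch under that assumption; it also shows the optional 1/σ scaling which converts the raw coefficients into the scores of equation 3.2b, and which is not needed to form the simulated spectrum.

    import numpy as np

    def coefficients_and_residual(x_test, Uk, Sk):
        # x_test: measured test spectrum (length f)
        # Uk, Sk: model eigenspectra (f by k) and singular values (k by k diagonal)
        coeffs = Uk.T @ x_test               # dot products of the test spectrum with each eigenspectrum
        scores = np.linalg.inv(Sk) @ coeffs  # optional 1/sigma scaling (equation 3.2b)
        x_sim = Uk @ coeffs                  # simulated spectrum: eigenspectra scaled by the coefficients
        residual = x_test - x_sim            # residual spectrum passed to operation [8]
        return coeffs, scores, x_sim, residual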

8) Calculate Measurement Statistical Test Values

From the coefficients and residual spectra available from operation [7] and the measured test sample spectrum from operation [5], several statistical test values can be calculated which are subsequently used in operations [9-11]. Preferred statistics are described in the discussion of operations [9-11] and are particularly useful for eigenvector-based methods. The purpose of the calculation in the current operation is to provide statistical measures which can be used to assess the appropriateness of the model for estimating parameters for the test sample. Any statistical test or tests, any inferential test, or any rule-based test which can be used for model assessment, either singly or in combination, may be used.

9) Does Test Sample Spectrum Fall within the Range of the Calibration Spectra in the Model

In the case of a principal components (or PLS) based analysis, this test refers to an examination of the Euclidean norm calculated from the residual spectrum by summing the squared residuals calculated at each measurement frequency or wavelength. The simulated spectrum only contains eigenspectra upon which the model is based. Therefore spectral features representing chemical species which were not present in the original calibration samples used to generate the model will be contained in the residual spectrum. The Euclidean norm for a test sample containing chemical species which were not included in the calibration samples used to generate the model will be significantly larger than the Euclidean norms calculated for the calibration spectra used to generate the model. As noted in operation [8], any statistic, test or procedure may be used which provides an assessment of whether chemical species are present in the test sample which are not contained in the calibration samples. In particular, pattern recognition techniques and/or comparison to spectra contained in computerized spectral libraries may be used in conjunction with the residual spectrum.

In a preferred way of performing the invention, the magnitude of the Euclidean norm is evaluated to see if the test sample spectrum falls within the range of the calibration sample spectra used to generate the model, i.e. whether the Euclidean norm is small with respect to the Euclidean norms of the calibration sample spectra calculated in a similar fashion. A small Euclidean norm is taken as an indication that no chemical species are present in the test sample that were not present in the calibration samples. If the result is negative (a large Euclidean norm), the sample spectrum is archived and a spot sample is collected for further laboratory analysis. This is performed in operation [12]. The sampling system associated with the analyzer is capable of automatically capturing spot samples upon command by the analyzer control computer.

In the context of this test, chemical species are being thought of as chemical components which are contained in the sample, as opposed to external interferences, such as water vapor, which will also show up here and must be distinguished from chemical components which are present in the sample. This can be done by modeling the measured water vapor spectrum and by orthogonalizing the calibration spectra thereto, as described above in relation to CPSA.

10) Does Test Sample Parameter Estimation involve Interpolation of the Model

If the sample is selected as acceptable in operation [9], it is preferable to examine the validity of the model with respect to accurately estimating properties of this sample. Any method of determining the statistical accuracy of parameter estimates or confidence levels is appropriate. A preferred way of achieving this is for the Mahalanobis distance (as defined above in equation (7.1) of the section of this specification describing the development of an empirical model in CPSA) to be used to determine the appropriateness of the model calibration data set for estimating the sample. The Mahalanobis distance is a metric which is larger when a test sample spectrum is farther from the geometric center of the group of spectra used for the model calculation, as represented in the hyperspace defined by the principal components or eigenspectra used in the model. Thus, a large value of the Mahalanobis distance indicates that the property estimate is an extrapolation from the range of data covered by the model calibration. This does not necessarily mean that the estimate is wrong, only that the uncertainty in the estimate may be larger (or the confidence level smaller) than desirable, and this fact must be communicated in all subsequent uses of the data.

If the estimate is found to be uncertain (large Mahalanobis distance), it is desirable to archive the sample spectrum and capture a spot sample for subsequent laboratory analysis using the computer controlled automatic sampling system [operation 12].

11) Does Test Sample Spectrum Fall in a Populated Region of Data in Calibration Model

Even though the sample may lie within the data space covered by the model (small value of Mahalanobis distance), the sample may lie in a region in which the calibration samples in the model set are sparse. In this case, it is desirable to archive the sample spectrum and capture a spot sample [12] so that the model may be improved. Any standard statistical test of distance may be used in order to make this determination. In particular, the inter-sample Mahalanobis distance calculated for each test sample/calibration sample pair may be examined in order to arrive at the decision as to whether or not the samples should be saved. An inter-sample Mahalanobis distance is defined as the sum of the squares of the differences between the scores for the test sample spectrum and those for the calibration sample spectrum, the scores being calculated by equation (3.2b) of the section of this specification describing the development of an empirical model in CPSA. A negative response results if all the inter-sample Mahalanobis distances are greater than a predetermined threshold value selected to achieve the desired distribution of calibration sample spectra variability, in which case it is desirable to archive the sample spectrum and capture a spot sample for subsequent laboratory analysis using the computer controlled automatic sampling system [12].
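
Operations [10] and [11] are both distance calculations in the scores space of the model. The sketch below is illustrative only; the Mahalanobis form shown is the usual scores-space definition and is an assumption of the example rather than a restatement of equation (7.1), while the inter-sample distance follows the definition given in the preceding paragraph.

    import numpy as np

    def mahalanobis_statistic(vu, V):
        # Operation [10]: vu is the length-k score vector of the test sample,
        # V the n by k matrix of calibration scores (assumed scores-space form)
        return float(vu @ np.linalg.inv(V.T @ V) @ vu)

    def intersample_distances(vu, V):
        # Operation [11]: sum of squared score differences to each calibration sample
        return ((V - vu) ** 2).sum(axis=1)

    # A spot sample would be captured when the Mahalanobis statistic is large relative to
    # the calibration average, or when every inter-sample distance exceeds the preset threshold.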

13) Calculate Parameter and Confidence Interval Estimates

After having performed the statistical tests indicated in operations [9], [10] and [11], and possibly having collected a spot sample as indicated in step [12], the parameters are now estimated from the model. For CPSA, this involves calculating the scores (equation 3.2b) and then estimating the parameters (equations 3.3 to 3.5). The actual numerical calculation performed will depend upon the type of model being used. For the case of an eigenvector-based analysis (such as PCR or PLS), the method is a vector projection method identical to that described above for CPSA.

14) Transmit Estimate(s) to Process Monitor/Control Computer

Having calculated the parameter estimates and the statistical tests, the parameter estimates and estimates of the parameter uncertainties are now available. These may be transmitted to a separate process monitor or control computer normally located in the process control center. The results may be used by operations for many purposes including process control and process diagnostics. Data transmission may be in analog or digital form.

15) Transmit Estimate(s) to Analyzer Workstation

The analyzer is normally operated totally unattended (stand-alone). It is desirable for the results of the analysis and statistical tests to be transferred to a workstation which is generally available to analyzer and applications engineers. This is indicated in operation [15]. While the availability of the data on a separate workstation may be convenient, it is not essential to the operation of the analyzer system.

1) Archived Test Spectrum and Lab Data for Model Update Present

In the event that the samples have been captured and spectra archived for subsequent model updating, it is necessary to update the estimation model. This can only be carried out once the results of laboratory analysis are available along with the archived spectrum.

If model updating is not needed, operation continues with [5].

Model Updating

Model updating consists of operations [2], [3], and [4]. Any or all of the operations may be performed on the analyzer computer or may be performed off-line on a separate computer. In the latter case, the results of the updated model must be transferred to the analyzer control computer.

2) Full Model and Regression Calculation Necessary

If the sample which is being included in the model did not result from a negative decision in operation [9], it is not necessary to carry out the calculation which produces the model eigenspectra. This is because operation [9] did not identify the inclusion of additional eigenspectra into the model as being necessary. In this case, only a new regression is required, and the process continues with operation [4].

3) Calculate New Model Using CPSA or Equivalent

The calibration model data base is updated by including the additional spectrum and corresponding laboratory data. The database may be maintained on a separate computer and models developed on that computer. The entire model generation procedure is repeated using the expanded set of data. This, for example, would mean rerunning the CPSA model or whichever numerical method was originally used. If this step is performed off-line, then the updated eigenspectra must be transferred to the analyzer computer.

Model updating methods could be developed which would allow an updated model to be estimated without having to rerun the entire model building procedure.

4) Perform New Regression and Update Model Regression Coefficients

A regression is performed using the scores for the updated calibration set and the laboratory measurements of composition and/or property parameters to obtain regression coefficients which will be used to perform the parameter and confidence interval estimation of operation [13]. The regression step is identical to that described above for CPSA (equations 2.9a and b in the section on the development of an empirical model hereinabove). If this step is performed off-line, then the regression coefficients must be transferred to the analyzer computer.
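
When only the regression needs to be repeated (operations [2] and [4]), the update amounts to appending the new sample and recomputing the regression coefficients. The fragment below is a minimal sketch under the assumption that the scores of the archived spectrum have already been computed with the existing eigenspectra; the names are illustrative only.

    import numpy as np

    def update_regression(V_cal, Y_cal, v_new, y_new):
        # V_cal: n by k scores of the existing calibration samples
        # Y_cal: n by c laboratory property/composition data
        # v_new: length-k scores of the newly archived sample
        # y_new: length-c laboratory results for that sample
        V = np.vstack([V_cal, v_new])              # append the new sample's scores
        Y = np.vstack([Y_cal, y_new])              # append the new laboratory data
        B, *_ = np.linalg.lstsq(V, Y, rcond=None)  # updated regression coefficients (cf. equation 2.12a)
        return B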

The steps described above allow the estimation of property and/or composition parameters by performing on-line measurements of the absorbance spectrum of a fluid or gaseous process stream. Mathematical analysis provides high quality estimates of the concentration of chemical components and the concentrations of classes of chemical components. Physical and performance parameters which are directly or indirectly correlated to chemical component concentrations are estimable. Conditions for the measurement of the absorbance spectra are specified so as to provide redundant spectral information thereby allowing the computation of method diagnostic and quality assurance measures.

The steps comprising the methodology are performed in an integrative manner so as to provide continuous estimates for method adjustment, operations diagnosis and automated sample collection. Different aspects of the methodology are set out below in numbered paragraphs (1) to (10).

(1.) Selection of the subset region for the absorbance spectra measurements

(1.1) The measurement of the infrared spectrum in the various subregions can be accomplished by the use of different infrared spectrometer equipment. Selection of the appropriate subregion(s) is accomplished by obtaining the spectrum of a representative sample in each of the candidate subregions and selecting the subregion(s) in which absorptions are found which are due directly or indirectly to the chemical constituent(s) of interest. Criteria for selection of the appropriate subregion(s) may be summarized as follows:

The procedure of this paragraph is applicable over a wide range of possible absorption spectrum measurements. No single spectrometer can cover the entire range of applicability. Therefore, it is necessary to select a subset region which matches the range available in a spectrometer while providing significant absorption features for the chemical constituents which are in the sample and which are correlated to the composition and/or property parameter for which a parameter estimate is to be calculated. The criteria for selecting the preferred wavelength subset region include subjective and objective measurements of spectrometer performance, practical sample thickness constraints, achievable sample access, and spectrometer detector choice considerations. The preferred subset region for measuring liquid hydrocarbon process streams is one in which the thickness of the sample is approximately 1 cm. This is achievable in the region from 800 nm to 1600 nm, which corresponds to a subset of the near infrared region for which spectrometer equipment conveniently adaptable to on-line measurement is currently available. The named region is a further subset of the range which it is possible to measure using a single spectrometer. The further restriction on range is preferred in order to include sufficient range to encompass all absorptions having a similar dynamic range in absorbance while remaining restricted to one octave in wavelength.

(2.) Criteria for selection and measurement of samples for the calibration model calculation.

(2.1) Samples are collected at various times to obtain a set of samples (calibration samples) which are representative of the range of process stream composition variation.

(2.2) Absorption spectra of the samples may be obtained either on-line during the sample collection procedure or measured separately in a laboratory using the samples collected.

(2.3) The property and/or composition data for which the calibration model is to be generated are separately measured for the collected samples using standard analytical laboratory techniques.

(3.) Calibration model calculation

(3.1) The calibration model is obtained using any one of several multivariate methods and the samples obtained are designated calibration samples. Through the application of the method, a set of eigenspectra are obtained which are a specific transformation of the calibration spectra. They are retained for the property/composition estimation step. An important preferred feature of the invention allows for the updating of the predictive model by collecting samples during actual operation. This will permit a better set of samples to be collected as previously unrecognized samples are analyzed and the relevant data entered into the predictive model. Therefore it is not particularly important how the samples are obtained or which model calculation method is used for the initial predictive model. It is preferable that the initial calibration model be developed using the same method which is likely to be used for developing the model from the updated sample set. Methods which can be used for the calibration model calculation are:

(3.1.1) Constrained Principal Spectra Analysis as described hereinabove is the preferred method.

(3.1.2) Principal components regression discussed above is an alternative method.

(3.1.3) Partial least squares analysis, which is a specific implementation of the more general principal components regression.

(3.1.4) Any specific algorithm which is substantially the same as the above.

(3.1.5) A neural network algorithm, such as back propagation, which is used to produce a parameter estimation model. This technique may have particular advantage for handling non-linear property value estimation.

(4.) Property/composition estimation

(4.1) Property and/or composition data are estimated according to the following equation as explained above (equation 3.5):

yu = vu'B

(5.) Calibration model validation

Calibration model validation refers to the process of determining whether or not the initial calibration model is correct. Examples of validating the calibration model would be cross-validation or PRESS, referred to hereinabove.

(5.1) Additional samples which are not used in the calibration model calculation (paragraph (3) above) are collected (test set) and measured.

(5.1.1) Spectra are measured for these samples either on-line or in a laboratory using the samples which have been collected.

(5.1.2) Property and/or composition data are obtained separately from the same standard analytical laboratory analyses referred to in paragraph (2.3) above.

(5.2) Property and/or composition data are estimated using equations (3.3-3.5) in the description of CPSA hereinabove and validated by comparison to the laboratory-obtained property and/or composition data.

(6.) On-line absorption spectrum measurement

(6.1) Any infrared spectrometer having measurement capabilities in the subset wavelength region determined in paragraph (1) above may be used.

(6.2) Sampling of the process stream is accomplished either by extracting a sample from the process stream using a slip stream or by inserting an optical probe into the process stream.

(6.2.1) Slip stream extraction is used to bring the sample to an absorption spectrum measuring cell. The spectrum of the sample in the cell is measured either by having positioned the cell directly in the light path of the spectrometer or indirectly by coupling the measurement cell to the spectrometer light path using fiber optic technology. Slip stream extraction with indirect fiber optic measurement technology is the preferred on-line measurement method. During the measurement, the sample may either be continuously flowing, in which case the spectrum obtained is a time averaged spectrum, or a valve may be used to stop the flow during the spectral measurement.

(6.2.2) Insertion sampling is accomplished by coupling the optical measurement portion of the spectrometer to the sample stream using fiber optic technology.

(7.) Process parameter (on-line property and/or composition) estimation.

(7.1) Spectra are measured on-line for process stream samples during process operation. Several choices of techniques for performing the spectral measurement are available as described in paragraph (6) immediately above.

(7.2) Parameter estimation is carried out using the equation in paragraph (4.1) above.

(8.) Calibration model updating

(8.1) Spot test samples for which the estimated parameter(s) are significantly different from the laboratory measured parameter(s), as determined in paragraphs (9) and (10) below, are added to the calibration set and the calibration procedure is repeated starting with paragraph (3) to obtain an updated calibration model as set out in the equation in paragraph (3.1) above.

(8.2) Samples which are measured on-line are compared to the samples used in the calibration model using the methods described in paragraphs (9) and (10) below. Samples which fail the tests in (9) or (10) are noted and aliquots are collected for laboratory analysis of the property/composition and verification of the spectrum. The on-line measured spectrum and the laboratory determined property/composition data for any such sample are added to the calibration data set and the calibration procedure is repeated starting with paragraph (3) to obtain an updated calibration model.

(9.) Diagnostic and quality assurance measures

(9.1) Diagnostics are performed by calculating several parameters which measure the similarity of the test sample spectrum to the spectra of samples used in the calibration.

(9.1.1) Vector-based distance and similarity measurements are used to validate the spectral measurements. These include, but are not limited to,

(9.1.1.1) Mahalanobis distances and/or Euclidean norms to determine the appropriateness of the calibration set for estimating the sample.

(9.1.1.2) Residual spectrum (the difference between the actual spectrum and the spectrum estimated from the eigenspectra used in the parameter estimation) to determine if unexpected components having significant absorbance are present.

(9.1.1.3) Values of the projection of the spectrum onto any individual eigenspectrum or combination of the eigenspectra to determine if the range of composition observed is included in the calibration set.

(9.1.1.4) Vector estimators of spectrometer system operational conditions (such as wavelength error, radiation source variability, and optical component degradation) which would affect the validity of the parameter estimation or the error associated with the parameter estimated.

(9.1.2) Experience-based diagnostics commonly obtained by control chart techniques, frequency distribution analysis, or any similar techniques which evaluate the current measurement (either spectral or parameter) in terms of the past experience available either from the calibration sample set or from past on-line sample measurements.

(10.) Process control, optimization and diagnosis

(10.1) Parameters are calculated in real time which are diagnostic of process operation and which can be used for control and/or optimization of the process and/or diagnosis of unusual or unexpected process operation conditions.

(10.1.1) Examples of parameters which are based on the spectral measurement of a single process stream include chemical composition measurements (such as the concentration of individual chemical components as, for example, benzene, toluene, xylene, or the concentration of a class of compounds as, for example, paraffins); physical property measurements (such as density, index of refraction, hardness, viscosity, flash point, pour point, vapor pressure); performance property measurement (such as octane number, cetane number, combustibility); and perception (such as smell/odor, color).

(10.1.2) Parameters which are based on the spectral measurements of two or more streams sampled at different points in the process, thereby measuring the difference (delta) attributable to the process included between the sampling points, along with any delayed effect of the process between the sampling points.

(10.1.3) Parameters which are based on one or more spectral measurements along with other process operational measurements (such as temperatures, pressures, flow rates) are used to calculate a multi-parameter (multivariate) process model.

(10.2) Real-time parameters as described in paragraph (10.1) can be used for:

(10.2.1) Process operation monitoring.

(10.2.2) Process control either as part of a feedback or a feedforward control strategy.

(10.2.3) Process diagnosis and/or optimization by observing process response and trends.