Title:
IMPROVED METHOD OF PROCESSING DATA AND DATA PROCESSING APPARATUS
Document Type and Number:
WIPO Patent Application WO/2000/042385
Kind Code:
A1
Abstract:
A method of extracting information from remotely sensed data is disclosed, comprising the steps of acquiring a plurality of data samples from at least one sensor, each data sample corresponding to a respective footprint. The one or more sensors are adapted to sense a parameter of a surface and have a known gain function. The data samples are processed to produce a respective footprint corresponding to the area of the surface contributing to each data sample, a set of n bounded areas is selected for the surface being sensed, and a variable is allocated to the area defined by each boundary. Next, a weighting α is defined for each data sample dependent upon the sensor gain function and the location of the footprint for the respective data sample relative to the bounded areas of the surface, and a set of at least n equations is constructed from said weighted data samples. Finally, these equations can be solved to determine the values for the variables allocated to respective bounded areas.

Inventors:
BARRETT ERIC CHARLES (GB)
BEAUMONT MICHAEL JOHN (GB)
Application Number:
PCT/GB2000/000063
Publication Date:
July 20, 2000
Filing Date:
January 12, 2000
Assignee:
SECR DEFENCE (GB)
BARRETT ERIC CHARLES (GB)
BEAUMONT MICHAEL JOHN (GB)
International Classes:
G01C11/00; (IPC1-7): G01C11/00
Other References:
TIM BELLERBY ET AL: "RETRIEVAL OF LAND AND SEA BRIGHTNESS TEMPERATURES FROM MIXED COASTAL PIXELS IN PASSIVE MICROWAVE DATA", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 36, no. 6, November 1998 (1998-11-01), USA, pages 1844 - 1851, XP002136739
Attorney, Agent or Firm:
Barker, Brettell (138 Hagley Road Edgbaston Birmingham B16 9PW, GB)
Claims:
CLAIMS
1. A method of extracting information from remotely sensed data comprising the steps of acquiring a plurality of data samples from at least one sensor, each data sample corresponding to a respective footprint and said one or more sensors being adapted to sense a parameter of a surface and having a known gain function, processing (reprojecting) the data samples to produce a respective footprint corresponding to the area of the surface contributing to each data sample, selecting a set of n bounded areas for the surface being sensed and allocating a variable to the area defined by each boundary, defining a weighting α for each data sample dependent upon the sensor gain function and the location of the footprint for the respective data sample relative to the bounded areas of the surface, constructing a set of at least n equations from said weighted data samples, and solving said equations to determine the values for the variables allocated to respective bounded areas.
2. A method according to claim 1 in which each data sample comprises a convolution of the sensor gain function with the signal from the footprint on the surface which is incident upon the sensor.
3. A method according to claim 1 or claim 2 in which the equations comprise linear equations.
4. A method according to any preceding claim in which the variables correspond to points in an image of a surface constructed from an array of data samples from different positions on the surface.
5. A method according to any preceding claim in which each variable allocated to a bounded area corresponds to the parameter to be sensed (e.g. surface temperature).
6. A method according to any preceding claim in which the variables comprise surface temperature measurements and the sensor comprises a microwave antenna adapted to detect radiation emitted from or reflected off the surface.
7. A method according to any preceding claim in which more than one set of linear equations is used and each linear equation in a set may be allocated a weighting, with different weightings used for each set.
8. A method according to any preceding claim in which the sensor is adapted to receive information radiated or reflected from a footprint on the surface to produce a data sample and the linear equations may then be calculated from data samples corresponding to a number of such footprints for different surface areas.
9. A method according to claim 8 in which the footprints for two or more of the data samples used to construct the set of linear equations overlap at least partially.
10. A method according to claim 9 in which the bounded areas define the boundaries of areas of land and ocean at a coastline.
11. A method according to any preceding claim in which each bounded area is smaller than the size of each footprint for the data samples.
12. A method according to any preceding claim in which each data sample used to construct the set of linear equations corresponds to a footprint covering a different but overlapping area of the surface.
13. A method according to any preceding claim in which the boundaries are defined by reprojecting the data samples onto a GIS map of the surface.
14. A method according to any preceding claim in which the weights, α, are based on the convolution of the gain function of the receiver with the boundaries of the area of surface sensed.
15. A method according to claim 14 in which the convolution is approximated by a summation.
16. A software program for implementing on a computer adapted to perform a method in accordance with any one of claims 1 to 15.
17. A method of processing an original set of signals to create a processed image comprising: taking a set of original signals derived from one or more image sensors, the signals being representative of an image to be image processed; having a spatial map upon which the original image signals are to be translated, the map having a plurality of mapping regions; having weighting coefficients for each original signal in the set of original signals corresponding to the weight to be given to each individual original signal associated with any particular mapping region, and creating processed image signals for each mapping region, the processed image signals for individual mapping regions being determined by applying the weighting coefficients to the set of original signals for each individual mapping region so as to create a combined, weighted, processed signal associated with each mapping region.
18. A method of processing according to claim 17 in which there are at least as many original signals in a set as there are spatial mapping regions.
19. A method according to claim 17 or claim 18 in which the weighting coefficients are associated with the directional sensitivity of the sensor(s) used to capture the set of original signals.
20. A method according to any one of claims 17 to 19 in which the processed signal for each mapping region is determined by assuming it to be a constant value and solving a redundant, or solvable, number of simultaneous equations which each have the processed signal as one parameter, and weighted values derived from the original signals as other parameters, the weighting and the original signals being known.
21. A method according to any one of claims 17 to 20 in which the mapping regions are representative of real physical features known to be present in the scene being viewed by the image-gathering apparatus.
22. A method of improving the resolution of an imaging system which generates original pixel signals corresponding to pixels of a pixellated detection field of view or attributable to pixels of a virtual pixellated detector field of view, the method comprising: taking the original signals and operating on them with a weighting function so as to generate processed signals for each mapping region of a predetermined map, the weighting function attributing a contributing weight to each original pixel signal so as to determine a processed signal for each mapping region.
23. A method according to claim 22 in which the weighting function applies the antenna's gain function/sensitivity function of the imaging system to the original pixel signals to at least partially determine the processed signal for each mapping region.
24. A method according to claim 22 or 23 in which the method comprises processing the original signals to obtain a plurality of values for at least some pixels of the original pixellated field of view.
25. An image processing apparatus comprising input signal receiving means; processed signal output means; and computational means; the computational means being adapted to take a set of original input signals from the input signal receiving means and of processing them by applying a weighting function which in use is used to generate an output signal for a predetermined mapping region, the mapping region being input to the computational means or stored there; the computational means being capable of applying respective appropriate weighting to the original input signals of a set of signals representative of an image to be processed to generate each respective output signal corresponding to each respective mapping region and of outputting signals corresponding to the mapping regions to the processed signal output means.
26. A software carrier carrying software which when operational on a computer or network either provides an image processing apparatus according to claim 25 or operates the computer or network according to the method of any one of claims 1 to 24.
27. An image processing system comprising image capturing means and image processing means; the image capturing means capturing or sensing an image signal for each pixel of a pixellated image field of view, and the image processing means comprising image processing apparatus in accordance with the preceding aspect of the invention; the arrangement being such that in use the image capturing means obtaining an image of original signals for each of its pixels and the image processing means processing them to create output image processed signals translated into the mapping regions of the image processing apparatus.
28. Apparatus according to claim 27 which is capable of oversampling an observed scene to create a plurality of sensed original image signals for elements within the scene and of using the "redundant" data to improve the resolution of the system beyond the resolution of the image capturing means alone.
29. Apparatus according to claim 27 or claim 28 in which the image capturing means comprises a microwave antenna or an infrared receiver.
30. Apparatus according to claim 27, 28 or 29 in which the image capturing means is mounted on a remote sensing vehicle such as a satellite, aeroplane or ship.
31. A satellite system, or other remote sensing platform or installation having one or more sensors having an overlapping field of view so as to produce overlapping unprocessed images or data samples, and means for processing the images or data samples using a method in accordance with any one of claims 1 to 20.
32. A satellite system according to claim 30 in which a plurality of sensors are provided which sense infrared, microwave, visible or other radiation.
Description:
IMPROVED METHOD OF PROCESSING DATA AND DATA PROCESSING APPARATUS

This invention relates to improvements in methods of processing data, and especially (but not exclusively) to a method of extracting information from remotely sensed data. The invention also relates to data/image processing apparatus.

Remote sensing is a field of activity with many applications. In a typical application, a sensor such as a microwave antenna on a satellite may be adapted to produce data from microwave signals emitted from the earth's surface or intermediate surfaces such as clouds or vegetation. Each data sample measured corresponds to the signal emitted from an area or footprint of the surface determined by the field of view of the sensor. As the satellite moves, data from different footprints are collected and can be processed to produce a data sample indicative of a spectral characteristic, such as brightness temperature, for a footprint, sets of data samples being used to generate a two-dimensional image of the surface. In one example, the image may correspond to the temperature of different parts of the sensed surface, such as land or sea, which can be used for weather forecasting. Other applications include the measurement of wind speed at the ocean surface; precipitation; land and sea surface temperatures; ice and snow cover, etc. Obviously, the shape of the footprint will depend upon the satellite and sensor characteristics, the shape of the surface and the location of each footprint on that surface. Similar considerations apply in the case of remote sensing by sensors mounted on other types of platforms, e.g. aircraft, ships, and the ground.

Typically, the sensed data sample corresponding to a footprint is individually processed and allocated to a data pixel. The pixel is selected to be a location on the surface corresponding to the centre of the footprint. However, this approach is inefficient in its use of the information making up the data sample.

In the case of apparent earth surface temperature measurements at microwave frequencies, the footprint for each sensed data sample may be of more than 1000 km² (say 30 km x 30 km). At coastal regions, where a footprint crosses both land and sea, contamination of detected signals occurs, resulting in erroneous or inaccurate data samples. This is due to geophysical differences between land and sea. Prior systems have relied upon masking out the region where erroneous values occur, but this results in a loss of information around the coastline. Hitherto this has either been accepted as an inevitable result of preventing the contamination from giving unreliable results in that area, or linear extrapolation between reliable data either side of the sensed coastal area has been proposed. Similar considerations apply to any discontinuities across the entire range of scales. In order to overcome these problems, current solutions are to design better sensors, or orbit the satellite lower, or both.

In accordance with a first aspect, the invention provides a method of extracting information from remotely sensed data comprising the steps of: acquiring a plurality of data samples from at least one sensor, each data sample corresponding to a respective footprint and said one or more sensors being adapted to sense a parameter of a surface and having a known gain function; processing (reprojecting) the data samples to produce a respective footprint corresponding to the area of the surface contributing to each data sample; selecting a set of n bounded areas for the surface being sensed and allocating a variable to the area defined by each boundary; defining a weighting α for each data sample dependent upon the sensor gain function and the location of the footprint for the respective data sample relative to the bounded areas of the surface; constructing a set of at least n equations from said weighted data samples; and solving said equations to determine the values for the variables allocated to respective bounded areas.

Preferably each data sample comprises a convolution of the sensor gain function with the signal from the footprint on the surface which is incident upon the sensor. The data samples may form a data stream which may be defined spatially, or temporally or both.

Preferably, the equations comprise linear equations. This makes them easier to solve, perhaps using matrices.

Looked at one way, each data sample in the stream may represent the net radiation measured by the sensor over its field of view, taking into account its varying sensitivity for different angles of incidence (its gain function). The area of surface contributing to the field of view is defined as the sensor footprint.

The variables may therefore correspond to attributes of the surface in an image (for example a substantially two-dimensional image) of a surface constructed from an array of data samples from different positions on the surface.

Preferably, (n + m) or more linear equations are formulated, where (n + m) is the number of data samples used and m is greater than or equal to zero (an integer). This will (generally) produce an over-specified system.

Preferably, each variable allocated to a bounded area corresponds to the parameter to be sensed (e.g. surface temperature), and it is assumed that the variable is relatively constant (at least over the measurement period) within the boundary (or will be closer to the values for its bounded area than an adjacent bounded area, or other bounded areas).

For example, the information extracted may comprise surface temperature measurements. In this case, the sensor may comprise a microwave or infrared sensor or antenna adapted to detect radiation emitted from or reflected off the surface. The variable will then be a temperature value.

By solving the linear equations for all or part of the entire area set, the values can be calculated from the weighted data.

By giving a weighting to each data sample, and using several data samples to generate a set of linear equations, it is possible to utilise more of the data to improve accuracy. In the past, in practice it has generally been assumed that the individual data samples will correspond to accurate measurements of the variable with respect to the sampling frequency.
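By way of illustration, such a weighted, over-determined system might be assembled and solved numerically as in the following sketch; the weights and sample values here are invented for the example and are not taken from the disclosure:

    import numpy as np

    # Invented weights a_k (fraction of each footprint falling in region A) and
    # invented data samples d_k; each sample is modelled as
    #   d_k = a_k * T_A + (1 - a_k) * T_B,
    # giving an over-determined linear system in the two unknowns T_A and T_B.
    weights = np.array([0.10, 0.35, 0.62, 0.80, 0.55, 0.28, 0.15, 0.47, 0.71])
    samples = np.array([201.3, 215.8, 231.1, 240.9, 227.0, 211.5, 204.2, 222.4, 236.6])

    A = np.column_stack([weights, 1.0 - weights])
    (t_a, t_b), *_ = np.linalg.lstsq(A, samples, rcond=None)
    print(t_a, t_b)       # least-squares estimates of the two regional values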

More than one set of linear equations may be used. Each linear equation in a set may be allocated a weighting, with different weightings used for each set.

Preferably the sensor is adapted to receive information radiated or reflected from a footprint on the surface to produce a data sample rather than a single point measurement. The linear equations may then be calculated from data samples corresponding to a number of such footprints for different surface areas. A boresight may be provided at the sensor to limit the field of view of the sensor, optimising the footprint size.

Preferably, the footprints for two or more of the data samples used to construct the set of linear equations overlap at least partially. The overlap may be (by area) 5% or 10%, or 50% or any other value which enables the equations to be solved.

Two bounded areas may be chosen for the surface to be sensed. For example, these may define the boundaries of areas of land and ocean at a coastline. Of course, many more boundaries may be selected. In one envisaged application, the boundaries divide the surface into a grid, with each area in the grid being allocated a variable. Each area may be a square or other shape which can be identified. The area of each "square" of the grid may be smaller than the size of each footprint for the data samples. This enables an increased resolution to be obtained. For increasingly small grid areas, the amount of overlap between footprints may need to be increased to produce a computable set of equations.

Preferably, each data sample used to construct the set of linear equations corresponds to a footprint covering a different but overlapping area of the surface. The sensor may therefore be moved relative to the surface and take a series of snapshots of different areas over time. Alternatively, a set of snapshots may be taken during a single timespan.

In one possible arrangement, nine data samples are used to construct nine linear equations, with the footprints for each data sample being arranged in a 3 row by 3 column grid. However, the optimal number of equations depends on the relationship between the sensor and the areas defined.

Each footprint in the grid may partially overlap its adjacent footprint.

In an alternative arrangement, enough data samples may be acquired to completely cover the surface with overlapping footprints. These data samples can then be used to provide a set of equations whereby the variables allocated to the bounded areas in the surface are all determined in one process. This is an alternative to using groups or sets of data samples to cover the surface in several sets.

The sensor in one preferred arrangement may be adapted to produce a data sample for a footprint in the form:

Tsat = ∫ h(t) ∫∫ G(θ − Θ(t), φ − Φ(t)) T0(X(θ,φ), Y(θ,φ)) dθ dφ dt

where t is the time relative to the measurement time, h(t) is the impulse response of the sensor, G(θ,φ) is the normalised gain function for the sensor in the direction (θ,φ) relative to the direction of the sensor at the measurement time, Θ(t), Φ(t) describe the movement of the sensor during the measurement period and (X(θ,φ), Y(θ,φ)) is the projection of the viewing direction (θ,φ) onto the surface.

In the case of microwave sensing, T0 may be a surface brightness temperature, although it could be any other parameter depending on the type of sensor used. It will also be appreciated that parameters other than land and sea temperature can be measured, such as snow field cover, ocean surface wind speed etc.

Assuming that brightness temperatures vary slowly compared to the time scale of the measurement and reversing the order of integration:

Tsat = ∫∫ Ḡ(θ,φ) T0(X(θ,φ), Y(θ,φ)) dθ dφ

where Ḡ(θ,φ) is the sensor gain function integrated over the measurement interval.

The footprint associated with the true antenna response G(θ,φ) is known as the Instantaneous Field of View (IFOV). The footprint for the time-integrated Ḡ(θ,φ) is called the Effective Field of View (EFOV). The processing of the data samples to project the EFOV onto the surface of the Earth, for example, results in an elliptical footprint, with the shorter axis parallel to the scan. If the sampling distance at each frequency is shorter than the dimensions of these projected footprints they overlap.

Each equation may be (for two variables) of the form:

T0(x, y) = Ta·L(x, y) + Tb·(1 − L(x, y))

where

L(x, y) = 1 if (x, y) is a point within a first boundary
L(x, y) = 0 if (x, y) is a point within a second boundary
Ta = variable allocated to the first boundary
Tb = variable allocated to the second boundary

and x, y are co-ordinates representing a position on the surface.

From nine data samples, T0(x, y), nine linear equations in terms of Ta and Tb can be established and solved for Ta and Tb.

Convolving the antenna gain function over the model yields a system of equations of the form:

Ti,j = αi,j·Ta + (1 − αi,j)·Tb,   i = 1, 2, 3;  j = 1, 2, 3

Finally, restructuring enables the nine equations to be expressed in the compact form:

Ti,j = αi,j·(Ta − Tb) + Tb

where αi,j are the weights for each data sample.

By overlaying the footprint for the central data sample of the grid of 3 x 3 samples onto the map, the values of Ta and Tb derived (or other variables as appropriate) can be allocated to each boundary area within that footprint.

After evaluating the parameter for the central footprint, the calculation can be repeated by forming values of Ta and Tb from a second set of footprints offset from the first set. This can be continued to build up a two-dimensional image of the parameters across the whole or a substantial part of the surface.

Obviously, the method can be readily expanded to cope with more than two parameters, such as land, sea and ice for instance, e.g. Ta, Tb, Tc.

The boundaries may be defined by reprojecting the data samples onto a map of the surface, such as a GIS data model. The reprojection may be adapted to take into account changes in the profile of the surface, which may otherwise distort the footprint for each data sample.

The weights, α, may be based on the convolution of the gain function of the receiver with the boundaries of the area of surface sensed. The convolution may be approximated by a summation.
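As a sketch of approximating the convolution by a summation, the weight for one data sample might be computed by summing a discretised gain function over the grid cells lying within a given bounded area; the Gaussian gain and the toy straight coastline below are assumptions for illustration only:

    import numpy as np

    # Weight alpha for one data sample: the normalised, discretised gain summed
    # over the cells of a land mask lying inside the footprint.  The Gaussian
    # gain shape and the straight "coastline" are illustrative assumptions.
    def footprint_weight(gain, land_mask):
        g = gain / gain.sum()
        return float((g * land_mask).sum())   # fraction of the response from land

    y, x = np.mgrid[-25:26, -25:26]
    gain = np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2))
    land_mask = (x > 5).astype(float)         # toy coastline: land to the east
    alpha = footprint_weight(gain, land_mask)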

The linear equations may be solved using linear regression. They may be solved using matrix mathematical techniques.

The data samples may, for example, comprise brightness temperature values, but may correspond to other parameters. They may be produced by integrating the sensor output over time, or be an instantaneous value.

By selecting overlapping data samples to form a set, and using a grid of smaller areas than the sensor resolution (footprint size), increases in resolution can be obtained. A grid area to footprint area ratio of 1/8 or smaller could be used.

In accordance with a second aspect, the invention provides a software program for implementing on a computer adapted to perform a method in accordance with the first aspect of the invention. Alternatively, the method may be embedded in a computer chip to perform the data extraction in hardware. It may be adapted to process the data in real time.

According to a third aspect, the invention provides a method of processing an original set of signals to create a processed image comprising: taking a set of original signals derived from one or more image sensors, the signals being representative of an image to be image processed; having a spatial map upon which the original image signals are to be translated, the map having a plurality of mapping regions; having weighting coefficients for each original signal in the set of original signals corresponding to the weight to be given to each individual original signal associated with any particular mapping region, and creating processed image signals for each mapping region, the processed image signals for individual mapping regions being determined by applying the weighting coefficients to the set of original signals for each individual mapping region so as to create a combined, weighted, processed signal associated with each mapping region.

Preferably, there are at least as many original signals in a set as there are spatial mapping regions, and most preferably more original signals in a set than mapping regions.

Preferably, the weighting coefficients are associated with the directional sensitivity of the sensor(s) used to capture the set of original signals.

Preferably, the processed signal for each mapping region is determined by assuming it to be a constant value and solving a redundant, or solvable, number of simultaneous equations which each have the processed signal as one parameter, and weighted values derived from the original signals as other parameters, the weighting and the original signals being known.

Preferably, some or all mapping regions represent known, predetermined shapes. By knowing in advance what the "shapes" and/or sizes of the mappings should be, it is possible to solve the equations for the initially unknown processed signal value for each mapping region from the measured original signals and the weighting coefficients to be applied, possibly using a best-fit technique if there is no exact solution.

The mapping regions may be standard, possibly equal, areas (for example a grid). The mapping regions may be irregular, with not all of them having the same areas, and/or outline shapes. Some or all of the mapping regions may be of precisely known areas/shapes representative of real physical features known to be present in the scene being viewed by the image-gathering apparatus. For example the mapping regions could be parts of a known coastline, or known fields, or individual geographic features.

Looked at in another way the invention can be thought of as a new technique for creating high resolution imagery from low resolution sensor sampling in both space and time.

"Traditional"remote sensing consists of obtaining a set of sensor values. These are readings taken from a field of view seen, for example, by an observing satellite, and displaying these values as a grid of pixels on a computer display. The location of the centre of the field of view is computed from the satellite orbit and viewing geometry. This is then used to locate the appropriate display pixel. If more than one value is allocated to a pixel then some rule is used to select the"best"value. If the resulting image shows empty pixels then this is because not every field of view centre falls within every pixel, in this case some interpolation of adjacent data is required to supply missing pixel values. In order to minimise these effects the pixel image is scaled so that the pixel size approximates to size of the sensor field of view. In this way an image of the entire view scanned by the sensor is built up, its resolution being approximately determined by the satellite resolution.

This traditional approach ties the pixel hardware directly to the data. It ignores the fact that the centre of the actual field of view is not necessarily at the centre of the pixel. It ignores the fact that the field of view is not described by the pixel geometry; the actual field of view is more likely to be an ellipse and may vary in size. It ignores overlap between sensor values or the relationship between them. These shortcomings of data processing are not normally significant when dealing with very high resolution data, say in the order of metres, but become much more of a problem if the sensor resolution is over kilometres.

The present approach is to look at the entire imaging process and produce a more rigorous definition of remote sensing processing. Central to the idea is changing ground area boundaries from the prior art arbitrary pixel display to a properly geometrically defined set of geographic areas. The geometric boundaries may constitute real boundaries such as coastlines or fields, or simply arbitrary grids of rectangular areas of fixed or variable size. But in every case the boundaries must yield a fixed and consistent metric or coordinate system. The sensor then provides observed values over these boundaries either at the same time or at different times, with or without overlapping fields of view. We prefer overlapping fields of view, which is quite contrary to conventional thinking.

If one assumes uniform values for each area within its boundary and over the time interval of the measurements being taken, then for each sensor measurement field of view these real responses are integrated according to the antenna response pattern. This gives sensor measurements as a function of areal responses. Given all these related measurements it is then possible to solve for the whole geometric system to give the ground values for each defined area.
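A minimal sketch of this general formulation, assuming uniform values per area and using illustrative gains, region labels and measurements (none of which come from the text), might integrate each footprint's normalised gain over every defined area and solve the resulting system by least squares:

    import numpy as np

    # Measurement k satisfies  d_k = sum_r A[k, r] * p_r, where A[k, r] is the
    # normalised gain of footprint k summed over region r and p_r is the
    # (assumed uniform) ground value of region r.
    def solve_regions(gains, region_labels, measurements, n_regions):
        A = np.zeros((len(gains), n_regions))
        for k, g in enumerate(gains):
            g = g / g.sum()
            for r in range(n_regions):
                A[k, r] = g[region_labels == r].sum()
        values, *_ = np.linalg.lstsq(A, measurements, rcond=None)
        return values

    # Toy scene: two regions split down the middle, three overlapping footprints.
    labels = np.zeros((20, 20), dtype=int)
    labels[:, 10:] = 1
    yy, xx = np.mgrid[0:20, 0:20]
    gains = [np.exp(-((xx - cx) ** 2 + (yy - 10) ** 2) / 18.0) for cx in (6, 10, 14)]
    true_values = np.array([260.0, 290.0])
    measurements = np.array([(g / g.sum() * true_values[labels]).sum() for g in gains])
    print(solve_regions(gains, labels, measurements, 2))   # recovers approx. [260, 290]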

In the prior art, the main point is that remote sensing samples are projected "raw" onto pixel hardware without there being a proper mapping between them. We can remap the antenna pattern into land and sea areas (or other parameters) and can map onto any co-ordinate system or geographical base. In other words the post-sensor processing should remap the data onto a defined GIS or similar. Instead of computing, say, nine samples into two regions (as discussed in a later example) there is no reason why n samples should not be computed into n regions. A chip could be loaded with the GIS and all samples pushed through it as a space-time data stream with each sample positioned within the GIS.

It will be appreciated that in prior art conventional Remote Sensing (e.g. using satellite data) established practice is to dislike irregular fields of view (FOV) (i.e. all fields of view from satellites are irregular owing to the satellite motion, scanning characteristics, the curvature of the earth and antenna gain functions), the solution being to crop down the data to give a central region of the FOV and say that that is the value for a pixel. Conventional thinking is to disregard, and even dislike, over-sampling (which is common, and becoming more so with modern sensing systems).

In our view conventional thinking samples and fits RS data too soon to some standard co-ordinate framework, i.e. before it is weighted to compensate for contamination of signals by features from adjacent pixels (on the ground). The prior art fails to exploit the full set of available observations.

We, on the other hand, believe that we achieve significantly better end-user oriented products from almost any digital data-base RS/GIS system, without awaiting "next-generation" sensors.

In accordance with a fourth aspect, the invention provides a method of improving the resolution of an imaging system which generates original pixel signals corresponding to pixels of a pixellated detection field of view or attributable to pixels of a virtual pixellated detector field of view, the method comprising: taking the original signals and operating on them with a weighting function so as to generate processed signals for each mapping region of a predetermined map, the weighting function attributing a contributing weight to each original pixel signal so as to determine a processed signal for each mapping region.

Preferably the weighting function applies the antenna's gain function/sensitivity function of the imaging system to the original pixel signals to at least partially determine the processed signal for each mapping region.

Preferably the method comprises processing the original signals to obtain a plurality of values for at least some pixels of the original pixellated field of view (and possibly for all).

According to a fifth aspect, the invention provides an image processing apparatus comprising input signal receiving means; processed signal output means; and computational means; the computational means being adapted to take a set of original input signals from the input signal receiving means and of processing them by applying a weighting function which in use is used to generate an output signal for a predetermined mapping region, the mapping region being input to the computational means or stored there; the computational means being capable of applying respective appropriate weighting to the original input signals of a set of signals representative of an image to be processed to generate each respective output signal corresponding to each respective mapping region and of outputting signals corresponding to the mapping regions to the processed signal output means.

According to a sixth aspect, the invention provides an image processing system comprising image capturing means and image processing means; the image capturing means capturing or sensing an image signal for each pixel of a pixellated image field of view, and the image processing means comprising image processing apparatus in accordance with the preceding aspect of the invention; the arrangement being such that in use the image capturing means obtaining an image of original signals for each of its pixels and the image processing means processing them to create output image processed signals translated into the mapping regions of the image processing apparatus.

Preferably the system is capable of oversampling an observed scene to create a plurality of sensed original image signals for elements within the scene (possibly by having spatially overlapping original images, a plurality of which contain the elements, or by having temporally overlapping original scene images, or both spatially and temporally overlapping original scene images), and of using the "redundant" data to improve the resolution of the system beyond the resolution of the image capturing means alone.

The image capturing means may comprise a microwave antenna or an infrared receiver. This may be mounted on a remote sensing platform such as a satellite, aeroplane or ship.

In accordance with a seventh aspect, the invention provides a satellite system, or other remote sensing vehicle or installation having one or more sensors having an overlapping field of view so as to produce overlapping unprocessed images or data samples, and means for processing the images or data samples using a method in accordance with any one of the first, second, third or fourth aspects of the invention.

The system may comprise a plurality of sensors which sense infrared, microwave, visible or other radiation, such as a microwave antenna.

Looking at it one way, the invention lies in the use of overlapping image data from an imaging system to improve the resolution of the imaging system. By providing a satellite having a number of overlapping fields of view, and using the overlapping images in combination, better resolution can be achieved than by using each image separately to derive information.

This is achieved, in particular, by using a weighting function in combination with an accurate map of the size and/or shape of at least some, and preferably substantially all, features that should be being imaged to improve resolution.

The invention is especially applicable to the processing of images which exhibit areas with different spatial characteristics, and can result in more accurate knowledge of the value of the characteristics near a boundary.

A map indicating where the boundary of the area is helps to compensate for erroneous effects due to boundary contamination. As one example, this may allow ships to be located in an area adjacent a coastline by producing accurate information in that area. Changes over time can be used to detect movement of a ship into or out of that area.

Part of a particular embodiment of the invention relies upon the condition, or assumption, that within two bounded areas or mapping regions points in one mapping region have a measured value/signal (representative of a characteristic such as temperature) that is constant within the mapping region, or at least "constant" in the sense of being closer to other characteristic values (signals for other parts within the mapping region) than to points in the other bounded area or mapping region.

There will now be described, by way of example only, one embodiment of the present invention with reference to the accompanying drawings, of which:

Figure 1 illustrates the configuration of the Special Sensor Microwave Imager (SSM/I) showing the scanning motion of the sensor;
Figure 2 shows the arrangement of nine overlapping data footprints used to produce a set of linear equations;
Figures 3, 4 and 5 show the three vertically polarised test images using the "raw" scan/track co-ordinate system;
Figure 6 illustrates the land map of boundaries derived from UK Admiralty charts;
Figure 7 is a plot of geolocation error against relative position of boresight and boundary;
Figure 8 is a plot of the change in weight against the weight itself on the same axes as Figure 7;
Figures 9-11 illustrate a photographic view of the test scenes used;
Figures 12-14 illustrate corresponding SSM/I images for Figures 9-11;
Figures 15-23 illustrate results of processing of test images; and
Figure 24 is an overview of a method in accordance with one embodiment of the invention.

A method in accordance with one embodiment is shown in schematic form in Figure 24.

In a first step, a set of data samples are acquired to form a sample view or two dimensional image. The data samples may comprise various overlapping snapshots of parts of an area of a surface. These may be acquired by an antenna having a known gain function and a sensitivity to the parameter of interest.

In a second step, a map or transformation of each snapshot on the surface is constructed by processing the effective field of view of the sensor in combination with a knowledge of the shape of the surface. This enables the actual footprint for each snapshot to be calculated using data about the surface profile.

A set of, say, n data samples forming an overlapping grid is then selected, and each data sample is allocated a weight V1 ... Vn.

The calculation of the weights is achieved from a combination of knowledge of the expected parameters of features of the surface being measured. For example, the surface is divided into bounded areas or mapping regions. Each area is assumed to be uniform and homogeneous and is allocated a variable p (corresponding to its spectral value). The weighting for each data sample depends on the location of the footprint relative to these bounded areas.

The n data samples and the weightings V1 ... Vn are arranged in a matrix and, by suitable mathematical processing, the values of the unknown variables allocated to each area are determined. Returning to the map, these calculated variables can then be used to form an image point or pixel at the centre of the n data samples.

The process can then be repeated using blocks of adjacent data samples to build up values for adjacent image points or pixels in a similar manner (by calculating the allocated unknown variables from the new data) until a complete processed image of the whole surface has been constructed.

Alternatively, a set of footprints covering the whole surface of interest could be processed at once to substantially simultaneously produce pixel values.

Of course, extracting the variables from the datasets depends upon the set of equations in terms of measured data and weightings being soluble. The method may therefore include further processing steps to test for computability and reliability of the processing.

In one arrangement, the weightings are allocated to respective linear equations, one per data sample, according to the overlap between the footprint for the sample and the variables p allocated to each area. This also takes into account the sensor gain function, and can be readily achieved by convolving the gain function with the surface area overlapping a footprint.

One source of passive microwave data samples which can be used to measure surface temperature is the Special Sensor Microwave Imager (SSM/I) on the United States Defence Meteorological Satellite Program (DMSP) range of spacecraft. The instrument measures incident microwave radiation emitted or reflected by the Earth's surface with typical 3 dB footprints of between 69 x 43 km and 15 x 13 km, dependent upon frequency.

With such large footprints, coastal zones must (using prior data extraction methodology) be eliminated from the areas for which many physical parameters would be retrieved, to avoid problems due to contamination of footprints for coastal zone data samples (footprints covering both land and sea).

An important example of the problem of this coastal contamination occurs in the retrieval of surface wind speeds from the microwave data.

The SSM/I instrument is constructed to continuously rotate, completing 31.6 revolutions each minute. For approximately one third of each rotation (51.2 degrees to either side of the direction of spacecraft motion) the instrument measures the incoming radiation. The axis of the rotation is maintained to ensure a constant incidence angle between the direction of measurement and the surface normal vector at the point where the measurement direction intersects the Earth's surface. The combination of this conical scanning arrangement and the motion of the satellite results in data being collected on a succession of curved swaths across the Earth's surface each swath being 1400 km long (Figure 1).

The 85.5 GHz measurements are made every 4.22 milliseconds along the swath, resulting in 128 measurements spaced 12.5 km apart. The spacecraft itself moves 12.5 km forward during the 1.9 seconds required for one full rotation of the instrument. The high resolution 85.5 GHz data is sampled for every swath giving a 12.5 by 12.5 km resolution. The lower frequency channels are sampled every 8.44 milliseconds on alternate scans (designated A-scans), giving a sampled resolution of 25 km by 25 km.

The brightness temperature measured by the satellite in a given channel is affected by a number of factors: the instrument calibration; extraneous signals due to cross polarisation effects and radiation emanating from parts of the spacecraft and from outer space; and the signal from the target area entering the instrument at angles close to the direction of view (along the boresight). Calibration and extraneous signal effects may be removed using a standard pre-processing technique which results in a data sample comprising a "brightness temperature" measurement at the satellite which is supposed to represent as closely as possible the net radiation input for a narrow range of angles about the boresight. This brightness temperature should equal the convolution of the antenna gain characteristics with the pattern of incoming radiation. A gain function which represents the responsiveness of the instrument to radiation entering at different angles to the boresight was experimentally determined pre-flight.

A particular satellite brightness temperature measurement may be related to a pattern of microwave emission from the surface (and atmosphere) of the Earth by:

Tsat = ∫ h(t) ∫∫ G(θ − Θ(t), φ − Φ(t)) Tb(X(θ,φ), Y(θ,φ)) dθ dφ dt

where t is the time relative to the measurement time, h(t) is the impulse response of the instrument, G(θ,φ) is the normalised gain function for the instrument in the direction (θ,φ) relative to the direction of the boresight at the measurement time, Θ(t), Φ(t) describe the movement of the boresight during the measurement period and (X(θ,φ), Y(θ,φ)) is the projection of the viewing direction (θ,φ) onto the surface of the Earth. Assuming that brightness temperatures for any sensed area vary slowly compared to the time scale of the measurement and reversing the order of integration:

Tsat = ∫∫ Ḡ(θ,φ) Tb(X(θ,φ), Y(θ,φ)) dθ dφ

where Ḡ(θ,φ) is the antenna response integrated over the measurement interval. Given the consistent motion of the SSM/I instrument this function is assumed to be independent of the measurement time.
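The time-integrated (EFOV) antenna response could be approximated numerically along the lines of the following sketch, in which the instantaneous Gaussian gain, the flat impulse response and the linear boresight sweep are all assumptions rather than measured SSM/I characteristics:

    import numpy as np

    theta = np.linspace(-3.0, 3.0, 121)       # degrees from the nominal boresight
    phi = np.linspace(-3.0, 3.0, 121)
    PHI, THETA = np.meshgrid(phi, theta)

    def g_inst(th, ph):                       # assumed instantaneous Gaussian gain G
        return np.exp(-(th ** 2 + ph ** 2) / (2 * 0.6 ** 2))

    t = np.linspace(0.0, 1.0, 25)             # normalised measurement interval
    h = np.ones_like(t) / t.size              # assumed flat impulse response h(t)
    drift = 1.0 * t                           # assumed boresight sweep in phi

    G_efov = sum(w * g_inst(THETA, PHI - d) for w, d in zip(h, drift))
    G_efov /= G_efov.sum()                    # normalised, time-integrated EFOV gain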

The footprint associated with the true antenna response G(θ,φ) is known as the Instantaneous Field of View (IFOV). The footprint for the time-integrated Ḡ(θ,φ) is called the Effective Field of View (EFOV). The projection of the EFOV onto the surface of the Earth results in an elliptical footprint, with the shorter axis parallel to the scan. Since the sampling distance at each frequency is shorter than the dimensions of these projected footprints they overlap. This overlap blurs the microwave image and extends the coastal contamination problem.

In the example described hereinafter, to obtain accurate data even at coastal regions, the antenna gain function is convolved with a simple microwave emission model. The model uses GIS data to determine the land-sea boundary and then assumes that all land areas within a small localised region have the same brightness temperature and that the sea within that region has an independent, but similarly uniform, brightness temperature. Convolving the antenna gain function with the model over the footprint associated with a given pixel results in a linear equation relating the brightness temperature measured at the satellite to the land and sea brightness temperatures assumed by the model. Repeating the convolution process for each of nine pixels forming a 3x3 square about a particular point as shown in Figure 2 results in an over-specified system of linear equations relating two unknowns (the modelled land and sea brightness temperatures, which are assumed to have the same respective values within each footprint) to nine known values (the measured brightness temperature for each pixel). These equations are solved using a least squares optimisation technique to obtain estimates for the actual land and sea brightness temperatures present within a combined footprint of the nine pixels.

As described hereinbefore, the time-integrated antenna pattern is convolved with a simple model which employs only two brightness temperatures to describe the combined microwave emission from surface and atmosphere over land and sea areas respectively within the footprint of a particular measurement (i.e. only two mapping regions, each assumed to have a constant brightness temperature). The model is derived using GIS data to determine the coastal position and may be mathematically expressed as:

Tb(x, y) = Tland·L(x, y) + Tsea·(1 − L(x, y))

where

L(x, y) = 1 if (x, y) is a point on land
L(x, y) = 0 if (x, y) is a point in the ocean
Tland = the brightness temperature of the land
Tsea = the brightness temperature of the ocean

Here land and sea brightness temperatures are assumed to include the effects of the atmosphere intervening between a given point on the surface and the sensor.

The brightness temperatures seen by the satellite may be obtained by convolving the antenna gain function over the model; substitution gives:

Tb,sat = α·Tland + (1 − α)·Tsea

where α is the weight obtained by convolving the time-integrated gain function with the land portion of the footprint. The above equation may be applied for each of the nine brightness temperature measurements corresponding to the 9 pixels surrounding the given point. Assuming uniform land and sea brightness temperatures within the 9 overlapping footprints, this gives rise to a set of simultaneous linear equations:

Tb,i,j = αi,j·Tland + (1 − αi,j)·Tsea,   i = 1, 2, 3;  j = 1, 2, 3

With nine equations and only two unknowns (Tland, Tsea), this system is clearly overspecified and will in general not have an exact solution. Some form of optimisation technique must be used. Restructuring:

Tb,i,j = αi,j·(Tland − Tsea) + Tsea

shows that a linear regression of the brightness temperatures Tb against the weights α will yield values for Tland − Tsea and Tsea, which correspond to the slope and intercept of the fitted line respectively.
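A sketch of the least-squares retrieval for a single 3 x 3 block, using invented weights and brightness temperatures, might look as follows; the slope and intercept of the fitted line give Tland − Tsea and Tsea respectively:

    import numpy as np

    # Invented weights alpha and brightness temperatures tb for one 3 x 3 block.
    alpha = np.array([0.12, 0.30, 0.55, 0.21, 0.44, 0.68, 0.35, 0.57, 0.80])
    rng = np.random.default_rng(0)
    tb = 160.0 + alpha * (265.0 - 160.0) + rng.normal(0.0, 0.5, 9)

    # Linear regression Tb = alpha*(Tland - Tsea) + Tsea: slope and intercept.
    slope, intercept = np.polyfit(alpha, tb, 1)
    t_sea = intercept
    t_land = slope + intercept
    print(t_land, t_sea)      # close to the invented 265 K and 160 K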

This retrieval results in average land and sea brightness temperatures for a region consisting of the combined footprints from nine pixels, approximately 87 by 78 km (defined by the 3 dB level for the 37 GHz footprint), as opposed to 37 by 28 km for the original pixel measurements. (Although it should be noted that Tb values are estimated for each pixel position on the original 37 GHz data grid, i.e. every 25 km.) Thus resolution is sacrificed to gain the information needed to separate land and sea brightness temperatures.

The accuracy of the retrievals will be affected by the veracity of the assumption that the surface emission for an area covered by nine measurement footprints may be represented by only two uniform values for brightness temperature. Since this assumption is fundamental to this embodiment of the technique any resulting errors will represent an absolute bound on the accuracy of retrieval. These errors were partially constrained in this work by the use of cloud-free imagery of satellite frequencies which thus limited atmospheric, although not surface, variations. Other factors affecting the accuracy of the retrievals relate to the precision with which the relationship between surface emission patterns and temperature measurements at the satellite is modelled.

Errors associated with these factors must be minimised if realistic land and sea brightness temperatures are to be retrieved.

The method has been applied to three 37.0 GHz vertically polarised SSM/I images from the DMSP F13 satellite for the region around the southern end of the Red Sea and the Gulf of Aden (5-25N, 35-55E). This region was chosen because desert coastlines should present a less complex test case than would vegetated ones.

The images were selected to meet the following criteria:

- The image had to be as cloud-free as possible. This was necessary both to provide a simple and unambiguous test for the retrieval technique itself and to facilitate any geolocation correction.

- Coincident visible data from the Operational Linescan System (OLS; an instrument located on the same satellite) should be available. These data would be used to facilitate an investigation into possible OLS-SSM/I co-location and to provide additional higher resolution information about the studied scene.

The OLS images were examined on the SPIDER (Space Physics Interactive Data Resource) World Wide Web site and, once selected, were ordered from the US National Geophysical Data Centre (NGDC), which made the data available by FTP. The corresponding SSM/I image data (see Table 2) were extracted from 8 mm tapes from the CRS archive and decoded using standard packages. This software generated "brightness temperature" images, computed from the actual satellite temperature measurements using a standard pre-processing technique.

Table 2: Test Images

Date             Julian Day   Orbit      Times
23 March 1996    83           5153-54    12:54-14:38
24 March 1996    84           5167-68    12:42-14:26
31 March 1996    91           5266-67    12:59-14:42

Figures 3, 4 and 5 show the three 37 GHz vertically polarised SSM/I test images using the "raw" scan/track co-ordinate system. The coastal regions are delineated by white lines.

The coastline for the study area was digitised from UK Admiralty charts to an accuracy of the order of 1 km, this data being used to create a land-sea mask of boundaries on a lat./long grid at a resolution of 0.02° (Figure 6).
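A digitised coastline could be rasterised onto such a lat./long grid along the lines of the following sketch; the polygon used here is a made-up placeholder rather than the Admiralty-chart data:

    import numpy as np
    from matplotlib.path import Path

    # Made-up coastline polygon (longitude, latitude); not the digitised data.
    coast = Path([(43.0, 11.0), (45.5, 11.5), (46.0, 13.5), (43.5, 14.0), (43.0, 11.0)])

    lons = np.arange(42.0, 48.0, 0.02)        # 0.02 degree lat./long grid
    lats = np.arange(10.0, 16.0, 0.02)
    LON, LAT = np.meshgrid(lons, lats)
    points = np.column_stack([LON.ravel(), LAT.ravel()])

    land_mask = coast.contains_points(points).reshape(LAT.shape)   # True over "land"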

In the method of the present example of the invention, the precision with which the satellite field of view is projected onto the surface of the Earth is important, since the convolution operation is performed at a spatial scale of less than 1/8 of a pixel. Unfortunately, the satellite ephemeris information in the SSM/I data is not supplied in a form readily applicable to this problem. The data do, however, include geolocation information for each pixel which may be reversed through the satellite-Earth projection process to obtain some of the necessary parameters. The positional data do not display the required sub-pixel level of accuracy and must be improved through a manual or automatic re-registration of the microwave image before being employed.

The sensitivity of the retrieval technique to geolocation errors may be investigated by convolving the antenna gain function with models consisting of a straight boundary at different positions. As before, the convolution results in a weight, α, which indicates the relative contribution made on one side of the boundary. If one is trying to retrieve values for the other side of the boundary, this weight may be regarded as the error due to contamination. For example, if α is deemed to refer to the contribution of the land brightness temperature to the mixed pixel value, that weight may also be regarded as indicating the relative error which the presence of land introduces into the pixel brightness temperature if that value is viewed as an ocean brightness temperature.

The difference between the weights derived for two different, but closely spaced, boundary positions gives an indication of the error introduced into the modelling process by a faulty geolocation. Figure 7 plots the spatial derivative of the weight (the geolocation error) against the relative position of boresight and boundary. Figure 8 plots the relative geolocation error (change in weight divided by the weight itself) against the same axis.
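This sensitivity analysis might be reproduced numerically with a sketch such as the following, in which a Gaussian footprint response and a straight boundary are assumed; the finite difference of the weight with boundary offset approximates the error introduced by a small geolocation shift:

    import numpy as np

    # One-dimensional sketch: weight alpha for a straight boundary at offset b
    # from the boresight, under an assumed Gaussian footprint response.
    x = np.linspace(-60.0, 60.0, 1201)                 # km across the footprint
    gain = np.exp(-x ** 2 / (2 * 15.0 ** 2))
    gain /= gain.sum()

    offsets = np.arange(-30.0, 30.5, 0.5)              # boundary position, km
    alpha = np.array([gain[x > b].sum() for b in offsets])

    d_alpha = np.gradient(alpha, offsets)              # change in weight per km of shift
    rel_error = d_alpha / np.where(alpha > 0, alpha, np.nan)   # relative geolocation error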

In order for the retrieval technique to actually improve anything, the error in modelling the antenna response to the GIS model must be less than the "contamination" error the technique is trying to remove. An examination of Figures 7 and 8 leads to two important conclusions:

- Geolocation errors of greater than about 2 km will lead to errors which comprise a significant portion of the effect to be removed.

- For certain positions of the coast with respect to the boresight the technique will be ineffective for any reasonable geolocation error, since the effects of a displaced location far outweigh those of the contamination error itself (see Figure 8).

Sea surface wind speeds and other important geophysical parameters can be derived from polarisation differences. The effect of geolocation errors on these values may be determined thus:

Tv − Th = αv·Tv,land + (1 − αv)·Tv,sea − αh·Th,land − (1 − αh)·Th,sea
       = αv·(Tv,land − Th,land) + (1 − αv)·(Tv,sea − Th,sea) + (αv − αh)·(Th,land − Th,sea)

Here the v and h refer to vertical and horizontal linear polarisations respectively. This equation reveals that polarisation differences vary across a coastline in a manner very similar to brightness temperatures.

The difference lies in the third term on the right hand side of the equation which represents the effects of any asymmetry between the vertical and horizontal instrument gain functions. Unfortunately there is known to be a discrepancy of several kilometres between the maxima of the vertical and horizontal gain functions (as projected onto the ground). Thus the third term, while still small, has some significance.

There is considerable error in the locational accuracy of both OLS and SSM/I imagery, as can be seen in Figures 9-14. Figures 9, 10 and 11 show the visible (OLS) view of each scene while Figures 12, 13 and 14 show the corresponding 37 GHz vertically polarised SSM/I images. In these diagrams, the data are shown on a (0.02° resolution) latitude/longitude grid, with a digitised coastline overlaying the pixel data. The OLS data seem to be shifted to the west, whereas the SSM/I images are displaced to the north-west.

The strategy was carried out using the SSM/I imagery for three 85 GHz images corresponding to the three test windows on different days (day 1, day 2, day 8). The resulting transformation coefficients were then applied to the original imagery and the results for first, second, and third order polynomials for the day 1, day 2 and day 8 respectively are shown in Figures 15 through 23. The illustrations are overlaid with the digitised coast outline to illustrate the accuracy of the fit.

The first order transform coefficients are given in Table 3.

Table 3: First order transform coefficients for the three SSM/I images.

  Date    Term      x'         y'        Julian Day
  Day 1   const     8.1049     9.8219    83
          x         0.9883     0.0107    83
          y         0.0056     0.9892    83
  Day 2   const     2.5049     8.4483    84
          x         1.0090     0.0018    84
          y        -0.0027     0.9997    84
  Day 8   const    10.0395    10.3634    91
          x         1.0015    -0.0018    91
          y        -0.0049     0.9991    91

From the constant terms we can see that the geolocational error caused by shift alone is of the order of 8 to 10 pixels (approx. 16-20 km) in both the x and y directions, with day 84 being the exception with a shift of only 2.5 pixels in the x direction. The errors of the fit of the GCPs (i.e. the coastline) to the polynomial are given in Table 4. For the day 83 and day 91 images the improvement in fit in going from first to second order is considerable, whereas from second to third order it is negligible. For day 84 the improvement from first to second order is less effective, whereas the progression from second to third order is much more effective.
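As a minimal illustration, the Day 1 coefficients from Table 3 map an image coordinate (x, y) onto a corrected coordinate (x', y') as sketched below; higher order transforms are handled in the same way with additional powers of x and y. The function name and coordinate values are illustrative only.

```python
def first_order_transform(x, y, coeffs):
    """Apply a first order (affine) geolocation transform:
    x' = c0 + c1*x + c2*y,  y' = d0 + d1*x + d2*y."""
    (c0, c1, c2), (d0, d1, d2) = coeffs
    return c0 + c1 * x + c2 * y, d0 + d1 * x + d2 * y

# Day 1 (Julian day 83) coefficients from Table 3: (const, x, y) for x' and y'.
day1 = ((8.1049, 0.9883, 0.0056), (9.8219, 0.0107, 0.9892))
print(first_order_transform(32.0, 32.0, day1))  # shift of ~8-10 pixels plus a small scale/rotation
```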

Table 4: Root Mean Square Errors for the Ground Control Point fit to the Transformation Polynomial.

  Date        Num of GCPs   Order     x        y        Total
  23 March    219           1         2.559    3.035    3.970
                            2         1.127    1.374    1.777
                            3         1.068    1.051    1.498
              214           1         2.564    2.087    3.305
                            2         2.323    1.374    2.699
                            3         1.215    1.017    1.585
              347           1         2.069    1.325    2.458
                            2         1.460    1.132    1.850
                            3         1.419    0.996    1.733

Following the geolocation techniques described above, a further means of improving image geolocation may be considered. In this method pixel locations are varied so as to optimise the goodness of fit criterion in the linear regression used to derive the sea and land temperatures. The retrieval process may be completed at each location for a set of possible geolocation displacements. Defining αi(δX, δY) as the weight computed for a possible geolocation error (δX, δY) and substituting into the above equation yields:

bi = αi(δX, δY)(Tland - Tsea) + Tsea

Restricting the possible displacements to points on a grid (i.e. δX and δY are integer multiples of some small displacement) results in a set of overspecified linear systems. For example, if a 5 by 5 grid of displacements is defined, 25 systems of nine equations each will result. The displacement which causes the retrieval to generate the smallest root mean square error in the linear regression may now be assumed to correspond to the true location of the nine pixel set used for the particular retrieval. Being associated with the correct location, the selected regression coefficients should yield the best possible values for the land and sea brightness temperatures.

PRACTICAL IMPLEMENTATION OF THE METHOD

The first task in the implementation of the retrieval technique is to generate a set of weights a based on the convolution of the gain function with the land-sea scene. The gain function must be correctly projected onto the Earth's surface and so depends on the satellite-Earth geometry associated with each pixel. To accomplish this, detailed information concerning both the movement of the satellite and the direction of the SSM/I sensor is required. There are a number of formats in which SSM/I data may be supplied. The F13 SSM/I data used for this study were originally provided by the US Fleet Numerical Meteorology and Oceanography Center (FNMOC). These data contain both ephemeris information for the satellite and a latitude/longitude location for each pixel. While the decoding software provided with the FNMOC data tapes does not extract satellite ephemeris information, communications with NGDC established the necessary changes to the code. Unfortunately, the ephemeris data were not usable; the times for the satellite position measurements did not coincide with the times for the corresponding temperature data.

The only information within the FNMOC data set from which the movement of the satellite with time could be readily derived was the location data for the pixel grid. Provisionally ignoring the movement of the satellite during the course of a single scan, all pixels within that scan may be assumed to be equidistant from the satellite location. This assumption allows that location to be simply derived using spherical trigonometry. Given the satellite location, the measurement direction for each pixel in the scan may be calculated and the antenna footprint projected onto the ground. Other parameters for the projection, such as satellite altitude and the incidence angle between the observation vector and the surface of the Earth, must then be assumed to be close to their published mean values. This assumption is more reasonable for the incidence angle, which does not vary to a significant degree, than it is for altitude; the altitude of the F13 satellite varies from 840 km to 875 km, and while the variation of incidence angle for F13 is not available, for previous spacecraft this parameter has varied by less than 1.5°.

Given the need to make such approximations, the projection of the boresight direction for a measurement onto the ground does not exactly match the pixel location given in the data. The latter position (as corrected by any geolocation enhancements) is assumed to be more reliable, and the location of the centre of the projected footprint is shifted accordingly.

With the projection parameters in place, the footprint must be projected onto the ground. Although projection software had been obtained from the University of Bremen, this was restricted in its application and contained a number of significant approximations. The projection function was thus rederived from basic principles using plane and spherical trigonometry and implemented as a subroutine for use by the retrieval programs.

Once the correct footprint projection function has been established, calculating the weights for each pixel is relatively straightforward. The time-integrated (Effective Field of View) antenna pattern is read in as an array of values on a 0.1° grid. The convolution is approximated by a summation, with each point in the pattern array being included in the summation if and only if that particular viewing direction intersects with a portion of the Earth's surface deemed to be land by the GIS mask. The net result of this process is an array of weights, a, each weight value corresponding to a single SSM/I 37 GHz pixel.
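The convolution-by-summation step might be sketched as follows. The sketch assumes the effective-field-of-view pattern has already been projected onto the same grid as a boolean GIS land mask; the function and argument names are illustrative rather than part of the original software.

```python
import numpy as np

def pixel_weight(efov_pattern, land_mask):
    """Approximate the convolution of the projected antenna gain with the
    land/sea scene by a summation over the pattern grid.

    efov_pattern : 2-D array of gain values on a regular grid, centred on
                   the (geolocation-corrected) footprint of one pixel.
    land_mask    : boolean array of the same shape, True where the GIS
                   mask classifies the surface as land.
    Returns the weight a = (gain falling on land) / (total gain)."""
    total = efov_pattern.sum()
    return float((efov_pattern * land_mask).sum() / total) if total else 0.0
```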

If the optimal-fit geolocation improvement technique (described in Section 6.3) is to be employed, the above process must be repeated once for each possible displacement between the given pixel location and its actual position. These displacements were taken from a 5 by 5 grid, consisting of possible displacements of 0, ±0.02° or ±0.04° in either ordinate (latitude or longitude), resulting in twenty-five possible values of a for each pixel. Correction using a much larger 21 by 21 array with the same angular resolution of 0.02° between displacements was attempted in an effort to eliminate the necessity for the image-matching geolocation enhancement step.
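One way to enumerate these candidate displacements is shown below (names and structure illustrative); the weight computation of the previous step is then simply repeated with the footprint centre shifted by each displacement.

```python
import numpy as np

def displacement_grid(step_deg=0.02, half_size=2):
    """Return the (2*half_size + 1)**2 candidate displacements in degrees,
    e.g. a 5 by 5 grid of 0, +/-0.02 or +/-0.04 degrees in each ordinate."""
    offsets = step_deg * np.arange(-half_size, half_size + 1)
    return [(dx, dy) for dx in offsets for dy in offsets]

print(len(displacement_grid()))               # 25 candidate displacements per pixel
print(len(displacement_grid(half_size=10)))   # 441 for the 21 by 21 grid
```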

The computation of weights is a relatively slow process. For a simple retrieval, a 64 by 64 image took 50 minutes on one of the CRS Silicon Graphics Indy 2 workstations. The same image using the 5 by 5 array for each pixel needed for geolocation enhancement took 12 hours to compute, and, even with some additional optimisation of the algorithm, the 21 by 21 array took 48 hours. In future it should be possible to restructure the retrieval algorithm to achieve running times of one quarter to one third of this total, for example by extending the GIS mask to differentiate between potentially coastal pixels and those which must be either 100% land or 100% ocean.

Once the weights have been computed, they may be combined with microwave brightness temperature data for the 9 pixel square surrounding each pixel in the area of interest to form the dataset for a linear regression. If more than one weight has been computed for each pixel, a regression is performed for each possible displacement of the nine pixels (which are assumed to remain fixed with respect to each other). The coefficients returned by the linear regression with the smallest root mean square error are used to calculate the land and sea brightness temperatures for the central pixel. The process is carried out for each pixel in the original grid spacing (25 km x 25 km).
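A minimal sketch of this per-pixel retrieval is given below, assuming the weights have already been computed for each candidate displacement (as in the sketches above). It regresses the nine brightness temperatures against the nine weights using bi = ai(Tland - Tsea) + Tsea and keeps the displacement whose regression has the smallest root mean square residual; the function name and data layout are assumptions for illustration.

```python
import numpy as np

def retrieve_land_sea(brightness, weights_per_displacement):
    """brightness               : array of shape (9,), the 3x3 pixel block.
    weights_per_displacement : dict mapping (dX, dY) -> array of shape (9,).
    Returns (best displacement, T_land, T_sea) for the central pixel."""
    best = None
    for disp, a in weights_per_displacement.items():
        # Model: b_i = slope * a_i + intercept, slope = T_land - T_sea, intercept = T_sea.
        A = np.column_stack([a, np.ones_like(a)])
        (slope, intercept), res, *_ = np.linalg.lstsq(A, brightness, rcond=None)
        rms = np.sqrt(res[0] / len(brightness)) if res.size else 0.0
        if best is None or rms < best[0]:
            best = (rms, disp, intercept + slope, intercept)
    _, disp, t_land, t_sea = best
    return disp, t_land, t_sea
```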

As is apparent to the skilled man, the method of the present invention is applicable to data extraction in any application in which a sensor with a known (or approximated or estimated) gain function is provided to produce an output corresponding to a sensed area. Suitable sensors which operate in such a manner include sonar hydrophones, microwave antennas, RF antennas, etc. Each produces a convolved output data sample.

Envisaged applications include trafficability, sea ice monitoring, scatterometry, moving vs stationary image recognition, agricultural land use (each field perhaps defining a boundary), irrigation management, ocean front variations, and coastal zone monitoring.

It is also envisaged that more abstract applications will fall within the scope of the invention, the surface perhaps being an image which is "observed" by a sensor, such as an image taken by a camera (e.g. digital or video camera) or optical sensing element. Possible sensors for use in the method may range from radar to sonar, and from visible to infrared or vertical sounding, etc. The important feature is knowledge (or approximation) of the sensor gain function and use of this function to extract more accurate data using a knowledge of "surface" boundaries.

In summary, the present invention provides, in one embodiment, a method of improving the extraction of data from land/sea images observed by satellite-based sensors.

Given sensor values V1 to Vm, and areas A1 to An, where each V is an integral over A1 to An, and assuming each area A is precisely geometrically defined and internally homogeneous; assuming that the sensor spatial integration V is also precisely geometrically defined; and assuming that the entire geometric frame of reference is consistent, then providing m is greater than or equal to n we have a soluble system of linear equations. It may be that the problem naturally provides for the last constraint, but it can be ensured that m is greater than or equal to n in three ways: first by degrading in the spatial dimension; second by degrading in the temporal dimension; or third, both together.
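In matrix form this amounts to solving an overdetermined linear system V = G·a in the least squares sense. The sketch below assumes the fractional contribution of each area to each sensor value has already been derived from the gain function and the area geometry; the numerical values are illustrative only.

```python
import numpy as np

# m sensor values, n bounded areas (m >= n).  G[i, j] is the fraction of
# sensor value V_i contributed by area A_j, derived from the gain function
# and the area boundaries; the figures here are made up for illustration.
G = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.5, 0.5],
              [0.2, 0.8],
              [0.0, 1.0]])
V = np.array([252.1, 238.6, 224.9, 204.3, 190.2])  # measured (mixed) sensor values

a, *_ = np.linalg.lstsq(G, V, rcond=None)  # least squares values for areas A_1 .. A_n
print(a)                                   # approximately the pure land and sea values
```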

In the case of a coastal zone study it has been assumed that each dataset is a single time slice, and nine sensor values, V1 to V9, were taken to arrive at two unknowns, a1 and a2, in this case the land and sea values. If the time interval were short compared with the variability of the parameter being measured, then it is possible that the V equations could be derived from successive time slices, rather than from the same time slice.

Regarding the source of "additional information", we would say first that the addition of the geometric definitions of the areas A and their specified relationships with the sensor values V provides the required data for more accurate interpretation of V. In one sense the information contained in V is obviously not increased, but the procedure provides a means of extracting the maximum amount of information from the original data.

It will be appreciated that, in addition to being applicable to image processing and remote sensing (not necessarily of an image), the invention is more generally applicable to signal processing, and protection for such applications is envisaged.