

Title:
METHOD OF MEASUREMENT USING FUSION OF INFORMATION
Document Type and Number:
WIPO Patent Application WO/2012/042280
Kind Code:
A1
Abstract:
It is often necessary to make the best possible measurement of an object given a set of approximate assessments of its true state. As states change over time, or more information is made available, the set of assessments of the relative likelihood of the various possibilities has to be revised. An example might be the identification of an observed object such as a person or an aircraft, or the generation of a weather forecast from several pieces of information distributed in time or place, or both. The invention relates to methods for making the best possible measurement of an object, described by a powerset Θ, given uncertain data in terms of the elements of the powerset mfused, comprising the following steps: a) Set up the state of the measurement with any prior knowledge if available, or otherwise as ignorant, for the fused measurement, mfused; b) Receive the new data; c) Put the new data into the powerset mmeasurement; d) Work out the precision of mfused by evaluating the distribution of data across mfused; e) Disjunctively discount mmeasurement by an amount depending on the result of Step d to get mmeasurementd; f) Conjunctively discount mmeasurement by an amount depending on the result of Step d to get mmeasurementc; g) Disjunctively combine mfused with mmeasurementd to get mfusedd; h) Conjunctively combine mfused with mmeasurementc to get mfusedc; i) Combine mfusedd and mfusedc to get a new average value mfused; and j) Return to (b), if there are more data; else end the process. Such a method balances the tendencies of known methods towards throwing away useful information available in measurements that disagree.

Inventors:
POWELL GAVIN (GB)
Application Number:
PCT/GB2011/051867
Publication Date:
April 05, 2012
Filing Date:
September 30, 2011
Assignee:
EADS UK LTD
POWELL GAVIN (GB)
International Classes:
G06K9/62
Foreign References:
US6944566B22005-09-13
US7337086B22008-02-26
US20060030988A12006-02-09
Other References:
GAVIN POWELL: "Pitfalls for recursive iteration in set based fusion", WORKSHOP ON THE THEORY OF BELIEF FUNCTIONS, 1 April 2010 (2010-04-01), Rennes, France, pages 1 - 6, XP055018415, Retrieved from the Internet [retrieved on 20120203]
OSSWALD C ET AL: "Understanding the large family of Dempster-Shafer theory's fusion operators a decision-based measure", 2006 9TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION, FUSION - 2006 9TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION, FUSION 2006 INST. OF ELEC. AND ELEC. ENG. COMPUTER SOCIETY US, IEEE, PISCATAWAY, NJ, USA, 1 July 2006 (2006-07-01), pages 1 - 7, XP031042372, ISBN: 978-1-4244-0953-2, DOI: 10.1109/ICIF.2006.301631
GAVIN POWELL ET AL: "GRP1. A recursive fusion operator for the transferable belief model", INFORMATION FUSION (FUSION), 2011 PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON, IEEE, 5 July 2011 (2011-07-05), pages 1 - 8, XP032008955, ISBN: 978-1-4577-0267-9
DENOEUX ET AL: "Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence", ARTIFICIAL INTELLIGENCE, ELSEVIER SCIENCE PUBLISHER B.V., AMSTERDAM, NL, vol. 172, no. 2-3, 18 December 2007 (2007-12-18), pages 234 - 264, XP022392585, ISSN: 0004-3702, DOI: 10.1016/J.ARTINT.2007.05.008
SMETS P: "Belief functions: The disjunctive rule of combination and the generalized Bayesian theorem", INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, ELSEVIER SCIENCE, NEW YORK, NY, US, vol. 9, no. 1, 1 August 1993 (1993-08-01), pages 1 - 35, XP002587073, ISSN: 0888-613X
STEPHANOU ET AL.: "Measuring Consensus Effectiveness by a Generalized Entropy Criterion", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 10, no. 4, July 1988 (1988-07-01), pages 544 - 554, XP002669897
A. P. DEMPSTER: "A Generalisation of Bayesian Inference", JOURNAL OF THE ROYAL STATISTICAL SOCIETY, 1968, pages 205 - 247
G. SHAFER: "A mathematical theory of evidence", 1976, PRINCETON UNIVERSITY PRESS
P. SMETS, R. KENNES: "The transferable Belief Model", ARTIFICIAL INTELLIGENCE, vol. 66, 1994, pages 191 - 234
JEAN DEZERT: "Combination of Paradoxical Sources of Information within the Neutrosophic Framework", PROCEEDINGS OF THE FIRST INT. CONF. ON NEUTROSOPHICS, 1 December 2001 (2001-12-01)
B. PANNETIER, J. DEZERT: "GMTI and IMINT Data Fusion for Multiple Target Tracking and Classification", FUSION, 2009
A. MARTIN ET AL.: "Towards a combination rule to deal with partial conflict and specificity in belief function theory", 10TH CONFERENCE OF THE INTERNATIONAL SOCIETY OF INFORMATION FUSION, 2007, pages 313 - 320
M. C. FLOREA, J. DEZERT, P. VALIN, F. SMARANDACHE, ANNE-LAURE JOUSSELME: "Adaptive combination rule and proportional conflict redistribution rule for information fusion", COGIS '06 CONFERENCE, March 2006 (2006-03-01), Retrieved from the Internet
"Belief Functions: the disjunctive rule of combination and the generalised Bayesian theorem", INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, vol. 9, 1993, pages 1 - 35
STEPHANOU ET AL.: "Measuring Consensus Effectiveness by a Generalized Entropy Criterion", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 10, no. 4, July 1988 (1988-07-01), pages 544 - 554, XP002669897, DOI: doi:10.1109/34.3916
See also references of EP 2622541A1
Attorney, Agent or Firm:
GIBBS, Christopher Stephen (Redcliff Quay 120 Redcliff Street, Bristol Avon BS1 6HU, GB)
Claims:
Claims

1. A method for making a measurement of an object, described by a powerset Θ, given an existing assessment consisting of uncertain data in terms of the elements of the powerset mfused, comprising the following steps:

a) Set up the state of knowledge with any prior knowledge if available, or otherwise as ignorant, for the fused measurement, mfused;

b) Receive the new data;

c) Put the new data into the powerset mmeasurement;

Optionally carry out steps d) to f):

d) Work out the precision of mfused by evaluating the distribution of data across mfused;

e) Disjunctively discount mmeasurement by an amount depending on the result of Step d to get mmeasurementd;

f) Conjunctively discount mmeasurement by an amount depending on the result of Step d to get mmeasurementc;

g) Disjunctively combine mfused with mmeasurementd to get mfusedd;

h) Conjunctively combine mfused with mmeasurementc to get mfusedc;

i) Combine mfusedd and mfusedc to get a new average value mfused; and

j) Return to (b), if there are more data; else end the process.

2. A method according to claim 1, in which the precision p is determined in Step (d) using Equations 7-9:

p(m) = \sum_{A \neq \emptyset, A \subseteq \Theta} \frac{|\Omega| - |A|}{|\Omega|} \, m(A)   Equation 7

p(m) + m(\emptyset)   Equation 8

\frac{p(m)}{1 - m(\emptyset)}   Equation 9

where Ω is the union of all elements of the powerset and ∅ is the empty set.

3. A method according to claim 1 or 2, in which the discount including the empty set (Step (e)) is determined using Equation 6:

m^{\alpha}(A \mid x) = (1 - \alpha) \, m(A) \quad \forall A \subseteq \Omega, A \neq \emptyset

m^{\alpha}(A \mid x) = [(1 - \alpha) \, m(A)] + \alpha \quad A = \emptyset   Equation 6

4. A method according to any preceding claim, in which the discount ignoring the empty set (Step (f)) is determined using Equation 5:

m^{\alpha}(A \mid x) = (1 - \alpha) \, m(A) \quad \forall A \subseteq \Omega, A \neq \Omega

m^{\alpha}(A \mid x) = [(1 - \alpha) \, m(A)] + \alpha \quad A = \Omega   Equation 5

5. A method according to any preceding claim, in which the disjunctive combination is determined using Equation 3:

m_{1 \cup 2}(A) = \sum_{A = B \cup C} m_1(B) \, m_2(C)   Equation 3

where m_1 and m_2 are the two sets of information to be fused and B and C are (alternative) hypotheses within these powersets.

6. A method according to any preceding claim, in which the conjunctive combination is determined using Equation 4:

m_{1 \cap 2}(A) = \sum_{A = B \cap C} m_1(B) \, m_2(C)   Equation 4

7. A method according to any preceding claim, in which the data are measurements from a sensor, or a set of sensors.

8. A method according to claim 7, in which the sensor is a position sensor, a speed sensor, a light sensor, an acoustic sensor, a lidar sensor, a radar sensor, a camera-based sensor or similar.

9. A method according to any preceding claim, in which the powerset is a description of a target, e.g. a military target such as an aircraft.

10. A method according to any of claims 1 to 8, in which the powerset is a collection of data about the weather.

11. An apparatus for making measurements of an object based on uncertain or incomplete data, comprising an input means for gathering data about the object, and a computer arranged to carry out the fusion of successive data from the input means in accordance with a method according to any of claims 1 to 8.

12. An apparatus according to claim 11 and comprising one or more sensors.

13. An apparatus according to claim 12, in which the sensors are adapted for medical diagnosis.

Description:
Method of Measurement Using Fusion of Information

The invention relates to a method of measurement involving fusing or combining information; that is, of pooling evidence about an object, such as an event or object under investigation, in order to update existing information and estimates about the identity or nature of the object when new information is received, for instance from sensors.

Overview

It is a common task for an 'agent', such as a person or a computer program, to create a set of subjective quantified beliefs, or approximate assessments analogous to probabilities, of the true state of some object. Generally, from lack of knowledge, this is an imprecise evaluation of the true state. As states change over time, or more information is made available to the agent, they may wish to update or alter their set of beliefs. An example might be the identification of an observed object such as a person or an aircraft, or the generation of a weather forecast from several pieces of information distributed in time or place, or both.

To take an example, in an identification procedure, sensing devices can classify an enemy target from a selection of 'known' objects. This may be from human intelligence, radar, LADAR etc. This object classification will occur iteratively over time, providing a new measurement, or classification, at regular time intervals. To obtain a more informed overall classification, all sensors' measurements need to be fused at each time interval, and recursively over time, as shown in Equation 1.

S_{1 \ldots t} = S_{1 \ldots t-1} + s_t   Equation 1

where S is the fusion of previous sensor measurements and s is the sensor measurement at time t. From the fused data a better classification or assessment of the object can be made, from S_{1 \ldots t}. As will be explained, current set-based methods are inadequate, because of the fusion method used. The present invention aims to overcome these problems by providing a much more intelligent form of fusion method, designed specifically for iterative situations.
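A minimal sketch of this recursive structure in Python is given below, purely as an illustration and not taken from the application; the function and variable names are assumptions, and the per-step fusion operator, fuse, is left as a parameter, since choosing it well is the subject of the rest of this description.

def recursive_fusion(measurements, fuse, prior):
    # Equation 1 as a fold: S_{1..t} = fuse(S_{1..t-1}, s_t)
    fused = prior                  # prior knowledge, or an ignorant state
    for s_t in measurements:       # s_t is the sensor measurement at time t
        fused = fuse(fused, s_t)
    return fused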

Background

If one is presented with more than one piece of information about a subject, from either the same measurement source over time or multiple sources, or even multiple sources over time, then it is normal to want to combine all of this information, to increase the accuracy, or confidence, of the measurement. This combination will enable a more informed decision to be made, using all available information, as opposed to just looking at a single piece of information. An example of such a task would be classifying an object where information is received continuously or intermittently over time, from a variety of sensors, and one wants to recursively combine, or fuse, this information, so as to obtain a continuously updated measurement.

Set-based methods have been in existence for some time, originating from work done by Dempster and Shafer, who formulated the popular Dempster-Shafer Theory (DST). See A. P. Dempster, "A Generalisation of Bayesian Inference", Journal of the Royal Statistical Society, Series B30, pp. 205-247, 1968, and G. Shafer, "A mathematical theory of evidence", Princeton University Press, Princeton, NJ, 1976. Its popularity lies in its relative simplicity, but there are many issues related to its use, and care must be taken.

Extensions of the DST theory exist that try to overcome some of its failings, primarily the Transferable Belief Model (TBM) - see P. Smets, R. Kennes, "The transferable Belief Model", Artificial Intelligence, V66, 1994, pp. 191-234; and Dezert-Smarandache Theory (DSmT): Jean Dezert, "Combination of Paradoxical Sources of Information within the Neutrosophic Framework", Proceedings of the First Int. Conf. on Neutrosophics, Univ. of New Mexico, Gallup Campus, December 1-3, 2001. Patents exist in the area of using DST to perform classification (US 6944566) and decision-making, and using DSmT for fault diagnosis (US 7337086). TBM has been used for fusing information to understand vehicle occupancy, as shown in US 2006/030988 (Farmer).

There are three points that need to be taken into account when looking at these approaches. First, can they fuse information iteratively? Secondly, do they retain the value of the empty set (defined below)? Thirdly, do they adapt to the data as it changes through time? The empty set represents, as it were, the hypothesis that the object to be identified or classified is not within the known range of possibilities or hypotheses ("open-world"), where the range of known possibilities, or 'elements', represents the 'world'; on the other hand, a system which forces an assignment to the known range is called "closed-world". The 2^n possible combinations of the elements are each known as a 'hypothesis', and collectively as the 'powerset' - this is shown in Fig. 1, to be discussed in more detail below. Here, n is the number of elements in the world, or the number of possible singleton outcomes of the measurement or classification process.

In the real world, each successive measurement or input will be to a certain extent in conflict with existing data. On a strict interpretation, any such conflict must be interpreted as meaning that the object is not described by any of the known hypotheses. That is, the weighting of the empty set becomes larger. However, such a conclusion does not reflect the uncertainty in the input information. Some way has to be found of dealing with this tendency.

The DST method normalizes the empty set on each iteration and therefore throws away the information associated with it (i.e. the conflict between information sources or the confidence that the true state corresponds to something outside of the known world). It also has no concept of adapting to its environment. The TBM has no normalization and so keeps the empty-set information. This is a more suitable approach for many applications, but unfortunately becomes its downfall when used recursively with conventional combination rules, making it impossible to do any recursive fusion with the TBM.

Finally, DSmT adds more complexity to the simple and elegant DST. It goes some way to retaining the empty set value, allowing for recursive fusion to take place but not adapting to its environment. Research is still very active in this area and has applications toward data fusion for classification: B. Pannetier and J. Dezert, "GMTI and IMINT Data Fusion for Multiple Target Tracking and Classification", Fusion 2009, Seattle, 6-9 July 2009. These approaches tend to be reliant upon the conflict coming from the sources of data. Situations can easily arise where there is no conflict between information sources, yet there is still uncertainty. It is desirable to capture this uncertainty and accordingly to improve the reliability of the result.

These issues are well known and have been accepted for some time within the community. The death of the founder of the TBM has stunted work in that area, and the limits of the DST were seen to have been reached some time ago. The article "Towards a combination rule to deal with partial conflict and specificity in belief function theory" by A. Martin et al., 10th Conference of the International Society of Information Fusion, 2007, pages 313-320, presents a discussion of conjunctive and disjunctive combinations, redistribution and also weighting of expert responses. The article "Adaptive combination rule and proportional conflict redistribution rule for information fusion" by M. C. Florea, J. Dezert, P. Valin, F. Smarandache, Anne-Laure Jousselme, presented at the Cogis '06 Conference, Paris, March 2006 (http://www.see.asso.fr/cogis2006/pages/programme.htm), likewise uses both conjunctive and disjunctive combination. However, the process still takes place in a closed world, so it is in particular unsuitable for recursive applications. The present invention aims to make it possible to utilise the TBM (which is an improvement/extension of DST) and make it flexible and usable in more realistic iterative and recursive real-world scenarios, which it was previously unable to do.

Summary of the invention

The invention is concerned with a method for measurement involving fusing multiple sets of data about an object, an interaction of objects or a change in an object after or through interaction with another object or other objects, and is defined in claim 1 as a method, and in claim 11 as an apparatus. The "object" could be a physical object or system as such, or an event relating to such an object or set of objects; for convenience and brevity the word "object" will be used. Methods embodying the present invention, known as GRP1, have two distinct steps that allow for fusion of data from measurements to be performed recursively in order to make the best use of the available uncertain data. First, the step in which the pieces of information are fused applies existing methods, in a particular manner, to allow for iterative fusion. Secondly, intelligent decisions are made as to how much influence the incoming information can have on the classification. These decisions are based on a novel adaptive-weighting method. Preferred embodiments of the invention are based on a combination of these steps.

For iterative fusion to be able to take place using set-based theory, dominance by the empty set needs to be avoided. This needs to be done in a manner that does not simply redistribute the empty set after each iteration. The value given to the empty set is a valuable measure that should not be thrown away, as in other techniques. To accomplish this, embodiments of the invention combine information in two different ways. An average (Equation 2) of the disjunctive (Equation 3) and conjunctive (Equation 4) combinations of the data provides the necessary balance between precision and vagueness to give a meaningful answer, and to avoid domination by the empty set. In a simple case the mean can be taken:

m(A) = \frac{m_1(A) + m_2(A)}{2}   Equation 2

where m(A) is the "mass" given to hypothesis A, taken from the following combination rules, and m_1 and m_2 are the two sets of information to be fused, where each possible hypothesis in Ω (the union of all elements of the powerset) has a mass assigned, and B and C are hypotheses within these worlds;

m_{1 \cup 2}(A) = \sum_{A = B \cup C} m_1(B) \, m_2(C)   Equation 3

(disjunctive) and

m_{1 \cap 2}(A) = \sum_{A = B \cap C} m_1(B) \, m_2(C)   Equation 4

(conjunctive). Thus, "disjunctive" means that a term is added to the sum when A is the union of B and C, and "conjunctive" means that it is added when A is the intersection, i.e. the common elements, of B and C.

Here a "world" contains the elements that are known about and understood, and can be reasoned with. Each of the 2 n combinations of those elements in the world, including the empty set 0, is called a hypothesis, and collectively these hypotheses are the powerset, Θ (See Fig. 1) . An x agent' attaches quantified subjective beliefs about the true state to each of these hypotheses, where a belief signifies how much weight is to be given to one of the elements in that hypothesis as representing the true state. The powerset with mass (beliefs) assigned is signified by m. There are various powersets within the process, but in an iterative process the main two are:- firstly one m fused that describes the beliefs of the fused measurements up to time t-1, and secondly one measurement that describes the incoming

information as a result of a further measurement at time t. It is with the combination of these two that this

application is chiefly concerned.
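The powerset representation and the combination rules of Equations 2-4 can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the application: the four-element world, the example masses and all identifiers are assumptions. A mass function is held as a dict keyed by frozensets, so that the union and intersection of Equations 3 and 4 become direct set operations, with frozenset() standing for the empty set.

from itertools import product

omega = frozenset("abcd")                              # the uncertain set, here a four-element world
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.3, omega: 0.1}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.4, omega: 0.1}

def combine(ma, mb, op):
    # Equations 3 and 4: sum ma(B)*mb(C) over every pair (B, C) with op(B, C) == A.
    out = {}
    for (b, vb), (c, vc) in product(ma.items(), mb.items()):
        a = op(b, c)
        out[a] = out.get(a, 0.0) + vb * vc
    return out

disjunctive = combine(m1, m2, frozenset.union)         # Equation 3
conjunctive = combine(m1, m2, frozenset.intersection)  # Equation 4: conflict collects on frozenset()

# Equation 2: a simple mean of the disjunctive and conjunctive results.
mean = {a: 0.5 * (disjunctive.get(a, 0.0) + conjunctive.get(a, 0.0))
        for a in set(disjunctive) | set(conjunctive)}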

Secondly, to enable the method to fuse information both iteratively and intelligently, a novel means of distributing the amount of weighting (discounting) can be applied to the information prior to its disjunctive and conjunctive combination. Regular discounting will move mass to the uncertain set Ω, which makes the system vaguer as there is less trust in the incoming information. This is fine for conjunctive combination, as it counteracts the natural move of belief to the empty set that occurs through the conjunctive combination rule. For the disjunctive combination one must ensure that the discounting adds vagueness by moving mass to the empty set, to counteract the natural move of belief to the uncertain set Ω, as occurs with the disjunctive rule of combination. If it is not discounted in this manner, then the iterative nature of the problem will make the method converge undesirably.

The weighting factor is a sign of the precision and certainty in the system, and determines how much it can be influenced by new information. If for instance the system is one for identifying aircraft and it has been instructed for the last 2000 readings that the object to be identified is an aircraft of type GR7, then there will be great precision and certainty in its classification. It will take many conflicting readings for the system then to change that classification. If the system is very unsure of the target type, then it will be easy to alter its classification. This acts as a memory to the system of the information that it has received over time.

Brief description of the drawings

For a better understanding of the invention, embodiments of it will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 shows how the powerset is made up of a distribution of beliefs about certain possible descriptions of the (real) world, or the object of study; and

Figure 2 shows an example of an application of the method to a multiple-sensor layout, for identifying a moving vehicle.

In a typical method using the invention, the powerset, denoted Θ, will have beliefs associated with its hypotheses regarding the true state of the object being measured, either from a sensor of some sort or simply human input, e.g. typed in at a keyboard or a computer, or by fusing it with another powerset. Evaluation of how that belief is distributed throughout the powerset, Θ, will show how vague, or precise, that powerset is. Figure 1 shows various possible assignments of (usually mutually exclusive) beliefs a, b, c, d, which may be, say, four different types of aircraft. The box a represents a particular identification and will, following a measurement, have a mass associated with it. The box ab represents a belief that the object is a or b, but with no information as to the relative likelihood as between these two; similarly for the other boxes. The empty set, i.e. the possibility that the object is not one of the known possibilities, is shown as ∅. If values are assigned to the singleton sets a, b, c, d, i.e. those which have only one element, then the world is precise and any decisions are well educated. If beliefs are given to the uncertain set, Ω, that is, the box abcd, then the world is vague and any decisions made from this are uneducated. This notion of precision is quite important, and can be used to determine how the incoming information is fused. If the powerset is showing a high degree of precision then the identification is relatively certain and it should take a significant number of contradictory readings to alter the belief. Alternatively, if the existing assessment is completely vague about knowledge and beliefs, then the system will be more accepting of new information. This concept needs to be accounted for when information is being fused.
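As a purely illustrative assignment (not one taken from the application): if the masses are m(a) = 0.8, m(ab) = 0.15 and m(abcd) = 0.05, the powerset is precise, since nearly all of the belief sits on the singleton a, and a single contradictory reading should not overturn the identification; if instead m(abcd) = 1.0, the powerset is maximally vague and the next measurement will largely dictate the result.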

It is known to 'discount' incoming information - P. Smets, "Belief Functions: the disjunctive rule of combination and the generalised Bayesian theorem", International Journal of Approximate Reasoning, 9, pp. 1-35, 1993. This discounting process will weight the incoming data and is a measure of how much it is to be trusted.

This known discounting of data is described by Equation 5:

m^{\alpha}(A \mid x) = (1 - \alpha) \, m(A) \quad \forall A \subseteq \Omega, A \neq \Omega

m^{\alpha}(A \mid x) = [(1 - \alpha) \, m(A)] + \alpha \quad A = \Omega   Equation 5

Here, the notation m^{\alpha}(A|x) means the mass assigned to hypothesis A given that it is already known that event x has occurred. This works perfectly well when one is dealing with the conjunctive combination rule (Equation 4), because the discounted mass is passed toward the uncertain set, Ω, counteracting that rule's tendency to pass mass toward the empty set, ∅. For the disjunctive rule (Equation 3) the procedure will only force the belief to be vaguer and encourage convergence toward the uncertain set, Ω. When using the disjunctive combination rule, according to the invention, one must discount using Equation 6 below. This will allow the discounted mass to be passed to the empty set, which when fused with the "cautious" combination rule (i.e. Equation 3) allows for the mass to be redistributed evenly across the system:

m^{\alpha}(A \mid x) = (1 - \alpha) \, m(A) \quad \forall A \subseteq \Omega, A \neq \emptyset

m^{\alpha}(A \mid x) = [(1 - \alpha) \, m(A)] + \alpha \quad A = \emptyset   Equation 6

The degree that one chooses to discount by is of course related to the degree of precision in the powerset Θ, and shows how much existing hypotheses can be influenced by incoming data. One can measure the precision, p, using Equation 7:

p(m) = \sum_{A \neq \emptyset, A \subseteq \Theta} \frac{|\Omega| - |A|}{|\Omega|} \, m(A)   Equation 7

Here the magnitude signs mean the number of elements in the set in question. Any value, or mass, added to the empty set is treated as adding to the vagueness. There is a point to be decided as to whether the empty set is making the system vaguer, or it is adding precision, or in fact it should be ignored. If the empty set is adding precision, then:

p(m) + m(\emptyset)   Equation 8

If any belief given to the empty set is to be ignored, then to normalise one can use Equation 9:

\frac{p(m)}{1 - m(\emptyset)}   Equation 9

These equations, in particular Equation 7, are similar to those described in Stephanou et al., "Measuring Consensus Effectiveness by a Generalized Entropy Criterion", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, July 1988, pp. 544-554; see Definition 4.4 on page 546.
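The discounting and precision calculations can likewise be sketched in Python, again purely as an illustration under stated assumptions: the normalisation by |Ω| in the precision function, the choice of which of Equations 7-9 to apply, the example masses, and the use of the precision itself as the discount factor α are all assumptions for the example rather than anything prescribed by the application.

def discount(m, alpha, sink):
    # Equations 5 and 6: scale every mass by (1 - alpha) and move the remaining alpha to sink.
    # sink = omega gives Equation 5 (regular discounting, towards the uncertain set);
    # sink = frozenset() gives Equation 6 (discounting towards the empty set).
    out = {a: (1 - alpha) * v for a, v in m.items()}
    out[sink] = out.get(sink, 0.0) + alpha
    return out

def precision(m, omega, include_empty=False):
    # Equation 7 (assumed normalisation): mass on small, specific hypotheses counts as precise.
    # include_empty=True adds m(empty set) as in Equation 8; the normalised form of
    # Equation 9 is not shown here.
    p = sum(v * (len(omega) - len(a)) / len(omega) for a, v in m.items() if a)
    return p + m.get(frozenset(), 0.0) if include_empty else p

omega = frozenset("abcd")
m_fused = {frozenset("a"): 0.7, omega: 0.2, frozenset(): 0.1}   # running fused state
m_meas = {frozenset("b"): 0.6, omega: 0.4}                      # incoming measurement

p = precision(m_fused, omega)                 # how hard the existing assessment is to influence
m_meas_d = discount(m_meas, p, frozenset())   # Equation 6: feeds the disjunctive rule
m_meas_c = discount(m_meas, p, omega)         # Equation 5: feeds the conjunctive rule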

The method in its entirety, for a sensor-based application, thus proceeds as follows:

Steps:

1. Set up the fused state, mfused, with any prior knowledge, or as ignorant if no prior knowledge exists;

2. Receive a (new) measurement from a sensor;

3. Put the measurement into the powerset mmeasurement;

4. Work out the precision associated with mfused using an appropriate one of Equations 7-9;

5. Discount mmeasurement by an amount derived from the precision determined in Step 4, using Equation 6, to get mmeasurementd;

6. Discount mmeasurement similarly, using Equation 5, to get mmeasurementc;

7. Disjunctively combine mfused with mmeasurementd to get mfusedd using Equation 3;

8. Conjunctively combine mfused with mmeasurementc to get mfusedc using Equation 4;

9. Combine mfusedd and mfusedc with the arithmetic mean operator, or other suitable operator, from Equation 2 to get a new mfused;

10. Return to Step 2, if there are still data to be processed.

Steps 4-6 are a significant part of the method and can be known as Dynamic Discounting.
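Putting the steps together, one pass of the loop might be sketched in Python as below. This is an assumption-laden illustration rather than the application's implementation: the world, the example measurements, the choice α = p for how precision drives the discount, and every identifier are invented for the example; only the ordering of the ten steps follows the list above.

from itertools import product

def combine(m1, m2, op):
    # Equations 3 and 4: sum m1(B)*m2(C) over every pair (B, C) with op(B, C) == A.
    out = {}
    for (b, vb), (c, vc) in product(m1.items(), m2.items()):
        a = op(b, c)
        out[a] = out.get(a, 0.0) + vb * vc
    return out

def discount(m, alpha, sink):
    # Equations 5 and 6: scale by (1 - alpha) and move alpha to the chosen sink set.
    out = {a: (1 - alpha) * v for a, v in m.items()}
    out[sink] = out.get(sink, 0.0) + alpha
    return out

def precision(m, omega):
    # Equation 7 (assumed normalisation), ignoring any mass on the empty set.
    return sum(v * (len(omega) - len(a)) / len(omega) for a, v in m.items() if a)

def grp1_step(m_fused, m_meas, omega):
    p = precision(m_fused, omega)                              # Step 4
    m_d = discount(m_meas, p, frozenset())                     # Step 5 (Equation 6)
    m_c = discount(m_meas, p, omega)                           # Step 6 (Equation 5)
    fused_d = combine(m_fused, m_d, frozenset.union)           # Step 7 (Equation 3)
    fused_c = combine(m_fused, m_c, frozenset.intersection)    # Step 8 (Equation 4)
    keys = set(fused_d) | set(fused_c)                         # Step 9 (Equation 2)
    return {a: 0.5 * (fused_d.get(a, 0.0) + fused_c.get(a, 0.0)) for a in keys}

omega = frozenset("abcd")
m_fused = {omega: 1.0}                                         # Step 1: start ignorant
measurements = [                                               # hypothetical sensor readings
    {frozenset("a"): 0.6, frozenset("ab"): 0.3, omega: 0.1},
    {frozenset("a"): 0.7, omega: 0.3},
    {frozenset("b"): 0.5, omega: 0.5},
]
for m_meas in measurements:                                    # Steps 2, 3 and 10
    m_fused = grp1_step(m_fused, m_meas, omega)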

Figure 2 shows an application of the method to the identification of a vehicle moving along a path V. Sensors S1, S2, ... are scattered over a terrain through which vehicles and personnel are expected to pass. The sensors can simply be proximity sensors, or they can give more sophisticated information about a passing vehicle. They pass their measurement data to a central control (which can itself be incorporated in one of the sensors) from which the speed and perhaps direction of travel of the vehicle can be estimated. Individual readings might be compatible with the vehicle being, say, a pedestrian, a bicycle or a car, but some will be much less probable. On the basis of many readings a best measurement can be obtained. If the system has a reasonably certain identification, a new measurement that is inconsistent with this conclusion does not disturb the consensus greatly, and it can be concluded (for instance) that a different vehicle has been detected from the one previously measured, or that the sensor has malfunctioned.

In summary, GRP1 is a general-purpose method for fusing independent measurements. It is intended for use in iterative situations where information relating to a target, object of measurement or event is received over time, e.g. from distributed sensors, and a belief about its true nature is continually updated. It is also well suited to situations where the powerset being sensed is not fully understood. Example applications can be:

Target Classification - taking information from radar (etc.) sensors;

Behaviour Classification - taking information from accelerometers on a human;

Stress analysis - taking the readings from biomedical sensors on a human;

Systems welfare - receiving information on the status of a system;

Medical Diagnostics - for instance, if a patient has symptoms a, b, and c, what is the diagnosis; or if an MRI scan suggests condition a and an X-ray scan suggests condition a or b, what is the diagnosis?

Sensor Reliability Assessment;

Diagnostics within machinery, such as cars, factories etc.;

Combining weather measurements and predictions;

Combining the evidence from a number of sensors, e.g. for controlling a machine.

As can be seen, GRP1 is only limited by the types of information that can be sensed or collected and presented to it. Other combination operators are aimed at combining more than one source of information in a collective manner. The method is aimed at recursive and iterative use where information is received over time. Methods of the invention thus:

1. Allow for iterative and recursive fusion of information;

2. Do not remove the empty set, which is an important measure (this allows open-world operation);

3. Dynamically adjust their own fusion parameters depending on the confidence of the system. This can create memory in the system.