

Title:
ELECTRONIC SYSTEM FOR CLASSIFYING OBJECTS
Document Type and Number:
WIPO Patent Application WO/1991/017525
Kind Code:
A1
Abstract:
A frequency-domain pattern-recognition system is provided for the real-time classification of diverse populations of objects where the classification of a sub-set of the population has been predetermined. Adjustment of system parameters is automatically performed during an initial training phase using the sub-set objects to optimise, with respect to classification accuracy and speed, (i) the match between the predetermined and the assessed classifications and (ii) the distinction between the assessed classes. Frequency-domain vectors characteristic of each class are extracted from frequency-domain transforms of time-domain data derived from the objects and are stored for operational use, together with their associated parameter settings. In operation, a microprocessor controller (30) loads parameter settings, for the population of objects to be classified, from a preset memory (34) into a time-domain image capture and preprocessing circuit (18), the transform vector generator (26) and a comparator circuit (28). Time-domain images of objects (10) passing on a production line (12) are captured, digitised and preprocessed (in circuit 18) and fed to a transform vector generator (26), the output of which is compared by a comparator (28) with stored class vectors to identify the class to which the object under inspection should be assigned. Adjustment of system parameters, vector extraction and vector comparison may be placed under the control of artificial neural networks, the parameters of which are, in turn, determined by the microprocessor controller.

Inventors:
BELILOVE JAMES ROBERT (US)
GLOVER DAVID EUGENE (US)
HEKKER ROELAND MICHAEL THEODOR (US)
WRATHALL EDWARD (US)
BUCK ROBERT DAVID (US)
Application Number:
PCT/AU1991/000183
Publication Date:
November 14, 1991
Filing Date:
April 30, 1991
Assignee:
IMPACQ TECHNOLOGIES INC (US)
GRANT PAUL AINSWORTH (AU)
International Classes:
G06N3/00; G06N99/00; G06F15/18; G06T1/00; G06V30/194; G06T7/00; (IPC1-7): G06F15/00; G06K9/68
Foreign References:
GB 2140603 A, 1984-11-28
EP 0205628 A1, 1986-12-30
US 4682365 A, 1987-07-21
US 4878736 A, 1989-11-07
US 4881270 A, 1989-11-14
Attorney, Agent or Firm:
PAUL A. GRANT AND ASSOCIATES (Fisher, ACT 2611, AU)
Claims:
THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A system for automatically classifying objects into predetermined classes, comprising:
* input means for receiving class-indicator signals signifying the predetermined classification of objects,
* transform means adapted to receive time-domain signals derived from objects and to generate and output frequency-domain transform-vectors corresponding to the objects,
* classifier means for receiving said transform vectors and adapted to assess the classification of the corresponding objects and to generate output signals indicative of said assessed classification,
* comparator means for receiving and comparing the respective predetermined and assessed classifications of each object and adapted to generate an output indicative of the quality of the match attained between said classifications, and
* control means for receiving the output of said comparator means and adapted to iteratively adjust system parameters to successively improve said match attained between the predetermined and the assessed classes of the objects upon repeated assessment of the classification of the same objects,
whereby the system is thereby set up for the classification of similar objects of unknown classification.

2. A system according to claim 1 wherein said transform means includes a transform generator for generating frequency-domain transforms of said time-domain signals, said transform generator including parameters adjustable by said control means to vary the transform algorithm and/or the transform size.

3. A system according to claim 1 or 2 wherein said transform means includes a transform generator for generating frequency-domain transforms of said time-domain signals, and a feature extractor for receiving said transforms and adapted to derive transform vectors therefrom, said feature extractor including parameters adjustable by said control means to vary the manner in which said vectors are derived from the transforms and/or to vary the number of different vectors derived from each transform.

4. A system according to claim 3 including artificial neural network, fuzzy-logic and/or crisp-logic algorithms for controlling the manner in which said vectors are derived, said network having parameters adjustable by said control means.

5. A system according to any preceding claim wherein said classifier means includes parameters adjustable by the control means to effect changes in the classification strategy employed by said classifier.

6. A system according to claim 5 including artificial neural network, fuzzy-logic and/or crisp-logic algorithms for determining said classification strategy, said network having parameters adjustable by said control means.

7. A system according to any preceding claim wherein said comparator means includes parameters adjustable by the control means to effect changes in the comparison strategy employed by said classifier.

8. A system according to claim 7 including artificial neural network, fuzzy-logic and/or crisp-logic algorithms for controlling said comparison strategy, said network having parameters adjustable by said control means.

9. A system according to any preceding claim having preconditioning circuit means connected to output said time-domain signals to said transform means, the preconditioning circuit means including parameters adjustable by said control means to modify the grey-scale of the pixels, the number of pixels or the pattern noise comprising said signals.

10. A system according to any preceding claim wherein the control means is adapted to iteratively adjust the system parameters to minimise the time taken to assess the classification of the objects, given a predetermined level of class-match to be attained.

11. A system according to any preceding claim wherein the control system includes artificial neural network, fuzzy-logic and/or crisp-logic algorithms for effecting the adjustment of said parameters.

12. A method for classifying the members of a population of objects into predetermined classes, the class membership of a sub-set of the objects having been predetermined, the method including the steps of:
* deriving frequency-domain transform vectors of time-domain signals derived from the sub-set objects,
* comparing the transform vectors to assess the classification of the respective sub-set objects,
* determining the match between the assessed and the predetermined classification of the sub-set objects,
* iteratively adjusting system parameters to improve the match between the assessed and the predetermined classification of the sub-set objects upon repetition of the preceding steps, and
* assessing the classification of the remainder of the objects in the population while retaining the parameter settings corresponding to said improved match.

13. A method according to claim 12 including the steps of:
* deriving frequency-domain transforms from said time-domain signals under control of at least one adjustable system parameter, and
* deriving said transform vectors from said transforms.

14. A method according to claim 12 or 13 including the steps of:
* deriving frequency-domain transforms from said time-domain signals, and
* deriving said transform vectors from said transforms under control of at least one adjustable system parameter.

15. A method according to any one of claims 12 to 14 including the steps of controlling the derivation of said vectors by means of the operation of artificial neural network, fuzzy-logic and/or crisp-logic algorithms and/or by means of at least one adjustable system parameter.

16. A method according to any one of claims 12 to 15 including the step of comparing said transform vectors by means of the operation of artificial neural network, fuzzy-logic and/or crisp-logic algorithms and/or by means of at least one adjustable system parameter.

17. A method according to any one of claims 12 to 16 including the step of determining said match by means of the operation of artificial neural network, fuzzy-logic and/or crisp-logic algorithms and/or by means of at least one adjustable system parameter.

18. A method according to claim 16 including the steps of determining said match by means of artificial neural network, fuzzy-logic and/or crisp-logic algorithms and, in turn, controlling said network by at least one adjustable system parameter.

19. A method according to any one of claims 12 to 18 including the step of iteratively adjusting the system parameters to minimise the time taken to achieve a predetermined level of class-match.
Description:
"ELECTRONIC SYSTEM FOR CLASSIFYING OBJECTS"

TECHNICAL FIELD

This invention relates to systems and methods for automatically classifying objects into pre-defined classes. More particularly, it is concerned with classification systems that employ frequency-domain processing of signals derived from the objects. These signals are typically derived from the illumination or interrogation of the objects with acoustic and/or electromagnetic signals, but they may also be derived from sensors that measure strain, pressure, vibration or other phenomena associated with the objects.

The systems are suited to real-time "machine vision" in a manufacturing or packaging environment where video signals derived from the objects are processed for quality-control or machine-control purposes. However, the invention has many applications outside of manufacturing; for example, the recognition of abnormal cell patterns in cytological specimens, identification of bacterial colonies, quantification of insect damage, fragmentation and varietal contamination in grain samples, and the recognition of electrophoresis patterns for protein or genetic analysis. It may also be applied to the classification of scenes such as photographs of solar flares or clouds; or, to surveillance and security systems where pre-determined changes in a scene are to be recognised. In this specification, therefore, the 'object' to be classified may include any form of scene or signal pattern.

Signals derived from such scanning or interrogation of objects are typically of the time-variable or "time-domain" type and can be processed by pixel-based pattern recognition methods. However, it is well known that such time-domain signals can be subjected to Fourier analysis and transformed into "frequency-domain" signals representing the frequency and/or phase characteristics of the original time-domain signals, though there are many other algorithms for transforming time-domain signals into the frequency-domain. Like the time-domain signal itself, its frequency-domain transform can be represented in a multi-dimensional format or image that is amenable to the application of pattern recognition techniques to discriminate between objects. This invention is concerned with such frequency-domain pattern-recognition (FDPR) techniques.
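
By way of illustration only, the following minimal sketch (written in Python with NumPy, neither of which forms part of this disclosure) shows how a digital 2-D Fourier transform may be used to derive such a frequency-domain representation from a time-domain image; the function name and the synthetic test image are purely illustrative assumptions.

    import numpy as np

    def frequency_domain_transform(image):
        """Return the magnitude and phase of the 2-D Fourier transform of a
        time-domain image, with the zero-frequency term shifted to the centre."""
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        return np.abs(spectrum), np.angle(spectrum)

    # A 64 x 64 synthetic "object": a bright square against a dark background.
    image = np.zeros((64, 64))
    image[24:40, 24:40] = 1.0
    magnitude, phase = frequency_domain_transform(image)
    print(magnitude.shape, phase.shape)   # (64, 64) (64, 64)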

Finally, while classification systems formed in accordance with this invention are suitable for 'real-time' operation because they can operate fast enough for use in many on-line control situations, their application is by no means confined to real-time control.

BACKGROUND TO THE INVENTION

Real-time electronic sorting and grading systems for classifying objects according to simple characteristics such as size, colour and weight by analogue techniques that do not involve image processing or pattern recognition are well known and widely used, but they are generally not suitable where discrimination must be based on subtle and complex characteristics. In such cases, some form of digital signal processing and pattern recognition is required, but these methods are so computationally intensive that either they must be inflexibly tailored to a few precisely-defined discrimination criteria, or they cannot be used in real-time situations at acceptable cost.

Whilst it is also known that many subtle discrimination tasks can be more readily accomplished using FDPR techniques than by using time-domain image processing methods, the additional task of deriving the frequency domain transform of the time domain image merely adds to the already excessive computational burden.

In short, a computational impasse has been reached in attempting to develop real-time electronic object classification systems of value in real-life inspection tasks where the discrimination criteria may be clear to human operators but are difficult to translate into appropriate software algorithms, or where the inspection task itself changes frequently. While a significant part of the computational burden lies in the image-processing function (whether time-domain or frequency-domain, on-line or off-line), perhaps the greater part is accounted for by application-specific computer software written to effect the specific classification or discrimination function.

The use of optically-generated transforms and FDPR has been proposed as a means of circumventing this impasse. Frequency-domain transforms can be generated almost instantaneously by optical methods without the need for digital computation, and thus offer ready access to the more efficient FDPR discrimination techniques. Because of the character of the transform, simple masks can be used to select only those portions of the transform image which need to be examined (say, for average intensity) to effect the desired discrimination, thus further reducing the computational burden.

A system of this type was disclosed by Pernick et al in Applied Optics for January 1978 (Vol 17, pp 21-34) for "The screening of cervical cytological samples using coherent optical processing". The samples were presented (in turn) on glass microscope slides, illuminated with coherent light and the diffraction images so formed were transformed optically and masked and detected at the Fourier plane by the use of a commercially available photo-electric detector having both radially spaced ring-like detector elements and circumferentially spaced segment-like detector elements. (See US 1972 patent to Nicholas George). A combination of detector elements (ie, a masking pattern) was selected under manual control to best suit the task of discriminating between normal and malignant cells.

The application of such FDPR techniques to real-time, industrial machine vision systems was disclosed by the Global Holonetics Corporation (GHC) in US Patent No. 4,878,736. Instead of illuminating the objects under inspection with coherent light and capturing sufficient of the reflected or transmitted light to generate a satisfactory image for use in the optical transform system, the GHC system employed a spatial light modulator (SLM) to generate a transparent-image display from the output of a video camera. This image could then be illuminated by coherent light and projected through the optical transform system. Like Pernick et al, a wedge/ring masking technique was employed, but this was done by applying a fixed set of 32 different masks in succession to the transform image of each object and averaging the intensity of light transmitted through each, thereby generating a characteristic "signature" of varying intensity for each object under inspection. Points of similarity between signatures for objects of the same class, and points of difference for objects of different classes, could thus be identified by a suitable discriminator algorithm.
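
Purely by way of illustration, the following sketch (Python with NumPy; a digital FFT stands in for the optical transform, which is an assumption rather than part of the GHC disclosure) shows how a wedge/ring-style signature of this general kind might be computed; the choice of sixteen rings and sixteen wedges, and all function names, are illustrative only.

    import numpy as np

    def wedge_ring_signature(image, n_rings=16, n_wedges=16):
        """Average the FFT magnitude over ring-shaped (radial) and wedge-shaped
        (angular) regions to form a fixed-length signature for the image."""
        magnitude = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        h, w = magnitude.shape
        y, x = np.indices((h, w))
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        radius = np.hypot(y - cy, x - cx)
        # Wedges span 0..pi because the transform of a real image is symmetric.
        angle = np.mod(np.arctan2(y - cy, x - cx), np.pi)
        ring_edges = np.linspace(0.0, radius.max() + 1e-9, n_rings + 1)
        wedge_edges = np.linspace(0.0, np.pi + 1e-9, n_wedges + 1)
        rings = [magnitude[(radius >= ring_edges[i]) & (radius < ring_edges[i + 1])].mean()
                 for i in range(n_rings)]
        wedges = [magnitude[(angle >= wedge_edges[i]) & (angle < wedge_edges[i + 1])].mean()
                  for i in range(n_wedges)]
        return np.array(rings + wedges)   # 32 values: one per mask, as in the scheme above

    signature = wedge_ring_signature(np.random.rand(64, 64))
    print(signature.shape)   # (32,)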

Application of the GHC system to specific inspection problems was disclosed in a series of subsequent papers: "An Optical Feature Extractor for Machine Vision Inspection", D Clark, Proc. SME 'Vision 87' Conference, 1987; "Optical Feature Extraction for High Speed Inspection", D Clark and D Casasent, SPIE Advances in Intelligent Robot Systems Conference, November 1987; and "Neural Nets in Automated Inspection", D. Glover, Synapse Connection, June/July 1988. The last-mentioned paper disclosed the use of an artificial neural network (ANN) to compare object signatures and, thus, allow the system to be trained by an operator, instead of the use of computer programs dedicated to each classification task. The training essentially involved the operator presenting objects to the system, manually indicating the class to which they belonged and checking the system's ability to make the necessary distinction until satisfactory discrimination was achieved or the attempt was abandoned.

Whilst this art offers an important line of attack on the 'computational impasse' faced in the development of real-time inspection systems, it has not proved to be practical. The optical computers are essentially analogue instruments with great sensitivity to temperature gradients, vibration and dust; whereas high-definition SLMs of the type required are costly, delicate, slow in operation and of limited definition. These problems are judged by the present applicant to seriously limit the inherent discriminatory power of the FDPR technique. But perhaps of most importance, these systems, like their digital predecessors, suffer from a lack of flexibility in one important respect: the discrimination process is inflexible in that it is essentially the same for easy or difficult, appropriate or inappropriate, tasks.

OBJECTIVES OF THE INVENTION

It is therefore the general object of the present invention to provide improved methods and electronic equipment for the automatic classification of objects. More particularly, but without limitation thereto, it is desired to provide a real-time FDPR object classification system suitable for industrial use which is capable of adapting its image-processing and discrimination procedures to a wide range of tasks under the control of artificial neural networks or similar control devices. Systems of the present invention are thus intended to circumvent the 'image-processing impasse' that has characterised digital FDPR systems without incurring the penalties associated with optical transform systems, while maintaining flexibility and adaptability.

OUTLINE OF THE INVENTION

This invention is based upon the realisation that, because an FDPR system deals with global information about an object, it should be possible for the system itself to determine whether changing one or more of its parameters helps to discriminate between objects or not. In that event, the system should then be able to optimise itself for any given classification task with respect to speed and level of discrimination. From one aspect, therefore, the invention comprises a system for automatically classifying objects into predetermined classes, comprising:
* input means for receiving class-indicator signals signifying the predetermined classification of objects,
* transform means adapted to receive time-domain signals derived from objects and to generate and output frequency-domain transform-vectors corresponding to the objects,
* classifier means for receiving said transform vectors and adapted to assess the classification of the corresponding objects and to generate output signals indicative of said assessed classification,
* comparator means for receiving and comparing the respective predetermined and assessed classifications of each object and adapted to generate an output indicative of the quality of the match attained between said classifications, and
* control means for receiving the output of said comparator means and adapted to iteratively adjust system parameters to successively improve said match attained between the predetermined and the assessed classes of the objects upon repeated assessment of the classification of the same objects,
whereby the system is thereby set up for the classification of similar objects of unknown classification. The control means may also be adapted to iteratively adjust the system parameters to minimise the time taken to assess the classification of the objects, given that an acceptable level of class-match to be achieved is specified, or is built-in.

The invention also comprises a method for classifying the members of a population of objects into predetermined classes, the class membership of a sub-set of the objects having been predetermined, the method including the steps of:
* deriving frequency-domain transform vectors of time-domain signals derived from the sub-set objects,
* comparing the transform vectors to assess the classification of the respective sub-set objects,
* determining the match between the assessed and the predetermined classification of the sub-set objects,
* iteratively adjusting system parameters to improve the match between the assessed and the predetermined classification of the sub-set objects upon repetition of the preceding steps, and
* assessing the classification of the remainder of the objects in the population while retaining the parameter settings corresponding to said improved match.

Four levels of automatic optimisation to improve discrimination and/or speed are envisaged in accordance with this invention, and they may be employed alone or in any desired combination: conditioning the time-domain signal; selecting appropriate transforms or transform parameters for the particular task; selecting the most rewarding parts of the resultant transform for creating class signatures; and, adopting efficient discrimination strategies for comparing object and class signatures to assess the class of the object. Preferably, all these optimisation functions are performed adaptively so that the system can be rapidly trained to discriminate between different classes of similar objects and so that the speed and accuracy of discrimination can be matched to the demands of the classification task in hand.
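
These four levels can be pictured, purely for illustration, as a small space of adjustable settings over which the control means searches. The following Python dictionary is an assumed example only; none of the parameter names or value ranges are taken from this disclosure.

    # Hypothetical search space spanning the four optimisation levels described above.
    parameter_space = {
        # 1. Conditioning the time-domain signal.
        "greyscale_lut":   ["linear", "equalise", "threshold"],
        "image_size":      [64, 128, 256],
        # 2. Choice of transform and transform size.
        "transform":       ["fourier", "hadamard", "haar"],
        # 3. Parts of the transform used to create class signatures.
        "n_rings":         [8, 16, 32],
        "n_wedges":        [8, 16, 32],
        # 4. Discrimination strategy for comparing object and class signatures.
        "classifier":      ["nearest_class_vector", "perceptron"],
        "match_threshold": [0.80, 0.90, 0.95],
    }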

The conditioning means for the time domain signal may, for example, allow control of the grey-scale through the use of a frame-grabber and suitable look-up-tables (LUTs), while the transform means may include a variable transform generator which allows the type and/or size of the transform generated from a given time-domain input to be controlled. The feature extractor means (by which the rewarding parts of the transform are identified) and, possibly, the discriminator/classifier, are preferably placed under the control of an ANN for the generation of class signatures and for the comparison of object signatures with class signatures.
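
As a minimal sketch of the grey-scale conditioning step (assuming 8-bit pixel data and using Python with NumPy; the gamma and inversion options are illustrative assumptions, not features recited in this disclosure), a software look-up-table may be applied as follows before the image is passed to the transform generator.

    import numpy as np

    def apply_greyscale_lut(image_8bit, gamma=1.0, invert=False):
        """Remap 8-bit pixel values through a 256-entry look-up table, emulating
        the input-LUT stage of a frame-grabber (gamma correction, optional inversion)."""
        levels = np.arange(256) / 255.0
        lut = np.clip(255.0 * levels ** gamma, 0, 255).astype(np.uint8)
        if invert:
            lut = 255 - lut
        return lut[image_8bit]

    frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
    conditioned = apply_greyscale_lut(frame, gamma=0.5)   # lighten dark regions before transforming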

As will become clear from the following description and claims, many variations and modifications to such systems and methods are possible without departing from the scope of the present invention.

DESCRIPTION OF EXAMPLES

Having generally portrayed the nature of the present invention, some exemplary embodiments thereof will now be described by way of illustration. In the following description, reference will be made to the accompanying drawings, in which:

Figure 1 is a general block diagram of a first embodiment of the invention, comprising a system for the real-time inspection of objects on a production line;

Figure 2 is a more detailed block diagram of a second embodiment,

Figure 3 is a detailed block diagram of a signal conditioning circuit suitable for use in the system of Figure 2, and

Figure 4 is a general functional block diagram of a digital signal processing board that is suitable for the implementation of both the controllable transform engine and the feature extraction circuit of Figure 2.

The first embodiment of the present invention is a machine vision system for use in the real-time classification of objects on a production line into one 'pass' class and a few 'reject' classes (according to the type of defect identified). Such systems need to be able to classify objects at rates of around a few objects per second. In this first example, the objects are assessed on the basis of visual data only, a visual image being captured by a video camera.

Referring now to Figure 1, the object 10 to be classified is carried on conveyor 12 through a classification station 14 where it is illuminated by strobe-lamp 16 and its image is generated by camera 17 and captured, digitised and adjusted by circuit 18, the output of circuit 18 being fed as an input to classifier circuit 20. The output 22 of classifier 20 designates the class to which the object is judged to belong and may be used to divert object 10 accordingly at a later time (by means not shown). To ensure a standardised view of each object at position 14, a simple 'in-position' sensor 24 is employed, the state of which is signalled on line 25 to classifier circuit 20.

As will be seen from Figure 1, classifier circuit 20 basically comprises a transform vector generator 26 connected to receive the digitised output of image capture circuit 18 and connected to direct its output signatures to a comparator 28. Vector generator 26 is adjustable to output one of a range of different signatures for each time-domain input signal so as to select, emphasise and/or de-emphasise features of the basic (unmodified) frequency-domain transform of that input time-domain signal. It can be controlled by a microprocessor control unit 30 in two alternative ways: either directly via control line 32 in a continuous manner, or indirectly, by selected presets from memory 34. Similarly, parameters of comparator 28 can either be set directly by control unit 30 or indirectly from presets stored in memory 34.

Control unit 30 is also connected to receive the 'in-position' signals from sensor 24, to fire strobe 16, synchronise and/or control image capture circuit 18, set threshold levels in detector 28 and receive output signal 22 indicating the assessed class of the in-position object 10. Finally, it is itself controlled by external signals via input bus 36 from a computer (not shown) running under a stored program or, simply, from a manual control panel (not shown). This permits the system to be operated in two distinct modes or phases: a set-up or training mode in which class signatures are derived and parameter presets are generated and stored in memory 34; and, an operational mode or phase in which objects of unknown class are classified.

In the set-up phase, raw images from camera 17 of objects of known class are recorded (together with their class designations) in a memory (not shown) by controller 30, the class designations being signalled to control unit 30 via input bus 36. Controller 30 then transfers successive recorded object images through circuit 18 for digitisation (if necessary) and normalisation to circuit 26 for transformation and vector generation. Either under control of an external heuristic program, or simple internal routines, controller 30 varies the parameters of vector generator 26 and comparator 28 under a search routine until output 22 consistently signifies the correct class of the object at inspection station 14. These settings are then recorded in memory 34. This will result in the creation of a class 'signature envelope' (ie, a class-signature) within which all the signatures of objects in that class will fall. The search routine can also be designed to ensure that the class-signatures are small and distinct from one another.
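
The following sketch (Python with NumPy; the margin parameter and all names are illustrative assumptions) indicates one simple way such a signature envelope could be recorded and tested: as per-component lower and upper bounds taken over the training signatures of one class.

    import numpy as np

    def build_signature_envelope(signatures, margin=0.05):
        """Record a class 'signature envelope' as per-component lower and upper
        bounds (widened by a small margin) over one class's training signatures."""
        low, high = signatures.min(axis=0), signatures.max(axis=0)
        slack = margin * (high - low + 1e-12)
        return low - slack, high + slack

    def falls_within(signature, envelope):
        low, high = envelope
        return bool(np.all((signature >= low) & (signature <= high)))

    train = np.random.rand(20, 32)                 # e.g. 20 signatures of 32 values each
    envelope = build_signature_envelope(train)
    print(falls_within(train[0], envelope))        # True: training signatures lie inside their envelope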

In the operational phase, the parameter presets are down-loaded from memory 34 to correctly set up circuits 26 and 28 and similar objects 10 (but of unknown class) are fed in succession along conveyor 12, and their transform signatures are generated and compared with the stored class signatures in order to determine their (assessed) classification.

If desired, parameters of image capture unit 18 may also be controlled to pre-process the time-domain signals to enhance system performance, the settings of these parameters also being recorded in preset memory 34 along with those for circuits 26 and 28. The paths for the necessary control signals from the controller 30 and memory 34 are shown in dotted lines in Figure 1. One important parameter that may be varied in this way is the grey-scale of the captured digital image.

The classification system of the second embodiment (to be described with reference to Figures 2 to 5) is structured and operates in the same basic manner as that of Figure 1 but it is not tailored to production line inspection.

Referring to Figure 2, the operation of the system in training or set-up mode is as follows. Interrogation of an object to be classified (not shown) yields analogue data that is input on line 100 to A/D converter 102, and digital data that is input on line 104 to digital interface 106, the raw time-domain data from these circuits being transferred to data memory 108 on line 109 upon the receipt of an input trigger signal on line 110 to host processor interface 112 and the resultant activation of enable signals on lines 114 and 116 to circuits 106 and 102 respectively, together with a write signal on write/read line 117 to memory 108 from interface 112.

Activation of enable lines 114 and 116 is also used to prevent further data from being accepted into circuits 102 and 106 until transfer to memory 108 is complete.

The known class of the object under interrogation is signalled via input line 118 to interface 112 and an appropriate tag is transferred to memory 108 on line 120 where it is recorded with the appropriate image data file.

It should be noted that controller 300 is set up for the training mode by the host processor 122 via interface 112 and mode-designation line 142 and control line 144. At the outset of the training phase, switch 140 is activated to direct data output from the classifier via line 152 to comparator 282 from the interface 112.

After all data from the training objects has been recorded (with appropriate class tags), host processor 122 effects the sequential transfer of the time-domain data sets via a preprocessing circuit 124 (the adjustable parameters of which are set to their default values) for conversion to the frequency-domain by transform engine 126 (also in its default condition). The resultant transforms are processed by feature extractor 128 (also with its parameters in default condition) which generates a signature or vector that is, in turn, transferred to a classifier circuit 130 where it is stored until all signatures have been collected (together with their assigned predetermined class tags), at which time controller 300 initiates the iterative trial classification of the vectors and their trial matching with their respective tagged predetermined classes.

At the end of the first iteration, classifier 130 outputs the discrimination data to comparator 282 for comparison with the default discrimination data contained therein. If the level of discrimination is sufficient, the default data is replaced by the derived discrimination data, otherwise it is maintained. The result of the comparison is signalled to controller 300 which signals interface 118 accordingly and the same set of data is again read from memory 108 to begin the second iteration, but before that occurs, controller 300 retrieves a set of parameter settings from preset memory 340 and sets circuits 124 to 130 accordingly. The resulting discrimination data is again compared with that in comparator 282 and again replaces the existing data if it indicates better performance. This procedure is repeated until a default or user-specified standard of discrimination has been achieved (or all possible solutions have been exhausted). The user may also impose conditions relating to the maximum number of iterations or the time taken.
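
Schematically, and purely as an assumed illustration (Python; evaluate_discrimination is a placeholder for running the conditioning, transform, feature-extraction and classification chain over the tagged training data and returning a discrimination score), the iteration described above amounts to a keep-the-best search with early stopping:

    def train(parameter_presets, evaluate_discrimination, target=0.95, max_iterations=100):
        """Try successive preset parameter sets, retaining whichever yields the best
        discrimination, and stop once a target standard or an iteration limit is reached."""
        best_params, best_score = None, 0.0
        for iteration, params in enumerate(parameter_presets):
            if iteration >= max_iterations:
                break
            score = evaluate_discrimination(params)
            if score > best_score:          # better discrimination: replace the stored data
                best_params, best_score = params, score
            if best_score >= target:        # default or user-specified standard achieved
                break
        return best_params, best_score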

When a combination of system parameters has been found which yields the desired standard of discrimination, they are loaded into the preset memory 340 (together with a flag identifying the task), the memory 340 being controlled from interface 112 by write/read line 148, line 146 being provided to allow preset data to be down-loaded into memory 340 via interface 146. The collection of presets for various tasks built up in this way may be conveniently recorded by host processor 122. When a classification task is to be performed on a population of objects for which presets have been recorded, the presets are down-loaded from host processor 122 into the preset memory 340 and then applied to circuit modules 124 to 130, and the system is otherwise readied for operation. When input trigger signal 110 is received to indicate an object in position, the time-domain data relating to that object is loaded via memory 108 for processing through optimised circuits 124 to 128. The resultant class-assignment and object vector output from classifier 130 are reported to the host processor for recording and via system interface 132 to initiate the necessary external action.

The preprocessor circuit 124 illustrated in Figure 3 is a framegrabber board that is commercially available from Data Translation (USA), but other similar commercial products are also suitable. Incoming video signal 350 is fed to a video A/D module 352 (which, in fact, performs the function of circuit 102 in Figure 2) and to a phase-locked loop 351 that is used to synchronise the remainder of the circuitry to the incoming video signal. Once digitised by converter 352, an input LUT operation can be performed on the digitised signal by the LUT 353, after which the resulting image can be stored in one of the two framegrabbers 354 and 355. Before displaying the data on a video monitor, an output LUT operation may be performed by output LUT module 356. The digitised signal may then be re-converted to analogue form by D/A converters 357 (one for each colour). It is possible to input digitised images through high-speed digital interface 361 and to transfer them from the frame-grabber to an external device via output port 360. The activities of the framegrabbers are controlled from a host computer (not shown) via the I/O control 358 and host interface bus 359.

Figure 4 is a functional block diagram of a general purpose DSP board (Zoran ZR34161 vector signal processor) manufactured by the Zoran Corporation of the USA. Such boards allow the kernels for a variety of transformations, such as Fourier, Hadamard, Haar or Mellin, to be up-loaded from the controller 300 or retained in on-board memory and activated by the controller. (It would of course be possible, but expensive, to employ a separate board for each transform algorithm).

Image data from data memory 108 can either be loaded through interface 415 or through a special high speed data interface (not shown) directly into memory 408.

The program instructions can also be loaded through the controller interface into the program memory 409. Controller 300 will load the 80286 control register 411 with the information necessary for the 80286 microprocessor 410 to perform the transformation search procedure and load the desired transform kernel via interface bus 401 and internal bus 413 into the board's program memory. In parallel, it will also load the control register via the 80286 bus 414. Then, the data to be processed is loaded into dual port memory 403 and, once started, the vector signal processor 405 will perform the transformation without intervention of the processor 410, the only function of which is to load the image data and watch the status register 406. After the transformation has been completed, the transformed data can be down-loaded from image memory 408 via the AT bus interface 407 for further processing by the feature extraction module 128.
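
As an illustrative software analogue of such a selectable transform engine (Python with NumPy; only the Fourier and Walsh-Hadamard kernels are sketched, and the normalisation chosen is an assumption rather than a specification of the Zoran board), the kernel may simply be selected by name:

    import numpy as np

    def hadamard_matrix(n):
        """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
        h = np.array([[1.0]])
        while h.shape[0] < n:
            h = np.block([[h, h], [h, -h]])
        return h

    def transform_engine(image, kernel="fourier"):
        """Apply the selected 2-D transform to a square, power-of-two-sized image."""
        n = image.shape[0]
        if kernel == "fourier":
            return np.fft.fft2(image)
        if kernel == "hadamard":
            h = hadamard_matrix(n)
            return h @ image @ h.T / n      # 2-D Walsh-Hadamard transform
        raise ValueError("unknown transform kernel: " + kernel)

    spectrum = transform_engine(np.random.rand(64, 64), kernel="hadamard")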

A variety of known feature extraction algorithms are also available, the most common being the wedge/ring technique used by GHC. Others are Canonical Analysis and Principal Component Analysis, together with derivatives of the latter such as the Karhunen-Loeve transformation. Like the transformation itself, the feature extraction can be implemented on a general purpose DSP board like that of Figure 4.
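
As a minimal, assumed illustration of Principal Component Analysis used as a feature extractor (Python with NumPy; the number of retained components is arbitrary), transform-domain vectors can be projected onto their directions of greatest variance, which is the discrete form of the Karhunen-Loeve expansion:

    import numpy as np

    def pca_features(vectors, n_components=8):
        """Project vectors onto their principal components, keeping the
        n_components directions of greatest variance as the extracted features."""
        mean = vectors.mean(axis=0)
        centred = vectors - mean
        eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centred, rowvar=False))
        order = np.argsort(eigenvalues)[::-1][:n_components]
        basis = eigenvectors[:, order]
        return centred @ basis, mean, basis   # features, plus what is needed to project new vectors

    vectors = np.random.rand(100, 32)          # e.g. 100 wedge/ring signatures
    features, mean, basis = pca_features(vectors)
    print(features.shape)                      # (100, 8)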

Similarly, a wide variety of algorithms and methods are available to implement the classifier 130 of Figure 2. Methods like template matching and linear or optimal discriminant functions can be used to calculate the amount of discrimination that can be obtained with particular data sets. Another class of algorithms is the ANNs, such as Perceptron, Boltzmann or Hopfield nets. Such ANNs can themselves be optimised with regard to the size of the net and the number of connections and layers. As already indicated, ANNs may be used in a variety of locations in the systems of this invention.
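
Purely as an illustrative sketch of one such ANN classifier (Python with NumPy; this is the classical two-class Perceptron learning rule, not the specific network of any embodiment described here, and the synthetic data is an assumption):

    import numpy as np

    def train_perceptron(features, labels, learning_rate=0.1, epochs=50):
        """Classical Perceptron rule for a two-class problem with labels +1 / -1.
        Returns a weight vector with a bias term appended."""
        x = np.hstack([features, np.ones((features.shape[0], 1))])   # append bias input
        w = np.zeros(x.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(x, labels):
                if yi * (w @ xi) <= 0:        # misclassified: move the decision boundary
                    w += learning_rate * yi * xi
        return w

    def classify(features, w):
        x = np.hstack([features, np.ones((features.shape[0], 1))])
        return np.sign(x @ w)

    # Synthetic example: 'pass' objects clustered near +1, 'reject' objects near -1.
    rng = np.random.default_rng(0)
    features = np.vstack([rng.normal(+1.0, 0.3, (50, 8)), rng.normal(-1.0, 0.3, (50, 8))])
    labels = np.array([+1] * 50 + [-1] * 50)
    w = train_perceptron(features, labels)
    print((classify(features, w) == labels).mean())   # training accuracy, close to 1.0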

The comparator 282 can conveniently implement a simple known metric that compares multi-dimensional data and determines whether the incoming data is better than some set-point.
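
One assumed example of such a metric (Python with NumPy; the set-point value is arbitrary) is simply the fraction of objects whose assessed class matches the predetermined class, compared against a stored set-point:

    import numpy as np

    def better_than_setpoint(assessed, predetermined, setpoint):
        """Score the match between assessed and predetermined classes as the
        fraction classified correctly, and report whether it beats the set-point."""
        score = float(np.mean(np.asarray(assessed) == np.asarray(predetermined)))
        return score > setpoint, score

    ok, score = better_than_setpoint([0, 1, 1, 2], [0, 1, 2, 2], setpoint=0.70)
    print(ok, score)   # True 0.75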

Finally, the controller 300 can also be implemented in many different ways known in the art, the most obvious being an exhaustive search of all permutations of parameter settings. A number of commercially available algorithms are suitable, including genetic algorithms, rule-based systems using crisp logic and fuzzy logic algorithms, and even ANNs. (See: "Fuzzy Logic in Control Systems: Fuzzy Logic Controller, Parts I and II", IEEE Transactions on Systems, Man and Cybernetics, Vol. 20, No. 2, pp. 404-418 and 419-435, March/April 1990.)
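
The exhaustive option can be sketched, again purely by way of assumption (Python; parameter_space and evaluate_discrimination are the illustrative placeholders used in the earlier sketches, not elements of this disclosure), as an enumeration of every combination of settings:

    import itertools

    def exhaustive_search(parameter_space, evaluate_discrimination, target=0.95):
        """Enumerate every combination of parameter settings, keep the best one,
        and stop early once the required standard of discrimination is reached."""
        names = list(parameter_space)
        best_params, best_score = None, 0.0
        for values in itertools.product(*(parameter_space[name] for name in names)):
            params = dict(zip(names, values))
            score = evaluate_discrimination(params)
            if score > best_score:
                best_params, best_score = params, score
            if best_score >= target:
                break
        return best_params, best_score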
