Title:
STEREOSCOPIC GAZE CONTROLLER
Document Type and Number:
WIPO Patent Application WO/1999/030508
Kind Code:
A1
Abstract:
An integrated controller for controlling both pan and vergent movements of a stereoscopic vision system, having two cameras, is described. The controller determines motion and direction based on a provided retinal error. Each of the cameras is moved until no retinal error resulting from camera orientation remains. A butterfly controller accepts data input relating to retinal errors from any number of sources and provides a single signal relating to common motion and another signal relating to vergent motion to each control circuit. The circuits are designed to operate in common mode or differential mode, thereby causing sensors to move in different directions when controlled by the same control signals.

Inventors:
GALIANA HENRIETTA L
WAGNER ROSS
Application Number:
PCT/CA1998/001128
Publication Date:
June 17, 1999
Filing Date:
December 04, 1998
Assignee:
UNIV MCGILL (CA)
International Classes:
G01S3/786; H04N13/239; (IPC1-7): H04N13/00
Foreign References:
GB2226923A (1990-07-11)
Attorney, Agent or Firm:
Teitelbaum, Neil (Ontario K1S 5C4, CA)
Claims:
What is claimed is:
1. A method of controlling gaze in a binocular vision system having two imaging devices and at least a plant for moving at least an imaging device comprising the steps of : a) providing to a processor a retinal error relating to location of a target as imaged by each of the two imaging devices; b) providing to the processor a model related to each of the two imaging devices ; and, c) using the processor, determining from both retinal errors absent calculation of the target location in spatial coordinates a control signal related to imaging device motion for reducing the provided retinal errors.
2. A method of controlling gaze in a binocular vision system as defined in claim 1 wherein the control signal comprises two control signals for controlling device motion along a same dimension and within a same plane.
3. A method of controlling gaze in a binocular vision system as defined in claim 2 wherein the controller is a butterfly controller having the following transfer functions.
4. A method of controlling gaze in a binocular vision system as defined in claim 1 wherein the control signal comprises at least three control signals for controlling device motion in each of two dimensions.
5. A method of controlling gaze in a binocular vision system as defined in claim 2 wherein motion of each of the two imaging devices is controlled by a separate plant associated with each imaging device and by another plant associated with a movable platform upon which both imaging devices are disposed and wherein the control signal is provided to some of the plants.
6. A method of controlling gaze in a binocular vision system as defined in claim 5 wherein the control signal is provided to each of the plants.
7. A method of controlling gaze in a binocular vision system as defined in claim 6 wherein the controller comprises a butterfly controller having two input ports and two output ports, the two input ports for receiving each of the two retinal errors; the control signal comprising two signal components, a signal component provided at each of the two output ports.
8. A method of controlling gaze in a binocular vision system as defined in claim 2 comprising the step of : providing another sensor input to the processor wherein the sensor input is used to determine the control signal.
9. A method as defined in claim 8 wherein the processor is in the form of a butterfly controller for simultaneously processing each retinal error and the other sensor input, and for determining a same control signal for provision to each plant.
10. A method of controlling gaze in a binocular vision system as defined in claim 2 comprising the step of : providing another sensor input to the processor; providing a model relating to the sensor to the processor, wherein the sensor input is used to determine the control signal for provision to the plant for moving the imaging devices to reduce retinal errors.
11. A method as defined in claim 2 wherein the step (c) comprises the steps of: c1) using the processor, determining from both retinal errors absent calculation of target location in spatial coordinates a control value related to imaging device motion for reducing the provided retinal errors; and, c2) providing a control signal based on the control value.
12. A method as defined in claim 2 wherein the control signal is calculated in a simultaneous and integrated fashion for both pan and vergence of both imaging devices in a coordinated fashion.
13. A method as defined in claim 2 wherein the step (a) comprises the steps of: a1) providing image data from a first imaging device; a2) providing image data from a second imaging device; a3) determining a first target location within the image data from a first imaging device and a second target location within image data from a second imaging device; and a4) deriving a retinal error related to each imaging device from the determined target locations.
14. A method as defined in claim 2 wherein the processor is in the form of a butterfly controller for simultaneously processing each retinal error and for determining a same control signal for provision to each plant.
15. A method as defined in claim 2 wherein the control signal relating imaging device motion for reducing the retinal error is determined absent calculation of independent control signals for panning and vergence.
16. A method as defined in claim 2 wherein the same control signal is provided to each plant for controlling motion of any platform upon which at least one of the first imaging device and the second imaging device are disposed.
17. A method as defined in claim 2 comprising the steps of : determining a state of imaging device motion; and, when retinal error is substantially small, determining a value based on vergence of the imaging devices, the value related to a distance from the imaging devices to the target.
18. A method as defined in claim 17 comprising the steps of : adjusting a focus of each imaging device in dependence upon the determined value.
19. A method as defined in claim 17 comprising the steps of : using current position and retinal error, determining estimated focus adjustment of imaging devices.
20. A method as defined in claim 19 comprising the steps of: using current position and retinal error, determining butterfly controller gain adjustment; adjusting gains within the butterfly controller based on the determined butterfly gain adjustments.
21. A method as defined in claim 2 comprising the steps of : providing a known angle of vergence between the two imaging devices; adjusting a distance between the imaging devices and the target to reduce retinal error; and, when retinal error is below a predetermined level, providing an indication, whereby the retinal error is below a predetermined level when the imaging devices are a predetermined distance from the target.
22. A method of controlling sensor orientation in a bisensor system including a first sensory device, a second sensory device and a plurality of plants for controlling motion of the sensory devices, the method comprising the steps of : a) providing to a processor sensory errors relating to a location of a target as sensed by each of the two sensory devices; d) providing a model related to each plant from the plurality of plants to the processor; and, e) using a processor having a symmetric topology determining a same output signal for provision to the plurality of plants for controlling the sensory device motion, the control signal related to sensory device motion for reducing the sensory error from said sensory device, the determination in dependence upon the provided model and the provided sensory errors.
23. A method of controlling sensor orientation in a bisensor system as defined in claim 22 wherein the output signal includes a common component output signal and a differential component output signal.
24. A method of controlling sensor orientation in a bisensor system as defined in claim 23 wherein the sensory devices are imaging devices and wherein the differential component of the control signal is used to control one of panning and vergent motion of the sensory devices and the common component of the output signal is used for controlling the other of panning and vergent motion of the sensory devices.
25. A method of controlling sensor orientation in a bisensor system as defined in claim 22 wherein the sensory devices are imaging devices and wherein the processor is a single processor for calculating in an interdependent fashion the output signal related to the sensory device motion for both pan and vergence of the two imaging devices absent calculation of target location in spatial coordinates.
26. A method of controlling sensor orientation in a bisensor system as defined in claim 22 wherein the sensory errors are not based on a world coordinate system.
27. A method of controlling sensor orientation in a bisensor system as defined in claim 22 wherein the sensory devices, the controller and the plants define a feedback loop for accepting sensory errors and providing a control signal for moving the sensory devices in order to reduce the sensory errors, the control signal being provided to the plants until the sensory errors are substantially 0.
28. A method of controlling sensor orientation in a bisensor system as defined in claim 22 comprising the step of : providing sensory data from another sensory device; wherein the two sensory devices, the other sensory device, the controller and the plants define a feedback loop for accepting sensory errors and sensory data and for providing a control signal for moving the sensory devices in order to reduce the sensory errors, the control signal being provided to the plants until the sensory errors are substantially 0.
29. A method of controlling sensor orientation in a bisensor system as defined in claim 27 wherein the feedback loop operates in each of two modes, a first mode for reducing small sensory errors and a saccade mode for reducing sensory errors other than small sensory errors.
30. A method of controlling sensor orientation in a bisensor system as defined in claim 29 wherein during saccade mode the control signal relates to substantially rapid motion of the sensory devices.
31. A method of controlling sensor orientation in a bisensor system as defined in claim 30 wherein the control signal relates to substantially rapid motion including panning and vergence motion.
32. A controller for a vision system having an imaging device associated with a plant for providing motion to the imaging device, the controller comprising: a first processor for controlling saccade phase movement of the imaging device, the first processor having a first input and a first output; a second processor for controlling slow phase movement of the imaging device, the second processor having a second input and a second output; wherein the second processor and the first processor are substantially same and are provided with same input signal and provide an output signal to a same plant.
33. A controller for a vision system as defined in claim 32, wherein the vision system is a binocular vision system having two imaging devices each associated with a plant for providing motion to the imaging devices; wherein the first processor has first inputs and first outputs; wherein the second processor has second inputs and second outputs; and wherein the second processor and the first processor are provided with same input signals and provide output signals to same plants.
34. A controller for a vision system as defined in claim 32 wherein the first processor and the second processor are a same processor having some different gain parameters.
Description:
Stereoscopic Gaze Controller Field of the Invention This invention relates to a binocular controller for an artificial vision system and more particularly to a controller for controlling gaze to reduce retinal errors resulting from imaging a target with each of two imaging devices.

Background of the Invention The evolution of robots has produced machines of increased complexity and of greater ability. Compared to the first commercial models, which were simple blind servants that could be taught to execute pre-programmed movements, the mobile robots of today are built to interact with the world. The first visual systems, which were passive in nature, accepting any data that randomly fell into the field of view, have been replaced by active ones, able to select objects of interest and view them in an intelligent manner.

When designers gave robots the means to move about, they brought to bear issues that previously were irrelevant. One such issue is the co-ordination of an artificial vision system mounted on a mobile platform. At present, solutions to this problem have known drawbacks.

From a somewhat philosophical perspective, the current position of robot designers can be compared to that of Mother Nature in the early stages of biological system design, or evolution. During the course of designing a moving, seeing robot, the designer is faced with the same issues that Mother Nature had to address long ago. On the other hand, the designer is in a favorable position here because he can sneak a peek at existing design strategies (namely nature's) to help guide his own work. It can certainly be argued that designs in nature are not necessarily optimal, but they certainly are robust and serve as a good starting point.

For example, the control of computational vision systems is generally solved using a traditional engineering mindset. Even though knowledge of anatomy and physiology relevant to biological binocular control is now extensive, designers do not use existing natural systems to help guide their work. An artificial vision system is often organized into three modules: (i) an imaging system to acquire images, (ii) an image processing stage, and (iii) a controller. Generally, imaging system design, image processing and control are independent aspects of a vision system and can be designed independently, though the resulting system is highly dependent on each aspect.

Many known vision systems are binocular, for example those disclosed by Ballard, D. H. in "Animate Vision," Artificial Intelligence, Vol. 48, pp. 57-86, 1991; by Brown, C., in "Gaze Controls Cooperating Through Prediction," Image and Vision Computing, Vol. 8, No. 1, pp. 10-17, 1990; and by Krotkov, E. P. in Active Computer Vision by Cooperative Focus and Stereo, Springer-Verlag, New York, 1989, ISBN 0-387-97109-3. Two common reasons for this choice are that primate vision systems work well and that knowledge about mammalian oculomotor control provides valuable insight into how a vision system might work. Nature, for instance, uses one controller for both eyes in a pair of eyes. Of course, another advantage to modelling primate vision systems is their adaptability. For example, primate vision systems are not known to enter unstable modes of operation. Also, primate vision systems adapt extremely well to a loss of an eye or to a single available eye. Primate vision systems have a fovea, a specialised region of the retina giving such high visual acuity that it is used preferentially for seeing. Foveated animals generally possess image-stabilising reflexes, but also have other reflexes acting to bring selected points of interest to the fovea and then hold them there.

With regard to artificial vision systems, Rotstein and Rivlin (Rotstein, H. P., and E. Rivlin, "Optimal Servoing for Active Foveated Vision," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, pp. 177-182, 1996) demonstrated that the need for foveated vision sensors may be established based on control considerations, and that the optimal fovea size may actually be computed. The existence of a fovea presents an inherent trade-off: a small foveal window requires little image processing time but places tight demands on tracking performance; alternatively, tracking objectives may be relaxed by increasing the foveal zone, but at the expense of slower dynamics due to longer computational delays.

Thus, the optimal fovea size is formulated as a maximisation problem involving the plant, computational delays and other hardware restrictions, and the expected bounds of target motion.

Artificial vision systems to date are generally afoveate as a result of commercial imaging technology. However, for the sake of generality and of future development, research into computational vision systems has been conducted with the notion that someday vision systems will typically be foveated. There has already been work in the development of foveated vision sensors and application of spatially varying sensors in vision systems.

Controlling gaze is the operational purpose of some vision systems. Gaze is formally defined as the line of sight measured relative to the world co-ordinate system.

The act of gazing at a single point in three-dimensional space is called fixating and the location at which the eyes are directed, the target, is termed the fixation point. The process of gaze control consists of orienting gaze in order to achieve a desired goal.

Within robotics circles, the problem of gaze control is functionally broken down into two problems: gaze holding and gaze shifting. Gaze holding refers to the facility to track a possibly moving target with a viewing system that may also be moving. It also includes compensating for external perturbations to the vision system (due to egomotion, for example) and implies smooth tracking movements. Gaze shifting refers to the rapid redirection of gaze for the purpose of shifting attention to a possibly new target in the visual field. To a physiologist this nomenclature is awkward. Gaze holding implies that the gaze of a vision system is held constant, which is consistent with the definition of gaze. In order to part with this confusing terminology, "gaze holding" is referred to herein and in the claims which follow as target stabilisation, and "gaze shifting" is referred to herein and in the claims which follow as target acquisition, which is more descriptive of the occurring action.

In the case of a frontal, binocular vision system, target stabilisation requires that the line of sight of each eye or imaging device be directed at the same point of interest.

Regardless of the mobility of a robot, a vision system that maintains its gaze on a target benefits from improved visual interpretation of the world and, consequently, can interact better therewith.

There are three tasks involved in target stabilisation. A first task is locating a fixation point; a second task consists of extracting fixation errors for each imaging device; and the third task is concerned with a control strategy used to servo the gaze successfully.

Vergence involves co-ordination of two eyes under near-target conditions, where proper viewing requires the crossing of the visual axes. Both stereopsis, an ability to visually perceive the third dimension, which depends on each eye receiving a slightly different image of the same object, and bifoveal fixation of a single target require precise alignment of the visual axes. This task is the responsibility of the vergence system.

Generally the first task involves image analysis, which provides a target location, the second task involves extracting retinal errors or other data for provision to a controller for performing the third task. The third task for controlling gaze is often performed by analysing each part of gaze motion separately for each imaging device and then summing appropriate control signals for provision to a plant for controlling imaging device motion.

Target-stabilisation systems should be able to follow a moving target without necessarily recognising it first. Consequently, active vision systems essentially work on the principle that the only knowledge of the target is that the "eyes" are initially pointed at it. The target stabilisation problem then is one of maintaining fixation of the moving target from a moving viewing system.

Target stabilisation is known to be advantageous. For example, in Coombs, D. J., and C. M. Brown, "Cooperative Gaze Holding in Binocular Vision," IEEE Control Systems Magazine, Vol. 11, No. 4, pp. 24-33, 1991, hereby incorporated by reference, the usefulness of target stabilisation is summarised. For example, image stabilisation allows reduced blur when imaging a moving target. Fixating an object of interest brings it near the optical axis of each eye and minimises geometric distortions. Tracking of a target improves operation of many stereo vision algorithms. Also, stabilising on a moving object in a stationary scene causes the object to "pop-out" as a result of motion blur related to the un-stabilised parts of the scene. There are other advantages and applications of image stabilisation.

Biological gaze control strategies are based upon operating modalities rather than tasks. In fact, only two modalities are known to be used: the fast-phase and slow-phase modalities. These are distinguished by their tactics and operating frequency range.

The slow-phase modality is also known as 'slow control' and 'smooth pursuit' and is responsible only for target stabilisation. It produces 'smooth eye movements' or slow phases, so called because of the low operating bandwidth. Smooth eye movements are largely regarded as sensorimotor reflexes. The following review articles overview slow-phase system response and understanding in biology: Kowler, E., "The Role of Visual and Cognitive Processes in the Control of Eye Movement," in Eye Movements and Their Role in Visual and Cognitive Processes, Chapter 1, E. Kowler (Ed.), Elsevier Science Publishers BV (Biomedical Division), Amsterdam, pp. 1-70, 1990; Lisberger, S. G., E. J. Morris, and L. Tychsen, "Visual Motion Processing and Sensory-Motor Integration for Smooth Pursuit Eye Movements," Annual Review of Neuroscience, Vol. 10, pp. 97-129, 1987; and Robinson, D. A., "Control of Eye Movements," in Handbook of Physiology Section 1: The Nervous System Vol. II Motor Control, Part 2, V. B. Brooks (Ed.), American Physiological Society, Bethesda, Md., pp. 1275-1320, 1981.

The fast-phase modality produces rapid eye movements. The operating bandwidth is much wider than that of the slow-phase modality and eye movement dynamics are faster than that of the eye plant. This modality corresponds to the target acquisition task in gaze control. The fast-phase system is influenced by cognitive factors, even more so than the slow-phase system. A good overview of the fast-phase modality in biological systems is found in the following sources: Leigh, R. J., and D. S. Zee, The Neurology of Eye Movements, 2nd ed., F. A. Davis Co., Philadelphia, 1991, ISBN 0-8036-5528-2; Lisberger, S. G., E. J. Morris, and L. Tychsen, "Visual Motion Processing and Sensory-Motor Integration for Smooth Pursuit Eye Movements," Annual Review of Neuroscience, Vol. 10, pp. 97-129, 1987; and Robinson, D. A., "Control of Eye Movements," in Handbook of Physiology Section 1: The Nervous System Vol. II Motor Control, Part 2, V. B. Brooks (Ed.), American Physiological Society, Bethesda, Md., pp. 1275-1320, 1981.

Historically, two categories of rapid eye movements were thought to exist. They were classified as saccades and quick phases, depending on the context under which the movements were evoked. Eventually, it was discovered that saccades and quick phases were, in fact, produced by the same neural circuitry. In accordance with this, rapid eye movements are referred to herein and in the claims that follow as "fast phases".

In a foveated vision system, the need for two operating modalities comes as a result of the conflicting goals when following a moving target, as described in Coombs, D., and C. Brown, "Real-Time Binocular Smooth Pursuit," International Journal of Computer Vision, Vol. 11, No. 2, pp. 147-164, 1993. It is generally believed that the target pursuit system of primates does not favour either the position or velocity control goals. The slow-phase modality is used to minimise slip and, when the target image deviates too far from the fovea, the fast-phase modality is invoked to quickly reacquire a target, be it a new one or the same one that is falling out of view. Modality switching is a clever non-linear solution to the dilemma of how to minimise both velocity and position error simultaneously, since smooth eye movements alone cannot achieve both goals and saccadic movements cannot reduce motion blur since they do not match velocity. It is more accurate to say that position and velocity errors each contribute to both slow and fast phases; only their relative importance is variable.
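By way of illustration only, the dual-modality tactic described above amounts to a simple switching rule: small retinal errors are handled by a low-gain slow-phase (pursuit) correction, while errors that carry the target image outside the foveal zone trigger a high-gain fast-phase correction. The sketch below is a minimal illustration of that rule; the foveal threshold, the two gains and the proportional update are assumed values chosen for the example and are not parameters taken from this document.

```python
# Illustrative sketch of dual-modality (slow-/fast-phase) gaze switching.
# Threshold and gain values are assumptions for illustration only.

FOVEA_LIMIT_DEG = 2.0   # assumed half-width of the foveal zone, degrees
SLOW_GAIN = 0.15        # assumed slow-phase (pursuit) gain per control step
FAST_GAIN = 0.9         # assumed fast-phase (saccadic) gain per control step

def update_gaze(gaze_deg: float, target_deg: float) -> tuple[float, str]:
    """Return the new gaze angle and the modality used for one control step."""
    retinal_error = target_deg - gaze_deg
    if abs(retinal_error) > FOVEA_LIMIT_DEG:
        # Fast phase: rapidly reacquire the target once it leaves the fovea.
        return gaze_deg + FAST_GAIN * retinal_error, "fast"
    # Slow phase: smooth pursuit to minimise slip while the target stays foveated.
    return gaze_deg + SLOW_GAIN * retinal_error, "slow"

if __name__ == "__main__":
    gaze = 0.0
    for target in [0.5, 1.0, 6.0, 6.5, 7.0]:  # target drifts, then jumps away
        gaze, mode = update_gaze(gaze, target)
        print(f"target={target:5.1f}  gaze={gaze:6.2f}  mode={mode}")
```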

As briefly mentioned above, eye movements may be elicited either by visual or non-visual stimuli. Benefits of a dual-modality control strategy are observed in both instances.

A good overview of biological oculomotor control is presented in Galiana, H. L., and J. S. Outerbridge, "A Bilateral Model for Central Neural Pathways in Vestibuloocular Reflex," Journal of Neurophysiology, Vol. 51, No. 2, pp. 210-241, 1984, which is hereby incorporated by reference.

The mammalian eye is essentially a globe held in a socket allowing three degrees of freedom per eye: rotations in the vertical and horizontal planes and rotations about the line of sight. A pair of eyes is capable of only three types of motion: conjugate, vergence and torsional movements. Inputs to the brain are transformed by sensors that respond to a specific visual or vestibular stimulation. In total there are three types of stimuli to process.

Significantly, regardless of the nature of the excitation, similar eye movements arise.

Summary of the Invention In accordance with the invention there is provided a method of controlling gaze in a binocular vision system having two imaging devices and at least a plant for moving at least an imaging device comprising the steps of : a) providing to a processor a retinal error relating to location of a target as imaged by each of the two imaging devices; b) providing to the processor a model related to each of the two imaging devices; and, c) using the processor, determining from both retinal errors absent calculation of the target location in spatial co-ordinates a control signal related to imaging device motion for reducing the provided retinal errors.

In accordance with this embodiment, the control signal may comprise two control signals for controlling device motion along a same dimension and within a same plane.

Alternatively, the control signal comprises at least three control signals for controlling device motion in each of two directions. Preferably, motion of each of the two imaging devices is controlled by a separate plant associated with each imaging device. Optionally, another plant associated with a movable platform upon which both imaging devices are disposed is used. In an embodiment, a same control signal is provided to each of the plants.

Preferably, the controller comprises a butterfly controller. In accordance with this, the controller exhibits symmetry between its two sides at a core thereof. More preferably, the entire controller exhibits symmetry. In accordance with the invention, the controller supports sensory fusion, accepting further sensor data for use in determining the control signal. Of course, when referring to the butterfly controller, the term symmetry need not imply that the two sides are identical; some dissimilarity may exist. Since, in a stereoscopic vision system, the controller controls vergence of the imaging devices, the controller is also useful for measuring distance. This is valuable for focusing imaging devices, for manufacturing processes, for mapping tracked object trajectory or motion, and so forth.

In accordance with yet another embodiment, the butterfly controller supports two modalities, fast-phase and slow-phase, with gain values within the controller differing between the modalities. For example, some gain values are adjusted to 0 (zero) in one modality and to a non-zero value in the other modality.

In accordance with another embodiment of the invention there is provided a method of controlling sensor orientation in a bi-sensor system including a first sensory device, a second sensory device and a plurality of plants for controlling motion of the sensory devices, the method comprising the steps of: a) providing to a processor sensory errors relating to a location of a target as sensed by each of the two sensory devices; d) providing a model related to each plant from the plurality of plants to the processor; and, e) using a processor having a symmetric topology, determining a same output signal for provision to the plurality of plants for controlling the sensory device motion, the control signal related to sensory device motion for reducing the sensory error from said sensory device, the determination in dependence upon the provided model and the provided sensory errors.

In accordance with another aspect of the invention, there is provided a controller for a vision system having an imaging device associated with a plant for providing motion to the imaging device, the controller comprising: a first processor for controlling saccade phase movement of the imaging device, the first processor having a first input and a first output; a second processor for controlling slow phase movement of the imaging device, the second processor having a second input and a second output; wherein the second processor and the first processor are substantially same and are provided with same input signal and provide an output signal to a same plant.

There are many advantages to the present invention including support for sensory fusion, adaptability, robustness, stability, and performance.

Brief Description of the Drawings Exemplary embodiments of the invention will now be described in conjunction with the drawings, in which: Fig. 1 is a schematic block representation of a controller having a parallel architecture; Fig. 2 is a schematic diagram of an imaging device for use with a controller according to the invention; Fig. 3 is a simplified diagram of two eyes demonstrating vergence, target, and lines of sight; Fig. 4 is a simplified block diagram of a core of a butterfly controller; Fig. 5 is a simplified block diagram of an expanded butterfly controller comprising a butterfly core and symmetric feedback paths; Fig. 6 is a simplified diagram of two eyes verging on a central target; Fig. 7 is a simplified diagram of two eyes fixating on a far distant object and exhibiting substantially conjugate displacement; Fig. 8 is a block diagram of a conjugate control system and of a vergence control system; Fig. 9 is a plot of simulation results of slow-phase system operation in total darkness; Fig. 10 is a root locus diagram of a conjugate system for use in design of a controller according to the invention; Fig. 11 is a Bode plot of a conjugate system according to the invention with a tracking bandwidth of 1 Hz; Fig. 12 is a block diagram of a system for providing fast-phase control of imaging device motion along one axis; Fig. 13 is a symmetrical binocular controller according to the invention demonstrating a merger of conjugate, vergence, slow-phase, and fast-phase control within a same controller architecture; Fig. 14 is a diagram of a butterfly core according to the invention wherein initial conditions are incorporated into controller design; Fig. 15 is a schematic representation of two eyes, shown in cross section, fixating a vertically displaced target; Fig. 16 is a graph of measured frequency response functions of a horizontal pursuit conjugate component of a controller according to the invention;

Fig. 17 is a graph of a measured step response of a vergence component of a horizontal pursuit system according to the invention; Fig. 18 is a graph of measured frequency response functions of a vertical pursuit conjugate component of a controller according to the invention; Fig. 19 shows graphs of simulated responses for a conjugate target motion of increasing amplitude; Fig. 20 is a graph of the first 200 msec of a conjugate step response due to a 10 degree target jump; Fig. 21 is a graph showing experimental results of conjugate and vergence imaging device movement when controlled by a controller according to the invention wherein a target is moved in a sinusoidal pattern; Fig. 22 is a plot of target tracking in three dimensions; Fig. 23 is a plot of the signals from Fig. 22 shown in time; Fig. 24 is a block diagram of a butterfly controller core according to the invention incorporating sensory fusion of other sensory data; Fig. 25 is a simplified diagram of a platform supported by a plurality of actuators and having a plurality of sensors disposed thereon for use in platform stabilisation; Fig. 26 is a diagram of a butterfly controller for controlling three plants, a plant for each of two imaging devices and a plant for head movement, and for accepting sensory data from a head sensor; Fig. 27 is a simplified diagram illustrating sensory convergence and diversified motor strategies from a single integrated control circuit; Fig. 28 is a simplified diagram showing sensory directional sensitivities for optimally exciting imbedded controller modes; Fig. 29 is a simplified diagram of a method of adjusting distance using a controller according to the invention; Fig. 30 is a simplified diagram of a controller supporting more than two modes and having more than two "Wings"; and, Fig. 31 is a simplified diagram of another controller supporting more than two modes and having more than two "Wings."

Detailed Description of the Invention Referring to Fig. 1, a schematic representation of the organisation of a parallel architecture controller is shown. The scheme is a classical parallel architecture where each branch represents a low-level behaviour, or task to be performed. Each sensory input 1 is provided to a processor 2 for analysis. Each path has a delay shown as d1..d5. Of course, inclusion of other paths will result in additional delays. The system, as described above, has limited adaptability, flexibility, and stability.

In a task-based control scheme, typically used in robotics, each plant is assigned a set of tasks to be performed. Each task is then associated with a dedicated controller, so that a set of tasks requires a bank of controllers. A multi-platform system, then, would be serviced by an independent bank of controllers for each platform. However, with regard to oculomotor control it is known that nature does not use a bank of controllers since it does not approach the control problem from the point of view of tasks. An integrated methodology is used instead. The advantage of this approach is that additional plants and tasks do not require additional independent controllers. New demands are accommodated by introducing additional circuitry around a pre-existing, nested controller.

In contrast to prior art implementations of artificial vision systems, in nature, co-ordination of eyes and head is done simultaneously, and is really treated as one problem as opposed to two.

The brain receives substantial multi-sensory cues such as visual, auditory, somatosensory, and so forth. A biological system allows directing of gaze toward any such cue (s). Not only are head and eye movement co-ordinated, but in general the body as a whole is co-ordinated during gaze shifts. In order to achieve this, co-ordinated movement based on a number of very different sensory inputs is performed. Thus, it is evident that a brain performs some form of sensory fusion to determine a response from many sensory inputs. One site for sensory fusion is the superior colliculus (SC); it is known to be a critical structure in the control of orientation.

The nervous system imposes a high degree of organisation in the way that sensory information is distributed and represented. For example, the SC is organised in multiple layers, some of which are purely sensory, while others combine sensory (input) and motor (output) signals: it provides a topographical map of orientation errors originating from multiple sensory sources. Generally, individual sensory representations in both superficial and deeper layers are spatially congruent (i.e., registered). Such registration is necessary to enable a unified experience of the external world. Furthermore, the "motor" organisation of the deeper layers is also topographic and is spatially registered with the sensory representations in the SC, in spite of its different role.

The alignment of the sensory and motor topographies yields a simple method of integrating multiple sensory cues for the simultaneous co-ordination of several platforms.

Conversely, the sensory-motor registration scheme may facilitate the use of common control signals in the co-ordination of multiple platforms. A unimodal (e.g., tactile) sensory cue produces an orientation of the eyes, head, limbs, etc., that gives the subject a better view of the stimulus.

Sensory integration is not as straightforward as it might initially appear. One issue is that of accommodating stimuli coded in different reference frames. A visual stimulus, for instance, is coded as a retinal error in an eye-relative co-ordinate system. Thus, the spatial location of the target is based on the retinal error, the orientation of the eye in the head and the head in space. Auditory targets are localised using a head-centred co-ordinate system, relying on interaural differences in the timing and intensity of incoming sound waves. Finally, a tactile stimulus is localised using a third reference system, body-centred, based upon the location of the stimulus on the body and the angles of intervening joints. Clearly, signals relating to target location are processed differently for each sensory system, before being fused in a shared collicular map.

Generally, gaze shifts in humans are performed with co-ordinated eye and head movements, suggesting that a common signal drives both the eye and head motor systems; the SC is believed to provide this signal. At first, it may appear that the SC, analogous to kinematic systems, uses multi-sensory information to reconstruct an absolute target position in space and then directs gaze to that goal. However, it is believed that this is not the case. Coding within the SC is in the format of a topographical motor-error signal, relating the site of activity to the size and direction of a required gaze shift.

Sensors in eye muscles encode muscle length (eye position). But large signal delays make them unusable in the control of active eye movements. Consequently, a brain must estimate eye orientation internally to implement feedback control; this estimate is referred to as an efference copy.

It is widely accepted that saccadic eye movements are generated by a local feedback loop in the brain stem; that is, the eye is driven until an internally monitored "motor error" is zeroed. Most models of the (ocular) saccadic system place the SC inside the local feedback loop. When it became clear that the ocular saccadic system was a subset of the gaze saccadic system allowing for a free head, the previous concept of a "motor error" was extended to a "gaze motor error" that controlled both eye and head. A saccade towards a stationary visible target typically falls short of its goal. Though a secondary saccade is often used to finally foveate the target, the same effect is sometimes achieved with postsaccadic drifts, or "glissades." Since the goal during postsaccadic drift is to foveate a target, the corresponding action at the SC motor layers is to zero the motor error. A reasonable claim, then, is that during slow phase the collicular goal is to zero motor error, just as in the case of saccadic gaze shifts.

Target stabilisation and target acquisition represent two conflicting goals in oculomotor control and it is generally accepted that two operating modalities (slow and fast) are required to meet these demands. The response caused by switching between these modalities is called nystagmus.

A classical model of a dual-modality system has each modality-controller operating independently and simultaneously. Here, commands to direct eye movement, the so-called pre-motor drive, consist of selective summation of the individual modality-controller outputs. Unfortunately, such a topology cannot explain recent neurophysiological observations. Head motion information (linear and angular) originates in the vestibular sensors and is known to come directly into each side of the brainstem at a neuron pool called the vestibular nucleus (VN). Furthermore, it has been discovered that most cells within each pool identified as pre-motor also carry an eye position signal, which is updated continuously for all types of eye movements. Clearly, nature does not rely on an architecture of parallel independent slow and quick-phase controllers.

Eye movements elicited by different sensory stimuli are called reflexes, such as pursuit, optokinetic, vestibular, and so forth. In the prior art, these are also usually modelled as an ensemble of parallel dedicated controllers. Recent behavioural studies clearly demonstrated that interactions among reflex subsystems cannot properly be modelled as a summation of independent pre-motor commands. Non-linear interactions between the vergence and conjugate subsystems have been cited for vergence/saccadic, vergence/vestibular, and vergence/optokinetic studies. The results suggest a central coupling between subsystems, as if the whole oculomotor system was built up around a "core" common to the different subsystems. The central coupling hypothesis is further supported by neurophysiological evidence, demonstrating that several pre-motor nuclei participate in the control of a variety of ocular reflexes. Activity within the vestibular nucleus, for instance, is actually modulated by all types of ocular reflexes, including the vergence set point. Furthermore, neurons in the deep layers of the SC project to the vestibular nucleus as well. Thus, all sensory information seems to converge, in one way or another, onto the vestibular nucleus. This demonstrates another site for sensory fusion, besides the SC.

In order to explain a controller according to the invention, it is helpful to provide a standard platform for the controller to control. Of course, the controller according to the invention is applicable to other platforms as well. Referring to Fig. 2, a platform for use with the present invention is shown comprising two imaging devices 20 (only one is shown). As set out below, the imaging devices are preferably identical. Intensity imaging devices are commonly used, but range imaging devices or other known imaging devices will work as well. It is possible to use different imaging devices in accordance with the invention; however, due to added complexity of such an embodiment, the preferred embodiment is described with reference to two identical imaging devices.

Imaging device motion for each imaging device 20 is provided by a plant 22. The imaging devices 20 and the plants 22 are statically mounted in accordance with the preferred embodiment. Because of the nature of a vision controller according to the present invention, it is possible to mount the imaging devices 20 on one or more moving platforms (not shown). The plants comprise the imaging devices and actuators in the form of galvanometers attached to pulleys bearing wires 24. The wires 24 act to pull either side of the imaging device 20 in order to angle the imaging device 20. Signals from the imaging devices 20 are provided to amplifiers 26 and then to filters 28. From the filters, the signals are provided to a processor 30 comprising a controller 32 for providing control signals to the plant 22.
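As a rough orientation to how the numbered elements of Fig. 2 fit together, the skeleton below sketches one pass around the signal path just described: frames from the imaging devices are reduced to retinal errors, the controller maps those errors to drive signals, and the drives are sent to the galvanometer plants. Every function here is a placeholder standing in for hardware or image processing not specified in this document; only the overall flow follows the text.

```python
# Skeleton of the Fig. 2 signal path. All function bodies are placeholders
# (assumed names, not taken from the document); only the ordering of steps
# reflects the description above.

def acquire_images():
    """Grab one frame from each imaging device 20 (placeholder)."""
    return {"L": None, "R": None}

def extract_retinal_errors(frames):
    """Locate the target in each frame (after amplifiers 26 and filters 28)
    and return its offset from the image centre, i.e. the retinal errors
    (placeholder)."""
    return {"L": 0.0, "R": 0.0}

def butterfly_controller(errors):
    """Map the pair of retinal errors to a pair of drive signals
    (placeholder for controller 32 in processor 30)."""
    return {"L": 0.0, "R": 0.0}

def drive_plants(drives):
    """Send drive signals to the galvanometer plants 22 that angle the
    devices via the pulley-mounted wires 24 (placeholder)."""
    pass

def control_step():
    frames = acquire_images()
    errors = extract_retinal_errors(frames)
    drives = butterfly_controller(errors)
    drive_plants(drives)

if __name__ == "__main__":
    control_step()
```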

According to the invention, a binocular controller inspired by nature is disclosed.

In order to present a description in a simplified fashion, the preferred embodiment is explained below with reference to a binocular gaze controller confined to a single horizontal plane. Of course the system may be extrapolated to a truly three-dimensional binocular gaze controller. Further, the system may be extrapolated to use with more than two imaging devices, different imaging devices, and different forms of input sensory information.

The description of a preferred embodiment proceeds as follows: an examination of biological eye movement in a single horizontal plane and definition of a convenient co-ordinate system; a description of the core computational "engine" of the controller, a two-sided system capable of simultaneously co-ordinating two imaging devices in a plane; a description of additional circuitry for a basic pursuit system using slow-phase gaze control; a description of additional circuitry for fast-phase gaze control; a description of a merger of the two systems; and then a description of gaze control in a three-dimensional space.

Though it is conventional to change the case of a variable when taking its Laplace transform, the notational convention used herein and in the claims which follow is not to change the case of a variable when taking its Laplace transform. Thus, L{e(t)} = e(s) and L{E(t)} = E(s). Finally, for instances when a variable is to be referenced without concern for its domain, e.g., E(t) or E(s), one of the following notations is used: 'E' or 'E()'.

Co-ordinate System Referring to Fig. 3, two eyes 40 observing a target confined to motion in a plane defined by the page are shown in horizontal cross-section. The diagram depicts eyes in execution of horizontal movement; vertical eye movements are discussed below. EL and ER represent the left and right-eye rotations, respectively. By convention nasal, inward, eye rotation is taken to be positive. In frontal eyed animals, as depicted here, it is possible to express eye positions in terms of conjugate (EC) and vergence (EV) components as defined in

EC = 1/2 (ER - EL)   (5.1)
EV = ER + EL   (5.2)

The conjugate component indicates an average direction in which the eyes are pointed while the vergence component refers to the angle subtended by the lines of sight of each of the two eyes. The advantage of this co-ordinate system becomes apparent below.
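By way of a minimal sketch, the change of co-ordinates defined by (5.1) and (5.2), and its inverse, can be written directly; with nasal rotation positive for each eye, a purely conjugate movement leaves EV unchanged and a purely vergent movement leaves EC unchanged. The function names below are illustrative only.

```python
# Conjugate/vergence co-ordinates per equations (5.1) and (5.2).
# Nasal (inward) rotation is taken to be positive for each eye.

def to_conjugate_vergence(e_right: float, e_left: float) -> tuple[float, float]:
    """Map individual eye rotations (ER, EL) to (EC, EV)."""
    e_conj = 0.5 * (e_right - e_left)  # average pointing direction, (5.1)
    e_verg = e_right + e_left          # angle subtended by the lines of sight, (5.2)
    return e_conj, e_verg

def to_eye_rotations(e_conj: float, e_verg: float) -> tuple[float, float]:
    """Inverse mapping: recover (ER, EL) from (EC, EV)."""
    return e_verg / 2 + e_conj, e_verg / 2 - e_conj

if __name__ == "__main__":
    print(to_conjugate_vergence(5.0, -5.0))  # -> (5.0, 0.0): purely conjugate
    print(to_conjugate_vergence(3.0, 3.0))   # -> (0.0, 6.0): purely vergent
```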

Butterfly Controller Despite an ability of recent biological models to reproduce physiological and behavioural observations, oculomotorists are generally reluctant to accept this alternate approach. This has prevented implementation of binocular controllers based on recent biological models. However, the heart of a binocular controller according to the invention is related to a biological gaze controller and more particularly to the recent biological models thereof. A controller topology referred to as a "butterfly" controller, shown in Fig. 4, is a preferred embodiment of the core of a controller according to the invention. It is referred to as a butterfly controller 35 because of the substantial symmetry between the two sides of the controller, and the presence of midline interconnections. Of course, some dissimilarity between the sides of the butterfly controller may exist.

The butterfly controller 35 forms the core of the controller. Because of its symmetry, it is capable of simultaneously co-ordinating both vergence and conjugate components of imaging device position. System transfer functions are as follows:

It is clear that the butterfly controller 35 behaves as a differential amplifier where each output, DR and DL, responds to a common mode of input signals, iR + iL, and a differential mode of inputs, iR - iL. Each mode has different dynamics. Prior art approaches attempt to model each dynamic separately and to determine a control value for each dynamic. The use of the butterfly controller obviates a need to model each dynamic separately, thereby simplifying overall design.

Imaging device position is related to motor drive through the relation En(s) = 5 P(s) Dn(s), where P(s) represents the imaging device plant as defined in equation (4.3), with p1 = 460 rad/sec and n = L, R. The factor of 5 in En(s) accounts for the fact that, statically, a 1 V galvo drive produces 5° of shaft rotation. Given this, we can employ equations (5.1) through (5.4) to compute conjugate and vergence components of imaging device position and obtain equations (5.5) and (5.6). From the point of view of algebraic overhead, it is more desirable to use a conjugate-vergence co-ordinate system than it is to use individual expressions of imaging device responses. More importantly, the correspondence between the differential mode and the conjugate response as well as the correspondence between the common mode and the vergence component are noted. This link between the butterfly modes and the components of imaging device position is exploited in the controller according to a preferred embodiment.

In passing, it is enlightening to express the above sets of transfer functions as transfer matrices. The Laplace variable, s, has been omitted for the sake of compactness; technically, signals should be represented in the notation {EL(s), ER(s), P(s), M(s), iR(s), iL(s), EV(s), EC(s)}. It is evident that the co-ordinate mapping from {EL, ER} to {EC, EV} diagonalises the transfer matrix and hence decouples the conjugate and vergence systems.

A butterfly controller according to the invention is well suited for accommodating various sensory inputs and for performing sensory fusion. In a preferred embodiment, butterfly controller inputs are driven with retinal position error and retinal slip. Of course in another embodiment, other sensor inputs are used either with the retinal errors or instead of the retinal error data.

System response is a significant concern in any controller design. In a biological setting, the imaging device plant (eye) is sluggish and must be compensated for by the controller to achieve high-speed movements. This plant is typically modelled as a first-order low-pass filter, and compensation is achieved by cancelling its pole with a plant inverse imbedded in the controller. Alternatively, the same compensation is achieved using model feedback, as in the preferred embodiment of the invention. In Fig. 4, M(s) blocks represent a model of the imaging device plant (i.e., when M(s) = P(s)). Since M(s) is in the feedback branch, its poles become the zeros of the closed-loop system, thereby achieving compensation. This approach has been shown to be more robust in the presence of plant model uncertainty. Of course, M(s) need not be identical to P(s); a reduced form of the plant representation is often sufficient.

The characteristics of a mechanical imaging device plant used in this work represent a scenario that is different from that seen in biological systems: the plant does not need"speeding up"and, hence, complete pole-zero cancellation is not necessary.

Each M(s) block need not accurately model the imaging device plant beyond the intended controller operating frequency range and consequently is chosen to promote simplicity of analysis. Since the imaging device plant is represented as an all-pole, 2nd-order, critically damped system, each M(s) block is chosen to be a 1st-order, low-pass system with the same pole as the imaging device plant. Thus, not every plant pole is matched with a controller zero since the true imaging device plant is of higher order than the model reference block.
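The following sketch illustrates the model-feedback idea under the assumptions just stated: M(s) is taken as a first-order low-pass filter sharing the plant pole p1 and is run alongside the drive signal so that its output serves as an estimate En* of the device position. The unity DC gain, the forward-Euler discretisation and the sample period are illustrative choices, not values specified in this document.

```python
# Discrete-time sketch of a model-feedback block M(s) = p1 / (s + p1),
# producing an estimate En*(t) of imaging-device position from the drive.
# Unity DC gain, Euler integration and the step size are assumptions.

P1 = 460.0   # plant pole, rad/sec (from the text)
DT = 1e-4    # simulation step, sec (assumed, well inside the plant bandwidth)

class PlantModel:
    """First-order low-pass model of the imaging device plant."""
    def __init__(self) -> None:
        self.state = 0.0  # current position estimate En*

    def step(self, drive: float) -> float:
        # dEn*/dt = p1 * (drive - En*)
        self.state += DT * P1 * (drive - self.state)
        return self.state

if __name__ == "__main__":
    m = PlantModel()
    for _ in range(50):          # constant drive; estimate settles toward it
        est = m.step(1.0)
    print(f"estimate after {50 * DT * 1e3:.1f} ms: {est:.3f}")
```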

Expanding and simplifying the butterfly transfer functions, (5.5) and (5.6), yields equations (5.9) and (5.10), where TVerg and TConj are defined as

TVerg = (1 + g) / (p1 (1 + g - d))   (5.11)
TConj = (1 - g) / (p1 (1 - g - d))   (5.12)

TVerg and TConj represent "dark" vergence and conjugate time constants, respectively, and are inherent to the butterfly controller structure.

Stability requires that TVerg > 0 and TConj > 0, which is satisfied if, for each time constant expression, the sign of the numerator and that of the denominator are the same. From a purely mathematical point of view, there are four cases to consider; however, in the spirit of the invention, physiological observations are used to restrict the cases to those that are analogous or representative of biological systems. Firstly, in mammalian oculomotor systems, connections across the midline and within each brainstem half provide positive feedback loops, which increase the conjugate time constant beyond that of the plant. This is known; see for example Mano, N., T. Oshima, and H. Shimazu, "Inhibitory Commissural Fibers Interconnecting the Bilateral Vestibular Nuclei," Brain Research, Vol. 8, pp. 378-382, 1968 and Nakao, S., S. Sasaki, R. H. Schor, and H. Shimazu, "Functional Organization of Premotor Neurons in the Cat Medial Vestibular Nucleus Related to Slow and Fast Phases of Nystagmus," Experimental Brain Research, Vol. 45, pp. 371-385, 1982, which are hereby incorporated by reference. Furthermore, TConj > 1/p1, which gives rise to the following inequality:

(1 - g) / (p1 (1 - g - d)) > 1/p1   (5.13)

Accordingly, the dependent parameters g and d are solved from (5.11) and (5.12) to yield (5.14) and (5.15). In the present embodiment, the requirement that d > 0 implies that β + α > 0. It then follows that β - α > 0 in order that g > 0. Also, the requirement that β - α > 0 is satisfied if TConj > TVerg.

d = 2 / (β + α)   (5.14)
g = (β - α) / (β + α)   (5.15)

where α = TConj p1 / (TConj p1 - 1), and β = TVerg p1 / (TVerg p1 - 1).

Pursuit: The Slow Phase System The slow phase system implemented according to the present invention provides a feedback controller for adjusting gaze to reduce small retinal errors. Such a system is highly advantageous in tracking target motion or improving target alignment.

Referring to Fig. 5, a slow-phase system according to the invention is shown. At the centre of the system, the aforementioned butterfly controller 35 is shown for driving the imaging device plants 22, P(s). The M(s) blocks are "estimates" of the imaging device plants in the form of models 38 and form part of feedback loops within the butterfly controller 35. An important by-product of this configuration is that an output of M(s), En*(s), provides estimates of actual imaging device positions, En(s). These estimates are advantageous for use in a fast-phase system, described below and forming part of the preferred embodiment.

Retinal errors en(s) are combined to yield controller inputs in(s) in such a way as to exploit a correspondence between the differential mode/conjugate component and the common mode/vergence component of the controller mode/imaging device responses.

Figs. 6 and 7 demonstrate the workings of this through extreme examples. For an arbitrarily positioned target, initial retinal errors are somewhere between the two examples shown and are modelled as being a linear combination of the two presented cases. In general, retinal errors have both conjugate and vergent components. Returning to our analogy with biological systems, the combination of retinal error signals from both imaging devices is analogous to a linear combination of the binocular zones from each superior colliculus, thereby representing the binocular zone of the whole visual field. The in(s) drive the control system and hence act as motor errors, for zeroing, consistent with the role of the colliculus output in biology.

As in the butterfly controller analysis, it is far more convenient to analyse the slow-phase system in the framework of the conjugate-vergence co-ordinate system. In order to do so, the notion of a conjugate and vergence target (TC(s) and TV(s)) is hereby introduced. Referring again to Fig. 3, it is apparent that the same form of relationship guiding the mapping between {EL(s), ER(s)} and {EC(s), EV(s)} applies to the case of the target signal, giving rise to

eR(s) = k1 A(s) (TR(s) - ER(s))   (5.16)
eL(s) = k1 A(s) (TL(s) - EL(s))   (5.17)

Considering the vergence system, from Fig. 5,

iR(s) + iL(s) = 2 kVerg (eR(s) + eL(s))   (5.18)

Substituting (5.16) and (5.17) into (5.18) yields

iR(s) + iL(s) = 2 kVerg k1 A(s) [(TR(s) + TL(s)) - (ER(s) + EL(s))]   (5.19)

Substituting (5.19) into (5.6) yields

ER(s) + EL(s) = [10 kVerg k1 A(s) P(s) / (1 + g - d M(s))] [(TR(s) + TL(s)) - (ER(s) + EL(s))]   (5.20)

The final expression, relating the vergence component of imaging device movement to the common-mode input, is then obtained by simplifying (5.20) to yield (5.21).

Through similar steps the conjugate system is analysed to yield the transfer function shown in (5.22). Because the conjugate and vergent modes operate independently of each other, it is possible to decompose the slow-phase system of Fig. 5 into two, simpler, independent control systems as shown in Fig. 8.
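Because the common mode carries the vergence error and the differential mode the conjugate error, the controller inputs can be formed directly from the two retinal errors. The sketch below illustrates one way of doing so; equation (5.18) fixes the common-mode combination, while the differential-mode gain kConj and the split back into per-channel inputs are written here by analogy and should be read as assumptions, not as the exact circuit of Fig. 5.

```python
# Sketch of forming controller inputs iR, iL from retinal errors eR, eL so
# that their sum (common mode) carries the vergence error and their
# difference (differential mode) the conjugate error. The gain values and
# the differential-mode expression are illustrative assumptions.

K_VERG = 0.5   # assumed vergence-path gain
K_CONJ = 0.5   # assumed conjugate-path gain

def controller_inputs(e_right: float, e_left: float) -> tuple[float, float]:
    common = K_VERG * (e_right + e_left)        # vergence (common-mode) drive
    differential = K_CONJ * (e_right - e_left)  # conjugate (differential-mode) drive
    i_right = common + differential
    i_left = common - differential
    # Consistency with (5.18): i_right + i_left == 2 * K_VERG * (eR + eL)
    return i_right, i_left

if __name__ == "__main__":
    print(controller_inputs(1.0, 1.0))   # pure vergence error -> equal inputs
    print(controller_inputs(1.0, -1.0))  # pure conjugate error -> opposite inputs
```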

Gain Parameters for use in the Controller

The mechanical frequency response of the imaging device-plant combination affects the selection of parameters. For example, some equivalent biological response parameters may not be achievable with some apparatus. In general, a parameter set can be deduced that satisfies a wide scope of design specifications. Alternatively, mechanical components are selected to match biological system components. Further alternatively, parameters are selected to achieve desired results independent of biological system abilities.

Consider operation of the slow-phase system without any target to pursue, for example in total darkness. With visual information absent (eL = eR = 0), the system degenerates to that of a butterfly controller operating in isolation, i.e. open-loop. Substituting for P(s), from equation (4.3), in equations (5.9) and (5.10) yields equations (5.23) and (5.24), where Tverg and Tconj are defined in equations (5.11) and (5.12). This also explains the choice of the term "dark" time constants for Tverg and Tconj.

The exact values of these time constants are not critical but are typically larger than the time constants of the plant. In this embodiment, Tconj is set to 15 seconds, which is within the normal range often reported in neurological studies. Of course, other values of Tconj are also applicable to the invention. The vergence time constant in biological systems is poorly documented in the literature. Apparently, Tverg < Tconj, which is also a mathematical requirement for system stability. In the present embodiment Tverg is selected as 5 seconds.

From equations (5.23) and (5.24), it is apparent that the conjugate and vergent systems include a cascade of two lowpass systems. However, the term p1/(s + p1), having a time constant on the order of 2 ms, is negligible in view of the butterfly controller time constants of 5 and 15 seconds. Accordingly, a dominant pole approximation, shown below each equation, is used. The large values of the dark vergence and conjugate time constants give rise to a gaze holding ability of the controller. In other words, if a light target is fixated in darkness, the states of the controller are driven to non-zero values. When the target is extinguished, the controller states gradually decay to zero. Therefore, in the absence of a target, the imaging device positions return smoothly to a null position at a rate related to the dark conjugate and vergence time constants. This is a desirable feature since the imaging devices and plants are protected against jerking motions and consequent mechanical stresses. Fig. 9 shows simulation results for a slow-phase response when a target is extinguished and the system continues to operate in darkness. In order to take advantage of this feature, the controller merely requires a "no-target" signal such as constant open-loop zero inputs. Determining target visibility is a "cognitive" issue, and methods of performing target extraction and determination of retinal error are well known in the art of computer vision.

Substituting the known parameters into equations (5.14) and (5.15), one obtains the values d = 9.9971e-01 and g = 1.4497e-04 (to five significant figures); in the simulations whose results are shown herein, d and g are used at full numerical resolution. Tverg and Tconj are highly sensitive to these parameters.
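The arithmetic behind these values can be checked directly from equations (5.14) and (5.15) using the dark time constants and plant pole quoted in the text (Tconj = 15 s, Tverg = 5 s, p1 = 460 rad/s). The short sketch below is purely a numerical verification, not part of the controller implementation.

```python
# Numerical check of equations (5.14)-(5.15) with the values quoted in the text.
T_CONJ = 15.0   # conjugate dark time constant, seconds
T_VERG = 5.0    # vergence dark time constant, seconds
P1 = 460.0      # plant pole, rad/s

alpha = T_CONJ * P1 / (T_CONJ * P1 - 1.0)
beta = T_VERG * P1 / (T_VERG * P1 - 1.0)

d = 2.0 / (beta + alpha)             # model reference gain, eq. (5.14)
g = (beta - alpha) / (beta + alpha)  # cross-midline gain, eq. (5.15)

print(f"d = {d:.4e}")   # approx. 9.9971e-01
print(f"g = {g:.4e}")   # approx. 1.4497e-04

# The inequality (5.13) should hold: the conjugate "dark" time constant
# (1 - g) / (p1 * (1 - g - d)) exceeds the plant time constant 1/p1.
t_conj_check = (1.0 - g) / (P1 * (1.0 - g - d))
assert t_conj_check > 1.0 / P1
print(f"recovered Tconj = {t_conj_check:.3f} s")  # approx. 15 s
```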

Returning to the slow-phase system operating in the presence of visual feedback, the system transfer functions EV(s)/TV(s) and EC(s)/TC(s) are shown in equations (5.21) and (5.22). Only two parameters remain to be determined, kverg and kconj. Unfortunately, the high order of the antialiasing filters A(s) makes analytic treatment of these systems somewhat painstaking, so a numerical approach was used instead. Furthermore, because the conjugate and vergence systems are similar in this embodiment, only the conjugate system is explored in detail; final results are provided for the vergence system. Design and analysis were carried out using MATLAB numerical computation software.

Fig. 10 shows the root locus plot of the system shown in Fig. 8(a), along with the final pole/zero placements for this embodiment. There is some freedom in selecting the conjugate tracking bandwidth, but an upper bound is approximately 9.3 Hz, where the system becomes "critically damped" when only the dominant poles are considered. For the purpose of demonstration, the closed-loop conjugate bandwidth was set to 1 Hz, corresponding to kconj = 3.8273 x 10^-3. The Bode plots for this system are shown in Fig. 11.

BUTTERFLY PARAMETERS

Symbol   Parameter                      Value
p1       Eye plant pole                 460 rad/s (double pole)
Tverg    Vergence dark time constant    5 s
Tconj    Conjugate dark time constant   15 s
g        Cross-midline gain             (β - α)/(β + α), where α = Tconj p1/(Tconj p1 - 1) and β = Tverg p1/(Tverg p1 - 1)
d        Model reference gain           2/(β + α)

Table 1: Summary of the parameters used in the butterfly controller. These values correspond to operating the slow-phase system in total darkness, without a visible target to pursue.

The root locus and Bode plot for the vergence system are very similar to those of the conjugate system. A closed-loop vergence bandwidth of 1 Hz is obtained by letting kverg = 3.7400 x 10^-3. The resulting DC gain is 0.9661. The near-unity DC gain of the systems is due to the butterfly controller's large open-loop time constants, which allow the controller to behave as a non-ideal integrator. Tables 1 and 2 summarise the parameters for the slow-phase system according to the present embodiment, operating in darkness as well as with visual feedback.

SLOW-PHASE SYSTEM PARAMETERS

Parameter        Conjugate System          Vergence System
Controller gain  kconj = 3.8273 x 10^-3    kverg = 3.7400 x 10^-3
Bandwidth        1.001 Hz                  0.999 Hz
DC gain          0.9661

Table 2: Summary of the parameters for the slow-phase system operating with visual feedback.

The Fast-Phase System: Saccadic Movement

Saccadic movements are rapid eye movements from a first eye gaze direction to a second, other eye gaze direction. Since the present invention is modelled upon biological systems, implementation of saccadic or fast-phase movement of the imaging devices is desirable. It is widely accepted that saccadic eye movements are generated by a local feedback loop in the brainstem, and that the eye is driven until an internally monitored "motor error" is zeroed. Since a fast-phase is executed open-loop with respect to vision, the internal efference copy of eye position is used to form the local feedback loop. An existing problem is that the correct placement of this feedback in models of saccadic eye motion remains unresolved. An artificial fast-phase system according to the invention is based upon a motor-error control scheme.

Referring to Fig. 12, a system according to the invention for generating fast-phase control signals is shown. The controller is intended to quickly change the rotation of an imaging device, by Rdeg(.), relative to the current imaging device orientation. One such system is needed for each imaging device axis of rotation. For implementation of the fast-phase, the bilateral structure used during slow-phase is dissolved, and each imaging device is then controlled by the independent system remaining on each side.

The control system operates as follows. During slow-phase, the switch SW of the sample-and-hold unit is closed and Cdeg(.) is continually updated. It provides an estimate of the target position in space, since Rdeg(.) is a function of retinal error and E*deg(.) is an estimate of the current eye position. When the need for a fast-phase arises, switch SW opens, thereby taking a sample of the input, and causes Cdeg(.) to hold a command signal. Except for the block M(s), which is the only controller element that is unaltered across modalities, the rest of the controller is not active during slow-phase; alternatively, a portion of the response from the rest of the controller is substantially damped. Thus, at the start of a fast-phase the controller assumes the configuration shown and responds to a step input, Cdeg(.). The system operates open-loop with respect to Edeg(.) since there is no visual feedback, and the control system relies on local feedback based on the efference copy of imaging device position. The efference copy of imaging device position is subtracted from the command signal, producing evolts(.), a motor error driving half a butterfly controller. At the start of the fast-phase, evolts(.) represents the size of the saccade and, during the saccade movement, it tends to zero. Alternatively, this result is achieved without explicitly computing the step input for the saccade.

A transfer function is derived between the fast-phase command, Cdeg(s), and the estimated imaging device position, E*deg(s), for the case where the system state, E*volts(s), is zero, SW is open and M(s) = p1/(s + p1). The low-frequency gain of this system is

β = 5 k1 ka / (1 + ka kc - kb)    (5.26)

which is also the low-frequency gain of Edeg(s)/Cdeg(s) at s = 0, since P(s) has the same DC gain as M(s). Also of interest is the transfer function

evolts(s)/Cdeg(s) = k1 (s + (1 - kb) p1) / (s + (1 - ka kc - kb) p1)    (5.27)

since the error signal evolts(t) is useful to determine when the saccade is to terminate. Of course, other means of determining a termination condition for a saccade are also possible. When Cdeg(s) represents a saccade command in the form of a step of amplitude A degrees, Cdeg(s) = A/s, the following equations describe the resulting step response functions, where u(t) represents a unit step function. Determination of the parameters ka, kb, and kc is now possible.

Consider an error function comparing an internally estimated fast-phase trajectory, E*deg(t), to the desired goal: êdeg(t) = A u(t) - E*deg(t). E*(.) is used instead of E(.) because it is available for signal processing and because the steady-state gain between the desired goal and either of these two signals is identical. Substituting equation (5.28) for E*deg(t) in êdeg(t) yields (5.29). Because êdeg(t) is strictly monotonically decreasing, a fast-phase movement is terminated when the error function falls below a prescribed threshold, as indicated in Design constraint 1 below. The actual threshold value is preferably normalised with respect to the fast-phase height in order to maintain the same relative error amongst all fast-phases. Since êdeg(t) is not physically computed in the controller, relating it to the motor error drive, evolts(t), is convenient. Using equations (5.29) and (5.30) and eliminating a common exponential term yields (5.31), revealing a static relationship between êdeg and evolts. Design constraints such as those that follow are helpful in fixing the remaining free parameters.

Design constraint 1: evolts(t) approaches 0 as t approaches infinity in order to have a well-behaved error function. The above constraint is easily satisfied by setting kb = 1.

Design constraint 2: êdeg proportional to evolts in order to facilitate saccade termination decisions. Examining equation (5.31) in light of the above restriction on kb results in β = 1. It then follows from equation (5.26) that kc = 5 k1 and, after simplification, that êdeg = evolts/k1. It is evident that the fast-phase system drives the imaging device until an internally monitored motor error falls below a predetermined threshold.
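The termination decision that follows from these two constraints is simple enough to sketch. The snippet below uses the static relationship êdeg = evolts/k1 derived above (with kb = 1, kc = 5 k1) and the 5% relative threshold listed later in Table 4; the helper name and the example amplitudes are illustrative only.

```python
# Minimal sketch of a saccade-termination test based on ê_deg = e_volts / k1.
K1 = 0.3318          # PSD/amp conversion factor, volts per degree
T_FAST_OFF = 0.05    # relative residual error at which the fast phase ends (Table 4)

def saccade_done(e_volts: float, amplitude_deg: float) -> bool:
    """True once the internally monitored motor error, expressed in degrees and
    normalised by the commanded saccade amplitude, falls below the threshold."""
    e_hat_deg = e_volts / K1                      # ê_deg = e_volts / k1
    return abs(e_hat_deg) <= T_FAST_OFF * abs(amplitude_deg)

# Example: a 10-degree saccade terminates once the motor error drops below 0.5 deg,
# i.e. once |e_volts| <= 0.5 * K1 volts.
print(saccade_done(e_volts=0.30, amplitude_deg=10.0))  # False (~0.9 deg remaining)
print(saccade_done(e_volts=0.10, amplitude_deg=10.0))  # True  (~0.3 deg remaining)
```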

Design constraint 3: The bandwidth of the fast-phase system is ωc. In order to provide a system meeting this constraint, the overall system transfer function is considered, as shown in equation (5.32), where P(s) = p1^2/(s + p1)^2. ka is found such that |E*deg(jωc)/Cdeg(jωc)| = 1/sqrt(2). After algebraic manipulation and parameter substitution, one obtains equation (5.33). All free parameters are now accounted for and values are determined therefor.

The above analysis was performed assuming the system begins from a zero state. This assumption is generally incorrect. When the assumption does not hold, a further analysis establishes that the fast-phase generator will change the value of evolts(t) by A degrees (k1 being a units-conversion factor) regardless of the initial value of E*volts(t). Therefore, the above design constraints are applicable to a real fast-phase controller.

Because the fast-phase system operates without visual feedback, it is not subject to the same stability restrictions as the slow-phase system and, consequently, can operate at a much higher bandwidth. The bandwidth, ωc, is an adjustable parameter in the fast-phase system. In order not to place excessive strain on the mechanical parts of the imaging devices and associated plants, the fast-phase bandwidth is arbitrarily set to 20 Hz (ωc = 125.7 rad/s). Of course, other fast-phase bandwidths are also possible. A summary of parameter settings for the fast-phase system is presented in Table 3.

In this embodiment, the mechanical bandwidth of an imaging device and associated plant(s) is greater than the specified fast-phase bandwidth. Consequently, positive feedback is used to lower the system bandwidth. In the case of a sluggish imaging device and associated plant, negative feedback is used to increase the bandwidth, or alternatively a different bandwidth is selected.

FAST-PHASE SYSTEM PARAMETERS

Symbol   Parameter                    Value
k1       PSD/amp conversion factor    331.8 mV/°
ωc       Bandwidth (-3 dB point)      40π rad/s
ka       Controller gain              see Eq. (5.35)
kb       Model reference gain         1
kc       Feedback gain                5 k1

Table 3. Summary of parameters used for fast-phase control.

Integrating Slow and Fast Phase Control Systems

According to the invention, the slow and fast-phase systems are not separate and distinct. According to the embodiment described above, the two systems share information in the form of the state(s) imbedded in each model reference block, M(s). Thus, information from the end of a slow-phase segment is available at the start of a fast-phase segment, and information from the end of a fast-phase segment is available at the start of the slow-phase segment. It is because of this passing of information at the core of the binocular controller, the butterfly controller 35, that the co-ordination between slow and fast segments is achieved without effort. Referring to Fig. 13, the two controllers are shown fused to form a single controller unit. Alternatively, it is possible to duplicate much of each controller within each of two controllers in order to create two distinct controllers, one for the slow-phase and one for the fast-phase, but the internal states in each controller would still have to be updated appropriately at each modality transition, essentially coupling the systems through initial conditions.

Elements 40 shown at the bottom of Fig. 13 and enclosed by a dashed contour denote a portion of the controller analogous to the superior colliculi. Retinal errors are kept separate from the fast-phase inputs, symbolising the different collicular layers impinged by these signals. The retinal projections target the superficial layers, whereas the saccadic motor errors could target the deep layers. Collicular activity caused by either the slow or fast-phase systems is output on the same projecting pathways to produce a motor error drive for the butterfly controller 35.

Digital Implementation

According to an embodiment of the invention, the controller is implemented as a discrete-time system. An s-domain approach is used herein for ease of understanding. This also renders the description somewhat sampling-rate independent. Though it is true that an analogue version of a signal cannot be uniquely determined from a sequence of samples, for a sufficiently high sampling rate a good approximation can be obtained by interpolating the sequence with a smooth curve. In digital control, a "sufficiently high" sampling rate is expressed relative to the closed-loop bandwidth of the system, and as a general rule a digital controller running at twenty times the system bandwidth will substantially match the performance of a continuous controller. Since the controller described herein operates at a rate of fifty times the closed-loop bandwidth, an analogue treatment of the controller is merited.
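As one illustration of this rate argument, a first-order block such as the model reference M(s) = p1/(s + p1) can be discretised at the 1 kHz sample rate reported later for the prototype. The sketch below uses a bilinear transform for concreteness; it is only an illustration of the sampling-rate discussion, not the patent's actual discrete realisation.

```python
# Discretising M(s) = p1 / (s + p1) at 1 kHz (illustrative only).
from scipy.signal import cont2discrete

P1 = 460.0   # plant/model pole, rad/s
DT = 1.0e-3  # 1 kHz sampling period

num, den = [P1], [1.0, P1]                      # M(s) = p1 / (s + p1)
num_d, den_d, _ = cont2discrete((num, den), DT, method="bilinear")
print("discrete numerator:", num_d.ravel())
print("discrete denominator:", den_d)
```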

Surprisingly, the discrete realisation of the controller is not straightforward. It has been found that three issues need to be addressed for implementation of the butterfly controller: (i) how to implement the basic model reference loop (i.e., half a butterfly controller); (ii) butterfly controller performance at transitions between the two operating modalities, fast and slow; and (iii) implementation of the fast-phase system. Each of these issues is addressable by those skilled in the art with reference to the disclosure herein.

Digital implementation problems are thus systematically resolvable; herein, the controller is treated in continuous time.

The equations for the slow-phase system, (5.21) and (5.22), were derived under an implicit assumption of zero initial conditions. In other words, since M(s) is a 1st-order system, EL(t) and ER(t) are the only states in the system and, consequently, the assumption was that EL(t0) = ER(t0) = 0. However, as a result of switching between operating modalities, the end of a fast-phase segment generally results in a system state of arbitrary value. Effects of non-zero initial conditions on a subsequent slow-phase segment are determined in order to correct, where necessary, the parameters or the equations set out above. Initial-condition effects on fast-phase segments need not be addressed because of the way that the fast-phase system operates.

Referring to Fig. 14, slow-phase initial conditions are shown incorporated into the butterfly controller. Because the slow-phase system is linear, the principle of superposition may be applied to sum the forced imaging device response and the transient effects of the initial conditions to yield the total imaging device movement. Results in terms of vergent and conjugate responses follow. Initial conditions can have a significant effect on resultant eye movements. In fact, in Galiana, H. L., "Oculomotor Control," in Visual Cognition and Action, D. N. Osherson, S. M. Kosslyn, and J. M. Hollerback (Eds.), MIT Press, Cambridge, MA, Chapter 4, pp. 243-283, 1990, ISBN 0-262-15036-0, it is demonstrated that frequent resetting of initial conditions through nystagmus significantly alters the slow-phase envelope profile during the angular vestibulo-ocular reflex (VOR). This results in a phenomenon known as velocity storage. It is possible for a bilateral controller to operate primarily with transient responses and never reach a steady state.

Vertical Eye Movements

Thus far only horizontal imaging device movements have been considered. Of course, a controller according to the invention is useful in controlling imaging device motion in three dimensions as well.

Vertical imaging device movements as described herein are referred to a horizontal plane located midway through the imaging devices' field of view. As before, imaging device movements relative to this plane are assigned the same polarity. Fig. 15 shows two eyes observing a vertically displaced target. The co-ordinate system for imaging device movements is defined in accordance with the reference plane described above. Individual imaging device positions are used to express the conjugate (EC) and vergent (EV) components as shown in the following equations:

EC = (ER + EL) / 2    (5.36)
EV = |ER - EL|    (5.37)

Although these equations are different from those corresponding to the case of horizontal imaging device movement, geometrically they represent substantially the same quantities: the conjugate component represents the vertical imaging device rotation of a cyclopean eye (at O), and the vergence component represents the angle subtended by the lines of sight.

For vertically aligned imaging devices, as shown here, EV = 0. EV is evaluated with an absolute-value function in order to maintain a positive vergence angle; without it, negative values would arise when the right imaging device is higher than the left imaging device.

Here, the controller common mode is associated with the conjugate response and the difference mode is associated with the vergence response. This correspondence is exploited by combining the retinal errors as follows:

iL = kconj (eR + eL) - kverg (eR - eL)    (5.38)
iR = kconj (eR + eL) + kverg (eR - eL)    (5.39)

These are essentially the same equations used in the horizontal case, with the constants kconj and kverg interchanged. Thus, the controller for vertical eye motion is substantially identical in form to that used for horizontal imaging device control (shown, for example, in Fig. 13), except for the controller-constant interchange mentioned above.
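A short sketch can make the mode association of equations (5.38)-(5.39) concrete. The gains below are taken from Table 2 and the function names are illustrative only; the demonstration simply shows that a matched vertical error pattern produces a pure common-mode (conjugate) drive, while opposite errors produce a pure difference-mode (vergence) drive.

```python
# Illustration of equations (5.38)-(5.39) for the vertical system.
K_CONJ = 3.8273e-3
K_VERG = 3.7400e-3

def vertical_inputs(e_r, e_l):
    """Vertical system: common mode -> conjugate, difference mode -> vergence."""
    i_l = K_CONJ * (e_r + e_l) - K_VERG * (e_r - e_l)   # eq. (5.38)
    i_r = K_CONJ * (e_r + e_l) + K_VERG * (e_r - e_l)   # eq. (5.39)
    return i_r, i_l

# A matched vertical error in both devices (e_R = e_L) yields i_R = i_L: a pure
# common-mode drive, i.e. a conjugate (both devices up or down) response.
print(vertical_inputs(1.0, 1.0))    # equal drives
# Opposite vertical errors (e_R = -e_L) yield i_R = -i_L: a pure difference-mode
# drive, i.e. a vergence correction between the two devices.
print(vertical_inputs(1.0, -1.0))   # equal and opposite drives
```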

Integrated three-dimensional imaging device motion controller

In integrating the controller to support movement of an imaging device in multiple planes, independent control for each plane is provided by duplicating substantially the same controller described above. Cross talk between the horizontal and vertical systems is possible at the sensory level only, in that the vertical system responds to inadvertent vertical eye deviation caused by horizontal eye movement, and vice versa.

Alternatively, cross talk implemented at other levels is possible in some instances, and may even be a requirement given geometric restrictions for 3D rotations. The modality of the controller is universal; one system cannot be operating in slow-phase while the other is operating in fast-phase. Controller modality is determined as described by the four points below, using horizontal and vertical control for illustrative purposes.

N° 1. At system start-up the modality is set to slow-phase. This is an arbitrary choice. For example, the system could start up in fast-phase with a desired predetermined gaze orientation instead.

N° 2. While the system is in slow-phase, a vectorial representation of the retinal error is monitored. The fast-phase modality is invoked and latched whenever the magnitude of the vector exceeds a threshold relating to the size of a "fovea." Often, the "fovea" size is an arbitrary size selected based on the functionality of the vision system. The process is described as follows. i) An auxiliary retinal error is computed by adding a derivative component to the original error as follows: εij = (1 + kft s) eij, where i = L, R denotes the left and right imaging device, j = H, V denotes the horizontal or vertical system, and s is the Laplace variable. eij is the same eL and eR mentioned previously in the text but is now augmented with a delimiter, j, identifying the system to which the error corresponds. The inclusion of retinal slip is consistent with the notion that physiologically both position and velocity error contribute to triggering a fast-phase response. ii) The horizontal and vertical auxiliary retinal errors are added vectorially to produce a radial auxiliary retinal error for each imaging device. The magnitude of this error is Ri = sqrt(εiH² + εiV²), where i = L, R. iii) The modality is set to fast-phase when either of the auxiliary radial errors exceeds a threshold, i.e., whenever Ri > Tfast_on for either i = L, R. Formulation of the auxiliary radial error effectively defines a circular "fovea," of radius Tfast_on, at the centre of each "retina." An arbitrary foveal zone is created by appropriately defining Ri. Of course, the size of the foveal zone is determined based on a particular application and the physical limitations of the vision system.

N° 3. Once the system is in fast-phase, each bilateral controller is decoupled, so that the left- and right-hand sides work independently, producing four separate controllers. Each controller executes an open-loop re-orientation of an imaging device that terminates once a given percentage of the fast-phase step has been executed by each imaging device. The process is as follows. i) Upon entering fast-phase, the required corrective step size is computed using: Rij = (1 + kf s) eij, where i, j, and s have the same definitions as above. This quantity is stored. Retinal slip is included in order to predict and compensate for target position and motion. ii) The stored horizontal and vertical fast-phase step sizes are added vectorially to produce a radial saccade amplitude. The magnitude of this step is ρi = sqrt(RiH² + RiV²), where i is as above. iii) The fast-phase modality is released when the instantaneous radial fast-phase trajectory equals or exceeds (1 - Tfast_off) x 100% of ρi, for each imaging device.

N° 4. Once the fast-phase modality is terminated, slow-phase resumes, and the controller is placed into a refractory period where it is forced to remain in slow-phase for a fixed period of time, trefract. After the refractory period has elapsed, normal slow-phase operation resumes as in N° 2 above. The refractory period, when used, prevents system instability by allowing sluggish components within the controller to register the effects of the rapid eye movement recently executed. Alternatively, when the refractory period is substantially short, it is preferred that another method of preventing system instability be used.
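The four-point modality logic above can be summarised as a small state machine. The sketch below uses the threshold and gain values of Table 4; the class structure, the per-sample derivative estimate of retinal slip, the omission of the kf term when storing the goal, and the trajectory bookkeeping are simplifying assumptions made for the illustration, not the patent's implementation.

```python
# Illustrative state machine for slow/fast modality switching (N° 1 - N° 4).
from math import hypot

K_FT = 3e-3          # retinal slip gain for triggering (kft)
K1 = 0.3318          # volts per degree
T_FAST_ON = 2.0 * K1 # fast-phase "on" threshold (foveal motor-error size)
T_FAST_OFF = 0.05    # relative residual at which the fast phase is released
T_REFRACT = 0.030    # refractory period, seconds
DT = 1e-3            # 1 kHz sample period

class ModalitySwitch:
    def __init__(self):
        self.mode = "slow"            # N° 1: start in slow-phase
        self.refract_left = 0.0
        self.goal = {}                # stored radial fast-phase amplitudes per device
        self.prev = {}                # previous errors, for the slip (derivative) term

    def _aug(self, key, e):
        slip = (e - self.prev.get(key, e)) / DT   # crude retinal-slip estimate
        self.prev[key] = e
        return e + K_FT * slip                    # (1 + kft*s) e, approximated per sample

    def step(self, errors, progress):
        """errors: dict {('L','H'): e, ...}; progress: executed radial step per device."""
        if self.mode == "slow":
            if self.refract_left > 0.0:           # N° 4: refractory period
                self.refract_left -= DT
                return self.mode
            for dev in ("L", "R"):                # N° 2: radial auxiliary retinal error
                r = hypot(self._aug((dev, "H"), errors[(dev, "H")]),
                          self._aug((dev, "V"), errors[(dev, "V")]))
                if r > T_FAST_ON:
                    self.mode = "fast"
                    self.goal = {d: hypot(errors[(d, "H")], errors[(d, "V")])
                                 for d in ("L", "R")}   # N° 3(i)-(ii), kf term omitted here
                    break
        else:                                     # N° 3(iii): release condition
            if all(progress[d] >= (1.0 - T_FAST_OFF) * self.goal[d] for d in ("L", "R")):
                self.mode = "slow"
                self.refract_left = T_REFRACT
        return self.mode
```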

Because the fast-phase termination point is normalised relative to its amplitude, all fast-phase segments last the same length of time. However, the residual retinal error varies with step size (a 5% residual of a 1° step is smaller than a 5% residual of a 10° step). In order to avoid perpetual chattering between the two modalities, it is important that, in the worst case, for the largest expected fast-phase step, the fast-phase "on" threshold be chosen to be larger than the residual error at the end of a fast-phase. In other words, a fast-phase preferably terminates such that it places the controller in a position where it naturally begins to operate in slow-phase. Of course, the refractory period, when not substantially short, guarantees that slow-phase operation always follows a fast-phase segment. Table 4 presents the remainder of the parameters defined for the three-dimensional controller; some are arbitrarily chosen, such as Tfast_on and Tfast_off, whereas others are selected to provide desired controller performance.

BINOCULAR CONTROLLER PARAMETERS

Symbol      Parameter                                                       Value
k1          PSD/amp conversion factor                                       331.8 mV/°
kft         Retinal slip velocity gain; to monitor tracking performance     3e-03
kf          Retinal slip velocity gain; to set up the fast-phase goal       3e-03
Tfast_on    Fast-phase on threshold (absolute foveal motor-error size)      2.0 * k1
Tfast_off   Fast-phase off threshold (relative residual error)              5%
trefract    Fast-phase refractory period                                    30 ms

Table 4. Summary of the parameters used by the binocular pursuit system.

The present embodiment provides an integrated controller for gaze control in a binocular vision system. The integration occurs at two levels. First, separate slow and fast-phase systems are not identifiable, since a single controller generates both dynamics by changing its topology and gain parameters. Secondly, both the conjugate and vergent components of imaging device movement are co-ordinated simultaneously. Modifying gain parameters, as used herein and in the claims that follow, includes setting some of the gain parameters to zero (0) in order to eliminate the effects of some paths within the controller.

The controller is organised in such a way that information is shared in a "core" butterfly controller 35, allowing automatic self-co-ordination of imaging device movements. The controller is simple, preferably having only as many states as are imbedded in the models of the imaging device plants. Hence, computational demands are low; however, modality switching typically produces complex imaging device trajectories because of the effects of initial conditions. In order to avoid chattering between the slow and fast modalities, the switching parameters are carefully chosen. This is apparent to those of skill in the art based on the present disclosure.

Experimental Results

A prototype of the invention was produced in order to co-ordinate imaging device movements of a system as described above. The prototype was tested and simulated.

Experimental results and those of simulations are compared. Some individual aspects of the controller are evaluated, as is general controller performance.

Because the invention is independent of a target identification algorithm, a prototype of the invention was built using imaging devices that limit a need for such an algorithm by using a point target. The imaging devices comprise a dual-axis position sensing detector (PSD) using lateral-effect photo-diode technology. For each axis, the PSD and signal conditioning amplifiers provide the co-ordinates of the point target on the detector surface relative to its centre (see below for details).

The transimpedance amplifier limits the bandwidth of the detector to 5 kHz, which is still significantly higher than the bandwidth of the control system (< 25 Hz); hence, the PSD/amp combination is treated as a static gain. Though the "retina" is technically afoveate, the centre of the optical sensor functionally defines a fovea since the retinal error is substantially zero thereabout.

Referring to Fig. 2, a schematic representation of an imaging device and an associated target positioning system is shown. The controller was tested on a mechanical eye. The ball of the eye consists of a plastic sphere 23 (Ping-Pong ball, 38 mm) and houses the artificial retina R and a wide-angle lens 21 with a focal length of 20 mm from a Fuji Quick Snap™ disposable camera. The lens 21 comes equipped with a pin-hole aperture, which is removed in order to improve its light-gathering efficiency. The "retina" R is positioned at the focal distance of the lens 21. The four PSD sensor wires 25 travel on the inside of the sphere 23 and emerge adjacent to the lens 21. Once outside the eye cavity, the wires 25 pair off appropriately (X1 with X2, etc.) and are routed as twisted pairs toward a connector interfacing with the signal conditioning amplifiers 26. In order to allow horizontal and vertical rotation of the eyeball, two perpendicular tendon pairs 24 (only one pair is shown) comprising steel spring wire, 152 mm, are attached to the sphere 23 such that their orientation coincides with the axes of the PSD. Each tendon pair 24, in the X and Y planes, is attached to a pulley driven by its own galvanometer (galvo) 22, giving the imaging device a range of motion in each plane. Each galvo 22 is driven by a dedicated amplifier/PID-servo module so that galvo control is closed-loop; the module also conveniently outputs the instantaneous galvo shaft position.

The control system is implemented in discrete time, running at a sampling rate of 1 kHz. Naturally, analogue signals such as the retinal errors are sampled. According to the Nyquist sampling theorem it is necessary to bandlimit these signals to less than half the sampling frequency. To this end, 6th-order Bessel filters, having a -3 dB cutoff frequency of 55 Hz, are used.
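The stated anti-aliasing specification is easily reproduced. The sketch below designs an analogue 6th-order Bessel lowpass with its -3 dB point at 55 Hz and verifies the attenuation at the cutoff; the exact realisation used in the prototype (component-level analogue filters) is not detailed in the text, so this is a specification check only.

```python
# 6th-order Bessel anti-alias filter with a -3 dB cutoff of 55 Hz (analogue prototype).
from scipy.signal import bessel, freqs
import numpy as np

# norm="mag" places the -3 dB magnitude point at the requested angular cutoff.
b, a = bessel(6, 2 * np.pi * 55.0, btype="low", analog=True, norm="mag")

w, h = freqs(b, a, worN=[2 * np.pi * 55.0])
print(20 * np.log10(abs(h[0])))   # approximately -3 dB at 55 Hz
```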

In use, a computer, at the heart of the system, positions a laser point target on the screen by commanding targeting galvos 29 with signals Tx and Ty via galvo amplifiers 31. For each axis, the PSD (UDT Sensors, DL-10) produces two photocurrents, (Ix1, Ix2) and (Iy1, Iy2), which are a function of the position and intensity of the centroid of the light distribution on the detector surface. The signals from the PSD are amplified and normalised by transimpedance amplifiers 26 (UDT Instruments Model 301DIV) and are then anti-alias filtered at filters 28 to produce a retinal error (Rx, Ry) along each axis of the detector. The retinal errors are then brought into the computer via two internal data acquisition cards (Burr Brown PCI-602W) arranged in a master-slave configuration. The computer, implementing a control algorithm, outputs galvo command signals Ex and Ey to galvos 22, causing the eye to rotate in order to centre the image of the target at the centre of the PSD.

Slow-Phase Motion

A frequency response of the conjugate pursuit system was determined. EL and ER are referred to as "true" imaging device positions since they are physically measured quantities. However, the imaging devices are manipulated by wire "tendons" 24, which are capable of twist and slack, particularly for large oblique imaging device deviations. Such variable mechanical coupling between the galvo and imaging device results in EL and ER being estimates of the actual imaging device position. True imaging device position, in general, is not determinable from the experimental apparatus, and the only means of assessing tracking performance was by monitoring retinal error.

In this experimental paradigm, the vertical position of the laser target is fixed at the mean vertical imaging device level, while a dynamic signal analyzer controls its horizontal position (SIG OUT). A swept-sine technique is used whereby the target is displaced sinusoidally with changing frequency. During the sweep, the analyzer determines the dynamic output/input relationship (CH2/CH1) of the system under test.

Note that CH1 and CH2 measure the instantaneous galvo shaft positions reported by the galvo amp/PID-servo modules. Dynamics of the target positioning system are taken into account during measurement and are eliminated in the analysis. This is not the case when the target command signal is taken as the system input.

The extent of the horizontal target sweep is kept small for two reasons. Firstly, in practice the two sides of the bilateral controller are not perfectly balanced. Thus, a vergence stimulus will produce conjugate eye movement (and vice versa). For a conjugate study, then, it is desirable to stimulate the system such that the vergence angle remains constant. This generally requires that the target move along a horopter, which was impossible because of the flat target screen being used. However, a small target displacement approximates a constant-vergence stimulus because the target distance from the imaging devices (800 mm) is much larger than the distance (72 mm) between imaging devices. Secondly, large repetitive imaging device deviations, 90% of the range of motion, do not have a one-to-one mapping with retinal error, whereas for small rotations the relationship is almost linear. For the extent of the horizontal target displacement used in this experiment, a horizontal vergence component was observed measuring 6% of the conjugate response (peak-to-peak values measured at 500 mHz). It is not possible to tell, from this measurement, what portion of the vergence response is undesired cross-talk due to controller asymmetry and what portion is a normal response due to excitation of the vergence mode by the stimulus.

Referring to Fig. 16, an experimental frequency response of the horizontal conjugate pursuit system is shown. Because of the recorded units, volts instead of degrees, the gain portion of the frequency response is correct to within a scale factor.

Comparing the results with Fig. 11, the design goal, the bandwidth of the horizontal pursuit system meets the design parameters. The larger phase shift beyond 10 Hz is consistent with the fact that the actual imaging device plant is of higher order than its representation in the controller.

Because of limitations of the system used for testing the prototype, vergence was characterized using a step input. The controller motor drives, DL and DR, were used to compute a vergence output (Dv = DR + DL). Because the motor drives precede the imaging device plants, the resultant frequency response represents an accurate imaging device response up to the plant bandwidth (47 Hz), where an error of -3 dB results.

Fig. 17(a) shows a vergence step response function of the horizontal pursuit system. In the experiment, the measured conjugate response (peak-to-peak) was determined to be 7% of the vergence step amplitude. In order to determine the frequency response, the step response is (i) differentiated, to produce the impulse response function, (ii) zero-padded to increase the spectrum resolution, and (iii) Fourier-transformed to yield the frequency response function, as shown in Fig. 17(b). Because of the units of measurement, and the fact that the area of the impulse response function is not normalized to the system's DC gain, the low-frequency portion of the frequency response function (< 47 Hz) is correct to within a scale factor. The bandwidth of the vergence response is 900 mHz, which is comparable to the design goal.
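The three processing steps (differentiate, zero-pad, Fourier-transform) can be sketched directly. The recorded data are not reproduced in the text, so the snippet below substitutes a synthetic first-order step response with a bandwidth near the reported 900 mHz; only the processing pipeline itself is meant to mirror the description above.

```python
# Step response -> impulse response -> frequency response (illustrative data).
import numpy as np

FS = 1000.0                                            # 1 kHz sampling rate
t = np.arange(0, 4.0, 1.0 / FS)
step_response = 1.0 - np.exp(-2.0 * np.pi * 0.9 * t)   # placeholder ~0.9 Hz system

impulse = np.diff(step_response) * FS                  # (i) differentiate the step response
impulse = np.pad(impulse, (0, 4 * len(impulse)))       # (ii) zero-pad for finer resolution
H = np.fft.rfft(impulse) / FS                          # (iii) Fourier transform
f = np.fft.rfftfreq(len(impulse), d=1.0 / FS)

mag_db = 20 * np.log10(np.abs(H) / np.abs(H[0]))
bw = f[np.argmax(mag_db < -3.0)]                       # first frequency below -3 dB
print(f"estimated bandwidth: {bw:.2f} Hz")             # close to the 0.9 Hz placeholder
```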

An examination of simulation and experimental results for target tracking within a plane and more specifically of conjugate motion is presented below. Target movement is considered along the horizontal axis.

Figure 19 shows simulation results for conjugate target steps; the vergence drive is zero. In Fig. 19(a), the displaced target remains within the foveal (motor-error) zone. Since a fast phase is not required, the target is stabilised using only the slow-phase modality. Both (b) and (c) illustrate small target steps in which a single fast-phase is sufficient to foveate the target. Post-saccadic drift following each fast-phase segment causes the eyes to coast toward the target. The high operating bandwidth of the fast-phase modality is clearly demonstrated by the step-like segments. In (d) a large target step requires two fast-phases in order to reduce the retinal error. Note that the corrective (2nd) fast-phase places the conjugate eye position squarely on target, but that subsequent slow-phase operation gradually displaces the eye to the slow-phase steady-state level.

For step targets, the controller parameters kft and kf play a significant role at the start of the step, where the retinal slip is large. Essentially, kft determines the fast-phase-trigger sensitivity to retinal slip velocity, and kf is intended to compensate for some of the filtering due to the anti-aliasing filters. Both parameters were set such that step targets of small amplitude could be acquired with a single fast-phase. Because of modality switching, the effects of kft and kf on the step response are non-linear. Furthermore, changing Tfast_on requires that these parameters be adjusted as well. The refractory period, in this particular case, is shown for the sake of generality; the period may be set to zero without destabilizing the system. With no refractory period, however, back-to-back fast phases are separated by a one-sample slow-phase segment that is needed to visually sample the world in order to determine the next fast-phase goal. A non-zero refractory period is useful for limiting the number of possible fast-phases in a given period of time.

In biological systems, the fast-phase refractory period is on the order of 50 ms.

Example (d) contrasts somewhat with observations made in biological systems. In the case of double-saccade target acquisition by a biological vision system, the primary saccade typically falls short of the goal by about 10%. This is not the case here; the reason for the different behaviour is the anti-aliasing filters interacting with the Tfast_on threshold. That is, because the fast-phase "on" threshold is fixed, increasingly larger target steps produce greater underestimates of the saccadic goal. The parameter kf compensates for some of the underestimate, but only for small target displacements. This highlights the value of careful selection of filters for use with the present invention.

Fig. 20 shows a close-up view of the last example in Fig. 19, as well as the signal used to determine the controller modality. Note in (a) the smooth transitions between the two controller modalities; there are no discontinuities in eye position. Fig. 19(b) illustrates that fast-phases are triggered as long as the retinal motor error of either imaging device is greater than Tfast_on. In this embodiment, there are three significant parameters to which the controller is sensitive: (i) variations in the plants, (ii) variations in the anti-aliasing filters, and (iii) variations in the PSD/amp gain, k1. Here, both the slow- and fast-phase systems operate below the mechanical bandwidth of the imaging device plant. Hence the cancellation of a plant pole is not crucial and, consequently, the sensitivity to plant variations is of little concern. Similarly, the high bandwidth of the anti-aliasing filters desensitizes the controller with respect to these filters. This leaves k1, which is the parameter to which the controller is most sensitive. Besides influencing the bandwidth of the slow- and fast-phase systems, k1 appears as a scale factor in each system's transfer function(s). In some system designs, it is necessary to reset k1 periodically due to drift. For example, a parameter variation could manifest itself as excessive under/overshoot following a fast-phase. Of course, a non-zero refractory period provides some robustness against parameter variations causing this effect.

In a further experiment, combined conjugate and vergence movements are analysed. The target is made to oscillate at 4/6 Hz along the horizontal axis. A card, intercepting the target laser beam, is brought nearer to the eyes. Data recording commences and, shortly after, the card is dropped. Because the laser beam emanates from below the imaging devices and projects steeply onto the screen, the approaching card causes both substantial downward imaging device rotation and horizontal vergence. Furthermore, if the beam does not project midway between the eyes, a bias in horizontal conjugate eye position also results. Thus, when the card is dropped, 2D imaging device rotation occurs.

Fig. 21 shows the results of the experiment with the controller able to use both operating modalities. The retinal errors, in (c), reveal that tracking performance is indeed good, as each fast-phase acts to reduce the error.

Fig. 22 shows simulation and experimental results for three-dimensional target tracking. Again, practical results are in agreement with simulation results. Fig. 23 shows the vertical and horizontal conjugate imaging device responses in time. These figures demonstrate that the first and last fast-phase segments in (a) are due primarily to the large retinal error in (b) at those instances in time. Hence, when a fast-phase is triggered, both the horizontal and vertical controllers are affected.

The prototype implementation of a bilateral controller according to the invention performs as desired. Both stable tracking and rapid reorientation are integrated into a single controller, which further provides independent, integrated co-ordination of the vergence and conjugate components of imaging device movement.

The bilateral controller disclosed herein is for control of binocular vision, without added complications of image processing for target selection. As indicated above, the present invention is applicable with any method of target identification for determination of retinal error. Alternatively, when a sensor is designed to detect retinal error directly, the system is also useful. Though the focus of the present invention is a controller for coordinating various aspects of imaging device motion, it also provides a simple architecture for integrating multi-sensory information and for coordinating imaging device motion with additional platforms.

The preferred binocular controller presented herein is distinguishable over the prior art in several ways. One example is the topology: a two-sided, symmetrical and intercoupled configuration, where each side controls an imaging device's motion. Another example is the integrated means by which both conjugate and vergent components of imaging device movement are coordinated. That is, a "vergence" control system and a "conjugate" control system are not individually identifiable. Nor, for that matter, is an explicit "saccadic" control system found, since slow and rapid imaging device movements are coordinated in an integrated manner. All four components of imaging device movement (conjugate, vergence, slow, and fast) are coordinated in one integrated controller. Using such an integrated approach produces a simple and compact system having two operating modalities. A significant advantage is low computational demands.

Another advantage of the present invention is an ability to fuse multi-sensory stimuli. This is a natural consequence of the architecture described herein and of the resulting controller modes. It is illustrated by the addition of a biological-like vestibulo-ocular reflex (VOR), which stabilizes gaze during head perturbations, even in the dark.

Referring to Fig. 24, an extract from a slow-phase system schematic of Fig. 5 is shown. In the case of angular VOR of human perception systems, additional sensory inputs are derived from the semi-circular canals. During head rotation the canals operate differentially. That is, they produce signals that are of opposite sign. Hence, simple projection of such differential motion signals onto the butterfly controller excites a differential mode and, for the horizontal system, produces conjugate movements of imaging devices, which are appropriate to compensate for detected rotation of a common support platform. Other forms of sensory information may be integrated into the controller in a similar way. Preferably, in the present invention sensory information excites appropriate butterfly controller modes in order to produce required imaging device movements.

Sensory fusion is achieved in the controller by making the butterfly act as a data funnel. Thus, incorporating additional sensory information does not require that separate and distinct control systems be added to the existing one. Advantageously, this is achieved without the problematic parallel architecture studied by Brown. In principle, then, coordinating a richer set of imaging-device-movement components does not necessarily make the control problem excessively difficult. Generally, the addition of an extra plant in the form of a platform for the imaging devices is straightforward.

The superior colliculus plays a key role in integrating multi-sensory information and in coding activity in various sectors of the visual field. The functionality of the colliculi is represented in the controller as a linear combination of retinal errors from each imaging device. This process precludes the use of a dominant imaging device, for example a master-slave arrangement, as is often used in other artificial vision systems.

System stability ultimately places an upper limit on attainable pursuit bandwidth.

The fast-phase system, on the other hand, operating open-loop with respect to vision, complements the slow-phase pursuit system due to its ability to operate beyond the bandwidth of the latter. Switching between these two operating modalities generates a mixed system response which, on the whole, performs better tracking than the slow-phase system could achieve alone.

The key element in the butterfly controller structure is the use of model reference feedback to provide an estimate of imaging device position. Optionally, plant non-linearities are included within the model reference branch, when advantageous. For example, a saturation non-linearity can be included in series with a plant in order to provide a form of protection against large drives. In that case, the same non-linearity is preferably included in the feedback path, to preserve accurate state estimates.

An important issue during the operation of a vision system is its reaction to losing sight of a target. As noted above, the present controller has a predictable and acceptable reaction to loss of sight of a target.

Alternative Embodiments

In an embodiment, retinal error and imaging device control signals are used to predict information for future control. For example, the predicted information is used for maintaining imaging device focus, or for improving ramp tracking where, absent such a prediction, the slope of a slow-phase segment following a fast-phase does not match the slope of the target trajectory. For example, many tracking systems employ some form of Kalman filtering to predict the target trajectory. Alternatively, the type of the system is increased by including integrator elements; since integrators tend to destabilize a system, this is not preferred.

The simplicity and compactness of the controller according to the invention make it a natural candidate for miniaturization and eventual implementation in integrated-circuit (IC) form. Thus, it will be necessary to adapt the present invention as a digital controller clocked at appropriate clock rates. Once implemented in an IC, the present invention is useful in controlling many different systems. Binocular vision systems, of course, can be controlled with the controller. So can monocular systems where target tracking is desired. Also, platform stabilisation applications are well suited to the controller. In an alternative embodiment, the controller is adapted to evoke binocular imaging device movement toward a target even though one of the two imaging devices initially does not image the target. This also allows the controller to operate reasonably in the event that the target is outside the view of an imaging device. This supports monocular operation of the controller.

The invention has applications in general platform stabilisation. In the case of a non-motorised support platform (A) under positioning devices (B), a VOR capability is used to null positioning errors through compensatory motion of the devices (B) according to motion sensing of the support platform (A), thereby reducing the effects of perturbations on orientation. Additionally, for a controllable support platform (A), positioning errors in devices (B) and motion of the support platform (A) are reduced simultaneously, providing even better stabilisation of the last platforms in a chain (here B). An example is provided in Fig. 25, where a platform 100 for stabilisation is shown.

The platform 100 is shown as a triangular platform having three actuators 102, one at each corner thereof. Also at each corner is disposed a stability sensor 104. Sensed stability data, in the form of placement error and/or simple motion from a plurality of sensors 104, is provided to a controller, and the platform 100 is stabilised in accordance with the sensed data. Because the controller provides a stable feedback loop tending toward null error and null motion, the system is believed to perform well in this application. Analogous to gaze control, an objective of the controller according to the present embodiment is 3-D error reduction and surface stabilisation, by nulling sensor signals and/or internal estimates of platform positions. A preferred application would use at least two stacked platforms. Optionally, fewer or more than three sensors are used. As such, it will be evident to those of skill in the art that the invention is applicable to a wide variety of applications.

In a further embodiment, the imaging devices are mounted on a platform analogous to a head, and eye-head co-ordination is necessary. This is facilitated using the present architecture since the plant for controlling platform motion and the plants for controlling imaging device motion each receive the same signal and effectively move in a co-ordinated fashion. The resulting system operates correctly because the present architecture provides a control signal in an attempt to correct error, and not in order to move the imaging devices' lines of sight to a target location in global co-ordinates. For example, the method proposed by Galiana and Guitton in Galiana, H. L., and D. Guitton, "Central Organization and Modeling of Eye-Head Coordination During Orienting Gaze Shifts," Annals of the New York Academy of Sciences, Vol. 656, pp. 452-471, 1992, is based on motor-error control as opposed to world-co-ordinate control. The idea behind motor-error based control is straightforward; both the oculomotor and head controller systems are driven with the same control signal. A feedback pathway for the head is added in parallel with the eye controller feedback pathway to converge on the same butterfly summation nodes. Additionally, but not necessarily, when a sensor exists for head-platform motion or orientation, its signal is optionally incorporated in this additional feedback pathway and provides advantages in the presence of perturbations. Motor-error based control, providing the same control signal to each plant, is essentially unheard of in the field of robotic control.
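The essence of motor-error based eye-head co-ordination can be illustrated with a toy simulation: both the imaging-device plant and the head plant are driven by the same motor error, and gaze is the sum of eye-in-head and head positions. The eye time constant below matches the 460 rad/s plant pole used elsewhere in this description, but the head time constant and the drive gains are arbitrary choices for illustration; this is not the controller of Fig. 26.

```python
# Toy simulation: a shared motor error drives both the eye and the head plants.
DT = 1e-4                                   # small step for simple Euler integration
eye_tau, head_tau = 1.0 / 460.0, 0.25       # fast eye plant, sluggish head plant
k_eye, k_head = 20.0, 5.0                   # drive gains (illustrative assumptions)

target = 10.0                               # desired gaze direction, degrees
eye = head = 0.0
for _ in range(int(2.0 / DT)):              # simulate 2 seconds
    gaze = eye + head
    motor_error = target - gaze             # shared drive; no per-plant world-co-ordinate goal
    eye += DT / eye_tau * (k_eye * motor_error - eye)
    head += DT / head_tau * (k_head * motor_error - head)

# Gaze approaches the target (~9.6 deg here) as the shared motor error is driven
# toward zero; the residual reflects the purely proportional drive of this toy model.
print(f"gaze = {eye + head:.2f} deg (eye {eye:.2f}, head {head:.2f})")
```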

Referring to Fig. 26, a high-level schematic is presented illustrating the basic principles of operation of a controller according to the invention, as applied to the co-ordination of two imaging devices with a supporting head, for simple rotation/perturbation of platforms about a vertical axis and 2-D movement of a target in a plane normal to the rotation axis. Alternatively, using only the differential mode of Fig. 26, co-ordination of a monocular vision system on a moving head is accomplished.

This is best explained by referring to the basic properties of the central core 'butterfly', shown in Fig. 27. Given combined sensory inputs on each side (IR, IL), two outputs are available as motor drives for all plants (DR, DL). These have a particular relationship, which is useful for independent tuning of two imbedded response modes: the common mode and the difference mode, with respective transfer functions.

The control signals provided to the plants are used appropriately to control several platforms in a co-ordinated fashion, provided the co-ordinates of each are selected appropriately. For example, each plant receives the same control signals. Since the controller is attempting to zero the error, the addition of extra plants for correcting the error reduces the time required for error correction.

Given a symmetrical camera co-ordinate system, the conjugate (panning) component of the two-camera system will depend only on the difference between the sensory inputs on the two sides. Vergence (the angle between the cameras' lines of sight), on the other hand, depends only on the sum of all inputs. Each mode has its own dynamics, though in a visual tracking system similar dynamics are preferred. The response mode elicited therefore simply depends on the pattern of sensory signals. In contrast to previous systems, which use a different controller for each desired mode, here asymmetric sensory patterns automatically produce mixed-mode responses.

This describes the use of the controller to co-ordinate two imaging devices and to simultaneously drive a supporting head rotating in a plane parallel to that of the eyes. The head is driven with the difference mode. Further, sensory transducers sense head motion, whether controlled or unexpected (perturbed).

A controller according to the invention is tolerant of non-linearities in the motor drives (e.g. slew rates) or motor excursion limits, provided they are included in the internal models. Despite these limitations, the combined camera/head system provides apparently linear tracking performance over wider amplitude ranges and bandwidths than cameras alone, and with less complexity than prior art systems.

The controller complexity depends only on the order of all internal estimated models and not on the number of desired control modalities. The latter are executed via parametric changes in a single shared controller, while platform control relies on suitable combinations of the same available outputs.

The controller has been illustrated to function with two identical imaging device plants. However, by allowing a parametric asymmetry in the controller without losing topological symmetry, there is potential for accommodating different imaging device plants. This ability, and the tolerance of the controller topology to plant dissimilarities, is an advantage of the present controller.

The present embodiment relies on two modalities, each linked to a topology and a parameter set (slow in symmetrical coupled system, fast in uncoupled system). Further embodiments achieve multiple modalities in each of the slow and fast controller topologies by defining multiple parameter sets linked to an array of error thresholds. This does not increase the dynamic complexity of the controller from its current level, nor computation time and it allows, for example, the selection of emergency high-speed modes under defined conditions.

Referring to Fig. 29, an embodiment of the invention is shown for accurate placement or measurement. A platform 200 is disposed on rails 202 that allow the platform to slide forwards and backwards. On the platform, a controller 204 according to the invention and two imaging devices 206 are disposed. The imaging devices 206 are at a fixed angle of vergence. The controller 204 provides a signal from one mode to a plant (not shown) for moving the platform toward and away from a target 208. The platform moves until the retinal error is within predetermined limits; then an actuator signal is provided. This allows for accurate focus within an image capture with a lens having a fixed focal length and depth of field. It is also useful in painting and other applications where a machine's position and distance to a surface affect machine operation and/or the results of the operation. Additionally, the other mode in the controller is optionally used for lateral positioning of the above positioning system. Further additionally, signals from motion sensors on the platform are fused with visual errors to provide compensation for unexpected perturbations. This extends useful applications to higher precision.
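A minimal sketch of the fixed-vergence ranging loop just described follows. The retinal-error model, the error limit, the drive gain and the working distance are all hypothetical placeholders introduced for the illustration; only the overall idea (advance the platform until the retinal error falls within predetermined limits, then issue the actuator signal) comes from the embodiment above.

```python
# Hedged sketch of the Fig. 29 placement loop (illustrative values throughout).
ERROR_LIMIT = 0.05      # predetermined retinal-error limit, assumed
GAIN = 0.5              # proportional platform drive gain, assumed

def retinal_error_at(distance, working_distance=0.8):
    """Stand-in sensor model: error vanishes where the fixed vergence angle intersects."""
    return distance - working_distance

def position_platform(distance=0.2):
    for _ in range(200):
        e = retinal_error_at(distance)
        if abs(e) <= ERROR_LIMIT:
            return distance, "actuate"      # platform in range; issue the actuator signal
        distance -= GAIN * e                # move the platform to reduce the retinal error
    return distance, "timeout"

print(position_platform())   # converges near the assumed 0.8 m working distance
```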

Some potential applications for the invention include modelling of human eye/head motion for human operators to reduce delays in remote loops, tele-operation system design and implementation, tracking of target objects for use in photography, robotics, manufacturing, security and so forth, measurement of distance, and platform stabilisation.

The invention is applicable to, for example, multi-segmental control. Referring to Fig. 30, the 2-node butterfly described in the preferred embodiment of the invention is augmented to include more 'wings' of the butterfly around a central core, with each wing attached to a node interconnected to the other wing nodes. The result shown provides an architecture for a controller capable of controlling at least as many plants as there are 'wings', four (4) in Fig. 30. Control of the plants is performed in a number of modes governed by the number of wings and the pattern of sensory signals impinging on the multiple nodes. Again, complexity is limited to the number of states in the plant models in each wing, but co-ordination of multiple segments is preserved using a shared motor error. Also, an increased number of modalities is supported since a larger number of gain parameters are modifiable. Referring to Fig. 31, another diagram of a controller similar to that shown in Fig. 30 is presented. Here, the wings are instead arranged in a chain having its two ends unconnected. As is evident, there is sufficient information flow from any wing to another to support the plurality of modes and the plurality of modalities. The use of a closed loop of wings or a chain of wings is determined based on an application and other design criteria.

For example, when an arm is added to a head platform, a third wing is added to the butterfly controller. The third wing need not be symmetrical to the other two wings since it controls a different plant. When desired, visual feedback from the arm to the visual system is provided so as to provide additional sensory information for tracking the arm and for use in controlling arm motion.

The present invention is also useful in control of cameras for use in a telepresence system. Such a system is useful for remote operation of space vehicles, mining vehicles and the like. By providing retinal errors to the controller, the telepresence system can better simulate eye motion for a user of such a system.

Numerous other embodiments of the invention may be envisioned without departing from the spirit or scope of the invention.