

Title:
AN APPARATUS AND METHOD FOR CONSTRUCTING A DIRECTION CONTROL MAP
Document Type and Number:
WIPO Patent Application WO/2009/056864
Kind Code:
A1
Abstract:
A method of constructing a direction control map for an image capture device comprises detecting an image stimulus and redirecting the image capture device such that the stimulus coincides with a reference location on the image.

Inventors:
LEE MARK HOWARD (GB)
CHAO FEI (GB)
Application Number:
PCT/GB2008/003714
Publication Date:
May 07, 2009
Filing Date:
November 03, 2008
Assignee:
ABERTEC LTD (GB)
LEE MARK HOWARD (GB)
CHAO FEI (GB)
International Classes:
H04N5/232; G08B13/196
Domestic Patent References:
WO2006111734A1 2006-10-26
WO2006012645A2 2006-02-02
Foreign References:
US20060197839A1 2006-09-07
US6977678B1 2005-12-20
Other References:
None
Attorney, Agent or Firm:
KILBURN & STRODE LLP (London WC1R 4PJ, GB)
Claims:

1. A method of constructing a direction control map for an automatically directable image capture device, comprising detecting an image stimulus at a stimulus position in a captured image, redirecting the image capture device according to redirection information and storing redirection information corresponding to said stimulus position if, following said redirection, said stimulus coincides with a reference location on the image, in which the redirection information is not known, prior to said redirection, to cause the stimulus to coincide with the reference location.

2. A method as claimed in Claim 1 further comprising repeating redirection of said image capture device to one or more intermediate positions until said stimulus coincides with said reference location.

3. A method as claimed in Claim 2 further comprising storing redirection information for the stimulus position as the resultant of the multiple redirections.

4. A method as claimed in Claim 2 or Claim 3 further comprising storing redirection information for at least one stimulus position corresponding to an intermediate position.

5. A method as claimed in any preceding claim in which the stimulus position comprises a stimulus position region.

6. A method as claimed in Claim 5 in which the region resolution varies across the image.

7. A method as claimed in Claim 6 in which the resolution decreases with distance from the reference point.

8. A method as claimed in any preceding claim in which the reference location comprises the centre of the image.

9. A method as claimed in any preceding claim in which the reference location comprises a reference region.

10. A method as claimed in any preceding claim in which, where redirection information is stored for at least some positions in the image, the method comprises identifying a neighbour position to a stimulus position for which redirection information is stored and redirecting the image capture device according to said redirection information.

11. A method as claimed in Claim 10 in which the redirection information is stored for the stimulus position if, following said redirection, said stimulus coincides with the reference location on the image.

12. A method as claimed in Claim 10 or 11 in which, following redirection, a new neighbour position is identified and the steps repeated.

13. A method as claimed in any preceding claim in which the redirection information is stored as a mapping from a position in an image to a corresponding movement value in a motor field.

14. A method as claimed in any preceding claim further comprising detecting an image stimulus at a position in relation to which redirection information is stored and redirecting the image capture device according to the redirection information.

15. A method as claimed in any preceding claim in which the image capture device comprises at least one of the group of a security camera, a search camera, a touch-screen controlled camera, a zoomable camera, a digital device camera or a web camera.

16. A method as claimed in any preceding claim in which the stimulus comprises at least one of the group of an image change, an image movement or a predetermined image parameter.

17. A method as claimed in any preceding claim in which the image capture device has a plurality of configurations and redirection information is stored for respective configurations.

18. A method as claimed in any preceding claim in which the redirection information comprises a randomly determined redirection vector.

19. A method as claimed in any of claims 1 to 17 in which the redirection information comprises a predetermined redirection vector.

20. A method as claimed in any preceding claim in which the redirection information comprises a redirection vector and in which, where the redirection vector moves the stimulus position to an intermediate position, redirection information is stored at an image position which would be rendered coincident with the reference location by said redirection vector.

21. A method as claimed in any preceding claim in which the redirection information comprises a redirection vector and in which redirection vectors are stored for image positions corresponding to multiple intermediate positions as well as for image positions corresponding to redirection vector combinations.

22. A method as claimed in any of claims 10 to 12 in which, if a stimulus has a plurality of neighbour positions then redirection information is derived as a function of the redirection information from at least two of said neighbour positions.

23. A method as claimed in any preceding claim in which, if following said redirection said stimulus falls outside an image capture region, a further redirection is applied until the stimulus falls within the image capture region.

24. A method as claimed in any preceding claim in which, if a physical parameter varies such that stored redirection information is incorrect, the stored redirection information is replaced with updated redirection information according to the method as claimed in any preceding claim.

25. A method as claimed in claim 24 in which, if following redirection according to stored redirection information a stimulus does not coincide with the reference location then the method as claimed in any preceding claim is repeated to replace the stored redirection information with newly derived redirection information.

26. A method of constructing a direction control map for an automatically directable image capture device, comprising detecting an image stimulus at a stimulus position in a captured image in which, where redirection information is stored for at least some positions in the image, the method comprises identifying a neighbour position to the stimulus position for which redirection information is stored and redirecting the image capture device according to said redirection information.

27. A method as claimed in claim 26 in which, if a stimulus has a plurality of neighbour positions then redirection information is derived as a function of the redirection information from at least two of said neighbour positions.

28. A method of constructing a direction control map for an automatically directable stimulus capture device comprising detecting a stimulus at a stimulus position, redirecting the capture device according to randomly determined redirection information and storing said redirection information if, following said redirection, said stimulus coincides with a reference location, in which the redirection information is not known, prior to said redirection, to cause the stimulus to coincide with the reference location.

29. A method as claimed in Claim 28 in which the stimulus is an image or tactile stimulus.

30. A method of controlling the relative position of a stimulus capture device and a stimulus comprising detecting the stimulus and positioning one of the stimulus and stimulus capture device according to redirection information corresponding to said stimulus location.

31. An image or stimulus capture system comprising a capture device and a controller in which the controller is arranged to implement the method of any of Claims 1 to 30.

32. A computer programme comprising a set of instructions arranged to implement the method of any of Claims 1 to 30.

33. A computer readable medium storing a computer programme as claimed in Claim 32 and/or redirection information stored according to the method.

34. A computer arranged to operate under the instructions of a computer programme or computer readable medium as claimed in Claim 32 or Claim 33.

35. A method as claimed in any of claims 1 to 30 in which a common direction control map is constructed for multiple automatically directable image capture devices.

36. A method as claimed in claim 35 further comprising detecting an image stimulus in a captured image in a first image capture device, redirecting the first image capture device to track the stimulus and redirecting a second image capture device to track the stimulus in a captured image thereon.

37. An image stimulus capture system comprising a plurality of capture devices and a controller in which the controller is arranged to implement the method of claim 35 or claim 36.

Description:

An apparatus and method for constructing a direction control map

The invention relates to a method and apparatus for constructing a direction control map, for example for an automatically directable image capture device such as a motorised camera.

Such an approach is known for example for ocular-motor systems comprising a motor driven camera requiring sensory-motor coordination to provide the motor variables that drive the camera to centre the image on an image stimulus.

Referring to Fig. 1 and Fig. 2, one known way of calibrating a motorised visual system can be further understood. Referring to Fig. 1, a camera such as a video or CCD device 100 is automatically movable in two dimensions allowing both panning (Mp) and tilting (Mt). Referring to Fig. 2, the corresponding image is shown as a Cartesian grid 200 having grid positions 202, 204 etc. Each reference position on the image 200 has a corresponding motor value for pan and tilt, (Mp, Mt). As a result, when an image stimulus appears at that position in the grid the corresponding motor values (Mp, Mt) are retrieved and the camera is redirected accordingly to bring the image stimulus to a reference point such as the centre point X of the image, 206. So, for example, when an image stimulus 208 appears in grid location 204 the corresponding motor values (Mp, Mt) are retrieved, the values fed to the camera motor and the camera moved such that the image stimulus 208 falls upon the centre of the image 206.
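By way of illustration, such a calibrated map amounts to a lookup table from grid position to pan/tilt values. The following minimal Python sketch shows the retrieval step just described; the names and values are illustrative assumptions, not taken from the patent:

    # Calibration table: (grid_x, grid_y) -> (Mp, Mt) motor values that
    # bring a stimulus at that grid position onto the image centre.
    direction_map = {
        (2, 3): (-14.0, 9.5),
        (4, 1): (6.0, -11.0),
        # ... one entry per grid position, recorded during calibration
    }

    def centre_stimulus(stimulus_cell, move_camera):
        # Retrieve the stored pan/tilt values for the cell containing
        # the stimulus and drive the camera so the stimulus is centred.
        motor_values = direction_map.get(stimulus_cell)
        if motor_values is None:
            raise KeyError("no calibration entry for cell %s" % (stimulus_cell,))
        move_camera(*motor_values)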

According to the conventional approach the motor values (Mp, Mt) for each location are obtained during a calibration exercise. For example, the camera may be moved under operator control to each of the grid positions and the corresponding motor movements recorded and stored against each position. However, this means that a lens, motor or other variable change, or potentially lens aberration, will require complete recalibration, in turn requiring operator intervention and potentially long down time.

The invention is set out in the claims. According to one aspect, camera-motor coordination uses redirection information such as a vector when a stimulus is detected. If the camera movement according to the redirection vector results in the image stimulus coinciding with a reference point on the image, then the corresponding redirection information is stored. As a result operator-controlled calibration is not required: randomly or naturally occurring image stimuli can be used to generate redirection information, and the mapping is instead learnt. The redirection vector can be randomly or pseudo-randomly determined, or can follow a pre-determined search pattern, but is not based on any knowledge of what redirection is required, i.e. is not known to cause the stimulus to coincide with the reference.

According to another aspect, where redirection information is already stored for at least some of the positions in the image, when a new image stimulus is detected the image capture device is redirected according to redirection information from a nearby image position for which redirection information is already stored. As a result the stimulus image will be moved closer to the reference point after redirection, at which point it will either be coincident with the reference point, in which case the redirection information is stored against the image stimulus point, or the process can be repeated and the sum of the movements stored, allowing the system to "zero in" on the reference point in a reduced number of movements. According to further aspects, where the stimulus moves through intermediate positions, mappings can be created for these too, and vector combination can be used to derive yet further mappings. According to a further aspect still, interpolation can be used to weight and apply the redirection vectors attributed to nearby image positions.

Embodiments of the invention will now be described, by way of example, with reference to the drawings of which:

Fig. 1 is a schematic diagram of a directable image capture device;

Fig. 2 is a schematic representation of an image;

Fig. 3 is a flow diagram showing at a high level steps implemented according to the method described herein;

Figs. 4a to 4h show an image stimulus in an image during successive redirection steps according to an aspect of the method described herein;

Figs. 5a to 5e show an image stimulus during successive steps according to a further aspect of the method described herein;

Figs. 6a to 6g show an image stimulus for successive steps according to another aspect;

Fig. 7 is a flow diagram illustrating at a low level steps implemented according to the method described herein;

Fig. 8 is a schematic diagram illustrating a computer system for implementing the method described herein; and

Figs. 9a to 9c are schematic diagrams showing population of additional fields using vector combination.

In overview the approach described herein relates to learning issues involved in the sensory-motor control of a directable image capture device such as a camera or robotic eye. As a result machine learning or automatic learning of the correspondence between camera motion and fixating on a point in the image captured by the camera is provided.

Referring to Figs. 3, 4 and 5, the method of constructing a direction control map - for example, a set of values to be fed into a motor driving a camera according to a control scheme in a motor layer, to centre an image stimulus in an image or visual layer on a reference location such as the centre point of the image - can be further understood. It will be noted that a polar coordinate system is shown rather than the Cartesian system shown in Fig. 2, but any coordinate system can be adopted.

Referring firstly to Fig. 3, at the outset before learning has commenced the image layer is unpopulated as shown in Fig. 4a, and the control value motor layer is shown in Fig. 4b with pan (P) and tilt (T) values from 0 to 100 and starting position P0 = (50, 50). The maps are not pre-wired or pre-structured for any specific spatial system.

In the image of Fig. 4a a reference location comprising a centre point or region is shown at 400. Fields, comprising areas such as groups of pixels sharing common redirection information, are created when new sensory-motor values are to be recorded, and the maps become populated according to the patterns of experiential events. Hence the system at this stage does not know how to move the camera to a position (P) to fixate it on a given point, and has no information regarding the relationship between camera movement and its effect on what is in the image field.

At step 302 a first stimulus image is created. This may be done in any manner. For example a light point, object, movement or any distinguishable or definable visual feature may be placed or appear in the camera field of view; this may be done under operator control or may rely on random occurrences in the image. In addition the stimulus image may be a point image corresponding to a single pixel in the image or may be of greater dimension, in which case, as discussed in more detail below, the centre pixel or any other appropriate point within the image stimulus may be selected as a control point. Hence, as can be seen in Fig. 4a, an image stimulus 402 is detected in the image at (75, 75). The system must now learn what motor values will move the camera such that the stimulus is centred.

At step 304 the camera is moved randomly as shown in Fig. 4c, for example according to a randomly determined redirection vector δM = (20, 40), providing new camera position a = (70, 90) shown in Fig. 4b. Any other movement unrelated to the image stimulus location can alternatively be adopted, for example according to a pre-programmed position-independent value.

At step 306, if the image stimulus is centred or otherwise coincides with the reference location on the image, then the redirection information corresponding to the redirection vector is stored against the original image stimulus location 402 as shown in Fig. 4b, for example by creating a mapping between the values.

According to one approach, if after the first random repositioning of the camera the image stimulus is not centred, the system simply resets, does not store any values and instead waits for the next image stimulus and attempts to find a mapping once again. As shown in Figs. 4d to 4f, however, in the embodiment shown additional redirection vectors δM = (20, -20) giving P = (90, 70) (position b), and δM = (-15, 0) giving P = (75, 70) (position c), are adopted until, in Fig. 4e, the stimulus is within a tolerance range of the centre. Hence a field can be created at the original stimulus position with motor values P - P0 = (25, 20), or ΣδM = (25, 20), as shown in Fig. 4f and position X in Fig. 4b. It will be seen that this can be achieved irrespective of the number of movements of the image stimulus required to centre it. Thus, if a stimulus is detected in the future at that position, it can be immediately centred using the stored motor movement values.

According to this aspect, as can be seen in Figs. 5a to 5e, intermediate fields are generated. Accordingly, after the first redirection vector δM = (20, 40) the stimulus 402 is repositioned at 404, and a corresponding field is created for a point 406 on the image that would be mapped to the centre by the corresponding vector, with values (20, 40). In other words the redirection vector is translated so that it ends at the centre, and a field is created at its other end for which the mapping information is entered.

The origin point of the vector can be determined using any appropriate vector mathematics approach. For example the angle of the vector can be determined against a predetermined origin angle (for example degrees clockwise from vertical) and the length of the vector determined by simple trigonometry, allowing the vector to be translated relative to the centre or reference point to establish its start point for positioning of the intermediate field. Because the motor movements corresponding to the movement vector on screen are known, and the reference location is known once centred, the corresponding start point of the vector can be populated as a field.
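As a sketch of that translation, assuming the reference point and the observed image-space shift are known (the function name and coordinate convention here are illustrative assumptions, not the patent's):

    def intermediate_field_origin(reference, shift):
        # The redirection vector is translated so that it ends at the
        # reference point; its start is then the image position that
        # the same motor move would have centred.
        return (reference[0] - shift[0], reference[1] - shift[1])

    # e.g. with the reference at (0, 0) and an observed image shift of
    # (20, 40), a field is created at (-20, -40) and populated with the
    # motor values that produced the shift.
    print(intermediate_field_origin((0, 0), (20, 40)))   # (-20, -40)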

In Fig. 5c, similarly, the stimulus is mapped by vector δM = (20, -20) to position 408 and a corresponding field is created at the point which would be mapped by the corresponding vector to the centre. Finally, in Fig. 5d, where the stimulus is moved to position 412 by redirection vector δM = (-15, 0), the corresponding field is created at point 414 with redirection values (-15, 0). Then, in Fig. 5e, the final image mapping is shown, where fields exist not only for the original position but also for the intermediate positions 406, 410, 414, simply using the information obtained during the centring exercise. As will be further discussed below, additional features are contemplated. For example, for each intermediate location of the image stimulus while it is being centred, the corresponding redirection information can be stored.

The image can be treated as multiple regions or fields of overlapping elements, such that any image stimulus falling within a given field is assigned the same redirection information. Similarly the centre point or reference location can be a point or a feature of predetermined dimension. According to a further aspect, described in detail below, once the image redirection mapping is partially populated, redirection information can be found more quickly for an image stimulus at a location not yet having a mapping, by centring the image on the nearest neighbour to the image stimulus for which a mapping does exist.

As a result it will be seen that, simply by relying on successive image stimuli being centred and adopting a machine learning approach to finding the redirection information or vector for each point or field in the image, a system that does not require calibration but automatically learns the mappings between image position and motor value can be obtained. Yet further, by assigning common redirection information to fields having a predetermined dimension, the resolution can be varied so as to accelerate the process. Yet further, by deriving redirection information for each intermediate position during centring, multiple mappings can be created during a single centring operation. Further still, by identifying a near or nearest neighbour point to an image stimulus without an existing mapping and redirecting the image capture device to centre the nearest neighbour, the image stimulus can be quickly centred in one or more iterations of this approach. As further image stimuli are detected and mappings created, population of the redirection information will become quicker and will require fewer iterations.

Turning to the approach in more detail, when populated as shown in Fig. 4g there is provided a two-dimensional map consisting of many elements or fields, and the corresponding motor map is shown in Fig. 4h. Although a mapping can be created for every pixel in the image this is clearly data intensive, and so according to another aspect multiple fields are created, each comprising a region of pixels sharing the same mapping vector. The fields may be of any shape and size distribution and may be contiguous or overlapping elements. These elements represent patches of receptive area in which the values are equivalent.

The system thus has image data as the sensory input and a two-degree-of-freedom motor system for moving the image, in conjunction with the map layers illustrated in Figs. 4 and 5. In an embodiment the map uses polar coordinates because a polar mapping is the natural relation between central and peripheral regions of the image. The motor map (Fig. 4b) is in two degrees of freedom (axial rotation of the camera is ignored) and encodes the usual left-right, up-down movements (pan and tilt). As correspondences between fields on different layers are discovered by experience, they become directly linked. That is, when a movement causes an accurate shift of the image field to a peripheral stimulus, the sensory field (giving the stimulus location) is explicitly coupled to the motor field (giving the motor variables that produce the change). By this means, the sensory-motor relations for accurate saccades (i.e. rapid eye-like movements) are discovered and learned.

According to one simple approach adopting the method described herein, an autonomous learning algorithm can be developed to reflect the above learning process as follows: if an object (or other stimulus) occurs in periphery vision, a visual sensor detects the coordinates of the stimulus position. The detected location is then used to access the ocular-motor mapping. If a field that covers the location already exists, the motor values associated with the field are sent to the ocular motor system, which then drives the visual sensor to fixate the object; otherwise, a spontaneous movement is produced by the motor system. After each fixation, i.e. when the visual sensor detects that the object is in the central or foveal region, a new field is generated and the movement motor values are saved with respect to this field. This is summarised as pseudo code below:

For each session
    If object in peripheral vision at (θ, γ)
        Access the ocular-motor map
        If a covering field exists
            Use motor values for this field
        Else
            Record the object's position
            Make a spontaneous motor move
            If the object is within foveal region (reference location)
                Generate a new field
                Enter the object's location and the associated motor values
            Else
                Iterate a new session
            End if
        End if
    Else
        Do not move
    End if
    Iterate a new session
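A minimal runnable sketch of this loop in Python follows; detect_stimulus, is_centred and move_camera stand in for the sensor and motor sub-systems, and keying a field by the exact stimulus position is a simplification - the names here are illustrative assumptions rather than the patent's implementation:

    import random

    ocular_motor_map = {}   # stimulus position -> resultant motor values

    def session(detect_stimulus, is_centred, move_camera):
        position = detect_stimulus()          # object in peripheral vision?
        if position is None:
            return                            # do not move
        if position in ocular_motor_map:      # a covering field exists
            move_camera(*ocular_motor_map[position])
            return
        total = (0.0, 0.0)                    # running sum of motor moves
        while not is_centred():
            dm = (random.uniform(-50, 50),    # spontaneous motor move
                  random.uniform(-50, 50))
            move_camera(*dm)
            total = (total[0] + dm[0], total[1] + dm[1])
        # fixation achieved: generate a new field for the original
        # stimulus position and save the resultant motor values
        ocular_motor_map[position] = total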

In a further development referred to above, prior experience of the system can be invoked, allowing more rapid learning and in particular a reduction in the number of movements required to find the right motor values. This can be understood with reference to Figs. 6 and 7. According to this approach, where the mappings are partially populated, that is, redirection information is stored for at least some positions or locations in the image, use is made of this existing information when an image stimulus is detected for which no mapping currently exists.

Referring to Fig. 6a it will be seen that mappings have been created on the motor map for each of the stimulus positions 404, 406, 410, 414 shown in Fig. 5e. The corresponding moves in the image field can be seen in Fig. 6b. When a new stimulus 600 is detected, as shown on the image in Fig. 6c and on the motor map in Fig. 6a, for example at image position (20, 70), the system checks whether there is a "near neighbour" according to some predetermined "nearness" criterion (see below). In the present instance no near neighbour is detected and hence a randomly or otherwise determined redirection vector δM = (-35, -35) is applied, corresponding to a motor position P = (15, 15). In fact, as can be seen in Fig. 6d, in that instance the stimulus is shifted out of the visual image (position 602), and so a further redirection vector δM = (-5, 25) is applied to provide a resultant position 604 corresponding to a motor position P = (10, 40). As discussed above, at the same time an additional field is created at 606, at the start point from which the resultant vector would map a stimulus to the centre.

At location 604 the repositioned stimulus is close to pre-populated field 406 and hence the corresponding redirection vector δM = (20, 40) from that field is applied in Fig. 6e, such that the stimulus is repositioned to point 608, which is close enough within a predefined tolerance to be considered as centred in Fig. 6f. As a result the final value is added to the image map in a new field 610. In addition, as discussed above, fields can be created for the intermediate positions as appropriate.

Referring to Fig. 7, therefore, at step 700 an image stimulus at X and initial position P = P0 is detected. If it is identified that redirection information exists in a corresponding field then the stimulus is centred. Otherwise information does not exist for that region of the image (i.e. X is not covered by a field) and at step 702 the nearest field for which a mapping does exist is identified. This can be obtained in any appropriate manner. For example, supposing that the ocular-motor map has not yet generated any fields that cover the current stimulus location, let this be (θ, γ). The nearest field to the stimulus can then be selected as an approximation to the target. First an angular tolerance is set to select the fields which have a similar angle to the target field (θ ± δ1). Then a distance tolerance is set to select the fields nearest to the target field from amongst the candidate fields in the above set; the distance gap is defined as γ ± δ2 pixels. The angular parameter is given precedence over distance because, in polar coordinates, the angular coordinate alone is sufficient to determine the trajectory to the origin. From this we obtain a set of fields which fall within the (broad) neighbourhood of the stimulus, and the following formula

MIN[(γ - γx)² + (θ - θx)²]

is used to choose the nearest field from this collection, where γx and θx are the access parameters of the fields in the collection. This is summarised as follows:

If no field exists for location (θ, γ)
    a. For each field fx ∈ fields
        If θ - δ1 < fx(θ) < θ + δ1
            Candidates = Candidates ∪ {fx}
    b. For each field fx ∈ Candidates
        If fx(γ) < γ - δ2 or fx(γ) > γ + δ2
            Candidates = Candidates - {fx}
    c. Apply the MIN formula to Candidates to find the nearest field to (θ, γ)
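In Python, this neighbouring-fields test might be sketched as follows, assuming each field is represented as a (theta, gamma, motor_values) tuple (an illustrative representation, not the patent's):

    def nearest_field(theta, gamma, fields, d1, d2):
        # a. keep fields whose angle lies within theta +/- d1
        candidates = [f for f in fields if theta - d1 < f[0] < theta + d1]
        # b. discard fields whose radius lies outside gamma +/- d2
        candidates = [f for f in candidates
                      if gamma - d2 <= f[1] <= gamma + d2]
        if not candidates:
            return None   # no near neighbour: make a spontaneous move instead
        # c. the MIN formula: minimise (gamma - gx)^2 + (theta - tx)^2
        return min(candidates,
                   key=lambda f: (gamma - f[1]) ** 2 + (theta - f[0]) ** 2)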

Accordingly, at step 704, where a neighbouring field exists the camera/image is moved to centre the nearest neighbour field using the corresponding δM value, as can be seen in Fig. 6f. It will be seen that this will either bring the new image stimulus closer to the centre, in which case the process of moving the stimulus position using redirection information is repeated at step 706, or, if the stimulus is coincident with a field for which a mapping exists, will centre the image stimulus.

In either case the position P is updated as P = P + δM and, if centred, the field is populated with (P - P0) at step 708. It will be seen that the more populated the fields become, the more quickly mappings for image stimuli detected in previously unmapped regions of the image can be obtained.

It will be noted that where a stimulus is found to fall in an existing field then of course it is centred using the existing data and the field corresponding to its original position is populated. Conversely, when the mappings are relatively unpopulated there is a possibility that there will be no near field, depending on the selection criteria used - in this case the process can perform one or more random redirection steps as described above until a nearest neighbour is found. As discussed above, in a further aspect, rather than simply storing the redirection information for the first detection location of the image stimulus (for example by summing the vectors of all of the intermediate movements to find the resultant vector), redirection information can also be obtained for each intermediate position the image stimulus occupies in the image during the iteration described above. This aspect recognises that a new field cannot be generated until the camera has fixated an object at that location, and this process typically takes a long time because most spontaneous moves will not result in a target fixation. However, there is a change in the location of the stimulus in the image after each movement. A vector can be produced from this change as

Vector = Position_new - Position_old

where Position_old denotes the object position before the movement and Position_new the object position after. This vector represents a movement shift of the image produced by the current motor values, allowing access to a field in the image layer together with its corresponding motor values on the motor layer. In so doing, a new field can be generated after each spontaneous movement.

Usually, during learning, many spontaneous movements will be needed until a fixation is achieved, and by using the movement vector idea each fixation can generate many vectors. The current vector will be a sum of the previous vectors, thus:

Vector_sum = Σ_i Vector_i

And the corresponding motor values can also be produced by summation:

M_sum = Σ_i δM_i

This is an incremental and cumulative system, in that the resultant vectors can be built up over a series of actions by a simple recurrence relation:

Vector_sum(t + 1) = Vector_sum(t) + Vector_i(t + 1)
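In code, the recurrence is a simple running sum; the moves below reuse the δM values from the Fig. 4 example, so the final resultant matches the (25, 20) stored there:

    def resultants(moves):
        # Yield Vector_sum(t) after each move Vector_i(t).
        vector_sum = (0.0, 0.0)
        for vx, vy in moves:
            vector_sum = (vector_sum[0] + vx, vector_sum[1] + vy)
            yield vector_sum

    # moves (20, 40), (20, -20), (-15, 0) give running resultants
    # (20, 40), (40, 20) and (25, 20), the last matching P - P0 above.
    print(list(resultants([(20, 40), (20, -20), (-15, 0)])))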

Referring, therefore, to Fig. 7 once again, at step 710 the redirection information is saved for each intermediate position on the image. For example, referring to Fig. 6c, if redirection information did exist for the position occupied by the image stimulus 606 then this could be derived and stored as well. The extended algorithm is summarised in pseudo code below:

FOR each session
    IF target x in peripheral vision at (θ, γ)
        Access the ocular-motor map
        IF covering field exists, fx
            Use motor values for this field = M(fx), EXIT FOR
        ELSE
            LOOP
                Perform neighbouring fields test
                IF neighbouring field fn found
                    Make move using M(fn), to location y
                ELSE
                    Make a spontaneous motor move, to location y
                END IF
                IF point y is within foveal region (centred)
                    Generate a new field fx for the target point x,
                    using (θ, γ), and enter the associated motor values
                    EXIT LOOP
                ELSE
                    IF a covering field for y exists, fy
                        Use motor values for this field = M(fy), EXIT LOOP
                    ELSE
                        y is not covered by a field:
                        create new field fy and enter motor data
                        GOTO LOOP
                    END IF
                END IF
            END LOOP
        END IF
    ELSE
        Do not move
    END IF
    Iterate a new session

As indicated above, mappings can be created for each pixel or point location in the field. In order to accelerate the mapping process and reduce data storage, however, fields containing multiple pixels can instead be adopted. The field density can be higher in the central areas than in the periphery, for example by allowing the radius of central fields to be smaller than those on the periphery; a simple generation rule allows field radius to be proportional to distance from the centre, as illustrated below. The motor coordinate system is simply Cartesian, as each motor is independent and orthogonal, and so the motor map simply stores values.
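A sketch of such a generation rule follows; the constant of proportionality and minimum radius are illustrative assumptions:

    def field_radius(gamma, k=0.15, r_min=2.0):
        # Radius of a new field created at radial distance gamma from
        # the centre: proportional to eccentricity, so central fields
        # are small (fine resolution) and peripheral ones large (coarse).
        return max(r_min, k * gamma)

    # A field 10 pixels from the centre gets radius 2.0, while one 200
    # pixels out gets radius 30.0, reducing the number of fields needed.
    print(field_radius(10), field_radius(200))   # 2.0 30.0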

Similarly it is recognised that the image stimulus may be a point coincident with a single pixel on the image, or may be an object covering multiple pixels or fields. In the latter case the image stimulus may be centred by centring its centre pixel according to any appropriate approach. Similarly the field size can be decreased after initial learning is complete and the first mapping is obtained, such that a low-resolution map is obtained quickly and a higher-resolution map can be obtained at run-time as required. It will further be noted, of course, that any appropriate distribution of field size, and indeed any appropriate field shape or range of shapes, can be adopted. It will also be noted that the stimulus can be of any appropriate type and detected accordingly: for example the colour of a laser pointer spot, a flashing highlight, the coordinates of a selected pixel input directly (for example from a keyboard or from a touch screen that covers the image), or any other feature that can be detected.

Similarly, the manner in which it is detected that the image stimulus has entered the reference location can be any appropriate approach, such as image processing to detect when it enters a circular centre region. The time to complete learning of the map is inversely proportional to the field sizes, given even coverage of stimuli. Fine resolution is possible but would require many small fields; in practice the resolution required is determined by the degree of error allowed in centring, that is, the size of the centre region or reference location, and by processing considerations.

Approaches described herein require a level of linearity in the motor map in order to be optimised, being based for example on the assumption that a given redirection vector will cause the same shift in the image irrespective of where the stimulus is detected. However it will further be noted that motor values can be linearised using an intermediate map, which can also be created in a learning phase.

In cases of extreme lens non-linearity it will be seen that the resultant movement to shift a stimulus to the centre, as the sum of the individual movements required to shift it, will still be entirely accurate, but that intermediate fields may be affected by the lack of linearity. In this case just the initial stimulus position can be populated, and intermediate fields need not be populated.

It will further be seen that, for linear or generally linear systems at least, yet further intermediate field positions can be obtained using vector mathematics.

Referring to Fig. 9a, where, in order to centre the stimulus, it is moved by redirection vectors sa, 900, ab, 902, bc, 904 and cd, 906, then, as discussed above, fields can be populated for each of the corresponding positions as shown in Fig. 9b at respective positions 908, 910, 912, 914.

However it will be seen from Fig. 9a that, in addition, by vector addition a further vector from starting point s to point b can be derived as the sum of vectors sa + ab. Accordingly, as discussed above, the corresponding field can be populated at the starting point of this vector when translated so as to be directed to the centre of the image. As shown in Fig. 9c, therefore, information can be obtained for example for vectors sb, 916, and sc, 918, as well as vectors such as vector bd, 920, and so forth. In fact, for n moves the number of populatable fields is n(n + 1)/2, as the sketch below illustrates.
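The sketch generates the resultants of all contiguous runs of moves, which is where the n(n + 1)/2 count comes from (the move values are illustrative):

    def combined_vectors(moves):
        # Resultant of every contiguous sub-sequence moves[i..j]; each
        # one can populate a field when translated to end at the centre.
        results = []
        for i in range(len(moves)):
            total = (0.0, 0.0)
            for j in range(i, len(moves)):
                total = (total[0] + moves[j][0], total[1] + moves[j][1])
                results.append(total)
        return results

    moves = [(20, 40), (20, -20), (-15, 0), (5, 10)]   # sa, ab, bc, cd
    assert len(combined_vectors(moves)) == len(moves) * (len(moves) + 1) // 2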

According to yet a further aspect, in generally linear arrangements it is possible to use interpolation to obtain an improved estimate of a starting redirection vector from neighbour fields to centre a stimulus point. Where, for example, a stimulus point is near two already populated fields then, instead of simply taking the motor values from the nearest field and shifting the camera accordingly, a redirection vector can be applied as a weighted average of the redirection vectors from two or more neighbouring fields, the weighting being related to the distance of the stimulus point from the respective fields. For example a normalised set of weighting factors can be applied, inversely proportional to the respective distances of the nearby fields relied on.
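A sketch of such interpolation follows, weighting each neighbouring field's redirection vector by normalised inverse distance; the field representation is an illustrative assumption:

    import math

    def interpolated_redirection(stimulus, neighbour_fields):
        # neighbour_fields: list of ((x, y) field position, (mx, my) vector)
        weights, vectors = [], []
        for (fx, fy), vec in neighbour_fields:
            d = math.hypot(stimulus[0] - fx, stimulus[1] - fy)
            weights.append(1.0 / max(d, 1e-9))   # nearer fields weigh more
            vectors.append(vec)
        total = sum(weights)
        return (sum(w * v[0] for w, v in zip(weights, vectors)) / total,
                sum(w * v[1] for w, v in zip(weights, vectors)) / total)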

In operation the approach can be implemented in a range of different applications. For example, in the case of operator-controlled security cameras, a static surveillance camera could detect, for example, movement and centre the image on the area of most movement, alerting an operator. By being sensitive to movement it would automatically follow the source and keep it central. In the case of non-operated systems, improved image quality and storage could be obtained by moving the camera to points of interest such as movements, allowing the camera to centre on any such detected movement, giving improved quality recorded footage and the possibility of linking to alarms or surveillance centres.

In a search application, changes or movements can be detected by a search camera, allowing the camera to automatically centre on an area of interest so that an operative can decide whether it requires attention. This can be of benefit, for example, where an image remains unchanged for long periods of time.

Systems can be yet further enhanced if definitions are provided for the specific image stimuli being monitored, such as a colour, type of movement, type of shape and so forth. For example the stimulus could be a red dot, allowing tracking of a laser pointer, which could be of use in lectures and video conferencing. In such a case, if the central area or reference location is large enough, or of low enough resolution, then tremors and jitters from the user will not be followed. Similarly this can be used as an aiming device, allowing the camera to be aimed at a dot and causing any mechanism attached thereto, for example a hose, an x-ray device, a particle accelerator, search lights or an infrared torch, to be similarly directed. Yet a further possibility is providing a motorised web camera such that the web camera can be moved to keep an object of interest in the centre of the image without requiring any prior knowledge of the camera, for use in video conferencing, messaging or computer games for example.

A camera fitted with a variable zoom lens can provide mappings for a series of zoom settings, either by an automated approach when the zoom is motorised or by user selection of a map for a zoom setting. In yet a further approach, a mobile camera on the end of an endoscope can allow finer control of the image during medical procedures, for example by centring on a formation of interest for a photograph or intervention without requiring mechanical repositioning of the endoscope.

It will further be seen that the system can be used in reverse. Where movement of the object of interest is controlled, for example by motors, then the system can move the object to keep it in the centre of the image no matter where the camera is pointing. Referring for example to Fig. 6b, where the camera is fixed and the object 606 is detected in field 604, the corresponding redirection information for field 604 can be fed to the motors controlling the object to shift it onto the centre point 600. This can be of benefit in controlling robotic devices or gantries.

In yet a further application, if a recording facility is available (as in typical camcorders etc.) then various different applications are possible. For example, considering a configuration with a fixed camera and moveable objects of interest, a desired movement or set of movements can now be learned. Having set the device to record mode, an operator or other agent moves the object in a desired movement pattern and plays the recording back to the learning system. The location of the object in the visual image is made to be the reference point (or "centre") of the system and so the movement pattern is learned, even over a long sequence of movements. The recordings become templates for desired movement patterns, and so the system can use recordings from other sources or systems. In this way the system could imitate or learn from another system.

When a stimulus point is covered by two or more overlapping fields, there are several options for selecting motor values. According to one option, the system uses the closest field, as defined by geometric or vector distance. Alternatively the system can use a function which biases towards the outer fields - this will give more undershoot than overshoot in the resulting redirections or saccades. Alternatively still, the system can use other functions to bias towards high or low aim, or in the direction away from the most recent previous stimulus, or any other bias that may be beneficial. In all cases different selection functions allow a wide range of biases and subtly different but useful behaviours, as sketched below.
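As an illustration, the selection among overlapping fields can be expressed as a pluggable scoring function; the bias functions below are illustrative assumptions, not taken from the patent:

    import math

    def select_field(stimulus, covering_fields, score):
        # covering_fields: list of (position, motor_values) whose
        # receptive areas all cover the stimulus point.
        return min(covering_fields, key=lambda f: score(stimulus, f))

    def closest(stimulus, field):
        # default option: plain geometric distance to the field centre
        (fx, fy), _ = field
        return math.hypot(stimulus[0] - fx, stimulus[1] - fy)

    def undershoot_bias(stimulus, field, gain=0.5):
        # bias towards outer fields: discount the distance for fields
        # further from the image centre (assumed at the origin), which
        # favours undershoot over overshoot in the resulting saccade
        (fx, fy), _ = field
        return closest(stimulus, field) - gain * math.hypot(fx, fy)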

The approach as described above can be implemented in any appropriate manner. For example a motorised camera system can be provided in conjunction with a motor sub-system and two software vision sensors. The motor system is implemented by a motorised pan-and-tilt device, and the sensor system by a video camera and associated image processing software of any appropriate type.

The pan-and-tilt device provides two degrees of freedom: the pan motor can drive the video camera to rotate about a vertical axis, giving left-right movement to the image, and the tilt motor can drive the camera to rotate about a horizontal axis, giving up-down movement. Combined movements of the pan and tilt motors cause motion along an oblique axis. The pan/tilt device can effectively execute saccade-type actions based on supplied motor values from the learning algorithm. Each motor is independent and has a value (Mp for pan and Mt for tilt) which represents the relative distance to be moved in each degree of freedom.

The sensor sub-system consists of two sensors: a periphery sensor and a centre or foveal sensor. The periphery sensor detects new objects or object changes in the visual periphery area, and also the positions of any such changes (encoded in polar coordinates). The centre sensor detects whether any objects are in the central (foveal) region of the visual field. In an embodiment the camera capture rate is one frame per second; however, faster rates, for example video frame rates, are of course possible. Each object is represented by a group of pixels flocking together in the captured image. The position of the central pixel amongst these pixels is used as the position of that object. The image processing program compares the currently captured image against the stored previous image. If the number or the position of any central pixels within these two images differs, the program regards these differences as changes in the relevant objects, and encodes the positions of both the previous and current central pixels of those changed objects in polar coordinates. Note that an object "change" here signals any of the following three situations: (i) an object is moved to a new location in the environment; (ii) an object is removed from the environment; and (iii) a new object is placed in the environment. In an embodiment a circular area of radius 20 pixels in the centre of the image is defined to be the foveal region. If the central pixel of an object is in this central area, the object is considered fixated; otherwise it is not fixated.
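The two sensors can be sketched as follows, using the 20-pixel foveal radius from the embodiment above; the per-object central-pixel dictionaries are an illustrative representation, not the patent's data structures:

    import math

    FOVEA_RADIUS = 20   # pixels, per the embodiment described above

    def is_fixated(central_pixel, image_centre):
        # Centre (foveal) sensor: is the object's central pixel within
        # the circular foveal region?
        return math.hypot(central_pixel[0] - image_centre[0],
                          central_pixel[1] - image_centre[1]) <= FOVEA_RADIUS

    def detect_changes(prev_pixels, curr_pixels):
        # Periphery sensor: compare per-object central pixels between
        # two frames and report moved, removed and newly placed objects.
        moved = {k for k in prev_pixels.keys() & curr_pixels.keys()
                 if prev_pixels[k] != curr_pixels[k]}
        removed = prev_pixels.keys() - curr_pixels.keys()
        added = curr_pixels.keys() - prev_pixels.keys()
        return moved, removed, added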

Once the object is fixated the mapping is created in any appropriate manner.

For example the fields in the sensory (image) layer can be plotted in polar coordinates and marked by numeric labels which keep correspondence with the motor fields. If there are changes or problems, e.g. if a camera lens is changed as in a microscope, say, the algorithm can be restarted and a new map learned. Maps can be easily stored in files, and so a map could be stored for each lens, thus allowing a switch to another map instead of relearning. This means that imperfect or changing lenses/video systems, and imperfect motor systems, are no barrier to learning the relationship.

Referring to Fig. 8, it will be appreciated that the approach as described above can be controlled by a computer system, for example a personal computer of a type well known to the skilled reader.

Accordingly the system comprises a computer, designated generally 800, including memory 802 and a processor 804. The computer includes or is connected to an image processing module 806 which receives signals from a camera or other image capture device 808. The camera 808 is controlled to move under the control of a motor module 810, which can be integral with or separate from the camera, and steps or otherwise moves to predetermined pan and tilt values under the control of the computer 800. Accordingly, in operation, when an image stimulus occurs at the image capture device 808 it is detected by the image processor module 806 and reported to the processor 804. The computer implements the approach as described above either to instruct the motor module 810 to move the image capture device 808 randomly, or to relocate it according to redirection information stored for the image stimulus location or its nearest neighbour. The camera is then moved under the control of the motor module 810 until centring is achieved, and the corresponding redirection information for any previously unmapped image stimulus location is stored against the location on the image in memory 802.

According to the approach a simple automatic learning process is provided without requiring calibration of the device. In particular, it is found that rapid learning is achieved according to the approach as described herein. Once some initial population has taken place, it is found that the rate of movements using nearest neighbour fields increases sharply and then declines, and that direct accurate movements using the correct corresponding fields increase extremely quickly until only this type of movement exists, as the rate of field creation drops. Hence the system is fast, incremental and cumulative in its learning, providing a range of desirable characteristics for real-time autonomous agents.

The system can learn both linear and non-linear relationships, including any monotonic relation between distance in the image and motor movement, and can learn most quickly when stimulus locations are not repeated and have an even distribution. Yet further, learning can take place during use - some little-used part of the map may not be learned at all during early stages but can be incorporated automatically when required. Yet further, selectable resolution is obtained by varying the field size, distribution or shape as appropriate. Yet further, no prior knowledge of the image or motor system is required, and relearning of the map is possible at any time.

It will be recognised that various aspects of the embodiments described above can be interchanged and juxtaposed as appropriate. Any form of image capture or other imaging or imaging-dependent device can be adopted, and any means of identifying regions of the image field similarly can be used. Similarly any means of moving and controlling the device can be implemented according to any required coordinate or other system. Although a simple two-dimensional mapping is discussed herein, additional dimensions can be added. For example stereoscopic vision can be implemented, or a depth dimension otherwise obtained. In addition to pan and tilt motion, axial rotation or movement in the Z direction may be implemented for the imaging device, as well as more complex zoom approaches as described above. Any appropriate field of view, shape, coordinate system, lens, sub-field, shape distribution or dimension, and any appropriate positioning, shape or resolution for the reference point, can be adopted. Although discussion is made principally of imaging in the visual spectrum, of course any image detected in any manner can be accommodated by the approach as described herein. For example a tactile or touch-based approach can be adopted for detecting and centring stimuli, for example of the type known from atomic force microscopes (AFM), or an artificial skin based on an array of sensing patches allowing movement of the supporting structure such that a touched point is moved to a central reference location. Any appropriate stimulus can be used to teach the system; for example a "test card" or predetermined image containing multiple stimuli can be applied to drive the learning process.

Yet further, if there is a change in, for example, a physical parameter of the system such as a lens, so that existing redirection information in populated fields no longer centres a stimulus falling within a field, then the system can simply re-learn and re-populate the redirection information with replacement information in the manner described above. This may be detected, for example, by noting that a stimulus falling in a populated field and redirected according to the corresponding redirection information is not centred, in which case a re-learning algorithm can be commenced following the procedures discussed above to provide replacement information for that field. Of course this can be extended to all fields, and to all intermediate fields during the re-learning process, as appropriate.

It will be seen that alternative functionalities can be implemented using the invention described herein. One such implementation is in the field of camera-to-camera tracking. This approach is useful, for example, where a field of view is shared by two or more cameras or other imaging devices which may have partially or fully overlapping zones of field of view. For example this may be used in a closed circuit television (CCTV) implementation. Currently the use of CCTV to track a subject or other stimulus from one camera to the next requires human intervention, which can be costly and complex.

According to the approaches described herein, the method of constructing a direction control map can comprise incorporating a "shared" image map that allows communication between multiple cameras. For example, in the case of two cameras, each camera will have its own map and there will be a third, shared image map, the maps being populated as described herein. This allows detection of a moving object stimulus from a scene, centring of the object in the field of view and tracking of the object using a first or primary camera, followed by a secondary and potentially further cameras, until the object is out of range. Information from the first camera can be used to position the second camera to pick up the subject before it leaves the first camera's field of view, by using the shared map.

Detection of a stimulus appearing at the edge of the lens is also permitted and, in addition, in all of the embodiments described herein one or more moving stimuli can be detected, centred and tracked from a single field of view containing multiple similar stimuli.

As a result a stimulus can be tracked by a sequence of cameras without human intervention allowing a more automated and integrated CCTV or other monitoring system.

The approach can be used in a range of applications, including CCTV surveillance systems and other object tracking systems.