
Title:
DIRECTING STEERABLE CAMERA WITH USER BIAS TRACKING
Document Type and Number:
WIPO Patent Application WO/2014/119991
Kind Code:
A1
Abstract:
A surveillance camera system contemplated in our invention includes at least one steerable or PTZ camera (305) and a plurality of static or fixed view cameras (303), comprising individual static cameras SC1, SC2, SC3, which are configured to automatically select and track an object 307 last detected by any one of the cameras. Our method comprises the following steps or process stages, enumerated as F1 to F6 for reference: F1 - Listen for an incoming event-detection message, i.e. a message indicating that an object of interest has been detected. F2 - If only one event is detected, object-tracking data is derived from the object-detection message and the process proceeds to F6 below; if more than one event is detected, the process proceeds to F3. F3 - The user selects, from among the multiple objects detected, a global target, i.e. the particular target to be tracked by the system, overriding all other objects previously detected and tracked by the system. F4 - The local offset of the selected target is then determined. F5 - Object-tracking data is then obtained from the target, whereby the pan, tilt and zoom (PTZ) movement commands may be derived. F6 - The derived PTZ commands are sent to the steerable camera to start tracking the user-selected target.

Inventors:
CHAN CHING HAU (MY)
CHOONG TECK LIONG (MY)
YUEN SHANG LI (MY)
Application Number:
PCT/MY2014/000012
Publication Date:
August 07, 2014
Filing Date:
January 29, 2014
Assignee:
MIMOS BERHAD (MY)
International Classes:
G01S3/786; G08B13/196; H04N7/18
Domestic Patent References:
WO2009079809A1 (2009-07-02)
Foreign References:
US7173650B2 (2007-02-06)
US7750936B2 (2010-07-06)
US7990422B2 (2011-08-02)
Attorney, Agent or Firm:
CHEW, Kherk Ying (Level 21, The Gardens South Tower, Mid Valley City, Lingkaran Syed Putra, Kuala Lumpur, MY)
Claims:
CLAIMS

What we claim is:

1. A method for enabling a user to re-select a target to be tracked from at least an object detected in an automated video tracking system comprising at least one steerable camera and a plurality of fixed view cameras implemented to automatically select and track an object last detected by any one of said fixed view cameras, the method comprising the steps of:

(i) listening for incoming object-detection message [F1];

(ii) upon detecting an object-detected message, checking if more than one object-detection message is received [F2];

(a) if only one event is detected, deriving object-tracking data from said object-detection message and proceeding to step (vi) below; or

(b) if more than one event is detected, proceeding to step (iii);

(iii) allowing the user to select a target [F3];

(iv) determining local offset of said target [F4];

(v) deriving object-tracking data of said target [F5];

(vi) deriving pan, tilt and zoom (PTZ) movement commands from said object-tracking data; and

(vii) sending said PTZ movement commands to said steerable camera [F6].

2. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein step (i)'s listening for incoming object-detection message is looped until an object-detected message is received.

3. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein step (ii)'s checking whether more than one object-detection message is received is performed within a preset duration of time, t0.

4. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein step (iii)'s allowing the user to select a target [F3] further comprises checking if the user has activated Free mode to enable user input to select a target and, where the user has not activated Free mode, determining the current detected camera's relative direction to the steerable camera.

5. A method for enabling a user to re-select a target to be tracked according to Claim 4 wherein a circular overlay and the object-detected camera's view are drawn on the steerable camera's view.

6. A method for enabling a user to re-select a target to be tracked according to Claim 5 further comprising checking for any user input within a predetermined time period t1 and, upon detecting no user input within said predetermined time period, setting the target to be tracked as the latest object-detected camera location (x, y); alternatively, upon detecting an input from the user, obtaining the direction of said input relative to the steerable camera.

7. A method for enabling a user to re-select a target to be tracked according to Claim 6 wherein the target to be tracked is set to be the event-detected camera location (x, y) nearest to the user input direction.

8. A method for enabling a user to re-select a target to be tracked according to Claim 4 wherein the user's activating Free mode is detected, whereupon the user's input is obtained and the selected target is set to zero, including setting to zero the PTZ values of a PTZ Lookup Table.

9. A method for enabling a user to re-select a target to be tracked according to Claim 4 wherein no user input is detected and the local offset values of the selected target are set to zero.

10. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein step (iv)'s determining local offset of target comprises checking if the user has activated Local Offset mode.

11. A method for enabling a user to re-select a target to be tracked according to Claim 10 wherein

if Local Offset is not activated, then local offset is set to zero; alternatively, if Local Offset is activated, then both the drawn circular overlay and other event-detected images are cleared.

12. A method for enabling a user to re-select a target to be tracked according to Claim 11 further comprising checking for any user input within a predetermined time period t2 wherein

- if no user input is made, then the local offset is set to zero; alternatively, if user input is made, then the direction, magnitude and zoom factor are derived from said input and set as local offset values (x, y, z).

13. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein step (v)'s deriving object-tracking data of the target comprises checking if Free mode has been activated.

14. A method for enabling a user to re-select a target to be tracked according to Claim 13 wherein Free mode has not been activated, whereupon the next decision step comprises checking if Local Offset is set to zero.

15. A method for enabling a user to re-select a target to be tracked according to Claim 14 wherein Local Offset has not been set to zero, whereupon the target value and local offset values are summed to obtain the target offset value, upon which a window around the target offset value is defined as the field-of-vision (FOV window) of the steerable camera.

16. A method for enabling a user to re-select a target to be tracked according to Claim 15 wherein a bounding box is drawn around the target and checked as to whether it falls within the FOV window, whereby

- upon the bounding box falling within the FOV window, the pan, tilt and zoom (PTZ) values are obtained from the Look Up Table using the target offset value, and thereafter PTZ movement commands are derived from said PTZ values obtained; alternatively,

- upon at least part of the bounding box falling outside the FOV window, the pan, tilt and zoom (PTZ) values are obtained from the Look Up Table using both target location (x, y) and size (z) values, and thereafter PTZ movement commands are derived from said PTZ values obtained.

17. A method for enabling a user to re-select a target to be tracked according to Claim 16 wherein the resultant PTZ movement commands are translated into a limited movable range of the steerable camera for automatic continuous target tracking without overshooting the target.

18. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein the resultant PTZ movement of the steerable camera in step (vii) [F6] is limited to a range to prevent overshooting the target being tracked, said PTZ range-limiting method comprising the steps of:

(a) obtaining target tracking information, including determining the distance between the target and centre of focus of the steerable camera;

(b) inputting local offset by user as determined from the movement of the user interface device in placing the target at the centre of focus or field-of-vision (FOV) of the PTZ camera;

(c) checking the PTZ Look Up Table to match the PTZ view and PTZ view coverage of the target with respect to the fixed view camera;

(d) checking if a bounding box defining the target falls within a field-of-vision (FOV) window;

(e) computing the new position of the target and sending fine movement commands to the PTZ camera.

19. A method for enabling a user to re-select a target to be tracked according to Claim 18 wherein the PTZ camera movement's new position is calculated as the sum of the centre focus distance and the user offset.

20. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein any user input received at step (iv) leads to the previously drawn overlay and event-detected images being cleared, whereupon the direction, magnitude and zoom factor are derived from said input and set as local offset values (x, y, z).

21. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein steps (iii) [F3], (iv) [F4] and (v) [F5] are provided as an add-on, plug-in or modification to an existing surveillance camera system's software running steps (i) [F1], (ii) [F2] and (vi) [F6].

22. A method for enabling a user to re-select a target to be tracked according to Claim 1 wherein steps (iii) [F3], (iv) [F4] and (v) [F5] are provided on standby until triggered by events including user input being received and more than one event-detection message being received.

23. A method for enabling a user to re-select a target to be tracked from at least an object detected in an automated video tracking system comprising at least one steerable camera and a plurality of fixed view cameras implemented to automatically select and track an object last detected by any one of said cameras, the method comprising the steps of:

(i) receiving incoming event detection information from any one of said cameras;

(ii) deriving object tracking information from said event detection information;

(iii) determining target (x, y) to be tracked;

(iv) determining local offset (x, y, z) of said target; and

(v) calculating the steerable camera's pan, tilt and zoom (PTZ) movement such that it is restricted to within a limited range around said target.

24. A method for enabling a user to re-select a target to be tracked according to Claim 23 wherein

- the relative direction of the fixed view camera with event detection to the current steerable camera view is determined;

- a circular overlay is drawn on the current view of the steerable camera; and

- the target (x, y) is set by the value of any one of the latest fixed view camera location or the fixed view camera location nearest to the user input direction.

25. A method for enabling a user to re-select a target to be tracked according to Claim 23 wherein the local offset (x, y, z) of the fixed view camera is determined by said user input comprising direction, magnitude and zoom factor.

26. A method for enabling a user to re-select a target to be tracked according to Claim 25 wherein the target is ensured to be within a limited movable range of the steerable camera's field of view.

27. A method for enabling a user to re-select a target to be tracked according to Claim 23 wherein the PTZ movement commands to be sent to the steerable camera are calculated by evaluating the overlapped area between said steerable camera window as defined by the local offset and the bounding box of the moving object of said detected event.


Description:
Directing steerable camera with user bias tracking

TECHNICAL FIELD

[001] This invention generally relates to camera systems for automatic tracking of moving objects as targets. Specifically, it concerns a camera system comprising a plurality of static or fixed view cameras and at least a steerable camera working in combination to track targets, such as surveillance and security monitoring camera systems. More particularly, it concerns such automatic camera tracking systems where user input to select or re-select a target to be tracked is enabled.

BACKGROUND ART

[002] A monitoring or surveillance camera system typically comprises a plurality of cameras to cover those areas needed to be viewed or monitored, as shown in FIGURE 1A and FIGURE 1B. To enable the tracking of an object of interest, at least a steerable camera 5 may be added to the system so that the target may be automatically tracked by this steerable camera 5 upon any one of the other cameras, usually static or fixed view cameras 6a, 6b, 6c, picking up an object of interest 7. A steerable camera 5 usually has finely controllable movements in respect of its pan, tilt and zoom (PTZ) options to enable it to move to the desired location of view and to zoom in and focus on the object of interest 7, which may be marked as the target to be tracked. When an object of interest enters the field of view (FOV) of one of the static cameras 6a, 6b, 6c and is detected as a target to be tracked, the PTZ camera 5 will need to know the location of the particular static camera which made the event detection and move in the direction of that static camera location to pick up the target.

[003] A major improvement to automate the tracking of objects of interest in such camera systems includes having a program built into its firmware 8 to automatically detect and track objects of interest 7a, 7b, 7c. All the cameras may typically provide video signal links 4 to be processed by a CPU 2 or firmware 8 running a video content analysis (VCA) program for object identification and automatic object tracking, while processed video may be output to display 3, which may comprise multiple monitors. User input means to the CPU 2 is also shown in the form of a joystick 9. Various video content analysis (VCA) methods and algorithms may be used to recognize changes in the environment being monitored by monitoring the change of pixels generated by the video captured by the camera. Generally, the tracking program picks up the pixel variation as it tries to centre the pixel fluctuation as a moving group of pixels representing the object detected. There may be additional features such as estimating the size of the moving object and its distance from the camera, adjusting the camera's optical lens to stabilize the size of the pixel fluctuation, etc. When an object 7a goes out of all of the cameras' field of view (FOV), the steerable camera 5 returns to its starting or parked position until another event of pixel variation is detected and the process starts over again.

[004] If multiple events or objects are detected, the steerable camera 5 is typically pre-programmed to move to track the latest event or object detected. When objects move in and out of the camera's FOV boundary, the video generated may be stuttering, jerky or dithering between the last object and the latest object entering 7b or leaving 7a the FOV as the steerable camera 5 quickly alternates from tracking the last object to the latest object detected. Such rapid PTZ view switching causes disorientation to the user, apart from committing the system to tracking a non-desired object while a genuine target may be dropped and allowed to move on untracked. To resolve this problem, an option may be provided to the user monitoring the camera system to provide input indicating which of the multiple objects detected is the one of interest to be designated as the target to be tracked. Such user bias target tracking may require certain manual input and control over the PTZ movements of the steerable camera 5, failing which the system will revert to tracking the last detected object again. The user would normally have difficulty in manually controlling the fine steps of the camera's PTZ movements and tends to overshoot and lose the target.

[005] In U.S. Patent No. 7,173,650 (Philips Electronics), this switching to manual user bias is assisted by automatic tracking based on a model created of the unique features of the desired target, which is also helped by the automatic centering of the target's image on the monitor display. Otherwise, the PTZ controls in the user-desired target selection are basically operated manually before automatic tracking of the selected target resumes when the manual control device is released by the user. It is expected that this prior art method faces the aforementioned problems of overshooting and would lose the target as the user grapples with the sensitivity of the input device, such as a joystick, in controlling the fine PTZ camera movements.

[006] U.S. Patent No. 7,750,936 (Sony Corporation) endeavours to provide an immersive surveillance view with four wide-angle fixed cameras facing outwardly from a pole position with a PTZ camera on top, providing a 360° or all-round tracking capability. However, the manual user input option still requires the user to manually control the PTZ movement of the camera, although it may be guided by a laser range finder and filter system. It does not address how the fine PTZ movements may be moderated when taken over by the user manually. Another U.S. Patent, No. 7,990,422 (Grandeye Ltd), also has a wide-angle camera acting as a master providing a distorted wide-angle video which is to be rectified by a slave PTZ camera. While there is a manual mode for the user to select an object of interest, the manual calibration of the slave PTZ camera is limited to keystroke sequences and providing BNC connectors to the camera, without any method disclosed to avoid overshooting and address the sensitivity of the PTZ movements.

[007] There is therefore a need for a surveillance camera system with automatic tracking in which allowance is made for user bias, i.e. an option for the user to re-select the object to be tracked, to resolve target determination and video dithering issues. There is also a need for such user re-selection capability to include modulating the user's mechanical input to enable fine PTZ movements of the steerable camera in defining and initiating tracking of the target.

SUMMARY OF INVENTION

[008] Our present invention endeavours to improve upon surveillance camera systems with automatic object tracking so that the user is given the option to select a desired target to be tracked over the one that would otherwise be selected automatically by the system. In addition, in allowing for such user bias, our invention also strives to enable the user to direct PTZ camera movements within a finely controllable sensitivity, movement range and/or speed automatically. This may be achieved by extending the autotracking features of a steerable camera's PTZ movement to include such controllable fine tracking capability. In particular, our method enables a user to select an object as a target to be tracked, view the target in detail and direct the steerable camera to move in fine movements to the target within a range determined automatically.

[009] Broadly defined, our method enables a user to re-select a target to be tracked from at least an object detected in an automated video tracking system comprising at least one steerable camera and a plurality of fixed view cameras implemented to automatically select and track an object last detected by any one of said fixed view cameras, the method comprising the steps of (i) listening for incoming object-detection messages; (ii) upon detecting an object-detected message, checking if more than one object-detection message is received - either (a) if only one event is detected, deriving object-tracking data from said object-detection message and proceeding to step (vi) below, or (b) if more than one event is detected, proceeding to step (iii), viz allowing the user to select a target; (iv) determining local offset of said target; (v) deriving object-tracking data of said target; (vi) deriving pan, tilt and zoom (PTZ) movement commands from said object-tracking data; and (vii) sending said PTZ movement commands to said steerable camera.

[010] Preferably, step (i)'s listening for incoming object-detection messages is looped until an object-detected message is received. Step (ii)'s checking whether more than one object-detection message is received is preferably performed within a preset duration of time, t0. Step (iii)'s allowing the user to select a target preferably comprises checking if the user has activated Free mode to enable user input to select a target.

[011] In one aspect of our method, the user has not activated Free mode, whereupon the current detected camera's relative direction to the steerable camera is determined. A circular overlay and the object-detected camera's view are drawn on the steerable camera's view, preferably further comprising checking for any user input within a predetermined time period, t1. If no user input is detected within said predetermined time period t1, the target to be tracked may be set as the latest object-detected camera location (x, y). If the user makes an input, the direction of the input relative to the steerable camera is obtained. Preferably, the target to be tracked is set to be the event-detected camera location (x, y) nearest to the user input direction.

[012] In a second aspect, the user's activating Free mode is detected, whereupon the user's input is obtained and the selected target is set to zero, including setting to zero the PTZ values of a PTZ Lookup Table. If no user input is detected, the local offset values of the selected target may be set to zero. Step (iv)'s determining local offset of the target may preferably comprise checking whether the user has activated Local Offset mode. If Local Offset has not been activated, the local offset may be set to zero. If Local Offset has been activated, then both the drawn circular overlay and other event-detected images may be cleared. The checking for any user input may be set to be performed within a predetermined time period t2. If no user input is made within the predetermined time period t2, the local offset may be set to zero. Where user input is provided, the direction, magnitude and zoom factor may be derived from said input and set as local offset values (x, y, z).

[013] In a third aspect, step (v)'s deriving object-tracking data of the target comprises checking whether Free mode has been activated. If Free mode has not been activated, then the next decision step comprises checking if Local Offset is set to zero. If Local Offset has not been set to zero, then the target value and local offset values may be summed to obtain the target offset value. Next, a window around the target offset value is defined as the field-of-vision (FOV window) of the steerable camera. A bounding box is then drawn around the target and checked to see if it falls within the FOV window. If the bounding box falls within the FOV window, the pan, tilt and zoom (PTZ) values may then be obtained from the Look Up Table using the target offset value. Thereafter, PTZ movement commands may be derived from said PTZ values obtained. Where at least part of the bounding box falls outside the FOV window, the pan, tilt and zoom (PTZ) values may then be obtained from the Look Up Table by using both target location (x, y) and size (z) values. Thereafter, the PTZ movement commands may be derived from said PTZ values obtained. In effect, the resultant PTZ movement commands are translated into a limited movable range of the steerable camera for automatic continuous target tracking without overshooting the target.

[014] In a specific embodiment, where the resultant PTZ movement of the steerable camera in step (vii) [F6] is limited to a range to prevent overshooting the target being tracked, said PTZ range-limiting method comprises the steps of (a) obtaining target tracking information, including determining the distance between the target and the centre of focus of the steerable camera; (b) inputting the local offset by the user as determined from the movement of the user interface device in placing the target at the centre of focus or field-of-vision (FOV) of the PTZ camera; (c) checking the PTZ Look Up Table to match the PTZ view and PTZ view coverage of the target with respect to the fixed view camera; (d) checking if a bounding box defining the target falls within a field-of-vision (FOV) window; and (e) computing the new position of the target and sending fine movement commands to the PTZ camera. Preferably, the PTZ camera movement's new position is calculated as the sum of the centre focus distance and the user offset.

[015] In another specific embodiment, where user input is received at step (iv), the previously drawn overlay and other event-detected images will be cleared, whereupon the direction, magnitude and zoom factor are derived from said input and set as local offset values (x, y, z). Alternatively, steps (iii) [F3], (iv) [F4] and (v) [F5] may be provided as an add-on, plug-in or modification to an existing surveillance camera system's software running steps (i) [F1], (ii) [F2] and (vi) [F6]. Preferably, steps (iii) [F3], (iv) [F4] and (v) [F5] may be provided on standby until triggered by events including user input being received and more than one event-detection message being received.

[016] In a fourth aspect of our method for enabling a user to re-select a target to be tracked from at least an object detected in an automated video tracking system comprising at least one steerable camera and a plurality of fixed view cameras implemented to automatically select and track an object last detected by any one of said cameras, the method comprises the steps of (i) receiving incoming event detection information from any one of said cameras; (ii) deriving object tracking information from said event detection information; (iii) determining the target (x, y) to be tracked; (iv) determining the local offset (x, y, z) of said target; and (v) calculating the steerable camera's pan, tilt and zoom (PTZ) movement such that it is restricted to within a limited range around said target. Preferably, the relative direction of the fixed view camera with event detection to the current steerable camera view is determined, a circular overlay is drawn on the current view of the steerable camera, and the target (x, y) is set by the value of any one of the latest fixed view camera location or the fixed view camera location nearest to the user input direction.

[017] In a specific embodiment, the local offset (x, y, z) of the fixed view camera may be determined by said user input comprising direction, magnitude and zoom factor. The PTZ movement commands to be sent to the steerable camera may be calculated by evaluating the overlapped area between said steerable camera window as defined by the local offset and the bounding box of the moving object of said detected event. The target may thus be ensured to be within a limited movable range of the steerable camera's field of view. The methods of our invention may be incorporated in an apparatus for implementation, including in systems having multiple steerable cameras, or implemented as an embedded or add-on module, firmware or system.

LIST OF ACCOMPANYING DRAWINGS

[018] The drawings accompanying this specification as listed below may provide a better understanding of our invention and its advantages when referred to in conjunction with the detailed description that follows hereinafter as exemplary and non-limiting embodiments of our method, in which:

[019] FIGURE 1A (Prior Art) and FIGURE 1B (Prior Art) show a conventional configuration of surveillance camera system comprising a plurality of fixed view cameras and at least a steerable camera linked to user interface, processor and display means, configured to track the last object detected;

[020] FIGURE 2A embodies an overall flowchart of a camera system incorporating the methods of our present invention;

[021] FIGURE 2B shows one embodiment wherein one event is detected;

[022] FIGURE 3A exemplifies a detailed flowchart of a first aspect of our invention's method for determining an object as a target to be tracked;

[023] FIGURE 3C illustrates another embodiment of the object-determining method wherein more than one event is detected;

[024] FIGURE 4A embodies a flowchart of a second aspect of our invention's method for determining local offset;

[025] FIGURE 4B exemplifies one embodiment of the local offset determining method in Free mode with more than one event detected;

[026] FIGURE 4C shows a second embodiment of the local offset determining method in Local mode with no local offset;

[027] FIGURE 4D illustrates a third embodiment of the local offset determining method in Local mode with user input as local offset;

[028] FIGURE 4E represents a fourth embodiment of the local offset determining method in returning to Free mode;

[029] FIGURE 5A illustrates a flowchart of a third aspect of our invention's method for determining PTZ commands;

[030] FIGURE 5B exemplifies a fifth embodiment of the local offset determining method by user input of center focus distance;

[031] FIGURE 5C and FIGURE 5D show a sixth embodiment of the local offset determining method with bounding box and window offset; and

[032] FIGURE 5E represents an embodiment of the PTZ command determining method in deriving new camera positioning.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[033] As a broad, general description, our invention may be described as a method for enabling a user to select or re-select an object of interest to be tracked from among objects detected in an automated video tracking system. For the avoidance of doubt, the term "re-select" in this specification is used to mean user input for choosing an object to be tracked by the system, including first-time selection, selection overriding the system's auto-tracking, or re-selection over a previous user selection. Similarly, an "object of interest" may also be referred to as an "event detected", and the particular object chosen by the user to be tracked may be referred to as the "target" or "global target".

[034] Generally, with reference to FIGURE 2A and FIGURE 2B, the surveillance camera system contemplated in our invention typically includes at least one steerable camera 305 (which may be interchangeably referred to as "PTZ camera" in this specification) and a plurality of static or fixed view cameras 303, comprising individual static cameras SC1, SC2, SC3 which are configured or implemented to automatically select and track an object 307 last detected by any one of the cameras. The method comprises the following steps or process stages, which have been enumerated as F1 to F6 for ease of reference to the accompanying drawings:

(i) F1 - Listen for incoming event-detection message, i.e. an event where an object of interest is detected.

(ii) F2 - Upon detecting an incoming event-detection message, check if more than one object-detection event is detected. If only one event is detected, the object-tracking data or information is then derived from the object-detection message and the process proceeds to step (vi) below; otherwise, if more than one event is detected, the process proceeds to the next step (iii).

(iii) F3 - The user is allowed to select from among the multiple objects detected a global target, i.e. the particular object chosen by the user to be tracked as a target by the system, overriding all other objects detected and tracked by the system.

(iv) F4 - Local offset of the selected target is then determined.

(v) F5 - Object-tracking data or information is then obtained from the target, whereby the pan, tilt and zoom (PTZ) movement commands may be derived.

(vi) F6 - The derived PTZ commands are then sent to the steerable camera to start tracking the user-selected target.

[035] A camera system incorporating the methods of our present invention is shown in FIGURE 2A as a flowchart generalizing the entire process, wherein the novel stages of the process are shown boxed 100 and three steps or stages therein are listed, i.e.

Determining global target (F3) 300;

Determining local offset (F4) 400; and

- Determining PTZ command (F5) to be sent to the steerable camera 500.

[036] Depending on the original configuration of the camera system, i.e. whether the process flow or algorithm for automatic tracking of objects is embodied as software run from the CPU, remotely from a server, or as on-site firmware, or a combination thereof, our methods F3, F4 and F5 may be incorporated in the system as replacement software, a software or firmware update, an add-on module, a plug-in, etc. These three stages F3, F4 and F5 may be implemented within an existing camera system as one or a combination of such modification means. Of course, a new camera surveillance system may be configured afresh with all stages F1, F2, F3, F4, F5 and F6 included. It is also important to note that our invention is being described as a method for enabling a user to re-select a target, i.e. an enablement or option made available to the user, which option need not be exercised or put into operation if, for instance, only one event or object is being detected, which the original or conventional program or algorithm is capable of handling.

[037] As shown in FIGURE 2A, step (i) F1 in listening for incoming object-detection messages is looped back 12 ("NO") until an object-detected message is received 14 before proceeding to the next step, i.e. getting object tracking information from the message F2. At this stage, the process checks whether more than one object-detection message is received within a predetermined duration of time, t0 18. Obviously, if only one object is detected, such as the example situation depicted in FIGURE 2B where only one object is detected by static camera SC3, whose relative direction to the PTZ camera 305 would be readily known, the object tracking information may be obtained directly from the single incoming event detection message. The PTZ camera 305 may thus be manoeuvred to the target to be tracked easily and the requisite PTZ commands sent to the steerable camera 305. The method may proceed directly to stage F5, wherein PTZ commands may be conventionally determined 500 before being sent to the PTZ camera in stage F6. If only one object is detected, there would not be other target or object options to choose from, and the automatic tracking would proceed to track the object 307.
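By way of illustration only, the F1-F6 dispatch described above may be sketched in code as follows. This is a minimal, non-limiting sketch: the camera_bus interface, the four stage callables and the value of t0 are assumptions made for the example and are not part of this specification.

```python
def tracking_loop(camera_bus, select_target, local_offset, tracking_data,
                  send_ptz, t0=2.0):
    """Illustrative F1-F6 dispatch loop; all interfaces are hypothetical."""
    while True:
        # F1: loop until an object-detection message is received
        first = camera_bus.receive()
        # F2: gather any further detection messages arriving within t0
        events = [first] + camera_bus.receive_all(timeout=t0)
        if len(events) == 1:
            # Single event: derive object-tracking data straight from it
            data = tracking_data(events[0], offset=(0.0, 0.0, 0.0))
        else:
            target = select_target(events)          # F3: user-biased selection
            offset = local_offset(target)           # F4: local offset of target
            data = tracking_data(target, offset)    # F5: tracking data of target
        # F6: derive PTZ movement commands and send them to the steerable camera
        send_ptz(data)
```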

F3 - Determining Global Target

[038] If more than one object has been detected, the first stage of our method, F3 or determining global target 300, may then come into play, whose detailed flowchart is exemplified in FIGURE 3A and whose object-detection and target tracking are illustrated in FIGURE 3B. The process for determining global target F3 starts with checking whether the user has activated Free mode 302, which would enable the user to input and select a target from among a plurality of objects detected earlier. If Free mode 302 has not been activated, the current event-detection camera's relative direction to the steerable camera may be determined 304. Since the event-detection camera's position is static, a simple identification of the camera 303 would be sufficient to provide its relative direction to the steerable camera 305. Upon determining the relative direction of SC3 to PTZ camera 305 in step 304, the next step 306 comprises drawing a circular overlay and the view of the event-detection camera SC3 on the displayed view of PTZ camera 305. The purpose of the overlay is to provide the user with the relative position of the static camera to the PTZ camera. With information on the position of the static camera (x1, y1) and the PTZ camera (x2, y2), the invention may compute the distance between the two cameras. Next, the gradient between the cameras is calculated using the trigonometric formula gradient = arctan[(y2 - y1)/(x2 - x1)], and the correct overlay position can be determined. Subsequently, the presence of any user input is checked 308 within a predetermined time period, t1. If no input is detected within time period t1, the global target will be set 310 as the latest event-detected camera location (x, y) to be tracked.

[039] At this juncture, we may explain that user input may be made via a suitable input device such as a keyboard, joystick, touchpen, touchscreen, touchpad, computer mouse and the like input devices for a computer. In another embodiment, if the user has not activated Free mode 302 but has made an input within the predetermined time period t1 308, the direction of the input relative to the steerable camera may be calculated 318. The global target may then be set 320 as the event-detected camera location (x, y) nearest to the user input direction. When Free mode is activated by the user 302, user input is obtained and Free mode PTZ values are obtained 312, whereupon the global target is set to zero 314. It is to be noted that our proposed system includes a Pan-Tilt-Zoom (PTZ) Lookup Table, which is a database in which all the positions and movements of the PTZ camera's pan, tilt and zoom values are recorded and stored during the initialization stage in order to locate each of the static cameras. The PTZ Lookup Table is checked so that the PTZ view and the PTZ view coverage of the object of interest match with respect to the static camera, as each point in the display grid has the attribute of pan, tilt and zoom data. Setting the global target offset to zero entails setting the Look Up Table pan, tilt and zoom (PTZ) values to zero. The local offset is thus set to be zero 316.
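As a non-limiting illustration of the distance and gradient computation in paragraph [038], the following sketch assumes planar camera coordinates; atan2 is used in place of a bare arctangent so that the quadrant of the direction is preserved, and the overlay helper and its parameters are hypothetical:

```python
import math

def relative_direction(static_xy, ptz_xy):
    """Distance and gradient (direction) from the PTZ camera to a static camera.

    static_xy: (x1, y1), position of the event-detection (static) camera
    ptz_xy:    (x2, y2), position of the steerable (PTZ) camera
    """
    (x1, y1), (x2, y2) = static_xy, ptz_xy
    dx, dy = x1 - x2, y1 - y2
    distance = math.hypot(dx, dy)      # distance between the two cameras
    gradient = math.atan2(dy, dx)      # quadrant-aware arctan[(y1-y2)/(x1-x2)]
    return distance, gradient

def overlay_position(view_centre, radius, gradient):
    """Place the static camera's marker on the circular overlay at the
    computed direction (radius and centre are display parameters)."""
    cx, cy = view_centre
    return (cx + radius * math.cos(gradient),
            cy + radius * math.sin(gradient))
```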

F4 - Setting Local Offset

[040] The next stage of our method, F4, wherein the local offset is set, will be described with reference to the flowchart in FIGURE 4A and its various embodiments in FIGURES 4B - 4E. This stage's method comprises checking if the user has activated Local Offset mode 402. Where Local Offset has not been activated, the local offset is set to zero 410. Where the user has activated Local Offset, the previously drawn overlay and other event-detected images are then cleared 404. Next, the method checks for any user input within a predetermined time period t2 406. If no user input is made within the predetermined time period t2, as illustrated in FIGURE 4C, the local offset will be set to zero 408, which embodiment is shown with the Local mode activated and locked to static camera SC1. If user input is made within the predetermined time period t2, as shown in FIGURE 4D, information such as direction, magnitude and zoom factor may be derived from the user input 412. These data are then used to set the local x, y and z offset values 414. The system returns to Free mode by adding the circular overlay and event images to the PTZ view while leaving the Local mode locked to static camera SC1.
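Stage F4 might, purely for illustration, be implemented along the following lines; the user-interface calls, the gesture fields (direction in radians, magnitude, zoom factor) and the default value of t2 are assumptions, not part of this specification:

```python
import math

def determine_local_offset(ui, local_offset_mode, t2=3.0):
    """F4 sketch: derive local offset values (x, y, z) from a user gesture."""
    if not local_offset_mode:
        return (0.0, 0.0, 0.0)               # Local Offset not activated
    ui.clear_overlay_and_event_images()      # hypothetical UI call (step 404)
    gesture = ui.wait_for_input(timeout=t2)  # hypothetical; None on timeout
    if gesture is None:
        return (0.0, 0.0, 0.0)               # no input within t2 (step 408)
    # Decompose the gesture into direction, magnitude and zoom factor (step 412)
    x = gesture.magnitude * math.cos(gesture.direction)
    y = gesture.magnitude * math.sin(gesture.direction)
    z = gesture.zoom_factor
    return (x, y, z)                         # local offset values (step 414)
```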

F5 - Determining PTZ Command

[041] As shown in FIGURE 5A, this PTZ movement command determining process or stage commences with checking if Free mode has been activated by the user 502. Where Free mode is determined to be not activated, it will then be checked if Local Offset has been set to zero 506. If Local Offset has not been set to zero, the global target's value and the local offset value are summed to obtain the global offset value 516, whereupon a window around the target offset value is defined as the field of view (FOV) of the steerable camera 518. Next, as shown in FIGURES 5C - 5E, a bounding box is drawn around the target and checked to see if it falls within the FOV window 520. Where the bounding box falls within the FOV window, the pan, tilt and zoom (PTZ) values are obtained from the Look Up Table using the global target offset value to derive the PTZ movement commands 522 to be sent in step 514. Where at least part of the bounding box falls outside of the FOV window 512, the pan, tilt and zoom (PTZ) values are obtained from both the global target location (x, y) and its size (z). Thereafter, the PTZ movement commands may be derived from these PTZ values 514. This may entail translating the resultant PTZ movement commands into a limited movable range of the steerable camera for automatic continuous target tracking without overshooting or losing the target. Such switching from Free mode to Lock mode (or vice versa) may be achieved by pushing a user-selectable button.
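The branch structure of stage F5 may be sketched as follows. Here the PTZ Lookup Table is modelled as a callable mapping a grid point and a size or zoom value to a (pan, tilt, zoom) triple, and the FOV window size window_half is an assumed parameter; none of these names are taken from the specification:

```python
def derive_ptz_values(target_xy, target_size, local_offset, lut, window_half=64):
    """F5 sketch: pick PTZ values depending on FOV-window containment."""
    if local_offset == (0.0, 0.0, 0.0):
        # Local Offset set to zero: use the target location and size directly
        return lut(target_xy, target_size)
    # Sum global target and local offset to obtain the target offset value
    ox = target_xy[0] + local_offset[0]
    oy = target_xy[1] + local_offset[1]
    # FOV window around the target offset value (left, top, right, bottom)
    fov = (ox - window_half, oy - window_half, ox + window_half, oy + window_half)
    # Bounding box drawn around the target
    half = target_size / 2.0
    box = (target_xy[0] - half, target_xy[1] - half,
           target_xy[0] + half, target_xy[1] + half)
    inside = (fov[0] <= box[0] and fov[1] <= box[1] and
              box[2] <= fov[2] and box[3] <= fov[3])
    if inside:
        # Whole box within the FOV window: look up using the target offset value
        return lut((ox, oy), local_offset[2])
    # Box partly outside: fall back to target location (x, y) and size (z)
    return lut(target_xy, target_size)
```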

PTZ camera movement limits

[042] It is to be noted that, according to our invention, the resultant PTZ movement of the steerable camera in the next stage F6 is confined by limiting the corresponding PTZ movements to a limited range to prevent overshooting the target or having the target out of view of the PTZ camera. This objective is achieved principally in the following steps:

(a) obtaining target tracking information, including determining the distance between the target and centre of focus of the steerable camera;

(b) inputting local offset by user as determined from the movement of the user interface device in placing the target at the centre of focus or field-of-vision (FOV) of the PTZ camera;

(c) checking the PTZ Look Up Table to match the PTZ view and PTZ view coverage of the target with respect to the fixed view camera;

(d) checking if a bounding box defining the target falls within a field-of-vision (FOV) window;

(e) computing the new position of the target and sending fine movement commands to the PTZ camera.

[043] The PTZ camera's new position may be calculated as the sum of the centre focus distance and the user offset, as shown in FIGURE 5B. An alternative embodiment of our method at stage F4 is to accept any user input received at this stage to the effect of clearing the previously drawn overlay and other images of event detection, whereupon the direction, magnitude and zoom factor are derived from said input and set as local offset values (x, y, z).
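Numerically, the position update of paragraph [043] reduces to a sum with a per-axis clamp. A minimal sketch, assuming the centre focus distance and user offset are given as per-axis displacements and that max_step (an assumed parameter) encodes the limited movable range:

```python
def clamp(value, low, high):
    return max(low, min(high, value))

def fine_movement(centre_focus_distance, user_offset, max_step):
    """New PTZ movement = centre focus distance + user offset, with each
    axis clamped to a fine, limited step so the steerable camera cannot
    overshoot the target (per [042]/[043]; max_step is an assumption)."""
    return tuple(clamp(c + u, -max_step, max_step)
                 for c, u in zip(centre_focus_distance, user_offset))

# Example: a pan/tilt displacement of (10.0, -4.0) with user offset (3.0, 1.0)
# and max_step 8.0 yields (8.0, -3.0): the pan component is limited.
```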

[044] The goal of limiting the PTZ movements of the steerable camera may also be seen from the perspective of another embodiment of the method, i.e. comprising the steps of:

(i) receiving incoming event detection information from any one of said cameras;

(ii) deriving object tracking information from said event detection information;

(iii) determining target (x, y) to be tracked;

(iv) determining local offset (x, y, z) of said target; and

(v) calculating the steerable camera's pan, tilt and zoom (PTZ) movement such that it is restricted to within a limited range around said target.

[045] In such an embodiment, the relative direction of the fixed view or static camera with event detection to the current steerable camera view may be determined, followed by drawing a circular overlay on the current view of the steerable camera. Next, the target may be set as the value of any one of the latest fixed view camera location (x, y) or the fixed view camera location nearest to the user input direction. In a preferred embodiment, the local offset (x, y, z) of the fixed view camera may be determined by the user input, which comprises the direction, magnitude and zoom factors. Thus, the target is ensured to be within the steerable camera's FOV by limiting the movable range of the camera's PTZ values, the derived PTZ movement commands to be sent to the steerable camera being calculated by evaluating the overlapped area between the steerable camera window as defined by the local offset and the bounding box of the moving object or target to be tracked. The direction, magnitude and zoom factor may also be provided as an integrated, collective or combined input in the form of a motion vector or other motion estimation parameters.
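The overlap evaluation mentioned in paragraph [045] is, in essence, an axis-aligned rectangle intersection. The following sketch assumes both the steerable camera window and the bounding box are given as (left, top, right, bottom) tuples; the acceptance threshold min_fraction is an assumption:

```python
def overlap_area(a, b):
    """Area of intersection of two axis-aligned rectangles (l, t, r, b)."""
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, width) * max(0.0, height)

def target_within_window(window, bounding_box, min_fraction=0.9):
    """True when a sufficient fraction of the target's bounding box overlaps
    the steerable camera window defined by the local offset (the threshold
    min_fraction is an assumption, not from the specification)."""
    box_area = ((bounding_box[2] - bounding_box[0]) *
                (bounding_box[3] - bounding_box[1]))
    if box_area <= 0:
        return False
    return overlap_area(window, bounding_box) / box_area >= min_fraction
```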

[046] As previously stated, depending on the original configuration of the camera system, the process flow or algorithm for automatic tracking of objects may be embodied as software run from the CPU, remotely from a server, or as on-site firmware, or a combination thereof. Thus, our methods F3, F4 and F5 may be incorporated in the system as replacement software, a software or firmware update, an add-on module, a plug-in, etc., in the form of an apparatus implementing any of these methods. It should also be noted that while the foregoing is described as an example of implementation in a surveillance camera system having a single PTZ camera, our invention may be implemented or duplicated in a system comprising multiple steerable cameras. Most preferably, our methods are implemented in toto from F1 - F6 in new camera systems.

[047] Apart from the above embodiments, our proposed method of tracking a target may also be provided in any one or a combination of the forms of a routine, sub-routine, module, procedure or function as part of the source code within a larger computer program operating the surveillance camera system. As would be apparent to a person having ordinary skill in the art, the aforedescribed methods and algorithms may be provided in many variations, modifications or alternatives to existing camera systems. The principles and concepts disclosed herein may also be implemented in various manners or forms in conjunction with the hardware or firmware of the system which may not have been specifically described herein but which are to be understood as encompassed within the scope and letter of the following claims.