Title:
OBJECT TRACKING SYSTEMS AND METHODS FOR TRACKING AN OBJECT
Document Type and Number:
WIPO Patent Application WO/2020/145883
Kind Code:
A1
Abstract:
According to various embodiments, there is provided a method for tracking an object, the method including: tracking the object in a first area, using a first surveillance system including a first plurality of cameras surveying the first area; determining whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system including a second plurality of cameras surveying a second area.

Inventors:
SETIAWAN BONDAN (SG)
KAZAMA YORIKO (SG)
Application Number:
PCT/SG2019/050013
Publication Date:
July 16, 2020
Filing Date:
January 10, 2019
Assignee:
HITACHI LTD (JP)
International Classes:
H04N7/18; G06K9/62; G06T7/33
Foreign References:
CN104038729A2014-09-10
US7450735B12008-11-11
CN102436662A2012-05-02
CN106878666A2017-06-20
Attorney, Agent or Firm:
VIERING, JENTSCHURA & PARTNER LLP (SG)
Claims:
CLAIMS

1. A method for tracking an object, the method comprising:

tracking the object in a first area, using a first surveillance system comprising a first plurality of cameras surveying the first area;

determining whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and

upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system comprising a second plurality of cameras surveying a second area.

2. The method of claim 1, further comprising:

ceasing tracking of the object using the first surveillance system, upon determining that the object is exiting the first area, or that the object is in the second area.

3. The method of claim 1, wherein tracking the object using the first surveillance system comprises:

matching a digital signature of the object against the video captured by at least one camera of the first plurality of cameras.

4. The method of claim 3, further comprising:

upon determining that the object is exiting the first area, transmitting the digital signature of the object from the first surveillance system to the second surveillance system.

5. The method of claim 3, further comprising:

upon determining that the object is exiting the first area, broadcasting the digital signature of the object to a plurality of surveillance systems comprising cameras surveying a corresponding plurality of other areas.

6. The method of claim 1, wherein the first area is distinct from the second area.

7. The method of claim 1, wherein the first area partially overlaps with the second area.

8. The method of claim 1, wherein determining whether the object is exiting the first area comprises:

detecting the object in a vicinity of an exit of the first area.

9. The method of claim 1, wherein determining whether the object is exiting the first area comprises:

detecting the object at a periphery of the first area.

10. The method of claim 1, wherein determining whether the object is exiting the first area comprises:

detecting the object moving along a predefined pathway between the first area and the second area.

11. The method of claim 1, wherein determining whether the object is exiting the first area comprises:

predicting a future trajectory of the object based on a velocity of the object.

12. The method of claim 1, further comprising:

determining that the object is entering the second area while tracking the object using the first surveillance system.

13. The method of claim 12, wherein determining that the object is entering the second area while tracking the object using the first surveillance system comprises:

detecting the object in a region where the first area overlaps with the second area.

14. The method of claim 12, wherein determining that the object is entering the second area while tracking the object using the first surveillance system comprises:

detecting the object moving along a predefined pathway between the first area and the second area.

15. The method of claim 12, wherein determining that the object is entering the second area while tracking the object using the first surveillance system comprises:

predicting a future trajectory of the object based on a velocity of the object.

16. The method of claim 1, wherein determining whether the object is exiting the first area comprises:

determining that the object is absent from the first area.

17. An object tracking system comprising:

a first surveillance system comprising a first plurality of cameras surveying a first area, wherein the first surveillance system is configured to track an object in the first area;

a second surveillance system comprising a second plurality of cameras surveying a second area, wherein the second surveillance system is configured to track the object in the second area;

a path processor configured to determine whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and

a controller configured to initiate tracking of the object using the second surveillance system, upon the path processor determining that the object is exiting the first area.

18. The object tracking system of claim 17, wherein the controller is further configured to cease tracking of the object using the first surveillance system, upon the path processor determining that the object is exiting the first area.

19. A non-transitory computer readable medium comprising instructions which, when executed, perform a method for tracking an object, the method comprising:

tracking the object in a first area, using a first surveillance system comprising a first plurality of cameras surveying the first area;

determining whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and

upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system comprising a second plurality of cameras surveying a second area.

20. The non-transitory computer readable medium of claim 19, wherein the method comprises: upon determining that the object is exiting the first area, ceasing tracking of the object using the first surveillance system.

Description:
OBJECT TRACKING SYSTEMS AND METHODS FOR TRACKING AN OBJECT

TECHNICAL FIELD

[0001] Various embodiments relate to object tracking systems and methods for tracking an object.

BACKGROUND

[0002] There is an increasing need for surveillance of wide areas, such as bus terminals, train stations, airports, theme parks, or other public facilities. While existing surveillance systems may be able to track objects in confined spaces, these existing systems may not be scalable to monitoring wide areas, which require a large number of cameras that, in turn, generate a vast amount of data.

SUMMARY

[0003] According to various embodiments, there may be provided a method for tracking an object, the method including: tracking the object in a first area, using a first surveillance system including a first plurality of cameras surveying the first area; determining whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system including a second plurality of cameras surveying a second area.

[0004] According to various embodiments, there may be provided an object tracking system including: a first surveillance system including a first plurality of cameras surveying a first area, wherein the first surveillance system is configured to track an object in the first area; a second surveillance system including a second plurality of cameras surveying a second area, wherein the second surveillance system is configured to track the object in the second area; a path processor configured to determine whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and a controller configured to initiate tracking of the object using the second surveillance system, upon the path processor determining that the object is exiting the first area.

[0005] According to various embodiments, there may be provided a non-transitory computer readable medium including instructions which, when executed, perform a method for tracking an object, the method including: tracking the object in a first area, using a first surveillance system including a first plurality of cameras surveying the first area; determining whether the object is exiting the first area based on a video captured by cameras of the first plurality of cameras; and upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system including a second plurality of cameras surveying a second area.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments are described with reference to the following drawings, in which:

[0007] FIG. 1 shows a scenario diagram that illustrates a problem of an existing method of tracking an object.

[0008] FIG. 2 illustrates a scenario diagram of a method of tracking an object according to various embodiments.

[0009] FIG. 3 illustrates an example of a floor plan of a venue being monitored using an object tracking system according to various embodiments.

[0010] FIG. 4 illustrates a system architecture diagram of an object tracking system according to various embodiments.

[0011] FIG. 5 illustrates a block diagram of an object tracking system according to various embodiments.

[0012] FIG. 6 illustrates a flowchart of a method for monitoring a wide area according to various embodiments.

[0013] FIG. 7 illustrates an example of an object table according to various embodiments.

[0014] FIG. 8 illustrates a flowchart of a method of tracking an object according to various embodiments.

[0015] FIG. 9 illustrates a flowchart of an object key transfer sequence according to various embodiments.

[0016] FIG. 10 illustrates a flowchart of an object key broadcast sequence according to various embodiments.

[0017] FIG. 11 illustrates a diagram showing a scenario for transferring the key image according to various embodiments.

[0018] FIG. 12 illustrates a diagram showing a scenario for broadcasting the key image according to various embodiments.

[0019] FIG. 13 illustrates example screens of a graphical user interface (GUI) of a surveillance system according to various embodiments.

[0020] FIG. 14 illustrates an example screen of a GUI of a surveillance system according to various embodiments.

[0021] FIG. 15 illustrates an example screen of a GUI of a surveillance system according to various embodiments.

[0022] FIG. 16 illustrates an example screen of a GUI of a surveillance system according to various embodiments.

[0023] FIG. 17 illustrates a flow diagram of a method for tracking an object according to various embodiments.

[0024] FIG. 18 illustrates a conceptual diagram of an object tracking system according to various embodiments.

DESCRIPTION

[0025] Embodiments described below in context of the systems are analogously valid for the respective methods, and vice versa. Furthermore, it will be understood that the embodiments described below may be combined, for example, a part of one embodiment may be combined with a part of another embodiment.

[0026] It will be understood that any property described herein for a specific object tracking system may also hold for any object tracking system described herein. It will be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it will be understood that for any object tracking system or method described herein, not necessarily all the components or steps described must be included in the device or method; only some (but not all) components or steps may be included.

[0027] In this context, the object tracking system as described in this description may include a memory which is for example used in the processing carried out in the object tracking system. A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).

[0028] The term “coupled” (or “connected”) herein may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling or indirect coupling (in other words: coupling without direct contact) may be provided.

[0029] In order that the invention may be readily understood and put into practical effect, various embodiments will now be described by way of examples and not limitations, and with reference to the figures.

[0030] In the context of various embodiments, the phrase "key image" may be, but is not limited to being, interchangeably referred to as "object key" or "digital signature". The key image may be an image of a target object that is to be tracked by the object tracking system. The object tracking system may search for the target object in any image by comparing the key image against that image.

[0031] In the context of various embodiments, the phrase "perimeter camera" may refer to a camera of a surveillance system that is positioned in the vicinity of an exit or entrance of a surveillance area such that the exit or entrance is within the field-of-view of the camera.

[0032] FIG. 1 shows a scenario diagram 100 that illustrates an existing method of tracking an object. Multiple cameras 106a to 106h may be set up to monitor overlapping surveillance areas 108a to 108h respectively. Typically, a user may have information on the initial location of the target object, which in this example, is a bicycle 104. The bicycle 104 may initially be in the field of view of the camera 106a. The user may input the initial location of the bicycle 104, which is area 108a, into an object tracking system. The user may also define in the object tracking system that the object to be tracked is the bicycle 104. In an example scenario, the bicycle 104 may be missing and suspected to be stolen. The user may wish to know if the bicycle 104 had entered a particular surveillance area, for example the area 108e. To track the bicycle 104 as it travels from surveillance areas 108a to 108e, the object tracking system may need to continuously track the bicycle 104 across the surveillance areas 108a to 108e which cover a continuous stretch of area with no gaps in between. In other words, the object tracking system may track the bicycle 104 frame-by-frame on images from each camera where the object is captured. While such an object tracking system may be effective in tracking the target object, the system may not be scalable to track the target object across a wide area as the system would require a vast amount of computation resources to constantly search the wide area for the target object.

[0033] FIG. 2 illustrates a scenario diagram 200 of a method of tracking an object according to various embodiments. The method may include segmenting a wide surveillance area into more than one smaller section, such as surveillance section-1 208a and surveillance section-2 208b. Each surveillance section may be monitored by a respective surveillance system. Each surveillance system may include a set of cameras and an analysis infrastructure such as a server or a processor to perform tracking on target objects in the corresponding surveillance section. Unlike the existing method described with respect to FIG. 1, which tracks the target object frame-by-frame in the images captured by each camera, the present method may perform search and identification of the target object only using the images captured at some particular areas, for example, near the section perimeters. The method may involve detecting object movement section by section. As the target object moves, or is predicted to move, from one section to another section, a key image of the target object is passed from a first surveillance system to a second surveillance system. The surveillance systems may track the target object only when they are in possession of the key image. In other words, when the key image is transferred out of the first surveillance system, the first surveillance system may stop tracking the target object. The second surveillance system which receives the key image may start tracking the target object. As such, the burden of identifying and tracking the target object may be transferred from one surveillance system to another, as the target object moves from one surveillance section to another. Consequently, the computational resources required to track an object across a wide surveillance area may be optimised. Referring to the scenario diagram 200, the bicycle 104 may travel from section-1 208a to section-2 208b. Section-1 208a may be under the purview of the first surveillance system, while section-2 208b may be under the purview of the second surveillance system. The first surveillance system may include a plurality of cameras, each of which has a field-of-view that covers a respective area 210a. The second surveillance system may also include a plurality of cameras, each of which has a field-of-view that covers a respective area 210b. The areas 210a and the areas 210b need not cover the entire span of section-1 208a and section-2 208b, as long as they include the entry/exit points of section-1 208a and section-2 208b. For example, the first surveillance system may detect the bicycle 104 in section-1 208a after the bicycle 104 is defined as the target object. The first surveillance system may detect the bicycle 104 by comparing a key image of the bicycle 104 against videos captured by its cameras. The key image may include or encode the image features of the target object. As the bicycle 104 moves in section-1 208a, the first surveillance system may continue to track the bicycle 104 when the bicycle 104 appears in any one of the areas 210a. Alternatively, the first surveillance system may track the bicycle 104 only when it appears in an exit area 212. When the first surveillance system detects the bicycle 104 in the exit area 212, or predicts that the bicycle 104 is about to leave section-1 208a, or predicts that the bicycle 104 is about to enter section-2 208b, the first surveillance system may transfer the key image of the bicycle 104 to the second surveillance system.
Upon receiving the key image, the second surveillance system may detect the bicycle in section-2 208b by comparing the key image against videos captured by its cameras. The second surveillance system may first look for the bicycle 104 in the entry area 214 based on predicted trajectory information received from the first surveillance system. The first surveillance system may stop tracking the bicycle 104 after transferring the key image to the second surveillance system, or may stop tracking the bicycle 104 after the second surveillance system notifies the first surveillance system that the bicycle is in the surveillance section-2 208b, or may stop tracking the bicycle 104 after determining that the bicycle is not in the surveillance section-1 208a.
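For illustration only, the key-image handoff described above may be sketched in Python as follows; the class and method names are assumptions made for this sketch and are not part of the disclosed embodiments:

```python
# Minimal sketch of the key-image handoff, assuming each section's
# surveillance system tracks an object only while it holds the key image.

class SurveillanceSection:
    """One surveillance system responsible for a single surveillance section."""

    def __init__(self, name):
        self.name = name
        self.active_keys = {}  # object_id -> key image (or digital signature)

    def is_tracking(self, object_id):
        return object_id in self.active_keys

    def register_key(self, object_id, key_image):
        # Possession of the key image is what enables tracking.
        self.active_keys[object_id] = key_image

    def deregister_key(self, object_id):
        self.active_keys.pop(object_id, None)


def hand_off(source, destination, object_id):
    """Move the key image so the tracking burden moves with the object."""
    key_image = source.active_keys[object_id]
    destination.register_key(object_id, key_image)
    source.deregister_key(object_id)  # the source stops tracking


section_1 = SurveillanceSection("section-1")
section_2 = SurveillanceSection("section-2")
section_1.register_key("bicycle-104", b"<signature bytes>")
hand_off(section_1, section_2, "bicycle-104")
assert section_2.is_tracking("bicycle-104")
assert not section_1.is_tracking("bicycle-104")
```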

[0034] FIG. 3 illustrates an example of a floor plan 300 of a venue being monitored using an object tracking system according to various embodiments. The venue may be segmented into a plurality of surveillance areas. The surveillance areas in the venue may be separated by distance or function, for example platforms of a train station, terminals of an airport, or gaming zones of an amusement park. The surveillance areas may be physically segregated, for example by walls or barriers, but connected by pathways or doors or entryways. The surveillance areas may also be physically indistinct, for example, part of an open space, but arbitrarily defined as separate sections. The floor plan 300 shows an example where the venue includes a first surveillance area 310, a second surveillance area 320 and a third surveillance area 330. The object tracking system may include, or may be connected to, a plurality of surveillance systems configured to monitor a corresponding plurality of surveillance areas. For example, a first surveillance system including cameras 306a, 306b, 306c, 306d, and 306e may monitor the first surveillance area 310. The cameras 306a, 306b, 306c, 306d, and 306e may be spread out across the first surveillance area 310 so that their respective fields-of-view (FOVs) collectively cover a maximum area, or they may be positioned at important locations (also referred to herein as identification areas) such as the entryways 336, 332 and 302 of the first surveillance area 310. The entryway 302 may also be an exit of the venue, whereas the entryway 332 may connect to the second surveillance area 320 and the third surveillance area 330, and the entryway 336 may connect to the second surveillance area 320 only. Entryways of a surveillance area may also be referred to herein as entrances, or doors, or exits. The second surveillance system may similarly include cameras 306f, 306g, and 306h. The third surveillance system may similarly include cameras 306i and 306j. The surveillance systems may be identical, or at least similar, in capabilities and functionalities. Each surveillance system may be accessed or controlled separately by security guards 316, 326 and 336 from their respective locations in the different surveillance areas 310, 320 and 330. The security guards 316, 326, 336 may patrol or be stationed in a vicinity of the venue exits 302, 304, 308 of their respective surveillance areas. Each surveillance area may optionally include its respective command centre 314, 324, or 334.
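For illustration only, the connectivity of the floor plan of FIG. 3 can be sketched as a small lookup table; the dictionary layout and names below are assumptions, not part of the disclosed embodiments:

```python
# Hypothetical encoding of the floor plan of FIG. 3: each entryway maps to
# the surveillance areas it connects to. An object last seen at entryway 332
# could have entered either the second or the third area, so the key image
# would be sent to both; entryway 302 leads out of the venue.
ENTRYWAY_CONNECTIONS = {
    "entryway-302": [],                        # venue exit: raise an alert
    "entryway-336": ["area-320"],              # connects to the second area only
    "entryway-332": ["area-320", "area-330"],  # second and third areas
}

def transfer_targets(last_seen_entryway):
    """Surveillance areas that should receive the key image."""
    return ENTRYWAY_CONNECTIONS.get(last_seen_entryway, [])

print(transfer_targets("entryway-332"))  # ['area-320', 'area-330']
```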

[0035] In an example of a use case, the object tracking system may be used to locate an object 114 in the venue. The object 114 may be, for example, a missing object that was last seen in the venue. In FIG. 3, the object 114 is represented by a drawing of a bicycle. A member of the public may approach the security guard 316 in the first surveillance area 310 to report that the object 114 has gone missing from the first surveillance area 310. The security guard 316 may provide user input to the first surveillance system through a client terminal. The user input may include information about the last known position of the object 114, which may be within the FOV of the camera 306a. The first surveillance system may retrieve the video footage of the camera 306a at the reported last-seen time, based on the user input. A user input may also be provided to the first surveillance system to identify the object 114 in the retrieved video footage, for example to mark out an image of the object 114 in the retrieved video footage. The marked-out image of the object 114 may be referred to herein as the key image. The first surveillance system may thereby register the object 114 as a target object, so that the first surveillance system may locate the object 114 and/or provide alerts when the object 114 is close to any entryway 336, 332, or 302 of the first surveillance area 310. The first surveillance system may identify the object 114 based on image analysis, using the key image. When the first surveillance system detects the object 114 in the FOV of any one of its cameras, it may generate an alert which will be displayed on a client terminal. The security guard 316 may then intercept the object 114 or inform a patrolling security guard about the location of the object 114. If the first surveillance system identifies that the object 114 was last seen in the first surveillance area 310 in the FOV of the camera 306b, it may conclude that the object 114 has left the first surveillance area 310 through the entryway 336. Since the entryway 336 is situated in between the first surveillance area 310 and the second surveillance area 320, the first surveillance system may determine that the object 114 is likely to have entered the second surveillance area 320. The first surveillance system may thus transfer the key image to the second surveillance system, so that the second surveillance system may continue to track the object 114 in the second surveillance area 320. On the other hand, if the first surveillance system identifies that the object 114 is absent from the first surveillance area 310 and last appeared in the video captured by the camera 306c near the entryway 332, the first surveillance system may transfer the key image to both the second surveillance system and the third surveillance system, as the probabilities of the object 114 being in the second surveillance area 320 and being in the third surveillance area 330 may be similar. The first surveillance system may stop tracking the object 114 upon transferring the key image to another surveillance system. Alternatively, the first surveillance system may wait for the other surveillance system(s) to confirm the presence of the object 114 in their respective surveillance areas, before ceasing to track the object 114. The first, second and third surveillance systems may work one at a time, to track the object 114 when the object 114 enters their respective surveillance areas.
The surveillance systems may issue an alert or notification to the user, when the object 114 is in a vicinity of any exit 302, 304, or 308 of the venue, so as to prevent the object 114 from leaving the venue.

[0036] FIG. 4 illustrates a system architecture diagram 400 of an object tracking system according to various embodiments. The object tracking system may include more than one surveillance system 410, for example a first surveillance system, a second surveillance system, and a third surveillance system, as described with respect to FIG. 3. Each surveillance system 410 may cover a respective surveillance area, using its own set of cameras. Each surveillance system 410 may be connected to at least one camera to capture images at specific locations within the respective surveillance area. The surveillance systems 410 may optionally include the cameras 306a, 306b, ... 306j. Each surveillance system 410 may also be connected to a command center terminal 414 and a guard terminal 416. Each surveillance system may include an analytics processor to perform image analysis and image processing for identifying the object 114, and also to retrieve videos from the cameras or from a storage device. Each of the command center terminal 414 and the guard terminal 416 may include a client computer that allows a user to provide inputs to the object tracking system, and to receive alerts or notifications about detections of the object 114. Each surveillance system 410 may be connected to, or may include, a network 412. The network 412 may provide a means of data transfer and communication between the surveillance system 410, the cameras, the guard terminal 416 and the command center terminal 414. The plurality of surveillance systems 410 may communicate with one another through another network 442.

[0037] FIG. 5 illustrates a block diagram of an object tracking system 500 according to various embodiments. The object tracking system 500 may include more than one surveillance system 410. The surveillance system 410 may include a Central Processing Unit (CPU) 502, a network device 504, an input/output (I/O) interface 506 and a storage device 508. The CPU 502 may be configured to perform image and video analysis. The CPU 502 may alternatively be a Graphics Processing Unit (GPU). The storage device 508 may be a logical unit capable of storing programs 522 and databases 524. The storage device 508 may also temporarily store processing data when the CPU 502 is running the programs 522. The storage device 508 may be an internal memory such as Random Access Memory (RAM), a Solid State Drive (SSD), or a Hard Disk Drive (HDD). The storage device 508 may also be a partially separated physical storage system such as Network Attached Storage (NAS) or a Storage Area Network (SAN). The program 522 may execute steps of the method for tracking objects. The database 524 may store information such as user input, and the location, description, and time relating to the object to be tracked, i.e. the target object, as well as the key image or features extracted from the images and videos captured by the surveillance cameras. The network device 504 may send data to and receive data from external devices that are connected to the same network. The network device 504 may connect the surveillance system 410 to the network 412 or 442 described with respect to FIG. 4. The network device 504 may be a wired Local Area Network (LAN) device, a connected Ethernet device, or a wireless network device, etc. The I/O interface 506 may send data to and receive data from the input device 554 and the display 556. The I/O interface 506 may be a serial or parallel data interface such as Universal Serial Bus (USB), or High Definition Multimedia Interface (HDMI).
The I/O interface 506 may also be a wireless connection such as Bluetooth or Wireless LAN. The surveillance system 410 may be connected via the network device 504 and the I/O interface 506 to the image capture device 506, the input device 554 and the display 556. The object tracking system 500 may optionally include the image capture device 506, which may provide image frames to the surveillance system 410. The image capture device 506 may include cameras such as 306a, 306b, ... 306j, or video management systems (VMS). The object tracking system 500 may optionally include the display 556, which may be a Liquid Crystal Display (LCD), a Plasma Display, a Cathode Ray Tube (CRT) Display, or a projector display, etc. The object tracking system 500 may optionally include the input device 554, which may be a keyboard, a mouse, or a touch screen. The display 556 and the input device 554 may also be separate devices, such as a browser on a computer or an application on a tablet, that are connected to the surveillance system 410 or the object tracking system 500.

[0038] FIG. 6 illustrates a flowchart 600 of a method for monitoring a wide area according to various embodiments. The method may include a method of tracking an object. In 602, a surveillance system 410 of the object tracking system 500 may receive videos. The videos may be received from image capture devices 506. The videos may include surveillance videos or images captured by a plurality of cameras at a corresponding plurality of locations. The surveillance system 410 may receive the videos from the image capture device 506 through a network, a direct connection, or wireless transmissions over a data protocol such as the Real Time Streaming Protocol (RTSP). The surveillance system 410 may extract every image frame from the received videos and pass the image frames to 604. In 604, each image frame may be processed to detect objects, including the target object. 604 may include extracting data relating to the objects and registering the extracted objects in an object table in the database 524. The extracted data may include the signature of the key image of the objects, which may be used to identify any unique object. 604 may also include utilizing an object detection neural network model trained to localize object positions within an image frame and to classify the target object. The process of localizing and classifying the target object may include evaluating the image signature, or features, extracted from each image frame. The image signature may be represented as an array of numbers. The image signature of the detected object may be registered in the database 524. Whenever a user, for example a security officer, wants to perform an object search or tracking, the user may input a search command into the surveillance system 410 using the input device 554. In 606, the surveillance system 410 may receive a user operation input for registering an object to be tracked. The user operation input may be received from the command center 414. The user operation input may include selecting a key image for the search or identification, where the key image portrays the target object to be searched. For the definition of the key image, the surveillance system 410 may present a Graphical User Interface (GUI). The GUI may present an interface for the user to enter further information for the search. In 608, the surveillance system 410 may generate a query based on the user operation input and the extracted signature data in the database 524.
The surveillance system 410 may initiate searching or tracking of an object that resembles the key image. The query may include a combination of the search time duration, the camera selection, and the extracted features of the key image. The query may also relate the key image with its corresponding signature from the database 524. In 616, the surveillance system 410 may compare the signature of the key image to images provided by the image capture devices 506, which may be stored in the database 524, to identify the target object. 616 may include comparing the key image to images captured by cameras sited to monitor a vicinity of the exits of the surveillance areas. The matching process may be performed by calculating the similarity of object signatures, for example by calculating the vector distance between the signatures of the key image and of the images captured by the image capture devices 506. After finding images whose signatures match the signature of the key image, the location of the target object may be estimated based on the location of the camera that captured the matching images. The matching process may further include determining the respective accuracies of the estimated locations and selecting the estimated location with the highest determined accuracy. Determining the accuracies of the estimated locations may include comparing the similarity levels of the signatures. The display 556 may present the results of the identification process 616. In 610, the surveillance system 410 may receive an object key, i.e. the key image, from another surveillance system 410b. This may occur as a result of a target object exiting the surveillance area monitored by the other surveillance system 410b. In 612, the surveillance system 410 may identify the object based on the received object key to confirm that the object has entered its surveillance area. In 614, the surveillance system 410 may generate a query for the object search based on the received object key. 614 may include extracting the digital signature of the object key, so that the surveillance system 410 may compare the digital signature against images captured by its image capture device 506.
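As a rough illustration of the matching step in 616, a similarity comparison between signature vectors may look as follows; the threshold, the signature dimensionality, and the use of cosine similarity are assumptions, since the patent specifies only a vector distance between signatures:

```python
import numpy as np

def match_key(key_signature, candidate_signatures, threshold=0.8):
    """Return indices of candidates similar to the key signature, best first.

    Similarity is computed as the cosine similarity between signature
    vectors; the threshold value 0.8 is an assumed example.
    """
    key = key_signature / np.linalg.norm(key_signature)
    scored = []
    for i, sig in enumerate(candidate_signatures):
        similarity = float(np.dot(key, sig / np.linalg.norm(sig)))
        scored.append((similarity, i))
    return [i for similarity, i in sorted(scored, reverse=True)
            if similarity >= threshold]

key = np.random.randn(128)                            # signature of the key image
candidates = [np.random.randn(128) for _ in range(5)]
candidates.append(key + 0.01 * np.random.randn(128))  # a near-duplicate
print(match_key(key, candidates))                     # the near-duplicate (index 5) matches
```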

[0039] FIG. 7 illustrates an example of an object table 700 according to various embodiments. The object table 700 may be generated in 604 shown in FIG. 6. Column 702 may store object identifiers (IDs). The object IDs may be unique identification codes for every detected object. Column 704 may store the image data. The image data may be the numerical representation of the image, for example, an array of binary code or hexadecimal code. Alternatively, column 704 may store a pointer or a web link to an archived image file. Column 706 may store information on the type, i.e. the category or classification that the detected object belongs to. For example, the detected object may be a person, a bicycle, a bag, an animal, or any other type of object. Column 708 may contain the signature of the image, in other words, extracted features of the image. The signature may include a matrix or a vector that encodes the image. Column 710 may store a unique camera identifier that indicates the camera that captured the image. Column 712 may store the frame identifier that indicates a frame within the video captured by the camera. The frame ID may represent time information, since the sequence of the frames corresponds to the times that the image frames were captured. The frame ID may be an encoding of the time and date, for example in the epoch format. Column 714 may store information on the bounding box. The bounding box information may contain the x, y coordinate position of the object within the frame and the x width and y width of the object, to be used to draw a bounding box of the object on top of the frame image. The distance between objects within an image frame may be calculated from the bounding boxes of the objects. Column 716 may store track IDs. Each track ID may be a unique identification number for the sequential movement of the object's position across frames. The object table 700 may be stored in the database 524.
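One row of the object table 700 could be represented as follows; the field names paraphrase the columns of FIG. 7 and are assumptions made for illustration:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectRecord:
    """Hypothetical representation of one row of the object table 700."""
    object_id: str         # column 702: unique ID of the detected object
    image: bytes           # column 704: image data, or a pointer/link to it
    object_type: str       # column 706: classification, e.g. "bicycle"
    signature: np.ndarray  # column 708: extracted feature vector of the image
    camera_id: str         # column 710: camera that captured the image
    frame_id: int          # column 712: frame within the video (encodes time)
    bbox: tuple            # column 714: (x, y, x-width, y-width) in the frame
    track_id: int          # column 716: ID of the object's movement sequence
```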

[0040] FIG. 8 illustrates a flowchart 800 of a method of tracking an object according to various embodiments. The flowchart 800 may illustrate the process of identifying an object, which may correspond to 616 of FIG. 6. In the following, the method is described using the example of FIG. 3. In 802, a first surveillance system 410a may initiate the process of identifying a target object. In 804, the first surveillance system 410a may use the object key to search for the target object in its allocated surveillance area. The first surveillance system 410a may search for the target object at least in the videos captured by perimeter cameras, which are cameras positioned at the entryways of the surveillance area. In 806, the first surveillance system 410a may perform identification to determine whether the target object is exiting the surveillance area. If the first surveillance system 410a finds that the target object has exited its surveillance area, the first surveillance system 410a may proceed to 814, to transfer the object key to a second surveillance system 410b. For example, with respect to FIG. 3, if the first surveillance system 410a finds that the object 114 has exited the first surveillance area 310 through entryway 336, the first surveillance system 410a may send a notification message to the second surveillance system 410b. The notification message may include an image of the object 114, in other words, an object key or a key image. The first surveillance system 410a may also consider the target object to have exited its surveillance area, if it does not find the target object within its surveillance area within a predetermined time. After successfully transferring the object key to the second surveillance system 410b, the first surveillance system 410a may de-register the object key in 816. In 818, the first surveillance system 410a may also generate a de-register notification to be shown on the display 556, for example by displaying a message that the object has left the first surveillance area 310. The process of identifying the object may end at 812. If the first surveillance system 410a detects the target object at an identification area, for example at any of the exits of the venue such as entryway 302, the first surveillance system 410a may generate a warning, for example by displaying an alert on the guard client terminal 416. If the target object does not exit the first surveillance area 310 through the exit of the venue, and also does not appear in the field of view of the cameras monitoring the exits, the first surveillance system 410a may return to 802 to continue the identification process using the newest image frames received from the cameras 306a, ... 306e. If, in 814, the transfer of the object key to another surveillance system fails, for example, if the other surveillance system reports back that the target object is not detected in its surveillance area, the first surveillance system 410a may return to 802 to re-start the identification process. The target object may still remain in the first surveillance area 310.
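The branch structure of FIG. 8 may be summarized by the following sketch; the methods on `system`, the timeout, and the return values are assumptions introduced only to mirror the numbered steps above:

```python
import time

def identification_loop(system, object_key, timeout_s=300.0):
    """Hypothetical sketch of the identification process of flowchart 800."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        hits = system.search_perimeter_cameras(object_key)   # step 804
        if system.object_exited(hits):                       # step 806
            if system.transfer_key(object_key):              # step 814
                system.deregister_key(object_key)            # step 816
                system.notify_deregistered(object_key)       # step 818
                return "handed-off"                          # end, step 812
            continue  # transfer failed: the object may still be here (802)
        if system.at_venue_exit(hits):
            system.raise_alert(object_key)  # warn the guard terminal 416
        # otherwise loop back to 802 with the newest image frames
    # not found within the predetermined time: considered to have exited,
    # e.g. the key image may then be broadcast to neighbouring systems
    return "assumed-exited"
```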

[0041] FIG. 9 illustrates a flowchart 900 of an object key transfer sequence according to various embodiments. In 902, a surveillance system, for example the first surveillance system 410a, may send an area-out notification message when it detects that the target object has exited its surveillance area. This may occur as part of 814 described with respect to FIG. 8. The surveillance system may send the area-out notification message to a destination surveillance system, for example the second surveillance system 410b, that it predicts the object to have entered. The second surveillance system 410b may then accept the object key from the first surveillance system 410a, in 904. In 804, the second surveillance system 410b may perform identification of the object based on the transferred object key, searching for the object within its surveillance area. If the second surveillance system 410b finds the object in 808, it may send a "found notification" message to the first surveillance system 410a in 810. Upon receiving the "found notification" message, the first surveillance system 410a may proceed to 816 described with respect to FIG. 8. If the second surveillance system 410b does not find the object within its surveillance area within a predetermined time, it may proceed to 906, where the process of searching for the object is terminated. The second surveillance system 410b may send a "not found notification" message to the first surveillance system 410a in 908. Upon receiving the "not found notification" message, the first surveillance system 410a may resume tracking and searching for the object within its surveillance area, i.e. it does not proceed to 816.
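The exchange of FIG. 9 may be sketched as a simple request/response sequence; the method names are assumptions used only to label the numbered steps:

```python
def transfer_object_key(source, destination, object_key):
    """Hypothetical sketch of the transfer sequence of flowchart 900."""
    destination.receive_area_out_notification(object_key)  # steps 902/904
    if destination.identify(object_key):                   # steps 804/808
        source.on_found_notification(object_key)           # 810: proceed to 816
        return True
    source.on_not_found_notification(object_key)           # 906/908: resume search
    return False
```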

[0042] FIG. 10 illustrates a flowchart 1000 of an object key broadcast sequence according to various embodiments. Instead of transferring the object key to one other surveillance system as shown with respect to FIG. 9, a surveillance system may also broadcast the object key to a plurality of surveillance systems simultaneously. The surveillance system may broadcast the object key, instead of handing it over to one other surveillance system, in the event that it is unknown where the target object has travelled to, or if the target object is a highly prioritized target object. For example, the first surveillance system 410a, which has determined that the target object has travelled out of its surveillance area, may send an area-out notification, in 902, to the second surveillance system 410b and the third surveillance system 410c. The area-out notification may include the object key, which contains information on the appearance of the target object. The second surveillance system 410b and the third surveillance system 410c may each monitor a surveillance area that borders the surveillance area of the first surveillance system 410a. Each of the second surveillance system 410b and the third surveillance system 410c may accept the object key in 904, and perform identification of the object in 612. Each of the second surveillance system 410b and the third surveillance system 410c may inform the first surveillance system 410a as to whether the target object is found in their respective surveillance areas. Upon receiving the "object found notification" message from any one of the second surveillance system 410b or the third surveillance system 410c, the first surveillance system 410a may de-register the target object from its task list.
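Correspondingly, the broadcast sequence of FIG. 10 may be sketched as follows, again with assumed method names:

```python
def broadcast_object_key(source, neighbours, object_key):
    """Hypothetical sketch of the broadcast sequence of flowchart 1000."""
    for system in neighbours:                  # 902: area-out to every neighbour
        system.receive_area_out_notification(object_key)
    for system in neighbours:                  # 904/612: each searches its area
        if system.identify(object_key):
            source.deregister_key(object_key)  # first "found" ends the task
            return system
    return None  # no neighbour found the object; the source keeps searching
```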

[0043] FIG. 11 illustrates a diagram 1100 showing a scenario for transferring the key image according to various embodiments. A surveillance system may determine that the target object is exiting its surveillance area when it detects that the target object is in a vicinity of an exit of its surveillance area. For example, the target object may appear in the FOV of one of the cameras of the surveillance system that monitor the exits of the surveillance area. The surveillance system may also determine that the target object is exiting its surveillance area when it detects that the target object is at a periphery of its surveillance area. For example, the target object may appear in the FOV of one of the perimeter cameras of the surveillance system. Alternatively, or additionally, the surveillance system may determine that the target object is exiting its surveillance area when it detects that the target object is moving along a predefined pathway between its surveillance area and another surveillance area. The surveillance system may transfer the key image to another surveillance system upon predicting that the target object will enter the surveillance area of the other surveillance system. For example, a bicycle 104 may be detected in multiple image frames captured by the cameras of a first surveillance system that surveys a first surveillance area 310. The first surveillance system may chart the travel path 1104 of the bicycle 104 within the first surveillance area 310 based on the appearances of the bicycle 104 in the multiple image frames. The first surveillance system may further compute the velocity of the bicycle 104 as it was exiting the first surveillance area 310, based on the travel path 1104 and the times of the image frames in which the bicycle 104 was captured. Based on the computed velocity and the travel path 1104, the first surveillance system may predict, or extrapolate, the future trajectory 1106 of the bicycle 104. In the diagram 1100, the predicted future trajectory 1106 enters a second surveillance area 320. As such, the first surveillance system may determine that the bicycle 104 is exiting the first surveillance area 310 and thus may transfer the key image of the bicycle 104 to the second surveillance system that surveys the second surveillance area 320. Alternatively, or additionally, the first surveillance system may also determine that the bicycle 104 would be entering the second surveillance area 320 based on an overall floor plan of the venue that the object tracking system is monitoring. For example, an exit of the first surveillance area 310 may be adjoined to an entrance of the second surveillance area 320 such that when the bicycle 104 exits the first surveillance area 310, it must enter the second surveillance area 320. In another example, the venue may include a predefined pathway 1102, such as a paved road, or a tunnel, or even walls 1110 that physically constrain movement, such that the bicycle 104 is expected to travel along the predefined pathway 1102. Based on the direction of the predefined pathway 1102, the first surveillance system may determine that the bicycle 104 will be entering the second surveillance area 320. The two surveillance areas may be distinct, in other words, non-overlapping.
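A minimal sketch of the velocity-based prediction described above, assuming a linear motion model and an arbitrary prediction horizon (the patent does not specify either):

```python
import numpy as np

def predict_position(path_xy, frame_times, horizon_s):
    """Extrapolate the last observed position along the average velocity."""
    path = np.asarray(path_xy, dtype=float)   # charted travel path, e.g. 1104
    t = np.asarray(frame_times, dtype=float)  # capture times of the frames
    velocity = (path[-1] - path[0]) / (t[-1] - t[0])
    return path[-1] + velocity * horizon_s    # point on the predicted trajectory, e.g. 1106

path = [(0.0, 0.0), (2.0, 1.0), (4.0, 2.0)]   # observed (x, y) positions
times = [0.0, 1.0, 2.0]
print(predict_position(path, times, horizon_s=3.0))  # -> [10.  5.]
```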

[0044] In another scenario (not illustrated), the first surveillance area 310 and the second surveillance area 320 may partially overlap. Both the first surveillance system 410a and the second surveillance system 410b may be in possession of the object key when the bicycle enters the overlapped area, so that both the first surveillance system 410a and the second surveillance system 410b may track the bicycle. The FOV of a perimeter camera in the first surveillance area 310 may at least partially overlap with the FOV of a perimeter camera in the second surveillance area 320. The first surveillance system may determine that the bicycle 104 is likely to be entering the second surveillance area when the bicycle 104 appears in the FOVs of both perimeter cameras of the first surveillance area 310 and the second surveillance area 320.

[0045] FIG. 12 illustrates a diagram 1200 showing a scenario for broadcasting the key image according to various embodiments. In some situations, a surveillance system may be unable to determine the destination of the target object. For example, the first surveillance area 310 may have an exit that leads to both the second surveillance area 320 and the third surveillance area 330. In such situations, the first surveillance system may broadcast the key image of the target object, such as the bicycle 104, to both the second surveillance system and the third surveillance system.

[0046] In another scenario (not illustrated), the first surveillance system may lose sight of the target object even before the target object reaches an exit. As such, the first surveillance system may not be able to predict where the target object has travelled to. The first surveillance system may determine that the target object is absent from the first surveillance area 310. The first surveillance system may determine that the target object may have exited the first surveillance area 310 and thus, broadcast the key image to all other surveillance systems that monitor neighboring surveillance areas to search for the target object.

[0047] FIG. 13 illustrates example screens 1300 and 1302 of a graphical user interface (GUI) of a surveillance system according to various embodiments. The GUI may be displayed on the guard terminal 416 or the command center terminal 414. The screen 1300 may display representations of a plurality of cameras, using for example icons or buttons 1330. The user may choose to view images from one of the cameras by selecting its corresponding button 1330. After the user has selected a camera in the screen 1300, the GUI may display the video 1332 captured by the selected camera. The user may play, rewind, fast forward, or fast rewind the video 1332. When the user spots a target object 114 that he wishes to track, the user may select or crop the target object 114 out of an image frame of the video 1332. A key image 1334 of the target object 114 may be extracted from the image frame. The key image 1334 may be used as a baseline for comparison, for identifying the target object 114. The user may click on a button 1336 to register the key image 1334, for initiating a search process.

[0048] FIG. 14 illustrates an example screen 1400 of a GUI of a surveillance system according to various embodiments. The screen 1400 may display the registered key images 1334 of the different target objects that are being tracked by the surveillance system. The screen 1400 may also display a live video feed 1402 from at least one camera of the surveillance system.

[0049] FIG. 15 illustrates an example screen 1500 of a GUI of a surveillance system according to various embodiments. The surveillance system may perform a matching check to identify whether the target object 114 appears in a current image frame of the live video feed 1402. The screen 1500 may display a notification or alert 1502 when the target object 114 appears in the current image frame. The screen 1500 may be displayed on the guard terminal 416 so that the security guard may be alerted to the location of the target object 114.

[0050] FIG. 16 illustrates an example screen 1600 of a GUI of a surveillance system according to various embodiments. The screen 1600 may be displayed when the target object 114 is determined to have left the surveillance area.

[0051] FIG. 17 illustrates a flow diagram 1700 of a method for tracking an object according to various embodiments. The method may include tracking the object in a first area using a first surveillance system, in 1702. The first surveillance system may include a first plurality of cameras surveying the first area. The method may further include determining whether the object is exiting the first area based on a video captured by at least one camera of the first plurality of cameras, in 1704. The method may further include, upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system. The method may further include, upon determining that the object is exiting the first area, ceasing tracking of the object using the first surveillance system, in 1706. The second surveillance system may be identical, or at least similar, to the first surveillance system. The second surveillance system may include a second plurality of cameras surveying a second area.

[0052] According to various embodiments, a non-transitory computer readable medium may be provided. The non-transitory computer readable medium may include instructions which, when executed, perform the method described with respect to the flow diagram 1700.

[0053] FIG. 18 illustrates a conceptual diagram of an object tracking system 1800 according to various embodiments. The object tracking system 1800 may include, or may be part of, the object tracking system 500. The object tracking system 1800 may include a first surveillance system 1802, a second surveillance system 1804, a path processor 1806 and a controller 1808. The object tracking system 1800 may include further surveillance systems, similar to the first surveillance system 1802 or the second surveillance system 1804. Each surveillance system may include at least one camera surveying a respective area, and each surveillance system may be configured to track objects in the respective area. The path processor 1806 may be configured to determine whether an object being surveyed, for example, in a first area, is exiting the first area. The path processor 1806 may determine whether the object is exiting the first area based on a video captured by at least one camera of the first surveillance system. The path processor 1806 may transmit information on the determination to the controller 1808. The controller 1808 may control operation of the surveillance systems. The controller 1808 may be configured to initiate tracking of the object using another surveillance system and, at the same time, cease tracking of the object using the first surveillance system 1802, upon the path processor 1806 determining that the object is exiting the first area. Alternatively, the controller 1808 may be configured to cease tracking of the object using the first surveillance system 1802 when the first surveillance system receives notification from the second surveillance system 1804 that the object is in the second area. The first surveillance system 1802, the second surveillance system 1804, the path processor 1806 and the controller 1808 may be coupled with each other, as indicated by lines 1810, for example electrically coupled, for example using a line or a cable, and/or mechanically coupled.

[0054] The following examples pertain to further embodiments.

[0055] Example 1 is a method for tracking an object, the method including: tracking the object in a first area, using a first surveillance system including at least one camera surveying the first area; determining whether the object is exiting the first area based on a video captured by the at least one camera; and upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system including at least one further camera surveying a second area, and ceasing tracking of the object using the first surveillance system.

[0056] In example 2, the subject-matter of example 1 can optionally include that the first area is distinct from the second area.

[0057] In example 3, the subject-matter of example 1 can optionally include that the first area partially overlaps with the second area.

[0058] In example 4, the subject-matter of any one of examples 1 to 3 can optionally include that determining whether the object is exiting the first area includes: detecting the object in a vicinity of an exit of the first area.

[0059] In example 5, the subject-matter of any one of examples 1 to 4 can optionally include that determining whether the object is exiting the first area includes: detecting the object at a periphery of the first area.

[0060] In example 6, the subject-matter of any one of examples 1 to 5 can optionally include that determining whether the object is exiting the first area includes: detecting the object moving along a predefined pathway between the first area and the second area.

[0061] In example 7, the subject-matter of any one of examples 1 to 6 can optionally include that determining whether the object is exiting the first area includes: predicting a future trajectory of the object based on a velocity of the object.

[0062] In example 8, the subject-matter of any one of examples 1 to 7 can optionally include: determining that the object is entering the second area while tracking the object using the first surveillance system.

[0063] In example 9, the subject-matter of example 8 can optionally include that determining that the object is entering the second area while tracking the object using the first surveillance system includes: detecting the object in a region where the first area overlaps with the second area.

[0064] In example 10, the subject-matter of any one of examples 8 to 9 can optionally include that determining that the object is entering the second area while tracking the object using the first surveillance system includes: detecting the object moving along a predefined pathway between the first area and the second area.

[0065] In example 11, the subject-matter of any one of examples 8 to 10 can optionally include that determining that the object is entering the second area while tracking the object using the first surveillance system includes: predicting a future trajectory of the object based on a velocity of the object.

[0066] In example 12, the subject-matter of any one of examples 1 to 11 can optionally include that tracking the object using the first surveillance system includes: matching a digital signature of the object against the video captured by the at least one camera of the first surveillance system.
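
The embodiments do not fix the form of the digital signature. One common assumption, sketched below purely for illustration, is an appearance-embedding vector matched against embeddings extracted from video frames via cosine similarity; the 0.8 threshold is an arbitrary placeholder.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def signature_matches(signature, frame_embedding, threshold=0.8):
        # True if the frame embedding is close enough to the stored signature.
        return cosine_similarity(signature, frame_embedding) >= threshold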

[0067] In example 13, the subject-matter of example 12 can optionally include: upon determining that the object is exiting the first area, transmitting the digital signature of the object from the first surveillance system to the second surveillance system.

[0068] In example 14, the subject-matter of any one of examples 12 to 13 can optionally include: upon determining that the object is exiting the first area, broadcasting the digital signature of the object to a plurality of surveillance systems including cameras surveying a corresponding plurality of other areas.
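
Examples 13 and 14 differ only in the set of recipients: a single designated second system versus every known surveillance system. A hypothetical sketch follows; receive_signature is an assumed receiver hook, not something defined in the embodiments.

    def transmit_signature(signature, obj_id, second_system):
        # Example 13: point-to-point hand-off of the digital signature.
        second_system.receive_signature(obj_id, signature)

    def broadcast_signature(signature, obj_id, all_systems):
        # Example 14: broadcast to every surveillance system surveying the
        # other areas, so whichever one sees the object can pick up the track.
        for system in all_systems:
            system.receive_signature(obj_id, signature)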

[0069] In example 15, the subject-matter of any one of examples 1 to 14 can optionally include that determining whether the object is exiting the first area includes: determining that the object is absent from the first area.

[0070] Example 16 is an object tracking system including: a first surveillance system including at least one camera surveying a first area, wherein the first surveillance system is configured to track an object in the first area; a second surveillance system including at least one further camera surveying a second area, wherein the second surveillance system is configured to track an object in the second area; a path processor configured to determine whether the object is exiting the first area based on a video captured by the at least one camera; and a controller configured to initiate tracking of the object using the second surveillance system and to cease tracking of the object using the first surveillance system, upon the path processor determining that the object is exiting the first area.

[0071] Example 17 is a non-transitory computer readable medium including instructions which, when executed, perform a method for tracking an object, the method including: tracking the object in a first area, using a first surveillance system including at least one camera surveying the first area; determining whether the object is exiting the first area based on a video captured by the at least one camera; and upon determining that the object is exiting the first area, initiating tracking of the object using a second surveillance system including at least one further camera surveying a second area, and ceasing tracking of the object using the first surveillance system.

[0072] While embodiments of the invention have been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced. It will be appreciated that common numerals used in the relevant drawings refer to components that serve a similar or the same purpose.

[0073] It will be appreciated to a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0074] It is understood that the specific order or hierarchy of blocks in the processes / flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes / flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[0075] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words "module," "mechanism," "element," "device," and the like may not be a substitute for the word "means." As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for."