

Title:
METHOD AND SYSTEM FOR COUNTING AND DETERMINING DIRECTION OF MOVEMENT OF MOVING OBJECTS
Document Type and Number:
WIPO Patent Application WO/2020/101472
Kind Code:
A1
Abstract:
A method (200) and a system (100) for counting and determining direction of movement of moving objects have been disclosed. The moving object can be tracked and its direction of movement can be traced without the need of overlapping adjacent frames having a foreground image of the desired object. The method (200) comprises capturing at least one image representing at least one frame of a video stream comprising one or more objects of at least one desired object type, wherein the objects are captured while moving inside or outside an enclosed region of interest, (1000).

Inventors:
CHONG JIN HUI (MY)
LIANG KIM MENG (MY)
HON HOCK WOON (MY)
Application Number:
PCT/MY2019/050086
Publication Date:
May 22, 2020
Filing Date:
November 12, 2019
Assignee:
MIMOS BERHAD (MY)
International Classes:
H04N5/14; G06T7/194
Foreign References:
US20110080336A12011-04-07
US20150317517A12015-11-05
US20110116682A12011-05-19
US20130094696A12013-04-18
KR20070114216A2007-11-29
Attorney, Agent or Firm:
KANDIAH, Geetha (MY)
Claims:
CLAIMS:

1. A method (200) for determining direction of movement of moving objects is characterized by the steps of:

i) capturing at least one image representing at least one frame of a video stream comprising one or more objects of at least one desired object type, wherein the objects are captured while moving inside or outside an enclosed region of interest, (1000);

ii) processing each of the frames of the video stream for extracting at least one foreground image, wherein the foreground image represents the desired object type, (1002);

iii) detecting if a foreground image is extracted in more than one frame, (1004);

iv) performing horizontal and vertical tracking if the foreground image is detected in more than one frame, (1006);

v) performing skeletonization if only one foreground image is detected in one frame of the video stream, (1008); and

vi) determining direction of movement for the desired object type based on a tracking path retrieved in step (iv) or a skeleton representation of the foreground image retrieved in step (v), (1010).

2. The method (200) as claimed in claim 1, the method further includes the step of counting the number of objects moving inside and the number of objects moving out of the enclosed region of interest based on the direction of movement for the desired object type, (1012).

3. The method (200) as claimed in claim 1, wherein the step of performing horizontal and vertical tracking includes:

i) generating a boundary box encompassing the foreground image of the desired object;

ii) computing a centroid of the foreground image;

iii) checking if the desired object type is present within a vertical line crossing boundary of the current and a non-overlapping adjacent frame;

iv) checking if the desired object type is present within a horizontal line crossing boundary of the current and a non-overlapping adjacent frame;

v) checking if condition (iii) or (iv) is fulfilled to determine if the desired object is tracked;

vi) recording sequence of the centroid of the foreground image object position of the tracked object between at least a current and at least one non-overlapping adjacent frame; and

vii) determining a tracking path of the desired object type based on all recorded sequences of the foreground image position of the desired object.

4. The method (200) as claimed in claim 1, wherein the step of determining direction of movement for the desired object type based on skeleton representation of the foreground image includes:

i) computing a skeleton of the foreground object;

ii) identifying a backbone of the skeleton line with the most vertical skeleton, S;

iii) computing length of a highest point of S to the nearest intersection, X;

iv) computing length of a lowest point of S to the nearest intersection, Y; and

v) determining the direction of movement of the foreground object based on the length of X and Y, wherein the desired object is moving IN if the value of the length of 'X' is less than or equal to the value of the length of 'Y' and the desired object is moving OUT if the value of the length of 'X' is greater than the value of the length of 'Y'.

5. A system (100) for determining direction of movement of moving objects comprising:

at least one video acquisition unit (102) to capture at least one image representing at least one frame of a video stream comprising one or more objects of at least one desired object type, wherein the objects are captured while moving inside or outside an enclosed region of interest; a processing unit (104); and

a memory unit (108) coupled to the processing unit (104) for storing the captured video stream, wherein the processing unit (104) is to execute a plurality of modules (110) stored in the memory unit (108), characterized in that, the plurality of modules (110) comprising: an extraction module (112) configured to:

process each one of the frames of the video stream to extract at least one foreground image, wherein the foreground image represents the desired object type; a tracking analysis module (114) configured to:

detect if the foreground image is extracted in more than one frame; determine a tracking path of the desired object type using a horizontal and vertical tracking module (116) if the foreground image is detected in more than one frame; and retrieve a skeleton representation of the desired object type using a skeletonization module (118) if only one foreground image is detected in only one frame of the video stream; and a direction identification module (120) configured to:

determine direction of movement for the desired object type based on the tracking path or the skeleton representation of the foreground image.

6. The system (100) as claimed in claim 5, wherein the modules (110) further include a counting module (122) having at least one IN counter and at least one OUT counter configured to count the number of objects moving inside and the number of objects moving out of the enclosed region of interest based on direction of movement for the desired object type.

7. The system (100) as claimed in claim 5, wherein the horizontal and vertical tracking module (116) is configured to: generate a boundary box encompassing the foreground image of the desired object type;

compute a centroid of the foreground image; check if the desired object type is present within the vertical line crossing boundary of the current and a non-overlapping adjacent frame; check if the desired object type is present within the horizontal line crossing boundary of the current and a non-overlapping adjacent frame; determine if the desired object is tracked if the vertical crossing line or horizontal crossing line criterion is fulfilled; record sequence of the centroid of the foreground image object position of the tracked object between at least a current and at least one non-overlapping adjacent frame; and determine a tracking path of the desired object type based on all recorded sequences of the foreground image position of the desired object.

8. The system (100) as claimed in claim 5, wherein the direction identification module (120) is further configured to:

identify a backbone of the skeleton line with the most vertical skeleton, S; compute a length of a highest point of S to the nearest intersection as X; compute a length of a lowest point of S to the nearest intersection as Y; and compare the length of X and Y to determine the direction of movement of the desired object in the foreground image, wherein the desired object is moving IN if the value of the length of X is less than or equal to the value of the length of Y and the desired object is moving OUT if the value of the length of X is greater than the value of the length of Y.

Description:
METHOD AND SYSTEM FOR COUNTING AND DETERMINING DIRECTION OF MOVEMENT OF MOVING OBJECTS

FIELD OF THE DISCLOSURE

The disclosures made herein relate generally to the field of video analytics and, more particularly, to a method and to a system for counting and determining direction of movement of moving objects, for instance a bird.

BACKGROUND

Advancements in image processing techniques have led to numerous applications of video analytics. One such application is counting objects, wherein an area or a counting line is monitored using a camera and the number of objects moving in and out of the area, or crossing the counting line, is calculated through video analytics. This application is very useful in environments such as buildings, roads, shopping malls, or public transportation systems. Such video analytics-based techniques usually mount a camera on the top of an area of interest, with the camera adjusted to look down for taking pictures or recordings, and then use a variety of different image recognition and processing technologies to achieve object counting. Typically, once the pictures or recordings are taken, an area estimation technique is employed to detect variation in pixels over an image frame and label out an area where the object being counted is located. This is followed by object tracking techniques to know when an object has triggered a cross-line event, to estimate the number of objects passing the crossing line.

Conventional object counting using a camera is performed by identifying the number of moving objects, their tracking paths, and the direction of the tracking paths. The conventional object counting techniques are based on detecting an object in overlapping adjacent frames of the picture or recording taken by the camera. Thereby, the conventional object counting techniques track the moving object by overlapping the detected zone of the current frame with that of the previous frame. Thus, once a moving object is tracked, conventional object tracking techniques will label the detected object with the same ID and record the sequence of the coordinates or location of the moving object. With the sequence of the coordinates, the moving object tracking path is determined and the direction of the tracking path is found out. Finally, the moving object can be counted as in or out of an area when crossing a counting line. However, if the object is moving too fast, for instance if the moving object is a flying bird, then there may be no overlap in adjacent frames of the flying bird foreground image. Therefore, the conventional object tracking techniques will lose track of the flying bird. Secondly, the conventional object tracking techniques cannot label the same flying bird foreground image with the same ID. Hence, the sequence of coordinates or locations of the flying bird cannot be recorded, and its tracking path cannot be determined, as a tracking path needs at least two locations of the flying bird with the same ID. Thirdly, the direction of the flying bird cannot be figured out as the tracking path is undefined. As a result, the counting operation of the birds flying in or out cannot be performed.

There have been attempts in the art to detect a moving object, for instance a flying bird. Chinese patent application 105807332A discloses a bird detection system in which a bird is detected using a triangulation technique thereafter direction of movement of the bird is calculated based on a mounting angle of two known positions of the bird. However, the '332 patent application needs at least two adjacent frames having the image of the bird to calculate the direction of its movement.

United States patent 7602944B2 discloses a method and system for counting moving objects in a digital video stream. The '944 patent involves determining an area of motion in the digital video stream and then determining an object box surrounding an object. Thereafter, the '944 patent tracks the moving object box of the current frame by overlapping in the area of motion of the previous frame.

There is therefore felt a need for a method and a system for counting and determining direction of movement of moving objects which is not dependent upon overlapping adjacent frames of an image or recording to track and count moving objects.

SUMMARY

The conventional object tracking and counting techniques are unable to track an object of interest if there are no overlapping adjacent frames of a foreground image of the object of interest. Also, conventional object tracking and counting techniques cannot identify a foreground image of the object of interest, record its tracking path or count the object moving in or out of a crossing line unless two locations of the object of interest with the same ID are identified. These drawbacks led the present disclosure to provide a method and a system for counting and determining direction of movement of moving objects.

The present disclosure overcomes shortcomings of the conventional object tracking and counting techniques by being able to track a moving object independent of the number of frames the object is captured in. Thereby, the present disclosure eliminates the limitations of identifying at least two locations of the desired object with the same identification (ID) label to be able to determine a direction of movement of the object.

In one implementation, the present disclosure provides a system which tracks moving objects without the need of overlapping foreground images of the object of interest in frames of a video recording. The system includes at least one video acquisition unit to capture at least one image representing at least one frame of a video stream comprising one or more objects of at least one desired object type. These objects are captured while moving inside or outside an enclosed region of interest. The system further includes a memory unit and a processing unit coupled to the memory unit to execute a plurality of modules stored in the memory unit. The plurality of modules comprises (i) an extraction module which is configured to process each of the frames of the video stream to extract at least one foreground image of the desired object type; (ii) a tracking analysis module configured to detect if a foreground image is extracted in more than one frame; determine a tracking path of the desired object type using a (iii) horizontal and vertical tracking module if the foreground image is detected in more than one frame; and retrieve a skeleton representation of the desired object type using a (iv) skeletonization module if only one foreground image is detected in only one frame of the video stream; and (v) a direction identification module configured to determine direction of movement for the desired object type based on the tracking path or the skeleton representation of the foreground image. The system also includes a counting module configured to count the number of objects moving inside and the number of objects moving out of the enclosed region of interest based on direction of movement for the desired object type.

In accordance with the present disclosure, the at least one desired object type is a bird and the enclosed region of interest may be at least one of a house, a bird nest house, a swiftlet-breeding house, and a swiftlet farm. However, the desired object type or moving object is not limited to a bird, and the disclosure can be extended to tracking and counting other moving living and non-living objects in other enclosed spaces.

Typically, the proposed system employs a two-layer tracking analysis module wherein, if the horizontal and vertical tracking module is unable to identify a tracking path of the moving object, the skeletonization module is invoked to identify the direction of movement of the moving object.

In another implementation, the present disclosure provides a method for determining direction of movement of moving objects. The method comprises the steps of capturing at least one image representing at least one frame of a video stream comprising one or more objects of at least one desired object type, wherein the objects are captured while moving inside or outside an enclosed region of interest; processing each one of the frames of the video stream for extracting at least one foreground image, wherein the foreground image represents the desired object type; detecting if a foreground image is extracted in more than one frame; performing horizontal and vertical tracking if the foreground image is detected in more than one frame; performing skeletonization if only one foreground image is detected in only one frame of the video stream; and determining direction of movement for the desired object type based on a tracking path or a skeleton representation of the foreground image.

Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS

In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

FIGURE 1 illustrates components for a system for counting and determining direction of movement of moving objects, in accordance with an embodiment of the present disclosure.

FIGURE 2 is a flowchart showing the steps for counting and determining direction of movement of moving objects, in accordance with an embodiment of the present disclosure.

FIGURE 3 is a flow diagram representing working of the horizontal and vertical tracking module in accordance with an embodiment of the present disclosure.

FIGURES 4a - 4c illustrate determination of tracking path using horizontal and vertical tracking technique in accordance with an embodiment of the present disclosure.

FIGURE 5 is a flow diagram representing working of the skeletonization module in accordance with an embodiment of the present disclosure.

FIGURES 6a - 6d show representative skeletons generated for various flying postures of a bird using skeletonization in accordance with an embodiment of the present disclosure.

FIGURES 7a - 7b illustrate how direction of movement is determined in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION

In accordance with the present disclosure, there is provided a method and a system for counting and determining direction of movement of moving objects, which will now be described with reference to the embodiment shown in the accompanying drawings. The embodiment does not limit the scope and ambit of the disclosure. The description relates purely to the exemplary embodiment and its suggested applications.

The embodiment herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiment in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiment herein may be practiced and to further enable those of skill in the art to practice the embodiment herein. Accordingly, the description should not be construed as limiting the scope of the embodiment herein.

The description hereinafter of the specific embodiment will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify or adapt such a specific embodiment for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be, and are intended to be, comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.

As will be appreciated by one skilled in the art, the present disclosure may be embodied as a system, method or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware or programmable instructions) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a 'unit,' 'module,' or 'system.'

Referring to the accompanying drawings, FIGURE 1 illustrates an exemplary architecture in which, or with which, the proposed system (100) for counting and determining direction of movement of moving objects can be implemented in accordance with an embodiment of the present disclosure. Examples of moving objects may include birds or any other living or non-living beings. Preferably, the proposed system (100) relates to a flying bird tracking and counting system. In particular, the present invention finds application in counting swiftlets moving in and out of a swiftlet house. The system (100) includes at least one processing unit (104), at least one interface (106), and at least one memory unit (108).

The at least one processing unit (104) may be implemented as one or more computing devices including microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate digital signals based on operational instructions. Among other capabilities, the at least one processing unit (104) is configured to fetch and execute computer-readable instructions stored in the at least one memory unit (108) to perform counting and determining direction of movement of moving objects, specifically a flying bird.

The processing unit (104) is communicably coupled to at least one interface (106) which enables the processing unit (104) to communicate with external devices and/or users who wish to fetch the count of flying birds moving in or moving out of an enclosed region or space. The interface (106) may include a plurality of hardware and software interfaces, for instance, a graphical user interface to present the count of birds flying in and out of an enclosed space or to accept inputs from the user in terms of selection of the area to be monitored as a crossing line or boundary for counting flying birds. The interface (106) also facilitates optical, wired or wireless communication with external communication networks and hardware devices, for instance, a video acquisition unit (102) to acquire recordings of the enclosed space or crossing line to count the number of flying birds.

The at least one memory unit (108) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, magnetic tapes, compact disc read-only memories (CD-ROMs), and magneto-optical disks, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). The memory unit (108) may include modules (110) and data (124).

The modules (110) include routines, programs, objects, components, data structures and the like, which perform particular tasks or implement particular abstract data types. In one implementation, the modules (110) may include an extraction module (112), a tracking analysis module (114), a horizontal and vertical tracking module (116), a skeletonization module (118), a direction identification module (120) and a counting module (122). The memory unit (108) may not be limited with the aforementioned modules (110) but may include other programs or coded instructions that supplement applications and functions of the system (100).

The data (124) is a working memory or buffer for storing recordings from the video acquisition unit (102) and other intermediate results generated by one or more modules (110) for use by the system (100) for tracking and counting moving objects. The data may be in the form of user inputs, video frames, information on the count or any other data generated as a result of the execution of one or more modules (110).

Those skilled in the art would appreciate that, although the system (100) includes a number of distinct units and modules, as illustrated in FIGURE 1, it should be recognized that some units and modules may be combined, and/or some functions may be performed by one or more units or modules. Therefore, the embodiment of FIGURE 1 represents the major components of the system (100) for counting and determining direction of movement of moving objects, but these components may be combined or divided depending on the particular design without limiting the scope of the present disclosure.

The working of the system (100) may be explained in conjunction with FIGURES 2- 7 explained below.

FIGURE 2 represents the method (200) for counting and determining direction of movement of moving objects in accordance with this disclosure. In accordance with one aspect of the present disclosure, the system (100) further includes a video acquisition unit (102) to capture at least one image representing at least one frame of a video stream, wherein the video stream comprises one or more objects of at least one desired object type, and these objects are captured while moving inside or outside an enclosed region of interest, block (1000). According to this disclosure, the video acquisition unit (102) may include an analog camera, a network camera or an IP / web camera. Typically, the video acquisition unit (102) is hosted in an enclosed area such as a bird house or swiftlet house or a room and is mounted at a ceiling with a top-down view to capture a recording of the birds flying in and out of the area.

The recorded video stream is then received either wired or wirelessly at the processing unit (104) via interface (106). The recorded video stream is stored in the data section (124) of the memory unit (108) for further processing by one or more modules (110).

The video stream is received at the extraction module (112) for processing each one of the frames of the video stream for extracting at least one foreground image, wherein the foreground image represents the desired object type, for instance a flying bird, block (1002). The extraction module (112) uses techniques known in the art to perform background subtraction and foreground extraction on each of the frames; the extracted foreground image along with the frame information is then relayed to the tracking analysis module (114). The tracking analysis module (114) performs a two-layer analysis to identify a tracking path and further determine the direction of movement of the moving object. The tracking analysis module (114) includes a horizontal and vertical tracking module (116) and a skeletonization module (118) to determine a tracking path of the moving object, for instance a flying bird, based on the number of frames having a foreground image of the desired object in the video stream.
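As an illustrative sketch only (not part of the claimed subject matter), the "techniques known in the art" for background subtraction at block (1002) can be as simple as thresholded frame differencing against a background model. The function name, grayscale list-of-lists frame representation, and threshold value below are all assumptions for illustration:

```python
# Minimal background-subtraction sketch for the extraction module (112).
# Frames are grayscale pixel grids (lists of lists of ints); a pixel is
# marked foreground (1) when it differs from the background model by
# more than `threshold`. All names and values here are illustrative.

def extract_foreground(frame, background, threshold=30):
    """Return a binary foreground mask for one frame."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

background = [[10, 10, 10], [10, 10, 10]]   # static scene
frame      = [[10, 90, 10], [10, 95, 10]]   # a bright moving blob
mask = extract_foreground(frame, background)
# mask marks only the blob pixels: [[0, 1, 0], [0, 1, 0]]
```

In practice the background model would be maintained adaptively (e.g. a running average over frames) rather than held fixed, but the per-pixel thresholding step is the same.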

Referring back to FIGURE 2, on receiving the foreground image of the desired object type, the tracking analysis module (114) checks if a foreground image of the object is available in more than one frame, block (1004). If yes, then the tracking analysis module (114) performs horizontal and vertical tracking, at block (1006); else the tracking analysis module (114) performs skeletonization, if only one foreground image is detected in only one frame of the video stream, block (1008).
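The two-layer decision at block (1004), together with the IN/OUT counters of the counting module (122), can be sketched as follows. This is a hedged illustration, not the claimed implementation; the class and function names, and the idea of passing the two layers in as callables, are assumptions made for the example:

```python
# Sketch of the two-layer tracking analysis (114): more than one frame
# with the foreground image -> horizontal/vertical tracking, block
# (1006); exactly one frame -> skeletonization, block (1008). The
# result feeds the IN/OUT counters of the counting module (122).

class InOutCounter:
    """Accumulated IN/OUT counters, resettable by the operator."""
    def __init__(self):
        self.moved_in = 0
        self.moved_out = 0

    def update(self, direction):
        if direction == "IN":
            self.moved_in += 1
        elif direction == "OUT":
            self.moved_out += 1

def analyse(detections, track_direction, skeleton_direction):
    """detections: per-frame foreground images of one labelled object.
    Dispatches to the appropriate layer based on the frame count."""
    if len(detections) > 1:
        return track_direction(detections)       # block (1006)
    return skeleton_direction(detections[0])     # block (1008)

counter = InOutCounter()
# Two frames available -> the horizontal/vertical layer decides.
direction = analyse(["fg_frame1", "fg_frame2"],
                    track_direction=lambda d: "IN",
                    skeleton_direction=lambda d: "OUT")
counter.update(direction)
# direction == "IN", so counter.moved_in is now 1
```

The stub lambdas stand in for the two modules (116) and (118) described below.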

Based on the selection of the tracking technique by the tracking analysis module (114), a tracking path for the desired object type is determined and is used by the direction identification module (120) to determine direction of movement of the desired object, block (1010). The direction of movement is determined in terms of movement of the object inside the enclosed space or outside the enclosed space. The direction of movement is used by a counting module (122) to count the number of objects moving inside and the number of objects moving out of the enclosed region, block (1012). For instance, the counting module (122) may maintain at least two counters to track movement of a swiftlet bird inside (IN) and outside (OUT) a swiftlet house respectively. These counters hold accumulated values and are incremented each time a moving object moves IN or OUT of the enclosed space. Further, these counters may be reset as per the requirement of users operating the swiftlet house / bird house or any region where moving objects need to be tracked and counted.

Referring now to FIGURE 3, determination of a tracking path, as performed by the horizontal and vertical tracking module (116), is explained using a flowchart, in accordance with an embodiment of the present disclosure. The horizontal and vertical tracking technique will now be explained with the example of a flying bird as the moving object. It is within the scope of this disclosure to track other moving living and non-living objects as well.

The horizontal and vertical tracking module (116) tracks the flying birds horizontally and vertically and builds a tracking path of the flying bird. The horizontal and vertical tracking module (116) generates a boundary box encompassing the foreground image, positioning the identified object within the box; it then computes a centroid of the flying bird foreground image, block (2000). It then tracks the flying bird horizontally and vertically by assigning a vertical and a horizontal tracker to the flying bird, block (2002). The vertical tracker is to track the flying bird based on a boundary box vertical line crossing in the adjacent frame, block (2004). Thus, the vertical tracker checks if the desired object type is present within a vertical line crossing boundary of a current and a non-overlapping adjacent frame. According to the present disclosure, it is the same flying bird if the vertical line of the boundary box is not crossed. Therefore, the horizontal and vertical tracking module (116) maintains the same label ID. On the other hand, the horizontal tracker is to track the flying bird based on a boundary box horizontal line crossing in the adjacent frame, block (2006). Thus, the horizontal tracker checks if the desired object type is present within the horizontal line crossing boundary of the current and a non-overlapping adjacent frame. Similarly, it is the same flying bird with the same label ID if the horizontal line of the boundary box is not crossed. The bird is considered tracked when either the horizontal or the vertical tracker is able to track the bird.

Subsequently, the sequence of the centroid of the foreground bird object position is recorded and the tracking path of the bird object is determined based on all the recorded sequences of foreground object positions, blocks (2008, 2010). With the tracking path, the direction of the tracking path of the foreground bird object is determined. Finally, the IN or OUT accumulated data counter is updated based on the direction of the flying bird.
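The centroid-recording and path-direction steps at blocks (2008, 2010) can be sketched as below. This is an illustrative assumption, not the patented implementation: the coordinate convention (image y growing downward, "IN" meaning upward motion toward the entrance) and all names are invented for the example:

```python
# Sketch of blocks (2008, 2010): record centroids of the same labelled
# bird across non-overlapping frames, then read the direction off the
# vertical displacement of the recorded sequence. The "IN is upward"
# convention is an assumption for illustration.

def centroid(box):
    """Centroid of an axis-aligned boundary box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def path_direction(centroids):
    """'IN' if the object moved toward smaller y over the sequence
    (upward in image coordinates), else 'OUT'."""
    (_, y_first) = centroids[0]
    (_, y_last) = centroids[-1]
    return "IN" if y_last < y_first else "OUT"

# Boundary boxes of the same bird in the 1st and 2nd (non-overlapping)
# frames, as in FIGURES 4a - 4c.
boxes = [(10, 80, 30, 100), (12, 40, 32, 60)]
path = [centroid(b) for b in boxes]
# centroids (20.0, 90.0) -> (22.0, 50.0): upward motion, so "IN"
```

A real deployment would accumulate more than two positions per label ID, but two suffice to define a direction, which is exactly the requirement the skeletonization fallback addresses when only one frame is available.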

FIGURES 4a - 4c represent three scenarios of non-overlapping frames having the same label ID for the foreground image of the desired object; for ease of reference, these non-overlapping frames are referred to as 1st Frame and 2nd Frame in the figures. FIGURES 4a to 4c display how the tracking path of the bird object is determined by identifying the direction (indoor or outdoor) of the recorded sequence of the foreground object positions, and accordingly the IN and OUT counters are updated.

Referring now to FIGURE 5, determination of a skeleton representation and direction of movement, as performed by the skeletonization module (118) and direction identification module (120), is explained using a flowchart, in accordance with an embodiment of the present disclosure.

If the horizontal and vertical tracking module (116) cannot determine the tracking path of the flying bird, it sends the foreground image of the bird object to the skeletonization module (118) for further analysis. The skeletonization module (118) computes a skeleton representation of the foreground object by identifying a backbone of the foreground object skeleton, block (3000). The direction identification module (120) receives the skeleton representation and identifies the backbone of the skeleton representation, which is the most vertical line, denoted as S, block (3002). The direction identification module (120) then computes the length from the highest point of S to the nearest intersection, denoted as X, and the length from the lowest point of S to the nearest intersection, denoted as Y, blocks (3004, 3006). Thereafter, the direction identification module (120) determines the direction of the foreground object based on the lengths of X and Y, block (3008). Accordingly, the IN or OUT accumulated data counter is updated based on the direction of the foreground bird object.
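Blocks (3002)-(3006) can be sketched as follows. The representation is an assumption for illustration: the skeleton (from block 3000) is taken to be a list of line segments `((x1, y1), (x2, y2))` plus a list of junction points, with y increasing downward so that the smaller y is the higher point.

```python
import math

def backbone_lengths(segments, intersections):
    """Pick the most vertical skeleton segment as the backbone S,
    then measure X (highest point of S to nearest intersection) and
    Y (lowest point of S to nearest intersection)."""
    # Most vertical segment: vertical extent dominates horizontal extent.
    def verticality(seg):
        (x1, y1), (x2, y2) = seg
        return abs(y2 - y1) - abs(x2 - x1)
    s = max(segments, key=verticality)

    top, bottom = sorted(s, key=lambda p: p[1])  # smaller y = higher point
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    x_len = min(dist(top, p) for p in intersections)
    y_len = min(dist(bottom, p) for p in intersections)
    return x_len, y_len
```

A real skeleton would typically be produced by morphological thinning of the binary foreground mask, with segments and junctions extracted from the thinned image; that step is omitted here.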

FIGURES 6a - 6d show various skeleton representations generated by the skeletonization module (118) for various poses or flying postures of birds.

As seen in FIGURES 7a - 7b, if the length of X is less than or equal to the length of Y, then the bird is flying inside the enclosed region or bird house (represented by the upward arrow). If the length of X is greater than the length of Y, then the bird is flying outside the enclosed region or bird house (represented by the downward arrow).
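The decision rule of FIGURES 7a - 7b reduces to a single comparison, sketched here with the string labels "IN"/"OUT" as illustrative placeholders for the accumulated data counters:

```python
def flight_direction(x_len, y_len):
    """Direction rule per FIGURES 7a - 7b: X <= Y means the bird is
    flying into the enclosed region (IN); X > Y means it is flying
    out of the enclosed region (OUT)."""
    return "IN" if x_len <= y_len else "OUT"
```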

In accordance with an additional aspect, the present disclosure can take the form of a computer program product accessible from a machine-readable medium providing programming code for use by the system (100). The software and/or computer program product can be hosted in the environment of FIGURE 1 to implement the teachings of the present disclosure. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise.

The terms "comprises," "comprising," "including," and "having" are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.

The use of the expression "at least" or "at least one" suggests the use of one or more elements, as the use may be in one of the embodiments to achieve one or more of the desired objects or results. Although the process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously, in parallel, or concurrently.

Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present disclosure with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the disclosure could be accomplished by modules, routines, subroutines, or subparts of a computer program product.

While the foregoing describes various embodiments of the disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof. The scope of the disclosure is determined by the claims that follow. The disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.