

Title:
VIDEO ANALYSIS METHODS AND APPARATUS
Document Type and Number:
WIPO Patent Application WO/2017/105347
Kind Code:
A1
Abstract:
Video analysis methods are described in which abnormalities are detected by comparing features extracted from a video sequence or motion patterns determined from the video sequence with a statistical model. The statistical model may be updated during the video analysis.

Inventors:
MARANATHA YOSUA MICHAEL (SG)
TAY MENG KEAT CHRISTOPHER (SG)
LOOI RAYMOND (SG)
SRIRAM ASHISH (SG)
Application Number:
PCT/SG2016/050599
Publication Date:
June 22, 2017
Filing Date:
December 12, 2016
Assignee:
VI DIMENSIONS PTE LTD (SG)
International Classes:
G06T7/00; G06V10/776
Foreign References:
US20060045185A12006-03-02
US20140003710A12014-01-02
US20080123975A12008-05-29
Other References:
ZHU ET AL.: "Context-Aware Activity Recognition and Anomaly Detection in Video", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, vol. 7, no. 1, February 2013 (2013-02-01), pages 91 - 101, XP011488792, Retrieved from the Internet [retrieved on 20170201]
HU ET AL.: "A System for learning Statistical Motion Patterns", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 28, no. 9, September 2006 (2006-09-01), pages 1450 - 1464, XP001523382, Retrieved from the Internet [retrieved on 20170202]
See also references of EP 3391334A4
Attorney, Agent or Firm:
LINDSAY, Jonas Daniel (SG)
Claims:
CLAIMS

1. A method of identifying abnormal events in a video sequence, the method comprising:

extracting features from the video sequence;

determining an abnormality measure for each feature by comparing the extracted features with a statistical model; and

identifying an abnormal event using the abnormality measure.

2. A method according to claim 1, further comprising discretizing the extracted features.

3. A method according to claim 2, wherein the statistical model comprises a histogram indicating a frequency distribution of discretized features extracted from the video sequence and the abnormality measure is determined from the frequency distribution.

4. A method according to claim 3, further comprising updating the frequency distribution with the discretized extracted feature.

5. A method according to claim 4, further comprising pruning the updated frequency distribution if the updated frequency distribution exceeds a threshold number of entries.

6. A method according to claim 3, further comprising determining for a portion of the video sequence, a set of discretized features present in the portion of the video sequence and a frequency of occurrence in the portion of the video sequence of each discretized feature of the set of discretized features.

7. A method according to claim 6, further comprising determining an abnormality measure for the portion of the video sequence as a function of the abnormality measures for each of the discretized features and the frequency of occurrence of the discretized features.

8. A method according to any preceding claim wherein identifying an abnormal event using the abnormality measure comprises comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

9. A method according to any preceding claim, further comprising displaying to a user an indication of the location of the abnormal event on a frame of the video sequence.

10. A method according to claim 3, further comprising receiving a user indication of a set of discretized features and modifying the frequency in the frequency distribution of the indicated set of discretized features.

11. A method according to any preceding claim wherein the features comprise optical flow.

12. A method according to any preceding claim, further comprising detecting an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

13. A non-transitory computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to any preceding claim.

14. An apparatus for identifying abnormal events in a video sequence, the apparatus comprising:

a computer processor and a data storage device, the data storage device having a feature extractor module and an abnormality detector module comprising non-transitory instructions operative by the processor to:

extract features from the video sequence;

determine an abnormality measure for each feature by comparing the extracted features with a statistical model; and

identify an abnormal event using the abnormality measure.

15. An apparatus according to claim 14, wherein the data storage device further comprises instructions operative by the processor to discretize the extracted features.

16. An apparatus according to claim 15, wherein the statistical model comprises a histogram indicating a frequency distribution of discretized features extracted from the video sequence and the abnormality measure is determined from the frequency distribution.

17. An apparatus according to claim 16, wherein the data storage device further comprises instructions operative by the processor to update the frequency distribution with the discretized extracted feature.

18. An apparatus according to claim 17, wherein the data storage device further comprises instructions operative by the processor to prune the updated frequency distribution if the updated frequency distribution exceeds a threshold number of entries.

19. An apparatus according to claim 16, wherein the data storage device further comprises instructions operative by the processor to determine for a portion of the video sequence, a set of discretized features present in the portion of the video sequence and a frequency of occurrence in the portion of the video sequence of each discretized feature of the set of discretized features.

20. An apparatus according to claim 19, wherein the data storage device further comprises instructions operative by the processor to determine an abnormality measure for the portion of the video sequence as a function of the abnormality measures for each of the discretized features and the frequency of occurrence of the discretized features.

21. An apparatus according to any one of claims 14 to 20 wherein the abnormality detector module comprises instructions operative by the processor to identify an abnormal event using the abnormality measure by comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

22. An apparatus according to any one of claims 14 to 21, the data storage device further comprising a user interface module comprising non-transitory instructions operative by the processor to: display to a user an indication of the location of the abnormal event on a frame of the video sequence.

23. An apparatus according to claim 16, the data storage device further comprising a user interface module comprising non-transitory instructions operative by the processor to: receive a user indication of a set of discretized features and modify the frequency in the frequency distribution of the indicated set of discretized features.

24. An apparatus according to any one of claims 14 to 23, wherein the features comprise optical flow.

25. An apparatus according to any one of claims 14 to 24, the data storage device further comprising a rule based classifier module comprising non-transitory instructions operative by the processor to: detect an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

26. A method of identifying abnormal events in a video sequence, the method comprising:

extracting features from the video sequence;

determining motion patterns from the extracted features;

determining an abnormality measure for each motion pattern by comparing the motion patterns with a statistical model; and

identifying an abnormal event using the abnormality measure.

27. A method according to claim 26, wherein the motion patterns comprise tracks indicating motion between frames of the video sequence.

28. A method according to any one of claims 26 to 27, further comprising quantizing the extracted features.

29. A method according to any one of claims 26 to 28, further comprising updating the statistical model using the motion patterns.

30. A method according to any one of claims 26 to 29 wherein identifying an abnormal event using the abnormality measure comprises comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

31. A method according to any one of claims 26 to 30 wherein the statistical model comprises a set of clustered motion patterns.

32. A method according to claim 31, wherein determining an abnormality measure for a motion pattern comprises identifying a set of clustered motion patterns of the statistical model closest to the motion pattern.

33. A method according to claim 32, wherein the abnormality measure is a distance measure between the set of clustered motion patterns closest to the motion pattern and the motion pattern.

34. A method according to any one of claims 26 to 33 wherein the features are optical flow.

35. A method according to any one of claims 26 to 34, further comprising detecting an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

36. A non-transitory computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to any one of claims 26 to 35.

37. An apparatus for identifying abnormal events in a video sequence, the apparatus comprising:

a computer processor and a data storage device, the data storage device having a feature extraction module; a motion pattern generation module and a classifier module comprising non-transitory instructions operative by the processor to:

extract features from the video sequence;

determine motion patterns from the extracted features;

determine an abnormality measure for each motion pattern by comparing the motion patterns with a statistical model; and

identify an abnormal event using the abnormality measure.

38. An apparatus according to claim 37, wherein the motion patterns comprise tracks indicating motion between frames of the video sequence.

39. An apparatus according to any one of claims 37 to 38, the data storage device further comprising a quantization module comprising non-transitory instructions operative by the processor to: quantize the extracted features.

40. An apparatus according to any one of claims 37 to 39, the data storage device further comprising a compare and build statistical model module comprising non-transitory instructions operative by the processor to: update the statistical model using the motion patterns.

41. An apparatus according to any one of claims 37 to 40 wherein the classifier module comprises non-transitory instructions operative by the processor to: identify an abnormal event using the abnormality measure by comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

42. An apparatus according to any one of claims 37 to 41 wherein the statistical model comprises a set of clustered motion patterns.

43. An apparatus according to claim 42, wherein the classifier module comprises non-transitory instructions operative by the processor to: determine an abnormality measure for a motion pattern by identifying a set of clustered motion patterns of the statistical model closest to the motion pattern.

44. An apparatus according to claim 43, wherein the abnormality measure is a distance measure between the set of clustered motion patterns closest to the motion pattern and the motion pattern.

45. An apparatus according to any one of claims 37 to 44 wherein the features are optical flow.

46. An apparatus according to any one of claims 37 to 45, wherein the storage device further comprises a rule based classifier module comprising non-transitory instructions operative by the processor to: detect an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

Description:
Video Analysis Methods and Apparatus

FIELD OF THE INVENTION

Embodiments of the present invention relate to video analysis. In particular, embodiments described herein relate to the detection of abnormal events in videos.

BACKGROUND OF THE INVENTION

Public infrastructure settings are increasingly seen as vulnerable to security threats, and terrorist incidents have caused great loss of life and property. In order to protect human lives and public infrastructure facilities such as rail and road transport and shopping malls from potential security threats, it has become imperative to build surveillance systems that can monitor a scene and automatically detect and report suspicious events. The importance of surveillance systems is evident from the increasing number of closed circuit television (CCTV) cameras seen in train stations, airports, shopping malls, traffic junctions and streets in day-to-day life. For instance, it has been reported that the United Kingdom has one of the largest camera networks, with over 4.2 million cameras, approximately one for every 14 people.

Much of the content recorded from surveillance scenes is rarely screened and merely serves as a record for forensic analysis. Moreover, searching for a specific occurrence in this enormous quantity of data amounts to looking for a needle in a haystack. A surveillance camera becomes more usable if it is packaged with intelligence to detect and report events in close to real time.

Video based surveillance systems are widely used to monitor sensitive areas for dangerous behavior, unusual activities and intrusion. This process generally involves humans monitoring a continuous stream of video from single or multiple sources looking for such abnormal behavior. This process is highly inefficient given the rarity of occurrence of such abnormal events. Most attempts at automating this process require a set of predefined rules describing what sort of events are to be considered abnormal. Such rule based systems describe abnormal events using rules such as 'detect people crossing a virtual line'; 'detect any activity within a bounded region'; 'detect vehicles stopping in a region for a long time'; etc. Defining rules specific to a scene requires a human to analyze the scene being monitored and create these rules. In addition, it is very difficult to create rules to describe most abnormal behavior, such as 'a fight in a crowd' or 'a person loitering in a region with standard people movement'.

SUMMARY OF THE INVENTION

According to a first aspect of the present invention there is provided a method of identifying abnormal events in a video sequence. The method comprises: extracting features from the video sequence; determining an abnormality measure for each feature by comparing the extracted features with a statistical model; and identifying an abnormal event using the abnormality measure.

In an embodiment, the method further comprises discretizing the extracted features.

In an embodiment, the statistical model comprises a histogram indicating a frequency distribution of discretized features extracted from the video sequence and the abnormality measure is determined from the frequency distribution.

In an embodiment, the method further comprises updating the frequency distribution with the discretized extracted feature.

In an embodiment, the method further comprises pruning the updated frequency distribution if the updated frequency distribution exceeds a threshold number of entries. In an embodiment, the method further comprises determining for a portion of the video sequence, a set of discretized features present in the portion of the video sequence and a frequency of occurrence in the portion of the video sequence of each discretized feature of the set of discretized features. In an embodiment, the method further comprises determining an abnormality measure for the portion of the video sequence as a function of the abnormality measures for each of the discretized features and the frequency of occurrence of the discretized features.

In an embodiment, identifying an abnormal event using the abnormality measure comprises comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

In an embodiment, the method further comprises displaying to a user an indication of the location of the abnormal event on a frame of the video sequence.

In an embodiment, the method further comprises receiving a user indication of a set of discretized features and modifying the frequency in the frequency distribution of the indicated set of discretized features.

In an embodiment, the features comprise optical flow. Additionally or alternatively, the features may comprise color and/or tracks. The use of tracks is described in more detail below. The abnormality measure may be determined as a weighted sum derived from the frequency of the tracks.

In an embodiment, the method further comprises detecting an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

According to a second aspect of the present invention there is provided an apparatus for identifying abnormal events in a video sequence. The apparatus comprises a computer processor and a data storage device, the data storage device having a feature extractor module and an abnormality detector module comprising non-transitory instructions operative by the processor to: extract features from the video sequence; determine an abnormality measure for each feature by comparing the extracted features with a statistical model; and identify an abnormal event using the abnormality measure.

According to a third aspect of the present invention there is provided a method of identifying abnormal events in a video sequence. The method comprises: extracting features from the video sequence; determining motion patterns from the extracted features; determining an abnormality measure for each motion pattern by comparing the motion patterns with a statistical model; and identifying an abnormal event using the abnormality measure.

In an embodiment, the motion patterns comprise tracks indicating motion between frames of the video sequence. The tracks can be constructed without requiring any object level identification. In an embodiment, the method further comprises quantizing the extracted features.

In an embodiment, the method further comprises updating the statistical model using the motion patterns. In an embodiment, identifying an abnormal event using the abnormality measure comprises comparing the abnormality measure with a threshold and identifying an abnormal event when the abnormality measure is greater than the threshold.

In an embodiment, the statistical model comprises a set of clustered motion patterns.

In an embodiment, determining an abnormality measure for a motion pattern comprises identifying a set of clustered motion patterns of the statistical model closest to the motion pattern. In an embodiment, the abnormality measure is a distance measure between the set of clustered motion patterns closest to the motion pattern and the motion pattern.

In an embodiment, the features are optical flow. In an embodiment, the method further comprises detecting an event according to a pre-defined rule and wherein identifying an abnormal event comprises using the abnormality measure and the result of the pre-defined rule.

According to a fourth aspect of the present invention there is provided an apparatus for identifying abnormal events in a video sequence. The apparatus comprises: a computer processor and a data storage device, the data storage device having a feature extraction module; a motion pattern generation module and a classifier module comprising non-transitory instructions operative by the processor to: extract features from the video sequence; determine motion patterns from the extracted features; determine an abnormality measure for each motion pattern by comparing the motion patterns with a statistical model; and identify an abnormal event using the abnormality measure.

According to a yet further aspect of the present invention, there is provided a non-transitory computer-readable medium. The computer-readable medium has stored thereon program instructions for causing at least one processor to perform operations of a method disclosed above.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following, embodiments of the present invention will be described as non-limiting examples with reference to the accompanying drawings in which:

Figure 1 shows a video analysis apparatus according to an embodiment of the present invention;

Figure 2 shows the processing carried out in a method of analyzing video data according to an embodiment of the present invention;

Figure 3 shows a data structure of a statistical model according to an embodiment of the present invention;

Figure 4 shows the processes carried out using the statistical model in an embodiment of the present invention;

Figure 5 shows a method implemented by the update model process in an embodiment of the present invention;

Figure 6 shows processing units of an apparatus for detecting abnormalities in a video sequence according to an embodiment of the present invention;

Figure 7 shows the discretization of the direction of motion in an embodiment of the present invention;

Figures 8a and 8b show examples of default Gaussian grids used in embodiments of the present invention;

Figure 9 shows an example of words being mapped to Gaussians;

Figure 10 shows an implementation of adaptive training which may be used with embodiments of the present invention; and

Figure 11 shows a video analysis system that combines a rule based analysis with a statistical analysis according to an embodiment of the present invention.

DETAILED DESCRIPTION

Figure 1 shows a video analysis apparatus according to an embodiment of the present invention. The video analysis apparatus 100 receives video data 120 as an input and outputs indications of abnormal events 140. The input video data 120 comprises a plurality of frames of video data. In some embodiments, the video analysis apparatus 100 processes the input video data 120 in real time or near real time. In such embodiments, the video data 120 may be received directly from a video camera. As described in more detail below, in some embodiments, the analysis carried out by the video analysis apparatus 100 requires a scene, that is, a plurality of frames. Therefore, the analysis and the output of indications of abnormal events 140 may lag behind the input video data 120. In alternative embodiments, the processing carried out by the video analysis apparatus 100 may take place on pre-recorded video data.

The output of indications of abnormal events 140 may take the form of indications of times and locations on the input video data 120. In some embodiments, the indications may take the form of motion indications or tracks on the input video data 120. As described in more detail below, the processing carried out by the video analysis apparatus 100 involves learning based on an input video sequence. This learning process may continue during the analysis carried out by the video analysis apparatus 100. In some embodiments, a user may input rules or apply feedback to the output indications of abnormal events 140. The rules and feedback may be entered using a user input device such as a mouse, keyboard or touchscreen.

The video analysis apparatus 100 may be implemented as a standard computer having a processor configured to execute software stored on a storage device as a plurality of computer program modules operative by the processor to carry out the methods described in more detail below.

Figure 2 illustrates the processing carried out in a method of analyzing video data according to an embodiment of the present invention. As shown in Figure 2, processing 200 includes two components: a feature extractor 210 and an abnormality detector 220.

In general, anomaly detection is the identification of an item, event, or observation which does not fit the expected pattern of a dataset. In the abnormality detector 220, an anomaly is defined quantitatively as an item, event, or observation which occurs infrequently in the dataset. In other words, it is assumed that something normal will occur more frequently than something abnormal. The abnormality detector 220 in this embodiment is an unsupervised anomaly detection algorithm based on frequency counting. The algorithm is lightweight and fast enough to support real-time detection. Additionally, it has adaptive capability, meaning that it can learn during detection and update itself to adapt to current conditions.

The input 202 to the feature extractor 210 is a video sequence. Before this is fed into the abnormality detector 220, the input is transformed by the feature extractor 210. The feature extractor 210 describes each input scene, which comprises a plurality of frames, as a set of feature descriptors with a weight or score for each feature descriptor. In this embodiment, the input is described as a bag of words 212 indicating a set of words and the frequency of each word in the set. Each word uniquely identifies a descriptor and its frequency is proportional to the weight or score. Examples of words are discretized values of optical flow, colour or speed of motion, amongst other meta-data in general.

The main component of the abnormality detector 220 is a statistical model 224. The statistical model 224 is a histogram-like data structure which gives information on the frequency distribution of the words. The statistical model 224 is initialised during a training phase using a training dataset. The learning process 222 is used to build the statistical model 224 from the training dataset. The training dataset may be, for example, a few hours of video sequence. After learning, the model should know the rough frequency distribution of the words and can hence give a reasonable abnormality measure during detection. The statistical model is also updated during the detection phase. This allows the statistical model 224 to adapt to the most recent situation and takes into account the fact that, in some cases, the frequency distribution of the words may change over time.

During learning, the learning process 222 receives a stream of bags of words which are derived from the training dataset. For each bag of words, training inputs 223 comprising pairs of a word and its frequency are sent to the statistical model 224 one by one by the learning process 222. When the statistical model 224 receives the pair of a word and its frequency, the histogram-like data structure in the statistical model 224 is updated. This takes place by changing the frequency distribution accordingly: the particular word becomes more frequent as compared to other words.

During detection, the detection process 226 provides detecting inputs 225 to the statistical model 224. The detecting inputs 225 comprise pairs of words and the frequencies for the words. The statistical model 224 gives the detection process 226 an abnormality value 227 indicating how abnormal the word is. The detection process 226 calculates an abnormality measure for the bag of words using the abnormality value 227 for each word.

A number of possible functions to compute the abnormality value 227 in the statistical model 224 are envisaged. As a general rule, a word with a lower frequency in the model should give a higher abnormality value. For each bag of words, the abnormality measure 228 is the weighted sum of the abnormality values of all the words, where the weight is the word's frequency in the bag of words. Finally, to determine whether a particular input is abnormal or not, the abnormality measure 228 of the bag of words is compared with a pre-determined threshold value. If the abnormality measure 228 is higher than the threshold, then it is determined that the input is abnormal. Otherwise, the input is normal.
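As a minimal sketch of this scoring step (in Python, with illustrative names; the per-word lookup abnormality_value stands in for the statistical model 224), the abnormality measure of a bag of words can be computed and thresholded as follows:

def score_bag_of_words(bag, abnormality_value, threshold):
    # bag maps word -> frequency within the current scene
    measure = sum(freq * abnormality_value(word) for word, freq in bag.items())
    return measure, measure > threshold

# Example with a dummy model in which every word has abnormality value 0.1:
measure, is_abnormal = score_bag_of_words({"32.64.3": 5, "40.64.7": 1},
                                          lambda w: 0.1, threshold=0.5)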

Furthermore, as illustrated in Figure 2, the statistical model 224 is updated during the detection process 226. A similar updating method to that used during the learning process 222 is carried out, but the frequency of the word is multiplied by a small multiplier called DetectionMultiplier before being sent into the statistical model 224. DetectionMultiplier usually has a value between 0 and 0.2 and is used to make sure that the model does not change too fast. This update during the detection process 226 gives the statistical model 224 an adaptive capability which enables it to adapt to new situations during the detection process 226.

Figure 3 shows a data structure of a statistical model according to an embodiment. The data structure 300 is a histogram sorted by frequency. The data structure 300 stores locations for the first known word 310, the last normal word 320 and the last known word 330. These locations may be stored as indexes linking to words. The location of the last normal word 320 allows the statistical model to quickly determine whether a word is normal or not. Whenever the statistical model is updated, it should always maintain the sorted order of the histogram 300 and maintain the location of the last normal word 320.

Whether a word is normal or not is determined by a parameter called normalTh. normalTh has a value between 0 and 100%; it is the ratio of the frequency of normal words to the total frequency. In the example shown in Figure 3, normalTh is equal to 99%. This means that the top 99 percentile of the words are normal while the bottom 1 percentile is abnormal.

Figure 4 shows the processes carried out using the statistical model 224. As shown in Figure 4, there are three processes: update model 402, get abnormality value 404 and prune model 406. The update model 402 process receives either a training input 223 or a detecting input 225 as an input. As discussed above, both the training input 223 and the detecting input 225 comprise a combination of a word and the frequency of that word in a bag of words. The get abnormality value 404 process takes the detecting input 225 as an input and outputs an abnormality value 227. The prune model 406 process takes place after the update model 402 process has taken place.

The processes are described in more detail below. The sorted histogram is implemented using a combination of a map data structure and a dynamic array, named WordToIndexMap and WordFrequencyArray respectively. The WordFrequencyArray contains pairs of words and frequencies and is sorted from the highest frequency to the lowest. The WordToIndexMap is a mapping between words and their rank in WordFrequencyArray. In this way, we can quickly find a word's rank and frequency. Moreover, swapping between neighbouring words can be done in O(1) time. Other than the histogram, the data structure has three other variables to keep track of the last normal word. These variables are named LastNormalWord, TotalFrequency, and TotalFrequencyBeforeNormal. LastNormalWord stores the string of the last normal word; it is more rigorously defined as the last word from the top normalTh percentile. TotalFrequency is the sum of the frequencies of all words and TotalFrequencyBeforeNormal is the sum of the frequencies of all words before the last normal word. Using these three variables, we can efficiently keep track of the last normal word whenever the model is updated or pruned.
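The bookkeeping described above can be pictured with the following Python skeleton (a sketch only; the field names mirror those used in this description, and the full update logic is given in the pseudo code below):

class SortedHistogram:
    # Histogram of word frequencies kept sorted from most to least frequent.
    def __init__(self, normal_th):
        self.word_to_index = {}        # WordToIndexMap: word -> rank in the array
        self.word_frequency = []       # WordFrequencyArray: list of [word, frequency], sorted
        self.last_normal_word = None   # LastNormalWord: last word in the top normalTh percentile
        self.total_frequency = 0.0     # TotalFrequency
        self.total_frequency_before_normal = 0.0  # TotalFrequencyBeforeNormal
        self.normal_th = normal_th     # e.g. 0.99 for the top 99 percentile

    def rank_and_frequency(self, word):
        # O(1) lookup of a word's rank and frequency
        index = self.word_to_index[word]
        return index, self.word_frequency[index][1]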

Figure 5 shows a method implemented by the update model 402 process. As shown in Figure 4, the inputs to the update model 402 process are a word w and its frequency f to be added to the statistical model 224. The process of updating the model is composed of adding a word to the histogram followed by pruning the model. The process of pruning the model will be explained in more detail below.

In step S502, the statistical model 224 is updated by searching for the location of word w in the statistical model 224 and increasing the frequency of that word in the histogram by the input frequency f. When a word w with frequency f is added into the histogram, what we want is to increase the frequency of the word w in the histogram by f while maintaining the sorted order of the histogram and the location of the last normal word with its helper variables. The first thing we need to do is to find the index of the word w in WordFrequencyArray by using WordToIndexMap. In step S504, we use the index to add the frequency f to the correct element in WordFrequencyArray.

In step S506, we move word w by repeatedly swapping it with its neighbour until it gets into the correct position. This ensures that the histogram is in sorted order.

After the histogram is properly ordered, we still need to maintain some variables, namely TotalFrequency, TotalFrequencyBeforeNormal, and LastNormalWord. These variables are updated in step S508. We can update TotalFrequency by adding the incoming word's frequency, f, to its value. To update TotalFrequencyBeforeNormal and LastNormalWord, three steps are carried out. First, we keep the value of the LastNormalWord and update TotalFrequencyBeforeNormal to be the sum of the frequencies of all words before the (non-updated) LastNormalWord. Next, we update the LastNormalWord to make sure that it really is the last normal word, that is, the last word from the top normalTh percentile. This update is done by comparing the ratio between TotalFrequencyBeforeNormal and TotalFrequency with normalTh and then, if needed, replacing LastNormalWord with its neighbouring word. Lastly, we update TotalFrequencyBeforeNormal if the LastNormalWord was changed in the previous step.

Pseudo code for update model is set out below:

function get_correct_position(word w)
begin
    index <- WordToIndexMap[w]
    // Repeatedly swap w with its neighbour until the sorted order is restored
    while (index > 0 and WordFrequencyArray[index].frequency > WordFrequencyArray[index-1].frequency)
    begin
        tempWord <- WordFrequencyArray[index-1].word
        swap(WordToIndexMap[w], WordToIndexMap[tempWord])
        swap(WordFrequencyArray[index], WordFrequencyArray[index-1])
        index <- index - 1
    end
end

function add_to_histogram(word w, frequency f)
begin
    if (w in WordToIndexMap)
    begin
        index <- WordToIndexMap[w]
        WordFrequencyArray[index].frequency <- WordFrequencyArray[index].frequency + f
    end
    else
    begin
        WordToIndexMap[w] <- WordFrequencyArray.size()
        WordFrequencyArray.push_back((w, f))
    end
    get_correct_position(w)
    TotalFrequency <- TotalFrequency + f
    // Maintain LastNormalWord and TotalFrequencyBeforeNormal; this can be done in O(1) time
    maintain_last_normal_word()
end

function update_model(word w, frequency f)
begin
    add_to_histogram(w, f)
    prune_model()
end

The prune model 406 process will now be described. After the statistical model 224 is updated, the statistical model will always grow larger in terms of total frequency (vertically) and number of words (horizontally). This means that the statistical model can grow arbitrarily large. To prevent this, the statistical model 224 is pruned when it exceeds a certain size.

There are three parameters that are used for pruning the model: MaxTotalFrequency, MaxNumWord, and ReductionFactor. If the total frequency of the model is more than MaxTotalFrequency, then the statistical model will be pruned by scaling down the frequency of all words by ReductionFactor. MaxNumWord indicates the maximum number of different words allowed in the statistical model. If the number of words becomes more than MaxNumWord, then the word with the smallest frequency will keep being removed until the number of words is equal to ReductionFactor multiplied by MaxNumWord. In general, it is not good if the histogram is able to grow arbitrarily large since it can affect the performance of the system. Therefore, we employ two types of pruning to keep the size of the histogram reasonable. First, we limit the frequency size by scaling down the frequency of each word in the histogram. This is useful to prevent the loss of accuracy from floating point precision. Secondly, we limit the number of words by removing words with small frequencies. A large number of words may cause the system to slow down; therefore it is important to control the number of words in the histogram. In one embodiment, we use three parameters to control the pruning mechanism. These parameters are ReductionFactor, MaxTotalFrequency, and MaxNumWord. ReductionFactor is a number between 0 and 1 that represents the factor by which we reduce the size after pruning. MaxTotalFrequency and MaxNumWord are parameters indicating the upper bound of TotalFrequency and the number of words in the histogram respectively.

The process of scaling down the frequency is triggered when TotalFrequency is bigger than MaxTotalFrequency. Once triggered, the process will multiply the frequency of each word in the histogram by ReductionFactor. After that, TotalFrequency and TotalFrequencyBeforeNormal will be multiplied by ReductionFactor as well.

The second pruning process is started when the number of words in the histogram is larger than MaxNumWord. In this process, we keep removing the word with the smallest frequency from the histogram until the number of words in the histogram is less than or equal to MaxNumWord * ReductionFactor. While removing words from the histogram, the process also updates and manages TotalFrequency, TotalFrequencyBeforeNormal and LastNormalWord to make sure they have the correct values.

Pseudo code for prune model is set out below:

function scale_down_histogram()
begin
    for all x in WordFrequencyArray
    begin
        x.frequency <- x.frequency * ReductionFactor
    end
    TotalFrequency <- TotalFrequency * ReductionFactor
    TotalFrequencyBeforeNormal <- TotalFrequencyBeforeNormal * ReductionFactor
end

function prune_num_word()
begin
    while (WordFrequencyArray.size() > MaxNumWord * ReductionFactor)
    begin
        TotalFrequency <- TotalFrequency - WordFrequencyArray.back().frequency
        // Also remove the word's entry from WordToIndexMap so the map stays consistent
        WordToIndexMap.erase(WordFrequencyArray.back().word)
        WordFrequencyArray.pop_back()
    end
    maintain_last_normal_word()
end

function prune_model()
begin
    if (TotalFrequency > MaxTotalFrequency)
    begin
        scale_down_histogram()
    end
    if (WordFrequencyArray.size() > MaxNumWord)
    begin
        prune_num_word()
    end
end

The processing carried out in the get abnormality value 404 process will now be described. During detection, we want to get the abnormality value of a word. Before computing the abnormality value, the get abnormality value 404 process checks whether or not the word is normal by comparing its location with the last normal word location. If the word is normal, then the abnormality value is zero. Otherwise, an abnormality value will be computed by a function called compute_abnormal_value which takes a word and returns an abnormality value as a result.

The compute_abnormal_value function can vary according to different embodiments. For example, a mathematical formula may be used to compute the abnormality value. Other than the mathematical formula used to compute the abnormality value, the function may also vary because of the usage of previous results; for example, we might want to use the previous abnormality value to compute the current abnormality value. In general, a word with a lower frequency should contribute a higher abnormality value and the correlation between the previous and current abnormality values should be non-negative.

As shown in Figure 4, the get abnormality value 404 process returns the abnormality value of a word during the detection processing and the statistical model is updated. As described above, the get abnormality value 404 process gives a zero abnormality value for a normal word; otherwise it uses the compute_abnormal_value function to compute the abnormality value.

The compute_abnormal_value function is not a fixed function. We can change the function such that it suits the domain where the statistical model is used. In an implementation, we use an exponential function of the ratio between the word's frequency and the last normal word's frequency. The rate of exponential decay is controlled by setting the value of a constant (Constant_1).

Pseudo code for get abnormality value is set out below:

function is_abnormal(word w)
begin
    return WordToIndexMap[w] > WordToIndexMap[LastNormalWord]
end

function compute_abnormal_value(word w)
begin
    if (not w in WordToIndexMap)
    begin
        return 1.0
    end
    index <- WordToIndexMap[w]
    f <- WordFrequencyArray[index].frequency
    lastNormalIndex <- WordToIndexMap[LastNormalWord]
    lastNormalFrequency <- WordFrequencyArray[lastNormalIndex].frequency
    return exp(-Constant_1 * f / lastNormalFrequency)
end

function get_abnormality_value(word w, frequency f)
begin
    result <- 0.0
    if (is_abnormal(w))
    begin
        result <- compute_abnormal_value(w)
    end
    update_model(w, f)
    return result
end

An embodiment of a video analysis method and apparatus for detecting abnormalities in video camera surveillance will now be described. In this embodiment we choose optical flow as the feature. The feature extractor receives a stream of frames from the video. Optical flow is generated on each frame and encoded into a word with the format "x.y.direction". The x and y are the coordinates of the pixel that has the optical flow. To reduce the word space, we discretize x and y into the nearest smaller multiple of 8, e.g. 34 will be discretized to 32. The direction is the angle of the optical flow with respect to the y axis; it is discretized into 8 possible directions (consecutive directions differ by 45 degrees).
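A minimal sketch of this encoding step is shown below (Python; the helper name and the treatment of the angle are illustrative assumptions):

import math

def encode_word(x, y, dx, dy, num_directions=8):
    # Encode one optical-flow vector at pixel (x, y) as an 'x.y.direction' word.
    # Positions are discretized to the nearest smaller multiple of 8 (e.g. 34 -> 32).
    qx, qy = (x // 8) * 8, (y // 8) * 8
    # Discretize the flow direction, measured from the y axis, into num_directions bins.
    angle = math.atan2(dx, dy) % (2 * math.pi)
    direction = int(round(angle / (2 * math.pi / num_directions))) % num_directions
    return "%d.%d.%d" % (qx, qy, direction)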

The feature extractor outputs one bag of words for every 5 frames it receives. The bag of words is basically a vector of pairs of words with their frequencies, where the frequency is simply how many times such a word appears in the 5 frames. Once a bag of words is generated, it is fed into the statistical model. We can either feed the bag of words to the learning process to build the model, or feed the bag of words into the detection process for real-time detection.
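Building on the encode_word sketch above, the grouping of words into one bag per 5 frames could look roughly as follows (again a sketch, not the exact implementation):

from collections import Counter

def bags_of_words(flow_per_frame, frames_per_bag=5):
    # flow_per_frame: iterable of lists of (x, y, dx, dy) tuples, one list per frame.
    # Yields one Counter (word -> frequency) for every frames_per_bag frames.
    bag = Counter()
    for i, flows in enumerate(flow_per_frame, start=1):
        bag.update(encode_word(x, y, dx, dy) for (x, y, dx, dy) in flows)
        if i % frames_per_bag == 0:
            yield bag
            bag = Counter()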

With this feature extractor, the statistical model can be used to detect events with unusual optical flow. If the movement in the video camera scene is well structured, then the statistical model can be used to detect abnormality which does not follow the structure. For example, consider a scene where the camera is looking at a highway where pedestrians rarely jaywalk. In this case, the normal optical flow would be the movement of vehicles along the road. If there is a person jaywalking, it will be detected as abnormal since it will produce optical flow perpendicular to the movement of the vehicles.

We use the compute_abnormal_value function as described above. With this, there are 5 parameters that we need to decide: normalTh, MaxTotalFrequency, MaxNumWord, Constant_1, and DetectionMultiplier (from the detection process). Before detection, the statistical model is built by learning from 2-3 hours of video from the camera view. With this, the initial model will roughly know the general pattern of the optical flow in the camera view. Having the initial model, the statistical model can be used for online detection on the video stream from the camera.

In the beginning, the detection result might not be good because the initial model has not obtained enough information from the video used for learning. Fortunately, the statistical model is adaptive and it will learn during detection as well. Therefore, the statistical model will get better and better while doing detection and, as a result, the detection will also get better. Moreover, the adaptive capability enables the model to adapt if the normal behaviour in the camera view changes.

The processing described above is very lightweight in terms of computational requirements and is fast.

In certain situations, the user might want to provide feedback to the statistical model. As abnormalities are contextual, certain users might want to designate certain motions (which correspond to a set of words) as being normal or abnormal. The embodiments described above provide a very direct way of giving this feedback.

In an embodiment, users are able to provide feedback via a user interface in which the user highlights the area of the camera view of interest. The highlighted areas are directly mapped to their corresponding words. The statistical model 224 is then updated with the add_to_frequency function. If the highlighted zone is to be designated as normal, a positive frequency number is passed as a parameter to the add_to_frequency function. For the opposite case, a negative frequency number is used instead.
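In terms of the histogram operations described earlier, such feedback could be applied roughly as in the following sketch (the boost value and the list of words derived from the highlighted area are assumptions):

def apply_feedback(model, highlighted_words, mark_normal, boost=100.0):
    # Push the words corresponding to a highlighted region up (normal) or down (abnormal).
    delta = boost if mark_normal else -boost
    for word in highlighted_words:
        model.add_to_histogram(word, delta)   # add_to_frequency / add_to_histogram as described above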

Figure 6 shows processing units of an apparatus for detecting abnormalities in a video sequence according to an embodiment of the present invention.

As shown in Figure 6, the processing 600 comprises a training phase 602 and a detection phase 604. Some of the processing units of the training phase 602 also occur in the detection phase 604; where this is the case, like reference numerals are used.

In the training phase 602, a scene track model is learned. In the detection phase 604, this scene track model is used to detect abnormal events in a video stream. As shown in Figure 6, a training video stream 605 is input into the training phase 602, and a test input video stream 606 is input into the detection phase 604.

In the training phase 602, we intend to model all activities occurring in the training video stream 605. A feature extraction module 610 extracts, from the input video stream 605, basic features that describe regions of activity in the scene in space and time. The input video stream 605 comprises a series of frames or images. The basic features, which are referred to as words 615, are pixels or super-pixels with motion information. The basic features (words 615) are too numerous to be used for modelling. Thus we introduce multiple stages of quantization to decrease the sample space.

A Probabilistic Latent Semantic Analysis (PLSA) module 620 groups words that appear together in time to generate topics 625 using a generative model. Further grouping in space is performed in a topic quantization module 630 in which each topic 625 is represented using a small set of Gaussians that model individual activity regions 635 that appear in a small space and time frame. A mapping module 640 maps all words to the Gaussian event regions 635. A word-Gaussian map 645 is output by the mapping module 640.

The word-Gaussian map is used to quantize words 615 in a quantization module 650 which generates a mapping of words to a lower dimensional Gaussian space defined by a set of Gaussian event regions 655. Once the mapping is established and the words are quantised, we can model the activities/motion patterns seen in the live video. A generate motion patterns module 660 generates motion patterns in the form of tracks 665. Each video frame generates a set of Gaussians and these are used to build tracks 665 that model activities in the scene. This track generation process can generate a few hundred thousand tracks from an hour of video. A build statistical model module 670 uses a clustering process to generate a scene activity model 675 that models the system using fewer tracks. For each scene, which comprises frames from a few seconds of video, tracks are generated and clustered to get a reduced set of merged tracks. The frequency count of these tracks models the distribution of activity in the scene.

At the end of training, we normalize the frequency count of merged tracks to get a probability model of activity distribution in the video. This model is the Scene Activity model 675.

In the detection phase 604, abnormalities are detected in a test input video stream 606. The pipeline is similar to the track generation process in the training phase 602. The feature extraction module 610 extracts words 616 from the test input video stream 606. These words 616 are mapped to Gaussian event regions 656 by the word quantisation module 650. The generate motion patterns module 660 generates tracks 666 from the Gaussian event regions 656. These tracks 666 are referred to as test tracks.

Once a test track is generated, it is compared with all of the merged tracks in the scene activity model. These tracks are referred to as train tracks. A classifier block 680 compares the test tracks with all of the train tracks and finds the most similar track match. The classifier module 680 then determines whether the newly seen test track is abnormal or not based on a similarity value. Tracks that are found to be abnormal represent abnormal events occurring in the real world and such occurrences can be made to trigger alarms.

The training phase 602 and the detection phase 604 of an embodiment will now be described in more detail. The training phase 602 aims to extract the most prominent activities seen in the training video stream 605 and represent them using tracks 665 formed by a series of Gaussians. The Gaussians model the position and variance of different local instances of the activity. The feature extraction module 610 is used to extract basic features from the raw video which comprises a series of frames/images. The basic features or words 615 describe regions of activity in the scene in space and time. The basic features (words 615) are pixels/super-pixels with motion information. The feature extraction module 610 may be implemented with any feature extraction method that gives local position and motion information. Examples of possible feature extraction methods that may be used are block-matching, phase correlation, optical flow and the Lucas-Kanade tracker.

The words 615 are in (x; y; o) format, where x and y represent the position and o represents the direction of motion. Each word is represented as a 4D vector (x; y; dx; dy) where dx and dy represent the orthogonal components of the direction of motion o.

The direction of motion o is discretized. Figure 7 shows the discretization of the direction of motion. As shown in Figure 7, the direction of motion is discretized into 9 bins. Bin 0 represents no motion and bin 1 to bin 8 represent motion in the directions shown in Figure 7.
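One way to realize this 9-bin quantization is sketched below (Python; the minimum-magnitude cut-off and the exact bin layout of Figure 7 are assumptions):

import math

def direction_bin(dx, dy, min_magnitude=0.5, num_directions=8):
    # Bin 0 means no motion; bins 1..8 are quantized directions of motion.
    if math.hypot(dx, dy) < min_magnitude:
        return 0
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return 1 + int(round(angle / (2 * math.pi / num_directions))) % num_directions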

The probabilistic latent semantic analysis (PLSA) block 620 implements a generative model which specifies sampling rules on the sampling space of words. If each word is represented by w and each doc (group of frames) is represented by d, we intend to get a set of topics z that introduces a conditional independence between w and d. These topics should tend to group words that appear together.

The joint distribution of the 3 variables (w, z, d) can be written as:

P(w, z, d) = P(d) P(w \mid z) P(z \mid d), \qquad P(w, d) = P(d) \sum_{z=1}^{N_z} P(w \mid z) P(z \mid d)

where N_z is the total number of topics. The word distribution P(w|z) and the topic distribution P(z|d) are estimated iteratively using the maximum likelihood principle. Optimization is conducted using the expectation maximization (EM) algorithm. The EM procedure starts by randomly initializing the parameters. In the expectation step, the posterior distribution of the topic variable is calculated as:

P(z \mid d, w) = \frac{P(w \mid z) P(z \mid d)}{\sum_{z'=1}^{N_z} P(w \mid z') P(z' \mid d)}

In the maximization step, the model parameters are estimated as:

P(w \mid z) = \frac{\sum_{d} n(d, w) P(z \mid d, w)}{\sum_{w'} \sum_{d} n(d, w') P(z \mid d, w')}, \qquad P(z \mid d) = \frac{\sum_{w} n(d, w) P(z \mid d, w)}{\sum_{w} n(d, w)}

where n(d, w) is the number of occurrences of word w in document d. The word distribution P(w|z) tends to cluster words that occur at the same time. This is used as a basis for determining regions of activity in the scene in further steps.
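A compact sketch of these EM updates on a word-document count matrix (NumPy; the dimensions and iteration count are illustrative) is:

import numpy as np

def plsa(counts, num_topics, num_iters=50, seed=0):
    # counts: (num_words x num_docs) matrix of word counts n(d, w).
    # Returns P(w|z) of shape (num_words, num_topics) and P(z|d) of shape (num_topics, num_docs).
    rng = np.random.default_rng(seed)
    num_words, num_docs = counts.shape
    p_w_z = rng.random((num_words, num_topics)); p_w_z /= p_w_z.sum(axis=0)
    p_z_d = rng.random((num_topics, num_docs)); p_z_d /= p_z_d.sum(axis=0)
    for _ in range(num_iters):
        # E-step: P(z|d,w) proportional to P(w|z) P(z|d)
        post = p_w_z[:, :, None] * p_z_d[None, :, :]            # shape (words, topics, docs)
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        weighted = counts[:, None, :] * post
        p_w_z = weighted.sum(axis=2); p_w_z /= p_w_z.sum(axis=0) + 1e-12
        p_z_d = weighted.sum(axis=0); p_z_d /= p_z_d.sum(axis=0) + 1e-12
    return p_w_z, p_z_d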

PLSA ensures that the words in a topic are not highly scattered. However, it is possible that a topic models more than one spatial region or more than one direction of motion. There can also be multiple topics modelling the same region, adding redundant information. Further, we need a discrete representation of activity regions in the scene to build motion patterns in later steps. Therefore, the quantization block 630 is included in the processing to provide a discrete representation of the scene.

The topic quantization module 630 clusters topics. The clustering is performed in 2 stages, namely intra topic clustering and inter topic clustering, to get a discrete representation of activity regions in terms of Gaussians. The Gaussians N(\mu, \Sigma) are in 4D space (x, y, dx, dy). The mean \mu represents the average position of the region in (x, y) and the average direction of motion (o, split into orthogonal components dx, dy). The covariance \Sigma, which is a 4x4 matrix, represents the change in position/direction of words within the Gaussian and their relationships.

A topic can model multiple simultaneous activities occurring at different regions or having different directions of motion. Intra topic clustering is used to separate these individual activities within a topic. We intend to represent each topic using a small set of Gaussians that model individual activity regions.

Each topic z is associated with a probability density function (PDF) given by P(w|z), \forall w \in W. Each w represents a 4D point in (x, y, dx, dy) space.

A Gaussian Mixture Model (GMM) with a presumed maximum number of Gaussians K that can represent all significant activity regions is fitted to each topic by sampling points in 4D space from P(w|z) for each z. The K Gaussians are fitted to each topic using Expectation Maximization (EM). The GMM probability density function is given by:

P(w \mid \Theta) = \sum_{k=1}^{K} \alpha_k P_k(w \mid z_k, \mu_k, \Sigma_k)

where P_k(w | z_k, \mu_k, \Sigma_k) ~ N(\mu_k, \Sigma_k) are individual Gaussian components; z = (z_1, z_2, ..., z_K) are K dimensional latent indicator variables with only one of the z_k equal to 1 and the rest 0, representing which mixture component 1, 2, ..., K generated w; \alpha_k = p(z_k) are mixture weights that sum to 1; and \Theta = (\alpha_1 ... \alpha_K, \mu_1 ... \mu_K, \Sigma_1 ... \Sigma_K).

The EM algorithm starts with a random initialization for all (\mu_k, \Sigma_k) and proceeds as an iterative process switching between an expectation (E-step) and maximization (M-step) routine.

The E-step computes the uncertainty about which component k produced w_i, given (\mu_k, \Sigma_k):

r_{ik} = \frac{\alpha_k N(w_i \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \alpha_j N(w_i \mid \mu_j, \Sigma_j)}

for 1 \le k \le K components and 1 \le i \le N words. The M-step computes the mixture components for the given words w_i with the above responsibilities r_{ik}:

N_k = \sum_{i=1}^{N} r_{ik}, \qquad \alpha_k = \frac{N_k}{N}, \qquad \mu_k = \frac{1}{N_k} \sum_{i=1}^{N} r_{ik} w_i, \qquad \Sigma_k = \frac{1}{N_k} \sum_{i=1}^{N} r_{ik} (w_i - \mu_k)(w_i - \mu_k)^T

where N_k represents the number of words associated with component k. Each topic z is now represented by a K component GMM:

P(w \mid z) \approx \sum_{k=1}^{K} \alpha_k N(w \mid \mu_k, \Sigma_k)

where the components g_k ~ N(\mu_k, \Sigma_k).
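In practice the per-topic GMM fit can be done with an off-the-shelf implementation, as in the sketch below (scikit-learn is used purely for illustration; the description above defines its own EM procedure, and the sample count and K are assumptions):

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_topic_gmm(p_w_given_z, word_vectors, k=10, num_samples=5000, seed=0):
    # p_w_given_z: probability of each word under one topic z (sums to 1).
    # word_vectors: (num_words, 4) array of the (x, y, dx, dy) vector of each word.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(word_vectors), size=num_samples, p=p_w_given_z)
    gmm = GaussianMixture(n_components=k, covariance_type='full', random_state=seed)
    gmm.fit(word_vectors[idx])
    return gmm   # gmm.weights_, gmm.means_, gmm.covariances_ correspond to (alpha_k, mu_k, Sigma_k)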

Hierarchical clustering is used to reduce the Gaussians. For each topic, Gaussians with very low weight are rejected. Gaussians with very high covariance (distributed noise) or very low covariance (single pixels) are also rejected, to give a list of valid Gaussians representing each topic z, given by g_j \in G, 1 \le j \le M, M \le K.

The Gaussians g_j are clustered using hierarchical clustering based on the Kullback-Leibler (KL) divergence between Gaussians as the distance measure. This is carried out as follows:

1. Get proximity matrices (M x M) in position, M_xy, and in direction, M_dxy, between the g_j for each topic. Each GMM cluster G is sampled: for each Gaussian g_j in the GMM, n_j samples are drawn, with n_j = N w_j, where N is the total number of samples and w_j is the component weight. The KL distance between two sampled distributions G_1, G_2 is given by d_KL(G_1, G_2), and each element of the proximity matrix is

M_{xy}(p, q) = d_{KLXY}(G_p, G_q), \qquad d_{KLXY}(G_1, G_2) = \frac{1}{\pi} \arctan\big(\log_{10} d_{KL}(G_1, G_2)\big) + \frac{1}{2}

For valid G_1 and G_2, d_KL \in (0, +\infty); the d_KLXY function is used to reduce the range to (0, 1). Similarly, M_dxy is obtained using the (dx, dy) components.

2. For all (p, q) in M_xy and M_dxy:

if (M_xy(p, q) > xyThresh and M_dxy(p, q) > dxyThresh):

group the Gaussians/GMMs into one GMM (G'_p = w_p G_p + w_q G_q).
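The sampled KL distance and its squashed form could be estimated as in the following sketch (a Monte Carlo approximation on fitted GaussianMixture models; the squashing function shown is one simple choice mapping (0, +inf) into (0, 1), not necessarily the one used in this description):

import numpy as np

def kl_distance(gmm_1, gmm_2, num_samples=2000):
    # Monte Carlo estimate of D_KL(G1 || G2) between two fitted GaussianMixture models.
    samples, _ = gmm_1.sample(num_samples)
    return float(np.mean(gmm_1.score_samples(samples) - gmm_2.score_samples(samples)))

def kl_proximity(gmm_1, gmm_2):
    # Symmetrize the KL distance, clamp at zero, and squash it into (0, 1).
    d = max(0.0, 0.5 * (kl_distance(gmm_1, gmm_2) + kl_distance(gmm_2, gmm_1)))
    return d / (1.0 + d)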

The thresholds are set depending on the resolution that is intended. For scenes where cameras have a close-up view, lower thresholds are used, which results in representing large objects with fewer Gaussian components. For scenes where cameras are monitoring activity from a longer distance, higher resolution is required to isolate different activities; in such cases, higher thresholds are used to get more Gaussian components.

3. Repeat steps 1 and 2 until no more merges are possible in step 2.

4. Each topic is represented as a reduced set of GMMs G'_j. Each G'_j is re-sampled and a single Gaussian h_j is fitted to it.

Now, each topic is represented as a set of Gaussians h_j ~ N(\mu_j, \Sigma_j). Since there can also be multiple topics modelling the same region with a high degree of overlap, this adds redundant information. We use inter topic clustering to reduce this redundancy and to model the entire scene using a set of well distributed Gaussians. The Gaussians may be clustered by direction of motion as follows. The Gaussians are sorted by direction of motion. The direction of motion of all Gaussians generated in the previous stage is quantized: if |dx| and |dy| are both too small, the value is set to zero; the rest are rounded off to the nearest direction of motion shown in Figure 7. The Gaussians are grouped into bins. Within each bin, Gaussians that have a high percentage of overlap are merged. The decision to merge or not to merge two Gaussians is based on the ridge ratio r_val between them, which is calculated as follows:

For all α ∈ [0, 1] and β = 1 − α, the ridge line is given by:

r(α) = α·μ_1 + β·μ_2

e(α) = N(r(α); μ_1, Σ_1) + N(r(α); μ_2, Σ_2)

S_max = maxima(e(α)),   S_min = minima(e(α))

where S_max and S_min are the sets of all maxima and minima of the elevation plot function e(α). The ridge ratio is given by the ratio of the global minimum to the second-highest maximum. If it is low, the two Gaussians are similar and can be merged (i.e. the elevation plot is almost flat); otherwise the Gaussians are too distinct and cannot be merged.

Gaussians are merged by hierarchical clustering, starting with the Gaussian pair having the least ridge ratio. The merged Gaussian is obtained by sampling points from the two Gaussians and fitting a single Gaussian to the sampled data points.

The function of the mapping module 640 in an embodiment will now be described. If a video with resolution 640x480 is divided into super pixels of size 8x8 pixels, then each frame is made up of 80x60 super pixels. With 9 direction levels, a total of 80x60x9 = 43200 words are possible, which is a large number to use as basic units to build tracks.
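Returning to the merge step, a minimal sketch of the sample-and-refit Gaussian merge described above might look as follows; the sample count and the equal default weights are illustrative assumptions.

```python
import numpy as np

def merge_gaussians_by_sampling(mu1, cov1, mu2, cov2, w1=0.5, w2=0.5,
                                n_samples=2000, seed=0):
    """Merge two Gaussians by drawing samples from each (in proportion to
    their weights) and fitting a single Gaussian to the pooled samples."""
    rng = np.random.default_rng(seed)
    n1 = int(round(n_samples * w1 / (w1 + w2)))
    n2 = n_samples - n1
    samples = np.vstack([
        rng.multivariate_normal(mu1, cov1, size=n1),
        rng.multivariate_normal(mu2, cov2, size=n2),
    ])
    mu_m = samples.mean(axis=0)      # mean of the merged Gaussian
    cov_m = np.cov(samples.T)        # covariance of the merged Gaussian
    return mu_m, cov_m
```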

As described above, a quantized representation of the scene is generated using PLSA-GMM. The mapping module 640 maps the raw features (words) to the quantized space (Gaussians). This reduces the feature space to a few hundred Gaussians. However, words that were not seen during training may appear during testing, and such unseen words are unlikely to map well to the Gaussians generated by PLSA-GMM. A default Gaussian grid is therefore needed to cover the unseen space.

Figure 8a shows an example of a default Gaussian grid. The Gaussian grid 800 comprises a plurality of Gaussians 810 that span the entire frame. All Gaussians in the grid have the same covariance. Each grid is made up of 2 layers. The layers have a position offset of 1 standard deviation of an individual Gaussian, such that every pixel lies within 1 standard deviation of one of the Gaussians in the grid (the edges of the frame are an exception). 9 such grids are created, one for each quantized motion direction.

Figure 8b shows a further example of a default Gaussian grid. The Gaussian grid 850 is constructed such that it models the perspective in the scene. If the user can specify perspective information, the grid can be constructed with Gaussians having varying covariance. As shown in Figure 8b, Gaussians 860 representing regions close to the camera have a larger covariance than Gaussians 870 representing regions further from the camera.
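One possible construction of such a two-layer default grid is sketched below. The frame size, standard deviation and lattice spacing are illustrative assumptions made for the sketch; the description itself does not fix these values.

```python
import numpy as np

def build_default_grid(width=640, height=480, std=16.0):
    """Build a two-layer grid of Gaussians spanning the frame.

    Layer 1 is a regular lattice with spacing 2*std; layer 2 is the same
    lattice offset by one standard deviation in x and y, so that every pixel
    lies within one standard deviation of some grid Gaussian (apart from the
    frame edges).  All grid Gaussians share the same isotropic covariance.
    """
    cov = np.eye(2) * std ** 2
    gaussians = []
    for offset in (0.0, std):                       # the two layers
        xs = np.arange(offset + std, width, 2 * std)
        ys = np.arange(offset + std, height, 2 * std)
        for y in ys:
            for x in xs:
                gaussians.append((np.array([x, y]), cov))
    return gaussians

# One grid per quantized motion direction (e.g. 9 directions including "static").
default_grids = {d: build_default_grid() for d in range(9)}
```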

Figure 9 shows an example of words being mapped to Gaussians. All words are evaluated against each Gaussian in the training set, and if a word is within 2 standard deviations of the Gaussian, it is mapped to that Gaussian. As shown in Figure 9, words 920 which are evaluated to be within 2 standard deviations of a first Gaussian 910 are mapped to the first Gaussian 910. Similarly, words 940 which are evaluated to be within 2 standard deviations of a second Gaussian 930 are mapped to the second Gaussian 930. It is noted that the clustering of PLSA-GMMs in previous stages is expected to produce Gaussians that are well separated. Therefore, it is a reasonable approximation to map each word to only one Gaussian.

If a word does not lie within 2 standard deviations of any of the Gaussians from training, it is evaluated against the Gaussians in the default grid, and the word is mapped to the grid Gaussian that produces the highest probability of matching. It must be noted that the Gaussians in the default grid are not representative of any observed activity, so using them directly to generate motion patterns gives a poor representation of activities. The PLSA-GMM Gaussians, on the other hand, model actual activity regions and therefore capture features of the object such as size and scene perspective. This extra information is important for a proper representation of an activity in the scene.
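A minimal sketch of this word-to-Gaussian mapping is given below. Interpreting "within 2 standard deviations" as a Mahalanobis distance of at most 2, and taking the first matching trained Gaussian, are simplifying assumptions made for the sketch.

```python
import numpy as np

def mahalanobis(word_xy, mu, cov):
    """Mahalanobis distance of a word position from a Gaussian."""
    d = word_xy - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def map_word(word_xy, trained_gaussians, default_grid):
    """Map a word (here just its x, y position) to a single Gaussian.

    Words within 2 standard deviations of a trained (PLSA-GMM / track)
    Gaussian are mapped to that Gaussian; otherwise the word falls back to
    the default grid Gaussian with the highest matching likelihood.
    """
    for idx, (mu, cov) in enumerate(trained_gaussians):
        if mahalanobis(word_xy, mu, cov) <= 2.0:
            return ('trained', idx)

    # Fallback: most likely Gaussian in the default grid.
    def log_likelihood(mu, cov):
        d = word_xy - mu
        return -0.5 * (d @ np.linalg.inv(cov) @ d) - 0.5 * np.log(np.linalg.det(cov))

    best = max(range(len(default_grid)),
               key=lambda i: log_likelihood(*default_grid[i]))
    return ('default', best)
```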

An activity such as a vehicle moving or a pedestrian crossing a road produces a series of words. These words can be reduced to a lower-dimensional Gaussian space by the word quantization module 650 using the map generated by the mapping module 640. Thus each video frame generates a set of Gaussians that are used to build motion patterns (or tracks) that model activities in the scene. This process is carried out by the generate motion patterns module 660 as follows. A motion pattern spanning multiple frames can be broken down into transitions between subsets of frames. If a set of frames, for example 5 frames, is termed a doc, then the transition between two docs can be modelled as a transition between two Gaussians seen in these docs.

The validity of a transition in time and space from one Gaussian in a first doc to another Gaussian in a second doc is determined by a pair of validity functions d_v and o_v. The functions have a range (0, 1), with values close to 1 indicating valid transitions. The distance-based validity function d_v ensures that transitions to Gaussians that are physically closer have lower cost and transitions to Gaussians that are further away have much higher cost. This function is designed based on the degree of movement (in terms of pixel distances) possible between two consecutive docs in the scene. The angle-based validity function o_v ensures that only those transitions are allowed in which the average direction of motion of the Gaussians involved in the transition is close to the geometric angle of transition between them.

For G_i ~ N(μ_i, Σ_i) with μ_i = (x_i, y_i) and Σ_i given by the following formula,

The validity function based on distance, d_v, between two Gaussians G_1 ~ N(μ_1, Σ_1) and G_2 ~ N(μ_2, Σ_2) in (x, y, dx, dy) space can be based on any one of the following distance measures. It is noted that only the two distance-related dimensions x, y are used for the distance calculations. The Euclidean distance factor for a video with resolution W × H is given by:

The Mahalanobis distance between the location components is given by:

d_M(G_1, G_2) = sqrt( (μ_1 − μ_2)^T Σ_2^{-1} (μ_1 − μ_2) )

with the corresponding validity factor obtained by mapping d_M into the range (0, 1).

The KL distance is given by the symmetrised divergence:

d_KL(G_1, G_2) = d_KLG(G_1, G_2) + d_KLG(G_2, G_1), where

d_KLG(G_1, G_2) = ½ [ tr(Σ_2^{-1} Σ_1) + (μ_2 − μ_1)^T Σ_2^{-1} (μ_2 − μ_1) − 2 + ln( |Σ_2| / |Σ_1| ) ]

and the corresponding validity factor is d_v = 1 − KL, where KL denotes the KL distance reduced to the range (0, 1) as described above for d_KLXY.
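For illustration, sketches of two possible distance-based validity functions are given below. The normalisation by the frame diagonal and the squashing of the KL distance into (0, 1) are assumptions made for the sketch rather than the exact expressions of the embodiment.

```python
import numpy as np

def validity_distance(mu1, mu2, frame_w, frame_h):
    """Distance-based validity d_v in (0, 1): close Gaussians give values near
    1, distant ones values near 0.  Only the location components (x, y) are
    used.  Normalising by the frame diagonal is an assumption for this sketch."""
    dist = np.hypot(mu1[0] - mu2[0], mu1[1] - mu2[1])
    diag = np.hypot(frame_w, frame_h)
    return float(np.clip(1.0 - dist / diag, 0.0, 1.0))

def validity_kl(mu1, cov1, mu2, cov2):
    """Variant using a symmetrised KL divergence between the (x, y) marginals,
    squashed into (0, 1).  The squashing function here is illustrative."""
    def kl(m1, c1, m2, c2):
        inv2 = np.linalg.inv(c2)
        d = m2 - m1
        return 0.5 * (np.trace(inv2 @ c1) + d @ inv2 @ d - 2
                      + np.log(np.linalg.det(c2) / np.linalg.det(c1)))
    d_kl = kl(mu1[:2], cov1[:2, :2], mu2[:2], cov2[:2, :2]) \
         + kl(mu2[:2], cov2[:2, :2], mu1[:2], cov1[:2, :2])
    return float(1.0 - (2.0 / np.pi) * np.arctan(np.log10(1.0 + d_kl)))
```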

The validity function o_v based on the direction of motion may be calculated as follows. The function o_v between two Gaussians G_1 and G_2, with G_i ~ N(μ_i, Σ_i) where μ_i ≡ (x_i, y_i, dx_i, dy_i), is obtained using the direction measure μ_θi = (dx_i, dy_i) and the location measure μ_li = (x_i, y_i). If |dx_i| and |dy_i| are both below a small threshold for either G_1 or G_2 (i.e. one of the Gaussians is static), then o_v is set to 0; otherwise o_v is computed as set out below.

Note that doF is set to 0 because only objects in motion are being considered; transitions from, to, or between static Gaussians are therefore treated as invalid. Allowing such transitions would generate a lot of outliers, as transitions to regions with light intensity changes and other such static background activity would get connected to tracks.

o_v = doF_xy × doF_diff

where doF_diff decreases as the difference between the average optical flow direction of the two Gaussians, oF_avg = avg(θ_1, θ_2), and the geometric angle of transition α between them increases, with angles wrapped into a common range before the difference is taken, and with α given by:

α(μ_1, μ_2) = arctan2(y_2 − y_1, x_2 − x_1),   for μ_i = (x_i, y_i)

After all valid transitions between doc pairs are obtained, for a scene comprising a series of docs we link the transition units (tracklets) to form tracks in the generate motion patterns module 660.

Each tracklet is represented by a pair of Gaussians G_i – G_j. A track formed by linking tracklets is of the form G_1 – G_2 – G_3 ... G_n. The generate motion patterns module 660 may implement the following algorithm to build tracks (an illustrative sketch is given after step 3 below). 1. Add the 1st tracklet to the track-list.

2. For each new tracklet G_i – G_j, check if G_i is the last Gaussian in any of the tracks in the track-list. If so, add G_j at the end of that track. If not, add G_i – G_j as a new track to the track-list.

3. Remove short loops and jitter.
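A minimal sketch of this track-building step (without the loop and jitter removal of step 3) is given below; Gaussians are represented simply by their indices.

```python
def build_tracks(tracklets):
    """Link tracklets (Gi, Gj) into tracks [G1, G2, ..., Gn].

    A new tracklet (Gi, Gj) extends an existing track if Gi is the last
    Gaussian of that track; otherwise it starts a new track.  (Removal of
    short loops and jitter, step 3 above, is omitted from this sketch.)
    """
    track_list = []
    for gi, gj in tracklets:
        for track in track_list:
            if track[-1] == gi:
                track.append(gj)
                break
        else:
            track_list.append([gi, gj])
    return track_list

# Example: tracklets expressed as pairs of Gaussian indices.
print(build_tracks([(1, 2), (2, 3), (5, 6), (3, 4)]))
# -> [[1, 2, 3, 4], [5, 6]]
```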

The track generation process can generate a few hundred thousand tracks from an hour of video. A running clustering process is therefore needed to model the system using far fewer tracks. For each scene (the frames from a few seconds of video), tracks are generated and clustered to obtain a reduced set of tracks L_r. The frequency counts of these tracks model the distribution of activity in the scene. The learning process runs in real time with a delay of 1 scene.

The build statistical model module 670 may implement the following algorithm.

Tracks generated for one scene are received from the generate motion patterns module 660. Each new track is compared with the tracks in L_r. If it matches, the frequency count of the matching track in L_r is incremented. Otherwise, the new track is stored in a temporary track-list L_t of maximum length N_t: if the new track is the same as one of the tracks in L_t, its priority counter is increased by 1; otherwise the track is added to L_t. If L_t is full, the track with the least priority in L_t is replaced with the new track, and the priority value of the new track is set to the priority of the least-priority track plus 1. This ensures that new tracks enter the list, but their potential to stay in the list depends on how frequently they appear. Tracks that appear frequently over a longer time frame move to the top of the list; tracks that appear in short bursts and then no longer appear move lower and may get replaced. Every t scenes, the tracks with the highest similarity in L_r are merged and the top tracks from L_t are added to L_r in the vacant slots.
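An illustrative sketch of one update step of this running model is given below. The predicate `same_track`, the list representations and the maximum list length are assumptions made for the sketch; the periodic merge of similar L_r tracks and promotion of top L_t tracks every t scenes is omitted.

```python
def update_statistical_model(scene_tracks, Lr, Lt, same_track, max_Lt=50):
    """One update step of the running track model.

    Lr: list of [track, frequency] entries (the reduced track set).
    Lt: list of [track, priority] entries (temporary candidate list).
    same_track(a, b): predicate deciding whether two tracks match.
    """
    for track in scene_tracks:
        # Match against the reduced set first.
        match = next((e for e in Lr if same_track(track, e[0])), None)
        if match is not None:
            match[1] += 1
            continue
        # Otherwise handle the temporary candidate list.
        cand = next((e for e in Lt if same_track(track, e[0])), None)
        if cand is not None:
            cand[1] += 1
        elif len(Lt) < max_Lt:
            Lt.append([track, 1])
        else:
            # Replace the lowest-priority candidate; the newcomer enters just
            # above it so frequently recurring tracks can still climb the list.
            worst = min(Lt, key=lambda e: e[1])
            Lt.remove(worst)
            Lt.append([track, worst[1] + 1])
    return Lr, Lt
```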

The tracks are compared using a similarity function described below. If the most similar tracks in L_r have a similarity value greater than a threshold, then they are not merged and instead the top tracks are added directly to L_r. A track merge may also be prevented when the covariance of any Gaussian in the merged track exceeds a maximum limit. All the above steps are repeated until the end of the training video to obtain a final set of tracks that model the activities in the scene. At the end of training, the frequency counts of the merged tracks in L_r are normalized to obtain a probability model of the activity distribution in the video.

The track comparison may be implemented as follows. A track T = t_1, t_2 ... t_y is made up of a series of Gaussians t_i. The similarity measure s_12 between two tracks T_1 and T_2 is obtained by using a modified edit distance method, with custom-built cost functions that depend on the properties of the Gaussians being compared. The edit distance gives the cost of transforming track T_1 into T_2. Three operations, insert, delete and substitute, are possible. The cost for each operation is given by:

D(i, j) = min( D(i−1, j) + cd(t_1i, t_2j),   D(i, j−1) + cd(t_1i, t_2j),   D(i−1, j−1) + cd(t_1i, t_2j) )

where cd(t_1i, t_2j) is given by any of the distance measures (KL distance, Euclidean distance, Mahalanobis distance) described above. Using a backtrace algorithm, the optimal alignment of operations A_12 for the transformation is obtained. Once the alignment A_12 is obtained, the similarity measure between tracks T_1 = t_11, t_12 ... t_1y and T_2 = t_21, t_22 ... t_2z is calculated using the distance cost cd and the angle cost ca by:

s_12 = 1 − (1/M) Σ_{m=1..M} max( cd(t_1,f1(m), t_2,f2(m)), ca(t_1,f1(m), t_2,f2(m)) )

where M is the length of A_12, and

(f1(m+1), f2(m+1)) = (f1(m), f2(m)+1)    if A_12(m) = insert
                   = (f1(m)+1, f2(m))    if A_12(m) = delete
                   = (f1(m)+1, f2(m)+1)  if A_12(m) = substitute

with f1(0) = 0 and f2(0) = 0.

The angle cost function ca(t_1i, t_2j) is obtained by a combination of the average optical flow angle of the Gaussians and the geometric angle of transition between the current and previous Gaussians in the track. For Gaussians t_1i and t_2j, with average optical flow angles θ_1 and θ_2:

the angle cost combines a term doF_diff, based on the difference between the average optical flow angles θ_1 and θ_2, with a term based on the difference between the flow direction and the geometric angle of transition α to the previous Gaussian in the track, where α is given by:

α(μ_1, μ_2) = arctan2(y_2 − y_1, x_2 − x_1),   for μ_i = (x_i, y_i)
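A simplified sketch of the edit-distance-based track comparison is given below. For brevity it uses a single per-pair cost function standing in for max(cd, ca), sets the boundary costs to 1 per unmatched Gaussian, and approximates the alignment length M by max(len(T1), len(T2)) rather than performing an explicit backtrace; these are assumptions made for the sketch, not the exact formulation above.

```python
def track_similarity(T1, T2, cost):
    """Similarity between tracks T1, T2 via a weighted edit distance.

    T1, T2: sequences of Gaussians; cost(g1, g2): per-pair cost in [0, 1]
    standing in for max(cd, ca) in the text.  Returns a value in [0, 1].
    """
    n, m = len(T1), len(T2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i                      # boundary: cost 1 per unmatched Gaussian
    for j in range(1, m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(T1[i - 1], T2[j - 1])
            D[i][j] = min(D[i - 1][j] + c,        # delete
                          D[i][j - 1] + c,        # insert
                          D[i - 1][j - 1] + c)    # substitute
    # Normalise the total cost; an explicit backtrace would recover the true
    # alignment length M, approximated here by max(n, m).
    return 1.0 - D[n][m] / max(n, m)
```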

To merge tracks T_1 and T_2, the optimal alignment of operations A_12 between them is obtained as described above. The merged track T_m = t_m1, ... t_mM, with length M equal to the length of A_12, is obtained using the function:

t_m(m) = merge(t_1k, t_2k)         if A_12(m) = substitute
       = merge(t_2k, t_m(m−1))    if A_12(m) = insert and k ≠ 1
       = merge(t_1k, t_m(m−1))    if A_12(m) = delete and k ≠ 1
       = merge(t_1k, t_2k)        if A_12(m) = insert and k = 1
       = merge(t_1k, t_2k)        if A_12(m) = delete and k = 1

For two Gaussians t_1k ~ N(μ_1, Σ_1) and t_2k ~ N(μ_2, Σ_2), the merged Gaussian is given by t_mk ~ N(μ_m, Σ_m), where μ_m and Σ_m are the mean and covariance of the combined distribution (equivalently, obtained by sampling the two Gaussians and fitting a single Gaussian to the pooled samples, as described above).

As described above, the training process produces a reduced set of merged tracks. Words are remapped to the Gaussians corresponding to these tracks. The mapping process is the same as described above with reference to the mapping module 640, but with the track-Gaussians being used in addition to the PLSA-GMM Gaussians. This train-map, along with the train-tracks, is used during the detection of abnormalities.

In an embodiment, an on-line training procedure that adapts dynamically to new activities in the scene is implemented. Figure 10 shows an implementation of adaptive training which may be used with embodiments of the present invention. The adaptive training system 1000 shown in Figure 10 is configured to adapt the statistical model using the test video stream 606. The system 1000 includes a feature extraction module 610, a quantization module 650 and a generate motion patterns module 660 which function in the same manner as the corresponding modules described above in relation to Figure 6. As described above, the scene track model comprises a set of learned train-tracks with a probability distribution modelling their frequency of occurrence. The system 1000 shown in Figure 10 includes an additional compare & build statistical model module 1090 which compares each new track with the train-tracks that are part of the temporary scene track model. If it is significantly different, the new track is added to the list of train-tracks.

Additionally, as the train-track list and the scene track model get modified continuously, the mapping of words to the new Gaussians being formed needs to be modified as well. This is performed by a mapping module 1040 in the feedback loop. Thus with this system, learning can be accomplished fully on-line using a live video stream.

Once the scene track model is learnt, the system can be used to detect abnormal events from a live video stream. Detection of abnormality runs in real-time with 1 scene delay. The track generation process is similar to the process described above in relation to Figure 6. The train-map is used for track generation.

Each track generated during detection (test-track) is compared with the reference set of all train-tracks. Train-tracks with very low probability in the scene track model are excluded from the reference set.

The similarity measure s_12 between a test-track T_1 and a train-track T_2 is obtained by using a modified edit distance method similar to the method described for training, but with asymmetric costs for the delete, insert and substitute operations.

The cost for each insert, delete and substitute operation is again based on the distance cd(t_1i, t_2j) between the Gaussians being compared, where cd is given by a distance measure such as the KL distance, Euclidean distance or Mahalanobis distance described above; the asymmetric form of the costs is given below.

Using a backtrace algorithm, the optimal alignment of operations A_12 for the transformation is obtained. Once the alignment A_12 is obtained, the similarity measure between tracks T_1 = t_11, t_12 ... t_1y and T_2 = t_21, t_22 ... t_2z is calculated using a distance cost cd and an angle cost ca by:

s_12 = 1 − (1/M) Σ_{m=1..M} max( cd(t_1,f1(m), t_2,f2(m)), ca(t_1,f1(m), t_2,f2(m)) )

where M is the length of A_12 and

(f1(m+1), f2(m+1)) = (f1(m), f2(m)+1)    if A_12(m) = insert
                   = (f1(m)+1, f2(m))    if A_12(m) = delete
                   = (f1(m)+1, f2(m)+1)  if A_12(m) = substitute

with f1(0) = 0 and f2(0) = 0.

The cost functions cd and ca are asymmetric, as described below:

cost(m) = cd(t_1i, t_2j) (or ca(t_1i, t_2j))   if A_12(m) = delete or substitute
        = 0                                     if A_12(m) = insert

This asymmetry ensures that a test-track that matches completely with a portion of a train-track is a good match and incurs no cost. There is no penalty for insert because a test-track is being converted into a train-track using the edit distance method. On the other hand, a test-track that matches only a portion of the train-track must have a high cost.

In an embodiment, the classifier module 680 operates as follows. Each test-track is compared with each of the train-tracks to find the most similar track. A binary classifier implemented in the classifier module is used to determine whether a test-track is Abnormal or not. The cost of matching the test-track with its best-match train-track is given by c_12 = 1 − s_12

where s_12 is the similarity measure between the test-track and the most similar train-track. If c_12 is greater than the Abnormality threshold, the test-track is Abnormal. The Abnormality threshold can be set by looking at the similarity values obtained during training, for example to a value close to the highest matching cost seen during training.

The systems described above may be implemented as an independent abnormality detection system, for example a system that raises an alarm when an abnormal event occurs in the scene. Once the training phase 602 is complete and a scene track model is available, the detection phase 604 for detecting abnormalities can run in real time. Tracks generated in real time during the detection phase are compared with the train-tracks. Abnormal activities in the video are determined by checking how dissimilar a test-track is with respect to the train-tracks. If the dissimilarity is significantly high, that track, and in turn the activity it models, is abnormal.
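An illustrative sketch of the binary classification step is given below; the `similarity` callable stands for the asymmetric similarity s_12 described above, and the threshold value is assumed to have been chosen from the matching costs observed during training.

```python
def classify_test_track(test_track, train_tracks, similarity, abnormality_threshold):
    """Binary abnormality decision for one test-track.

    similarity(test, train): asymmetric similarity s12 in [0, 1].
    abnormality_threshold: typically set near the highest matching cost
    (1 - similarity) observed for the training tracks themselves.
    """
    best_s12 = max(similarity(test_track, t) for t in train_tracks)
    c12 = 1.0 - best_s12                   # cost of matching the best train-track
    return c12 > abnormality_threshold     # True -> Abnormal
```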

As a real-world activity can end up being modelled by more than one track, we collect all abnormal tracks in the scene and cluster them to get an estimate of the actual number of abnormal events in the scene. Alarms are raised and logged accordingly.

In an embodiment, the systems described above are used in combination with a rule based video analysis system. Such an implementation can reduce the number of false alarms in the rule based system.

Figure 11 shows a video analysis system that combines a rule based analysis with a statistical analysis. The system 1100 has two pipelines of operation, one for the track generation process and one for the rule based abnormality detector process. The pipeline for the track generation process comprises a feature extraction module 610, a quantization module 650 and a generate motion patterns module 660 which function as described above with reference to Figure 6. The track generation process generates tracks 665 from an input video stream 606.

The input video stream 606 is also analysed by a rule based classifier 1110. The rule based detection process runs in parallel using the same input video stream 606. The rule based classifier 1110 detects events satisfying pre-defined rules, for example cars crossing a line, and triggers an alarm whenever such an event occurs. However, these alarms can be false positives caused by other objects such as headlights (activities that trigger intensity changes and generate motion patterns in the scene).

As shown in Figure 11 , the rule based classifier 1110 outputs indications of rule based abnormal events 1115. The indications of rule based abnormal events 1115 and the tracks 665 are input into a comparator module 1120 which analyses the two inputs using the scene activity model 675. The comparator module 1120 outputs an abnormality measure and correlation between track and rule systems 1125.

The comparator module 1120 is used to correlate the rule based abnormal event indications 1115, with the track 665 generated from the track generation process. A goodness of fit measure is used to get a quantitative representation of how well the rule based event maps to the tracks.

Frequently occurring tracks corresponding to false trigger events can be marked by the user. If the rule based event shows a good fit to these marked tracks, then alarms produced by such events can be suppressed, thus improving the accuracy of the system for detection of events matching rules.
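An illustrative sketch of this suppression logic is given below; the `goodness_of_fit` measure and the fit threshold are placeholders, since the description does not specify the exact measure.

```python
def filter_rule_alarms(rule_events, marked_false_tracks, goodness_of_fit,
                       fit_threshold=0.8):
    """Suppress rule-based alarms that fit tracks the user marked as false triggers.

    rule_events: events produced by the rule based classifier.
    goodness_of_fit(event, track): measure in [0, 1] of how well the rule based
    event maps onto a track.
    """
    confirmed = []
    for event in rule_events:
        fits_false_trigger = any(
            goodness_of_fit(event, t) >= fit_threshold for t in marked_false_tracks
        )
        if not fits_false_trigger:
            confirmed.append(event)     # raise the alarm for this event
    return confirmed
```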

A rule based classifier and comparator may also be added to the embodiment shown in Figure 2.

Whilst the foregoing description has described exemplary embodiments, it will be understood by those skilled in the art that many variations of the embodiment can be made within the scope and spirit of the present invention.