Title:
METHOD AND SYSTEM FOR CLASSIFYING A VIDEO
Document Type and Number:
WIPO Patent Application WO/2007/077965
Kind Code:
A1
Abstract:
A method classifies segments of a video using an audio signal of the video and a set of classes. Selected classes of the set are combined as a subset of important classes, the subset of important classes being important for a specific highlighting task, and the remaining classes of the set are combined as a subset of other classes. The subset of important classes and the subset of other classes are trained jointly with training audio data to form a task specific classifier. Then, the audio signal can be classified using the task specific classifier as either important or other to identify highlights in the video corresponding to the specific highlighting task. The classified audio signal can be used to segment and summarize the video.

Inventors:
RADHAKRISHNAN REGUNATHAN (US)
SIRACUSA MICHAEL (US)
DIVAKARAN AJAY (US)
OTSUKA ISAO (JP)
Application Number:
PCT/JP2006/326379
Publication Date:
July 12, 2007
Filing Date:
December 27, 2006
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
RADHAKRISHNAN REGUNATHAN (US)
SIRACUSA MICHAEL (US)
DIVAKARAN AJAY (US)
OTSUKA ISAO (JP)
International Classes:
G06F17/30; G10L15/10; H04H60/31; H04N5/91
Foreign References:
JP 3475317 B2, 2003-12-08
JP 2004258659 A, 2004-09-16
Other References:
XIONG Z. ET AL.: "Effective and Efficient Sports Highlights Extraction Using the Minimum Description Length Criterion in Selecting GMM Structures", 2004 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME '04), vol. 3, 27 June 2004 (2004-06-27), pages 1947 - 1950, XP003015319
FARIN D. ET AL.: "Robust clustering-based video-summarization with integration of domain-knowledge", 2002 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME '02), vol. 1, 26 August 2002 (2002-08-26), pages 89 - 92, XP010604313
OTSUKA I. ET AL.: "An enhanced video summarization system using audio features for a personal video recorder", 2006 DIGEST OF TECHNICAL PAPERS OF THE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE '06), 7 January 2006 (2006-01-07), pages 297 - 298, XP010896622
OTSUKA I. ET AL.: "An enhanced video summarization system using audio features for a personal video recorder", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, vol. 52, no. 1, February 2006 (2006-02-01), pages 168 - 172, XP003015320
See also references of EP 1917660A4
Attorney, Agent or Firm:
SOGA, Michiteru et al. (8th Floor Kokusai Building, 1-1, Marunouchi 3-chome, Chiyoda-ku Tokyo 05, JP)
Claims:

CLAIMS

1. A method for classifying a video, comprising the steps of:
defining a set of classes for classifying an audio signal of a video;
combining selected classes of the set as a subset of important classes, the subset of important classes being important for a specific highlighting task;
combining the remaining classes of the set as a subset of other classes;
training jointly the subset of important classes and the subset of other classes with training audio data to form a task specific classifier; and
classifying the audio signal using the task specific classifier as either important or other to identify highlights in the video corresponding to the specific highlighting task.

2. The method of claim 1, further comprising: segmenting the video according to the classified audio signal into important segments and other segments; and combining the important segments into a summary of the video.

3. The method of claim 1, further comprising: partitioning the audio signal into frames; extracting audio features from each frame; and classifying each frame according to the audio features as either an important frame or an other frame.

4. The method of claim 3, in which the audio features are modified discrete cosine transforms.

5. The method of claim 1, in which the video is of a sporting event, and the specific highlighting task is identifying highlights in the video, and the set of classes includes a mixture of excited speech and cheering, applause, cheering, normal speech, and music classes, and the subset of important classes includes the mixture of excited speech and cheering, and the subset of other classes includes applause, cheering, normal speech, and music.

6. The method of claim 1, further comprising: representing the subset of important classes with a first Gaussian mixture model; and representing the subset of other classes with a second Gaussian mixture model.

7. The method of claim 1, in which the training jointly uses K-fold cross-validation.

8. The method of claim 1, in which the training jointly optimizes an estimate of classification error.

9. The method of claim 1, in which the classifying assigns labels, and further comprising: determining importance levels of the labels according to the specific highlighting task.

10. The method of claim 6, in which a number C of the subsets of classes is 2, and there are N_train samples in a vector x of the training audio data, and each sample x_i has an associated class label y_i that takes on values 1 to C, and the task specific classifier has a form:

f(x; m) = arg max_y p(x | y, m_y, θ_y),

where m = [m_1, ..., m_C]^T is the number of mixture components for each Gaussian mixture model and θ_i is the parameters associated with class i, i = {1, 2}.

11. The method of claim 10, in which the training audio data includes a validation set with N_test samples and associated labels (x_i, y_i), and an empirical test error on the validation set for a particular m is:

TestErr(m) = (1/N_test) Σ_i δ(y_i ≠ f(x_i; m)),

where δ is 1 when y_i ≠ f(x_i; m), and 0 otherwise.

12. The method of claim 11, in which an optimum number of mixture components m̂ is selected according to:

m̂ = arg min_m TestErr(m).

13. A system for classifying a video, comprising:
a memory configured to store a set of classes for classifying an audio signal of a video;
means for combining selected classes of the set as a subset of important classes, the subset of important classes being important for a specific highlighting task;
means for combining the remaining classes of the set as a subset of other classes;
means for training jointly the subset of important classes and the subset of other classes with training audio data to form a task specific classifier; and
means for classifying the audio signal using the task specific classifier as either important or other to identify highlights in the video corresponding to the specific highlighting task.

Description:

DESCRIPTION

METHOD AND SYSTEM FOR CLASSIFYING A VIDEO

TECHNICAL FIELD

This invention relates generally to classifying video segments, and more particularly to classifying video segments according to audio signals.

BACKGROUND ART

Segmenting scripted or unscripted video content is a key task in video retrieval and browsing applications. A video can be segmented by identifying highlights. A highlight is any portion of the video that contains a key or remarkable event. Because the highlights capture the essence of the video, highlight segments can provide a good summary of the video. For example, in a video of a sporting event, a summary would include scoring events and exciting plays.

Figure 1 shows one typical prior art audio classification method 100, see Ziyou Xiong, Regunathan Radhakrishnan, Ajay Divakaran and Thomas S. Huang, "Effective and Efficient Sports Highlights Extraction Using the Minimum Description Length Criterion in Selecting GMM Structures," Intl. Conf. on Multimedia and Expo, June 2004; and U.S. Patent Application No. 10/922,781, "Feature Identification of Events in Multimedia," filed on August 20, 2004 by Radhakrishnan et al., both incorporated herein by reference.

An audio signal 101 is the input. Features 111 are extracted 110 from frames 102 of the audio signal 101. The features 111 can be in the form of modified discrete cosine transforms (MDCTs).

As also shown in Figure 2, the features 111 are classified as labels 121 by a generic multi-way classifier 200. The generic multi-way classifier 200 has a general set of trained audio classes 210, e.g., applause, cheering, music, normal speech, and excited speech. Each audio class is modeled by a Gaussian mixture model (GMM). The parameters of the GMMs are determined from features extracted from training data 211.

The features 111 of the frames 102 are classified by determining, for each class, a likelihood that the features 111 correspond to the GMM for that class, and comparing 220 the likelihoods. The class with the maximum likelihood is selected as the label 121 of a frame of features.

In the generic classifier 200, each class is trained separately. The number m of Gaussian mixture components of each model is based on minimum description length (MDL) criteria. The MDL criteria are commonly used when training generative models. The MDL criteria for input training data 211 can have a form:

MDL(m) = −log p(data | θ, m) − log p(θ | m), (1)

where m indexes the mixture components of a particular model with parameters θ, and p is the likelihood or probability.

The first term of Equation (1) is the log likelihood of the training data under an m-component mixture model. This can also be considered as an average code length of the data with respect to the m-component model. The second term can be interpreted as an average code length for the model parameters θ. Using these two terms, the MDL criteria balance identifying a particular model that most likely describes the training data against the number of parameters required to describe that model.

A search is made over a range of values for m, e.g., a range between 1 and 40. For each value m, a value θ̂ is determined using an expectation maximization (EM) optimization process that maximizes the data likelihood term, and the MDL score is calculated accordingly. The value m with the minimum MDL score is selected. Using the MDL to train the GMMs of the classes 210 comes with an implicit assumption that selecting a good generative GMM for each audio class separately yields better general classification performance.
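As a minimal sketch, assuming Python with scikit-learn (neither is named in the patent), the per-class search can be written as follows; the BIC score stands in here for the MDL criteria of Equation (1), and the class data and search range are purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_components_per_class(X, max_components=40):
    """Separately pick, for one audio class, the mixture size with the
    lowest penalized-likelihood score (BIC here, in the spirit of MDL)."""
    best_m, best_score = 1, np.inf
    for m in range(1, max_components + 1):
        # EM fit maximizes the data-likelihood term of Equation (1).
        gmm = GaussianMixture(n_components=m, covariance_type='diag',
                              random_state=0).fit(X)
        score = gmm.bic(X)  # likelihood term plus model-complexity penalty
        if score < best_score:
            best_m, best_score = m, score
    return best_m

# Illustrative use: placeholder MDCT-like feature vectors for one class.
X_applause = np.random.randn(500, 12)
m_applause = select_components_per_class(X_applause)
```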

The determination 130 of the importance levels 131 is dependent on a task 140 or application. For example, the importance levels correspond to a percentage of frames that are labeled as important for a particular summarization task. In a sports highlighting task, the important classes can be excited speech or cheering. In a concert highlighting task, the important class can be music. By setting thresholds on the importance levels, different segmentations and summarizations can be obtained for the video content.
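A minimal sketch of such thresholding, assuming per-frame binary labels and an illustrative sliding-window size, is:

```python
import numpy as np

def importance_levels(labels, window=100):
    """Importance level per frame: the fraction of frames labeled
    important (1 vs. 0) within a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(labels, kernel, mode='same')

def highlight_segments(levels, threshold=0.5):
    """Return [start, end) frame ranges whose level exceeds the threshold."""
    flags = np.concatenate(([0], (levels > threshold).astype(int), [0]))
    starts = np.flatnonzero(np.diff(flags) == 1)
    ends = np.flatnonzero(np.diff(flags) == -1)
    return list(zip(starts, ends))
```

Lowering the threshold yields longer summaries; raising it keeps only the most intense segments.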

By selecting an appropriate set of classes 210 and a comparable generic multi-way classifier 200, only the determination 130 of the importance levels 131 needs to depend on the task 140. Thus, different tasks can be associated with the classifier. This simplifies the implementation to work with a single classifier.

DISCLOSURE OF INVENTION

The embodiments of the invention provide a method for classifying an audio signal of an unscripted video as labels. The labels can then be used to detect highlights in the video, and to construct a summary video of just the highlight segments.

The classifier uses Gaussian mixture models (GMMs) to detect audio frames representing important audio classes. The highlights are extracted based on the number of occurrences of a single audio class or a mixture of audio classes, depending on a specific task.

For example, a highlighting task for a video of a sporting event depends on a presence of excited speech of the commentator and the cheering of the audience, whereas extracting concert highlights would depend on the presence of music.

Instead of using a single generic audio classifier for all tasks, the embodiments of the invention use a task dependent audio classifier. In addition, the number of mixture components used for the GMMs in our task dependent classifier is determined using a cross-validation (CV) error during training, rather than minimum description length (MDL) criteria as in the prior art.

This improves the accuracy of the classifier, and reduces the time required to perform the classification.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of a prior art classification method;

Figure 2 is a block diagram of a prior art generic multi-way classifier;

Figure 3 is a block diagram of a classification method according to an embodiment of the invention;

Figure 4 is a block diagram of a task specific binary classifier;

Figure 5 is a block diagram of multiple task specific classifiers for corresponding tasks;

Figure 6A compares models of various classifiers;

Figure 6B compares models of various classifiers;

Figure 6C compares models of various classifiers;

Figure 7A compares mixture components of generic and task specific classifiers;

Figure 7B compares mixture components of generic and task specific classifiers; and

Figure 8 is a graph of classification accuracy for the classifier according to an embodiment of the invention.

BEST MODE FOR CARRYING OUT THE INVENTION

Figure 3 shows a method for classifying 400 an audio signal 301 of a video 303 as labels 321 for a specific task 350 according to an embodiment of the invention. The labels 321 can then be used to identify highlights in the video. The highlights can be segmented 340 to generate a summary 304 of the video that only includes highlights.

The audio signal 301 of the video 303 is the input. Features 311 are extracted 310 from frames 302 of the audio signal 301. The features 311 can be in the form of modified discrete cosine transforms (MDCTs).

It should be noted that other audio features can also be classified, e.g., Mel frequency cepstral coefficients, discrete Fourier transforms, etc.
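As a minimal sketch, assuming Python with the librosa library (an assumption; the patent names no toolkit), per-frame features can be extracted as follows. MFCCs, one of the alternatives mentioned above, are shown because MDCT coefficients are normally taken directly from the compressed audio stream; the frame and hop sizes are illustrative.

```python
import librosa

def extract_features(audio_path, n_mfcc=13):
    y, sr = librosa.load(audio_path, sr=None)  # decode the audio signal
    # One n_mfcc-dimensional feature vector per analysis frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=2048, hop_length=512)
    return mfcc.T  # shape: (frames, n_mfcc)
```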

As also shown in Figure 4, the features 311 are classified by assigning labels 321 using a task specific, binary classifier 400. The features 311 of the frames 302 are classified by determining, for each class, a likelihood that the features 311 correspond to the GMM for that class, and comparing 420 the likelihoods. The class with the maximum likelihood is selected as the label 321 of a frame of features.

The task specific classifier 400 includes a set of trained classes 410. The classes can be stored in a memory of the classifier. A subset of the classes that are considered important for identifying highlights are combined as a subset of important classes 411. The remaining classes are combined as a subset of other classes 412. The subset of important classes and the subset of other classes are jointly trained with training data as described below.

For example, the subset of important classes 411 includes the mixture of excited speech of the commentator and cheering of the audience. By excited speech of the commentator, we mean the distinctive type of loud, high-pitched speech that is typically used by sports announcers and commentators when goals are scored in a sporting event. The cheering is usually in the form of a lot of noise. The subset of other classes 412 includes the applause, music, and normal speech classes. It should be understood that the subset of important classes can be a combination of multiple classes, e.g., excited speech and spontaneous cheering and applause.

In any case, for the purposes of training and classifying there are only two subsets of classes: important and other. The task specific classifier can be characterized as a binary classifier, even though each of the subsets can include multiple classes. As an advantage, a binary classifier is usually more accurate than a multi-way classifier, and takes less time to classify.
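A minimal sketch of such a binary classifier, assuming Python with scikit-learn and illustrative mixture sizes, is:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_binary_classifier(X_important, X_other, m1=8, m2=16):
    """One GMM for the subset of important classes (size m1) and one
    for the subset of other classes (size m2)."""
    gmm_imp = GaussianMixture(n_components=m1, covariance_type='diag',
                              random_state=0).fit(X_important)
    gmm_oth = GaussianMixture(n_components=m2, covariance_type='diag',
                              random_state=0).fit(X_other)
    return gmm_imp, gmm_oth

def classify_frames(gmm_imp, gmm_oth, X):
    """Label each feature frame 1 (important) or 0 (other) by comparing
    per-frame log-likelihoods under the two models."""
    return (gmm_imp.score_samples(X) > gmm_oth.score_samples(X)).astype(int)
```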

The determination 330 of the importance levels 331 is also dependent on the specific task 350 or application. For example, the importance levels correspond to a percentage of frames that are labeled as important for a particular summarization task. For a sports highlighting task, the subset of important classes includes a mixture of excited speech and cheering classes. For a concert highlighting task, the important classes would at least include the music class, and perhaps applause.

Figure 5 shows the general concept for the binary audio classifiers according to the embodiments of the invention. Each one of specific tasks 501-503 is associated with a corresponding one of the task specific classifiers 511-513. The main difference with the prior art is that instead of a generic, multi-way audio classifier, we now insert a classifier depending on a specific task. This allows users to construct small and efficient classifiers optimized for different types of highlights in a video.

As shown in Figure 4 for the particular type of highlighting task 350, we use one Gaussian mixture model (GMM) for the subset of important classes, and one GMM for the subset of other classes. The subset of important classes is trained using training data for the important classes. The subset of other classes is trained using training data from all of the other classes.

Figure 4 shows the task specific binary classifier 400 designed for sports highlights. In this classifier, the subset of important classes includes a mixture of excited speech and cheering, and the subset of other classes models all other audio components.

The motivation for constructing the task specific classifier 400 is that we can then reduce the computational complexity of the classification problem, and increase the accuracy of detecting the important classes.

Although there can be multiple classes, by combining the classes into two subsets, we effectively achieve a binary classifier. The binary classification requires fewer computations than a generic multi-way classifier that has to distinguish between a larger set of generic audio classes.

However, we also consider how this classifier is trained, keeping in mind that the classifier uses subsets of classes. If we were to follow the same MDL-based training procedure of the prior art, then we would most likely learn the same mixture components for the various classes. That is, when training the subset of other classes for the task specific classifier using MDL, it is likely that the number of mixture components learned will be very close to the sum of the number of components used for the applause, speech, and music classes shown in Figure 2. This is because the MDL training procedure is concerned with producing a good generative GMM from the training data 211.

If redundancy among the subset of other classes is small, then the trained model is simply a combination of the models for all the classes the model represents. The MDL criteria are used to help find good generative models for the training data 211, but do not directly optimize what we are ultimately concerned with, namely classification performance.

We would like to select the number and parameters of mixture components for each GMM that, when used for classification, have the lowest classification error. Therefore, for our task specific classifiers, we use a joint training procedure that optimizes an estimate of classification error rather than the MDL.

Let C = 2, where C is the number of subsets of classes in our classifier.

We have N_train samples in a vector x of training data 413. Each sample x_i has an associated class label y_i, which takes on values 1 to C. Our classifier 400 has a form:

f(x; m) = arg max_y p(x | y, m_y, θ_y), (2)

where m = [m_1, ..., m_C]^T is the number of mixture components for each class model and θ_i is the parameters associated with class i, i = {1, 2}. This is contrasted with the prior art generic classifier 200 expressed by Equation (1).

If we have sufficient training data 413, then we set some of the training data aside as a validation set with N_test samples and their associated labels (x_i, y_i). An empirical test error on this set for a particular m is

TestErr(m) = (1/N_test) Σ_i δ(y_i ≠ f(x_i; m)), (3)

where δ is 1 when y_i ≠ f(x_i; m), and 0 otherwise.

Using this criterion, we pick the m̂ with:

m̂ = arg min_m TestErr(m). (4)

This requires a grid search over a range of settings for m, and for each setting, retraining the GMMs, and examining the test error of the resulting classifier.
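As a minimal sketch, assuming the two-GMM classifier above, Python with scikit-learn, and an illustrative search range, the grid search of Equations (3) and (4) can be written as:

```python
import itertools
import numpy as np
from sklearn.mixture import GaussianMixture

def _fit_gmm(X, m):
    return GaussianMixture(n_components=m, covariance_type='diag',
                           random_state=0).fit(X)

def test_error(gmm_imp, gmm_oth, X_val, y_val):
    """TestErr(m) of Equation (3): fraction of misclassified validation frames."""
    pred = (gmm_imp.score_samples(X_val) >
            gmm_oth.score_samples(X_val)).astype(int)
    return np.mean(pred != y_val)

def grid_search_m(X_imp, X_oth, X_val, y_val, m_range=range(1, 21)):
    best_m, best_err = None, np.inf
    for m1, m2 in itertools.product(m_range, repeat=2):
        # Retrain both GMMs for this setting of m = (m1, m2).
        err = test_error(_fit_gmm(X_imp, m1), _fit_gmm(X_oth, m2),
                         X_val, y_val)
        if err < best_err:
            best_m, best_err = (m1, m2), err
    return best_m  # the minimizer of Equation (4)
```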

If the training data are insufficient to set aside the validation set, then a K-fold cross validation can be used, see Kohavi, R., "A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection," Proceedings of the 14th International Joint Conference on Artificial Intelligence, Stanford University, 1995, incorporated herein by reference.

K-fold cross-validation is summarized as follows. The training data are partitioned into K equally sized parts. Let κ: {1, ..., N} → {1, ..., K} map the N training samples to one of these K parts, and let f^(−k)(x; m) be the classifier trained on the set of training data with the k-th part removed. Then, the cross-validation estimate of error is:

CV(m) = (1/N) Σ_i δ(y_i ≠ f^(−κ(i))(x_i; m)). (5)

That is, for the k-th part, we fit the model to the other K−1 parts of the data, and determine the prediction error of the fitted model when predicting the k-th part of the data. We do this for each of the K parts of the training data. Then, we determine

m̂ = arg min_m CV(m). (6)

This requires a search over a range of m. We can speed up training by searching over a smaller range for m. For example, in the classifier shown in Figure 4, we could fix m_1 for the subset of important classes 411, and only search over m_2 for the subset of other classes 412. We can select m_1 using the MDL criteria, i.e., keeping the GMM for the subset of important classes.
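A minimal sketch of this procedure, assuming Python with scikit-learn, a fixed m_1, and frame labels y with 1 for important and 0 for other, is:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def cv_error(X, y, m1, m2, K=5):
    """CV(m) of Equation (5) for mixture sizes (m1, m2)."""
    errors = []
    for train_idx, test_idx in KFold(n_splits=K, shuffle=True,
                                     random_state=0).split(X):
        X_tr, y_tr = X[train_idx], y[train_idx]
        # Fit each subset's GMM on the K-1 retained parts.
        gmm_imp = GaussianMixture(n_components=m1, covariance_type='diag',
                                  random_state=0).fit(X_tr[y_tr == 1])
        gmm_oth = GaussianMixture(n_components=m2, covariance_type='diag',
                                  random_state=0).fit(X_tr[y_tr == 0])
        # Prediction error on the held-out k-th part.
        pred = (gmm_imp.score_samples(X[test_idx]) >
                gmm_oth.score_samples(X[test_idx])).astype(int)
        errors.append(np.mean(pred != y[test_idx]))
    return float(np.mean(errors))

def select_m2(X, y, m1, m2_range=range(1, 41), K=5):
    """Fix m1 (e.g., chosen by MDL) and search only over m2, per Equation (6)."""
    return min(m2_range, key=lambda m2: cv_error(X, y, m1, m2, K))
```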

Figures 6A-6C show symbolically how different training procedures can result in different models. Figure 6A shows GMM models learned using the prior art MDL procedure for three different classes in a 2D feature space. The MDL criteria pick the number of mixture components for each class separately. The MDL criteria are good for model selection where each generative probabilistic model is trained separately without the knowledge of other classes. With the MDL, all clusters within a class are treated as equally important.

Figure 6B shows an expected result of using cross-validation (CV), rather than MDL for training. We see that CV picks fewer components for each class. Specifically, CV summarizes the fine details of the models of Figure 6A by using fewer components. However, we see that even though some fine detail is lost about each class, we can still distinguish between the classes.

Figure 6C shows what would happen when we segregate the classes into a subset of important classes and all other classes, and effectively construct a binary classifier. We can see that we can use fewer mixture components and still distinguish between the important classes 601 and the other classes 602.

Cross-validation for model selection is good for discriminative binary classifiers. For instance, while training a model for the subset of important classes, we also pay attention to the subset of other classes, and vice versa. Because the joint training is sensitive to the competing classes, the model is more careful in modeling the clusters in the boundary regions than in other regions. This also results in a reduction of model complexity.

With reference to Fig. 4, the method of combining the classes included in the classifier 400 into two groups, which constitutes the binary audio classifier, has been described. The embodiment shown in Fig. 4 provides the subset of classes 411, obtained by combining the excited speech class and the cheering class selected from the generic classifier of Fig. 2, and the subset composed of the other classes 412. Those subsets are effective in identifying highlight scenes in a sports program. If another embodiment provides, for example, a subset of the music class and a subset of other classes (not shown), it is possible to create a classifier in which a music scene exhibits a high likelihood. Accordingly, the calculation determines that a scene with a music track contained in a music program has a high importance level, which is effective in identifying the scene with a music track. Further, it is also possible to identify a scene with a burst of laughter contained in a variety program by creating a laughter class using laughing voices as the training data and by comparing the likelihood with the other classes.

With reference to Fig. 5, the method of switching among the classifiers 511 to 513 in correspondence with the tasks 501 to 503 has been described. Upon the switchover based on the task, the best one of the classifiers 511 to 513 is selected depending on the genre of the video 303 to be analyzed. For example, when the video 303 contains a sports program, the classifier that calculates the importance level based on the excited speech class and/or the cheering class is selected; for a music program, the classifier that calculates the importance level of the scene with a music track is selected; and for a variety program, the classifier that calculates the importance level based on the laughter class is selected. The tasks 501 to 503 of selecting a classifier may be performed by the switchover based on the genre obtained from the program information recorded with the video 303. Also, if the system is to analyze a program recorded from a television broadcast, the tasks 501 to 503 may be performed by the switchover based on the genre information obtained from the electronic program guide (EPG).
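A minimal sketch of this switchover, with hypothetical genre strings and classifier placeholders (none of which come from the patent), is:

```python
# Hypothetical registry mapping a genre string (from recorded program
# information or the EPG) to a task specific classifier.
TASK_CLASSIFIERS = {
    "sports": "classifier_511_excited_speech_and_cheering",
    "music": "classifier_512_music_track",
    "variety": "classifier_513_laughter",
}

def select_classifier(epg_genre, default="sports"):
    """Pick the classifier 511-513 for the task 501-503 given a genre."""
    return TASK_CLASSIFIERS.get(epg_genre.lower(), TASK_CLASSIFIERS[default])
```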

Effect of the Invention

Embodiments of the invention provide highlight detection in videos using task specific binary classifiers. These task specific binary classifiers are designed to distinguish between fewer classes, i.e., two subsets of classes. This simplification, along with training based on cross-validation and test error, can result in the use of fewer mixture components for the class models. Fewer mixture components mean faster and more accurate processing.

Figure 7A shows the number of components (78) for the prior art general classes, and Figure 7B shows the number of components (42) for the task specific classes.

Figure 8 shows a mean detection accuracy on the vertical axis for the important classes as a function of the number of components of the other classes on the horizontal axis.

Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.