Title:
SYSTEM AND METHOD FOR LABEL AUGMENTATION IN VIDEO DATA
Document Type and Number:
WIPO Patent Application WO/2019/037863
Kind Code:
A1
Abstract:
A method for processing video data comprising a plurality of image frames, the plurality of image frames having an earlier and later frame of a video sequence, and having a label for a region or patch in the earlier frame and a corresponding region or patch in the later image frame, the method comprising: obtaining a forward model and a backward model of the plurality of image frames; processing the forward model and the backward model to propagate at least one label in the region or patch to at least one other image frame of the video sequence, using a probabilistic method for estimating the label in the at least one other image frame in forward and backward correspondences, wherein, during the processing, a pixel having a most likely label with a probability lower than a threshold value is assigned a predetermined generic label; and generating a labelled result for any given image frame by applying an image label difference, based on label uncertainty between the forward and backward correspondences, to the given image frame.

Inventors:
SAUER PATRICK (BE)
BUDVYTIS IGNAS (GB)
CIPOLLA ROBERTO (GB)
Application Number:
PCT/EP2017/071391
Publication Date:
February 28, 2019
Filing Date:
August 24, 2017
Assignee:
TOYOTA MOTOR EUROPE (BE)
CAMBRIDGE ENTPR LTD (GB)
International Classes:
G06V10/764; G06V10/84
Foreign References:
EP 2395456 A1 (2011-12-14)
US 2013/0084008 A1 (2013-04-04)
Other References:
BADRINARAYANAN, VIJAY et al.: "Semi-Supervised Video Segmentation Using Tree Structured Graphical Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, November 2013, pages 2751-2764, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2013.54
VIJAYANARASIMHAN, SUDHEENDRA et al.: "Active Frame Selection for Label Propagation in Videos", Computer Vision - ECCV 2012, Springer Berlin Heidelberg, 7 October 2012, pages 496-509, ISBN: 978-3-642-33714-7
BADRINARAYANAN, VIJAY et al.: "Mixture of Trees Probabilistic Graphical Model for Video Segmentation", International Journal of Computer Vision, vol. 110, no. 1, 13 December 2013, pages 14-29, ISSN: 0920-5691, DOI: 10.1007/s11263-013-0673-5
Attorney, Agent or Firm:
INTÈS, Didier et al. (FR)
Claims:
CLAIMS

1. A method for processing video data comprising a plurality of image frames, the plurality of image frames having an earlier and later frame of a video sequence, and having a label for a region or patch in the earlier frame and a corresponding region or patch in the later image frame, the method comprising:

obtaining a forward model and a backward model of the plurality of image frames;

processing the forward model and the backward model to propagate at least one label in the region or patch to at least one other image frame of the video sequence, using a probabilistic method for estimating the label in the at least one other image frame in forward and backward correspondences, wherein, during the processing, a pixel having a most likely label with a probability lower than a threshold value is assigned a predetermined generic label; and

generating a labelled result for any given image frame by applying an image label difference, based on label uncertainty between the forward and backward correspondences, to the given image frame.

2. The method according to claim 1, wherein the propagated label is a class label.

3. The method according to claim 1, wherein the propagated label is an instance label.

4. The method according to any of claims 1-3, wherein the plurality of image frames have a pixel resolution greater than or equal to 960x720.

5. The method according to any of claims 1-4, wherein the forward and backward models comprise a probabilistic graphical model.

6. The method according to any of claims 3-5, comprising:

after the processing, assigning pixels within an image frame having no instance label to a background class;

dilating the pixels of the background class surrounded by pixels having an assigned instance label into a group of pixels; and reassigning the assigned instance label to the group of pixels when the group of pixels is smaller than a threshold size.

7. The method according to claim 6, wherein the threshold size is 40 pixels, preferably 30 pixels, more preferably 20 pixels.

8. The method according to any of claims 1-7, wherein the video sequence is a 360 degree video sequence.

9. The method according to claim 8, wherein the 360 degree video sequence is stored as equirectangular images.

10. Use of a plurality of labelled result image frames according to any of the previous claims, for training an image classifier.

11. A system for processing video data comprising a plurality of image frames, the plurality of image frames having an earlier and later frame of a video sequence, having a label for a region or patch in the earlier image frame and a corresponding region or patch in the later image frame, the system comprising:

storage means storing a forward model of the plurality of image frames and a backward model of the plurality of image frames;

processing means for applying the model to propagate at least one label of the region or patch to at least one other image frame of the video sequence, using a probabilistic method for estimating the label in the at least one other image in forward and backward correspondences, wherein

the processing means is configured to assign a void label to a pixel having a most likely label with a probability lower than a threshold value; and

correcting means for generating a labelled result for any given image frame by applying an image label difference, based on label uncertainty between the forward and backward correspondences, to the given image frame.

12. The system according to claim 11, wherein the forward model and the backward model are probabilistic graphical models.

13. The system according to claim 11 or 12, comprising post-processing means configured to assign pixels within an image frame having no instance label to a background class, dilate the pixels of the background class surrounded by pixels having an assigned instance label into a group of pixels, and reassign the assigned instance label to the group of pixels when the group of pixels is smaller than a threshold size.

Description:
SYSTEM AND METHOD FOR LABEL AUGMENTATION

IN VIDEO DATA

FIELD OF THE DISCLOSURE

[0001] The present invention relates to systems and methods for labelling objects or regions in images in video data, especially as applied to region or object segmentation in video images. More particularly, the present invention relates to semi-automatic or automatic propagation of labels assigned to regions, objects or even pixels therein, a corresponding processing system, and the application of such processing.

BACKGROUND OF THE DISCLOSURE

[0002] Semantic segmentation is one of the most important sub-problems of autonomous driving. Its progress has been strongly influenced by developments in the state of the art in image classification and by advances in training and inference procedures, as well as by architectural innovation in general deep learning.

[0003] However, unlike image classification or other general deep learning problems, semantic segmentation (especially for autonomous driving) has rather limited publicly available datasets, which do not exceed 5000 labelled frames, although some proprietary datasets may have more. As labelling by hand (i.e., creation of ground-truth labels) takes approximately 1 hour per single frame, alternative methods for obtaining densely labelled data for semantic segmentation must be employed in order to match the sizes of standard datasets in other fields.

[0004] US 2013/0084008 discloses a method and system for processing video data comprising a plurality of images. The method and system label a plurality of objects or regions in an image of a sequence of images and then propagate the labels to other images in the sequence based on an inference step and a model.

SUMMARY OF THE DISCLOSURE

[0005] The present inventors have determined that it remains desirable to enhance the accuracy of models for semantic segmentation in difficult or uncommon situations. Increasing the number of semantic classes covered by the model normally requires large amounts of relevant training data, which can be costly and time-consuming to produce. Thus, the present inventors address these problems by introducing a method for targeted retraining of such models using automatically generated, high-quality training data created from only a small number of preselected ground-truth labelled video frames.

[0006] Therefore, according to embodiments of the present disclosure, a method for processing video data comprising a plurality of image frames, the plurality of image frames having an earlier and later frame of a video sequence, and having a label for a region or patch in the earlier frame and a corresponding region or patch in the later image frame, is provided. The method includes obtaining a forward model and a backward model of the plurality of image frames; processing the forward model and the backward model to propagate at least one label of the region or patch to at least one other image frame of the video sequence, using a probabilistic method for estimating the label in the at least one other image frame in forward and backward correspondences, wherein, during the processing, a pixel having a most likely label with a probability lower than a threshold value is assigned a predetermined generic label; and generating a labelled result for any given image frame by applying an image label difference, based on label uncertainty between the forward and backward correspondences, to the given image frame.

[0007] By providing such a method, a label propagation algorithm can be used to achieve an order of magnitude increase in the quantity of available ground truth labels. The chosen label propagation algorithm can handle occlusions and label uncertainty efficiently, which is helpful in avoiding generation of erroneous labelled data.

[0008] In addition, because the analysis is now performed at pixel level instead of at a super-pixel level as had been previously done, accuracy is further improved.

[0009] Moreover, a first classifier training step is no longer used, and therefore, processor time and energy are saved.

[0010] The propagated label may be a class label, or the propagated label may be an instance label.

[0011] The plurality of image frames may have a pixel resolution greater than or equal to 960x720.

[0012] The forward and backward models may comprise a probabilistic graphical model, for example, a loopy model, a tree model, etc.

[0013] The method may comprise, after the processing, assigning pixels within an image frame having no instance label to a background class, dilating the pixels of the background class surrounded by pixels having an assigned instance label into a group of pixels, and reassigning the assigned instance label to the group of pixels when the group of pixels is smaller than a threshold size.

[0014] The threshold size may be 40 pixels, 30 pixels, or even 20 pixels.

[0015] The video sequence may be a 360 degree (e.g., equirectangular) video sequence.

[0016] The 360 degree video sequence may be stored as equirectangular images.

[0017] According to further embodiments of the disclosure, use of a plurality of labelled result image frames for training an image classifier is provided.

[0018] According to yet further embodiments of the disclosure, a system for processing video data comprising a plurality of image frames, the plurality of image frames having an earlier and later frame of a video sequence, having a label for a region or patch in the earlier image frame and a corresponding region or patch in the later image frame, is provided. The system includes storage means storing a forward model of the plurality of image frames and a backward model of the plurality of image frames; processing means for applying the model to propagate at least one label in the region or patch to at least one other image frame of the video sequence, using a probabilistic method for estimating the label in the at least one other image in forward and backward correspondences, wherein the processing means is configured to assign a void label to a pixel having a most likely label with a probability lower than a threshold value; and correcting means for generating a labelled result for any given image frame by applying an image label difference, based on label uncertainty between the forward and backward correspondences, to the given image frame.

[0019] The forward model and the backward model may be probabilistic graphical models.

[0020] The system may comprise post-processing means configured to assign pixels within an image frame having no instance label to a background class, dilate the pixels of the background class surrounded by pixels having an assigned instance label into a group of pixels, and reassign the assigned instance label to the group of pixels when the group of pixels is smaller than a threshold size.

[0021] It is intended that combinations of the above-described elements and those within the specification may be made, except where otherwise contradictory.

[0022] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.

[0023] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and, together with the description, serve to explain the principles thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

[0024] Fig. 1A shows an exemplary factor graph corresponding to the model described, according to embodiments of the present disclosure;

[0025] Fig. 1B shows an exemplary 3-frame video sequence associated with the factor graph of Fig. 1A;

[0026] Fig. 1C shows several examples of label propagation using techniques of the present disclosure, including following a post-processing (e.g., clean-up) procedure;

[0027] Fig. 2 is a high level depiction of label propagation in a video sequence based on a manually annotated frame;

[0028] Fig. 3 shows a high level view of an induced tree structure created based on mapped variables within a video sequence;

[0029] Fig. 4 is a flowchart demonstrating an exemplary method according to embodiments of the present disclosure;

[0030] Fig. 5A is an illustration of an image processor or processing system according to an embodiment of the present disclosure; and

[0031] Fig. 5B is an illustration of a processing system whereon a method according to embodiments of the present disclosure can be implemented.

DESCRIPTION OF THE EMBODIMENTS

[0032] Reference will now be made in detail to exemplary embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

[0033] The present disclosure relates to a method for processing video data comprising a plurality of images. The video data may comprise a sequence of images, e.g. indicating the motion of one or more objects in a scene. The video data may be obtained in any suitable way, for example by capturing it using an optical detection or recording system such as a camera, by retrieving it from a stored position in a memory, etc.

[0034] The video data may comprise analogue or digital video data. The video data may comprise 3-dimensional video data and/or 360 degree video data, for example, filmed using a plurality of interlinked image capture devices.

[0035] Video data particularly of interest, although not limiting, may be for example, video data recorded from a moving object, such as for example a moving vehicle. Such data may be of particular interest because one of the applications for processing such data may be the use of video processing for automation and security reasons in vehicles (e.g., driver assistance).

[0036] Processing of the video data, as described herein, may be used for example, for recognition and reporting of objects relevant to the moving object, e.g. a vehicle, or to the driver thereof. Objects of interest may be any suitable object, such as, for example, the road, pedestrians, vehicles, obstacles, traffic lights, etc. Processing of the video data may be performed in real-time or may be performed on stored video data.

[0037] Methods and systems according to embodiments of the present invention do not assume small object displacements or a high video capture frame rate.

[0038] Particularly, the present invention provides a method of label propagation (i.e., of class and/or instance labels) in video sequences, using graphical models constructed forward and backward through the video sequence. More specifically, the present disclosure provides a means for augmenting the number of labelled training frames for a classifier based on a limited number of ground-truth labelled frames.

[0039] In order to implement methods of the present disclosure, one or more models of a video sequence are created by establishing frame-to-frame correspondences, for example, a model based on the forward direction ("a forward model") and a model based on the backward direction ("a backward model") of the video sequence (Fig. 4, step 510).

[0040] In creating such models, a first step may include matching between a patch in the current image and patches in one or more previous or subsequent images. An exemplary matching operation can be described as follows. First, a correlation operation for a patch (D x D pixels) at location $j$ of the current frame $k$ is performed in a fixed window $w_j$ around location $j$ in the neighbouring frame $k + d$. Equation (1a) describes the operation used for finding the best match:

$$T_{k+d,p} = \mathrm{MATCH}\big(I_k, I_{k+d}, w_j\big) = \operatorname*{argmin}_{j' \in w_j} \sum_{i,c} \big(I_{k+d}(p'(i,c)) - I_k(p(i,c))\big)^2 \qquad (1a)$$

where $I_k(p(i,c))$ indicates the value of pixel $i$ of patch $p$ (centred on location $j$ of image $I_k$) in colour channel $c \in \{R, G, B\}$, and $I_{k+d}(p'(i,c))$ indicates the value of pixel $i$ of patch $p'$ (centred on location $j'$ in image $I_{k+d}$) in colour channel $c$. Minimising the squared difference over the window is equivalent to maximising the match score.
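By way of illustration only, the following is a minimal NumPy sketch of such a matching step, written as a sum-of-squared-differences search over the window, per equation (1a); the function name and the default patch size D and window radius W are illustrative assumptions, and the timed implementation referred to in Table 1 instead uses a CUDA cross-correlation kernel.

```python
import numpy as np

def match_patch(frame_k, frame_kd, j, D=7, W=15):
    # Best match for the D x D patch centred at location j = (row, col)
    # of frame k, searched in a (2W+1) x (2W+1) window of the
    # neighbouring frame k + d (equation (1a)).
    r, c = j
    h = D // 2
    ref = frame_k[r - h:r + h + 1, c - h:c + h + 1].astype(np.float64)
    best_score, best_loc = np.inf, j
    for rr in range(max(h, r - W), min(frame_kd.shape[0] - h, r + W + 1)):
        for cc in range(max(h, c - W), min(frame_kd.shape[1] - h, c + W + 1)):
            cand = frame_kd[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(np.float64)
            score = np.sum((cand - ref) ** 2)  # summed over pixels i and channels c
            if score < best_score:
                best_score, best_loc = score, (rr, cc)
    return best_loc  # centre of the best matching patch p' in frame k + d
```

An exhaustive search of this kind costs on the order of D^2 W^2 operations per patch, which is why a GPU (CUDA) implementation is reported for the datasets of Table 1.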

[0041] The models so created may be probabilistic graphical models of a sequence of frames and their labels, for example, correspondence trees of propagated labels built forward (d = 1) and backward (d = -1).

[0042] Fig. 1A shows an exemplary factor graph corresponding to the model described, according to embodiments of the present disclosure. The exemplary factor graph shows, for both the backward model (d = -1) and the forward model (d = 1), a one-dimensional, four-pixel, seven-frame video sequence in which ground-truth labels have been provided for the middle frame 110 only.

[0043] Fig. 1B shows propagation results of the forward and backward models, as well as the combined result using label differencing, which will be discussed below. The bottom left section of this figure shows three images (a) from the Bochum city sequence (CityScapes), of which the middle frame has ground-truth labels. Rows (b) and (c) respectively show propagation results for d = -1 and d = 1 at +10 and -10 frames. Rows (d) and (e) respectively show the differing outputs produced by averaging and by taking image label differences of the labels in (b) and (c), as will be discussed below.

[0044] The bottom right section displays several examples of people and car instance labels in row (a). Rows (b) and (c) show the propagation result before filling in the labels and after filling in the labels, respectively.

[0045] Effectively, images and labels may be split into overlapping patches, connecting patches from one frame to the next in either the forward or backward direction, using a collection of manually annotated frames as a basis. Fig. 2 is a high level depiction of label propagation in a video sequence based on one manually annotated frame, and this may be carried out throughout a video based on further manual annotations (i.e., ground-truth labels) (step 515).

[0046] A joint probability of pixel class labels can then be defined by equation (1):

$$P(Z) \propto \prod \Psi\big(Z_{k+d,\,T_{k+d,p}(j)},\; Z_{k,\,p(j)}\big) \qquad (1)$$

where $Z$ is a set of discrete random variables $Z_{k,p(j)}$ taking values in the range $1 \dots L$, corresponding to the class label of a pixel $j$ in a patch $p$ of frame $k$.

[0047] According to these embodiments, Ψ is a potential favouring a same-class prediction, as in equation (2):

$$\Psi(z_a, z_b) = \begin{cases} \delta & \text{if } z_a = z_b\\[2pt] \dfrac{1-\delta}{L-1} & \text{otherwise} \end{cases} \qquad (2)$$

[0048] According to embodiments of the disclosure, δ is set manually depending on the complexity of the videos to be processed. If a video is complex, δ is chosen to be smaller in order to have a faster decay of label certainty, and larger for less complex video sequences.

[0049] Furthermore, $Z_{k+d,\,T_{k+d,p}(j)}$ corresponds to the class label of pixel $j$ in patch $T_{k+d,p}$ of frame $k + d$. Here $T_{k+d,p}$ corresponds to the best matching patch in frame $k + d$ to patch $p$ in frame $k$. Finally, $d$ is a constant which builds correspondences from the current frame to the previous frame or to the next frame when set to -1 and 1 respectively.

[0050] The aforementioned joint distribution can be represented as a factor graph tree, as shown in Fig. 1A. The resulting factor graph is a tree structure, and message passing (see, e.g., https://en.wikipedia.org/wiki/Factor_graph) can be used to obtain an exact inference of the marginal posterior for each variable, $P(Z_{k,p(j)} = l)$.
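To make the inference step concrete, the following sketch runs exact sum-product (forward-backward) message passing on the simplest possible correspondence structure, a single chain of L-state label variables; it assumes the two-case potential shown at equation (2), and the function name and array conventions are illustrative rather than the patent's implementation.

```python
import numpy as np

def chain_marginals(unary, delta):
    # Exact marginals P(Z_t = l) for a chain Z_1 - ... - Z_T with the
    # pairwise potential of equation (2): delta when neighbouring labels
    # agree, (1 - delta)/(L - 1) otherwise. `unary` is a (T, L) array of
    # per-node evidence, e.g. a one-hot row for the ground-truth frame
    # and uniform rows for the frames to be labelled.
    T, L = unary.shape
    psi = np.full((L, L), (1.0 - delta) / (L - 1))
    np.fill_diagonal(psi, delta)

    fwd = np.zeros((T, L))
    bwd = np.ones((T, L))
    fwd[0] = unary[0] / unary[0].sum()
    for t in range(1, T):
        fwd[t] = unary[t] * (psi @ fwd[t - 1])   # psi is symmetric
        fwd[t] /= fwd[t].sum()                   # normalise for stability
    for t in range(T - 2, -1, -1):
        bwd[t] = psi @ (unary[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()

    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)
```

On a full correspondence tree such as that of Fig. 3, the same two-pass scheme generalises to passing messages from the leaves to the root and back down.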

[0051] Pixels $j$ in overlapping patches may have different random variables assigned, and a final per-pixel class distribution may be determined by summing over the distributions of overlapping pixels, as in equation (3):

$$P(k, i, l) = \frac{1}{K} \sum_{p \,\ni\, i} P\big(Z_{k,\,p(i)} = l\big) \qquad (3)$$

where $K$ is a normalization constant equal to the number of patches overlapping the particular pixel.
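The aggregation of equation (3) might be sketched as follows, assuming (for illustration) that each patch has produced a per-pixel label posterior of shape (D, D, L) and that patch_slices records the image rows and columns each patch covers:

```python
import numpy as np

def per_pixel_distribution(patch_posteriors, patch_slices, image_shape, L):
    # Sum the label distributions of all patches overlapping each pixel,
    # then divide by K, the number of overlapping patches (equation (3)).
    acc = np.zeros(image_shape + (L,))
    count = np.zeros(image_shape)
    for post, (rows, cols) in zip(patch_posteriors, patch_slices):
        acc[rows, cols, :] += post            # (D, D, L) posterior of one patch
        count[rows, cols] += 1.0
    return acc / np.maximum(count, 1.0)[..., None]
```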

[0052] To calculate the best match, i.e., the patch with the highest cross-correlation score within a W x H window around patch p in frame k + d as explained above, a cross-correlation algorithm implemented in CUDA is used. Timings, based on experiments with an NVidia Titan X Maxwell GPU, are shown in Table 1.

| Dataset | Resolution | # Labelled Frames | # Frames Used | # Neig. Frames | # Aug. Frames | # Dataset Frames | # Classes | Mapping Time (sec) | Propagation Time (min) |
|---|---|---|---|---|---|---|---|---|---|
| CamVid | 960x720 | 701 | 701 | 9 | 10.4K | 10.4K | 11 | 20 | 0.4 |
| CityScape | 2048x1024 | 5000 | 2975 | 10 | 62.5K | 59.1K | 19 | 50 | 1.9 |
| Internal | 1936x1456 | 4500 | 141 | 10 | 2.9K | 2.9K | 11 | 55 | 1.8 |

Table 1

[0053] Fig. 3 shows an exemplary high level view of an induced tree structure created based on patches within a video sequence.

[0054] An uncertainty difference between a labelled image of the forward model and a labelled image of the backward model may be determined in order to estimate a level of uncertainty of the assigned labels, i.e., an image label difference (step 520).

[0055] An optional fourth step of post-processing, e.g., "clean-up," may be undertaken, as will be described herein.

[0056] According to embodiments of the present disclosure, following creation of the forward and backward models, class label augmentation may be achieved using three steps. Firstly, for each pixel $i$ in frame $k$, a most likely class label $\operatorname{argmax}_l P(k, i, l)$ may be assigned. Next, for pixels where the most likely label has a probability lower than a threshold, for example, $1/L + 0.0001$, a "void" label may be assigned to avoid mislabelling due to, among other things, numerical error propagation. Examples of labels for d = -1 and d = 1 for one sequence from the CityScapes dataset are presented in rows (b) and (c) of Fig. 1B.

[0057] A final result is produced by taking an image label difference (i.e., assigning a class label if both frames agree and a "void" label if they disagree), as opposed to averaging the backward (d = -1) and forward (d = 1) built structures as has been done to date.

[0058] Although more pixel labels may be obtained when using averaging, the inventors have determined that using an image label difference can reduce erroneous labelling introduced by occlusions, dis-occlusions and/or erroneous patch correspondences. Therefore, overall accuracy can be increased.
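As a hedged sketch of the augmentation steps and the label differencing of paragraphs [0056]-[0058], assuming a void label encoded as -1 and the threshold form 1/L + ε (both illustrative choices, not encodings specified by the source):

```python
import numpy as np

VOID = -1  # assumed encoding of the predetermined generic ("void") label

def harden_labels(pixel_dist, eps=1e-4):
    # pixel_dist: (H, W, L) per-pixel class distribution from equation (3).
    # Assign the most likely label; where the top probability falls below
    # the (assumed) threshold 1/L + eps, assign the void label instead.
    L = pixel_dist.shape[-1]
    labels = pixel_dist.argmax(axis=-1)
    labels[pixel_dist.max(axis=-1) < 1.0 / L + eps] = VOID
    return labels

def label_difference(fwd_labels, bwd_labels):
    # Keep a label only where the forward- and backward-built propagations
    # agree; disagreements become void rather than being averaged.
    return np.where(fwd_labels == bwd_labels, fwd_labels, VOID)
```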

[0059] To obtain instance labels, a procedure similar to class label propagation may be followed, with some notable differences. For example, when labelling instances, all pixels of non-instances may be assigned to a background class, and according to some embodiments, two steps of post-processing may be performed, as described in greater detail below. Notably, as the majority of state-of-the-art instance segmentation algorithms require high quality instance labels, the inventors have devised the following exemplary two-step instance label post-processing algorithm, which can be implemented to improve quality.

[0060] During the first step, regions of void, background and instance labels (in this order) comprising an area of less than 20 pixels and which are surrounded by a single instance label are filled in with the surrounding label. This step is motivated by the observation that the propagation algorithm may mis-classify more than 95% of small regions which are surrounded by another (different) instance label; the void and background regions are processed first since they are more likely to have been introduced by mistake. Note that the region size (20 pixels) is preferably chosen in order to allow propagation of car instance labels of more than 20 pixels, but this value may be more or fewer pixels as desired.

[0061] During the second step, regions of, e.g., car instance labels are grown using the following dilation procedure: any pixel in the background class whose immediate (11-pixel) neighbourhood region consists of only one instance class label is assigned this label. This dilation procedure was chosen because of the following properties of the propagation algorithm: (a) the most frequent type of error is mis-classifying an instance class as the background class, and (b) car boundaries with the background are mostly labelled correctly, the most common error being the presence of background-labelled regions within the vehicle boundary.

[0062] The above steps of the post-processing are iterated until convergence.
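A possible sketch of this two-step post-processing, iterated to convergence, is given below using SciPy's morphology tools. The label encoding (background 0, instance labels positive, void -1), the helper names, and the reading of "only one instance class label" as "exactly one distinct instance label present in the neighbourhood" are all assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

BACKGROUND, VOID = 0, -1  # assumed label encoding

def fill_small_regions(labels, max_area=20):
    # Step 1: fill small regions of void, background and instance labels
    # (in this order) surrounded by a single instance label.
    out = labels.copy()
    targets = [VOID, BACKGROUND] + [int(v) for v in np.unique(labels[labels > 0])]
    for target in targets:
        comp, n = ndimage.label(out == target)
        for i in range(1, n + 1):
            mask = comp == i
            if mask.sum() >= max_area:
                continue
            border = ndimage.binary_dilation(mask) & ~mask
            vals = np.unique(out[border])
            if len(vals) == 1 and vals[0] > 0 and vals[0] != target:
                out[mask] = vals[0]  # fill with the surrounding instance label
    return out

def dilate_instances(labels, radius=5):
    # Step 2: a background pixel whose (2*radius+1)-pixel neighbourhood
    # contains exactly one distinct instance label is assigned that label.
    size = 2 * radius + 1  # 11-pixel neighbourhood
    big = np.iinfo(np.int64).max
    lo = ndimage.minimum_filter(np.where(labels > 0, labels, big), size=size)
    hi = ndimage.maximum_filter(np.where(labels > 0, labels, 0), size=size)
    grow = (labels == BACKGROUND) & (hi > 0) & (lo == hi)
    out = labels.copy()
    out[grow] = hi[grow]
    return out

def postprocess(labels):
    # Iterate both steps until the labelling no longer changes.
    out = labels.copy()
    while True:
        new = dilate_instances(fill_small_regions(out))
        if np.array_equal(new, out):
            return new
        out = new
```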

[0063] The results of the post-processing can be seen in Fig. 1C, in row (3): the white noise pixels seen in the third row of Fig. 1C are removed by the clean-up procedure.

[0064] Fig. 5A shows an image processor or processor system 10 useful for implementing embodiments of the present disclosure. The image processor or processor system 10 can be implemented, for example, as one or more integrated circuits having hardware such as circuit blocks dedicated to each of the parts shown, or as software modules executed by a general purpose processor in sequence, as in a server. Notably, a graphics processor (GPU) may provide the primary processing power for executing methods of the present disclosure. For example, graphics processors from NVIDIA and/or AMD may be used.

[0065] The parts shown include an input interface 20 for receiving an input image or image stream (such as frames of a video, in real time or non-real time) from an image source device 5, such as a video camera, an optical disk such as a DVD-ROM or a CD-ROM, or a solid state memory device such as a USB stick. The images or frames of the video sequence are stored in part 34 of a memory 30. Also input to the system are one or more labelled images, which are stored in part 36 of memory 30. In addition, a model of the images is stored in part 32 of the memory. The model may be a joint model of a sequence of frames and their labels. The model may be a generative probabilistic model of a sequence of frames and their corresponding labels. The model may be a sequential generative model that uses one image to generate a subsequent or previous image. The model may be a sequential generative latent variable model. For example, the model used can be a tree-type model. The processor 10 also has an inference computational part 40. This part 40 is for carrying out any of the methods of the present invention involving the inference step. For example, the part 40 may include an E-step and an M-step computational part (42, 44 respectively) which process the image data in memory parts 34 and 36 in order to propagate the labels.

[0066] A device 55 can be provided for interpreting or taking action based on an output of the present invention. Such an output can be used to provide an alarm, e.g., derived from the labelling of the images when the labelling is associated with a pedestrian, or in conjunction with a further algorithm that detects pedestrians in images and uses the labelling of the present invention as additional information as to the content of the images, to make the identification of pedestrians more accurate. The output can also be configured to interact with systems of a vehicle in order to cause, for example, a braking effect of a driver assistance system, a steering effect of a driver assistance system, and/or an acceleration effect of a driver assistance system, among others.

[0067] In a further aspect, the present disclosure relates to a system for processing video data, adapted for propagating label information across the plurality of images. The different components of the system may comprise processing power for performing their functions. The functionality of the different components of the system 300, or the different method steps of the method 500 of Fig. 4, may be implemented in a separate or a joint processing system 1500 such as shown in Fig. 5B.

[0068] Fig. 5B shows one configuration of a processing system 1500 that includes at least one programmable processor 1503 coupled to a memory subsystem 1505 that includes at least one form of memory, e.g., RAM, ROM, and so forth. It is to be noted that the processor 1503 or processors may be general purpose or special purpose processors, and may be for inclusion in a device, e.g., a chip, that has other components performing other functions. Thus, one or more aspects of the present invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The processing system may include a storage subsystem 1507 that has at least one disk drive and/or CD-ROM drive and/or DVD drive. In some implementations, a display system, a keyboard, and a pointing device may be included as part of a user interface subsystem 1509 to provide for a user to manually input information. Ports for inputting and outputting data may also be included. More elements, such as network connections, interfaces to various devices, and so forth, may be included, but are not illustrated in Fig. 5B. The various elements of the processing system 1500 may be coupled in various ways, including via a bus subsystem 1513, shown in Fig. 5B for simplicity as a single bus, but which will be understood by those in the art to include a system of at least one bus. The memory of the memory subsystem 1505 may at some time hold part or all (in either case shown as 1511) of a set of instructions that, when executed on the processing system 1500, implement the steps of the method embodiments described herein. Thus, while a processing system 1500 such as shown in Fig. 5B is prior art, a system that includes the instructions to implement aspects of the methods for processing the video data is not prior art, and therefore Fig. 5B is not labelled as such.

[0069] The present invention also includes a computer program product which provides the functionality of any of the methods according to the present invention when executed on a computing device. Such a computer program product can be tangibly embodied in a carrier medium carrying machine-readable code for execution by a programmable processor. The present invention thus relates to a carrier medium carrying a computer program product that, when executed on computing means, provides instructions for executing any of the methods as described above. The term "carrier medium" refers to any medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to non-volatile media and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as a storage device which is part of mass storage. Common forms of computer readable media include a CD-ROM, a DVD, a flexible disk or floppy disk, a tape, a memory chip or cartridge, or any other medium from which a computer can read. Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. The computer program product can also be transmitted via a carrier wave in a network, such as a LAN, a WAN or the Internet. Transmission media can take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. Transmission media include coaxial cables, copper wire and fibre optics, including the wires that comprise a bus within a computer.

[0070] Based on an augmented set of labelled video frames, a classifier, for example, for use in a vehicle providing driver assistance, may be trained, such that human-level understanding of traffic scenes from camera images anywhere in the world may be obtained by the onboard classifier and driver assistance systems.

[0071] Throughout the description, including the claims, the term "comprising a" should be understood as being synonymous with "comprising at least one" unless otherwise stated. In addition, any range set forth in the description, including the claims, should be understood as including its end value(s) unless otherwise stated. Specific values for described elements should be understood to be within accepted manufacturing or industry tolerances known to one of skill in the art, and any use of the terms "substantially" and/or "approximately" and/or "generally" should be understood to mean falling within such accepted tolerances.

[0072] Although the present disclosure herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure.

[0073] It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims.
