Title:
MODEL CONSTRUCTION IN A NEURAL NETWORK FOR OBJECT DETECTION
Document Type and Number:
WIPO Patent Application WO/2017/190743
Kind Code:
A1
Abstract:
The present invention relates to a computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image, where the construction may be performed based on at least one image training batch. The model is constructed by training one or more collective model variables in the neural network to classify the individual annotated objects as a member of an object class. The model in combination with the set of specifications, when implemented in a neural network, is capable of object detection in an unprocessed image with probability of object detection.

Inventors:
FALK KEN (DK)
PEDERSEN JEANETTE B (DK)
THORSGAARD HENRIK (DK)
Application Number:
PCT/DK2017/050121
Publication Date:
November 09, 2017
Filing Date:
April 25, 2017
Assignee:
SCOPITO APS (DK)
International Classes:
G06N3/02; G06N3/08; G06T7/00
Foreign References:
US6578017B12003-06-10
Other References:
FISHER YU ET AL: "LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop", 10 June 2015 (2015-06-10), XP055332092, Retrieved from the Internet
RUSSAKOVSKY OLGA ET AL: "Best of both worlds: Human-machine collaboration for object annotation", 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 7 June 2015 (2015-06-07), pages 2121 - 2131, XP032793654, DOI: 10.1109/CVPR.2015.7298824
JUSTIN CHENG ET AL: "Flock", PROCEEDINGS OF THE 18TH ACM CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK & SOCIAL COMPUTING, CSCW '15, 1 January 2015 (2015-01-01), New York, New York, USA, pages 600 - 611, XP055334579, ISBN: 978-1-4503-2922-4, DOI: 10.1145/2675133.2675214
INEL OANA ET AL: "CrowdTruth: Machine-Human Computation Framework for Harnessing Disagreement in Gathering Annotated Data", 19 October 2014, NETWORK AND PARALLEL COMPUTING; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 486 - 504, ISBN: 978-3-540-76785-5, ISSN: 0302-9743, XP047301419
LIYUE ZHAO ET AL: "Robust Active Learning Using Crowdsourced Annotations for Activity Recognition", 1 January 2011 (2011-01-01), XP055143273, Retrieved from the Internet [retrieved on 20140929]
Attorney, Agent or Firm:
PATRADE A/S (DK)
Claims:
CLAIMS

1. A computer-implemented method (100) for constructing (102) a model (20) in a neural network (10) for object detection (40) in an unprocessed image (50), the construction (102) being performed based on at least one image training batch (60), the method comprising the acts of:

Providing (104) a neural network (10) configured with a set of specifications (12);

Establishing (106) at least one image training batch (60), which batch (60) comprises at least one training image (66) comprising one or more objects (70) where an individual object (70) is a member of an object class (90);

Providing (104) a graphical user interface (GUI) (80) configured for displaying a training image (66) from the image training batch (60); and

Iteratively performing (108) one or more of the following acts:

o Annotating (110) one or more objects (70) in the training image (66) by user interaction (82) generating individually annotated objects (72);

o Associating (112) an annotation (24) with an object class (90) for the annotated object (72) in the training image (66) by user interaction (82);

o Returning (114) a user annotated image training dataset (62) comprising the training image (66) with one or more annotated objects (72), each individual annotated object (72) associated with an object class (90); and

o Constructing (102) one or more models (20) by training (116) one or more collective model variables (14) in the neural network (10) to classify (118) the individual annotated objects (72) as a member of an object class (90),

which model (20) in combination with the set of specifications (12) when implemented in a neural network (10) is capable of object detection (40) in an unprocessed image (50) with probability (42) of object detection (40).

2. A computer-implemented method (100) according to claim 1 comprising a further act of iteratively performing (108) one or more of the following acts:

Displaying (120) a training image (66) comprising one or more machine marked objects (74) associated with a machine performed classification (94) of the one or more individual objects (70);

changing (130) the machine object marking (122), the machine object classification (118) or both; and

evaluating (124) the level of training (116) of the collective model variables (14) for terminating (126) the training (116) of the model (20).

3. A computer-implemented method (100) according to any of the preceding claims wherein the acts of annotating (110), associating (112) and returning (114) are performed (108) iteratively before subsequently performing (108) the act of constructing (102).

4. A computer-implemented method (100) according to any of the preceding claims comprising a further act of performing intelligent augmentation (140).

5. A computer-implemented method (100) according to any of the preceding claims comprising a further act of establishing (106) at least one image verification batch (68).

6. A computer-implemented method (100) according to any of the preceding claims comprising a further act of reducing (128) complexity of the model (20), reducing the specifications (12) or both as a result of evaluating (124) the constructed model (20) or the use of the neural network specifications (12).

7. A computer-implemented method (100) according to any of the preceding claims comprising a further act of reducing (128) the image training batch (60) as a result of evaluating (124) the accuracy (43) of object detection (40).

8. A computer-implemented method (100) according to any of the preceding claims wherein annotating (110) an object (70) is performed by an area-selection (28) of the training image (66) comprising the object (70) or pixel-segmentation (26) of the object (70).

9. A computer-implemented method (100) according to any of the preceding claims wherein annotating (110) an object (70) is performed using a computer-implemented annotation tool (160) configured with a zoom-function (162) for:

Providing (104) an area-selection interface (164) for area-selection (28) of an object (70) in the training image (66) by user interaction (82), which area-selection (28) is adjustable (166);

Providing (104) a pixel-segmentation interface (168) for pixel-segmentation (26) of an object (70) in the training image (66) by user interaction (82), which pixel-segmentation (26) is configured to pre-segment (170) pixels (172) by grouping pixels (172) similar to a small selection of pixels (172) chosen by user interaction (82); or

- both,

which annotation tool (160) is configured to transform annotation (24) from pixel-segmentation (26) of an object (70) into area-selection (28) of the object (70) in the training image (66).

10. A computer-implemented method (100) according to claim 9 wherein the computer-implemented annotation tool (160) further provides for:

colour overlay annotation (174), which colour is associated with an object classification (90) and which object classification (90) is associated with the annotation (24);

re-classification (96) of one or more individual annotated objects (72), machine marked objects (74) or a combination of both; or

- both,

which annotation tool (160) is configured to show all annotations (24) and machine marks (22) associated with an object class (90) in one or more training images (66).

11. A computer-implemented method (100) according to claim 9 or 10 wherein the computer-implemented annotation tool (160) further provides for history (180) of the performed annotation (24).

12. A computer-implemented method (100) according to any of the preceding claims wherein navigation (30) in the image training batch (60) is performed using a computer-implemented navigation tool (190) providing for:

navigation (30) by image management (192); and

status (194) on progression (196) of evaluating the image training batch (60).

13. A computer-implemented method (200) in a neural network (10) for object detection (40) in an unprocessed image (50) with probability (42) of object detection (40) comprising the acts of:

Providing (104) a constructed model (20) according to claims 1-12 to a neural network (10) configured with a set of specifications (12);

Establishing (106) at least one unprocessed image batch (52), which batch (52) comprises at least one unprocessed image (50) to be subject for object detection (40);

Providing (104) a graphical user interface (GUI) (80) configured for displaying one or more unprocessed images (50) with a set of marked objects (74), each individual marked object (74) associated with an object class (90);

Performing (108) object detection (40) in an unprocessed image (50); and

Returning (114) the unprocessed image (50) with a set of marked objects (74), each individual marked object (74) associated with an object class (90).

14. A computer-implemented method (200) according to claim 13 comprising a further act of providing (104) access to a neural network (10) for further training (116) one or more collective model variables (14) of the model (20), such that the model (20) is subject to improved accuracy (43) of object detection (40).

15. Use of a computer-implemented method (100) for constructing (102) a model (20) in a neural network (10) according to any of claims 1 to 12 or a computer-implemented method (200) in a neural network (10) for object detection (40) according to claim 13 or 14, wherein at least one image training batch (60) or an unprocessed image (50) is collected by use of an airborne vehicle such as a drone.

Description:
Model Construction in a Neural Network for Object Detection

Field of the Invention

The present invention relates to a computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image, where the construction may be performed based on at least one image training batch. The model is constructed by training one or more collective model variables in the neural network to classify the individual annotated objects as a member of an object class. The model in combination with the set of specifications, when implemented in a neural network, is capable of object detection in an unprocessed image with probability of object detection.

Background of the Invention

The huge potential of deep learning, neural networks and cloud infrastructure to efficiently perform complex data analysis becomes more and more apparent as the amount of data grows and the demand for automated tasks is ever expanding.

Massive research and investments worldwide are going into machine learning and deep convolutional neural networks (CNNs). Large companies and research institutions show state-of-the-art solutions where a single neural network can replace very complex algorithms that previously needed to be developed specifically for each use case.

Commercial machine learning image recognition solutions are starting to appear in the market. However, these solutions use pre-trained models that can identify common object types like persons, cars, dogs or buildings. The problem with CNNs is that it is very complex to prepare data and configure the networks for good training results. Furthermore, very powerful PCs and graphics processing units (GPUs) are required.

Today, complex machine learning technology is still performed and accessed by highly skilled persons to construct pre-trained models. For example, a high level of computer science and deep learning competences is required to annotate, train and configure neural networks to detect custom objects with high precision. In general, pre-trained models only find use within the narrow field for which they are trained.

One of the main problems is that implementations of pre-trained models today are done on standardized training data. These standardized training data are limited in both size and application fields and thus present a problem in terms of expanding the training to developing pre-trained models for other applications. Attempts have been made, especially by researchers in the field of neural networks, to transfer neural networks to new domains; however, these attempts often use too few images due to the very time consuming task of annotating data.

In general, implementing pre-trained models today involves a very time consuming task of training and constructing models, and specialist knowledge is needed. The setup of the neural networks requires a specialist, while the data annotation is very time-consuming and may take weeks or longer.

As one example of a machine learning technology, WO 2016/020391 may be mentioned. WO 2016/020391 discloses a method for training a classifier. The classifier is used in a method for automated analysis of biological images in the field of histology.

The method is based on analyzing the image using a biological image analysis device which is programmed to perform a classifier function. The classification is performed by combining an object feature with a context feature associated with the object feature. The object features may include the size, shape and average intensity of all pixels within the object, and the context feature is a characteristic of a group of objects or pixels. In histology, the presence, extent, size, shape and other morphological appearances of these structures are important indicators for the presence or severity of disease, which motivates the need for accurate identification of specific objects. Thus, the disclosed method aims at achieving a high level of specificity of the objects.

For implementing the method, support vector machines (SVMs) are used. SVMs are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training digital images with pixel blobs, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate classes are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on the side of the gap onto which they fall.

The method for training the classifier disclosed in WO 2016/020391 is based on a three-factor framework where a training set of training digital images is used. The images are analyzed to calculate training context feature values and determine feature values of a particular object feature. The classifier is trained on a single object feature and one or more context features.

During the training phase, the classifier builds a model that implicitly specifies the relation between the object feature and one or more of the context features. In one example, a training data set consisting of a total of 210 field of view (FOV) images was used, wherein negative tumor cells and lymphocytes were manually annotated as the training data. The training data was input to an untrained linear SVM classifier. The example showed that context features have a greater descriptive power than the object feature alone.

US 2014/0254923 discloses a computer-implemented method of classifying objects in an image. The disclosed method does not rely on context features as does WO 2016/020391, but on individual object detection. The method uses vector description, in order to take account of rotation and scaling, and the objects in images are classified by a trained object classification process. However, the object classification process is trained through the use of a known training data set of images with known contents.

To really exploit the huge potential of machine learning technology and neural networks to efficiently perform complex data analysis, solutions for simplified procedures are needed: solutions which include pre-trained generic models to be used on a wide range of structure and infrastructure inspections, solutions which allow non-technical people to train models in CNNs and to use these constructed models to analyse their data, and solutions which leverage the strengths of cloud infrastructure and CNNs to create a single scalable solution that can work in many different inspection domains.

At the same time, image recording has become easy at an unprecedented scale and quality. The recording or collection may also be performed by unmanned airborne vehicles such as drones. Collections of images from a drone inspection comprise a vast amount of data and have been shown to introduce accuracy issues when training or applying neural networks for image recognition.

Object of the Invention

It is an objective to overcome one or more of the aforementioned shortcomings of the prior art.

Description of the Invention

An object of the invention may be achieved by a computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image, where the construction may be performed based on at least one image training batch. The method comprises the act of providing a neural network configured with a set of specifications, the act of establishing at least one image training batch, which batch comprises at least one training image comprising one or more objects where an individual object is a member of an object class, and the act of providing a graphical user interface (GUI) configured for displaying a training image from the image training batch. Further, the method comprises the act of iteratively performing one or more of the following acts: one act of annotating one or more objects in the training image by user interaction, generating individually annotated objects; another act of associating an annotation with an object class for the annotated object in the training image by user interaction; another act of returning a user annotated image training dataset comprising the training image with one or more annotated objects, each individual annotated object associated with an object class; and yet another act of constructing one or more models by training one or more collective model variables in the neural network to classify the individual annotated objects as a member of an object class. The model in combination with the set of specifications, when implemented in a neural network, is capable of object detection in an unprocessed image with probability of object detection.

The neural network may be a convolutional neural network (CNN), regional neural network (R-NN), regional convolutional neural network (R-CNN), fast R-CNN, fully segmented CNN or any similar structure. The neural network may be implemented in different frameworks; as examples, but not limited to these, may be mentioned commercial frameworks such as TensorFlow, Theano, Caffe, or Torch. The specifications of the neural network may for example include specifications on datatypes, learning rate, step size, number of iterations, momentum, number and structures of layers, layer configurations such as activation functions (ReLU, sigmoid, tanh), pooling, number of convolutional layers, size of convolutional filters, number of fully connected layers, size of fully connected layers, number of outputs (output classes), and classification functions. Furthermore, the specifications may include information on the depth of the network and structures for the set-up of the neural network. The neural network may be configured with different or additional specifications and thus is by no means limited to the mentioned examples. The wide range of specifications may often be reused, and many neural networks are already configured with a set of specifications; as examples may be mentioned the commercially available AlexNet or VGG, which specify a range of the above-mentioned specifications. A person skilled in the art will know how to use already established neural networks configured with a set of specifications, adapt or set up the specifications of already established neural networks, or may even set up a neural network with a set of specifications.
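By way of a non-limiting illustration, such a set of specifications may be represented as a simple configuration structure. The following Python sketch is purely illustrative; the keys, values and layer layout are assumptions made for the example and do not correspond to any particular framework's configuration format.

```python
# Illustrative, hypothetical set of specifications (12) for a small CNN.
# Real frameworks (TensorFlow, Theano, Caffe, Torch) each use their own
# configuration formats; this sketch only mirrors the items listed above.
specifications = {
    "learning_rate": 0.001,
    "step_size": 1000,
    "iterations": 10000,
    "momentum": 0.9,
    "layers": [
        {"type": "conv", "filters": 32, "kernel": 3, "activation": "relu"},
        {"type": "pool", "kind": "max", "size": 2},
        {"type": "conv", "filters": 64, "kernel": 3, "activation": "relu"},
        {"type": "pool", "kind": "max", "size": 2},
        {"type": "fc", "units": 128, "activation": "relu"},
        {"type": "fc", "units": 3, "activation": "softmax"},  # 3 output classes
    ],
}
```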

Image may refer to any multi-dimensional representation of data points recorded by a sensor, orthomosaics or other multi-dimensional representations of data points. This may for example include radar images, scanning images from an electron microscope or an MR scanner, optical images, thermal images, point clouds, acoustic data recordings, or seismic recordings. These are just a few examples of images and thus the scope of the invention is by no means limited to the mentioned examples.

The iteratively performed acts may only be performed if the image provides for the specific act. In case the image training batch comprises a blank training image or a training image comprising no relevant objects to be annotated, the act(s) related to annotating an object may be omitted.

Object detection in an image comprises both object recognition and localization of the object. Object recognition may also be perceived as object classification.

The object detection may for example be used for mapping or surveying infrastructure, where the detected objects may be actual image objects, temperature changes in thermal images, or specific frequency scales in acoustic images. The object detection may for example comprise commonly occurring objects, rarely occurring objects or a combination. Again, it should be mentioned that these are only a limited number of examples and the scope of the invention is by no means limited to these.

Probability of object detection refers to the possibility of object detection in an image by the network and the possibility of that object belonging to an object class, whereas accuracy refers to how accurate the network actually is when determining an object and object class, where the predictions of the network are tested on an image verification batch. In one aspect, the accuracy of object detection may describe the circumstance that the user sets a threshold in the program. If the marked object is above this threshold, the neural network will suggest this object class to be associated with the marked object.
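The thresholding behaviour described above may be sketched as follows; a minimal Python illustration in which the detection tuples and the threshold value are assumed for the example only.

```python
def suggest_class(detections, threshold=0.8):
    """For each marked object, suggest its object class only when the
    detection probability is above the user-set threshold."""
    suggestions = []
    for object_class, probability in detections:
        suggestions.append(object_class if probability >= threshold else None)
    return suggestions

# Example: only the first marked object clears the threshold.
print(suggest_class([("insulator", 0.93), ("insulator", 0.41)]))
# -> ['insulator', None]
```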

One effect of the embodiment is that the data used for training the collective model variables in the neural network, for constructing models to classify the individual annotated objects as a member of an object class, only comprises an image training batch, annotation of relevant objects, and associated object classes. Thus, the provided data preparation does not require a high level of computer science and deep learning competences, which has the advantage that the training may be performed by persons unskilled in computer science and deep learning. The persons performing the training only need the skill of recognizing relevant objects in the images.

Another effect of the embodiment is that the training may be performed on a wide range of objects, with the advantage that collective model variables may be trained for constructing models for a wide range of object detection tasks. Thus, this embodiment is advantageous in regard to constructing models for object detection in many different inspection domains. Furthermore, it is advantageous in regard to constructing models for object detection with a high degree of invariance, for example to the objects' outline, size, scale, rotation, colour or the like. The invariance may encompass a number of features and is by no means limited to the examples mentioned here.

Thus according to the above, the training may encompass one model for detection of one or more objects or multiple models, each model constructed for detection of one or more objects.

Yet another effect of the embodiment is that the training may be performed for object detection with a given accuracy. The advantage is that the training may be completed with an accuracy of object detection evaluated to be sufficient for the given task, thereby limiting the training effort and time to a minimum. This may also be advantageous in regard to the fact that the level of training may be adapted to the complexity of the object detection.

An additional effect of the embodiment is that the training may be performed using an image training batch which comprises training images with multiple objects, where the individual objects belong to different object classes. One advantage of this may be that multiple models for individual object classes may be constructed in one training process, thereby limiting the training effort and time to a minimum. The multiple models may for example be used as one comprehensive model for multiple object detection on a wide range of structure and infrastructure inspections, or a single model may be separated out for more focused object detection on a single structure in very specific infrastructure inspections.

Yet an additional effect of the embodiment is that the training may be performed by multiple users on either multiple image training batches, or on one image training batch, with the advantage of leveraging the strengths of cloud infrastructure and CNNs to construct a model with limited training effort and time consumption for each user. In case of training with several users, it may be preferable to incorporate history on the user interactions and appoint different user levels, where the user level is associated with a hierarchy for object annotation and object classification. The effect of multiple users may be that the training may be divided among more users. Furthermore, more users may provide for a more diverse image training batch, if each user contributes different images, and a more comprehensive image training batch may be established if more users contribute their own images. This may be advantageous in regard to reduced time-consumption for the individual user, improved accuracy of object detection and thus constructing more accurate models.

One object of the invention may be achieved by the computer-implemented method comprising a further act of iteratively performing one or more of the following acts, where one act comprises displaying a training image comprising one or more machine marked objects associated with a machine performed classification of the one or more individual objects, another act comprises changing the machine object marking, the machine object classification or both, and yet another act comprises evaluating the level of training of the collective model variables for terminating the training of the model.

In general, annotation is used in relation to an action performed by user interaction through the graphical user interface, while marking is used in relation to an action performed by the neural network based on the constructed model.

One effect of this embodiment may be that the collective model variables are continuously trained in the iteratively performed acts and that the constructed model is improved accordingly after each performed iteration. This is advantageous in regard to continuously evaluating the level of training such that the training can be terminated once an appropriate accuracy of object detection is reached, thereby limiting the training effort and the time consumption of excessive training.
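The iterative train-evaluate-terminate cycle may be sketched as follows; `model`, `annotated_batches` and `evaluate` are placeholders assumed for the example, not part of the disclosed method.

```python
def train_until_sufficient(model, annotated_batches, evaluate,
                           target_accuracy=0.95, max_rounds=20):
    """Iteratively train the collective model variables and evaluate the
    level of training; terminate once the accuracy is deemed sufficient,
    avoiding excessive training effort."""
    accuracy = 0.0
    for _ in range(max_rounds):
        for batch in annotated_batches:
            model.train_step(batch)      # placeholder training update
        accuracy = evaluate(model)       # e.g. score on a verification batch
        if accuracy >= target_accuracy:  # level of training is sufficient
            break                        # terminate the training
    return model, accuracy
```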

The iterative training may have the further effect that correctly performed markings may simply be accepted, and thus fewer and fewer annotations have to be performed as the training proceeds. As the training proceeds, the annotations may be limited to images with new information, different viewpoints or objects within the same class but with features not seen in the previous images. The iterative training may therefore present the advantage that the time consumed for training is greatly reduced.

Another effect of the embodiment may be that object marking and object classification can be changed to correct the training of the collective model variables. This may be advantageous in regard to continuously adjusting the training.

The model may also include collective variables for the part or parts of the images not comprising objects, which may be referred to as background. The iterative act of annotating may thus include annotating sections of the background and associating this or these annotations with an applicable object class, for example "background", "other", "non-applicable", "non-object" or other creatively named classes. The annotation of the background may comprise small sections of the image background, a complete image or the remaining part of the image surrounding other annotated objects. The effect of annotating and classifying sections comprising background is to establish segregation between background and other objects, the relevant objects to be detected. It is important to obtain a broad diversity in the annotated sections of background to improve the accuracy of segregation between background and other objects and thus the probability and accuracy of object detection.

Based on performed experiments, the best results are obtained by annotating small sections of the image background in combination with annotating complete images without any relevant objects. However, all of the above described methods, alone or in combination, may still be applicable with good results.

One object of the invention may be achieved by the computer-implemented method wherein the acts of annotating, associating and returning are performed iteratively before subsequently performing the act of constructing.

One effect of this embodiment may be that the user interaction may be performed on a sub-batch of training images without waiting for the act of construction to be performed between each image. By postponing the act of construction and collecting the acts of constructing for the entire sub-batch, the user may earn the advantages of a concentrated work effort on the sub-batch and consecutive time for performing other tasks while the act of constructing is performed.

One object of the invention may be achieved by the computer-implemented method comprising a further act of performing intelligent augmentation.

Data augmentation is the art of changing the image of an object without changing the object class, regardless of localization in the image. This means that it is the same object no matter whether the object is lighter or darker than before, whether it is rotated or not, or whether it is flipped or not, to mention a few examples. Common practice, to reduce the amount of training data required for training one or more collective model variables in the neural network, is to adapt the present image training set to simulate different images. This means that one image of an object may be expanded to multiple images of the same object but imaged with different variations - the number of new images may be as many as 500 or more.

By intelligent augmentation is meant that only the relevant changes to an image of an object are considered. Thus, the purpose of intelligent augmentation is to use data augmentation in an intelligent way to reduce the complexity of the neural network. For example, if a rotation of an imaged object never occurs in real images, it will take up complexity in the neural network and may never be used. This means that some weights will be reserved for this information and thereby cannot be used for something else that might be more relevant, which may cost accuracy. Thus, the intelligent augmentation incorporated in this embodiment provides for processing the image for better accuracy of object detection. This processing may include scaling of the images, rotation of the images based on annotations, and associating annotations and object classes. The annotation may be performed on objects of different sizes or displayed at different angles, which is exactly what may be used in intelligent augmentation for more accurate object detection.
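As a rough sketch, intelligent augmentation can be viewed as generating only the variations that occur in the real imagery; the transform set below is an assumption for illustration.

```python
import numpy as np

def augment_intelligently(image, allow_rotation=False, allow_flip=True,
                          brightness_deltas=(-30, 30)):
    """Generate only relevant variations of an annotated object image, so
    the network does not reserve weights for invariances that never occur."""
    variants = [image]
    if allow_flip:
        variants.append(np.fliplr(image))   # horizontal flip
    if allow_rotation:
        variants.append(np.rot90(image))    # 90-degree rotation
    for delta in brightness_deltas:         # lighter and darker copies
        shifted = np.clip(image.astype(int) + delta, 0, 255)
        variants.append(shifted.astype(np.uint8))
    return variants
```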

One object of the invention may be achieved by the computer-implemented method comprising a further act of establishing at least one image verification batch.

Establishing an image verification batch may have the effect of evaluating the accuracy of the constructed model. The image verification batch is not used for training but only to test the constructed model. This may be advantageous in comparing a previously reached training level with a subsequent model constructed after further training.

Furthermore, from the verification batch and the accuracy by which the object detection is performed, the accuracy may be used in itself to establish whether the model has sufficient accuracy. Thereby it is possible to evaluate whether the model should be changed, whether more training data should be provided, or whether the accuracy may be reached using a simpler model. The advantage of using, for example, a simpler model is that less memory is required, and thus less disk space and less training data may be required.
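A minimal sketch of scoring a constructed model on a held-out verification batch; the `detect` callable is an assumed stand-in for running the model and is not part of the disclosure.

```python
def verify(model, verification_batch, detect):
    """Estimate accuracy on images never used for training.

    `verification_batch` holds (image, expected_classes) pairs and
    `detect(model, image)` is assumed to return the detected classes."""
    correct = sum(
        1 for image, expected in verification_batch
        if set(detect(model, image)) == set(expected)
    )
    return correct / len(verification_batch)  # accuracy in [0, 1]
```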

One object of the invention may be achieved by the computer-implemented method comprising a further act of reducing complexity of the model, reducing the specifications or both as a result of evaluating the constructed model or the use of the neural network specifications.

An effect of this embodiment may be that a simpler model or a simpler neural network may be used for training the collective model variables. This may be advantageous in regard to reduced processing time. Another advantage may be that the required PC capacity may be reduced. Yet another advantage may be that less powerful graphics processing units (GPUs) may be used. This enables the use of cheaper hardware elements and thus reduces costs for training, object recognition, or both. As previously mentioned, the advantage of using, for example, a simpler model is that less memory is required, and thus less disk space and less training data may be required.

One object of the invention may be achieved by the computer-implemented method comprising a further act of reducing the image training batch as a result of evaluating the accuracy of object detection.

The effect of reducing the image training batch is that the training effort and time spent by the user may be reduced, resulting in reduced costs for training. In another aspect, the image training batch may be reduced by omitting cluttered, shaken or blurred images. Including these images may harm the training, with reduced accuracy as a result. Alternatively, the relevant objects in such images may be pruned and thus still be used in the image training batch, which may have the advantage of widening the object recognition and thus increasing the accuracy of object detection.
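One common way to screen out shaken or blurred images before training is the variance-of-Laplacian focus measure, sketched here with OpenCV; the threshold value is an assumption and would need tuning per dataset.

```python
import cv2

def is_sharp(image_path, threshold=100.0):
    """Return True if the image is sharp enough to keep in the image
    training batch, judged by the variance of its Laplacian."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold
```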

One object of the invention may be achieved by the computer-implemented method wherein annotating an object is performed by an area-selection of the training image comprising the object or pixel-segmentation of the object.

One effect of this embodiment is that common practice for annotating objects may be used, with the advantage that the computer-implemented method may be implemented on a wide range of neural networks.

One object of the invention may be achieved by the computer-implemented method wherein annotating an object is performed using a computer-implemented annotation tool configured with a zoom-function. The computer-implemented annotation tool is configured for providing an area-selection interface for area-selection of an object in the training image by user interaction, which area-selection is adjustable; configured for providing a pixel-segmentation interface for pixel-segmentation of an object in the training image by user interaction, which pixel-segmentation is configured to pre-segment pixels by grouping pixels similar to a small selection of pixels chosen by user interaction; or configured for both. Furthermore, the annotation tool is configured to transform annotation from pixel-segmentation of an object into area-selection of the object in the training image.

The zoom-function has the effect that more precise annotations, comprising a minimum of background, may be performed, with the advantage of accurate object detection.

The adjustable area-selection provides for the same effect and advantage as the zoom-function.

One effect of the fact that the pixel-segmentation in this embodiment is configured to pre-segment pixels is that only a small selection of pixels needs to be chosen by user interaction, after which the computer-implemented annotation tool pre-segments pixels by grouping pixels similar to the small selection of pixels chosen by user interaction. Thus, each pixel comprised in the object does not have to be selected by the user, which would be a tedious and imprecise process.

Another effect of the embodiment, as the annotation tool is configured to transform annotation from pixel-segmentation of an object into area-selection of the object in the training image, is that the annotation may be saved to other formats of neural networks and may thus be used independently of the format or type of the neural network.
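The transformation from pixel-segmentation to area-selection amounts to taking the bounding box of the segmented pixels; a minimal numpy sketch.

```python
import numpy as np

def mask_to_area_selection(mask):
    """Convert a boolean pixel-segmentation mask into an axis-aligned
    area-selection given as (x_min, y_min, x_max, y_max)."""
    ys, xs = np.where(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Example: a 5x5 mask with a segmented 2x3 object.
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:4] = True
print(mask_to_area_selection(mask))  # -> (1, 1, 3, 2)
```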

One object of the invention may be achieved by the computer-implemented method wherein the computer-implemented annotation tool further provides for colour overlay annotation, which colour is associated with an object classification and which object classification is associated with the annotation; provides for re-classification of one or more individual annotated objects, machine marked objects or a combination of both; or provides for both. Furthermore, the annotation tool is configured to show all annotations and machine marks associated with an object class in one or more training images.

One effect of this embodiment is that the associated classes are easily identified due to the colour overlay. Typically, there will be several types of object classes in the same image, where it is especially advantageous to easily identify the different associated classes and thereby identify erroneous annotations or classifications. The embodiment has the further effect that erroneous annotations or classifications may be corrected immediately.
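Colour overlay annotation may be sketched as an alpha blend of a class colour over the annotated pixels; the class-to-colour mapping below is hypothetical.

```python
import numpy as np

# Hypothetical mapping from object class to an overlay colour (RGB).
CLASS_COLOURS = {"insulator": (255, 0, 0), "background": (0, 0, 255)}

def colour_overlay(image, mask, object_class, alpha=0.4):
    """Blend the class colour over the annotated pixels so the associated
    object class is immediately visible in the GUI."""
    colour = np.array(CLASS_COLOURS[object_class], dtype=float)
    out = image.astype(float)
    out[mask] = (1 - alpha) * out[mask] + alpha * colour
    return out.astype(np.uint8)
```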

Another effect of this embodiment is that when all annotations, markings and associated object classes are shown, it provides for easy correction of mistakes, with the advantage of optimizing the training.

One object of the invention may be achieved by the computer-implemented method wherein the computer-implemented annotation tool further provides for history of the performed annotation. In case of training with several users, this embodiment may have the effect that annotations performed by super-users may not be overwritten by less experienced users, which may be advantageous in regard to achieving a high level of training. A further effect is that the user may see his/her own annotation history, which may be advantageous in regard to improving his/her own skills.

Another effect of this may be that the history comprises relevant information on whether an object was originally annotated by a human or originally marked by the neural network. Even if the annotation or marking is accepted, there might be issues with the precision of edges. The user might be inclined to accept an imprecise but correct result from the neural network, compared to if the user had to make the annotation. This may introduce inaccuracies in the training if not corrected. Thus, for an experienced user this may be discovered when consulting the history on the annotations/markings and be corrected to restore or improve the accuracy of the training.

In one aspect, the computer-implemented annotation tool may comprise a function to rotate a marking or an annotation. A rotated marking or annotation provides for selecting objects that are inclined without getting too much background, thereby achieving markings/annotations with a better fit so that the training becomes more precise.

In another aspect, the computer-implemented annotation tool may comprise a function to move an area-selection to a new object, thereby avoiding redrawing the annotation box if the new object has the same properties.

In yet another aspect, the computer-implemented annotation tool may comprise a function to repeat an area-selection. If multiple objects appear in an image, this function can repeat the area-selection for the next object, thereby avoiding redrawing the annotation box if the next object has the same properties.

In yet another aspect, the computer-implemented annotation tool may comprise a one-key function for saving the image including annotations and object classes, which function provides unique identifiers for the image dataset to be saved. Thereby overwriting existing data is avoided and time consumption is reduced. Furthermore, the user does not have to remember the sequence of names, as the function may keep track of these.
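The one-key save with unique identifiers may be sketched as follows; the JSON layout and directory convention are assumptions made for the example.

```python
import json
import uuid

def save_annotated_image(dataset_dir, annotations):
    """One-key save: write the annotations and object classes under a
    unique identifier so existing data is never overwritten and the user
    need not remember any naming sequence."""
    identifier = uuid.uuid4().hex
    with open(f"{dataset_dir}/{identifier}.json", "w") as f:
        json.dump(annotations, f)
    return identifier
```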

One object of the invention may be achieved by the computer-implemented method wherein navigation in the image training batch is performed using a computer- implemented navigation tool providing for navigation by image management, and providing for status on progression of evaluating the image training batch.

One effect of this embodiment may be that the user may be motivated by following the progress, with the advantage of keeping the user alert and thereby avoiding erroneous annotations or wrong object classes associated with the annotations.

Another effect may be that the user may gain a better overview of the image training batch and may skim through the training images, thereby only focusing on images with relevant objects. This may have the advantage of keeping the user alert to avoid mistakes and furthermore limit the training effort and time consumption provided by the user for the training.

One object of the invention may be achieved by a computer-implemented method in a neural network for object detection in an unprocessed image with probability of object detection. The method comprises the act of providing a constructed model to a neural network configured with a set of specifications; the act of establishing at least one unprocessed image batch, which batch comprises at least one unprocessed image to be subject to object detection; the act of providing a graphical user interface (GUI) configured for displaying one or more unprocessed images with a set of marked objects, each individual marked object associated with an object class; the act of performing object detection in an unprocessed image; and the act of returning the unprocessed image with a set of marked objects, each individual marked object associated with an object class.

One effect of this embodiment is that the huge potential of machine learning technology and neural networks to efficiently perform complex data analysis may be utilised by persons unskilled in computer science. This is advantageous in regard to allowing such persons to use constructed models in neural networks to analyse their data, which may provide reduced time and cost. The reduction in cost and time may be both in regard to hardware requirements and in labour.
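The object detection method may be sketched as follows; the `detect` callable standing in for the neural network and the probability threshold are assumptions made for the example.

```python
def detect_in_batch(model, unprocessed_batch, detect, threshold=0.5):
    """Run a constructed model over an unprocessed image batch and return
    each image with its set of marked objects and object classes.

    `detect(model, image)` is assumed to yield (box, object_class,
    probability) triples for each detected object."""
    results = []
    for image in unprocessed_batch:
        marks = [(box, object_class)
                 for box, object_class, probability in detect(model, image)
                 if probability >= threshold]
        results.append((image, marks))
    return results
```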

One object of the invention may be achieved by a computer-implemented method in a neural network for object detection comprising a further act of providing access to a neural network for further training one or more collective model variables of the model, such that the model is subject to improved accuracy of object detection.

One effect of this embodiment is that the model may be continuously improved or updated. This is advantageous if objects with new features, belonging to an already existing object class, appear on the market. In this case the model may be trained to include these objects without training a new model.

Examples of user cases:

Case 1:

A user has made an inspection resulting in 1000 images and would like to set up a new model to detect one class of objects, in this case insulators. Thus, the image training set comprises 1000 images. The user selects an existing neural network comprising a set of specifications. The user then specifies the relevant number of object classes, in this case two classes: insulators and background. Furthermore, the user specifies that annotation is performed by pixel-segmentation. The user then looks through the first 10 training images and chooses a small selection of pixels, after which the annotation tool performs the complete pixel-segmentation by pre-segmenting pixels in the image. After annotation of the first 10 images, the first process of training the collective model variables is performed and a model is constructed. The model is then able to give suggested markings for the remaining 990 images. The user looks through the next 40 images. On 30 images the objects are marked correctly, and thus the user accepts these without changing the markings or the classifications. On 10 images the objects are not marked correctly, so these are corrected.

Now, a second process of training the collective model variables is performed and an updated model is constructed. The model is improved by the second process and with an improved accuracy of object detection.

As the model is improved, the user now looks through the next 100 images. This time only 10 of the images comprise incorrect markings. The markings on the other 90 images are correct and accepted.

Accepting an image is a one-button click and the program automatically goes to the next image. As the user reaches image no. 500, this image and the next 100 images do not comprise any relevant objects (insulators) for this case. The user goes to the navigation thumbnail view, where the current image is highlighted, and scrolls through the next 100 images down to image no. 600 - the next image on which insulators again appear. The user then chooses this picture through the user interface by clicking on that image, after which the user continues accepting or correcting markings. In between, the user may optionally stop to train the model so the markings get iteratively better. The user may stop after completing the 1000 images, whereupon an updated model is constructed - for this case the "first version" model.

Before continuing, the user now initiates a new training on the same 1000 images, starting with the constructed "first version" model. This time the training is done with a higher number of iterations. This extends the training time but is done to improve the accuracy of the model. After completing the 1000 images, the further updated model is constructed - for this case the "second version" model.

A second user is also interested in insulators but wants to distinguish between glass insulators and ceramic insulators. The user therefore specifies two new classes: "Insulator, glass" and "Insulator, ceramic". The second user benefits from the fact that a large image training batch has already been used to construct a model for object detection on insulators. The second user now loads the previously annotated training set, and in the thumbnail view the user can see all markings of insulators. For each insulator, the second user can now, through the user interface, simply click on each marking and change the object class to either of the two newly specified classes. The second user does not have to do the marking again, and furthermore does not have to look through the images without insulators. The second user may now finish the training by constructing the new updated model - for this case the "third version" model.

A third user just wants to know whether an insulator is comprised in an unprocessed image batch or not. This user is not interested in knowing exactly which pixels depict the insulator. This user specifies that area-selection shall be used. This user - just as the second user - benefits from the fact that a large image training batch has already been used to construct a "first version" model for object detection on insulators. Furthermore, this user - again just as the second user - now loads the previously annotated training set, and the neural network converts the pixel-segmented insulators to area-selected insulators using intelligent data augmentation for this type of neural network. The third user may now finish the training by constructing yet another new updated model - for this case the "fourth version" model.

An objective may be achieved by use of a computer-implemented method for constructing a model in a neural network as outlined, where the images are collected by use of an airborne vehicle such as a drone.

In particular, unmanned airborne vehicles such as drones may be used for inspection of areas or infrastructure. Drones have proven a valuable tool, carrying image recording devices to places not hitherto accessible. Likewise, drones have proven capable of positioning image recording devices at a breadth of angles, distances etc. to subjects. Furthermore, drones have proven capable of tracking paths of structures or infrastructure and of collecting vast amounts of images during operation.

In practice, drone operators and inspectors will aim to collect as many images as possible during a flight, which is often planned in detail and must be performed taking limited flight time into account.

Thus, an image batch from a drone flight comprises a vast amount of images, often from different - or slightly different - angles of a subject, or often similar subjects from different locations along a flight path. Another problem with such series or collections of images is that the drone inspection results in images taken from a perspective hitherto unseen by human inspection.

The disclosed methods have been shown to overcome issues with training or construction of models and to enable management of the big data collected.

Likewise, a computer-implemented method in a neural network for object detection as disclosed, wherein an unprocessed image or a batch of images is obtained from a drone flight, has been shown to be more accurate than hitherto.

Further aspects to case 1:

The users may choose that 20% of the images are reserved for an image verification batch; thus, the remaining 80% of the images comprise the image training batch. The image verification batch may be used to test the accuracy of the constructed model.

Through the training of the collective model variables, and as the intermediate models are constructed, the accuracy of an intermediate model may be tested by use of the verification batch. Thereby the accuracy of the model may be made available to the user. Furthermore, the neural network may suggest whether the model should be further improved or whether simplifications may be made to the training.
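The 80/20 split between the image training batch and the image verification batch may be sketched as follows; the fixed seed is an assumption to make the split reproducible.

```python
import random

def split_batches(images, verification_fraction=0.2, seed=42):
    """Reserve a fraction of the images for the image verification batch;
    the remainder forms the image training batch."""
    shuffled = list(images)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * verification_fraction)
    return shuffled[cut:], shuffled[:cut]  # (training, verification)
```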

As a further training of both the "third version" and the "fourth version" models, the respective second and third users may add and annotate new images with imaged insulators. These imaged insulators could be previously known insulators or a new class unknown to the system.

Case 2:

A user loads a satellite image map of Greenland. The user marks polar bears x number of times. The system can now detect polar bear locations and the total number of polar bears.

Case 3:

A user adds one or more thermal images of central heating pipes for a given area. The user specifies 5 classes, each representing the severity of a leak. After marking these classes the system can now identify leaks with a 1-5 severity degree. In this case the invention is used for object detection where the object classes consist of fault classes.

Case 4:

Whenever the training of the collective model variables is completed, and thus a constructed model is completed, the neural network evaluates whether the completed model should be made available to other users. The evaluation criteria could for example be user ranking, model accuracy, and the number of images in the verification batch, hence the number of images used to determine the accuracy.

Description of the Drawing

Figure 1 illustrates one embodiment of the computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image.

Figure 2 illustrates one embodiment of constructing a model in a neural network for object detection in an unprocessed image.

Figure 3 illustrates one embodiment of the computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image.

Figure 4 illustrates one embodiment of the computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image.

Figure 5 illustrates one embodiment of the computer-implemented method for constructing a model in a neural network for object detection in an unprocessed image.

Figure 6 illustrates a training image.

Figure 7 illustrates area-segmentation (7A) and pixel-segmentation (7B and 7C).

Figure 8 illustrates one embodiment of the computer-implemented annotation tool.

Figure 9 illustrates one embodiment of intelligent data augmentation.

Figure 10 illustrates one embodiment of the computer-implemented navigation tool.

Figure 11 illustrates one embodiment of the computer-implemented method in a neural network for object detection in an unprocessed image.

Detailed Description of the Invention

No. Item

10 Neural network

12 Specifications

14 Collective model variables

16 Trained collective model variables

18 Image dataset

20 Model

22 Machine-mark

24 Annotation

26 Pixel-segmentation

28 Area-selection

30 Navigation

40 Object detection

42 Probability

43 Accuracy

50 Unprocessed image

52 Unprocessed image batch

60 Image training batch

62 User annotated image training dataset

66 Training image

68 Image verification batch

70 Object

72 User annotated object

74 Machine marked object

80 Graphical user interface (GUI)

82 User interaction

90 Object class

92 User classified object

94 Machine classified object

96 Re-classification

100 Computer-implemented method for constructing

102 Constructing

104 Providing

106 Establishing

108 Performing

110 Annotating

112 Associating

114 Returning

116 Training

118 Classifying

120 Displaying

122 Marking

124 Evaluating

126 Terminating

128 Reducing

130 Changing

140 Intelligent augmentation

160 Computer-implemented annotation tool

162 Zoom-function

164 Area-selection interface

166 Adjustable

168 Pixel-segmentation interface

170 Pre-segment

172 Pixel

174 Colour overlay annotation

180 History

190 Computer-implemented navigation tool

192 Image management

194 Status

196 Progression

200 Computer-implemented method for object detection

Figure 1 illustrates one embodiment of the computer-implemented method (100) for constructing (102) a model (20) in a neural network (10) for object detection (40) in an unprocessed image (50). The method comprises the acts of providing (104) a neural network (10) and a GUI (80). Furthermore, an image training batch (60) comprising training images (66) is established (106) in this embodiment. The neural network (10) is configured with a set of specifications (12). These specifications may comprise, amongst others, information on the number of layers and collective model variables. The GUI (80) may be configured for displaying a training image (66) and for displaying user interactions (82) such as annotated objects and object classes.

The computer-implemented method (100) further comprises acts which may be iteratively performed (108). These acts include annotating (110) objects (70) in the training images (66) and associating (112) each annotation with an object class (90). The acts of annotating (110) and associating (112) may be performed in any preferred order, such that an object may be annotated after which an object class is associated with the object annotation, or an object may be associated with an object class after which the object is annotated. The iteratively performed acts further illustrated in the embodiment include returning (114) a user annotated image training dataset, which training dataset comprises the training image and annotated objects with associated object classes if relevant objects are present in the image, and constructing (102) one or more models.

The broken lines illustrate that the acts of annotating (110) and associating (112) may be interchanged, as already described. Furthermore, the broken line illustrates that the acts may be performed in an iterative process, where the model construction receives additional input for each performed iteration. The embodiment may comprise only a single iteration of acts, and thus each act may be performed only once. Furthermore, each iteration may comprise only some of the acts. For example, if no relevant objects are present in the image, no act of annotating (110) and associating (112) an object class will be performed.

After completing the image training batch (60), a trained model (20) is constructed (102).

Figure 2 illustrates one embodiment of constructing a model (20) in a neural network (10) for object detection in an unprocessed image. A training image (66) may be described in an image dataset (18), here illustrated by triangles, crosses and circles. The image dataset is interpreted by the collective model variables (14) in the neural network (10). The training image (66) may comprise an annotation (24), and thus part of the image dataset may be interpreted as annotated data. The constructed model (20) comprises the trained collective model variables (16) resulting from the process of interpreting image datasets by the collective model variables (14) in the neural network (10).

The constructed model (20) further comprises the set of specifications (12) with which the neural network (10) is configured.
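As a minimal sketch of this structure, assuming a simple representation where the trained collective model variables (16) are weight arrays and the specifications (12) are a plain mapping, the constructed model (20) could be bundled as follows; the names are hypothetical:

    from dataclasses import dataclass
    from typing import Dict, List
    import numpy as np

    @dataclass
    class Model:
        """A constructed model (20): the specifications (12) plus the trained
        collective model variables (16)."""
        specifications: Dict[str, object]
        trained_variables: List[np.ndarray]

    def construct_model(specifications, trained_variables) -> Model:
        """Construct (102) the model; both parts are needed to re-instantiate
        the neural network (10) for later object detection (40)."""
        return Model(specifications, trained_variables)

    model = construct_model({"num_layers": 2, "layer_sizes": [64, 32]},
                            [np.zeros((64, 32)), np.zeros((32, 4))])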

Figure 3 illustrates one embodiment of a computer-implemented method (100) for constructing (102) a model (20) in a neural network (10) for object detection in an unprocessed image. The illustrated embodiment comprises a method according to figure 1 but with additional acts. The dotted lines refer to acts already described in figure 1. The embodiment illustrates the further acts which may be performed after a model is constructed (102); the dotted arrow pointing to the act of constructing (102) a model therefore marks where the iteratively performed acts continue from the acts illustrated in figure 1.

The model may be constructed on the basis of a single training image (66). Thus, once a model is constructed (102), the computer-implemented method (100) may comprise the following described acts, which may be iteratively performed along with the iteratively performed acts of annotating (110), associating (112) and returning (114) described in figure 1.

These acts may comprise displaying a training image (66) from the image training batch (60), which training image may comprise a machine marked object (74) and the associated object classification (94) performed using the constructed model. If the machine marking, the classification, or both are incorrect, this may have to be corrected, and thus an act of changing (130) the object marking, the classification, or both may be performed by user interaction. If no changing (130) is performed, an act of evaluating (124) the level of training may be performed. If no changes (130) are performed, and furthermore no relevant objects are found to be unmarked, unclassified or both, the level of training may be evaluated (124) as sufficient and the training may be terminated (126) with the constructed (102) model as a result.
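The control flow of this correction loop could, as a purely hypothetical sketch, look as follows; every function stands in for user interaction (82) through the GUI (80) or for an assumed model interface, none of which are prescribed by the method:

    def review_machine_output(images, mark, retrain, user_corrects, user_accepts):
        """Display machine marked objects (74) and classes (94), let the user
        change (130) incorrect markings or classes, retrain (116) where needed,
        and evaluate (124) whether training can be terminated (126)."""
        for image in images:
            predictions = mark(image)                        # marking (122) with the current model
            corrections = user_corrects(image, predictions)  # changing (130); may be empty
            if corrections:
                retrain(image, corrections)                  # further training (116)
        return user_accepts()                                # evaluating (124): True means terminate (126)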

Figure 4 illustrates one embodiment of the computer-implemented method (100) for constructing (102) a model (20) in a neural network (10) for object detection (40) in an unprocessed image (50).

In line with the embodiment illustrated in figure 1, the method comprises the acts of providing (104) a neural network (10) and a GUI (80). Furthermore, an image training batch (60) comprising training images (66) is established (106) in this embodiment. The neural network (10) is configured with a set of specifications (12). These specifications may comprise, among other things, information on the number of layers and the collective model variables. The GUI (80) may be configured for displaying a training image (66) and for displaying user interactions (82) such as annotated objects and object classes. The computer-implemented method (100) further comprises acts which may be iteratively performed (108). These acts include annotating (110) objects (70) on the training images (66) and associating (112) each annotation with an object class (90). The acts of annotating (110) and associating (112) may be performed in any preferred order, such that an object may be annotated after which an object class is associated with the object annotation, or an object may be associated with an object class after which the object is annotated. The iteratively performed acts further include returning (114) a user annotated image training dataset, which training dataset comprises the training image and the annotated objects with associated object classes if relevant objects are present on the image.

This embodiment differs from the embodiment in figure 1 in that the acts of annotating (110), associating (112) and returning (114) may be performed (108) iteratively before the act of constructing (102) is subsequently performed (108).

An alternative embodiment of the illustrated method may comprise two iterative processes. An inner iterative process comprising the acts of annotating (110), associating (112) and returning (114) may be performed (108) iteratively before subsequently performing (108) an outer iterative process wherein the further act of constructing (102) is performed.
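A hypothetical sketch of these two nested processes, with all function names assumed purely for illustration, could look as follows:

    def train_with_nested_loops(training_batches, annotate, construct):
        """Inner loop: annotating (110), associating (112) and returning (114).
        Outer loop: constructing (102) a model from the accumulated dataset."""
        dataset = []
        model = None
        for batch in training_batches:           # outer iterative process
            for image in batch:                  # inner iterative process
                dataset.append(annotate(image))  # acts 110, 112 and 114
            model = construct(dataset)           # act 102, once per outer iteration
        return model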

The broken lines illustrate that the acts of annotating (110) and associating (112) may be interchanged, as already described. Furthermore, the broken lines illustrate that the acts may be performed in an iterative process, where the model construction receives additional input for each performed iteration. The embodiment may comprise only a single iteration of the acts, in which case each act is performed only once. Furthermore, each iteration may comprise only some of the acts. For example, if no relevant objects are present on the image, no acts of annotating (110) and associating (112) an object class will be performed.

After completing the image training batch (60), a trained model (20) is constructed (102).

Figure 5 illustrates another embodiment of a computer-implemented method (100) for constructing a model (20) in a neural network for object detection in an unprocessed image. The method comprises the acts of providing (104) a neural network (10) and a GUI (80), which is not illustrated. Furthermore, an image training batch (60) comprising training images (66) is established (106) in this embodiment. In this embodiment, annotating (110) of objects is performed in a first sub-batch of the image training batch (60). Based on the annotated images, the collective model variables are trained (116) in the neural network for constructing a model (20). Subsequently, a second sub-batch of the remaining image training batch is established (106), and the constructed model is used for marking (122) objects in the second sub-batch. After the machine-performed marking (122), these markings are evaluated (124) by user interaction. This evaluation of the second sub-batch may lead to changing (130) of the machine marking, additional annotation (110) of objects, or both. Depending on whether the evaluation (124) of the machine marking gives reason for changing (130) object markings or annotating (110) additional objects, the collective model variables may be further trained (116), either by confirming that the object marking (122) is correct or by the performed changes and/or additional annotations.

If the model is evaluated as requiring further training, a third sub-batch of images may be established and another iteration, starting with marking (122) objects using the updated constructed model, may be performed.

If the collective model variables are evaluated as being sufficiently trained, the method may be terminated (126), and the trained collective model variables (16) comprise the constructed model (20) for subsequent use in a neural network for object detection in an unprocessed image.
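As a purely illustrative sketch of this sub-batch workflow, where all callables are assumptions standing in for the training, marking and user-review steps, the iteration could be expressed as follows:

    def train_in_sub_batches(sub_batches, annotate, train, mark, user_review):
        """Sketch of figure 5: annotate (110) a first sub-batch, train (116),
        then mark (122) each following sub-batch, let the user evaluate (124)
        and change (130), and retrain until training is terminated (126)."""
        annotated = [annotate(image) for image in sub_batches[0]]
        model = train(None, annotated)                        # first training (116)
        for sub_batch in sub_batches[1:]:
            marked = [mark(model, image) for image in sub_batch]  # marking (122)
            reviewed, sufficient = user_review(marked)        # evaluating (124), changing (130)
            if sufficient:
                break                                         # terminating (126)
            model = train(model, reviewed)                    # further training (116)
        return model                                          # trained collective model variables (16)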

In figure 6 a training image (66) is illustrated on which different objects (70) are annotated (24) and associated with an object class (90). The objects (70) are annotated (24) using area-selection (28). The example on the training image concerns high-voltage cable systems. The annotated objects are two vibration absorbers and two insulators. All four objects (70) are individually annotated (24) and associated with an object class (90). Other objects that could be relevant in other contexts could for example be the cables or the mast, which should then have been annotated (24) as objects and associated with an object class (90).

Figure 7 illustrates two different approaches for annotation of objects: area-selection (28) and pixel-segmentation (26). For the illustrated embodiments an insulator is used as the object for exemplary purposes. The computer-implemented annotation tool provides for both kinds of annotation and may be used in both cases. However, other appropriate annotation tools may also be used. In figure 7A area-selection (28) is illustrated. Area-selection is performed simply by framing the object, as illustrated by the broken line. Pixel-segmentation (26) is illustrated in figures 7B and 7C. Pixel-segmentation (26) is performed by choosing the pixels constituting the imaged object, or a small selection of the pixels constituting a small part of the imaged object. From the selected pixels the annotation tool locates the boundaries of the object. Thus, the object is annotated by the located boundaries, as illustrated in figure 7C by the patterned areas.
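As a minimal sketch, assuming the segmented pixels are already available as a boolean mask rather than grown from the user-selected seed pixels as the annotation tool would do, a pixel-segmentation (26) can be reduced to an area-selection (28) frame as follows:

    import numpy as np

    def mask_to_area_selection(mask):
        """Return the tightest axis-aligned frame around the segmented pixels (172)."""
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return None                       # no object pixels selected
        return (cols.min(), rows.min(),       # x, y of the frame
                cols.max() - cols.min() + 1,  # width
                rows.max() - rows.min() + 1)  # height

    mask = np.zeros((100, 100), dtype=bool)
    mask[20:60, 30:45] = True                 # pixels constituting the imaged object
    print(mask_to_area_selection(mask))       # (30, 20, 15, 40)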

In figure 8 one embodiment of annotation (24) using the computer-implemented annotation tool (160) is illustrated. The annotation may subsequently be used for intelligent augmentation (140). In figure 8A an object (70) on the training image (66) is annotated (24) using area-selection (28). The exemplary object is a vibration absorber. In figure 8B rotated area-selection is used. The rotated area-selection may subsequently be used for intelligent augmentation as illustrated in figure 9. The rotated annotation in figure 8B may provide for a more accurate object classification.

Figure 9 illustrates an embodiment of intelligent data augmentation. In figure 9A the object is annotated using area-selection, and in figure 9B pixel-segmentation is used for annotating the object. In both cases intelligent augmentation (140) is performed by extracting information on the dimensions and the rotation of the object. In the illustrated embodiment a width, a length and a rotation of the object are extracted. The resulting information on dimensions and rotation may be used for scaling the images for more accurate object detection. Furthermore, this may be used when converting from pixel-segmentation to area-selected annotation or marking.

In figure 10 one embodiment of the computer-implemented navigation tool is illustrated. The illustration shows the graphical navigation tool as a displayed GUI (80). The GUI (80) may be divided into several sections: one section where the current training image (66) with annotations (24) is displayed, provided with forward and backward navigation (30) between the training images (66); another section, here illustrated below the training image (66), may display the status (194) on the progression (196) of evaluating the image training batch. The status may display for how many of the training images (66) of the image training batch annotation and object classification have been performed. The status may be displayed as a percentage, as the current image number versus the total number of images, or in other appropriate measures. Yet another section may display two rows of images comprising the image training batch (60), where one row shows previous training images on which annotation (24) and classification have been performed; this row thus shows the annotation history (180). The other row may show the subsequent training images, which have not yet been subject to annotation (24) and object classification. Both rows may be provided with forward and backward navigation (30) between the training images (66). Each row can be displayed alone or together with the other.
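As a purely hypothetical sketch of the extraction step described above for figure 9, using a principal-axis fit which the method does not prescribe, the length, width and rotation of a pixel-segmented object could be derived as follows:

    import numpy as np

    def extract_dimensions_and_rotation(mask):
        """Return (length, width, rotation in radians) of a segmented object."""
        points = np.argwhere(mask).astype(float)  # (row, col) of each object pixel (172)
        centred = points - points.mean(axis=0)
        cov = np.cov(centred, rowvar=False)       # 2 x 2 covariance of the pixel cloud
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        major_axis = eigvecs[:, -1]               # direction of the longest extent
        rotation = np.arctan2(major_axis[0], major_axis[1])
        projected = centred @ eigvecs             # coordinates along minor/major axes
        extents = projected.max(axis=0) - projected.min(axis=0)
        width, length = extents                   # minor extent first, major extent last
        return length, width, rotation

    mask = np.zeros((100, 100), dtype=bool)
    mask[40:45, 10:90] = True                     # an elongated, horizontal object
    print(extract_dimensions_and_rotation(mask))  # length ~ 79, width ~ 4, rotation ~ 0
                                                  # (the eigenvector sign may shift the angle by pi)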

Figure 11 illustrates one embodiment of the computer-implemented method (200) in a neural network (10) for object detection in an unprocessed image (50) with probability of object detection. The method comprises the acts of providing (104) a neural network (10) configured with a set of specifications (12) and a graphical user interface (GUI) (80). Furthermore, an act of establishing (106) at least one unprocessed image batch (52) is comprised in the method.

The unprocessed image batch (52) may comprise at least one unprocessed image (50) to be subject to object detection. The neural network (10) is provided with a constructed model (20) with trained collective model variables, and the GUI (80) is configured for displaying one or more unprocessed images (50) with a set of marked objects (74) and associated object classes (90).

Hereafter, the method comprises the further acts of performing (108) object detection in an unprocessed image and returning (114) the unprocessed image (50) with a set of marked objects (74) and machine classified objects (94).
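As a purely hypothetical sketch of these final acts, where the predict callable stands in for an assumed interface to the neural network (10) instantiated with the constructed model (20), the detection method (200) could be outlined as follows:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        """A machine-mark (22) with its machine classified object (94) and probability (42)."""
        x: int
        y: int
        width: int
        height: int
        object_class: str
        probability: float

    def detect_objects(predict, unprocessed_batch: List[str]):
        """Perform (108) object detection on each unprocessed image (50) and
        return (114) it with its set of marked objects (74); predict is assumed
        to yield (box, class, probability) triples per image."""
        results = {}
        for image_path in unprocessed_batch:
            results[image_path] = [Detection(*box, object_class, probability)
                                   for box, object_class, probability in predict(image_path)]
        return results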