

Title:
EVALUATION SYSTEM, EVALUATION DEVICE, COMPUTER-IMPLEMENTED EVALUATION METHOD, COMPUTER PROGRAM AND AI SYSTEM
Document Type and Number:
WIPO Patent Application WO/2018/051179
Kind Code:
A1
Abstract:
An evaluation system is provided for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories. The evaluation system may comprise: a learning information storage unit configured to store information regarding at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information; a determination condition storage unit configured to store a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; a classification result input unit configured to receive classification results output from the at least two or more learning results, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines; a determination unit configured to determine whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and an identification information assigning unit configured to assign the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

Inventors:
ANDO TANICHI (JP)
Application Number:
PCT/IB2017/001122
Publication Date:
March 22, 2018
Filing Date:
September 14, 2017
Assignee:
OMRON TATEISI ELECTRONICS CO (JP)
International Classes:
G06Q50/04
Domestic Patent References:
WO 2006/124635 A1, 2006-11-23
Foreign References:
DE 19743600 A1, 1999-04-15
US 2010/0063948 A1, 2010-03-11
Other References:
None
Claims:
CLAIMS

1. An evaluation system for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories, the evaluation system comprising:

a learning information storage unit configured to store information regarding at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information;

a determination condition storage unit configured to store a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a classification result input unit configured to receive classification results output from the at least two or more learning results, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

a determination unit configured to determine whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit configured to assign the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

2. The evaluation system according to claim 1,

wherein the plurality of production lines are controlled based on the identification information assigned to the corresponding classification results and/or to the corresponding learning results by the identification information assigning unit.

3. The evaluation system according to claim 1 or 2, wherein:

- if the same identification information is assigned to two or more of the classification results and/or to two or more of the learning results, two or more of the plurality of production lines corresponding to the two or more of the classification results and/or to the two or more of the learning results are controlled so that the two or more of the plurality of production lines produce a specified number or specified numbers of articles; and/or

if identification information assigned to one of the learning results and/or of the classification results is different from the identification information assigned to any other learning result and/or classification result, a production line corresponding to said one of the learning results and/or of the classification results is controlled so as not to produce articles.

4. The evaluation system according to any one of claims 1 to 3,

wherein the identification information assigning unit is configured to assign the same identification information to the learning results that output the classification results that were determined to be equivalent, and

the determination unit is configured to determine equivalency of the learning results based on a determination of whether the classification results are equivalent.

5. The evaluation system according to any one of claims 1 to 4,

wherein the classification result equivalency determination condition includes at least one of a degree of coincidence of classification subjects of the learning results and a degree of coincidence of classification results output from the learning results.

6. The evaluation system according to any one of claims 1 to 5, further comprising:

a setting unit that sets a range in which the identification information is assigned,

wherein the identification information assigning unit is configured to assign identification information to at least one of the learning results and the classification results in the set range in which identification information is assigned.

7. The evaluation system according to any one of claims 1 to 6, further comprising

a classification subject determination unit configured to determine equivalency of classification subjects classified by the learning results.

8. The evaluation system according to any one of claims 1 to 7, further comprising

a re-learning request unit configured to, in a case where classification results are not determined to be equivalent by the determination unit, request re-learning with different learning data.

9. The evaluation system according to any one of claims 1 to 8,

wherein in a case where classification results are not determined to be equivalent, the determination unit is configured to change the classification result equivalency determination condition that is used.

10. The evaluation system according to any one of claims 1 to 9, further comprising

an input data changing unit configured to, in a case where classification results are not determined to be equivalent by the determination unit, change input data that is input to the learning results and is used in classification result equivalency determination.

11. The evaluation system according to any one of claims 1 to 10, further comprising

a classification result combining unit configured to combine classification results output from the learning results and generate new classification results, wherein

the determination unit is configured to determine whether classification results generated by the classification result combining unit are equivalent.

12. An evaluation device for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories, the evaluation device comprising:

a classification result input unit configured to receive classification results output from at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

an acquisition unit configured to acquire information regarding learning results of classification results that are input, and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a determination unit configured to determine whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit configured to assign the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

13. A computer-implemented evaluation method for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories, the method comprising:

receiving classification results output from at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

determining whether, out of classification results output from the at least two or more learning results that attained classification ability through machine learning, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; and

assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

14. A computer program comprising computer-readable instructions that, when loaded and run on a computer, cause the computer to perform the method according to claim 13.

15. An artificial intelligence, AI, system for evaluating equivalency of at least two AI classifiers, the system comprising:

a learning information storage unit configured to store information regarding learning results of the at least two AI classifiers that attained classification ability through machine learning;

a determination condition storage unit configured to store a classification result equivalency determination condition that enables determining that classification results output from the at least two AI classifiers are equivalent;

a classification result input unit configured to receive classification results output from the at least two AI classifiers;

a determination unit configured to determine whether, out of classification results output from the at least two AI classifiers, at least two classification results are equivalent, with use of information regarding the AI classifiers and the classification result equivalency determination condition; and

an identification information assigning unit that assigns the same identification information to the classification results that were determined to be equivalent, to AI classifiers that output the classification results that were determined to be equivalent, or to both.

Description:
DESCRIPTION

TITLE OF THE INVENTION:

EVALUATION SYSTEM, EVALUATION DEVICE, COMPUTER-IMPLEMENTED EVALUATION METHOD, COMPUTER PROGRAM AND AI SYSTEM

CROSS-REFERENCE TO RELATED APPLICATION

[0001] The present application is based on Japanese Patent Application No. 2016-180222 filed on September 15, 2016, and the disclosure thereof is hereby incorporated herein by reference.

TECHNICAL FIELD

[0002] The present invention relates to an evaluation system, an evaluation device, a computer-implemented evaluation method, a computer program and an artificial intelligence (AI) system. In particular, the present invention relates to evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories.

BACKGROUND ART

[0003]

A system is known for evaluating property of articles, such as their quality, produced in a factory using AI technology. In such a system, for example, images of produced articles may be captured and the produced articles may be classified into two or more groups by an AI classifier based on the captured images. The AI classifier may be trained during the production of the articles.

Recent years have seen progress in AI technology such as deep learning, and technology for classifying subjects based on input data such as camera images and sensor information has started to become prevalent. Also, the use of AI technology such as deep learning has made it possible for machines to learn on their own and attain the ability to classify subjects. This technology makes it possible to configure a classifier that classifies subjects.

[0004] When automatic classification is to be performed by a classifier, a large amount of data regarding the subjects to be classified is gathered in advance, and training is performed based on that data. When training is complete, the device can then classify subjects. There are two main types of machine learning methods, namely supervised learning and unsupervised learning.

[0005] In supervised learning, which class each learning data piece belongs to is determined and labeled in advance. For this reason, the classifier performs neural network training such that when the learning data pieces are received, the learning data pieces are determined to belong to the labeled classes.

[0006] In unsupervised learning, information indicating which class each learning data piece belongs to is not prepared in advance. The classifier learns to group input data into multiple classes such that similar objects are in the same class. For example, the technology in Non-patent Document 1 is an example of machine learning by unsupervised learning. The technology in Non-patent Document 1 is technology in which approximately 10,000,000 images are arbitrarily extracted from videos on a video posting site, learning is performed with the images using a learning method called deep learning, and human and cat facial features are learned from the images, thus obtaining the ability to classify humans and cats. Deep learning is also generally called deep machine learning.

RELATED ART DOCUMENT NON PATENT DOCUMENT

[0007] Non-patent Document 1: Le et al., "Building High-level Features Using Large Scale Unsupervised Learning", ICML 2012

SUMMARY OF THE INVENTION

[0008] In some circumstances, the same kind of articles may be produced at a plurality of production lines in one or more factories. In such cases, each production line may be provided with an AI classifier that is configured to evaluate property of the produced articles and to learn during the production of the articles. As the production and the learning of the AI classifiers at respective production lines proceed, different AI classifiers at different production lines may acquire different learning results and/or classification abilities due to specific characteristics of each production line, even when the different production lines are producing the same kind of articles. In view of evaluating property of articles, however, AI classifiers at different production lines for the same kind of articles may be required to have equivalent classification abilities. Accordingly, it may be desirable to determine and/or monitor equivalency of at least two AI classifiers.

Further, when learning is performed many times, there are cases where a single user or the programmer of a learning program or a program that utilizes learning results is not able to acquire all of the learning-related information that is necessary for understanding how a certain learning result is related to other learning results. In such a case, learning results often do not match completely, and therefore there is a desire for a function for determining whether or not learning results can be deemed to be and treated as equivalent. Without this, there is a problem in that learning results that can actually be deemed to be the same end up being treated as different.

According to an aspect, an evaluation system is provided for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories. The evaluation system may comprise:

a learning information storage unit configured to store information regarding at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information;

a determination condition storage unit configured to store a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a classification result input unit configured to receive classification results output from the at least two or more learning results, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

a determination unit configured to determine whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit configured to assign the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

In some examples of the evaluation system according to the above-stated aspect, the plurality of production lines may be controlled based on the identification information assigned to the corresponding classification results and/or to the corresponding learning results by the identification information assigning unit.

In the evaluation system according to the above-stated aspect and examples, if the same identification information is assigned to two or more of the classification results and/or to two or more of the learning results, two or more of the plurality of production lines corresponding to the two or more of the classification results and/or to the two or more of the learning results may be controlled so that the two or more of the plurality of production lines produce a specified number or specified numbers of articles; and/or, if identification information assigned to one of the learning results and/or of the classification results is different from the identification information assigned to any other learning result and/or classification result, a production line corresponding to said one of the learning results and/or of the classification results may be controlled so as not to produce articles.

In the evaluation system according to the above-stated aspect and examples, the identification information assigning unit may be configured to assign the same identification information to the learning results that output the classification results that were determined to be equivalent, and the determination unit may be configured to determine equivalency of the learning results based on a determination of whether the classification results are equivalent.

In the evaluation system according to the above-stated aspect and examples, the classification result equivalency determination condition may include at least one of a degree of coincidence of classification subjects of the learning results and a degree of coincidence of classification results output from the learning results.

In the evaluation system according to the above-stated aspect and examples, the evaluation system may further comprise a setting unit that sets a range in which the identification information is assigned, and the identification information assigning unit may be configured to assign identification information to at least one of the learning results and the classification results in the set range in which identification information is assigned.

In the evaluation system according to the above-stated aspect and examples, the evaluation system may further comprise a classification subject determination unit configured to determine equivalency of classification subjects classified by the learning results.

In the evaluation system according to the above-stated aspect and examples, the evaluation system may further comprise a re-learning request unit configured to, in a case where classification results are not determined to be equivalent by the determination unit, request re-learning with different learning data.

In the evaluation system according to the above-stated aspect and examples, in a case where classification results are not determined to be equivalent, the determination unit may be configured to change the classification result equivalency determination condition that is used.

In the evaluation system according to the above-stated aspect and examples, the evaluation system may further comprise an input data changing unit configured to, in a case where classification results are not determined to be equivalent by the determination unit, change input data that is input to the learning results and is used in the classification result equivalency determination.

In the evaluation system according to the above-stated aspect and examples, the evaluation system may further comprise a classification result combining unit configured to combine classification results output from the learning results and generate new classification results, and the determination unit may be configured to determine whether classification results generated by the classification result combining unit are equivalent.

According to another aspect, an evaluation device is provided for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories. The evaluation device may comprise:

a classification result input unit configured to receive classification results output from at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

an acquisition unit configured to acquire information regarding learning results of classification results that are input, and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a determination unit configured to determine whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit configured to assign the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

According to yet another aspect, a computer-implemented evaluation method is provided for evaluating property of articles, such as their quality, produced at a plurality of production lines in one or more factories. The method may comprise:

receiving classification results output from at least two or more learning results that attained classification ability through machine learning, each of the at least two or more learning results being obtained through machine learning with respect to determining the property of the articles produced at a different one of the plurality of production lines using at least captured images and/or sensor measurement information, the classification results relating to the property of the articles produced by corresponding ones of the plurality of production lines;

determining whether, out of classification results output from the at least two or more learning results that attained classification ability through machine learning, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; and

assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.
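For illustration only, the three steps of this method aspect (receiving, determining, assigning) might be arranged as in the following Python sketch. The function names, the dictionary-based layout, and the simplified equivalency condition (exact agreement of class labels) are assumptions made for this sketch and are not part of the disclosed method.

```python
def receive_results(classifier_outputs):
    """Receive classification results output from two or more learning results."""
    return dict(classifier_outputs)              # learning-result name -> list of class labels

def is_equivalent(labels_a, labels_b, condition=lambda a, b: a == b):
    """Apply a classification result equivalency determination condition (here: exact match)."""
    return condition(labels_a, labels_b)

def assign_same_id(results):
    """Assign the same identification information to classification results determined to be equivalent."""
    assigned, counter = {}, 0
    for name, labels in results.items():
        match = next((assigned[other] for other in assigned
                      if is_equivalent(labels, results[other])), None)
        if match is not None:
            assigned[name] = match               # reuse the ID of an equivalent result
        else:
            counter += 1
            assigned[name] = f"ID-{counter}"     # new ID for a result with no equivalent so far
    return assigned

results = receive_results({"line-1": ["good", "poor", "good"],
                           "line-2": ["good", "poor", "good"],
                           "line-3": ["good", "good", "good"]})
print(assign_same_id(results))                   # {'line-1': 'ID-1', 'line-2': 'ID-1', 'line-3': 'ID-2'}
```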

According to yet another aspect, a computer program is provided. The computer program may comprise computer-readable instructions that, when loaded and run on a computer, cause the computer to perform the method according to the above-stated aspect.

According to yet another aspect, an AI system is provided for evaluating equivalency of at least two AI classifiers. The system may comprise:

a learning information storage unit configured to store information regarding learning results of the at least two AI classifiers that attained classification ability through machine learning;

a determination condition storage unit configured to store a classification result equivalency determination condition that enables determining that classification results output from the at least two AI classifiers are equivalent;

a classification result input unit configured to receive classification results output from the at least two AI classifiers;

a determination unit configured to determine whether, out of classification results output from the at least two AI classifiers, at least two classification results are equivalent, with use of information regarding the AI classifiers and the classification result equivalency determination condition; and

an identification information assigning unit that assigns the same identification information to the classification results that were determined to be equivalent, to AI classifiers that output the classification results that were determined to be equivalent, or to both.

[0009] An identification information assigning system according to yet another aspect includes: a learning information storage unit that stores information regarding at least two or more learning results that attained classification ability through machine learning; a determination condition storage unit that stores a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; a classification result input unit that receives classification results output from the at least two or more learning results; a determination unit that determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and an identification information assigning unit that assigns the same identification information to classification results that were determined to be equivalent, or to the learning results that output the classification results that were determined to be equivalent, or to both.

[0010] According to this aspect, the equivalency of classification results output from multiple learning results (e.g. classifiers) is determined, and the same identification information is assigned to learning results or classification results that are determined to be equivalent, thus making it possible to match multiple learning results (classifiers) or classification results output from multiple learning results (classifiers), and making it possible to achieve convenience for users.

[0011] Also, in the identification information assigning system according to yet another aspect, the identification information assigning unit may assign the same identification information to the learning results that output the classification results that were determined to be equivalent, and equivalency of the learning results may be determined based on a determination of whether the classification results are equivalent.

[0012] This is because there are cases where the classification results output from learning results can be determined to be equivalent, and cases where the learning results can also be determined to be equivalent.

[0013] Also, it is preferable that the classification result equivalency determination condition includes at least one of a degree of coincidence of classification subjects of the learning results and a degree of coincidence of classification results output from the learning results.

[0014] This is because the degree of coincidence of classification subjects of the learning results and the degree of coincidence of classification results output from the learning results are particularly important elements when determining the equivalency of learning results or classification results output from learning results.

[0015] Also, the identification information assigning system can further include a setting unit that sets a range in which the identification information is assigned, wherein the identification information assigning unit may assign identification information to at least one of the learning results and the classification results in the set range in which identification information is assigned.

[0016] According to this configuration, the manner of assigning identification information desired by the user of the classifier that employs the learning results can be selected.

[0017] Also, the identification information assigning system according to yet another aspect may further include a classification subject determination unit that determines equivalency of classification subjects classified by the learning results.

[0018] According to this configuration, it is possible to check the equivalency of the subjects that are to be classified, which is a very important element when determining the equivalency of learning results or classification results output from learning results.

[0019] Also, the identification information assigning system according to yet another aspect may further include a re-learning request unit that, in a case where classification results are not determined to be equivalent by the determination unit, requests re-learning with different learning data.

[0020] According to this configuration, in the case where it cannot be determined that classification results output from learning results are equivalent, it is possible to request re-learning of a learning result and generate a learning result for which the classification results can be equivalent.

[0021] Also, in a case where classification results are not determined to be equivalent, the determination unit may change the determination condition that is used.

[0022] According to this configuration, it is possible to give an opportunity to review the equivalency of classification results and obtain equivalency of learning results or classification results output from the learning results within a range that is tolerable to the user who will use the learning results.

[0023] Also, the identification information assigning system according to yet another aspect may further include an input data changing unit that, in a case where classification results are not determined to be equivalent by the determination unit, changes input data that is input to the learning results and is used in classification result equivalency determination.

[0024] According to this configuration, it is possible to prevent inequivalence of classification results caused by an insufficient number of pieces of or types of input data or an excessive amount of input data, for example.

[0025] Also, the identification information assigning system according to yet another aspect may further include a classification result combining unit that combines classification results output from the learning results and generates new classification results, wherein the determination unit may determine whether classification results generated by the classification result combining unit are equivalent.

[0026] According to this configuration, by using new classification results obtained by combining classification results output from learning results, it is possible to obtain equivalency of learning results or classification results output from the learning results within a range that is tolerable to the user who will use the learning results.

[0027] An identification information assigning device according to yet another aspect includes: a classification result input unit that receives classification results output from at least two or more learning results that attained classification ability through machine learning; an acquisition unit that acquires information regarding learning results of classification results that are input, and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; a determination unit that determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and an identification information assigning unit that assigns the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

[0028] According to this aspect, even in the case where the learning information storage unit and the determination condition storage unit in the above-described identification information assigning system are outside the device, by providing the acquisition unit that acquires the information regarding learning results and the classification result equivalency determination condition from the external storage unit, it is possible to achieve effects similar to the identification information assigning system as stated above.

[0029] An identification information assigning method according to yet another aspect includes: receiving classification results output from at least two or more learning results that attained classification ability through machine learning;

determining whether, out of classification results output from the at least two or more learning results that attained classification ability through machine learning, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition for determining that classification results output from the at least two or more learning results are equivalent; and assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

[0030] Also, a program according to yet another aspect causes a computer to execute: a process of receiving classification results output from at least two or more learning results that attained classification ability through machine learning; a process of determining whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition for determining that classification results output from the at least two or more learning results are equivalent, the information regarding the at least two or more learning results and the classification result equivalency determination condition being stored in a storage unit; and a process of assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

EFFECTS OF THE INVENTION

[0031] According to exemplary embodiments and various examples, differences in learning results of AI classifiers provided at different production lines may be accounted for. This may contribute to uniform evaluation of the property of produced articles, which may lead to reducing undesirable variations in the property of produced articles and/or to improving the overall quality of produced articles.

Further, according to exemplary embodiments, production lines, or more generally the production associated therewith, may be controlled based on identification information (e.g. an ID) assigned to classification results as a result of evaluating the equivalence of classification results, learning results, and/or outputs from classifiers of the production lines. Based on the ID, the corresponding production lines may be controlled, for example, with respect to the number of articles produced by the production lines. For example, each of the production lines may include an inspection device (e.g. a classifier). The ID assigned to classification results corresponding to a production line may be assigned to the respective inspection device. In the case where the same identification information is assigned to all of the inspection devices (i.e. all classifiers) of the production lines, the equivalence of products of all of the production lines may be considered to be verified. In particular, all production lines may be controlled in a manner that products are transferred, provided, and/or distributed in any of the production lines, e.g. such that the same number of products is transferred and/or produced in each production line. Accordingly, based on the classification result equivalency determination that all of the inspection devices can perform equivalent inspections or all of the production lines can perform equivalent production, the production lines may be controlled such that the same number of products is produced and/or transferred in and/or from each production line. On the other hand, if one of the production lines is assigned an ID different from the other production lines, the production line including the inspection device with the different ID may be controlled not to produce corresponding articles and/or not to transfer articles, since the inspection device with the different ID cannot ensure the equivalency of produced articles with respect to articles produced at the other production lines.
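As a purely illustrative sketch of such control behaviour, the following Python fragment distributes production across lines whose inspection devices share an identification value and halts a line whose identification differs from the rest. The names (LineStatus, plan_production) and the even-split policy are assumptions made for this sketch, not features required by the disclosure.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class LineStatus:
    line_name: str
    assigned_id: str   # identification information assigned to the line's inspection device

def plan_production(lines: list[LineStatus], total_quantity: int) -> dict[str, int]:
    """Lines sharing the most common ID split the total evenly; a line whose ID
    differs from the others is controlled not to produce articles."""
    id_counts = Counter(line.assigned_id for line in lines)
    majority_id, _ = id_counts.most_common(1)[0]
    active = [line for line in lines if line.assigned_id == majority_id]
    per_line = total_quantity // len(active) if active else 0
    plan = {line.line_name: 0 for line in lines}          # isolated lines stay at 0
    for line in active:
        plan[line.line_name] = per_line                   # equivalent lines share the load
    return plan

# Example: line C's inspection device was assigned a different ID,
# so line C is controlled not to produce articles.
lines = [LineStatus("A", "ID-1"), LineStatus("B", "ID-1"), LineStatus("C", "ID-2")]
print(plan_production(lines, 1000))   # {'A': 500, 'B': 500, 'C': 0}
```

The even split is only one possible policy; the point is that lines whose classifiers carry the same identification information can be treated as producing equivalent articles.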

Further, the equivalency of classification results output from multiple learning results (e.g. classifiers) may be determined, and the same identification information may be assigned to learning results or classification results output therefrom that are determined to be equivalent, thus making it possible to match multiple learning results (e.g. classifiers) or classification results output from multiple learning results (e.g. classifiers), and making it possible to achieve convenience for users.

BRIEF DESCRIPTION OF THE DRAWINGS

[0032]

FIG. 1 is a block diagram of a first embodiment of the present invention.

FIG. 2 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 3 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 4 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 5 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 6 is a block diagram of another example in the first embodiment of the present invention.

FIG. 7 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 8 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 9 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 10 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 11 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 12 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 13 is a diagram for illustrating an example of assigning identification information to learning results or classification results.

FIG. 14 is a block diagram of a second embodiment of the present invention.

FIG. 15 is a block diagram of a third embodiment of the present invention.

FIG. 16 is a block diagram of a fourth embodiment of the present invention.

FIG. 17 is a block diagram of a sixth embodiment of the present invention.

FIG. 18 is a system configuration diagram of an embodiment.

FIG. 19 is a block diagram of a system of the embodiment and other configurations of devices.

FIG. 20 is a block diagram of a learning data preparing device 11.

FIG. 21 is a block diagram of a learning requesting device 12.

FIG. 22 is a diagram showing an example of a configuration of a learning database device 22.

FIG. 23 is a block diagram of an equivalency determining device 24.

FIG. 24 is a flowchart of operations performed by the equivalency determining device 24.

FIG. 25 is a diagram for illustrating Application Example 1.

FIG. 26 is another system configuration diagram of the embodiment.

FIG. 27 is a flowchart in Application Example 8.

FIG. 28 is a flowchart of redetermination preparation processing.

EMBODIMENTS OF THE INVENTION

[0033] Embodiments of the present invention will be described below with reference to the drawings.

[0034] First, a first embodiment of the present invention will be described.

[0035] FIG. 1 is a block diagram of a first embodiment of the present invention.

An identification information assigning system of the first embodiment includes a learning information storage unit 1, a determination condition storage unit 2, a classification result input unit 3, a determination unit 4, and an identification information assigning unit 5. Note that although an identification information assigning system is described in the following description, this system may be configured integrally with a learning system that generates at least two or more learning results that have attained classification ability.
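For orientation only, a minimal Python skeleton of the five units shown in FIG. 1 might look as follows. The class names, attributes, and method signatures below are assumptions chosen to mirror the units; the embodiment does not prescribe any particular programming interface.

```python
class LearningInformationStorageUnit:
    """Stores information regarding two or more learning results (unit 1)."""
    def __init__(self):
        self.info = {}                      # learning-result name -> metadata

class DeterminationConditionStorageUnit:
    """Stores the classification result equivalency determination condition (unit 2)."""
    def __init__(self, condition):
        self.condition = condition          # callable(labels_a, labels_b, info) -> bool

class ClassificationResultInputUnit:
    """Receives classification results output from the learning results (unit 3)."""
    def __init__(self):
        self.results = {}                   # learning-result name -> list of class labels
    def receive(self, name, labels):
        self.results[name] = labels

class DeterminationUnit:
    """Determines whether at least two classification results are equivalent (unit 4)."""
    def __init__(self, learning_info, condition_storage):
        self.learning_info = learning_info
        self.condition_storage = condition_storage
    def equivalent(self, name_a, name_b, input_unit):
        return self.condition_storage.condition(input_unit.results[name_a],
                                                input_unit.results[name_b],
                                                self.learning_info.info)

class IdentificationInformationAssigningUnit:
    """Assigns the same identification information to equivalent results (unit 5)."""
    def __init__(self):
        self.assigned = {}
    def assign(self, names, identifier):
        for name in names:
            self.assigned[name] = identifier
```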

[0037] Also, a configuration is possible in which the learning information storage unit 1 and the determination condition storage unit 2 are outside an identifying/assigning device that includes the classification result input unit 3, the determination unit 4, and the identification information assigning unit 5. In this case, it is sufficient to provide an acquiring unit that can read out and acquire necessary information from the learning information storage unit 1 and the determination condition storage unit 2, or to include such a function in the determination unit 4.

[0038] The learning information storage unit 1 is a storage unit that stores information related to at least two or more learning results that have attained classification ability through machine learning.

[0039] Here, a learning result is a neural network or the like that has attained classification ability through machine learning, for example. In other words, a classifier or an AI classifier includes a learned model. For example, a classifier may be configured to classify, based on the learned model, subjects such as articles at a production line, e.g. regarding the quality of an article. The classifier may be further trained at its production line. Classifiers at a plurality of production lines may be further trained at the respective production lines based on different sets of articles as training samples. There are no limitations on the type of neural network, examples of which include an RBF Network (Radial Basis Function Network), a General Regression Neural Network (generalized RBF Network), and a Recurrent Neural Network.

[0040] One example of machine learning is deep learning, and there are two types of learning methods, namely supervised learning and unsupervised learning. In supervised learning, which class each learning data piece belongs to is determined and labeled in advance. For this reason, in classification learning, neural network learning is performed such that when learning data pieces are received, the learning data pieces are determined to belong to the labeled classes. Even in this case, if the learning data is not the same, there are cases where the classification ability attained for the learning result is not the same. In unsupervised learning, information indicating which class each learning data piece belongs to is not prepared in advance. In classification learning, the classifier learns to group input into multiple classes such that similar objects are in the same class.

[0041] The following is a more specific description of the problem to be solved by the present invention that is described in the "problem to be solved by the invention" section.

[0042] However, classification ability is attained according to the learning data in each case of learning, and therefore the following can occur in any learning method.

[0043] If there are slight differences in the input learning data or the learning method, there is a possibility of differences arising in classification results for a new classification subject, and particularly in the case of using multiple classifiers that have slight differences in the input learning data or learning method, it is not desirable for cases to occur in which different classification results are obtained by classifiers for input data that should belong to the same class. Also, if a subject from which input data is acquired has features of multiple classes, there are cases where information indicating that fact should be output.

[0044] Furthermore, in the case of deep learning, random numbers are often used in the learning process, and therefore even in the case of the same learning method and the same learning data, differences sometimes arise in the ability obtained for the learning result. In such a case, a method is conceivable in which different class identification information is assigned by different neural networks that have slightly different classification results.

[0045] However, there is a problem in that the number of types of classification results rapidly increases depending on the number of times that learning is performed and the number of classifiers.

[0046] Also, a method is conceivable in which when the subject is the same, classification results are mandatorily matched even if the learning data or learning method is different. However, there is a possibility that the classification results will vary a large amount between classifiers, and this method is problematic as well.

[0047] Furthermore, various systems that perform machine learning are also conceivable.

[0048] Currently, it is often the case that an AI researcher creates a learning program and performs learning for a specific learning objective. The classification ability attained for the learning result is attained by a neural network envisioned by the researcher.

[0049] When a single researcher researches a specific case of learning using deep learning, differences in the ability attained for the learning result caused by differences in the learning content and the learning program can be understood by the researcher through the programming process and experimentation after learning.

[0050] However, when a learning result obtained using deep learning becomes widely used, the number of related persons and devices increases, and the content used in learning and the results can no longer be managed by a single person or a single device.

[0051] As described above, when learning is performed many times, it is no longer possible to easily understand how the classification output of each neural network having classification ability as a result of learning is related to the classification output of other learning results.

[0052] In view of this, when learning is performed many times, the present invention makes it possible to understand equivalency for classes according to attained ability for learning results obtained through learning.

[0053] Through deep learning or the like, a learning result can attain classification ability for classifying subjects based on input data and input signals. Examples of classification include a determination of a good article or a poor article, a determination of rank, the grouping of subjects, a determination of state, the detection of subjects having the same features, and hierarchical classification. However, classification is not necessarily required to be performed uniquely. Certain subjects that are the same may be included in multiple classes. Also, classes do not need to be completely independent of each other. A certain class may be included in another class.

[0054] Examples of information related to a learning result include, but are not limited to, the type of data input to the learning result, the method of acquiring the input data and detailed information regarding the learning data and the data that is output, attribute-related information such as data attributes and learning program attributes, the learning period, the subjects of the classification ability to be attained (the subject field, image subjects, document subjects, audio subjects, etc.), the type of ability (classification ability is attained (supervised, unsupervised), predictive ability is attained, predetermined classification is performed internally (control ability is attained)), evaluation of extent of learning, attained level, differences between learning devices, the learning method, the content of the ability test, and differences between individual learning results. Such information can be managed by a learning database.
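The learning-related information listed in [0054] can be pictured as a simple record in a learning database. The field names in the following Python sketch are illustrative assumptions chosen to mirror the listed items, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class LearningResultInfo:
    """Illustrative metadata record for one learning result, as managed by a learning database."""
    input_data_type: str                  # e.g. "captured image", "sensor measurement"
    acquisition_method: str               # how the input data was acquired
    learning_data_details: str            # detailed information on the learning data and outputs
    data_attributes: dict = field(default_factory=dict)
    program_attributes: dict = field(default_factory=dict)
    learning_period: str = ""             # e.g. "2017-01-10 to 2017-02-01"
    classification_subjects: list = field(default_factory=list)  # subject field, image/document/audio subjects
    ability_type: str = ""                # "supervised", "unsupervised", "predictive", "control"
    learning_extent_evaluation: str = ""  # evaluation of extent of learning / attained level
    learning_method: str = ""             # e.g. "deep learning"
    ability_test_content: str = ""        # content of the ability test
```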

[0055] The determination condition storage unit 2 stores a determination condition that enables determining that classification results output from at least two or more learning results are equivalent.

[0056] The condition for determining equivalency defines what condition makes it possible to determine that classification results output from learning results are equivalent.

[0057] One example of a determination condition is that the input data pieces (classification subjects) of the learning results match, and both classification results match. The extent of the match (degree of coincidence) can also be set as a condition. For example, it is determined that classification results are equivalent if greater than or equal to 99.9% of all of the classes for the same input data match. Alternatively, it is determined that classification results for the same sample are equivalent if greater than or equal to 90% of all of the classes match.
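The degree-of-coincidence condition described in [0057] can be pictured as a comparison of the class labels that two learning results assign to the same input samples. The function names in the Python sketch below are illustrative assumptions; the 99.9% and 90% thresholds are taken from the examples in the paragraph above.

```python
def degree_of_coincidence(labels_a, labels_b):
    """Fraction of samples (same inputs, same order) assigned the same class by both learning results."""
    assert len(labels_a) == len(labels_b), "both learning results must classify the same input data"
    matches = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return matches / len(labels_a)

def classification_results_equivalent(labels_a, labels_b, threshold=0.999):
    """Deem the classification results equivalent if the degree of coincidence meets the threshold
    (e.g. 0.999 for the strict example above, or 0.90 for the looser per-sample example)."""
    return degree_of_coincidence(labels_a, labels_b) >= threshold
```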

[0058] In the case of the subjects classified by the learning results as well, cases of being equivalent and cases of being different are both possible, and therefore such cases may be taken into consideration when determining the equivalency of classification results.

[0059] Another condition that may be set is that a classification result output from a learning result matches an existing classification result recorded in an external device (e.g., a database).

[0060] Alternatively, it is also possible to set and store a desired condition for determining that classification results are equivalent from the viewpoint of usage of the classification results.

[0061] For example, envision the case where there is a learning result that determines that the subject is a cat with use of an image, there is a learning result that determines that the subject is a cat with use of an animal vocalization, and the equivalency of these classes is to be determined. In this case, for a user who needs a cat image determining device that employs a learning result that determines that the subject is a cat with use of an image, a classification result output from a learning result that determines that the subject is a cat with use of an image and a classification result output from a learning result that determines that the subject is a cat with use of an animal vocalization are not equivalent. On the other hand, for a user who needs a cat proximity warning device, it is sufficient to be able to classify the subject as a cat, and therefore a classification result output from a learning result that determines that the subject is a cat with use of an image and a classification result output from a learning result that determines that the subject is a cat with use of an animal vocalization are equivalent.

[0062] In another example, envision the case where there is a learning result that determines that the subject is a cat, there is a learning result that determines that the subject is a dog, and the equivalency of these classes is to be determined. In this case, for a user who needs a classifier in order to classify cats and dogs, a cat classification result and a dog classification result are not equivalent. However, for a user who desires to determine that the subject is an animal, both cats and dogs are animals, and therefore a cat classification result and a dog classification result are equivalent.

[0063] In this way, the determination condition for determining equivalency is set according to how the learning results are to be used, such as the learning subject and the mode of use by the user.

[0064] Furthermore, it is possible to store different determination conditions for different types of learning or different learning subjects. For example, it is determined that classification results are equivalent when the respective types of input data are the same, or it is determined that classification results are equivalent when the respective types of input data satisfy a predetermined condition.

[0065] Also, the user can be allowed to designate the condition for the method of determining equivalency. For example, several types of determination conditions are stored in the determination condition storage unit 2 in advance, those determination condition options are presented to the user, and a selection is obtained.

[0066] Also, a configuration is possible in which the user provides a determination condition program, and that program is stored in the determination condition storage unit 2.

[0067] Specifically, the determination condition for determining the equivalency of classification results is set to an appropriate condition according to how the learning result will be used, such as the learning subject and the mode of use.

[0068] The classification result input unit 3 is an input unit that receives an input of classification results output from at least two or more learning results. The learning results that output classification results may exist in the same system, or may exist outside of the system. The classification results output from those learning results are input to the classification result input unit 3. Note that the classification result input unit 3 may receive data other than classification results output from learning results, such as the learning results themselves.

[0069] The determination unit 4 determines whether at least two classification results are equivalent to each other among the classification results that were output from the at least two or more learning results and were input to the classification result input unit 3, with use of information regarding the learning results that is stored in the learning information storage unit 1 and the determination condition that is stored in the determination condition storage unit 2.

[0070] For example, assume that the determination condition is that the input data sets of learning results 1 and 2 have equivalency, and that the classification results of learning results 1 and 2 match. Here, if it is determined, based on information regarding the learning results 1 and 2 stored in the learning information storage unit 1, that the input data sets of the learning results 1 and 2 are 70 cat images and 30 dog images, the determination unit 4 determines that the input data sets of the learning results 1 and 2 have equivalency. Furthermore, if the input data of 70 cat images and the input data of 30 dog images are classified into 70 cat images and 30 dog images by each of the learning results 1 and 2, the determination unit 4 determines that the cat image classification results from the learning result 1 and the cat image classification results from the learning result 2 are equivalent, and that the dog image classification results from the learning result 1 and the dog image classification results from the learning result 2 are equivalent.
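Purely as a sketch of the determination just described (the data structures below are assumptions, not part of any unit of the system), the check can be pictured as first comparing the compositions of the input data sets and then comparing the per-class results:

    from collections import Counter

    def input_sets_equivalent(inputs_1, inputs_2):
        """Input data sets are treated as equivalent when they have the same
        composition (e.g., 70 cat images and 30 dog images each)."""
        return Counter(inputs_1) == Counter(inputs_2)

    def equivalent_classes(outputs_1, outputs_2):
        """Return the classes whose classification results match between
        the two learning results."""
        return sorted(c for c in set(outputs_1) & set(outputs_2)
                      if outputs_1[c] == outputs_2[c])

    inputs_1 = ["cat"] * 70 + ["dog"] * 30   # input data set of learning result 1
    inputs_2 = ["cat"] * 70 + ["dog"] * 30   # input data set of learning result 2
    outputs_1 = {"cat": 70, "dog": 30}       # per-class results of learning result 1
    outputs_2 = {"cat": 70, "dog": 30}       # per-class results of learning result 2

    if input_sets_equivalent(inputs_1, inputs_2):
        print(equivalent_classes(outputs_1, outputs_2))  # ['cat', 'dog']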

[0071] Furthermore, it should be noted that there are cases where the classification results of learning results can be determined to be equivalent, and cases where the learning results can also be determined to be equivalent. For example, in the above-described example, the classification results of the learning result 1 and the classification results of the learning result 2 are equivalent, and the classification subjects are also equivalent in terms of both being images. In this case, it can also be said that the learning result 1 and the learning result 2 are equivalent.

[0072] The identification information assigning unit 5 assigns the same identification information to classification results that are determined to be equivalent, to learning results that output classification results determined to be equivalent, or to both.

[0073] The identification information can be assigned to each classification result output from a learning result that attained classification ability.

[0074] Also, the same identification information can be assigned in the case where any set of classification results among the classification results of multiple learning results were determined to be equivalent to each other.

[0075] Also, in the case where the classification ability outputs only one classification result, the identification information assigned to that classification result and the identification information assigned to the learning result that outputs that classification result may be similar to each other. Furthermore, depending on how the learning results are to be used, a configuration is possible in which the same identification information is assigned to learning results only if all of the classification results are determined to be equivalent.

[0076] Also, identification information may be assigned to both the learning result and the respective classification results. Alternatively, identification information may be assigned to only either one.

[0077] The following describes examples of assigning the same identification information with reference to the drawings.

[0078] (1) Assigning the same identification information to only classification results

(1-1) Case where one classification result is output from each learning result

As shown in FIG. 2, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. A classification result (cat image) is output from the learning result 1 and a classification result (cat image) is output from the learning result 2, and if it can be determined that these classification results are equivalent, the same identification information A is assigned to the classification result of the learning result 1 and the classification result of the learning result 2.
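As a hedged sketch of this simplest case, the identification information assigning unit 5 can be pictured as handing out one shared identifier per group of equivalent classification results; the single-letter identifiers follow the figure, and the function below is hypothetical.

    _id_source = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # source of identification information

    def assign_same_id(*classification_results):
        """Assign the same identification information to classification
        results that were determined to be equivalent."""
        new_id = next(_id_source)
        return {result: new_id for result in classification_results}

    print(assign_same_id("cat image (learning result 1)",
                         "cat image (learning result 2)"))
    # {'cat image (learning result 1)': 'A', 'cat image (learning result 2)': 'A'}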

[0079] (1-2) Case where multiple classification results are output from each learning result, and identification information is assigned to all of the classification results

A configuration is possible in which multiple classification results are output from the learning results, and if classification results are equivalent to each other, the same identification information is assigned to the classification results that are equivalent to each other. For example, as shown in FIG. 3, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if it can be determined that the classification result "cat image" of the learning result 1 and the classification result "cat image" of the learning result 2 are equivalent, the same identification information A is assigned to those classification results "cat image". Also, if it can be determined that the classification result "dog image" of the learning result 1 and the classification result "dog image" of the learning result 2 are equivalent, the same identification information B is assigned to those classification results "dog image".

[0080] (1-3) Case where multiple classification results are output from each learning result, and the same identification information is assigned to a portion of the classification results

A configuration is possible in which if multiple classification results are output from each learning result, and a portion of the classification results are deemed to be equivalent, the same identification information is assigned to only the classification results that are deemed to be equivalent. For example, as shown in FIG. 4, there is a learning result 1 and a learning result 2, and data related to mandarin oranges is input as input data to each of the learning results. The learning result 1 then performs classification into the three classes "high-class", "middle-class", and "irregular". The learning result 2 performs classification into the five classes "1st class", "2nd class", "3rd class", "4th class", and "irregular". If it can be determined that, out of the classification results of the learning results 1 and 2, "irregular" of the learning result 1 and "irregular" of the learning result 2 are equivalent, the same identification information A is assigned to "irregular" of the learning result 1 and "irregular" of the learning result 2.

[0081] (1-4) Case where multiple classification results are output from each learning result, it can be determined that classification results are equivalent in a superordinate concept, and the same identification information is assigned to classification results that can be determined to be equivalent in a superordinate concept

A configuration is possible in which multiple classification results are output from each learning result, and if it can be determined that respective classification results are equivalent in a superordinate concept, then the same identification information is assigned to the classification results that can be determined to be equivalent in a superordinate concept.

[0082] For example, as shown in FIG. 5, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if the determination unit 4 is to make a determination regarding classification results in the superordinate concept of an animal, the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2 are equivalent in the superordinate concept of an animal. Accordingly, the same identification information A is assigned to the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2.

[0083] Although the same identification information A is given to "cat image" and "dog image" in this example, it can also be said that grouping is performed according to a superordinate concept of "cat" and "dog", and it can also be said that "cat" and "dog" are combined to create a new classification result with the superordinate concept of "animal", "pet", or the like for "cat" and "dog". In the case of creating a new classification result such as this, it is sufficient that, as shown in FIG. 6, the determination unit 4 has a classification result combining unit 10 that generates a new classification result by combining classification results output from learning results. The same identification information may be assigned to the newly generated classification results. The same follows for the later-described grouping and combining of two or more classification results.
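One possible sketch of the classification result combining unit 10 is given below; the mapping from subordinate classes to a superordinate concept is a hypothetical stand-in for whatever the determination condition actually provides.

    # Hypothetical mapping from subordinate classes to a superordinate concept.
    SUPERORDINATE = {"cat": "animal", "dog": "animal"}

    def combine_classification_results(results, superordinate=SUPERORDINATE):
        """Generate new classification results by combining classification
        results that share the same superordinate concept."""
        combined = {}
        for result in results:
            concept = superordinate.get(result, result)
            combined.setdefault(concept, []).append(result)
        return combined

    # "cat" and "dog" are combined into the new classification result "animal".
    print(combine_classification_results(["cat", "dog"]))  # {'animal': ['cat', 'dog']}

The same identification information could then be assigned to the newly generated classification result, as stated above.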

[0084] Note that classification results are not necessarily required to have similar subjects, and in the case of the classification results "cat image" and "cat vocalization" for example, these two classification results are equivalent in the superordinate concept of "cat". In this case as well, the same identification information may be assigned to "cat image" and "cat vocalization".

[0085] (1-5) Case where multiple classification results are output from each learning result, and at least two pieces of identification information are assigned to classification results

A configuration is possible in which if classification results are equivalent, the same identification information is assigned to classification results that are equivalent, and if it can be determined that classification results are equivalent in a superordinate concept, identification information that is different from the identification information assigned to the subordinate concept classification results is assigned to the classification results determined to be equivalent in the superordinate concept.

[0086] For example, as shown in FIG. 7, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if it can be determined that the classification result "cat image" of the learning result 1 and the classification result "cat image" of the learning result 2 are equivalent, the same identification information A is assigned to those classification results "cat image". Also, if it can be determined that the classification result "dog image" of the learning result 1 and the classification result "dog image" of the learning result 2 are equivalent, the same identification information B is assigned to those classification results "dog image".

[0087] Furthermore, if the determination unit 4 makes a determination regarding classification results in the superordinate concept of an animal, the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2 are equivalent in the superordinate concept of an animal. Accordingly, the same identification information C is assigned to the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2.

[0088] (2) Assigning the same identification information to only learning results.

(2-1) Case where one classification result is output from each learning result, it can be determined that the classification results are equivalent, and the same identification information is assigned to the learning result

As shown in FIG. 8, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. If only "cat image" is output as the classification result of the learning result 1 and the classification result of the learning result 2, and it can be determined that the classification results are equivalent, it is determined that the learning result 1 and the learning result 2 are equivalent, and the same identification information C is assigned to the learning result 1 and the learning result 2.

[0089] (2-2) Case where multiple classification results are output from each learning result, it can be determined that all of the classification results are equivalent, and the same identification information is assigned to the learning results

A configuration is possible in which if there are multiple classification results from the learning result 1 and multiple classification results from the learning result 2, and the classification results are equivalent, it is determined that the learning result 1 and the learning result 2 are equivalent, and the same identification information is assigned to only the learning results. For example, as shown in FIG. 9, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if it can be determined that the classification result "cat image" of the learning result 1 and the classification result "cat image" of the learning result 2 are equivalent, and it can be determined that the classification result "dog image" of the learning result 1 and the classification result "dog image" of the learning result 2 are equivalent, the same identification information C may be assigned to only the learning results.

[0090] (2-3) Case where multiple classification results are output from each learning result, it can be determined that classification results are equivalent in a superordinate concept, and the same identification information is assigned to the learning results

A configuration is possible in which multiple classification results are output from each learning result, and if it can be determined that the classification results are equivalent in a superordinate concept, then it is determined that the learning results are equivalent, and the same identification information is assigned to the learning results.

[0091] For example, as shown in FIG. 10, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if the determination unit 4 is to make a determination regarding classification results in the superordinate concept of an animal, the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2 are equivalent in the superordinate concept of an animal. Accordingly, the same identification information C is assigned to the learning result 1 and the learning result 2.

[0092] Note that classification results are not necessarily required to have similar subjects, and in the case of the classification results "cat image" and "cat vocalization" for example, these two classification results are equivalent in the superordinate concept of "cat". In this case as well, the same identification information C is assigned to the learning result 1 and the learning result 2.

[0093] (3) Assigning the same identification information to classification results and assigning the same identification information, which is different from the identification information assigned to the classification results, to learning results

(3-1) Case where one classification result is output from each learning result, and it can be determined that the classification results are equivalent

As shown in FIG. 11, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. A classification result (cat image) is output from the learning result 1 and a classification result (cat image) is output from the learning result 2, and if it can be determined that these classification results are equivalent, the same identification information A is assigned to the classification result of the learning result 1 and the classification result of the learning result 2. Also, it is determined that the learning result 1 and the learning result 2 are equivalent, and the same identification information C is assigned to the learning result 1 and the learning result 2.

[0094] Note that although different identification information is assigned to the learning results and the classification results in the above example, it is also possible to assign the same identification information (e.g., the identification information A) to both the learning results and the classification results.

[0095] (3-2) Case where there are multiple classification results, the same identification information is assigned to classification results, and the same identification information, which is different from the identification information assigned to the classification results, is assigned to the learning results

As shown in FIG. 12, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if it can be determined that the classification result "cat image" of the learning result 1 and the classification result "cat image" of the learning result 2 are equivalent, the same identification information A is assigned to those classification results "cat image". Also, if it can be determined that the classification result "dog image" of the learning result 1 and the classification result "dog image" of the learning result 2 are equivalent, the same identification information B is assigned to those classification results "dog image". Furthermore, it is determined that the learning result 1 and the learning result 2 are equivalent, and the same identification information C is assigned to the learning result 1 and the learning result 2.

[0096] (3-3) Case where multiple classification results are output from each learning result, it can be determined that classification results are equivalent in a superordinate concept, the same identification information is assigned to classification results that can be determined to be equivalent in a superordinate concept, and the same identification information, which is different from the identification information assigned to the classification results, is assigned to the learning results

A configuration is possible in which multiple classification results are output from each learning result, and if it can be determined that respective classification results are equivalent in a superordinate concept, then the same identification information is assigned to the classification results that can be determined to be equivalent in a superordinate concept, and the same identification information is assigned to the learning results as well.

[0097] For example, as shown in FIG. 13, there is a learning result 1 and a learning result 2, and images are input as input data to each of the learning results. The classification results of the learning result 1 and the classification results of the learning result 2 are both "cat image" and "dog image", and if the determination unit 4 is to make a determination regarding classification results in the superordinate concept of an animal, the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2 are equivalent in the superordinate concept of an animal. Accordingly, the same identification information A is assigned to the classification results "cat image" and "dog image" of the learning result 1 and the classification results "cat image" and "dog image" of the learning result 2.

[0098] Furthermore, it is determined that the learning result 1 and the learning result 2 are equivalent, and the same identification information C is assigned to the learning result 1 and the learning result 2.

[0099] Note that classification results are not necessarily required to have similar subjects, and in the case where the classification subjects are "cat image" and "cat vocalization", and learning is performed using supervised data for the cases of being a cat and not being a cat for example, the two classification results are equivalent in the superordinate concept of "cat". In this case as well, the same identification information may be assigned to "cat image" and "cat vocalization", and the same identification information may be assigned to the learning result 1 and the learning result 2.

[0100] Although representative examples of the assignment of identification information have been described above, the present invention is not limited to these examples. For example, (1-5) and (2-3) can be combined.

[0101] The identification information assigning system having the above configuration operates as described below.

[0102] First, the classification result input unit 3 receives an input of classification results output from each learning result.

[0103] The determination unit 4 reads out, from the learning information storage unit 1, information regarding the learning results that output the classification results. Determination conditions for determining the equivalency of classification results are also read out from the determination condition storage unit 2. Then, using the information regarding the learning results and the determination conditions, the determination unit 4 determines whether the classification results output from the learning results are equivalent.

[0104] If the determination unit 4 determines that the classification results are equivalent, the identification information assigning unit 5 assigns the same identification information to the classification results, to the learning results that output the classification results determined to be equivalent, or to both.

[0105] According to the present invention, the equivalency of classification results output from multiple learning results (classifiers) is determined, and the same identification information is assigned to the learning results or classification results that are determined to be equivalent. This makes it possible to match multiple learning results (classifiers), or classification results output from multiple learning results (classifiers), with one another, and classification results assigned the same identification information can be treated as the same classes, which increases convenience for users of the classifiers.

[0106] A second embodiment of the present invention will be described below.

[0107] In the identification information assigning system described above, any of the examples of assigning identification information described above can be selected and used, but from the viewpoint of the user of the classifier that employs the learning results, it is preferable to be able to change the range of assignment of identification information for each learning result. Here, the range of assignment of identification information refers to, for example, assigning identification information to only learning results, assigning identification information to only classification results, assigning identification information to both, or, when assigning identification information to individual classification results, assigning it to only a portion of the classification results.

[0108] In view of this, it is preferable that the identification information assigning system has a setting unit 6 for setting the range of assignment of identification information for each learning result as shown in FIG. 14.

[0109] The identification information assigning unit 5 assigns identification information to learning results, to classification results, or to both in accordance with the range of assignment of identification information set in the setting unit 6.
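One way to picture the setting unit 6, purely as a sketch under assumed names, is as a configuration value that the identification information assigning unit 5 consults before assigning anything:

    from enum import Enum, auto

    class AssignmentRange(Enum):
        """Hypothetical ranges of assignment of identification information."""
        LEARNING_RESULTS_ONLY = auto()
        CLASSIFICATION_RESULTS_ONLY = auto()
        BOTH = auto()

    def assign(range_setting, learning_results, classification_results, new_id):
        """Assign identification information in accordance with the range of
        assignment set for the learning result."""
        assigned = {}
        if range_setting in (AssignmentRange.LEARNING_RESULTS_ONLY, AssignmentRange.BOTH):
            assigned.update({lr: new_id for lr in learning_results})
        if range_setting in (AssignmentRange.CLASSIFICATION_RESULTS_ONLY, AssignmentRange.BOTH):
            assigned.update({cr: new_id for cr in classification_results})
        return assigned

    print(assign(AssignmentRange.BOTH,
                 ["learning result 1", "learning result 2"],
                 ["cat image (1)", "cat image (2)"], "A"))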

[0110] According to this configuration, it is possible to select how identification information is to be assigned as required by the user of the classifier that employs the learning results.

[0111] A third embodiment of the present invention will be described below.

[0112] When determining the equivalency of classification results, it is very important to check the equivalency of the subjects being classified. The reason for this is that, depending on equivalency determination conditions, if the subjects being classified are different, the classification results will not be equivalent. For example, in the case of determining the equivalency of classification results output from two learning results that identify "dog", if images of dogs and other animals are input to one of the learning results, and nothing but images of fish are input to the other learning result, the classification results output from the two learning results will not be equivalent.

[0113] Accordingly, it is desirable to be able to understand whether or not the subjects to be classified by the learning result are equivalent before the equivalency of the classification results is determined.

[0114] Furthermore, in the case of determining the equivalency of learning results based on the equivalency of classification results output from the learning results as well, it is very important to understand the classification subjects of the learning results.

[0115] However, whether or not the subjects of the classification ability attained by a learning result are equivalent cannot be known from the learning result alone. For this reason, it is necessary to separately know whether or not the classification subjects are equivalent.

[0116] In view of this, in another embodiment, the identification information assigning system is provided with a classification subject equivalency determination unit 7 that determines whether or not the subjects classified by learning results are equivalent, as shown in FIG. 15.

[0117] The classification subject equivalency determination unit 7 can determine the equivalency of classification subjects with use of various conditions. If it is guaranteed in advance that the learning results targeted for determination will be used with classification subjects that are equivalent, the classification subjects are equivalent. For example, if the learning results targeted for determination are inspection devices for a specified product, the classification subjects are equivalent.

[0118] Also, if the learning information storage unit 1 stores information indicating that the classification subjects of the learning results targeted for determination are the same, it can be determined based on that information that the classification subjects are equivalent. For example, if the names of the classification subjects of the learning results targeted for determination are stored in the learning information storage unit 1, and the names of the classification subjects are the same, then the classification subjects of the learning results targeted for determination are equivalent.

[0119] Also, if the classification subjects are the same or have the same attributes, it can also be determined that the classification subjects are equivalent. For example, if the learning results targeted for determination performed learning with use of information from sensors disposed at a specific intersection, the classification subjects are equivalent regarding the information from sensors disposed at a specific intersection. Also, if the learning results targeted for determination performed learning with use of information from sensors for detecting the status of people in a vehicle compartment, even in the case where the vehicles are different, the classification subjects of the learning results targeted for determination are equivalent regarding the information from sensors in a vehicle compartment.

[0120] Also, in the case of performing learning with use of information obtained from the same device or devices of the same type, if the intended use is the same, it can be determined that the classification subjects are equivalent.

[0121] Also, the fact that the classification subjects are equivalent can be input as information determined by an administrator.

[0122] Also, if the identification information assigned to input data input to learning results is the same, it can be determined that the classification subjects are equivalent.

[0123] Furthermore, it is possible to identify the classification subjects, identify that the classification subjects are predetermined subjects, and thereafter determine the equivalency of the classification subjects. For example, in the case of determining the equivalency of classification results output from learning results that classify the state of people using images, if a portion of a person's face or body is detected in an image input to both of the learning results, it can be determined that the classification subject is a person, and it can be determined that the classification subjects are equivalent.
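The conditions listed in this embodiment could be checked one after another, roughly as in the sketch below; the record fields (subject name, subject attributes, identification information of the input data) are assumptions about how the stored learning information might be organized.

    def subjects_equivalent(info_1, info_2):
        """Determine whether the classification subjects of two learning results
        are equivalent, using several of the conditions described above."""
        if info_1.get("subject_name") and info_1["subject_name"] == info_2.get("subject_name"):
            return True          # same stored subject name
        if info_1.get("subject_attributes") and \
                info_1["subject_attributes"] == info_2.get("subject_attributes"):
            return True          # same subject attributes
        if info_1.get("input_data_id") and info_1["input_data_id"] == info_2.get("input_data_id"):
            return True          # same identification information on the input data
        return False

    info_1 = {"subject_name": "mandarin orange", "input_data_id": "LOT-01"}
    info_2 = {"subject_name": "mandarin orange", "input_data_id": "LOT-02"}
    print(subjects_equivalent(info_1, info_2))  # True (same subject name)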

[0124] According to this configuration, it is possible to check the equivalency of classification subjects, which is a very important factor when determining the equivalency of classification results.

[0125] A fourth embodiment of the present invention will be described below.

[0126] In the identification information assigning systems described above, a case can occur in which it is not possible to determine that classification results output from learning results are equivalent. In this case, it is possible that the learning or the amount of data used in learning is insufficient.

[0127] In view of this, as shown in FIG. 16, an identification information assigning system described above includes a re-learning request unit 8 that requests the system or device that performs machine learning to perform re-learning of a learning result.

[0128] The system or machine that performs machine learning and was requested to perform re-learning performs re-learning of the learning result targeted for determination. Re-learning may include not only the case of increasing the amount of input data, but also the case of reducing the amount of input data in order to eliminate the adverse effects of over-learning.
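A rough sketch of this re-learning flow is given below; the two callables stand in for the equivalency determination and for whatever external system or device performs the machine learning, and the retry limit is an added assumption.

    def determine_with_relearning(determine_equivalency, request_relearning, max_attempts=3):
        """If classification results cannot be determined to be equivalent,
        request re-learning and perform the determination again."""
        for _ in range(max_attempts):
            if determine_equivalency():
                return True                 # equivalency was established
            request_relearning()            # ask the learning system to re-learn
        return False

    # Toy stand-ins: equivalency is reached after one round of re-learning.
    state = {"relearned": 0}
    determine = lambda: state["relearned"] >= 1
    relearn = lambda: state.update(relearned=state["relearned"] + 1)
    print(determine_with_relearning(determine, relearn))  # True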

[0129] When re-learning ends, the identification information assigning system again performs the operation of determining the equivalency of classification results.

[0130] According to this configuration, in the case where it cannot be determined that classification results output from learning results are equivalent, it is possible to request re-learning of a learning result and generate a learning result for which the classification results can be equivalent.

[0131] A fifth embodiment of the present invention will be described below.

[0132] In the identification information assigning systems described above, a case can occur in which it is not possible to determine that classification results output from learning results are equivalent. In such a case, it is also possible to obtain equivalency for classification results by re-setting the determination conditions and performing the equivalency determination once again.

[0133] For example, in the case of performing classification with determination criteria that are so complex and subtle that deep learning needs to be used, a case can occur in which equivalency cannot be determined due to a slight difference near the boundary between classes. In this case, it is sometimes possible to determine that classification results are equivalent by reviewing the conditions. Likewise, in the case of performing classification using another model of sensor, it is sometimes possible to determine that classification results are equivalent by reviewing the conditions for determining the equivalency of classes.

[0134] In view of this, in the fifth embodiment, if it cannot be determined that classification results are equivalent, the determination unit 4 changes the determination conditions that are used. The determination conditions that are to be changed may be changed to other determination conditions that are stored in the determination condition storage unit 2, or may be set to new determination conditions.
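The fifth embodiment can be pictured, as a sketch only, as trying the stored determination conditions one after another until one of them yields equivalency; the condition functions below are illustrative and not part of the described system.

    def determine_with_condition_change(results_1, results_2, conditions):
        """Return the name of the first determination condition under which
        the classification results are equivalent, or None."""
        for name, condition in conditions:
            if condition(results_1, results_2):
                return name
        return None

    exact_match = lambda a, b: a == b                    # original, strict condition
    same_class_set = lambda a, b: set(a) == set(b)       # re-set, looser condition

    conditions = [("exact match", exact_match), ("same class set", same_class_set)]
    print(determine_with_condition_change(["cat", "dog"], ["dog", "cat"], conditions))
    # 'same class set'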

[0135] According to this configuration, it is possible to give an opportunity to review the equivalency of classification results and obtain equivalency of classification results within a range that is tolerable to the user who will use the learning results.

[0136] A sixth embodiment of the present invention will be described below.

[0137] In the identification information assigning systems described above, a case can occur in which it is not possible to determine that classification results output from learning results are equivalent. In this case, it is also possible to obtain equivalency of classification results by changing the input data that is input to the learning results and used when determining the equivalency of classification results.

[0138] In view of this, as shown in FIG. 17, an identification information assigning system described above includes an input data changing unit 9 that, if the determination unit 4 could not determine that classification results are equivalent, changes the input data that is input to a learning result and is to be used when determining the equivalency of classification results.

[0139] Changing input data includes not only increasing the number of pieces of input data, but also changing the type of input data, for example.

[0140] According to this configuration, it is possible to prevent inequivalence of classification results caused by, for example, an insufficient number or insufficient types of pieces of input data, or an excessive amount of input data.
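The input data changing unit 9 might operate along the lines of the sketch below; adding pieces of data and switching the data type are the two kinds of change mentioned above, and the data source used here is an assumed stand-in.

    def change_input_data(current_data, data_source, add_count=0, new_type=None):
        """Change the input data used for the equivalency determination, either
        by adding more pieces of data or by switching to another data type."""
        if new_type is not None:
            return list(data_source(new_type))                      # change the type
        return current_data + list(data_source(None))[:add_count]   # add more pieces

    def data_source(data_type):
        """Assumed stand-in returning canned samples per data type."""
        samples = {None: ["img3", "img4", "img5"], "vocalization": ["sound1", "sound2"]}
        return samples[data_type]

    print(change_input_data(["img1", "img2"], data_source, add_count=2))
    print(change_input_data(["img1", "img2"], data_source, new_type="vocalization"))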

[0141] The identification information assigning system having the above configuration operates as described below.

[0142] First, the classification result input unit 3 receives an input of classification results output from each learning result.

[0143] The determination unit 4 reads out, from the learning information storage unit 1, information regarding the learning results that correspond to the classification results. Determination conditions for determining the equivalency of classification results are also read out from the determination condition storage unit 2. Then, using the information regarding the learning results and the determination conditions, the determination unit 4 determines whether the classification results output from the learning results are equivalent.

[0144] If the determination unit 4 determines that the classification results are equivalent, the identification information assigning unit 5 assigns the same identification information to the classification results, to the learning results that output the classification results determined to be equivalent, or to both.

[0145] According to the embodiments described above, the equivalency of classification results output from multiple learning results (classifiers) is determined, and the same identification information is assigned to learning results or classification results that are determined to be equivalent, thus making it possible to match multiple learning results (classifiers) or classification results output from multiple learning results (classifiers). Also, learning results or classification results that have been assigned the same identification information can be treated as being the same class, and it is possible to increase convenience for users of the classifiers.

[0146] The following describes a specific embodiment in which the identification information assigning system of an embodiment described above is applied to a specific service.

[0147] The following system content is envisioned in the service of the present embodiment.

[0148] The system performs learning regarding classification. A neural network having classification ability is constructed as the learning result. By using information for constructing the neural network that has classification ability, it is possible to duplicate the neural network and use it in other places. Various capabilities can be attained for the learning result by changing learning conditions such as the learning objective, the details of the learning data, and the learning program.

[0149] The system can manage the usage of the capabilities obtained as multiple learning results trained regarding classification. Performing training regarding classification makes it possible to attain classification ability. Accordingly, by managing learning results, it is possible to identify and manage corresponding classification abilities.

[0150] When a learning result is used, the classification ability thereof is demonstrated depending on the usage environment. Identifying learning results makes it possible to manage the usage of the learning results.

[0151] When learning requests are received from multiple requesters, and classification learning is performed through multiple learning methods with use of multiple sets of learning data, management of the learning results becomes particularly important. Multiple learning results are identified and managed in cases such as the following.

[0152] · Different subjects
· Different learning requests
· Different learning data
· Different learning objectives
· Different usage devices
· Other, such as usage of different learning results

Also, the specific embodiment described below is one example, and other configurations are possible. For example, the entire system may be implemented as one device. Also, more than one of each device can be provided, and the sharing of functions can be changed as desired.

[0153] FIG. 18 shows a system configuration diagram of the present embodiment.

[0154] In FIG. 18, 10 denotes the learning requester system, and 20 denotes the learning service provider system.

[0155] Note that the learning requester system 10, the learning service provider system 20, and the devices described below can be configured by a general-purpose computer that has hardware resources such as a processor 101, a memory (ROM or RAM) 102, a storage device (hard disk, semiconductor disk, or the like) 103, an input device (keyboard, mouse, touch panel, or the like) 104, a display device 105, and a communication unit 106 as shown in FIG. 19. The functions of the learning requester system 10, the learning service provider system 20, and the various devices are realized by a program stored in the storage device 103 being loaded to the memory 102 and executed by the processor 101. Note that the learning requester system 10, the learning service provider system 20, and the various devices may be configured by one computer, or may be configured by distributed computing with use of multiple computers. Also, in order to increase the processing speed, some or all of the functions of the learning requester system 10, the learning service provider system 20, and the various devices can also be realized using dedicated hardware (e.g., a GPU, an FPGA, or an ASIC).

[0156] The learning requester system 10 may include a learning data preparing device 11, a learning requesting device 12, and/or a learning result usage device 13.

[0157] The learning data preparing device 11 may be a device that prepares data for machine learning. FIG. 20 shows a block diagram of the learning data preparing device 11.

[0158] The learning data preparing device 11 may include an operation unit 111, a learning data acquisition unit 112, a learning data recording unit 113, a data acquisition control unit 114, a learning request content recording unit 115, and a communication unit 116.

[0159] The operation unit 111 may be a unit for operating the learning data preparing device 11, such as a keyboard or a mouse.

[0160] The learning data acquisition unit 112 may acquire information regarding subjects, which is to be learning data for machine learning (deep learning in a neural network). For example, it is possible to use any input device such as a camera, a sensor, a network terminal, or an autonomous-travel robot sensor.

[0161] The learning data recording unit 113 may be a storage unit that stores learning data for machine learning that was acquired by the learning data acquisition unit 112.

[0162] The data acquisition control unit 114 may control the learning data acquisition unit 112 to acquire learning data and store the acquired learning data in the learning data recording unit 113.

[0163] The learning request content recording unit 115 may store learning request content indicating what type of machine learning is to be performed and what classification ability is to be attained. The learning request content may indicate, for example, (1) attainment of the ability to identify that a cat is included in an image using 640*480 to 1280*1024 dot and 24 bit/pixel image data, and/or (2) attainment of the ability to classify the rank of mandarin oranges using sensor data regarding various ranks as learning data.
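The two example request contents in this paragraph could be recorded as simple structured entries, for instance as sketched below; the field names are assumptions and not part of the described recording unit.

    from dataclasses import dataclass

    @dataclass
    class LearningRequestContent:
        """Hypothetical record kept by the learning request content recording unit 115."""
        objective: str
        input_data_spec: str

    requests = [
        LearningRequestContent(
            objective="identify that a cat is included in an image",
            input_data_spec="640*480 to 1280*1024 dot, 24 bit/pixel image data"),
        LearningRequestContent(
            objective="classify the rank of mandarin oranges",
            input_data_spec="sensor data regarding various ranks"),
    ]
    for request in requests:
        print(request.objective, "--", request.input_data_spec)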

[0164] The communication unit 116 may have a function of exchanging data with the learning requesting device 12 via a network.

[0165] The learning requesting device 12 may transmit learning request information to the learning service provider system 20. FIG. 21 shows a block diagram of the learning requesting device 12.

[0166] The learning requesting device 12 may include a learning request unit 121 and a communication unit 122.

[0167] The learning request unit 121 may transmit, to the learning service provider system 20 via the communication unit 122, learning data that is stored in the learning data recording unit 113 of the learning data preparing device 11 and learning request content that is stored in the learning request content recording unit 115 of the learning data preparing device 11.

[0168] The communication unit 122 may have a function of exchanging data with the learning service provider system 20 via a network.

[0169] The learning result usage device 13 may be a device that actually classifies and/or identifies a subject with use of a learning result (e.g., a neural network that has attained classification ability) provided from the service provider. Examples include an inspection device and/or a monitoring device. Note that the learning result usage device 13 is not limited to being one device, and may be multiple devices.

[0170] The learning result usage device 13 may include a communication unit 131, a learning result input unit 132, a control unit 133, a neural network setting unit 134, a neural network 135, a data acquisition unit 136, an input unit 137, and an output unit 138.

[0171] The communication unit 131 may have a function of exchanging data with the neural network setting unit 134 and the learning service provider system 20 via a network.

[0172] The learning result input unit 132 may receive information regarding the learning result (e.g., a neural network that has attained classification ability) provided by the learning service provider system 20.

[0173] The control unit 133 may control various units and send, to the neural network setting unit 134, information regarding the learning result (e.g., a neural network that has attained classification ability).

[0174] The neural network setting unit 134 may perform setting of the neural network 135 based on information regarding the learning result (e.g., a neural network that has attained classification ability) provided by the service provider. Accordingly, the neural network 135 that includes the ability to classify subjects may be generated.
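Purely as an illustration of how the neural network 135 might be set up from provided learning result information, the sketch below serializes and restores network parameters with PyTorch; the framework choice, layer sizes, and file name are assumptions, since the embodiment does not specify them.

    import torch
    import torch.nn as nn

    def build_network():
        """Placeholder architecture; the real structure would accompany the learning result."""
        return nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

    # Learning service provider side: save the learning result information.
    trained = build_network()
    torch.save(trained.state_dict(), "learning_result_1.pt")

    # Learning result usage device 13 side: set up the neural network 135.
    network_135 = build_network()
    network_135.load_state_dict(torch.load("learning_result_1.pt"))
    network_135.eval()

    # The network now classifies data received from the input unit 137.
    sample = torch.randn(1, 32)
    print(network_135(sample).argmax(dim=1))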

[0175] The data acquisition unit 136 may be a unit that acquires data that is to be the classification subject, and examples thereof include a camera, a microphone, and various types of sensors.

[0176] The input unit 137 may receive data with predetermined specifications from the data acquisition unit 136, execute necessary data processing, convert the data into a data format that is compatible with the neural network 135, and output the data to the neural network 135.

[0177] The neural network 135 may receive the data from the input unit 137, classify the data, and output classification results.

[0178] The output unit 138 may be a display, a printer, or the like, and output the classification results obtained by the neural network 135.

[0179] Note that the learning data preparing device and the learning result usage device may have approximately the same configuration, and therefore a common configuration may be used. For example, they may be realized by connecting a learning data input device to a PC. Alternatively, they can be integrally realized by including a learning data input device as an incorporated device.

[0180] Next, the learning service provider system 20 will be described.

[0181] The learning service provider system 20 may include a learning request reception device 21, a learning database device 22, a learning device 23, and an equivalency determining device 24.

[0182] The learning request reception device 21 may receive a learning request from the learning requesting device 12. The received learning request content may be registered in the learning database device 22.

[0183] The learning database device 22 may be a database that stores overall information related to learning, such as learning request content, information used in machine learning, and/or information used when determining the equivalency of classification results and assigning identification information. This learning database device 22 may include the learning information storage unit 1 and the determination condition storage unit 2 in the embodiment described above.

[0184] FIG. 22 shows an example of the learning database device 22.

[0185] The learning database device 22 shown in FIG. 22 may be constituted by a learning program DB 221, a learning data DB 222, a learning request DB 223, a learning result usage history DB 224, and a classification result DB 225. Note that although the example of implementation as one device is described here, implementation with multiple devices is also possible.

[0186] The various databases record data such as the following.

[0187] (1) Learning program DB 221

The learning program DB 221 may store learning programs for performing learning. The learning content of various learning programs can be acquired from external programs. Note that the learning programs can also be stored in association with learning elements such as the learning subjects, the content of the learning data, and the objective. Many learning programs can be registered in the learning program DB. By designating learning elements, it is possible to specify a learning program and make it executable.

[0188] (2) Learning data DB 222

The learning data DB may store learning data used in learning. The learning data can be stored in association with learning elements such as the subjects of the learning data, the details of the learning data, the range of the learning data, and the learning objective.

[0189] (3) Learning request DB 223

The learning request DB 223 may store the content of learning that was requested and is to be performed. Information regarding the learning request can be stored in association with learning request elements such as information regarding the learning requester, the subjects of the learning data, the details of the learning data, the range of the learning data, and the learning objective.

[0190] (4) Learning result usage history DB 224

The learning result usage history DB 224 may store a learning result usage history. This database can store information regarding the results of using the classification ability attained for learning results. Information regarding usage of learning results can be stored in association with learning usage such as information regarding the user of the learning result, the subjects of the learning data, the details of the learning data, the range of the learning data, and the learning objective.

[0191] (5) Classification result DB 225

The classification result DB 225 may store information regarding the abilities of classification using learning results and the respective classification results. Information can be stored for each learning result. The determination condition for determining the equivalency of classification results may also be stored in the classification result DB 225. Information obtained in the processing of later-described application examples can also be stored.
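Purely as an illustration, the five databases described above could be laid out with a lightweight schema such as the following SQLite sketch in Python; all table and column names are assumptions made for the sketch.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # in-memory stand-in for the learning database device 22
    conn.executescript("""
    CREATE TABLE learning_program (program_id TEXT PRIMARY KEY, subject TEXT, objective TEXT);
    CREATE TABLE learning_data    (data_id TEXT PRIMARY KEY, subject TEXT, data_range TEXT);
    CREATE TABLE learning_request (request_id TEXT PRIMARY KEY, requester TEXT, objective TEXT);
    CREATE TABLE usage_history    (usage_id TEXT PRIMARY KEY, user TEXT, learning_result_id TEXT);
    CREATE TABLE classification_result (
        result_id TEXT PRIMARY KEY,
        learning_result_id TEXT,
        class_label TEXT,
        identification_info TEXT,         -- e.g. 'A' assigned to equivalent results
        determination_condition TEXT
    );
    """)

    conn.execute("INSERT INTO classification_result VALUES (?,?,?,?,?)",
                 ("r1-cat", "learning_result_1", "cat image", "A", "exact match"))
    conn.execute("INSERT INTO classification_result VALUES (?,?,?,?,?)",
                 ("r2-cat", "learning_result_2", "cat image", "A", "exact match"))

    # All classification results assigned the same identification information.
    print(conn.execute("SELECT result_id FROM classification_result "
                       "WHERE identification_info = 'A'").fetchall())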

[0192] Furthermore, diverse learning-related information, which is described below, may be distributed among and stored in the learning program DB 221, the learning data DB 222, the learning request DB 223, the learning result usage history DB 224, and/or the classification result DB 225.

[0193] Learning-related information may include diverse types of information such as the information described below, and these types of information may be stored in the learning data DB 222, the learning request DB 223, the learning result usage history DB 224, and/or the classification result DB 225, and overall management thereof can be performed by the learning database device 22.

[0194] (1) Information regarding the external interface

· For example, the type of data to be input, the method of acquiring the data to be input, and the data to be output (e.g., neural network output).

[0195] (2) Information regarding learning data

(2-1) Information regarding identification information of learning data
It is possible to assign identification information to learning data and conceal the details of the learning data as described below when performing management.

[0196] (2-2) Detailed information regarding learning data

For example, the details of learning data, supervision data information (necessary for supervised learning but not necessary for unsupervised learning), the learning data acquisition method (e.g., differences in learning data acquisition techniques), the learning data pre-processing method, and/or the learning data subject scope (such as acquisition subject (e.g., population and/or subject scope), subset, acquisition period, data acquisition method during learning, details of learning data and/or test data).

[0197] (3) Attribute-related information
For example, data attributes, learning program attributes, related tool attributes, classification subject attributes, and/or rights relationship information (if rights relationships exist among the above, such information may be recorded in the DB).

[0198] (4) Information regarding the learning period

For example, information indicating when learning was performed.

[0199] (5) Learning objective

Various objectives may be set for each instance of learning, and examples of this information are described below.

[0200] (5-1) Information regarding the subject of the ability to be attained by machine learning

Subject information may be information indicating what field the subject is to be, such as indicating that images are the subject, text is the subject, or audio is the subject.

[0201] Also, the ability to be attained may be information regarding the type of ability, for example. Examples of information regarding the type of ability include information indicating that classification ability is to be attained (may include information on whether the learning of the classification ability was supervised or unsupervised), that predictive ability is to be attained (predetermined classification may be performed internally), and/or that control ability is to be attained (predetermined classification may be performed internally).

[0202] (5-2) Information regarding a learning extent evaluation

Information regarding the learning extent evaluation may be, for example, the content of an evaluation function.

[0203] (5-3) Information regarding an attained level

Attained level information may be information on the attained level that is the objective in learning, for example.

[0204] (5-4) Information regarding re-learning

Re-learning information may be, for example, an ID specifying previous learning and information regarding objective change content.

[0205] (6) Information regarding differences between learning devices

Attained abilities can possibly be different depending on the learning device. The following shows examples of such information regarding the learning device.

[0206] (6-1) Information regarding the learning device manufacturer
· Model
· Information on the derivative model, etc.
· Information indicating that a learning service is to be provided as a cloud service.

[0207] (6-2) Information regarding learning device specifications

Information on the computation ability and parallelism, for example.

[0208] (6-3) Information regarding learning device settings

Information regarding settings such as the repetition count, abort time, and available consumption power.

[0209] (7) Information regarding the learning method

(7-1) Information regarding the learning tool program

Information regarding the type and version, for example.

[0210] (7-2) Information regarding the used programming language

For example, information identifying the programming language that was used, such as the name of the programming language that was used. This is because the programming language that was used may need to be understood in order to perform later modifications or the like.

[0211] (7-3) Information regarding the used framework

For example, information specifying the framework that was used. This is because various frameworks for performing learning by deep learning have been disclosed by various companies, and it may be necessary to specify the framework that was used.

[0212] (7-4) Information regarding the configuration of the applied neural network

· Information regarding the learning technique

Various abilities may be attained depending on the learning technique. It may therefore be very important to store information regarding the learning technique. Examples of information regarding the learning technique include, but are not limited to, information regarding techniques existing prior to deep learning such as SVM and regression models, deep belief network, deep Boltzmann machine, stacked autoencoder, autoencoder, restricted Boltzmann machine (RBM), dropout, sparse coding, regularization, denoising autoencoder, type of activation function (sigmoid function, softsign, softplus, ReLU, etc.), and/or type of random number sequence.

[0213] · Information regarding the neural network configuration
· Information regarding hyperparameters
For example, information regarding the input layer configuration, the number of layers, the number of hidden layer units, and/or the content of layers. The information regarding the content of layers may describe a specific name such as CNN (Convolutional Neural Network) or RNN (Recurrent Neural Network) (Elman network, Jordan network, ESN (Echo State Network), LSTM (Long Short-Term Memory network), BRNN (Bi-directional RNN), etc.).

[0214] · Information regarding presence/absence of interlayer feedback

For example, information regarding the configuration of the feedback circuit or the configuration of the output layer.

[0215] · Computational precision

For example, information quantifying the computational precision.

[0216] · Computational precision during learning

For example, information quantifying the computational precision during learning.

[0217] · Condition for ending learning

For example, information regarding a condition such as ending learning when all of the samples are correctly classified.

[0218] (7-5) Information regarding framework configuration settings

The above-described neural network configuration can also be performed based on a setting method provided by the framework.

Information regarding settings provided by the framework can be stored in advance, and the method provided by the framework can be used as the information regarding learning.

[0219] (8) Information regarding ability test

This is information regarding an ability test. Depending on the ability test method that is performed, a learning result may be influenced so as to satisfy the test requirements.

[0220] (9) Individual learning result differences

For example, random numbers are often used in learning by deep learning, and therefore slight differences can arise in the attained ability each time learning is performed. In view of this, information regarding individual learning result differences may be recorded.

[0221] The above types of learning-related information are merely examples, and the present disclosure is not limited to the above examples. Various influencing factors may arise according to the learning subject, objective, and environment, and therefore if other influencing factors or the like exist, it may be sufficient to apply identification information for identifying such information and add it to the learning database device 22.
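
As a purely illustrative, non-limiting sketch, the learning-related information of items (5) to (9) above could be stored in the learning database device 22 as a structured record such as the following. All field names and values here are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LearningRecord:
    """Hypothetical record of learning-related information (items (5)-(9) above)."""
    learning_id: str                                      # key used by the learning database device 22
    subject: str                                          # e.g. "images", "text", "audio"
    ability_type: str                                     # e.g. "classification", "prediction", "control"
    evaluation_function: Optional[str] = None             # (5-2) learning extent evaluation
    attained_level: Optional[float] = None                # (5-3) attained level
    relearning_of: Optional[str] = None                   # (5-4) ID of the previous learning, if any
    device_manufacturer: Optional[str] = None             # (6-1)
    device_spec: dict = field(default_factory=dict)       # (6-2) computation ability, parallelism
    device_settings: dict = field(default_factory=dict)   # (6-3) repetition count, abort time, power
    tool_version: Optional[str] = None                    # (7-1) learning tool program type/version
    language: Optional[str] = None                        # (7-2) programming language used
    framework: Optional[str] = None                       # (7-3) framework used
    network_config: dict = field(default_factory=dict)    # (7-4)/(7-5) technique, hyperparameters, layers
    ability_test: Optional[str] = None                    # (8) ability test information
    run_seed: Optional[int] = None                        # (9) individual learning result differences

# Example entry for one production-line classifier (all values invented for illustration).
record = LearningRecord(
    learning_id="line-A-001",
    subject="images",
    ability_type="classification",
    framework="example-dl-framework",
    network_config={"layers": 5, "hidden_units": 256, "activation": "ReLU"},
    run_seed=42,
)
```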

[0222] As described above, by configuring the learning database device 22 in the above manner, it is possible to compare the classification abilities of different learning results. This therefore makes it possible to determine equivalency. Note that the databases can be used by the learning request reception device 21, the learning device 23, and the equivalency determining device 24 via a network such as a local network.

[0223] Based on learning request information received by the learning request reception device 21, the learning device 23 may perform learning based on learning-related information recorded in the learning request DB 23. A learning program that is compatible with the requested content may be searched for, selected, and executed by the learning device. The learning device 23 thus may perform learning based on the request. A classification ability may be attained as the learning result. Note that the learning device 23 may be only one device, or may be multiple devices according to the content of the learning request and the number of learning requests.

[0224] Although there are no restrictions on the learning technique used by the learning device 23, a neural network will be described as one example. By performing learning regarding classification, a neural network that has attained classification ability may be generated. Before the learning program starts learning, the configuration of the neural network may be determined using hyperparameters. The hyperparameters may include information regarding the input layer configuration, the intermediate layer configuration, and/or the output layer configuration. Optimum hyperparameters often differ depending on the learning subject, and the learning program may perform setting according to the learning objective. Tuning of the hyperparameters may be performed by a human researcher.
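
The following minimal sketch illustrates, under stated assumptions, how a learning program might expand hyperparameters into a network configuration before learning starts. The dictionary keys and the builder function are hypothetical and do not describe the actual implementation of the learning device 23.

```python
def build_layer_spec(hyperparams: dict) -> list:
    """Expand hyperparameters into an ordered layer specification (illustrative only)."""
    spec = [{"type": "input", "units": hyperparams["input_units"]}]
    for _ in range(hyperparams["hidden_layers"]):
        spec.append({"type": "dense",
                     "units": hyperparams["hidden_units"],
                     "activation": hyperparams.get("activation", "ReLU")})
    spec.append({"type": "output",
                 "units": hyperparams["output_classes"],
                 "activation": "softmax"})
    return spec

# Hypothetical hyperparameters for a two-class good/poor article classifier.
hyperparams = {
    "input_units": 4,        # e.g. weight, projection area, roundness, browning colour
    "hidden_layers": 2,
    "hidden_units": 32,
    "output_classes": 2,     # good article / poor article
}

for layer in build_layer_spec(hyperparams):
    print(layer)
```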

[0225] The neural network can attain classification ability for classifying a subject by performing learning with input data or input signals through deep learning, for example.

[0226] Examples of classification ability include, but are not limited to, a determination of a good article or a poor article, a determination of rank, the grouping of subjects, a determination of state, the detection of subjects having the same features, and/or hierarchical classification.

[0227] Note that classification is not necessarily required to be performed uniquely. Certain subjects that are the same may be included in multiple classes.

[0228] Also, classes do not need to be completely independent of each other. A certain class may be included in another class.

[0229] Also, the classification ability can be utilized internally by a device whose objective is not classification.

[0230] For example, it can be utilized by a malfunction predicting device. In this case, the current state of the subject device is classified as being in a state of high possibility or low possibility of a malfunction.

[0231] Also, a control device that manages building air conditioning can appropriately perform air conditioning control by classifying prediction results of the future usage state of rooms in the building. For example, learning can be performed with information from sensors that detect heat emission using infrared light. Based on features regarding the generation pattern of heat energy generated by persons or machines such as PCs detected in the room, it is possible to identify whether or not there is a possibility of a rise in the temperature in the room. The extent of influence differs for each room, but the degree of influence is similar for rooms that have the same structure. When used with different rooms that have the same structure, it is not necessary to perform learning with data acquired from each room. In such a case, when control is performed using a learning result obtained using data regarding another room, if it is possible to confirm that the temperature in the room can be controlled in a similar manner, it is possible to determine that the learning result can be applied to that room. This type of method is called Deep Q Learning, and can be applied in AI technology for performing learning to attain the ability to control a system.

[0232] The utilization of classification ability is not limited to the above description.

[0233] It is possible to broadly utilize a result that has learned to output one or more outputs using input data input to a neural network.

[0234] For example, in the case where there are multiple classifiers, the case of performing learning multiple times, or the like, multiple abilities can be attained as multiple learning results.

[0235] Note that when applying a classifier that employs learning results, it is very important to understand whether classification abilities are the same or different.

[0236] The reason for this is that if different devices classify the same subjects into different results, or if multiple devices classify subjects into the same class when they should be classified into different classes, such results cannot be used appropriately. The attained ability is obtained by machine self-learning, and therefore it is not possible for a person or external device to foresee what ability will be attained.

[0237] Accordingly, there is a need for a means such as the equivalency determining device 24 for making it possible to understand whether attained classification abilities are the same or different, as will be described later.

[0238] The equivalency determining device 24 may be a device that, with respect to classification output of classification results output from multiple learning results that attained classification ability through learning, determines whether the classification output of the classification results may be considered to be equivalent classes. The equivalency determining device 24 may include the functions of the classification result input unit 3, the determination unit 4, the identification information assigning unit 5, the setting unit 6, the classification subject equivalency determination unit 7, the re-learning request unit 8, and/or the input data changing unit 9 of the embodiment described above.

[0239] Also, as another aspect, the equivalency determining device 24 can be configured by a communication unit 241, a learning result comparison unit 242, a comparison subject selection unit 243, an equivalency determination control unit 244, and an identification information generation unit 245, as shown in FIG. 23.

[0240] The communication unit 241 may have a function of exchanging data with the learning request reception device 21, the learning database device 22, and the learning device 23 via a local network.

[0241] The learning result comparison unit 242 may read out, from the learning result DB of the learning device 23, at least two learning results for which the equivalency of classification results is to be determined. Learning-related information for the learning results may also be read out from the learning database device. The learning results and learning-related information for the learning results that were read out may be transmitted to the equivalency determination control unit 244.

[0242] The comparison subject selection unit 243 may have functions similar to those of the classification subject equivalency determination unit 7. The comparison subject selection unit 243 may have a function of, based on the learning-related information for the learning results that was received via the equivalency determination control unit 244, determining the equivalency of classification subjects input to the learning results and selecting subjects. A function of transmitting information regarding subject equivalency to the equivalency determination control unit 244 may also be included.

[0243] When the equivalency of subjects is determined, if the classification subjects are different, the classification results will not be equivalent, as previously described. However, whether the subjects of the classification abilities attained by the machines are equivalent cannot be known based on the learning results. For this reason, it may be necessary to know whether or not the classification or identification subjects are equivalent. In order to automatically make a determination, it may be necessary to provide a means for comparing classification or identification subjects and determining the equivalency thereof. In view of this, the comparison subject selection unit 243 may determine the equivalency of classification or identification subjects of the learning results, and then select experimentation data for determining the equivalency of the classification results.

[0244] Note that the equivalency of classification or identification subjects can be determined using various conditions.

[0245] If it is guaranteed in advance that the learning results will be used with subjects that are equivalent, the subjects may be considered equivalent. For example, in the case of inspection devices for a specified product, the subjects may be considered equivalent.

[0246] Also, if information indicating that the subjects are the same is recorded, it can be determined that the subjects are equivalent based on the recorded information. For example, if the names of the subjects recorded in the learning result usage history DB 224 are the same, the subjects may be considered equivalent.

[0247] Also, in the case where the subjects are the same or have the same attributes, it can be determined that the subjects are equivalent. For example, in the case of performing learning with use of information from sensors disposed at a particular intersection, the subjects may be considered equivalent. In another example, in the case of performing learning with use of information from sensors for detecting the status of people in a vehicle compartment, even if the vehicles are different, the subjects may be considered equivalent in terms of being in a vehicle compartment.

[0248] Also, in the case of performing learning with use of information obtained from the same device or devices of the same type, if the intended use is the same, it can be determined that the subjects are equivalent.

[0249] Also, a configuration is possible in which an administrator can determine that the subjects are equivalent and input that result as information. In this case, the comparison subject selection unit 243 can deem, based on that information, that the subjects are equivalent.

[0250] Also, if the ranges of use of classifiers that employ learning results are equivalent, it can be determined that the subjects are equivalent. For example, if the classifiers use sensors that are disposed in vehicle compartments, it is known that the subjects are in vehicle compartments.

[0251] Also, if the identification information assigned to the learning data for learning is the same, it can be determined that the subjects are equivalent. For example, in the case of intersection monitoring cameras, it is known that the subjects are intersections.

[0252] Furthermore, the comparison subject selection unit 243 may be able to identify subjects and determine that the subjects are predetermined subjects. For example, in the case of a classifier that classifies the status of people with use of images, when a portion of a person's face or body is detected in an image, it can be determined that the subject is a person.

[0253] Furthermore, the subject can be specified with use of an image and then classified. For example, a configuration is possible in which a cat breed classifier first identifies that the subject is a cat and limits the images to cat images, and then classifies the breed.

[0254] In the case where the subjects are equivalent, the comparison subject selection unit 243 can assign the same subject identification ID as information for identifying the subject of the classifier or the learning result.
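
A minimal sketch, under stated assumptions, of how the comparison subject selection unit 243 might apply conditions such as those above to recorded metadata and assign a shared subject identification ID. The metadata keys, rule set, and ID format are hypothetical and only illustrate one possible realization.

```python
import uuid

def subjects_equivalent(meta_a: dict, meta_b: dict) -> bool:
    """Apply illustrative subject-equivalency rules based on recorded information."""
    if meta_a.get("subject_name") and meta_a.get("subject_name") == meta_b.get("subject_name"):
        return True      # same recorded subject name (cf. learning result usage history DB)
    if meta_a.get("usage_range") and meta_a.get("usage_range") == meta_b.get("usage_range"):
        return True      # same range of use, e.g. "vehicle compartment"
    if meta_a.get("device_type") == meta_b.get("device_type") and \
       meta_a.get("intended_use") == meta_b.get("intended_use"):
        return True      # data from the same type of device, same intended use
    # An administrator may also declare equivalency and record it as information.
    return meta_a.get("admin_declared_equivalent", False) and \
           meta_b.get("admin_declared_equivalent", False)

def assign_subject_id(meta_a: dict, meta_b: dict):
    """Assign the same subject identification ID when the subjects are deemed equivalent."""
    if subjects_equivalent(meta_a, meta_b):
        subject_id = meta_a.get("subject_id") or f"SUBJ-{uuid.uuid4().hex[:8]}"
        meta_a["subject_id"] = meta_b["subject_id"] = subject_id
        return subject_id
    return None

m1 = {"subject_name": "snack-product-X", "device_type": "line camera", "intended_use": "inspection"}
m2 = {"subject_name": "snack-product-X", "device_type": "line camera", "intended_use": "inspection"}
print(assign_subject_id(m1, m2))   # both records now share one subject identification ID
```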

[0255] In the case where the comparison subject selection unit 243 determines that the classification subjects of learning results are equivalent, the equivalency determination control unit 244 may read out a classification result equivalency determination condition from the classification result DB 225. The equivalency determination control unit 244 may then compare the classification results that are output from the learning results for the experimentation data, in accordance with the classification result equivalency determination condition that was read out, in order to determine the equivalency of the classification results. It may then be determined whether the comparison results, that is to say the classification results, satisfy the equivalency determination condition.

[0256] The following describes an example of a condition for determining the equivalency of classification results.

[0257] For example, if the sets of experimentation data input to learning results are the same, and the classification results output from both of the learning results for that experimentation data completely match each other, it can be determined that the classification results are equivalent.

[0258] By setting the extent of matching as a condition, it is possible to determine whether or not classification results can be deemed to be equivalent. For example, it may be determined that classification results are equivalent if greater than or equal to 99.9% of all of the classification results for the same experimentation data match. Alternatively, it may be determined that classification results are equivalent if greater than or equal to 90% of all of the classification results for the same sample match.
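
The matching-ratio condition described above could be checked as in the following sketch. The 99.9% and 90% thresholds come from the text; the function names, data layout, and example values are assumptions for illustration only.

```python
def match_ratio(results_a, results_b):
    """Fraction of the same experimentation data for which two learning results agree."""
    if len(results_a) != len(results_b) or not results_a:
        raise ValueError("both learning results must classify the same experimentation data")
    agreed = sum(1 for a, b in zip(results_a, results_b) if a == b)
    return agreed / len(results_a)

def classification_results_equivalent(results_a, results_b, threshold=0.999):
    """Deem the classification results equivalent if the agreement meets the set condition."""
    return match_ratio(results_a, results_b) >= threshold

# Illustration with invented labels for 3,000 shared experimentation samples.
a = ["good"] * 2995 + ["poor"] * 5
b = ["good"] * 2991 + ["poor"] * 9
print(classification_results_equivalent(a, b))          # strict 99.9% condition -> False
print(classification_results_equivalent(a, b, 0.90))    # relaxed 90% condition -> True
```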

[0259] Also, a configuration is possible in which equivalency is determined with use of the extent to which classification results output from both learning results match an existing classification result recorded in an external device (e.g., a database). According to this configuration, it is possible to construct a system in which a determination is made based on the classification result of a human expert.

[0260] Alternatively, from the viewpoint of using classification results output from learning results, a configuration is possible in which any condition for deeming equivalency is set, and equivalency is accordingly determined.

[0261] The identification information generation unit 245 may have functions similar to those of the identification information assigning unit 5 described above. In the case where the equivalency determination control unit 244 determines that the classification result equivalency determination condition is satisfied, the identification information generation unit 245 may generate identification information that is to be assigned to the learning results or the classification results, and assign that identification information to the learning results or the classification results.

[0262] When generating identification information, it may be necessary for the user, systems, devices or the like to be able to identify whether or not learning results or classification results are equivalent when using the learning results for classification. It may therefore be necessary to generate identification information.

[0263] The identification information can be used in the following manner. Examples include, but are not limited to, usage as a key for registration in a database, usage for specification of a learning result by a user, usage for specification of a learning result among systems or devices, and/or usage for specification of a learning result in other scenes where equivalency needs to be checked.

[0264] Also, the identification information can include the following information. Examples include, but are not limited to, information regarding learning content (e.g., the learning method, details of the learning data, and/or the learning objective) and/or information regarding learning usage (e.g., information regarding the device to be used, information regarding the usage environment, and/or information regarding the user).

[0265] By including such information, it is possible to designate a condition, perform a search, and acquire learning data from a learning database. Accordingly, details of the corresponding learning result can be known at the time of use.

[0266] Also, the identification information is not necessarily required to be information that can be understood by a person. It may be sufficient that a machine can identify learning results.

[0267] For example, any text data, any binary data, or any other identifying object may be used.

[0268] Examples of using things for identification include, but are not limited to, an electronic identifying means such as an ID tag, using a 3D printer to create a physical shape (e.g., a humanoid robot face), creating a physical key (identification by a physical means), combining any objects (creating identification information by a combination), a biochemical means such as genetic information, and/or a physical means such as a barcode.

[0269] If the identification information needs to be identifiable by a human, the following are possible.

[0270] For example, generating a natural language text string, generating a 2-dimensional or 3-dimensional image icon, and/or attaching a physical label (this method is used when recording learning results in a physical storage medium).

[0271] Also, the generated identification information can be stored.

[0272] If the identification information is stored, it can be assigned as supplementary information when creating a copy, it can be managed in the classification result DB that records the classification abilities obtained as learning results, and it is possible to search for identification information that satisfies a condition by designating a searchable condition. If it is possible to search for identification information that satisfies a condition, it is possible to output a set of identification information based on information specifying subjects, and it is possible to specify input data and output a set of identification information that belongs thereto. It is also possible to use identification information as a key, search for learning results that have that identification information, and display a list of the search results.

[0273] Next, operations of the equivalency determining device 24 will be described with use of a flowchart. FIG. 24 is a flowchart of exemplary operations performed by the equivalency determining device 24. Note that the following describes an exemplary case of determining the equivalency of a learning result M1 that classifies a subject A and a learning result M2 that classifies a subject B, or determining the equivalency of the classification results.

[0274] First, the learning result comparison unit 242 selects the two learning results M1 and M2 for which equivalency is to be determined (Step 1000). The learning result comparison unit 242 then reads out, from the learning result DB of the learning device 23, the at least two learning results M1 and M2 for which classification result equivalency is to be determined (Step 1001).

[0275] The comparison subject selection unit 243 determines whether the subject A and the subject B of the two learning results M1 and M2 are equivalent (Step 1002). If it is determined that the subject A and the subject B of the learning results M1 and M2 are not equivalent, processing moves to later-described Step 1012.

[0276] If the comparison subject selection unit 243 determines that the subject A and the subject B of the learning results M1 and M2 are equivalent, the equivalency determination control unit 244 reads out a classification result equivalency determination condition from the classification result DB 225 (Step 1003). The equivalency determination control unit 244 then creates a subject A determination result list for experimentation data for determining classification result equivalency (Step 1004). Similarly, the equivalency determination control unit 244 creates a subject B determination result list for experimentation data for determining classification result equivalency (Step 1005).

[0277] Next, the equivalency determination control unit 244 compares the subject A determination result list and the subject B determination result list (Step 1006), and determines whether the equivalency determination condition is satisfied (Step 1007). If the equivalency determination condition is satisfied, it is determined that the classification results of the learning results M1 and M2 are equivalent (Step 1010).

[0278] If it is determined that the classification results of the learning results M1 and M2 are equivalent, in accordance with a user request, the identification information generation unit 245 generates identification information that is to be assigned to the learning results M1 and M2, to the classification results of the learning results M1 and M2, or to both, and assigns the identification information to the learning results M1 and M2, to the classification results of the learning results M1 and M2, or to both (Step 1011).

[0279] On the other hand, if the equivalency determination control unit 244 determines that the classification result equivalency does not satisfy the determination condition, the equivalency determination condition is changed, and equivalency re-determination preparation processing is performed (Step 1008). When equivalency re-determination is to be performed, the equivalency determination condition is re-set, or the experimentation data is changed, for example, as described in the fourth, fifth, and sixth embodiments described above. After the re-determination preparation processing, the equivalency determination control unit 244 compares the subject A determination result list and the subject B determination result list, and determines whether the equivalency determination condition is satisfied (Step 1009). If the equivalency determination condition is satisfied, it is determined that the classification results of the learning results M1 and M2 are equivalent (Step 1010).

[0280] Furthermore, if it is determined that the classification result equivalency does not satisfy the determination condition, the equivalency determination control unit 244 checks whether or not another equivalency determination condition remains (Step 1012), and determines that the classification results are not equivalent if no other equivalency determination condition remains (Step 1013).

[0281] In this way, the equivalency determining device 24 can determine the equivalency of classification results output from learning results, and/or the equivalency of the learning results.
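
The flow of FIG. 24 (Steps 1000 to 1013) could be expressed roughly as in the following sketch. Every callable used here is a placeholder for the corresponding unit of the equivalency determining device 24; it is an assumed outline, not an actual implementation.

```python
def determine_equivalency(m1, m2, conditions, subjects_equivalent, run_condition):
    """Rough sketch of Steps 1000-1013 of FIG. 24 (all arguments are hypothetical stand-ins).

    m1, m2              : the two learning results being compared (Steps 1000-1001)
    conditions          : classification result equivalency determination conditions (DB 225)
    subjects_equivalent : callable standing in for the comparison subject selection unit 243
    run_condition       : callable(list_a, list_b, condition) -> bool, standing in for unit 244
    """
    if not subjects_equivalent(m1, m2):                       # Step 1002
        return None                                           # subjects differ: skip to Step 1012

    for condition in conditions:                              # Step 1003, then Step 1008 retries
        list_a = condition["experiment"](m1)                  # Step 1004: subject A result list
        list_b = condition["experiment"](m2)                  # Step 1005: subject B result list
        if run_condition(list_a, list_b, condition):          # Steps 1006-1007 / Step 1009
            return "equivalent"                               # Step 1010 (Step 1011 assigns an ID)
    return "not equivalent"                                   # Steps 1012-1013
```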

[0282] The following describes specific application examples that employ the embodiments described above.

[0283] Application Example 1

Application Example 1 is a method of determining the equivalency of classification results based on the equivalency of input data input to learning results and the equivalency of classification results.

[0284] If learning is performed based on the assumption that the classification results will be the same when learning is performed multiple times, it may be determined that the classes of the learning results are equivalent.

[0285] For example, in the case of a method of performing classification using input data that is weight and size measurement results, even if the weight and size measurement methods differ slightly, approximately the same data will be input to the learning results.

[0286] In the case where there are multiple classifiers that employ learning results that received approximately the same data, if it can be confirmed that the classifiers output the same classes for a sample group of classification subjects, those classes can be deemed to be the same classes.

[0287] If a predetermined condition for determining classification result equivalency is set in advance, and the predetermined condition is satisfied, the classification results can be deemed to be equivalent.

[0288] If it is determined that the input data sets are the same, the equivalency of classes can be guaranteed with the following method.

[0289] Note that whether or not input data sets are the same can be determined with the following conditions.

· The devices that generated the input data are the same or equivalent.

· The input data sets were acquired based on the same standards.

· An external device or organization determined that the input data sets are the same.

[0290] Application Example 1 shows a specific example of the case where determination devices (e.g. classifiers that employ learning results) that evaluate a property of articles, such as their quality, and in particular classify good articles and poor articles, are disposed in three production lines at a factory, for example. The following application example describes the case of classifying a snack into good articles and poor articles.

[0291] The product has the shape of a sand bag that is curved like a shrimp, as shown in FIG. 25. The main ingredient, corn, is mixed with water and heated, then ejected from a nozzle having a special shape and cut, after which it foams and expands into the above-described shape.

[0292] The texture and shape are features of the product, and therefore it is necessary to classify good articles and poor articles and exclude the poor articles. For example, if the product is spherical overall, there is no problem in terms of the quality as a food item, but the features of the product are impaired. There is also an influence on the texture. Examples of poor articles include having a noticeably different shape, not expanding, having a portion missing and having a sharp cross-section, not being curved, being under a predetermined weight, being under a predetermined size, being larger than a predetermined size, being partially discolored and too dark, having a different color than the standard product color, having the weight or shape of a different item or other product, or having another aspect that is disadvantageous to the producer or consumer.

[0293] On the other hand, in the product manufacturing method, individual products do not always have completely the same shape, and the shape differs slightly. For this reason, it is difficult to classify good articles and poor articles by precisely measuring the three-dimensional shape with camera images or the like.

[0294] Accordingly, since the shape can be irregular, the poor article determination criteria are somewhat ambiguous, and certain abnormalities in shape are allowed, but it is not desirable for the determination criteria to differ between production lines. It should be noted that it is necessary to completely remove foreign objects or other products that have been mixed in. The shape need only be similar to the extent of not making the consumer uncomfortable.

[0295] However, it has been conventionally difficult to perform sensory classification in the same way as a human.

[0296] In view of this, by using deep learning, which is AI technology, it is possible to perform classification similarly to human sensitivity. In this case, it is sufficient to be able to classify a shape that a consumer would think is noticeably different from the actual product shape and is defective. Deep learning can perform classification into any number of classes, but classification is performed into two classes in Application Example 1, namely a good article and a poor article. Note that instead of the two good article and poor article classes, it is possible to perform classification into different classes such as the degree of a good article, the type of defect, and foreign object, for example.

[0297] The following describes an example of classification into a good article and a poor article in order to simplify the description.

[0298] Conventionally, when determining good articles and poor articles on such a line, a human inspection device producer finds features for each product and creates an inspection device program. For this reason, each time the product shape changes, the inspection device program needs to be re-developed. Accordingly, it has not been easy to make good/poor article determinations for such products that have an irregular shape. However, by using deep learning technology, it has become possible to train a neural network to classify subjects based on images or sensor data. For example, according to exemplary embodiments, a plurality of production lines may be controlled based on the identification information assigned to the corresponding classification results and/or to the corresponding learning results by the identification information assigning unit. In particular, production lines including inspection devices may be controlled in view of the corresponding classification results based on the trained neural network, e.g. based on identification information such as IDs assigned to the classification results, the learning results, and/or the outputs from classifiers.

For example, if the same identification information is assigned to two or more of the classification results and/or to two or more of the learning results, two or more of the plurality of production lines corresponding to the two or more of the classification results and/or to the two or more of the learning results may be controlled so that the two or more of the plurality of production lines produce a specified number or specified numbers of articles. In particular, the plurality of production lines having the same identification information may be controlled with respect to the number of articles produced by the corresponding production lines, under the assumption that they produce equivalent articles. The identification information, such as IDs assigned to classification results according to an evaluation, may also be assigned to inspection devices and/or to classifiers. The production lines may be controlled in view of the equivalence evaluation, i.e. based on the assigned IDs. For example, if the same IDs are assigned to the inspection devices (i.e. all classifiers), products may be transferred, provided, and/or distributed in the production lines such that, for example, the same number of products is transferred in each production line.

On the other hand, if the identification information assigned to one of the learning results and/or classification results is different from the identification information assigned to any other learning result and/or classification result, a production line corresponding to said one of the learning results and/or classification results may be controlled so as not to produce articles. For example, the production line including the inspection device with the different identification information is controlled not to transfer articles. The identification information, such as IDs, may be assigned to inspection devices and/or to classifiers. If one of the inspection devices is assigned an ID different from those of all of the other inspection devices, then production may be controlled so that articles are not produced and/or not transferred in the corresponding production line including the inspection device with the different ID.
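
A minimal sketch of the line-control rule just described: lines whose inspection devices share an ID keep producing a specified number of articles, while a line whose ID differs from every other line is stopped. The controller interface, quota value, and IDs are hypothetical.

```python
from collections import Counter

def plan_line_control(line_ids: dict, per_line_quota: int) -> dict:
    """line_ids maps production line name -> ID assigned to its inspection device/classifier."""
    id_counts = Counter(line_ids.values())
    plan = {}
    for line, assigned_id in line_ids.items():
        if id_counts[assigned_id] >= 2:
            # Equivalent inspection: keep producing/transferring the specified number of articles.
            plan[line] = {"produce": True, "quota": per_line_quota}
        else:
            # ID differs from every other line: do not produce/transfer on this line.
            plan[line] = {"produce": False, "quota": 0}
    return plan

# Hypothetical IDs for three lines; line C's classifier was not deemed equivalent.
print(plan_line_control({"line_A": "ID-7", "line_B": "ID-7", "line_C": "ID-9"},
                        per_line_quota=1000))
```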

[0299] In Application Example 1, in order to evaluate a property of the articles, such as their quality, and in particular to determine good articles and poor articles on each line, sensors are used to acquire data indicating the appearance, shape, or state of products.

[0300] The sensors use moving images. Examples include a weight sensor that estimates weight from movement relative to wind pressure, a color sensor, and a shape sensor. A shape sensor estimates the projection area and roundness or curviness of a three-dimensional shape using images captured from two directions. A color sensor identifies the product portion in an image and obtains the average browning color. Equivalent devices are selected from among devices that generate input data, such as cameras or sensors. The cameras or sensors are in different states for each line, have differences in resolution or lenses, have different imaging element properties, have unique sensor differences, or are different models, and therefore output slightly different values for the same object. Also, products are identified as good articles and poor articles with use of multi-dimensional vectors made up of weight data, shape data, and color data.

[0301] In such a case, if there is only one inspection device (e.g. classifier), the problem to be solved by the present invention does not arise, but inspection cannot be performed on the respective lines. If there are multiple inspection devices (e.g. classifiers), and there are slight differences therebetween, the problem to be solved by the present invention arises. The differences may be due to specific characteristics of the respective production lines (e.g., lighting conditions, temperature, humidity, etc.) and/or variations in the produced articles (e.g., variations in one or more of size, shape, color, etc. of the articles) that are used for the learning.

[0302] In order to prevent product variation between lines, it is necessary for the good/poor article determination to be approximately the same for each line.

[0303] In this application example, learning by a learning device 23 is performed for each line.

[0304] 10,000 good article samples and 1,000 poor article samples are prepared, and data is acquired by inspection sensors in each line. The good article samples and poor article samples are the same in each line.

[0305] The learning device 23 learns to make a good article determination for data obtained from good article samples and a poor article determination for data obtained from poor article samples. For example, 70% of the samples are used as learning data, and the remaining 30% are used as test data for the equivalency determining device 24.

[0306] With the 30% test data, the equivalency determining device 24 determines whether 99.9% or more of the samples were correctly classified (identified) (note: foreign objects must be detected 100% of the time). In each line, if every foreign object is determined to be a poor article with 100% accuracy, and the total erroneous determination rate is less than or equal to 0.1%, which is the predetermined condition, it is deemed that the classification results of the good article and poor article classifiers used as the inspection devices in the lines are equivalent.
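
The acceptance check in paragraph [0306] could look like the following sketch. The 0.1% error limit and the 100% foreign-object rule come from the text; the record format, label names, and example data are assumptions.

```python
def line_passes(test_records, max_error_rate=0.001):
    """test_records: list of (true_label, predicted_label) pairs from the 30% held-out samples.

    Condition from the text: every foreign object must be rejected as a poor article (100%),
    and the total erroneous determination rate must be at most 0.1%.
    """
    errors = 0
    for true_label, predicted in test_records:
        if true_label == "foreign_object" and predicted != "poor":
            return False                         # foreign objects must always be rejected
        if (true_label == "good") != (predicted == "good"):
            errors += 1                          # good/poor determination error
    return errors / len(test_records) <= max_error_rate

# Lines whose classifiers satisfy the condition are deemed to give equivalent results (invented data).
lines = {"line_A": [("good", "good")] * 3299 + [("poor", "poor")],
         "line_B": [("good", "good")] * 3295 + [("good", "poor")] * 4 + [("poor", "poor")]}
equivalent_lines = [name for name, records in lines.items() if line_passes(records)]
print(equivalent_lines)   # only line_A meets the 0.1% limit in this invented example
```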

[0307] Accordingly, the equivalency determining device 24 can give, to the output signals (subject product good article or poor article determination signal) of the inspection device (classifier), identification information (an ID) indicating that the signal is a subject product good article or poor article determination signal.

[0308] In the case of other classifiers as well, a configuration is possible in which, if all of the classifiers are within a range set as a predetermined condition for a specific class, predetermined identification information (an ID) is assigned to the output from all of the classifiers, thus making it possible for that class to be distinguished from other classes.

[0309] Note that if all of the classes of multiple classifiers satisfy a predetermined condition (e.g., an accuracy rate of 99.9% or more), an ID that has the meaning of a good/poor article determination signal for subject products can be assigned as grouped identification information for the overall output of the classifiers. For example, this can be applied to the previously described case of different classes such as the degree of a good article, the type of defect, or a foreign object.

[0310] In this application example, a sample group of classification subjects is specified, and learning is performed using input data acquired by the data input units of each classifier for the same sample group. The input data that is input to the classifiers is approximately the same. Also, if a predetermined condition for determining classification result equivalency is satisfied, the classes can be deemed to be the same. The predetermined condition is defined as the condition that the accuracy of classification of good articles and poor articles is 99.9% or higher.

[0311] Note that the predetermined condition is not limited to the above- described condition in this application example. It is possible to use any condition that makes it possible to determine classification result equivalency.

[0312] For example, the condition can be set such that good article and poor article samples are classified with 100% accuracy. Also, rather than using the same test samples, equivalency can be determined using the degree of matching between the results of classification of the classifiers and the classification results of a classifier that are an indicator and have been evaluated for classification suitability in advance. Alternatively, classification results can be deemed to be the same classification result if the variation falls within a predetermined range.

[0313] Also, application of the present invention is not limited to the above application example.

[0314] Although the example of classification into two groups, namely good article and poor article, is described above, the same configuration can be adopted in the case of three or more groups. In this case, it is sufficient to check equivalency for any one class. It is not necessarily required that all of the classes be deemed to be equivalent among multiple classifiers. It is sufficient to check equivalency for classes that require equivalency. Such an example has been described in the above method of assigning identification information.

[0315] Furthermore, there are other cases where the classification results of classifiers can be deemed to be the same. The following describes another case. For example, as described below, in the case where different learning results or the devices that use them can be deemed to be the same in terms of the application aspect, the same identification information can be assigned to the classification results.

[0316] Among devices installed in a vehicle, there are driver monitoring devices. It is possible to monitor the driver's state using camera images, a heartbeat sensor, a body temperature sensor, or the like. The driver's face can be detected in a camera image. It is possible to detect parts such as eyebrows, eyes, a nose, and a mouth in the driver's face, and determine the state of each part. For example, it is possible to detect the direction of the face, the gaze direction, and behavioral states such as the eyelids being open/closed or the mouth being open/closed.

[0317] Based on the state of the facial parts or information from the heartbeat sensor, the body temperature sensor, or the like, it is possible to perform detailed classification of the state of the person driving by performing learning with deep learning. It is possible to attain classification ability for multiple classes of driver states. For example, it is possible to attain the ability to perform classification into a state A: suitable driver state, a state B: reduced attentiveness, and a state C: unsafe state.

[0318] However, if different data is used in learning, the classification ability will not necessarily be the same.

[0319] For example, if learning is performed by the same learning program using learning data collected in different regions such as Japan, North America, and Europe, or learning data collected in different periods such as from 2000 to 2014 and from 2010 to 2016, there is a possibility that the classification abilities will be different.

[0320] However, if it is determined that predetermined classes are the same in learning results obtained based on different learning data as described above, there are cases where the same identification information can be assigned.

[0321] In view of this, as shown in FIG. 26, the learning service provider system 20 can be provided with a common class extraction device 25 that extracts classification results that have commonalities from the class output of the classification results output from multiple learning results that attained classification ability through learning, and if it is determined that another class and a predetermined class are the same in the extracted classification results, assigns the same identification information as the other class.

[0322] For example, when data acquired in Europe is input to a device that was trained with data mainly collected in Japan, if the classification results of the state A, the state B, and the state C match the results of classification of the data from Japan within a predetermined range, the common class extraction device 25 extracts the matching classes as common classes. If it is deemed that all of the learning results that employed data in respective regions are the same as each other, the same class ID can be assigned to the learning results.

[0323] If the same class ID is assigned, a driver monitoring device tuned for the Japanese market will have approximately the same classification ability even when used in another market. Note that as a derivative ID, it is possible to assign ID information that indicates the range of the learning data or ID information that indicates a data range such as a period.
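
A sketch of the common class extraction described in paragraphs [0321] to [0323]: classes whose results match within a predetermined range across regional learning results receive a shared class ID. The tolerance value, data layout, and ID format are assumptions for illustration.

```python
def extract_common_classes(results_by_region: dict, reference: str, tolerance=0.05):
    """results_by_region: region -> {class_name: list of per-sample 0/1 outputs on shared data}.

    Returns a mapping from each common class to a shared class ID, using one region
    (e.g. the Japan-trained model) as the reference.
    """
    ref = results_by_region[reference]
    common = []
    for class_name, ref_outputs in ref.items():
        agrees_everywhere = True
        for region, outputs in results_by_region.items():
            if region == reference:
                continue
            mismatch = sum(1 for r, o in zip(ref_outputs, outputs[class_name]) if r != o)
            if mismatch / len(ref_outputs) > tolerance:     # outside the predetermined range
                agrees_everywhere = False
                break
        if agrees_everywhere:
            common.append(class_name)
    # Classes found common across all regional learning results share one class ID each,
    # e.g. {"state_A": "CLASS-STATE_A", "state_B": "CLASS-STATE_B", ...}.
    return {name: f"CLASS-{name.upper()}" for name in common}
```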

[0324] In the example described above, the ability to classify the state of a driver in a vehicle is attained using the output of sensors that detect the state of a human.

A similar method can be applied to a device that classifies the state of a human such as a healthcare or behavior monitoring device.

Also, the subject is not limited to being a person, and the method is applicable to any device that classifies the state of a device, a building, a system, or the like.

[0325] Furthermore, the common class extraction device 25 can be said to be similar to the equivalency determining device 24 from the viewpoint of comparing at least two learning results and determining the equivalency of classification results output from the learning results. In other words, if the concept of common classification results is introduced into the equivalency determination condition of the equivalency determining device 24, implementation in the equivalency determining device 24 is also possible.

[0326] Accordingly, the common class extraction device 25 may be included in the equivalency determining device 24, or a device with a similar configuration may be provided separately from the equivalency determining device 24.

[0327] Application Example 2

In Application Example 1, classification into good articles and poor articles is performed using predetermined sensor information.

[0328] In the method in Application Example 1, there can be cases where the classification results of all of the devices cannot be deemed to be equivalent classes. In this case, it is possible that the number of samples in the learning data is insufficient.

[0329] In view of this, in Application Example 2, this case is handled by changing the details of the learning data.

[0330] If the classification results are not equivalent in the case in Application Example 1, the equivalency determining device 24 notifies that fact to the learning device 23. Upon receiving this notification, the learning device 23 sends a learning data addition request to the learning requester system 10.

[0331] Upon receiving the learning data addition request, the learning requester system 10 adds learning data. The addition of learning data may be performed by a human adding learning data, but a configuration is possible in which a similar notification is given to a robot that prepares learning data. Additional data is prepared by the line control apparatus, robot, or person who received the addition request.

[0332] When additional data is input, the learning device 23 performs learning again. Using a method similar to that in Application Example 1, the equivalency determining device 24 determines whether the classification results output from the learning result obtained in re-learning are in a range according to a predetermined condition.

[0333] Note that there is also a method of performing learning with a reduced amount of input data. A configuration is possible in which the number of types of input data is reduced if the equivalency of the classification results of the classifiers is in a range of a predetermined condition when learning is performed with a reduced number of types of input data.

[0334] Also, the input data that is to be used may be selected with a predetermined condition. For example, it is possible to exclude data obtained after notification of a failure in the production line. Also, when the number of samples is increased or decreased in the above example, it is possible to select the subjects for which the number of samples is to be increased or decreased according to a predetermined condition. For example, the number of pieces of abnormal data that have a feature different from normal is increased or decreased. Alternatively, the number of pieces of data in a period when an abnormality was confirmed is increased or decreased, for example.

[0335] Application Example 3

In Application Example 3, in the case where equivalency is not determined among multiple neural networks in Application Example 1 or Application Example 2, classification results are matched by re-setting the classification result equivalency determination condition.

[0336] The greater the need to use deep learning, the more complex and subtle the determination criteria become when performing classification, and in such a case, it is possible for slightly different determinations to be made in the vicinity of class boundaries. In this case, by reviewing the conditions, it is sometimes possible to match classes. Alternatively, even in the case of performing classification with different sensor models, there are cases where classes can be matched by reviewing the conditions for determining class equivalency.

[0337] The following shows a specific example of this.

[0338] Cases are envisioned in which a product is determined to be a poor article by one device but determined to be a good article by another device, or vice versa. Either or both of these cases may occur.

[0339] For example, consider the case where, when identifying poor articles, products are sorted overall into 20 ranks visually and based on product measurement data, and the lowest 7 ranks are determined to be defective. Standard classification is considered to be classification similar to that of a human expert, for example. Alternatively, it is possible to set an appropriate determination condition, such as the defective rate reaching a predetermined value, and select a standard class.

[0340] In such a case, the range of ranks determined to be poor articles is set wider or narrower, and it is again determined whether the determination of the classification result matches the standard classification.

[0341] If all of the classifiers are within the range of the predetermined condition, it can be deemed that the determination results of all of the determination devices are equivalent.

[0342] In Application Example 3, by changing the condition of the range of classification as a poor article, it is possible to determine that classification results are equivalent, for example.
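 
A sketch of the re-setting in Application Example 3: the rank cutoff that separates poor articles is widened or narrowed until a device's determinations match the standard classification within the predetermined condition. The 20-rank and lowest-7-rank figures come from the text; the candidate range, agreement threshold, and function name are assumptions.

```python
def find_matching_cutoff(device_ranks, standard_defective,
                         candidate_cutoffs=range(5, 10), required_agreement=0.999):
    """device_ranks: per-sample rank (1 = worst .. 20 = best) output by one determination device.
    standard_defective: per-sample bool from the standard (expert-like) classification."""
    for cutoff in candidate_cutoffs:                      # widen/narrow the poor-article range
        device_defective = [rank <= cutoff for rank in device_ranks]
        agreed = sum(1 for d, s in zip(device_defective, standard_defective) if d == s)
        if agreed / len(device_ranks) >= required_agreement:
            return cutoff                                 # condition under which the results match
    return None                                           # no cutoff satisfies the condition
```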

[0343] In the above example, the range of ranks classified as poor articles is changed, but another condition can be changed. For example, in the case where products are classified using three evaluation indicators such as shape, weight, and color, and a portion of the products are poor articles, a configuration is possible in which only some conditions (e.g., the shape condition) are changed.

[0344] Application Example 4

In Application Example 4, by increasing or decreasing the number of types of input data, classification results become equivalent between multiple devices. The following shows an application example in which input data is added to Application Example 1.

[0345] In Application Example 1, classification is performed using weight, projection area, roundness or curviness of a three-dimensional shape, and average browning color as input data. If the classification results of all of the classifiers are not equivalent when using this input data, it is conceivable that the number of types of input data is insufficient.

[0346] In view of this, in Application Example 4, browning color unevenness is added. Browning color unevenness can be, for example, the difference in luminance between the lightest portion and the darkest portion in the product portion of an image.

[0347] In the case where the number of types of input data is successively decreased, and a combination for which classes match is found, the same class ID can be assigned to the classification results obtained by that combination. As a result, the computation amount in learning decreases due to reducing the number of types of input data.

[0348] Note that in the reduction method, if classification results do not become equivalent, it is possible to add a type of input data. If there was an insufficient number of types of input data that include features, adding an input data type makes it possible to bring classification results closer to equivalency. For example, the following types of input data are added.

· Addition of a shape measurement direction

· Edge sharpness

· Chroma data

· Other product (learning the features of another product with use of data on that product as learning data)

· Appearance as an object other than the product

· Detection of a foreign object such as a screw, an insect, a string, or a piece of paper (features thereof are learned using corresponding data)

· Output of a metal sensor (for foreign object removal precision)

When classes are learned by data in the classifiers, by successively increasing the number of types of input data, it is possible to add input data until classes become equivalent between the classifiers (learning results).
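
The add-until-equivalent strategy of Application Example 4 could be sketched as follows: starting from the current set of input data types, candidate types are added one at a time until the classifiers' classes can be deemed equivalent. The callable `train_and_compare` stands in for the learning device 23 plus the equivalency determining device 24 and is an assumption, as are the feature names.

```python
def grow_inputs_until_equivalent(base_features, candidate_features, train_and_compare):
    """train_and_compare(features) -> True when the classification results of all classifiers
    trained on these input data types satisfy the equivalency determination condition."""
    features = list(base_features)
    if train_and_compare(features):
        return features
    for extra in candidate_features:            # e.g. browning colour unevenness, edge sharpness
        features.append(extra)
        if train_and_compare(features):
            return features                     # the same class ID can be assigned at this point
    return None                                 # equivalency not reached with these candidates

# Hypothetical call for the snack example (my_pipeline is an assumed user-supplied callable).
# grow_inputs_until_equivalent(
#     ["weight", "projection_area", "roundness", "avg_browning"],
#     ["browning_unevenness", "edge_sharpness", "chroma", "metal_sensor"],
#     train_and_compare=my_pipeline)
```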

[0350] For example, a configuration is possible in which if multiple classification results are not equivalent when using a combination of predetermined types of data, the number of types of input data is increased or decreased, and if equivalency is found, the same class ID is assigned to the classification results. The following describes an example of classifiers that perform consumer behavior prediction at a retail store.

[0351] Classification is performed according to whether or not there is a high possibility of purchasing a specific product such as a snack as described in Application Example 1. There are four classes, namely high possibility of purchase, possibility of purchase, low possibility of purchase, and almost no possibility of purchase.

[0352] When the classes are learned based on learning data collected in respective predetermined periods (e.g., three months), if large variation occurs in the high possibility of purchase class, the following measures can be taken. Note that it is assumed that the input data included the consumer's age, sex, and occupation.

For example, when there were large variations according to store region in the classification ability attained as the learning results, the same class ID could be assigned when the variation fell within a predetermined range as a result of adding the following input data.

· Supplemented the input data with purchase history information regarding a predetermined product, such as another specific product from the same manufacturer.

· Added history information regarding the immediately previous purchase of a product that belongs to a specific group, such as an alcoholic drink.

Any type of input data may be added. For example, it is possible to randomly select input data from recorded information, and determine whether or not classification results match according to a predetermined condition.

[0353] Furthermore, a similar method can be used in the IoT field as well. For example, the following describes an example of learning to classify whether or not a cause-effect relationship exists between specific items with use of many sensors installed in a region.

[0354] Various sensors are incorporated in corporate infrastructure. By collecting the data output therefrom using a network, it is possible to learn whether cause-effect relationships exist.

[0355] For example, by using people passage detection sensors installed around a concert hall or stadium near a station, it is possible to learn the cause-effect relationship with an increase in vehicle departures from nearby parking lots. Assume that there was a large difference in the extent of the cause-effect relationship in the results obtained from learning data in different periods using data from sensors installed on streets heading toward the station. In that case, by additionally using data from a randomly selected sensor installed on a street heading toward a parking lot, it can be expected that the presence or absence of a cause-effect relationship will be able to be classified.

[0356] Application Example 5

In Application Examples 1 to 4, samples that were classified by a person in advance were prepared. For this reason, supervised learning could be performed.

[0357] However, deep learning can attain the ability to classify subjects through unsupervised learning.

[0358] Accordingly, there is demand for a method that can handle the case where there are no samples that have been classified in advance.

[0359] In the case where multiple classifiers classify subjects, it is sufficient to be able to determine whether classification results can be deemed to be equivalent in accordance with a predetermined condition. If there is a device that can determine classification result equivalency, equivalency can be determined. For example, if there is a means for inputting information that enables determining the equivalency of classes from a human expert, equivalency can be determined based thereon. Alternatively, it is possible to use a classifier that is a reference, and determine equivalency with that classifier in accordance with a predetermined condition.

[0360] Application Example 5 describes the case where the equivalency determining device 24 determines the equivalency of classification results output from learning results that were trained with unsupervised data.

[0361] The equivalency determining device 24 determines whether or not the classification results output from the neurons of different classifiers approximately match the classification results output from another device.

[0362] From the viewpoint of the user, the same ID can be assigned if the classification results can be deemed to be the same. Even if not all of the classification results have the same classification ability, when specific neurons perform the same classification, the same ID can be assigned to those neurons.

[0363] The following shows an application example for image classification. Note that although the same approach applies to determining whether or not a snack is a good article as in Application Example 1, it is generally difficult to grasp intuitively whether or not a specific snack is a good article, and therefore a more general example will be used.

[0364] For example, many images are input as unsupervised learning data, and learning by deep learning is performed so as to classify the images. In this application example, unsupervised learning is performed.

[0365] As a result of learning, the neural network classifies the images into groups of similar images, and the neurons that correspond to the respective classification results perform output. In the case where there are multiple classifiers, each classifier attains classification ability.

[0366] If specific classification results obtained by multiple classifiers can be deemed to be equivalent in accordance with a predetermined condition, the same ID can be assigned to those classification results.

[0367] In Application Example 5, in the case where one specific output of each of the multiple classifiers is output as the classification result for the majority of the images in the same image group, it can be determined that the same classification is being performed.
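A minimal sketch of this majority-based determination is shown below, assuming each classifier is a callable that returns one output label per image; the 50% majority share and all names are illustrative assumptions.

```python
from collections import Counter

def majority_output(classifier, image_group, min_share=0.5):
    """Return the output that the classifier produces for the majority of
    images in the group, or None if no output reaches the majority share."""
    counts = Counter(classifier(img) for img in image_group)
    output, count = counts.most_common(1)[0]
    return output if count / len(image_group) > min_share else None

def same_classification(classifiers, image_group):
    """If every classifier has some specific output for the majority of the
    same image group, deem that they perform the same classification and
    return the per-classifier outputs to be given one shared class ID."""
    outputs = [majority_output(c, image_group) for c in classifiers]
    return outputs if all(o is not None for o in outputs) else None
```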

[0368] Note that the naming of identification information that is to be assigned to classification results can be performed by a human.

[0369] For example, in the case where the input images include images of animals, the classification results obtained by learning can be classes such as cat, dog, horse, elephant, giraffe, and fish. When different learning data is used, or different versions of learning programs are used, it is not always the case that the classification ability is the same.

[0370] If equivalency can be determined by a human, this case can be handled by configuring the system as shown below.

[0371] Classified images are listed and displayed to a person, and the person can be caused to check what the classes are on the screen.

[0372] Also, it is possible to configure a system in which a person is shown a list of classified images, and an appropriate group name is input for each image group.

[0373] Also, if the same group name is input in accordance with a predetermined condition (e.g., by 80% or more of people), that group name can be used as the identification information (ID) for the classification result.

[0374] In this case, it is desirable to create a database of synonyms in advance and perform conversion between synonyms. For example, the character for "fish", "fishes", the hiragana for "fish", and the romaji for "fish" are represented by the character for "fish".
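The following sketch combines the synonym conversion and the agreement condition described above (e.g., 80% or more of people). The contents of the synonym table, the normalization rule, and the 80% threshold are illustrative assumptions.

```python
from collections import Counter

# Hypothetical synonym table: every variant is mapped to one representative name.
SYNONYMS = {"fishes": "fish", "sakana": "fish", "さかな": "fish", "魚": "fish"}

def normalize(name):
    """Convert an input group name to its representative synonym."""
    return SYNONYMS.get(name.strip().lower(), name.strip().lower())

def group_name_from_votes(names, required_share=0.8):
    """Use a group name as the class ID only if, after synonym conversion,
    it was input by the predetermined share of people (e.g. 80% or more)."""
    counts = Counter(normalize(n) for n in names)
    name, count = counts.most_common(1)[0]
    return name if count / len(names) >= required_share else None
```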

[0375] Alternatively, if names in different languages are input, they can be translated into a predetermined language. For example, when "Fish" is input in English, it is translated into the Japanese word for fish.

[0376] For example, in the case of displaying images perceived as fish by a human as classification results, the person inputs the group name "fish" for the classification result. If many people input the same group name, that group name can be used as the class ID.

[0377] The input ID may be converted into a different name space by a predetermined function. (For example, translated into Japanese).

[0378] In the case where there is a desire to align class levels, this can be handled by the following configuration.

[0379] • Group names are converted such that the levels are aligned in accordance with a predetermined condition. For example, a table of a representative name and the other names belonging to the same group is created in advance. In the example of animals, if different names belonging to animals, such as cat, dog, horse, elephant, and giraffe, are input, they are represented by "animal". In the case of fish, if different names corresponding to types of fish, such as carp, goldfish, and shark, are input, they are represented by "fish".
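A minimal sketch of such a representative-name table, used to align class levels, might look like the following; the table contents and names are illustrative only.

```python
# Hypothetical table of representative names; any listed name is converted
# to the representative name so that class levels are aligned.
REPRESENTATIVE = {
    "animal": ["cat", "dog", "horse", "elephant", "giraffe"],
    "fish":   ["carp", "goldfish", "shark"],
}
NAME_TO_REPRESENTATIVE = {name: rep
                          for rep, names in REPRESENTATIVE.items()
                          for name in names}

def align_level(group_name):
    """Convert an input group name to its representative name, if listed."""
    return NAME_TO_REPRESENTATIVE.get(group_name, group_name)
```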

[0380] If there is an external device that has assigned names to classes with the above method, it can be used as a reference.

[0381] For example, if a reference classifier is connected to a network, it is possible to transmit input data and compare classification results. If the classification results match in accordance with a predetermined condition, the names that the reference classifier assigned to the classes can be assigned as class IDs. In this case, equivalency can be determined without human involvement.

[0382] The equivalency determining device 24 can be applied to classification of things other than images as well. The following describes an example of classifying songs.

[0383] Classification is performed regarding the feelings evoked when a person listens to a song. The following classes can be used, for example.

• Happy song, sad song

• Song that brings thoughts of the morning, song that brings thoughts of the afternoon, and song that brings thoughts of the night

One classifier is used as a reference, and classification result equivalency is determined using the degree of similarity to the reference.

[0384] It is not necessarily required to use the above method as the condition for determining classification result equivalency.

[0385] Furthermore, the following is also conceivable as an example of application in Application Example 5.

[0386] The equivalency determining device 24 selects a learning result that is to be a reference in accordance with a predetermined condition. This corresponds to the reference classifier.

[0387] For example, a learning result with high stability in classification results is selected. Specifically, the same sample group is classified repeatedly, and the ratio of the number of times that a specific object was classified differently from its previous classification is obtained. The lower this ratio is, the higher the stability is, and therefore that learning result is used as a reference.

Alternatively, a configuration is possible in which the learning result whose classification results have the highest degree of similarity to those of another learning result is selected.
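The stability-based selection of a reference learning result described above could be sketched as follows, assuming each learning result is exposed as a callable classifier whose output may vary between repetitions (e.g., due to stochastic inference or retraining); the number of repetitions is arbitrary.

```python
def instability_ratio(classifier, samples, repetitions=5):
    """Classify the same sample group repeatedly and return the ratio of
    classifications that differ from the previous classification of the
    same object; a lower ratio means higher stability."""
    previous = [classifier(s) for s in samples]
    changed, total = 0, 0
    for _ in range(repetitions - 1):
        current = [classifier(s) for s in samples]
        changed += sum(1 for p, c in zip(previous, current) if p != c)
        total += len(samples)
        previous = current
    return changed / total if total else 0.0

def select_reference(classifiers, samples):
    """Select the learning result with the highest stability as the reference."""
    return min(classifiers, key=lambda c: instability_ratio(c, samples))
```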

[0388] Once the reference learning result has been determined, classification results can be matched to it with any of the methods in Application Examples 2 to 5.

[0389] It is sufficient to assign the same class ID (identification information) to each of the matched classification results.

[0390] If there is no need to match human sensitivity, using this method enables matching classification results with no human contribution whatsoever.

[0391] Application Example 6

Application Example 6 is an example of assigning the same identification information (ID) to a group of classification results for which equivalency was confirmed by the equivalency determining device 24.

[0392] In the case of images of animals, there are cases where there is a desire to perform classification into the superordinate concept of an animal, rather than a cat or a dog. In such a case, the same ID of being an animal is assigned to classification results for cat, dog, horse, elephant, and giraffe.

[0393] It is possible to align levels with a grouping method similar to that in Application Example 5.

[0394] For example, for each class for which grouping is to be performed, the classes that are to be included in the group are listed in advance, or a table is created that associates each parent group with a list of the classes that belong to it.

[0395] Furthermore, groups may be formed into a hierarchical structure. The following shows a specific example of this.

• The flower group includes the subordinate levels of cherry blossom, plum, and tulip.

• The grass group includes the subordinate levels of mugwort and pampas grass.

• The tree group includes the subordinate levels of pine, cedar, and cypress.

• The plant group includes the subordinate levels of flower, grass, and tree.

[0396] Note that when assigning class IDs, a configuration is possible in which classes are traced back to the highest-level group, and if it is determined that the subordinate concepts of a group are equivalent, it is deemed that the superordinate concept groups are also equivalent.
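A minimal sketch of such a hierarchical group table and of tracing a class back to its highest-level group is shown below; the table contents and function names are illustrative assumptions.

```python
# Hypothetical hierarchy: each group is associated with its subordinate levels.
HIERARCHY = {
    "plant":  ["flower", "grass", "tree"],
    "flower": ["cherry blossom", "plum", "tulip"],
    "grass":  ["mugwort", "pampas grass"],
    "tree":   ["pine", "cedar", "cypress"],
}
PARENT = {child: parent for parent, children in HIERARCHY.items() for child in children}

def trace_to_top(class_name):
    """Trace a class back to the highest-level group it belongs to."""
    while class_name in PARENT:
        class_name = PARENT[class_name]
    return class_name

def superordinate_groups_equivalent(class_a, class_b):
    """If the subordinate concepts are deemed equivalent, the superordinate
    concept groups they trace back to are also deemed equivalent."""
    return trace_to_top(class_a) == trace_to_top(class_b)
```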

[0397] Application Example 7

In the above application examples, when many scenery images or snapshots with supervised data are input, there are cases where the learning results of different learning devices attain different classification ability for individual subjects. In such cases, different classifications are performed for the same subject.

On the other hand, when classification is performed by a person, there are cases where the same subject is classified from different viewpoints. At this time, it is not necessarily the case that the classes from respective viewpoints contradict each other.

[0398] In Application Example 7, multiple class IDs can be assigned to the same subject.

[0399] If classification result equivalency is confirmed in multiple learning results, it is possible to deem that classification results are equivalent even if multiple class IDs are assigned. This assignment of multiple classes is not limited to images, and is applicable to music data and content classification.

[0400] For example, in the case of music data, the class up-tempo or down-tempo, the class light song or dark song, or loud song or quiet song, and the class music for the morning, music for the afternoon, or music for the night can be assigned to the same song. Multiple pieces of identification information from classification results are assigned to the same song.

[0401] For example, in the case of educational content, it is possible to perform classification based on buyer reviews. For example, multiple classes can be assigned to the same content, such as content preferred by a specific generation based on generation-specific learning data, content closely related to a specific event based on data regarding people with interest in a specific event, and content evaluated highly by people with a specific personality.

[0402] Also, there are cases where multiple class IDs are assigned in the IoT field as well.

[0403] For example, the state of a store can be classified by a combination of sensors.

[0404] For example, being a busy time or a slow time is classified based on learning data influenced by the number of people, and being too cold, comfortable, or too hot is classified based on learning data related to air conditioning. More than one such class can be assigned to a single store.
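As a small illustration of assigning multiple class IDs to the same subject, the sketch below attaches one class per viewpoint (e.g., crowding and air-conditioning comfort) to a single store; all names are hypothetical.

```python
def assign_multiple_ids(subject, classifiers_by_viewpoint):
    """Assign one class ID per viewpoint to the same subject, e.g. a store
    classified both by crowding and by air-conditioning comfort."""
    return {viewpoint: classifier(subject)
            for viewpoint, classifier in classifiers_by_viewpoint.items()}

# Hypothetical usage: both IDs are attached to the same store.
# ids = assign_multiple_ids(store_sensor_data,
#                           {"crowding": crowd_classifier,
#                            "comfort":  comfort_classifier})
# e.g. {"crowding": "busy time", "comfort": "too hot"}
```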

[0405] Application Example 8

Application Example 8 is an example in which classification results are combined such that multiple learning results perform the same classification in accordance with a predetermined condition.

[0406] A specific learning result among multiple learning results is set and used as a class reference. For example, a class closest to a class obtained by a human expert can be set and used as a class reference. Alternatively, it is possible to use a superordinate concept group obtained by grouping with the method in Application Example 6.

[0407] A classification result that matches, in accordance with a predetermined condition, a classification result from the reference learning result can be deemed to be an equivalent classification result with the method in Application Example 1.

[0408] Accordingly, the same class identification information as in the reference learning result can be assigned.

[0409] In particular, in the case where classification was performed by unsupervised learning, there are cases where the reference class is divided into multiple classes when performing classification. For example, a learning result that classifies photographs of feline animals into a single class is used as the reference. At this time, there are cases where a learning device that was trained based on other learning data classifies feline animals such as cats, lions, and tigers into different classes. In this case, cat, lion, and tiger are combined to create one classification result (feline), thus making it possible to perform the same classification as the reference classification.

[0410] FIG. 27 shows a flowchart that generalizes the above method.

[0411] First, a reference class is selected from among the classes of multiple learning results (S2000).

[0412] Another class is compared with the reference class in accordance with a predetermined condition (S2001). It is then determined whether the subjects of the learning results match (S2002). If the subjects match, there is no need to perform combining, and therefore processing is ended.

[0413] If the subjects do not match, a different combination of classes is selected (S2003), and the selected classes are combined into a union (S2004). Note that combining may be performed to obtain an intersection.

[0414] The reference classification result and the combined class classification result are compared in accordance with a predetermined condition (S2005). It is then determined whether the subjects of the learning results match (S2006). If the subjects match, there is no need to perform combining, and therefore processing is ended.

[0415] If the result of the comparison satisfies an end condition (S2007), the closest combined result is selected (S2008).
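The following sketch loosely follows the procedure of FIG. 27, under the assumption that each class is represented as the set of subjects it covers and that the predetermined matching condition is an overlap threshold. The threshold, the combination search, and all names are illustrative assumptions rather than the claimed method.

```python
from itertools import combinations

def matches(reference_subjects, combined_subjects, threshold=0.9):
    """Predetermined condition (S2005): Jaccard overlap between the reference
    class and the combined class; the 0.9 threshold is a placeholder."""
    union = reference_subjects | combined_subjects
    if not union:
        return True
    return len(reference_subjects & combined_subjects) / len(union) >= threshold

def combine_to_match_reference(reference_subjects, candidate_classes, max_size=4):
    """Search combinations of candidate classes (e.g. cat, lion, tiger) whose
    union of subjects matches the reference class (e.g. feline).  If no
    combination satisfies the condition, the closest one found is returned
    (S2007, S2008)."""
    best, best_score = None, -1.0
    for size in range(1, min(max_size, len(candidate_classes)) + 1):
        for combo in combinations(candidate_classes.items(), size):
            combined = set().union(*(subjects for _, subjects in combo))
            if matches(reference_subjects, combined):      # S2006: subjects match
                return [name for name, _ in combo]
            union = reference_subjects | combined
            score = len(reference_subjects & combined) / len(union) if union else 1.0
            if score > best_score:
                best, best_score = [name for name, _ in combo], score
    return best
```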

[0416] Specific application examples have been described above. FIG. 28 shows a flowchart in the case of applying these application examples to the equivalency re-determination preparation processing (Step 1008) of the flowchart in FIG. 24. The following are re-determination preparation processes.

• Application Example 2: change learning data range (Step 3000)

• Application Example 3: re-set condition so that classification results are equivalent (Step 3001)

• Application Example 4: change input data type so that classification results are equivalent (Step 3002)

• Application Example 5: make determination using external device that can determine equivalency (Step 3003)

• Application Example 6: assign same identification information to group of classification results for which equivalency was confirmed (Step 3004)

• Application Example 7: assign multiple pieces of identification information when equivalency is confirmed (label classification results in multiple groups) (Step 3005)

• Application Example 8: combine classes such that classification results are equivalent (Step 3006)

The above re-determination preparation processes are one example, and the present invention is not limited to these examples.
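Purely as an illustration of how these preparation steps could be organized, the sketch below maps the step numbers in FIG. 28 to their descriptions and dispatches to caller-supplied handlers; the handler interface and all names are assumptions and not part of the disclosed flowchart.

```python
# Hypothetical dispatch table for the re-determination preparation processes
# (Steps 3000 to 3006); the handler callables are supplied by the system.
RE_DETERMINATION_PREPARATIONS = {
    3000: "change learning data range",                  # Application Example 2
    3001: "re-set equivalency determination condition",  # Application Example 3
    3002: "change input data type",                      # Application Example 4
    3003: "determine with external device",              # Application Example 5
    3004: "assign same ID to confirmed group",           # Application Example 6
    3005: "assign multiple IDs",                         # Application Example 7
    3006: "combine classes",                             # Application Example 8
}

def prepare_re_determination(step, context, handlers):
    """Apply the preparation process selected for this re-determination attempt."""
    description = RE_DETERMINATION_PREPARATIONS[step]
    print(f"Step {step}: {description}")
    return handlers[step](context)
```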

[0417] Lastly, fields to which the present invention is applicable will be described.

[0418] The present invention can handle cases where deep learning technology is utilized in order to realize classification ability comparable to a human or classification that exceeds human identification ability.

[0419] For example, the present invention is applicable to making a poor article determination for a snack that has an irregular shape. Due to the shape being irregular, it is not possible to classify good articles and poor articles by performing precise measurement. However, the purchaser of the product can easily identify an article that is clearly poor. For example, it is easy to know when a foreign object such as a piece of paper is included.

[0420] There are also other examples where there is demand for classification ability at a level that is comparable to human classification ability.

[0421] For example, with a production line for painting an automobile operation unit or surrounding panel, there is demand for similar classification ability when detecting the defect of subtle unevenness in the coating. The range of tolerable painting unevenness for the product depends on the judgement of the purchaser of the vehicle, and therefore it is necessary to perform classification that is similar to human sensitivity. In particular, in the case where there are small dents and bumps in the paint surface, or there are partial gradations in the paint rather than a single color, it is difficult to perform classification with a simple method. Accordingly, in the case of determining good articles or the rank of a product, classification ability that is similar to or exceeds human ability is needed.

[0422] There are many such fields for which advanced classification ability is demanded.

[0423] Machine learning technology used to attain classification ability is not limited to deep learning. The present invention is applicable to any machine learning technology.

The application fields described in the application examples are merely examples.

[0424] Classification is needed in various scenes, and therefore the present invention is applicable to a wide variety of fields. Examples are shown below.

[0425] The present invention is applicable to the classification of good articles and poor articles for a product that has an irregular shape. The present invention is also applicable to various industrial fields such as food products, machine parts, chemical products, and medicinal products. The present invention is also applicable to shipping inspection and rank classification in the fishing, agriculture, and forestry fields, and the like. The present invention is applicable to the service industry. The present invention is applicable to medical and health fields. The present invention is applicable when applying AI technology to products in the integration field. The present invention can be applied to a corporate system. The present invention can be applied to a system that utilizes IT technology. The present invention can be applied to big data analysis. The present invention can be applied as a classification function in a wide variety of control devices. The present invention can also be applied to any other field in which classification is needed.

[0426] A classification result equivalency determination sometimes differs according to the classification subject or the learning result, but in each of the above application examples, different equivalency determinations may be made using information stored in the learning information storage unit and the determination condition storage unit.

[0427] Also, part or all of the above embodiments can be described as noted below, but there is no limitation to this.

[0428] Note 1

An identification information assigning system including:

a learning information storage unit that stores information regarding at least two or more learning results that attained classification ability through machine learning; a determination condition storage unit that stores a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent; and a processor,

wherein the processor receives classification results output from the at least two or more learning results,

the processor reads out information regarding the learning results and the classification result equivalency determination condition from the learning information storage unit and the determination condition storage unit, and determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition,

and the processor assigns the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

[0429] Note 2

An identification information assigning device including:

at least one or more memories, and a processor connected to the at least one or more memories,

wherein the processor

receives classification results output from at least two or more learning results that attained classification ability through machine learning,

acquires information regarding learning results of classification results that are input, and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent,

determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition, and

assigns the same identification information to the classification results that were determined to be equivalent, to learning results that output the

classification results that were determined to be equivalent, or to both. [0430] Although the present invention has been described by way of preferred embodiments and application examples above, it is not necessary to include the configurations of all of the embodiments or application examples, and not only is implementation by appropriate combinations possible, but also the present invention is not necessarily limited to the above embodiments and application examples, and various modifications can be made within the scope of the technical idea.

Further, the following items define aspects of the present disclosure:

Item 1. An identification information assigning system comprising:

a learning information storage unit that stores information regarding at least two or more learning results that attained classification ability through machine learning;

a determination condition storage unit that stores a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a classification result input unit that receives classification results output from the at least two or more learning results;

a determination unit that determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit that assigns the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

Item 2. The identification information assigning system according to item 1,

wherein the identification information assigning unit assigns the same identification information to the learning results that output the classification results that were determined to be equivalent, and

the determination unit determines equivalency of the learning results based on a determination of whether the classification results are equivalent.

Item 3. The identification information assigning system according to item 1 or item 2,

wherein the classification result equivalency determination condition includes at least one of a degree of coincidence of classification subjects of the learning results and a degree of coincidence of classification results output from the learning results.

Item 4. The identification information assigning system according to any one of items 1 to 3,

wherein the identification information assigning system has a setting unit that sets a range in which the identification information is assigned, and

the identification information assigning unit assigns identification information to at least one of the learning results and the classification results in the set range in which identification information is assigned.

Item 5. The identification information assigning system according to any one of items 1 to 4,

wherein the identification information assigning system has a classification subject determination unit that determines equivalency of classification subjects classified by the learning results.

Item 6. The identification information assigning system according to any one of items 1 to 5,

wherein the identification information assigning system has a re-learning request unit that, in a case where classification results are not determined to be equivalent by the determination unit, requests re-learning with different learning data.

Item 7. The identification information assigning system according to any one of items 1 to 6,

wherein in a case where classification results are not determined to be equivalent, the determination unit changes the classification result equivalency determination condition that is used.

Item 8. The identification information assigning system according to any one of items 1 to 7, wherein the identification information assigning system has an input data changing unit that, in a case where classification results are not determined to be equivalent by the determination unit, changes input data that is input to the learning results and is used in classification result equivalency determination.

Item 9. The identification information assigning system according to any one of items 1 to 8,

wherein the identification information assigning system has a classification result combining unit that combines classification results output from the learning results and generates new classification results, and

the determination unit determines whether classification results generated by the classification result combining unit are equivalent.

Item 10. An identification information assigning device comprising:

a classification result input unit that receives classification results output from at least two or more learning results that attained classification ability through machine learning;

an acquisition unit that acquires information regarding learning results of classification results that are input, and a classification result equivalency determination condition that enables determining that classification results output from the at least two or more learning results are equivalent;

a determination unit that determines whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the learning results and the classification result equivalency determination condition; and

an identification information assigning unit that assigns the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

Item 11. An identification information assigning method comprising:

receiving classification results output from at least two or more learning results that attained classification ability through machine learning;

determining whether, out of classification results output from the at least two or more learning results that attained classification ability through machine learning, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition for determining that classification results output from the at least two or more learning results are equivalent; and

assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.

Item 12. A program that causes a computer to execute:

a process of receiving classification results output from at least two or more learning results that attained classification ability through machine learning;

a process of determining whether, out of classification results output from the at least two or more learning results, at least two classification results are equivalent, with use of information regarding the at least two or more learning results and a classification result equivalency determination condition for determining that classification results output from the at least two or more learning results are equivalent, the information regarding the at least two or more learning results and the classification result equivalency determination condition being stored in a storage unit; and

a process of assigning the same identification information to the classification results that were determined to be equivalent, to learning results that output the classification results that were determined to be equivalent, or to both.