

Title:
A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR AN INTERPRETABLE NEURAL NETWORK REPRESENTATION
Document Type and Number:
WIPO Patent Application WO/2019/180310
Kind Code:
A1
Abstract:
The invention relates to a method comprising obtaining a neural network; distilling the neural network into at least one inference model; based on the inference model, determining an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and outputting the interpretability measure. The invention also relates to an apparatus and a computer program product for implementing the method.

Inventors:
FAN LIXIN (FI)
Application Number:
PCT/FI2019/050209
Publication Date:
September 26, 2019
Filing Date:
March 12, 2019
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
G06N5/04; G06N3/02; G06N5/02; G16H50/20; G06N7/02
Foreign References:
US 6564198 B1, 2003-05-13
EP 3291146 A1, 2018-03-07
Other References:
CHE, Z. ET AL.: "Interpretable Deep Models for ICU Outcome Prediction", PROCEEDINGS OF THE AMIA ANNUAL SYMPOSIUM, November 2016 (2016-11-01), pages 371 - 380, XP055641002, Retrieved from the Internet [retrieved on 20190823]
FROSST, N. ET AL.: "Distilling a Neural Network Into a Soft Decision Tree", ARXIV, 27 November 2017 (2017-11-27), XP080840510, Retrieved from the Internet [retrieved on 20190822]
AMIT DHURANDHAR ET AL., TIP: TYPIFYING THE INTERPRETABILITY OF PROCEDURES, 9 June 2017 (2017-06-09)
QUANSHI ZHANG ET AL., VISUAL INTERPRETABILITY FOR DEEP LEARNING: A SURVEY, 2 February 2018 (2018-02-02)
LIXIN FAN, DEEP EPITOME FOR UNRAVELLING GENERALIZED HAMMING NETWORK: A FUZZY LOGIC INTERPRETATION OF DEEP LEARNING, 5 November 2017 (2017-11-05)
See also references of EP 3769270A4
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:

1. A method, comprising:

- obtaining a neural network;

- distilling the neural network into at least one inference model;

- based on the inference model, determining an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and

- outputting the interpretability measure.

2. An apparatus comprising at least

- means for obtaining a neural network;

- means for distilling the neural network into at least one inference model;

- means for determining, based on the inference model, an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and

- means for outputting the interpretability measure.

3. The apparatus according to claim 2, wherein the input data is an image.

4. The apparatus according to claim 2 or 3, wherein the apparatus is used for medical diagnosis.

5. The apparatus according to any of the claims 2 - 4, wherein said at least one distilled model comprises one or more decision trees extracted from the neural network.

6. The apparatus according to any of the claims 2 - 5, wherein said distilling is performed by using fuzzy logic.

7. The apparatus according to any of the claims 2 - 6, further comprising means for determining an interpretability measure for a distilling method used for distilling the neural network.

8. The apparatus according to any of claims 5 - 7, wherein the level of interpretability is determined based on at least one of:

a number of tests associated with the one or more decision trees;

at least one Hamming distance between decisions of the decision trees;

an average fuzziness of decisions of the one or more decision trees.

9. An apparatus comprising at least one processor and memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

- obtain a neural network;

- distil the neural network into at least one inference model;

- determine, based on the inference model, an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and

- output the interpretability measure.

10. A computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:

- obtain a neural network;

- distil the neural network into at least one inference model;

- determine, based on the inference model, an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and

- output the interpretability measure.

Description:
A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR AN INTERPRETABLE NEURAL NETWORK REPRESENTATION

Technical Field

The present solution generally relates to machine learning and data analytics. In particular, the solution relates to determining and signaling a level of interpretability of a neural network representation.

Background

Many practical applications rely on the availability of semantic information about the content of the media, such as images, videos, etc. Semantic information is represented by metadata which may express the type of scene, the occurrence of a specific action/activity, the presence of a specific object, etc. Such semantic information can be obtained by analyzing the media.

The analysis of media is a fundamental problem which has not yet been completely solved. This is especially true when considering the extraction of high-level semantics, such as object detection and recognition, scene classification (e.g., sport type classification), action/activity recognition, etc.

Recently, the development of various neural network techniques has enabled learning to recognize image content directly from the raw image data, whereas previous techniques relied on comparing the content against manually engineered image features.

Despite their achievements across a variety of tasks, neural networks have been criticized for their black-box nature. Such black-box decision making is unacceptable in many use cases, such as medical diagnosis or autonomous driving, in which even rare mistakes can be costly or fatal.

Summary

Now there has been invented an improved method and technical equipment implementing the method, for providing an interpretable neural network encapsulation and explainable logic of inferences involved in the decision making. Various aspects of the invention include a method, an apparatus, and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments of the invention are disclosed in the dependent claims.

According to a first aspect, there is provided a method comprising obtaining a neural network; distilling the neural network into at least one inference model; based on the inference model, determining an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and outputting the interpretability measure.

According to a second aspect, there is provided an apparatus comprising means for obtaining a neural network; means for distilling the neural network into at least one inference model; based on the inference model, means for determining an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and means for outputting the interpretability measure.

According to a third aspect, there is provided an apparatus comprising at least one processor and memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: obtain a neural network; distil the neural network into at least one inference model; determine, based on the inference model, an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and output the interpretability measure.

According to a fourth aspect, there is provided a computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to obtain a neural network; distil the neural network into at least one inference model; determine, based on the inference model, an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model; and output the interpretability measure.

According to an embodiment, the input data is an image.

According to an embodiment, the apparatus is used for medical diagnosis.

According to an embodiment, said at least one distilled model comprises one or more decision trees extracted from the neural network.

According to an embodiment, said distilling is performed by using fuzzy logic.

According to an embodiment, an interpretability measure is determined for a distilling method used for distilling the neural network.

According to an embodiment, the level of interpretability is determined based on at least one of: a number of tests associated with the one or more decision trees; at least one Hamming distance between decisions of the decision trees; an average fuzziness of decisions of the one or more decision trees.

Description of the Drawings

In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which

Fig. 1 shows a computer system according to an embodiment suitable to be used in data processing;

Fig. 2 shows an example of a Convolutional Neural Network (CNN);

Fig. 3 shows an example of a Recurrent Neural Network (RNN);

Figs. 4a-d show examples of 64 channels of merged neuron weights of a Generalized Hamming Network;

Fig. 5 shows an example of an encapsulation of interpretable neural network representation;

Fig. 6 illustrates a performance-interpretability trade-off curve with respect to that of an ideal model;

Fig. 7 is a flowchart illustrating a method according to an embodiment.

Description of Example Embodiments

In the following, several embodiments of the invention will be described in the context of machine learning and data analytics. Figure 1 shows a computer system suitable to be used in data processing, for example in machine learning according to an embodiment. The generalized structure of the computer system will be explained in accordance with the functional blocks of the system. Several functionalities can be carried out with a single physical device, e.g. all calculation procedures can be performed in a single processor if desired. A data processing system of an apparatus according to an example of Fig. 1 comprises a main processing unit 100, a memory 102, a storage device 104, an input device 106, an output device 108, and a graphics subsystem 110, which are all connected to each other via a data bus 112.

The main processing unit 100 is a conventional processing unit arranged to process data within the data processing system. The main processing unit 100 may comprise or be implemented as one or more processors or processor circuitry. The memory 102, the storage device 104, the input device 106, and the output device 108 may include conventional components as recognized by those skilled in the art. The memory 102 and the storage device 104 store data in the data processing system 100. Computer program code resides in the memory 102 for implementing, for example, a machine learning process. The input device 106 inputs data into the system, while the output device 108 receives data from the data processing system and forwards the data, for example to a display. The data bus 112 is a conventional data bus and, while shown as a single line, it may be any combination of the following: a processor bus, a PCI bus, a graphical bus, an ISA bus. Accordingly, a skilled person readily recognizes that the apparatus may be any data processing device, such as a computer device, a personal computer, a server computer, a mobile phone, a smart phone or an Internet access device, for example an Internet tablet computer.

It needs to be understood that different embodiments allow different parts to be carried out in different elements. For example, various processes of the computer system may be carried out in one or more processing devices; for example, entirely in one computer device, or in one server device or across multiple user devices. The elements of machine learning process may be implemented as a software component residing on one device or distributed across several devices, as mentioned above, for example so that the devices form a so-called cloud.

Data may be analyzed by deep learning. Deep learning is a sub-field of machine learning which has emerged in recent years. Deep learning may involve learning of multiple layers of nonlinear processing units, in a supervised, unsupervised, or semi-supervised manner. These layers form a hierarchy of layers. Each learned layer extracts feature representations from the input data. Features from lower layers represent low-level semantics (i.e. less abstract concepts, such as edges and texture), whereas higher layers represent higher-level semantics (i.e. more abstract concepts, such as scene class). Unsupervised learning applications typically include pattern analysis and representation (i.e. feature) learning, whereas supervised learning applications may include classification of image objects (in the case of visual data).

Deep learning techniques may be used e.g. for recognizing and detecting objects in images or videos with great accuracy, outperforming previous methods. The fundamental difference of deep learning image recognition techniques compared to previous methods is that they learn to recognize image objects directly from the raw data, whereas previous techniques are based on recognizing the image objects from hand-engineered features (e.g. SIFT features).

During the training stage, deep learning techniques build hierarchical computation layers which extract features of an increasingly abstract level. Thus, at least the initial layers of an artificial neural network represent a feature extractor. An example of a feature extractor in deep learning techniques is included in the Convolutional Neural Network (CNN), shown in Fig. 2. A CNN is composed of one or more convolutional layers, fully connected layers, and a classification layer on top. CNNs are easier to train than other deep neural networks and have fewer parameters to be estimated. Therefore, CNNs are a highly attractive architecture to use, especially in image and speech applications.

In the example of Fig. 2, the input to a CNN is an image, but any other data could be used as well. Each layer of a CNN represents a certain abstraction (or semantic) level, and the CNN extracts multiple feature maps. A feature map may for example comprise a dense matrix of real numbers representing values of the extracted features. The CNN in Fig. 2 has only three feature (or abstraction, or semantic) layers C1, C2, C3 for the sake of simplicity, but CNNs may have many more convolution layers.

The first convolution layer C1 of the CNN consists of extracting 4 feature maps from the first layer (i.e. from the input image). These maps may represent low-level features found in the input image, such as edges and corners. The second convolution layer C2 of the CNN, consisting of extracting 6 feature maps from the previous layer, increases the semantic level of the extracted features. Similarly, the third convolution layer C3 may represent more abstract concepts found in images, such as combinations of edges and corners, shapes, etc. The last layer of the CNN, referred to as the fully connected Multi-Layer Perceptron (MLP), may include one or more fully connected (i.e. dense) layers and a final classification layer. The MLP uses the feature maps from the last convolution layer in order to predict (recognize), for example, the object class. For example, it may predict that the object in the image is a house.
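
As a purely illustrative sketch of the layered structure described above, the following Python code (using PyTorch) builds a small CNN with three convolution layers feeding a fully connected classifier. The channel count of the third layer, the input size and the number of output classes are assumptions made for the example; only the 4 and 6 feature maps of C1 and C2 come from the text.

```python
# Minimal sketch of the CNN structure described above (PyTorch).
# The third-layer channel count (8), the 32x32 RGB input and the 10-class
# output are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 4, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # C1: 4 feature maps
            nn.Conv2d(4, 6, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # C2: 6 feature maps
            nn.Conv2d(6, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # C3: more abstract features
        )
        self.classifier = nn.Sequential(                                            # fully connected MLP head
            nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one 32x32 RGB image produces one vector of class logits.
logits = SmallCNN()(torch.randn(1, 3, 32, 32))
```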

Deep learning is a field which studies artificial neural networks (ANN), also referred to as neural networks (NN). A neural network is a computation graph representation, usually made of several layers of successive computation. Each layer is made of units or neurons performing an elemental/basic computation.

The goal of a neural network is to transform the input data into a more useful output. One example is classification, where input data is classified into one of N possible classes (e.g., classifying if an image contains a cat or a dog). Another example is regression, where input data is transformed into a Real number (e.g. determining the music beat of a song).

The power of neural networks comes from the internal representation which is built inside the layers. This representation is distributed among many units and is hierarchical, where complex concepts build on top of simple concepts. A neural network has two main modes of operation: a training phase and a testing phase. The training phase is the development phase, where the network learns to perform the final task. Learning consists of iteratively updating the weights or connections between units. The testing phase is the phase in which the network actually performs the task. Learning can be performed in several ways. The main ones are supervised, unsupervised, and reinforcement learning. In supervised training, the model is provided with input-output pairs, where the output is usually a label. In unsupervised training, the network is provided only with input data (and also with output raw data in the case of self-supervised training). In reinforcement learning, the supervision is sparser and less precise; instead of input-output pairs, the network gets input data and, sometimes, delayed rewards in the form of scores (e.g., -1, 0, or +1).

Another example of a neural network is a Recurrent Neural Network (RNN), where the hidden representation (hidden state h) is updated based not only on the current input but also on the hidden representations obtained from past inputs. In other words, RNNs work by recurrently (iteratively) looking at the input at each time step t and building an internal representation of the whole sequence so far. This internal representation is a "summary" and can be thought of as a "memory". One of the RNN types is the Long Short-Term Memory (LSTM) network, which uses special gating mechanisms that help train RNNs more effectively. Figure 3 illustrates an encoder-decoder model with an additional RNN in the form of an LSTM 310 for modeling the temporal aspect in the data. As shown in Figure 3, an input at time t is given to the system 300, and the data at time t+1 is obtained as an output.
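
The following short sketch illustrates how an LSTM accumulates a hidden "summary" of a sequence, as described above. It again uses PyTorch; the input and hidden dimensions are arbitrary illustrative choices, not taken from the figure.

```python
# Sketch: an LSTM summarizing a 10-step sequence into a hidden state (PyTorch).
# Input/hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
x = torch.randn(1, 10, 16)        # one sequence of 10 time steps
outputs, (h_n, c_n) = lstm(x)     # h_n is the "summary"/"memory" of the sequence so far
print(h_n.shape)                  # torch.Size([1, 1, 32])
```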

The interpretability of a neural network representation (NNR) makes it possible to explain how inference results are obtained by the neural network in question. The interpretability of the NNR is mandatory e.g. in medical diagnosis applications, for doctors to explain the rationale of decision-making to patients.

Despite their record-breaking achievements on a variety of artificial intelligence problems, such as image classification, speech recognition and game playing, neural networks have long been criticized for their black-box nature. Such black-box decision-making is unacceptable in many use cases, e.g. in medical diagnosis, where it is imperative by law for doctors to explain the rationale of decision-making to patients and their families.

The present embodiments are targeted to an interpretable neural network encapsulation. The embodiments utilize the technique of distillation, where a simple student decision-making model is learnt through distillation from a complex teacher model. In the following, three examples of known distillation methods are given.

In one of the distillation methods, a trained neural network is first used to produce the class probabilities, which are subsequently used as "soft targets" for training the soft decision tree (SDT) model. SDT models trained in this way are able to achieve higher performance and generalize better than decision tree models directly trained without exploiting the "soft targets" distilled from the original neural network model. On the other hand, the hierarchical decisions employed in an SDT model are much easier to explain, as compared with the hierarchical features learned in the original neural network.
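
Soft targets of this kind are commonly produced by a temperature-scaled softmax over the teacher network's logits. The sketch below is a generic illustration of that step, not the exact procedure of the cited work; the temperature value and the example logits are assumptions.

```python
# Sketch: producing "soft targets" from a trained teacher network for distillation.
# The temperature value and the example logits are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_targets(teacher_logits: torch.Tensor, temperature: float = 2.0) -> torch.Tensor:
    # A higher temperature yields softer class probabilities, exposing how the
    # teacher relates classes to one another.
    return F.softmax(teacher_logits / temperature, dim=-1)

# The resulting distribution is then used as the training target for the student
# (e.g. a soft decision tree) instead of, or in addition to, the hard labels.
targets = soft_targets(torch.tensor([[4.0, 1.0, 0.5]]))
```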

An SDT model may comprise for example 15 inner nodes at 4 different depths. At each inner node with a learned filter w_i and a bias b_i, the probability of taking the rightmost branch is given by p_i(x) = σ(x · w_i + b_i), where x is the input pattern to the model and σ is the sigmoid logistic function. Setting off from the top-level root node, input patterns branch to the next-level inner nodes according to the above-mentioned branching condition, and ultimately reach the leaf classification nodes annotated with the most likely labels. This type of decision tree classification makes it easy for human users to verify all calculation steps and reach the final classification by themselves. Another way to exploit a trained neural network is to make it classify a large amount of unlabeled data, which is subsequently used to improve the training of SDT models. It is also possible to generate synthetic unlabeled data with the trained neural network.
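
The branching rule p_i(x) = σ(x · w_i + b_i) can be traced step by step, which is what makes the decision verifiable. Below is a minimal sketch of such inference for a tree with 15 inner nodes (depth 4, 16 leaves); the random weights, the input dimension and the greedy "follow the most likely branch" routing are illustrative assumptions, since in practice the parameters are learned from the teacher's soft targets.

```python
# Sketch of inference in a soft decision tree with 15 inner nodes (depth 4, 16 leaves),
# following p_i(x) = sigmoid(x . w_i + b_i). Weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
D, N_INNER = 8, 15
W = rng.normal(size=(N_INNER, D))   # one filter w_i per inner node
b = rng.normal(size=N_INNER)        # one bias b_i per inner node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def route(x):
    """Follow the most likely branch from the root to a leaf, recording each test."""
    node, path = 0, []
    while node < N_INNER:
        p_right = sigmoid(x @ W[node] + b[node])
        go_right = p_right > 0.5
        path.append((node, float(p_right), "right" if go_right else "left"))
        node = 2 * node + (2 if go_right else 1)   # heap-style child indexing
    return node - N_INNER, path                    # leaf index and the verifiable decision path

leaf, path = route(rng.normal(size=D))
```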

In another distillation method, deep models are optimized for human simulatability via a new model-complexity penalty function ("tree regularization"). The tree regularization favors models whose decision boundaries can be well approximated by small decision trees, thus penalizing models that require many calculations to simulate predictions. In terms of implementation, the true average-path-length cost function Q(W) is replaced by an estimate of the average path length, H(W), which is differentiable with respect to the neural network parameters on the one hand, and provides a reliable estimate on the other hand. This implementation step turns out to be a key technical contribution of the method.
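
The average-path-length quantity that tree regularization penalizes can be estimated by fitting a small decision tree to the model's own predictions and averaging the number of tests along its decision paths. The sketch below does this with scikit-learn; the data, the stand-in predictions and the tree depth are assumptions, and the cited method additionally replaces this quantity with a differentiable surrogate, which is not shown here.

```python
# Sketch: estimating the average decision-path length of a small tree fitted to a
# model's predictions (the quantity penalized by tree regularization).
# Data, stand-in predictions and tree depth are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(X: np.ndarray, y_pred: np.ndarray, max_depth: int = 5) -> float:
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_pred)
    node_indicator = tree.decision_path(X)                # (n_samples, n_nodes) indicator
    return float(node_indicator.sum(axis=1).mean() - 1)   # subtract 1 to count tests, not nodes

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y_pred = (X[:, 0] + X[:, 1] > 0).astype(int)              # stand-in for a network's predictions
print(average_path_length(X, y_pred))
```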

In yet another distillation method, a decision tree is learned to explain the logic of each prediction of a pre-trained convolutional neural network (CNN). The decision tree learns to identify, qualitatively, which object parts in the high convolution layers of the CNN contribute to the prediction in question. By doing so, a parse tree is recursively inferred in a top-down manner for a given test image. Moreover, the (relative) contributions of each decision tree node can be exactly computed using the optimized weight parameters of the selected filters.

Examples of evaluation metrics for CNN model interpretability include the following: 1) the compatibility between a convolution filter f and a specific semantic concept (or part) may be quantified by the intersection-over-union score; the higher the score, the more relevant the filter is to the semantic concept (and thus the more interpretable), and vice versa; 2) location stability may account for the interpretability of the underlying CNN model; the higher the stability, the more interpretable the model is. Implementation-wise, the location stability can be measured by the average deviation of the relative location of detected object parts w.r.t. some reference landmarks in test images.
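
The intersection-over-union score in item 1) can be computed between a binarized filter activation map and a ground-truth concept mask. The sketch below assumes binary masks of equal size; the masks themselves are made-up examples.

```python
# Sketch: intersection-over-union (IoU) between a binarized filter activation map
# and a ground-truth semantic concept mask. The masks are illustrative boolean arrays.
import numpy as np

def iou(filter_mask: np.ndarray, concept_mask: np.ndarray) -> float:
    intersection = np.logical_and(filter_mask, concept_mask).sum()
    union = np.logical_or(filter_mask, concept_mask).sum()
    return float(intersection / union) if union > 0 else 0.0

# The higher the IoU, the more the filter aligns with the concept (more interpretable).
a = np.zeros((8, 8), bool)
a[2:6, 2:6] = True
b = np.zeros((8, 8), bool)
b[3:7, 3:7] = True
print(iou(a, b))
```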

Another method for obtaining a sub-optimal neural network with sufficient interpretability is to adopt the Generalized Hamming Network (GHN), in which the interplay between neuron inputs X and neuron weights W is explained in the light of fuzzy logic. More specifically, each neuron output quantifies the degree of fuzzy equivalence between the inputs X and the neuron weights W. In other words, each neuron evaluates the fuzzy truth value of the statement "x ↔ w", where "↔" denotes a fuzzy equivalence relation. When multiple network layers are stacked together, neighboring neuron outputs from the previous layer are integrated to form composite statements, e.g. "(x^1_1 ↔ w^1_1, ..., x^1_i ↔ w^1_i) ↔ w^2_j", where the superscripts correspond to two adjacent layers. Stacked layers thus form more complex, more powerful statements as the layer depth increases. The Generalized Hamming Network constructed in this way is able to provide an insightful and rigorous explanation of how network filters at different layers work jointly to infer a decision for any input image. Moreover, a trained deep GHN can be converted into a simplified network with merely one layer, for the sake of high interpretability and reduced algorithmic and computational complexity.
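
One way to read a neuron as such a fuzzy equivalence test, assuming inputs and weights are scaled to [0, 1], is the element-wise fuzzy XNOR 1 − (a + b − 2ab). This is an illustrative reading for the sketch below, not necessarily the exact formulation used in the cited GHN work.

```python
# Sketch: reading a neuron as a fuzzy equivalence test between inputs x and weights w.
# Assumes x and w are scaled to [0, 1]; the element-wise fuzzy XNOR 1 - (a + b - 2ab)
# is an illustrative choice, not necessarily the exact GHN definition.
import numpy as np

def fuzzy_equivalence(x: np.ndarray, w: np.ndarray) -> float:
    generalized_hamming = x + w - 2.0 * x * w          # element-wise "difference" in [0, 1]
    return float(np.mean(1.0 - generalized_hamming))   # 1.0 means x and w fully agree

x = np.array([0.9, 0.1, 0.8])
w = np.array([1.0, 0.0, 0.7])
print(fuzzy_equivalence(x, w))   # close to 1.0: x largely matches the filter w
```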

Figures 4a-4d illustrate 64 channels of merged neuron weights, up to layers 1, 2, 3 and 4 respectively, of a Generalized Hamming Network trained for the CIFAR10 classification problem. Figures 4a-4d thus show merged neural network filters for up to four different layers. Note that the merged weights are independent of the input test images, and that the merging of weights is purely analytic: no optimization or learning steps are needed. Each merged neuron weight represents a different type of salient image feature, from small to large scale, including e.g. dots or corner points (Fig. 4a), oriented edgelets (Fig. 4b), and textons at different scales (Figs. 4c-4d). Note that for color images, textons may have associated color information not shown in Figs. 4a-d.

The present embodiments are targeted to an encapsulation of an interpretable neural network representation, as illustrated in Figure 5. The encapsulation method comprises the following:

- A top-level neural network model 510 allows the highest-performance decision-making (e.g. the highest accuracy) among a set of neural networks. Such an NN model may be complex, and thus its inference logic may not be easy to explain, i.e. its interpretability measure is low.

- In comparison with the model 510, there exists a sub-optimal model 520 which is distilled (or learned) from the model 510. Such a sub-optimal (i.e. distilled) model may be simpler and easier to explain, but nevertheless has sub-optimal performance as compared with the original model.

- Depending on the distillation method, the inference logic of the distilled model can be explained in terms of e.g. "IF-THEN" propositional logic, fuzzy logic, or a combination of simple inference rules.

- Optionally, there may exist multiple sub-optimal models 530, distilled (or learned) from the model 510 using different distillation methods.

- Depending on the distillation method, a numeric interpretability measure can be concretely defined for each distilled model. Such interpretability measures can be used to provide a quantitative comparison between different distilled models.

In the present embodiments, it is assumed that a neural network 510 is trained for a given learning task, e.g. image-based medical diagnosis, or anomaly detection in an autonomous driving scenario. As mentioned, such an optimal neural network 510 is often complex, e.g. consists of thousands of convolutional layers, and is hard to explain. The encapsulation of neural network representations according to the embodiments provides a more explainable justification for decision-making in the following aspects:

1. Distillation methods and associated interpretability measures of neural network representations

For the original neural network 510, a number of parameters can be used to quantify the interpretability. However, the more parameters are used in the model, the less interpretable the model is. Therefore, the original neural network 510 is distilled into at least one distilled model 520, 530 according to any of the distillation methods discussed above. The distillation method is used to extract simple inference models 520, 530 from the optimal neural network model 510.

In particular, the above-mentioned distillation methods can be adopted to extract decision trees from the original neural network model 510. For any given data sample in an extracted decision tree, the decision made at a leaf node of the tree can be re-cast as a sequence of IF-THEN propositional logic, e.g. "if body_temperature > 39 °C AND if white_blood_cell_level > 9.0 then ...".
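
A decision path extracted this way can be re-cast mechanically as one IF-THEN rule by concatenating the tests along the path. The feature names, thresholds and conclusion in the sketch below are hypothetical examples echoing the text.

```python
# Sketch: re-casting one decision-tree path as an IF-THEN propositional rule.
# Feature names, thresholds and the conclusion are hypothetical examples.
def path_to_rule(tests, conclusion):
    """tests: list of (feature, operator, threshold) tuples along the path to a leaf."""
    condition = " AND ".join(f"{feat} {op} {thr}" for feat, op, thr in tests)
    return f"IF {condition} THEN {conclusion}"

rule = path_to_rule(
    [("body_temperature", ">", "39 degC"), ("white_blood_cell_level", ">", 9.0)],
    "positive_finding",
)
print(rule)
```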

Moreover, the interpretability of the decision tree can be concretely defined as inversely proportional to the number of tests involved, i.e.

interpretability = f(1 / #tests),

where f(·) can be an arbitrary monotonic function [0,1] → [0,1] and #tests ≥ 1 is assumed.

For a decision forest (i.e. an ensemble of decision trees), the above-mentioned propositional logic inference and interpretability measure are also applicable. Nevertheless, the degree of consensus between different decision tree leaves can be taken into account, for example by

interpretability_forest = consensus × (1/T) Σ_i I_i,

where I_i is the interpretability of tree i, T is the number of trees, and the consensus is defined as the (average) consistency of decisions made by different trees for each data sample, e.g. by the averaged Hamming distance between decisions made by different trees.
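
The following sketch computes both measures as reconstructed above: tree interpretability as f(1/#tests) with a monotonic f on [0,1], and forest interpretability as the mean tree interpretability weighted by a consensus term derived from the averaged pairwise Hamming distance between the trees' decisions. The identity choice of f and the exact consensus weighting are assumptions.

```python
# Sketch: interpretability of a decision tree and of a forest, as reconstructed from
# the text. The identity f and the consensus weighting are illustrative assumptions.
import numpy as np

def tree_interpretability(n_tests: int, f=lambda v: v) -> float:
    assert n_tests >= 1
    return f(1.0 / n_tests)

def forest_interpretability(per_tree_tests, decisions: np.ndarray) -> float:
    """decisions: (n_trees, n_samples) array of class labels, one row per tree."""
    mean_interp = np.mean([tree_interpretability(t) for t in per_tree_tests])
    n_trees = decisions.shape[0]
    # Average pairwise Hamming distance = fraction of disagreeing decisions.
    dists = [np.mean(decisions[i] != decisions[j])
             for i in range(n_trees) for j in range(i + 1, n_trees)]
    consensus = 1.0 - float(np.mean(dists))   # high consensus -> trees mostly agree
    return consensus * float(mean_interp)

decisions = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 1, 0]])
print(forest_interpretability([3, 4, 3], decisions))
```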

According to another embodiment, the distillation method being used may adopt fuzzy logic; the inference of such a distilled model can be explained as a sequence of fuzzy logic predicates, e.g. "if body_temperature > 39 °C with degree 0.85 fuzzy_AND if white_blood_cell_level > 9.0 with degree 0.9 then ...". It is to be noted that the fuzziness of each fuzzy predicate is often represented as a real number in [0.0, 1.0], in which 0.0 means the predicate is confirmatively TRUE or FALSE, and 1.0 means completely undetermined. More specifically, the fuzziness of a single predicate may be defined as fuzziness = 1.0 − |degree_of_truth_value − 0.5| × 2.0.

The interpretability of the fuzzy logic model can then be defined by the number of tests and the average fuzziness over all test predicates, e.g.

interpretability = (1.0 − average fuzziness) / #tests.
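
The sketch below computes the fuzziness of individual predicates exactly as defined above, and then a fuzzy-logic interpretability value. The aggregation (1 − average fuzziness) / #tests mirrors the reconstructed formula and is an assumption about the exact combination.

```python
# Sketch: fuzziness of fuzzy predicates and the resulting interpretability, following
# fuzziness = 1.0 - |degree_of_truth - 0.5| * 2.0. The aggregation into a single
# interpretability value is an assumption.
def fuzziness(degree_of_truth: float) -> float:
    # degree 1.0 or 0.0 -> fuzziness 0.0 (confirmative); degree 0.5 -> fuzziness 1.0 (undetermined)
    return 1.0 - abs(degree_of_truth - 0.5) * 2.0

def fuzzy_interpretability(degrees) -> float:
    n_tests = len(degrees)
    avg_fuzziness = sum(fuzziness(d) for d in degrees) / n_tests
    return (1.0 - avg_fuzziness) / n_tests

# Two predicates with degrees of truth 0.85 and 0.9, as in the example above.
print(fuzzy_interpretability([0.85, 0.9]))
```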

Other forms of interpretability can be defined accordingly for different distillation methods. It is thus appreciated that the overall encapsulation of neural network representations according to the embodiments is applicable regardless of the nature of the underlying distillation methods/models.

Still another measure, which takes into account both interpretability and performance, is illustrated below. Depending on the task, the performance measure may be classification/recognition accuracy, regression accuracy, or even camera pose estimation accuracy. Despite the various types of performance measures, the proposed measure is generic and aims to evaluate each model in terms of its performance-interpretability trade-off curve. Often a distilled model sacrifices performance for better interpretability and vice versa (see the curve marked "Model A" in Figure 6). The area under the curve (denoted as S_A) is thus a good summarization of the model's performance-interpretability trade-off. In particular, one may imagine that there exists an ideal model whose performance and interpretability are both maximized (marked as the ideal model in Figure 6), and the area of the corresponding rectangle is denoted as S_B. The ratio S_A / S_B is thus between [0, 1] and can be used to quantify the gap of the performance-interpretability curve from that of an ideal model. Figure 6 illustrates a performance-interpretability trade-off curve with respect to that of an ideal model.
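
The area ratio S_A / S_B can be computed numerically from sampled points of the trade-off curve, for instance with a trapezoidal rule. The curve points below are illustrative assumptions for a "Model A"-style curve, and both axes are assumed to be normalized to [0, 1] so that the ideal area S_B equals 1.

```python
# Sketch: quantifying the gap between a model's performance-interpretability
# trade-off curve and an ideal model as the area ratio S_A / S_B.
# The sample curve points are illustrative assumptions.
import numpy as np

def trade_off_gap(interpretability: np.ndarray, performance: np.ndarray) -> float:
    """interpretability: increasing x-values in [0, 1]; performance: y-values in [0, 1]."""
    s_a = np.trapz(performance, interpretability)   # area under the model's curve
    s_b = 1.0 * 1.0                                 # ideal model: both axes maximal
    return float(s_a / s_b)                         # in [0, 1]; 1.0 matches the ideal

# A model that sacrifices performance as interpretability increases.
interp = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
perf = np.array([0.95, 0.90, 0.80, 0.60, 0.40])
print(trade_off_gap(interp, perf))
```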

2. Encapsulation of signaling of interpretable neural network representations

For each neural network model 510, 520, 530 illustrated in Figure 5, corresponding descriptions include, but are not limited to, at least one of the following attributes:

• type (CNN, RNN, tree, forest, etc.);

• structure parameters (#layers, #neurons, #weights, #trees, model-size (bytes) etc.);

• training parameters (learning rate, #epoch, batch-size, fine-tuned from public models, etc.);

• performance measure (accuracy, speed, etc.);

• interpretability measure (as defined above).

In addition to the above-mentioned attributes, the existence and/or type of relationship between different models may also be encapsulated in the representations, such as the following (a minimal descriptor sketch is given after the list):

• Model 520 is learned/distilled from model 510

• Model X is fine-tuned from model 520

• Model Y is quantized or binarized from model 520

• etc.
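
One possible encapsulation of the attribute and relationship descriptors listed above is a simple structured record, as sketched below. The field names and the layout are illustrative assumptions, not a normative signaling format.

```python
# Sketch: one possible encapsulation of the descriptive attributes and inter-model
# relationships listed above. Field names are illustrative, not a defined syntax.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelDescriptor:
    model_id: str
    model_type: str                                    # "CNN", "RNN", "tree", "forest", ...
    structure: dict = field(default_factory=dict)      # #layers, #neurons, model size, ...
    training: dict = field(default_factory=dict)       # learning rate, #epochs, batch size, ...
    performance: dict = field(default_factory=dict)    # accuracy, speed, ...
    interpretability: Optional[float] = None           # interpretability measure as defined above
    distilled_from: Optional[str] = None                # relationships to other models
    fine_tuned_from: Optional[str] = None
    quantized_from: Optional[str] = None

teacher = ModelDescriptor("510", "CNN", {"layers": 1000},
                          performance={"accuracy": 0.95}, interpretability=0.05)
student = ModelDescriptor("520", "tree", {"trees": 1, "tests": 4},
                          performance={"accuracy": 0.88}, interpretability=0.25,
                          distilled_from="510")
```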

3. Application of signaling/encapsulation of interpretable neural network representations

When a neural network is used to make decisions for given tasks, e.g. medical diagnosis, users (for example a doctor or a patient in medical applications) may make a request about the target performance and interpretability levels of the neural network. Neural network representations that fulfil the query criteria will be searched and returned. Users may also request a particular type of representation (e.g. a decision tree) with a level of interpretability above a certain threshold.

In a distributed neural network use case, remote parties may request to retrieve/download a neural network representation that fulfils certain criteria, e.g. with the model size below a certain threshold and interpretability above a certain level. Such requests may be used e.g. for mobile healthcare applications, with which people make medical diagnoses for family members or themselves.

In one embodiment, a client application or device may send a request to a local or remote server for obtaining one or more neural networks corresponding to one or more interpretability levels. The request may comprise an indication of the type of interpretability measure and/or one or more criteria associated with an interpretability measure, for example a threshold for a level of interpretability and/or an indication that the level of interpretability is to be determined based on a number of tests associated with a decision tree. A server may respond by sending a neural network or a set of encapsulated neural networks. The response communication may include signaling associated with the provided neural network(s), as discussed above.

In one embodiment, a server may determine that it does not possess a neural network fulfilling the requested interpretability criteria and, in response, initiate generation of one or more new distilled neural networks. Having a particular target interpretability level set by the request from the client, the server may adjust the distillation method in order to achieve the target interpretability. For example, the distillation process may be constrained to a smaller number of nodes or filters in a soft decision tree, or the fuzziness of the logic predicates may be limited. After generating the new neural network, the server may communicate the new neural network along with the associated interpretability level to the client.
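
The request/response flow described above can be sketched as a simple server-side filter over stored model descriptors. The request fields, the in-memory "store" and the fallback action are illustrative assumptions, not a defined protocol.

```python
# Sketch: a client request for a neural network representation that satisfies
# interpretability and size criteria, served by filtering stored descriptors.
# Request fields and the in-memory store are illustrative assumptions.
def handle_request(store, request):
    """store: list of descriptor dicts; request: criteria sent by a client."""
    matches = [
        m for m in store
        if m["interpretability"] >= request["min_interpretability"]
        and m["model_size_bytes"] <= request["max_model_size_bytes"]
        and (request.get("type") is None or m["type"] == request["type"])
    ]
    # If nothing matches, a server might instead trigger a new, more constrained
    # distillation targeting the requested interpretability level.
    return matches or {"action": "distill_new_model",
                       "target_interpretability": request["min_interpretability"]}

store = [{"id": "520", "type": "tree", "interpretability": 0.3, "model_size_bytes": 20_000}]
print(handle_request(store, {"min_interpretability": 0.2,
                             "max_model_size_bytes": 50_000, "type": "tree"}))
```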

Figure 7 is a flowchart illustrating a method according to an embodiment. A method comprises obtaining a neural network 701; distilling the neural network into at least one inference model 702; based on the inference model, determining an interpretability measure, said interpretability measure indicating a level of interpretability of the inference model 703; and outputting the interpretability measure 704.

An apparatus according to an embodiment comprises means for obtaining a neural network; means for distilling the neural network into at least one inference model; means for determining an interpretability measure based on the inference model, said interpretability measure indicating a level of interpretability of the inference model; and means for outputting the interpretability measure. The means comprise a processor, a memory, and computer program code residing in the memory, wherein the processor may further comprise processor circuitry.

The various embodiments may provide advantages. For example, the interpretable neural network encapsulation according to the embodiments has high accuracy in decision-making for given tasks and provides explainable logic of the inferences involved in the decision-making. The encapsulation also allows numeric interpretability measures to be used for specific decision-making models.

The various embodiments of the invention can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device such as a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with one another. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.

Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.