

Title:
DEEP SELF-ATTENTION MATRIX MODEL REFINEMENT
Document Type and Number:
WIPO Patent Application WO/2023/076008
Kind Code:
A1
Abstract:
A computing system scores importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence. The neural network model includes multiple encoding layers. Self-attention matrices of the neural network model are received into an importance evaluator. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence. Each self-attention matrix corresponds to one of the multiple encoding layers. The importance evaluator generates an importance score for one or more of the tokens in the input token sequence. Each importance score is based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model.

Inventors:
BARKAN OREN (US)
HAUON EDAN (US)
KATZ ORI (US)
CACIULARU AVI (US)
MALKIEL ITZIK (US)
ARMSTRONG OMRI (US)
HERTZ AMIR (US)
KOENIGSTEIN NOAM (US)
NICE NIR (US)
Application Number:
PCT/US2022/045829
Publication Date:
May 04, 2023
Filing Date:
October 06, 2022
Assignee:
MICROSOFT TECHNOLOGY LICENSING LLC (US)
International Classes:
G06N3/0455; G06N3/044
Other References:
YARU HAO ET AL: "Self-Attention Attribution: Interpreting Information Interactions Inside Transformer", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 25 February 2021 (2021-02-25), pages 1 - 9, XP081884035
BLAZ SKRLJ ET AL: "Feature Importance Estimation with Self-Attention Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 11 February 2020 (2020-02-11), pages 1 - 8, XP081595486
Attorney, Agent or Firm:
CHATTERJEE, Aaron C. et al. (US)
Claims:

1. A computing processor-based method of scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence, wherein the neural network model includes multiple encoding layers, the computing processor-based method comprising: receiving self-attention matrices of the neural network model into an importance evaluator, the self-attention matrices being generated by the neural network model while computing the one or more prediction scores based on the input token sequence, each self-attention matrix corresponding to one of the multiple encoding layers; and generating, in the importance evaluator, an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model, wherein each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

2. The computing processor-based method of claim 1, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

3. The computing processor-based method of claim 1, further comprising: receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

4. The computing processor-based method of claim 1, further comprising: receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

5. The computing processor-based method of claim 1, further comprising: receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

6. The computing processor-based method of claim 1, further comprising: receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

7. The computing processor-based method of claim 1, further comprising: refining the neural network model based on the importance score for the one or more of the tokens in the input token sequence.

8. A computing system for scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence, wherein the neural network model includes multiple encoding layers, the computing system comprising: one or more hardware computing processors; a communication interface executable at least in part by the one or more hardware computing processors and configured to receive self-attention matrices of the neural network model, the self-attention matrices being generated by the neural network model while computing the one or more prediction scores based on the input token sequence, each self-attention matrix corresponding to one of the multiple encoding layers; and an importance evaluator executable at least in part by the one or more hardware computing processors and configured to generate an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model, wherein each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

9. The computing system of claim 8, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

10. The computing system of claim 8, wherein the communication interface is further configured to receive the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

11. The computing system of claim 8, wherein the communication interface is further configured to receive the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

12. The computing system of claim 8, wherein the communication interface is further configured to receive the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

13. The computing system of claim 8, wherein the communication interface is further configured to receive the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

14. The computing system of claim 8, further comprising: a refinement applicator executable by the one or more hardware computing processors and configured to refine the neural network model based on the importance score for the one or more of the tokens in the input token sequence.

15. One or more tangible processor-readable storage media embodied with instructions for executing on one or more processors and circuits of a computing device a process for scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence, wherein the neural network model includes multiple encoding layers, the process comprising: receiving self-attention matrices of the neural network model into an importance evaluator, the self-attention matrices being generated by the neural network model while computing the one or more prediction scores based on the input token sequence, each self-attention matrix corresponding to one of the multiple encoding layers; and generating, in the importance evaluator, an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model, wherein each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.


Description:
DEEP SELF-ATTENTION MATRIX MODEL REFINEMENT

Background

Deep contextualized language models can be applied to perform Natural Language Processing (NLP) tasks, such as question answering, co-reference resolution, and other NLP benchmarks. Such models provide an efficient framework for learning representations in a fully self-supervised manner from text corpora (e.g., by relying on co-occurrence statistics). Unlike traditional feature-based machine learning systems that assign and optimize weights for interpretable explicit features, a transformer-based architecture relies on a stack of multi-head self-attention layers composed of a large number (e.g., hundreds of millions) of parameters. Accordingly, this complexity makes it challenging to understand how individual elements of the input data (e.g., words, order, specific sequences, co-occurrence) impacted the results of the NLP tasks.

Summary

The described technology provides a computing system and method for scoring the importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence. The neural network model includes multiple encoding layers. Self-attention matrices of the neural network model are received into an importance evaluator. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence. Each self-attention matrix corresponds to one of the multiple encoding layers. The importance evaluator generates an importance score for one or more of the tokens in the input token sequence. Each importance score is based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model. Each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Other implementations are also described and recited herein.

Brief Descriptions of the Drawings

FIG. 1 illustrates an example machine learning system used in natural language processing (NLP) with an example importance evaluator providing importance scores for deep self-attention matrix model refinement.

FIG. 2 illustrates an example machine learning system used with deep self-attention matrix model refinement.

FIG. 3 illustrates an example importance evaluator providing importance scores for deep self-attention matrix model refinement relative to a neural network system.

FIG. 4 illustrates example operations providing importance scores for deep self-attention matrix model refinement relative to a neural network system.

FIG. 5 illustrates an example computing device for use in refining a neural network system.

Detailed Descriptions

At inference, transformer-based models compute pairwise interactions of a large number of resulting vector representations. In order to better understand how the model is working for a particular task, one can probe the model’s performance through evaluation of lower-level components in the model. The described technology provides a sophisticated improvement over past model evaluation techniques with interpretation and model refinement methods that probe the attention matrices of a transformer-based model. In some implementations, the interpretation and refinement methods include consideration of gradients of the attention matrices to improve the accuracy of the model evaluation. Using the described technology, the improvements over previous approaches may include, without limitation:

• evaluating model performance during inference, rather than during training;

• analyzing self-attention units from multiple layers of the model;

• operating in a self-supervised manner;

• analyzing a finetuned transformer model without training an additional extractor network; and

• yielding faithful rationales of a model’s performance independent of the NLP task performed.

By interpreting the model’s performance during inference, a system can refine the model, such as by supplementing/refining training data supplied to the model. For example, if a model incorrectly predicts that the sentence “I’d like to say something positive about this movie, but nothing comes to mind” is a positive movie review, it is likely that the model was heavily influenced by the term “positive” when generating the incorrect prediction; e.g., the importance of the term can be ranked high by an importance evaluator. Accordingly, a refinement applicator can supply similar training data (e.g., training data that includes various uses of the term “positive”) to reinforce the manner in which that term should be interpreted by the model.

FIG. 1 illustrates an example machine learning system 100 used in natural language processing (NLP) with an example importance evaluator 102 providing importance scores for deep self-attention matrix model refinement. In this NLP example, let T be a vocabulary of supported tokens. Let Ω be a set containing all sentences of length N that can be composed from T (shorter sentences are padded by a reserved token [PAD]), where each sentence starts and ends with the special tokens [CLS] (a sentence-level classification token, which is the first token of every input token sequence) and [SEP] (a separation token between sentences), respectively.

A neural network model 104 (e.g., a transformer-based neural network), such as a Bi-directional Encoder Representations from Transformers (BERT) model or other models based on self-attention (SA) units, is trained based on training data 106. It should be understood that, in some implementations, the training of the neural network model 104 may involve pre-training the neural network model 104 using one or more unsupervised tasks (e.g., a masked language model or MLM, a next sentence prediction or NSP) followed by fine tuning using labeled data for a specific downstream task (e.g., multiclass/binary classification, a regression task). In such a transformer, the number of classes/output dimensions is represented by n, which changes with respect to the specific downstream task at hand.

After training (and/or independent of training) of the neural network model 104, unlabeled input data 108 (e.g., defined as x) is input to the neural network model 104 to perform a processing task, such as an NLP task of determining whether the text of a movie review (e.g., the unlabeled input data 108) provides a positive or negative review of the movie. In one or more implementations, the unlabeled input data 108 is received as a sequence (e.g., sentence) of N tokens x = (x_i)_{i=1}^N ∈ Ω. Given such input, the machine learning system 100 implements a parametric function s: Ω → ℝ^n and outputs an n-dimensional vector of prediction scores 112 over x, defined as s_x = s(x). As the model processes each token (each position in the input token sequence), self-attention allows it to look at other positions in the input sequence for clues that can help lead to a better encoding for the current token. For example, when the model is processing one token sequence, self-attention allows the processing of each token in the sequence to look at other tokens to better understand which tokens contribute to the processing for the current token. More intuitively, “self-attention” references a characteristic that allows the token sequence processing to look through the token sequence’s constituent tokens to determine how to represent each token.

The importance evaluator 102 receives the prediction scores 112 and self-attention matrices 114 from the neural network model 104 to generate importance scores 110, wherein each importance score r_{x_i} indicates the importance of a token x_i in the unlabeled input data 108 to the prediction scores 112 generated by the neural network model 104.

Given the model-specific importance scores 110 associated with each token in the unlabeled input data 108 and the resulting prediction scores 112, a refinement applicator 116 provides refinement data 118 to the neural network model 104 to improve its performance. In one example, an incorrect prediction can be flagged such that the refinement applicator 116 can supply new training data to the neural network model 104 to improve its performance. The new training data can be collected around the terms of greater importance identified by the importance evaluator 102 and the importance scores 110. In other implementations, changes in the design of the neural network model and/or pre-filtering of unlabeled input data, as suggested by the importance scores 110 for certain tokens, may be appropriate for improving the performance of the model.

FIG. 2 illustrates an example machine learning system 200 used with deep self-attention matrix model refinement. The machine learning system 200 includes a four-encoder-layer neural network model 202 as an example, although the number of encoder layers may vary. Furthermore, the first three encoder layers starting from the left are referred to as a “backbone network,” and the rightmost encoder layer is referred to as a “prediction head.” Typically, the prediction head is devoted to a particular processing task (e.g., multiclass/binary classification, a regression task), and a given neural network model may include different heads for different tasks. In FIG. 2, it is assumed that the four-encoder-layer neural network model 202 has already been at least partially trained. Unlabeled input data 204 (e.g., one or more sentences with multiple tokens) is input to the four-encoder-layer neural network model 202 and propagates through the layers of the model.

The parametric function s is implemented as a cascade of L encoder layers (L = 4 in FIG. 2). Given a sentence x ∈ Ω, each token x_i in the unlabeled input data 204 is mapped to a d-dimensional vector (embedding) to form a matrix U_x^0 ∈ ℝ^{d×N}. In practice, this embedding is a summation of the token, positional, and segment embeddings, although other embedding techniques may be employed. Then, U_x^0 is passed through a stack of L encoder layers. The l-th encoder layer (1 ≤ l ≤ L) receives the intermediate representations U_x^{l−1} ∈ ℝ^{d×N} (produced by the (l−1)-th layer) and outputs the new representations U_x^l. Finally, u_{[CLS]} (the first column in U_x^L, which corresponds to the [CLS] token) is used as input to a subsequent fully connected layer that outputs s_x.
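The embedding summation described above can be sketched in a few lines of NumPy. The array shapes, vocabulary size, and random values below are illustrative assumptions for the example, not the patented implementation:

```python
import numpy as np

# Illustrative sketch: forming the input matrix U_x as the sum of token,
# positional, and segment embeddings. V, d, and N are hypothetical values.
rng = np.random.default_rng(0)
V, d, N = 100, 16, 8

token_emb = rng.normal(size=(V, d))      # one row per vocabulary token
pos_emb = rng.normal(size=(N, d))        # one row per sequence position
seg_emb = rng.normal(size=(2, d))        # one row per segment id (0 or 1)

token_ids = rng.integers(0, V, size=N)   # an example input token sequence
segment_ids = np.zeros(N, dtype=int)     # single-sentence input: segment 0

# U_x is d x N: column i is the embedding of token x_i.
U_x = (token_emb[token_ids] + pos_emb[np.arange(N)] + seg_emb[segment_ids]).T
assert U_x.shape == (d, N)
```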

Each encoder layer employs M self-attention heads that are applied in parallel to U_x^{l−1}, producing M different attention matrices A_x^{lm} ∈ ℝ^{N×N}, where Q_m^l, K_m^l ∈ ℝ^{d′×d} are the query and key mappings and 1 ≤ m ≤ M. Each entry [A_x^{lm}]_{ij} quantifies the amount of attention x_i receives from x_j, according to the attention head m in the layer l. Then, the encoder output U_x^l is obtained by a subsequent set of operations that involves the M attention matrices and value mappings. As a result, the four-encoder-layer neural network model 202 generates the prediction scores 206 (s_x).
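As an illustration of how an encoder layer's M attention matrices relate to the query and key mappings, the following NumPy sketch computes row-stochastic N × N attention matrices via scaled dot-product attention. The function names, dimensions, and random inputs are hypothetical assumptions for the example:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_matrices(U, Q_maps, K_maps):
    """Return M attention matrices, each N x N with rows summing to one.

    U:      d x N intermediate representations (U_x^{l-1}).
    Q_maps: list of M query mappings, each d' x d.
    K_maps: list of M key mappings, each d' x d.
    """
    mats = []
    for Q, K in zip(Q_maps, K_maps):
        q, k = Q @ U, K @ U                       # each d' x N
        scores = (q.T @ k) / np.sqrt(q.shape[0])  # N x N pairwise scores
        mats.append(softmax(scores, axis=1))      # each row is a distribution
    return mats

rng = np.random.default_rng(1)
d, d_head, N, M = 16, 4, 8, 2
U = rng.normal(size=(d, N))
Qs = [rng.normal(size=(d_head, d)) for _ in range(M)]
Ks = [rng.normal(size=(d_head, d)) for _ in range(M)]
A = attention_matrices(U, Qs, Ks)
assert len(A) == M and A[0].shape == (N, N)
```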

“Ground truth” is a term used in statistics and machine learning that refers to checking the results of machine learning for accuracy against the real world. The accuracy of the prediction scores 206 can depend on the performance of the four-encoder-layer neural network model 202, which can be limited at least by its design and/or training. Accordingly, the ground truth can be used to identify prediction errors in the output of the four-encoder-layer neural network model 202, but this process does not provide a robust understanding of why the errors were generated or how to improve the performance of the model.

Accordingly, the described technology explains the predictions made by the four-encoder-layer neural network model 202 using the self-attention matrices A_x^{lm} (optionally, together with their respective gradients) in order to produce a ranking over the tokens in x such that tokens that affect the model’s prediction the most are ranked higher.

Given a sentence x ∈ Ω and a finetuned transformer-based model s, the prediction scores s_x ∈ ℝ^n can be generated by the four-encoder-layer neural network model 202. The description of FIG. 2 focuses on classification tasks. Specifically, for binary classification, n = 1, and we set s_x as the logit score. However, for multiclass classification, n > 1 (depending on the number of distinct classes), and we focus on a specific entry in s_x which is associated with the ground truth class to be explained. For the sake of brevity, from here onwards, s_x represents the logit score in binary classification, or the logit score associated with the ground truth class (in the case of multiclass classification), and the disambiguation should be clear from the context.

The importance of each token x_i ∈ x with respect to s can be quantified by importance scores 210 evaluated using the described technology. In other words, tokens in x that contribute to s the most can be identified and indeed ranked, thereby explaining, at least in part, how the prediction was determined by the model. First, the unlabeled input data 204 (x) is propagated through the four-encoder-layer neural network model 202 to compute the prediction scores (s_x). Then, importance scores 210 (e.g., importance rankings) of the token x_i with respect to the prediction score s_x are computed by:

r_{x_i} = (1 / (L·M·N)) Σ_{l=1}^{L} Σ_{m=1}^{M} Σ_{j=1}^{N} [H_x^{lm}]_{ij}, (2)

with

H_x^{lm} = A_x^{lm} ∘ ReLU(G_x^{lm}), (3)

where G_x^{lm} = ∂s_x / ∂A_x^{lm} are the element-wise gradients of s_x with respect to the self-attention matrices A_x^{lm}, and ∘ stands for the Hadamard (or element-wise) product. The ReLU (Rectified Linear Unit) function is a non-linear activation function that is used in multi-layer neural networks or deep neural networks. The ReLU function can be represented as:

ReLU(z) = max(0, z), (4)

where z is an input value. The term “rectifier,” as used herein, refers to hardware and/or software that executes the ReLU function or a ReLU-like function. The self-attention matrices A_x^{lm} are output by the various layers of the four-encoder-layer neural network model 202 to the importance evaluator 208, which also receives the prediction scores 206 in some implementations. Eq. (2) scores the importance of each token x_i ∈ x with respect to s_x, enabling ranking of the tokens in x according to their importance. Higher values of r_{x_i} indicate higher importance of x_i, hence a greater influence on the prediction score s_x. In practice, for x_i ∈ {[CLS], [SEP], [PAD]}, r_{x_i} is set to −∞, as these tokens cannot provide good explanations of the prediction scores.

A motivation behind Eqs. (2) and (3) is to identify tokens in x for which 1) high attention is received from other tokens in x (information from the attention activations), and 2) a further increase in the amount of the received attention will increase s_x the most (information from the gradient of the attention activations). Eq. (3) ensures that these two conditions are met, since if [G_x^{lm}]_{ij} ≤ 0, then [H_x^{lm}]_{ij} = 0, and if [A_x^{lm}]_{ij} is small, then [H_x^{lm}]_{ij} is close to zero (recall that [A_x^{lm}]_{ij} ≥ 0, as it is the result of a softmax). Finally, Eq. (2) aggregates the overall contribution of the attention scores and the positive gradients from all SA heads across all encoder layers, with respect to x_i ∈ x.
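Under this notation, Eqs. (2) and (3) reduce to a few array operations. The following NumPy sketch is illustrative only; the (L, M, N, N) array layout and the stand-in random inputs are assumptions, not the claimed implementation:

```python
import numpy as np

def importance_scores(A, G):
    """A, G: arrays of shape (L, M, N, N) holding the self-attention
    matrices and their gradients. Returns one importance score per token."""
    H = A * np.maximum(G, 0.0)       # Eq. (3): A ∘ ReLU(G)
    L, M, N, _ = A.shape
    # Eq. (2): sum row i of H over columns j, heads m, and layers l.
    return H.sum(axis=(0, 1, 3)) / (L * M * N)

rng = np.random.default_rng(2)
L, M, N = 4, 2, 6
A = rng.uniform(size=(L, M, N, N))
A /= A.sum(axis=-1, keepdims=True)   # rows of each A^{lm} sum to one
G = rng.normal(size=(L, M, N, N))    # stand-in gradients for the example
r = importance_scores(A, G)
assert r.shape == (N,)
```

Because A is nonnegative (a softmax output) and the negative gradients are zeroed, every score r_{x_i} produced this way is nonnegative.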

Having computed the importance scores, the machine learning system 200 can input the importance scores 210 (along with their corresponding tokens) into a refinement applicator 212 to generate refinement data 214, which can be used to refine the four-encoder-layer neural network model 202. In one implementation, for example, the refinement data 214 may include supplemental training data that is focused on the higher-ranking tokens, and on token sequences that include these same tokens, in the unlabeled input data 204. By training the four-encoder-layer neural network model 202 with training data that provides correct labeling based on these higher-ranking tokens, the model can improve its accuracy for future predictions. In another implementation, the refinement data 214 may provide insight into redesigning the four-encoder-layer neural network model 202, such as by adding or subtracting layers, incorporating different prediction heads for a given task, etc.

The ReLU function in Eq. (3) zeroes the negative gradients while preserving the values of H_x^{lm} having positive gradients, which otherwise may be canceled out by a large accumulated negative value in the summation in Eq. (2). The activations in the i-th row within an attention matrix A_x^{lm} quantify the importance of the token x_i with respect to the other tokens in x. In addition, if [G_x^{lm}]_{ij} > 0, then an increase in the activation [A_x^{lm}]_{ij} should lead to an increase in the model’s output score. Therefore, the importance of the token x_i (according to the attention head m in the encoder layer l) is determined by the summation over the i-th row in H_x^{lm}, and the contributions to this sum come from elements for which both the activated self-attention matrix and its gradients are positive. Finally, the overall importance of x_i is accumulated from the M self-attention heads in L layers according to Eq. (2).

In some transformer-based models, there are 144 self-attention matrices (M = 12, L = 12) that act as filters. However, in practice, only a few attention entries are substantially activated; a large number of the entries in the activated self-attention matrices are close to zero and are also associated with negative gradients. The accumulated effect of this negative sum leads to a suppression (or even complete cancellation) of the small number of activated self-attention heads with positive gradients (which hold the actual information intended to be preserved). Hence, the negative gradients are zeroed (using ReLU). The application of the negative gradient trimming, prior to the summation, along with the complementary contribution from the activated self-attention matrices and their positive gradients, has been validated in an ablation study.
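A toy numeric example (with made-up values) illustrates the cancellation effect that motivates the negative gradient trimming: one strongly activated entry with a positive gradient can be drowned out by many weakly activated entries with negative gradients unless the negatives are zeroed first.

```python
import numpy as np

A = np.array([0.9, 0.01, 0.01, 0.01, 0.01])      # attention activations
G = np.array([0.5, -15.0, -15.0, -15.0, -15.0])  # illustrative gradients

without_relu = (A * G).sum()               # 0.45 - 0.6 = -0.15: cancelled
with_relu = (A * np.maximum(G, 0.0)).sum() # 0.45: signal preserved

assert without_relu < 0 < with_relu
```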

In other implementations, the gradients may be omitted, although the importance rankings may not be as robust as those computed using the gradients. In these implementations, the importance rankings may be given by setting H_x^{lm} = A_x^{lm} in Eq. (2):

r_{x_i} = (1 / (L·M·N)) Σ_{l=1}^{L} Σ_{m=1}^{M} Σ_{j=1}^{N} [A_x^{lm}]_{ij}.

In these implementations, the prediction scores s_x need not be input to the importance evaluator 208 because the implementations do not require computations of the gradients with respect to the prediction scores. Other implementations using various components based on the self-attention matrices may be employed. For example, implementations of the described technology may include, without limitation:

1. Att: Setting H_x^{lm} = A_x^{lm} and using Eq. (2).

2. Att-Grad: Setting H_x^{lm} = G_x^{lm} and using Eq. (2).

3. Att-Grad-R: Setting H_x^{lm} = ReLU(G_x^{lm}) and using Eq. (2).

4. Att × Att-Grad: Setting H_x^{lm} = A_x^{lm} ∘ G_x^{lm} and using Eq. (2).

5. Grad-SAM: Using Eqs. (2) and (3) as given above.
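The five variants above differ only in how H_x^{lm} is formed before the summation of Eq. (2), which the following illustrative sketch makes explicit; the dictionary of variant names and the array shapes are assumptions for the example:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Each variant maps (A, G) to H_x^{lm}; the summation of Eq. (2) is shared.
VARIANTS = {
    "Att":            lambda A, G: A,
    "Att-Grad":       lambda A, G: G,
    "Att-Grad-R":     lambda A, G: relu(G),
    "Att x Att-Grad": lambda A, G: A * G,
    "Grad-SAM":       lambda A, G: A * relu(G),  # Eq. (3)
}

def scores(A, G, variant):
    """A, G: (L, M, N, N) arrays; returns one score per token via Eq. (2)."""
    H = VARIANTS[variant](A, G)
    L, M, N = H.shape[0], H.shape[1], H.shape[-1]
    return H.sum(axis=(0, 1, 3)) / (L * M * N)

rng = np.random.default_rng(3)
A = rng.uniform(size=(2, 2, 5, 5))
G = rng.normal(size=(2, 2, 5, 5))
for name in VARIANTS:
    assert scores(A, G, name).shape == (5,)
```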

FIG. 3 illustrates an example importance evaluator 300 providing importance scores for deep self-attention matrix model refinement relative to a neural network system. The importance evaluator 300 includes one or more communication interfaces (see a communication interface 302), which may be implemented in software and/or hardware and configured to receive self-attention matrices 304, such as from a neural network model, and corresponding prediction scores 306 and to output importance scores 308, such as to a refinement applicator.

Calculating elements of the importance evaluator 300 may include, without limitation, a gradient generator 310, an element-wise product calculator 312, a rectifier 314, and a summation calculator 316. The gradient generator 310 is executable by one or more computing hardware processors to generate gradients of the prediction scores across the activated self-attention matrices. In one implementation, the gradients are calculated according to G_x^{lm} = ∂s_x / ∂A_x^{lm}, although other gradient calculations may be employed in other implementations.

In one implementation, the element-wise product calculator 312 computes the Hadamard (or element-wise) product of the self-attention matrices and the ReLU of the element-wise gradients, as shown in Eq. (3). Alternative implementations of the element-wise product calculator may be employed, such as computing H_x^{lm} = A_x^{lm} ∘ G_x^{lm} for use in Eq. (2).

In at least one implementation, the rectifier 314 computes the ReLU function on the gradients generated by the gradient generator 310. The summation calculator 316 computes the summations given in Eq. (2), and variations thereof, to yield the importance scores 308 (e.g., r_{x_i}).
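The four calculating elements can be illustrated end-to-end on a toy scoring function whose gradients are known in closed form (if s_x is the sum of W ∘ A, then ∂s_x/∂A = W); in a real system, the gradient generator would instead rely on a framework's automatic differentiation. This sketch is illustrative only, with hypothetical shapes, not the claimed implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
L, M, N = 2, 2, 4
A = rng.uniform(size=(L, M, N, N))   # received self-attention matrices
W = rng.normal(size=(L, M, N, N))    # toy model weights
s_x = (W * A).sum()                  # toy prediction score: sum of W ∘ A

G = W                                # gradient generator: ds_x/dA^{lm} = W
G_pos = np.maximum(G, 0.0)           # rectifier (ReLU)
H = A * G_pos                        # element-wise product calculator
r = H.sum(axis=(0, 1, 3))            # summation calculator, per Eq. (2)
assert r.shape == (N,)
```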

FIG. 4 illustrates example operations 400 providing importance scores for deep self-attention matrix model refinement relative to a neural network system. The importance scores indicate the importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence. The neural network model includes multiple encoding layers. A receiving operation 402 receives self-attention matrices of the neural network model into an importance evaluator. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence and input to an importance evaluator. Each self-attention matrix corresponds to one of the multiple encoding layers.

A generating operation 404 generates, in the importance evaluator, an importance score for one or more of the tokens in the input token sequence. Each importance score is based on a summation as a function of the self-attention matrices. The summation is computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model. Accordingly, each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

A refining operation 406 refines the neural network model based on the importance score for the one or more of the tokens in the input token sequence. In one implementation, the refining operation 406 may generate and/or input supplemental training data into the neural network model, wherein the supplemental training data is substantially focused on tokens that rank highly in the importance scores. Other refinements may include refinements to the design of the neural network model (e.g., adding/subtracting encoding layers).
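One way the refining operation 406 might identify the tokens on which supplemental training data should be focused is sketched below; the function name and the choice of k are illustrative assumptions, not taken from the implementations above.

```python
import numpy as np

def top_importance_tokens(tokens, scores, k=2):
    """Return the k tokens that rank highest by importance score.

    Supplemental training data (refining operation 406) could then be
    generated or selected so that it is substantially focused on the
    returned tokens.
    """
    order = np.argsort(scores)[::-1]     # token indices, highest score first
    return [tokens[i] for i in order[:k]]

tokens = ["the", "movie", "was", "excellent"]
scores = np.array([0.05, 0.40, 0.10, 0.45])
focus = top_importance_tokens(tokens, scores, k=2)
```

Here the two highest-scoring tokens ("excellent" and "movie") would be selected as the focus of the supplemental training data.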

FIG. 5 illustrates an example computing device 500 in a computing system for use in refining a neural network system. The computing device 500 may be a client device, such as a laptop, mobile device, desktop, tablet, or a server/cloud device. The computing device 500 includes one or more processor(s) 502, and a memory 504. The memory 504 generally includes both volatile memory (e.g., RAM) and non-volatile memory (e.g., flash memory). An operating system 510 resides in the memory 504 and is executed by the processor(s) 502.

In an example computing device 500, as shown in FIG. 5, one or more modules or segments, such as applications 550, a neural network model, an importance evaluator, a refinement applicator, all or part of a communication interface, a gradient generator, an element-wise product calculator, a rectifier, a summation calculator, and other modules, are loaded into the operating system 510 on the memory 504 and/or storage 520 and executed by processor(s) 502. The storage 520 may store prediction scores, unlabeled input data, training data, importance scores, self-attention matrices, and other data and be local to the computing device 500 or may be remote and communicatively connected to the computing device 500.

The computing device 500 includes a power supply 516, which is powered by one or more batteries or other power sources and which provides power to other components of the computing device 500. The power supply 516 may also be connected to an external power source that overrides or recharges the built-in batteries or other power sources.

The computing device 500 may include one or more communication transceivers 530 which may be connected to one or more antenna(s) 532 to provide network connectivity (e.g., mobile phone network, Wi-Fi®, Bluetooth®) to one or more other servers and/or client devices (e.g., mobile devices, desktop computers, or laptop computers). The computing device 500 may further include a network adapter 536, which is a type of communication device. The computing device 500 may use the adapter and any other types of communication devices for establishing connections over a wide-area network (WAN) or local-area network (LAN). It should be appreciated that the network connections shown are exemplary and that other communications devices and means for establishing a communications link between the computing device 500 and other devices may be used.

The computing device 500 may include one or more input devices 534 such that a user may enter commands and information (e.g., a keyboard or mouse). These and other input devices may be coupled to the server by one or more interfaces 538, such as a serial port interface, parallel port, or universal serial bus (USB). The computing device 500 may further include a display 522, such as a touch screen display.

The computing device 500 may include a variety of tangible processor-readable storage media and intangible processor-readable communication signals. Tangible processor-readable storage can be embodied by any available media that can be accessed by the computing device 500 and includes both volatile and nonvolatile storage media, removable and non-removable storage media. Tangible processor-readable storage media excludes intangible communications signals (such as signals per se) and includes volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as processor-readable instructions, data structures, program modules, or other data. Tangible processor-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by the computing device 500. In contrast to tangible processor-readable storage media, intangible processor-readable communication signals may embody processor-readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include signals traveling through wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media.

An example computing processor-based method of scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence is provided. The neural network model includes multiple encoding layers. Self-attention matrices of the neural network model are received into an importance evaluator. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence. Each self-attention matrix corresponds to one of the multiple encoding layers. The importance evaluator generates an importance score for one or more of the tokens in the input token sequence. Each importance score is based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model. Each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

Another example computing processor-based method of any previous method is provided, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

Another example computing processor-based method of any previous method further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

Another example computing processor-based method of any previous method further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Another example computing processor-based method of any previous method further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

Another example computing processor-based method of any previous method further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.
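The summation variants recited above differ only in the per-element term being summed across the tokens, the self-attention matrices, and the encoding layers. A hedged Python/NumPy sketch of this family of variants, with illustrative function names that are not taken from the implementations above, follows.

```python
import numpy as np

# Each variant reduces an (M, n, n) stack of per-layer matrices to one
# score per token (column); only the summand differs between variants.

def sum_attention(A):
    return A.sum(axis=(0, 1))                       # summation of the self-attention matrices

def sum_gradients(A, G):
    return G.sum(axis=(0, 1))                       # summation of the gradients

def sum_positive_gradients(A, G):
    return np.maximum(G, 0.0).sum(axis=(0, 1))      # negative gradients zeroed

def sum_hadamard(A, G):
    return (A * G).sum(axis=(0, 1))                 # element-wise product with gradients

def sum_rectified_hadamard(A, G):
    return (A * np.maximum(G, 0.0)).sum(axis=(0, 1))  # element-wise product, negatives zeroed

# Toy data: one layer, two tokens
A = np.ones((1, 2, 2))
G = np.array([[[1.0, -2.0], [3.0, -4.0]]])
```

With unit attention weights, the Hadamard variants reduce to the gradient variants, which makes the effect of zeroing negative gradients easy to see on the second token.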

Another example computing processor-based method of any previous method further includes refining the neural network model based on the importance score for the one or more of the tokens in the input token sequence.

An example computing system for scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence is provided, wherein the neural network model includes multiple encoding layers. The example computing system includes one or more hardware computing processors and a communication interface executable at least in part by the one or more hardware computing processors and configured to receive self-attention matrices of the neural network model. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence. Each self-attention matrix corresponds to one of the multiple encoding layers. An importance evaluator is executable at least in part by the one or more hardware computing processors and configured to generate an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices. The summation is computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model. Each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

Another example computing system of any preceding system is provided, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

Another example computing system of any preceding system is provided, wherein the communication interface is further configured to receive the one or more prediction scores. The summation, as a function of the self-attention matrices, includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

Another example computing system of any preceding system is provided, wherein the communication interface is further configured to receive the one or more prediction scores. The summation, as a function of the self-attention matrices, includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Another example computing system of any preceding system is provided, wherein the communication interface is further configured to receive the one or more prediction scores. The summation, as a function of the self-attention matrices, includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

Another example computing system of any preceding system is provided, wherein the communication interface is further configured to receive the one or more prediction scores. The summation, as a function of the self-attention matrices, includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Another example computing system of any preceding system further includes a refinement applicator executable by the one or more computing hardware processors and configured to refine the neural network model based on the importance score for the one or more of the tokens in the input token sequence.

One or more example tangible processor-readable storage media is provided and embodied with instructions for executing on one or more processors and circuits of a computing device a process for scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence. The neural network model includes multiple encoding layers. The process includes receiving self-attention matrices of the neural network model into an importance evaluator, the self-attention matrices being generated by the neural network model while computing the one or more prediction scores based on the input token sequence, each self-attention matrix corresponding to one of the multiple encoding layers. The process also includes generating, in the importance evaluator, an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model, wherein each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

Other one or more example tangible processor-readable storage media of any preceding media are provided, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

Other one or more example tangible processor-readable storage media of any preceding media are provided, wherein the process further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

Other one or more example tangible processor-readable storage media of any preceding media are provided, wherein the process further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Other one or more example tangible processor-readable storage media of any preceding media are provided, wherein the process further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

Other one or more example tangible processor-readable storage media of any preceding media are provided, wherein the process further includes receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

An example system for scoring importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence is provided. The neural network model includes multiple encoding layers. The system includes means for receiving self-attention matrices of the neural network model into an importance evaluator, the self-attention matrices being generated by the neural network model while computing the one or more prediction scores based on the input token sequence, each self-attention matrix corresponding to one of the multiple encoding layers. The system further includes means for generating, in the importance evaluator, an importance score for one or more of the tokens in the input token sequence, each importance score being based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model, wherein each importance score indicates an importance of a corresponding token in the input token sequence to the generation of the one or more prediction scores relative to other tokens in the input token sequence.

Another example system of any previous system is provided, wherein the summation as a function of the self-attention matrices includes a summation of the self-attention matrices.

Another example system of any previous system further includes means for receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of gradients of the prediction scores with respect to the self-attention matrices.

Another example system of any previous system further includes means for receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes a summation of positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Another example system of any previous system further includes means for receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and gradients of the prediction scores with respect to the self-attention matrices.

Another example system of any previous system further includes means for receiving the one or more prediction scores, wherein the summation as a function of the self-attention matrices includes an element-wise product of the self-attention matrices and positive gradients of the prediction scores with respect to the self-attention matrices, wherein any negative gradients of the prediction scores with respect to the self-attention matrices are zeroed.

Another example system of any previous system further includes means for refining the neural network model based on the importance score for the one or more of the tokens in the input token sequence.

Some implementations may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium to store logic. Examples of a storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, operation segments, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one implementation, for example, an article of manufacture may store executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described embodiments. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a computer to perform a certain operation segment. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.