


Title:
METHODS AND SYSTEMS FOR DETECTING ECG ANOMALIES
Document Type and Number:
WIPO Patent Application WO/2022/087349
Kind Code:
A1
Abstract:
Methods, apparatus and systems for robust and accurate detection of anomalies in medical images and electrocardiograms are disclosed. One example system for training a neural network engine includes a processor that is configured to receive a set of training electrocardiogram signals. At least one electrocardiogram signal in the set of training electrocardiogram signals is associated with metadata identifying a region of interest that includes a heart anomaly. The processor is configured to input the set of training electrocardiogram signals into the neural network engine. The neural network engine is trained using an objective function having a first regularization parameter and a second regularization parameter. The processor is also configured to operate the neural network engine to identify the heart anomaly by classifying the set of training electrocardiogram signals and adjust the neural network engine based on the identified heart anomaly and the metadata.

Inventors:
AGARWAL PARTH SANDEEP (US)
ZHANG CHICHENG (US)
GNIADY CHRISTOPHER (US)
KC DHARMA R (US)
Application Number:
PCT/US2021/056167
Publication Date:
April 28, 2022
Filing Date:
October 22, 2021
Assignee:
UNIV ARIZONA (US)
International Classes:
A61B5/346; A61B5/318; A61B5/349; A61B5/35; A61B5/36; A61B5/364
Domestic Patent References:
WO2020109630A1 2020-06-04
Foreign References:
US20100111396A1 2010-05-06
US20200214618A1 2020-07-09
US20190150794A1 2019-05-23
US20170360377A1 2017-12-21
Other References:
TARTAGLIONE ENZO, LEPSØY SKJALG, FIANDROTTI ATTILIO, FRANCINI GIANLUCA: "Learning Sparse Neural Networks via Sensitivity-Driven Regularization", NIPS'18: PROCEEDINGS OF THE 32ND INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS; MONTRÉAL CANADA DECEMBER 3 - 8, 2018, 28 October 2018 (2018-10-28) - 8 December 2018 (2018-12-08), pages 3882 - 3892, XP055937983, DOI: 10.5555/3327144.3327303
EDMUND KAY; ANURAG AGARWAL: "DropConnected neural networks trained on time-frequency and inter-beat features for classifying heart sounds", PHYSIOLOGICAL MEASUREMENT., INSTITUTE OF PHYSICS PUBLISHING, BRISTOL, GB, vol. 38, no. 8, 31 July 2017 (2017-07-31), GB , pages 1645 - 1657, XP020318315, ISSN: 0967-3334, DOI: 10.1088/1361-6579/aa6a3d
Attorney, Agent or Firm:
TEHRANCHI, Babak (US)
Claims:
CLAIMS

What is claimed is:

1. A method for performing classification of image data, the method comprising: receiving an input image data having a feature of interest, wherein the input image data is associated with a mask identifying a region that includes the feature of interest; inputting the input image data into a neural network engine that is trained using an objective function having a first regularization parameter and a second, different regularization parameter, wherein the first regularization parameter indicates a first degree of sensitivity associated with samples located within the mask, and wherein the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the mask; and identifying the feature of interest by classifying the input image data using the neural network engine.

2. The method of claim 1, further comprising: receiving feedback information in response to the identified feature, wherein the feedback information is used to validate the identified feature of interest or correct the identified feature, and wherein the feedback information is used to adjust the neural network engine.

3. The method of claim 2, wherein the neural network engine is adjusted by online training or offline re-training of the neural network engine.

4. The method of any of claims 1 to 3, wherein the first regularization parameter and the second regularization parameter are greater than zero.

5. The method of any of claims 1 to 4, wherein the objective function is based on gradient-based regularization using the first regularization parameter and the second regularization parameter.

6. A system for training a neural network engine configured to detect a heart anomaly using electrocardiogram signals, comprising a processor that is configured to: receive a set of training electrocardiogram signals, wherein at least one electrocardiogram signal in the set of training electrocardiogram signals is associated with metadata identifying a region of interest that includes the heart anomaly; input the set of training electrocardiogram signals into the neural network engine, wherein the neural network engine is trained using an objective function having a first regularization parameter and a second regularization parameter, wherein the first regularization parameter indicates a first degree of sensitivity associated with samples located within the region of interest, and wherein the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the region of interest; operate the neural network engine to identify the heart anomaly by classifying the set of training electrocardiogram signals; and adjust the neural network engine based on the identified heart anomaly and the metadata.

7. The system of claim 6, wherein the first regularization parameter and the second regularization parameter are different.

8. The system of claim 6 or 7, wherein the objective function is based on gradient-based regularization using the first regularization parameter and the second regularization parameter.

9. The system of any of claims 6 to 8, wherein the processor is configured to adjust the neural network engine by: storing the metadata associated with the set of training electrocardiogram signals; and re-training the neural network engine using the stored metadata and the set of training electrocardiogram signals.

10. The system of any of claims 5 to 9, wherein the processor is configured to receive the metadata by querying a database configured to store expert feedback information.

11. The system of any of claims 5 to 10, wherein the processor is further configured to: provide a user interface to receive the metadata from an expert identifying the region that includes the heart anomaly.

12. The system of any of claims 5 to 11, wherein the metadata comprises at least one or more bounding boxes marking boundaries of the region of interest and/or one or more annotations associated with the region of interest that includes the heart anomaly.

13. A method for facilitating detection of one or more heart anomalies in electrocardiogram, comprising: receiving information representing a set of electrocardiogram signals; inputting the information representing the set of electrocardiogram signals into a neural network engine, wherein the neural network engine is trained using an objective function having a regularization parameter that indicates a degree of sensitivity associated with signal samples located outside of a region of interest, wherein the region of interest is determined based on gradient variation of the set of electrocardiogram signals; identifying one or more regions that include the one or more heart anomalies by classifying the set of electrocardiogram signals using the neural network engine; and generating metadata information to produce an annotated diagram corresponding to the set of electrocardiogram signals, wherein the metadata information indicates the one or more regions that include the one or more heart anomalies annotated based on the metadata information.

14. The method of claim 13, further comprising: providing an interface to receive feedback information in response to the identified one or more heart anomalies; storing the feedback information in a database; and re-training the neural network engine based on at least the feedback information.

15. The method of claim 13 or 14, wherein the regularization parameter is greater than zero.

16. The method of any of claims 13 to 15, wherein the annotated diagram includes a map generated based on gradient information of the set of electrocardiogram signals determined by the neural network engine.

17. The method of any of claims 13 to 16, wherein the annotated diagram includes textual descriptions of the one or more regions that include the one or more heart anomalies.

18. The method of any of claims 13 to 17, wherein the metadata information indicates how the one or more regions that include the one or more heart anomalies are identified by the neural network engine to enable a medical practitioner to validate or correct the one or more identified regions.

19. A system for facilitating detection of one or more heart anomalies in electrocardiogram, comprising a processor that is configured to: receive information representing a set of electrocardiogram signals; input the information representing the set of electrocardiogram signals into a neural network engine, wherein the neural network engine is trained using an objective function having a regularization parameter that indicates a degree of sensitivity associated with samples located outside of a region of interest, wherein the region of interest is determined based on gradient variation of the set of electrocardiogram signals; identify one or more regions that include the one or more heart anomalies by classifying the set of electrocardiogram signals using the neural network engine; and generate metadata information to produce an annotated diagram corresponding to the set of electrocardiogram signals, wherein the metadata information indicates the one or more regions that include the one or more heart anomalies annotated based on the metadata information.

20. The system of claim 19, wherein the processor is configured to: provide an interface to receive feedback information in response to the identified one or more heart anomalies; store the feedback information in a database; and re-train the neural network engine based on at least the feedback information.

21. The system of claim 19 or 20, wherein the regularization parameter is greater than zero.

22. The system of any of claims 19 to 21, wherein the annotated diagram includes a map generated based on gradient information of the set of electrocardiogram signals determined by the neural network engine.

23. The system of any of claims 19 to 22, wherein the annotated diagram includes textual descriptions of the one or more regions that include the one or more heart anomalies.

24. The system of any of claims 19 to 23, wherein the metadata information indicates how the one or more regions that include the one or more heart anomalies are identified by the neural network engine to enable a medical practitioner to validate or correct the one or more identified regions.

Description:
METHODS AND SYSTEMS FOR DETECTING ECG ANOMALIES

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and benefits of U.S. Provisional Application 63/104,880, titled “METHODS AND SYSTEMS FOR DETECTING ECG ANOMALIES,” filed on October 23, 2020. The entire disclosure of the aforementioned application is incorporated by reference as part of the disclosure of this application.

TECHNICAL FIELD

[0002] This patent document relates to signal processing using neural networks, in particular, to a feedback-based machine learning system for signal classification.

BACKGROUND

[0003] Heart disease is the leading cause of death in the US and worldwide for both men and women. In the US, heart disease has been the leading cause of death since 2015, and the number of deaths from heart disease increased by 4.8% from 2019 to 2020. According to the Centers for Disease Control and Prevention (CDC), heart disease accounts for one in every four deaths in the United States each year. These deaths are attributed to many factors, ranging from undetected heart disease causing sudden death, to late detection that may damage heart muscles and require repair, to improper monitoring after successful heart surgery. Every year more than 5 million Americans are affected by heart failure. The electrocardiogram (ECG), which records the electrical activity of the heart, has long been the preferred and trusted technique for doctors to detect and diagnose these heart conditions. ECG is also used for monitoring patients after surgery for signs of trouble during recovery. Accurate ECG analysis is an important diagnostic tool in early disease detection.

SUMMARY

[0004] The present document discloses systems and methods that can be used in various embodiments to provide more robust and accurate detection of anomalies in medical images and/or electrocardiograms.

[0005] In one example aspect, a method for performing image classification includes receiving an input image having a feature of interest. The input image is associated with a mask identifying a region that includes the feature of interest. The method includes inputting the input image into a neural network engine that is trained using an objective function having a first regularization parameter and a second, different regularization parameter. The first regularization parameter indicates a first degree of sensitivity associated with samples located within the mask, and the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the mask. The method also includes identifying the feature of interest in the input image by classifying the input image using the neural network engine.

[0006] In another example aspect, a system for training a neural network engine configured to detect a heart anomaly using electrocardiogram signals is disclosed. The system includes a processor that is configured to receive a set of training electrocardiogram signals. At least one electrocardiogram signal in the set of training electrocardiogram signals is associated with metadata identifying a region of interest that includes a heart anomaly. The processor is configured to input the set of training electrocardiogram signals into the neural network engine. The neural network engine is trained using an objective function having a first regularization parameter and a second regularization parameter. The first regularization parameter indicates a first degree of sensitivity associated with samples located within the region of interest, and the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the region of interest. The processor is also configured to operate the neural network engine to identify the heart anomaly by classifying the set of training electrocardiogram signals and adjust the neural network engine based on the identified heart anomaly and the metadata.

[0007] In yet another example aspect, a method for facilitating detection of one or more heart anomalies in an electrocardiogram is disclosed. The method includes receiving information representing a set of electrocardiogram signals and inputting the information representing the set of electrocardiogram signals into a neural network engine. The neural network engine is trained using an objective function having a regularization parameter that indicates a degree of sensitivity associated with samples located outside of a region of interest. The region of interest is determined based on gradient variation of the set of electrocardiogram signals. The method includes identifying one or more regions that include the one or more heart anomalies by classifying the set of electrocardiogram signals using the neural network engine and generating an annotated diagram corresponding to the set of electrocardiogram signals. The annotated diagram includes metadata information identifying the one or more regions that include the one or more heart anomalies.

[0008] These, and other, aspects are described in the present document.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 is a chart illustrating example robust accuracy for the Caltech-UCSD Birds (CUB) dataset in accordance with one or more embodiments of the present technology.

[0010] FIG. 2A illustrates a picture of a bird that will undergo the process of saliency mapping.

[0011] FIG. 2B illustrates a saliency map generated in accordance with one or more embodiments of the present technology.

[0012] FIG. 2C illustrates another saliency map generated in accordance with one or more embodiments of the present technology.

[0013] FIG. 2D illustrates another saliency map generated in accordance with one or more embodiments of the present technology.

[0014] FIG. 2E illustrates yet another saliency map generated in accordance with one or more embodiments of the present technology.

[0015] FIG. 3A illustrates an example localization accuracy concept in accordance with one or more embodiments of the present technology.

[0016] FIG. 3B illustrates another example of localization accuracy concept in accordance with one or more embodiments of the present technology.

[0017] FIG. 3C illustrates another example of localization accuracy concept in accordance with one or more embodiments of the present technology.

[0018] FIG. 3D illustrates yet another example of localization accuracy concept in accordance with one or more embodiments of the present technology.

[0019] FIG. 4 is a chart illustrating example robust accuracy of trained interactive deep learning (IDL) models in accordance with one or more embodiments of the present technology.

[0020] FIG. 5 is a chart illustrating example saliency measure of trained IDL models in accordance with one or more embodiments of the present technology.

[0021] FIG. 6 is a chart illustrating example localization accuracy of trained IDL models in accordance with one or more embodiments of the present technology.

[0022] FIG. 7 is a flowchart illustrating a training, validation, and re-training process in accordance with one or more embodiments of the present technology.

[0023] FIG. 8 illustrates example ECG data.

[0024] FIG. 9 illustrates example auxiliary information for the ECG data shown in FIG. 8 in accordance with one or more embodiments of the present technology.

[0025] FIG. 10A illustrates example feedback collected for ECG model training in accordance with one or more embodiments of the present technology.

[0026] FIG. 10B illustrates additional example feedback collected for ECG model training in accordance with one or more embodiments of the present technology.

[0027] FIG. 11A illustrates an example boxplot for Fmax scores on a test dataset for a normal model and an example model trained with feedback information in accordance with one or more embodiments of the present technology.

[0028] FIG. 11B illustrates an example boxplot of macro area under the receiver operating characteristic curve (AUC) scores on a test dataset for a normal model and an example model trained with feedback information in accordance with one or more embodiments of the present technology.

[0029] FIG. 12A illustrates an example interpretability map for a normal model.

[0030] FIG. 12B illustrates an example interpretability map for a model trained with feedback information in accordance with one or more embodiments of the present technology.

[0031] FIG. 13 is a flowchart representation of a method for performing image classification in accordance with one or more embodiments of the present technology.

[0032] FIG. 14 is a block diagram that illustrates an example of a computer system in which at least some operations described herein can be implemented.

[0033] FIG. 15 is a flowchart representation of a method for detecting one or more heart anomalies in accordance with one or more embodiments of the present technology.

DETAILED DESCRIPTION

[0034] Previous efforts to build conventional neural network models with improved accuracy and interpretability have so far been unsuccessful. One of the reasons for the lack of success is the lack of a comprehensive understanding of when and how learning from auxiliary information helps improve the accuracy and trustworthiness of machine learning models. This patent document discloses techniques that can be implemented in various embodiments to learn from auxiliary information to improve accuracy, adversarial robustness, and interpretability.

[0035] Advanced model training algorithms aimed at improving adversarial robustness have been adopted in various fields. However, improvements in robust accuracy often come at the price of lower standard accuracy. In the context of image classification, previous methods aim at improving model accuracy and interpretability by using bounding-box-based auxiliary information. For example, one method penalizes the mismatch between model-generated attention masks and bounding boxes to improve the accuracy and interpretability of convolutional neural networks (CNNs). Another method proposed using a single regularization term in the training objective that penalizes the gradients of cross-entropy losses with respect to input features outside bounding boxes. The disclosed techniques also incorporate a gradient-based penalty for features inside bounding boxes, thereby enabling the utilization of more refined part-localization bounding box information to train the models and improving model accuracy in fine-grained classification tasks.

[0036] In some embodiments, the disclosed techniques can be based on attribution map or saliency map generation for images. Specifically, the user can specify training objectives that promote “alignment” between such attribution maps and bounding boxes. The attribution maps are more sophisticated than conventional attribution maps in part because they are generated based on a new regularization algorithm that uses different degrees of regularization. Experiments have shown that such attribution maps help improve the trustworthiness of image classifications. Finally, recent works have empirically demonstrated that adversarial robustness and interpretability are tightly connected. On one hand, adversarially robust models can generate more interpretable explanations than non-robust models. On the other hand, models trained to mimic gradient-based explanations of adversarially robust models exhibit more robustness. This hints at the possibility that robustness is a side benefit of interpretability.

[0037] In some embodiments, the disclosed techniques can be implemented as an image classification neural network configured to take auxiliary information (e.g., bounding boxes, doctors’ annotations) as additional inputs. Inspired by related works on gradient-based regularization, the improved system is configured to employ a training objective (e.g., an objective function) that has different degrees of regularization on different parts of the inputs, which takes into account auxiliary information such as, but not limited to, bounding box information and annotations. In other words, the system uses a modified objective function with at least two different degrees of regularization.

[0038] To test the accuracy and/or efficiency of the modified objective function, the disclosed system was used to train and classify images in the Caltech-UCSD Birds (CUB) dataset. The image classification results from the improved system showed improved accuracy, robustness, and interpretability, both quantitatively and qualitatively. The training and optimization methods and the results are discussed below.

[0039] Interactive Deep Learning

[0040] In some embodiments, the disclosed techniques can be implemented as a classification system that includes an interactive deep learning (IDL) neural network (e.g., engine). The IDL engine uses a new objective function having two different degrees of regularization, thereby enabling the IDL engine to continuously improve the prediction model based on human-in-the-loop feedback (e.g., interactively), resulting in significant enhancement of human abilities in many cases. In some embodiments, the IDL engine is trained to recognize various arrhythmias or anomalies in electrocardiogram (ECG) signals using one or more ECG training datasets. The IDL engine is further improved by re-training the trained neural network model using auxiliary information such as, but not limited to, user-provided bounding boxes and annotations.

[0041] In some embodiments, the IDL engine has a Long Short-Term Memory (LSTM) architecture, which can be trained using back-propagation through time. The IDL engine can also have other neural network architectures such as, but not limited to, a CNN or a Gated Recurrent Unit (GRU).
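As a concrete illustration of paragraph [0041], the following is a minimal PyTorch sketch of an LSTM-based multi-label ECG classifier. The class name, layer sizes, and the use of a bidirectional two-layer LSTM are illustrative assumptions, not the patented architecture.

import torch
import torch.nn as nn

class EcgLstmClassifier(nn.Module):
    """Illustrative LSTM-based IDL engine for multi-label ECG classification."""
    def __init__(self, num_leads=12, hidden_size=128, num_labels=71):
        super().__init__()
        # Each time step of the input is the vector of per-lead sample values.
        self.lstm = nn.LSTM(input_size=num_leads, hidden_size=hidden_size,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, x):
        # x: (batch, time, num_leads) -> per-label scores of shape (batch, num_labels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # features from the last time step

# Example: a batch of four 12-lead ECG segments, 300 samples long.
model = EcgLstmClassifier()
scores = model(torch.randn(4, 300, 12))
decisions = torch.where(scores > 0, torch.ones_like(scores), -torch.ones_like(scores))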

[0042] Once trained, the IDL engine is capable of detecting numerous heart conditions in even the longest ECG signals collected over multiple days or weeks. This automated detection saves hours of manual labor for the readers and provides them with a clear region of the ECG signals where the disease manifestation is visible, which allows them to focus on validating the results.

[0043] To train the IDL engine to perform image classification, training data with bounding boxes and annotations are used. The training data consist of m training examples, where for example i, x_i ∈ R^d is its feature part (image i’s pixel representation), y_i ∈ [K] is its label part (the class of the object in the image), and M_i ⊂ [d] is the image’s associated bounding box. An example of an image with bounding box information is given in FIG. 3A. The goal is to train a neural network-based classification model such that, when predicting on test examples, it has high accuracy, robustness, and good interpretability. Formally, given an example x, the network outputs a prediction f(x; θ) that is a probability vector in Δ^{K-1}, the K-dimensional probability simplex. Define the cross-entropy loss of model f(·; θ) on example (x, y) as ℓ_CE(θ, (x, y)) = −log f(x; θ)_y, where z_j denotes the j-th coordinate of vector z.

[0044] For model training, the following objective function is optimized for some λ_1, λ_2 > 0:

min_θ (1/m) Σ_{i=1}^{m} [ ℓ_CE(θ, (x_i, y_i)) + λ_1 Σ_{j ∈ M_i} (∂ℓ_CE(θ, (x_i, y_i))/∂x_{i,j})^2 + λ_2 Σ_{j ∉ M_i} (∂ℓ_CE(θ, (x_i, y_i))/∂x_{i,j})^2 ].    (1)

In addition to minimizing the usual cross-entropy loss, it is important to ensure that the model’s predictions have different degrees of sensitivity to different parts of the training images. Specifically, the magnitude of ∂ℓ_CE(θ, (x_i, y_i))/∂x_{i,j} characterizes the sensitivity of the cross-entropy loss with respect to the j-th pixel. The IDL engine or model is trained such that the sensitivity to the input aligns with the object bounding boxes as much as possible; formally, |∂ℓ_CE(θ, (x_i, y_i))/∂x_{i,j}| should be large for j in M_i and should be small otherwise.
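The following is one possible PyTorch sketch of the objective in paragraph [0044], assuming the penalties are squared input gradients of the cross-entropy loss split by the bounding-box mask; the function name and tensor shapes are illustrative. Setting lam1 equal to lam2 collapses the two penalties into the single double-backpropagation penalty discussed below.

import torch
import torch.nn.functional as F

def idl_objective(model, x, y, mask, lam1, lam2):
    """Cross-entropy plus gradient penalties weighted differently inside and
    outside the bounding-box mask.
    x: (batch, C, H, W) images; y: (batch,) class labels; mask: {0, 1} tensor
    shaped like x, with 1 marking pixels inside the bounding box."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Per-pixel sensitivity of the cross-entropy loss; create_graph=True keeps
    # the penalty differentiable with respect to the model parameters.
    grads, = torch.autograd.grad(ce, x, create_graph=True)
    inside = (grads * mask).pow(2).sum()
    outside = (grads * (1 - mask)).pow(2).sum()
    return ce + lam1 * inside + lam2 * outside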

[0045] In some embodiments, the objective function used to train the above IDL model sets λ_1 and λ_2 at values greater than zero and different from each other. In contrast, the objective function of some conventional methods sets λ_1 = λ_2, which degenerates the objective function to that of the double backpropagation method.

[0046] To illustrate the improved accuracy, robustness, and interpretability of the IDL engine of the disclosed system, the CUB dataset was used to train and test the IDL engine. CUB has approximately 11,788 examples. The data preparation process takes the union of the train and test sets provided by the CUB dataset, permutes the set, and performs a split. The first part consists of 1/2 of the data, which is used for training. The remaining data is divided into three sets of equal size: the first set is used to select the best model during training, the second for λ_1 and λ_2 hyperparameter selection, and the third set is used for testing. In some embodiments, the ResNet architecture was selected. Training was performed with mini-batch stochastic gradient descent and a learning rate of 0.001. It should be noted that other learning rates are possible, such as, but not limited to, 0.005 and 0.01. Training with the choices of λ_1 and λ_2 in Λ^2 was considered, where Λ = {0} ∪ {2^k : k ∈ {−3, −2, ..., 9}}. The following set of algorithms was evaluated:

λ-VARY: train a model for each (λ_1, λ_2) in Λ^2, and use the validation set to select the best performing model.

λ-EQUAL: train a model for each (λ_1, λ_2) in {(λ_1, λ_2) ∈ Λ^2 : λ_1 = λ_2}, and use the validation set to select the best performing model.

BLACKOUT: train a model that minimizes the cross-entropy loss over modified training data (x̃_i, y_i), where for each i, x̃_i is defined as x_i with all coordinates j in M_i set to zero.

STANDARD: standard training that minimizes the cross-entropy loss over the (x_i, y_i)’s; this is also equivalent to setting λ_1 = λ_2 = 0.
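A schematic sketch of the λ-VARY sweep described above is shown below; the grid values and the helper functions train_fn and validate_fn are assumptions for illustration only, not the exact experimental procedure.

# Values of lambda considered for the sweep (illustrative grid of 14 values,
# giving 14 x 14 = 196 (lambda_1, lambda_2) pairs).
LAMBDAS = [0.0] + [2.0 ** k for k in range(-3, 10)]

def lambda_vary_sweep(train_fn, validate_fn):
    """Train one model per (lambda_1, lambda_2) pair and keep the best one
    according to the validation split (hypothetical helpers train_fn/validate_fn)."""
    best_score, best_model = float("-inf"), None
    for lam1 in LAMBDAS:
        for lam2 in LAMBDAS:
            model = train_fn(lam1, lam2)      # e.g., mini-batch SGD, lr = 0.001
            score = validate_fn(model)        # e.g., robust accuracy on validation data
            if score > best_score:
                best_score, best_model = score, model
    return best_model, best_score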

[0047] All experiments are repeated three times to generate heatmaps identifying the best performing models and their corresponding λ_1 and λ_2 values. In other words, many versions of the IDL engine were created by training each version with a different set of λ_1 and λ_2 values. In some embodiments, the desired regularization parameters for the trained model can be determined by balancing robust accuracy (e.g., FIG. 4), saliency value (e.g., FIG. 5), and/or localization accuracy (e.g., FIG. 6) to achieve the optimal classification results. The best performing IDL engine is the engine with the highest performance measure on the test dataset.

[0048] Standard and robust accuracy comparison

[0049] The adversarial robustness of the trained IDL models is tested for 10 values of the adversarial perturbation radius using the Fast Gradient Sign Method (FGSM). All adversarial tests were performed using the Foolbox library, which is an adversarial attack tool library. It should be noted that other adversarial attack libraries can also be used.
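For illustration, the following is a library-free sketch of a single FGSM robustness test (the evaluation above used the Foolbox library); the helper name and the assumption that inputs lie in [0, 1] are illustrative.

import torch
import torch.nn.functional as F

def fgsm_robust_accuracy(model, x, y, eps):
    """Fraction of examples still classified correctly after a one-step FGSM
    perturbation of radius eps; inputs are assumed to lie in [0, 1]."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()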

[0050] FIG. 4 shows the robust accuracy heatmap generated by training and validating 196 different IDL models. Each IDL model is trained using a different set of λ_1 and λ_2 values. Each cell indicates the performance of the corresponding trained IDL model on the validation dataset. Using the heatmap, the best IDL model can be identified.

[0051] Recall that for λ-VARY and λ-EQUAL, for each value of ε, separate (λ_1, λ_2) pairs are chosen using the validation set. The results are shown in FIG. 1, which illustrates test robust accuracy for different values of ε on the CUB dataset. It can be seen that λ-VARY trains models that have higher standard accuracy and are also robust to adversarial attacks; the performance of the learned models beats that of λ-EQUAL (especially when ε is large), showing the utility of incorporating bounding box information in the training objective.

[0052] Interpretability comparison

[0053] The interpretability comparison process compares the interpretability of the trained models both qualitatively and quantitatively. The gradient-based saliency maps generated by the model trained by each algorithm on a few bird images in the CUB dataset are plotted. FIG. 2B shows the saliency map of the STANDARD model, where the salient features are dispersed and clearly not focused on the bird body. FIG. 2C shows the saliency map of λ-EQUAL. As shown, the salient features are better than those of STANDARD, as they exhibit more of a bird shape. FIG. 2D shows the model trained by λ-VARY, which shows the complete shape of the bird and even highlights subtle parts such as the beak and legs. The saliency map of the BLACKOUT model (FIG. 2E) is reasonably moderate, but it clearly does not perform as well as the λ-VARY model.

[0054] Quantitative Results

[0055] To quantitatively measure the interpretability of the gradient-based saliency maps output by the different IDL models (generated using different sets of λ_1 and λ_2), bounding boxes were extracted from the images of the test dataset. The extracted bounding boxes were evaluated by employing the saliency metric shown in Table 1, and by comparing the extracted bounding boxes with the original bounding boxes using localization accuracy. To generate a bounding box from a saliency map, the image is binarized by thresholding, and the tightest rectangular box that contains the pixels whose grayscale is above the threshold is output.

Table 1: Saliency metric comparison among the evaluated methods.

[0056] To measure the quality of the saliency map, after generating a bounding box, the corresponding region from the original image is cropped and passed into the network to make a prediction. The saliency metric is defined as s(a, p) = log(a) − log(p), where a = max(0.05, â), â is the area fraction of the bounding box, and p is the model’s predictive probability for the correct label. The lower the value of the saliency metric, the better. Table 1 shows the lowest saliency value for each evaluated method (i.e., STANDARD, BLACKOUT, λ-EQUAL, and λ-VARY) on the CUB dataset. As shown in Table 1, λ-VARY outperforms all baselines by having the lowest saliency value.
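A minimal NumPy sketch of this evaluation step is shown below, assuming a grayscale saliency map, a fixed binarization threshold, and a user-supplied predict_proba helper; these names and defaults are illustrative, not the exact implementation used above.

import numpy as np

def box_from_saliency(saliency, threshold):
    """Tightest rectangle containing every pixel whose saliency exceeds the
    threshold; saliency is an (H, W) array and at least one pixel is assumed
    to exceed the threshold. Returns (top, left, bottom, right), inclusive."""
    ys, xs = np.where(saliency > threshold)
    return ys.min(), xs.min(), ys.max(), xs.max()

def saliency_metric(saliency, image, label, predict_proba, threshold=0.5):
    """s(a, p) = log(a) - log(p) with a = max(0.05, area fraction of the box);
    lower is better. predict_proba(crop, label) returns the model's probability
    for the correct label on the cropped region."""
    top, left, bottom, right = box_from_saliency(saliency, threshold)
    crop = image[top:bottom + 1, left:right + 1]
    a_hat = (crop.shape[0] * crop.shape[1]) / (image.shape[0] * image.shape[1])
    a = max(0.05, a_hat)
    p = predict_proba(crop, label)
    return np.log(a) - np.log(p)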

[0057] FIG. 5 shows the saliency heatmap generated by training and validating 196 different IDL models. Each IDL model is trained using a different set of λ_1 and λ_2 values. Each cell indicates the performance of the corresponding trained IDL model on the validation dataset. For the saliency heatmap, the lower the cell value, the better the performance.

[0058] Localization accuracy

[0059] The localization accuracy is defined as the fraction of examples where the model prediction is correct and the generated bounding box (FIGS. 3A-3D) has an intersection over union (IOU) value of ≥ 0.5 with the ground truth bounding box. Table 2 shows the test localization accuracy of models trained by all methods on the CUB dataset, where λ-VARY outperforms all baselines (the larger the value, the better).
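The following is a small sketch of the localization-accuracy computation, assuming boxes are given as (top, left, bottom, right) tuples; the record format is an illustrative assumption.

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (top, left, bottom, right)."""
    top = max(box_a[0], box_b[0])
    left = max(box_a[1], box_b[1])
    bottom = min(box_a[2], box_b[2])
    right = min(box_a[3], box_b[3])
    inter = max(0, bottom - top) * max(0, right - left)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def localization_accuracy(records):
    """records: iterable of (prediction_correct, generated_box, ground_truth_box);
    an example counts only if the prediction is correct and IOU >= 0.5."""
    hits = [correct and iou(gen, gt) >= 0.5 for correct, gen, gt in records]
    return sum(hits) / len(hits)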

[0060] FIG. 6 illustrates the localization accuracy heatmap generated by training and validating 196 different IDL models. Each IDL model is trained using a different set of λ_1 and λ_2 values. Each cell indicates the performance of the corresponding trained IDL model on the validation dataset. Using the heatmap, the best IDL model can be identified. Here, the higher the cell value, the better the performance.

Table 2: Localization accuracy comparison among the evaluated methods.

[0061] IDL Engine for ECG Classification

[0062] Accurate ECG interpretation is critical in detecting heart diseases. However, ECGs are often misinterpreted due to a lack of training or insufficient time spent to detect minute anomalies. A recent study found that 30% of myocardial infarction events were misclassified as low risk, with ECG misinterpretation responsible for half of the misclassifications. Misdiagnosis is also a top concern expressed by cardiac patients.

[0063] The automation of ECG reading has been a long-standing need. Analyzing the ECG signals manually requires extreme concentration, causes mental fatigue, and is not reimbursed adequately by the insurance companies. Consequently, the number of people who can read ECG signals is shrinking, and experienced doctors do not have enough time to scrutinize patients’ ECGs. However, commercial ECG machines only provide preliminary processing and analysis, and offer limited assistance in detection of more complex cardiac conditions.

[0064] The machine learning community has observed this need, and numerous machine learning algorithms have been proposed for disease detection. For example, convolutional neural networks (CNNs) have been used to assist arrhythmia and myocardial infarction detection. Other algorithms such as decision trees, k-nearest neighbors, logistic regression, support vector machines, and inception neural networks have also been evaluated. While existing machine learning algorithms have succeeded in classifying basic cardiac conditions, classification of more complex cardiac events remains challenging. Furthermore, existing solutions provide diagnoses in a black-box manner, requiring medical personnel to carefully analyze the ECG again to validate the algorithm’s interpretations.

[0065] The disclosed techniques can adaptively adjust the detection algorithms based on feedback collected from expert ECG readers. In some embodiments, the disclosed techniques can be implemented as a signal-importance-mask, feedback-based machine learning system that accepts expert feedback continuously (e.g., for online learning) or in a periodic/aperiodic manner (e.g., for offline learning). In some embodiments, the system can provide medical personnel with a precise region of the ECG signals where the disease manifestation is visible. In some embodiments, a visual representation of the system’s decision process can be shown to the medical personnel to illustrate what portion of the signal is used in the decision process, thereby enabling the medical personnel to quickly validate the results or correct areas of misinterpretation without the need to fully reexamine the ECG.

[0066] FIG. 7 illustrates a training process 700 for training an IDL model (e.g., neural network, engine) in accordance with some embodiments of the present disclosure. Process 700 starts at Operation 705, where a training dataset is prepared and used to train the IDL model. For ECG classification, the training dataset can include thousands of labelled ECG recordings, which can include ECG data of healthy heart signals and abnormal heart signals. An example of labelled ECG data is shown in FIG. 8. It should be noted that the label can be in the form of metadata.

[0067] In some embodiments, process 700 can use open-source ECG data from PhysioNet for training, which may include unlabeled ECG data. The training process 700 can query a database to receive expert feedback associated with one or more ECG signals. For example, the feedback can be returned in the form of metadata associated with the corresponding training data. The training process 700 can associate the unlabeled ECG data with the metadata to prepare labelled ECG data such as shown in FIG. 8.

[0068] Once the training data set is prepared, the model is trained at Operation 710 using an objective function to create a trained model 715. The trained model is then used to classify a validation dataset and/or real patient ECG data at Operation 720.

[0069] ECG data can be represented as one-dimensional data. Specifically, the training data can be represented as a set of tuples where, for each example i, x_i ∈ R^d is the ECG signal representation. In the selected example ECG datasets for training 71-way heart disease classification from 12-lead ECG signals, d = 12 × 300, and y_i ∈ {−1, 1}^K is the class labeling, where K = 71; for each coordinate j ∈ [K], +1 and −1 indicate that label j is present and absent, respectively. The training data can also include signal importance masks for at least part of the samples. The index set of samples having corresponding signal importance masks is denoted by E.

[0070] As compared to detecting features of interest in image data, it is easier to distinguish sample values and detect the ECG signal given a set of input data. Therefore, only the sensitivity of the loss with respect to the irrelevant features needs to be penalized. In some embodiments, one of the two regularization parameters can be set to 0 for ECG data training. For example, the model can be represented by a function f(x; θ), where x represents the ECG signal and θ represents the model parameters. Given x and θ, the model output f(x; θ) lies in R^K. The multi-label classification result can be denoted using sign(f(x; θ)) := (sign(f(x; θ)_1), ..., sign(f(x; θ)_K)) ∈ {−1, 1}^K, wherein sign(z) = 1 if z > 0 and sign(z) = −1 otherwise. The training process hinges on finding θ that has a small average multi-label logistic loss on the training examples. Here, the multi-label logistic loss of model f(·; θ) on example (x, y) is defined as ℓ_logistic(θ, (x, y)) = Σ_{j=1}^{K} log(1 + exp(−y_j f(x; θ)_j)).
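A short PyTorch sketch of the multi-label logistic loss and the sign-based decision rule defined above is given below; tensor shapes and function names are illustrative.

import torch
import torch.nn.functional as F

def multilabel_logistic_loss(scores, y):
    """Sum over labels of log(1 + exp(-y_j * f(x)_j)), averaged over the batch;
    scores and y are (batch, K) tensors with y in {-1, +1}."""
    # softplus(z) = log(1 + exp(z)), a numerically stable form of the loss above.
    return F.softplus(-y * scores).sum(dim=1).mean()

def multilabel_decision(scores):
    """sign(f(x; theta)) with sign(z) = 1 if z > 0 and -1 otherwise."""
    return torch.where(scores > 0, torch.ones_like(scores), -torch.ones_like(scores))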

[0071] The training objective function can be a regularized loss objective that takes advantage of the signal importance masks, defined as:

min_θ (1/m) Σ_{i=1}^{m} [ ℓ_logistic(θ, (x_i, y_i)) + 1{i ∈ E} λ_2 Σ_{j ∉ M_i} (∂ℓ_logistic(θ, (x_i, y_i))/∂x_{i,j})^2 ].

[0072] Here, λ_1 = 0 and λ_2 > 0. Sample (x, y) can be viewed as introducing two parts of losses that contribute to the training objective: the first part is the standard multi-label logistic loss ℓ_logistic(θ, (x, y)), and the second part is the gradient penalty term, which applies to samples in E whose signal importance mask M is available. This term regularizes the sensitivity of the model with respect to the input using the signal importance masks: it penalizes the model for being too sensitive to parts of the ECGs outside their corresponding signal importance masks.
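One possible PyTorch sketch of this regularized objective (with λ_1 = 0) is given below; the mask convention and the way out-of-mask sensitivity is summed are assumptions consistent with the description, not the exact patented formulation.

import torch
import torch.nn.functional as F

def ecg_training_objective(model, x, y, mask, lam2):
    """Multi-label logistic loss plus a penalty on the loss gradient outside the
    signal importance masks (lambda_1 = 0 here).
    x: (batch, time, leads); y: (batch, K) in {-1, +1}; mask: shaped like x with
    1 inside an expert-marked region, 0 outside, and all ones when a sample has
    no mask feedback (so it contributes no penalty)."""
    x = x.clone().requires_grad_(True)
    loss = F.softplus(-y * model(x)).sum(dim=1).mean()  # multi-label logistic loss
    grads, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = (grads * (1 - mask)).pow(2).sum()         # only out-of-mask sensitivity
    return loss + lam2 * penalty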

[0073] In some embodiments, two different regularization parameters can be used for ECG data training to achieve greater robustness. An objective function such as shown in Eq. (1) can be used to regularize the sensitivity of the model with respect to the input using the signal importance masks.

[0074] Referring back to Operation 720, the trained IDL model makes prediction(s) on the patient’s heart condition, while also pinpointing the parts of the ECG that are responsible for its prediction. For example, system 700 can be configured to highlight the difference in peaks in the ECG (e.g., gradient changes in the ECG signal indicating the important regions). Alternatively, or in addition, the system 700 can provide additional information (e.g., graphical or textual) associated with how the important regions are determined. In this way, the results can be readily interpreted, trusted, and accepted by the medical practitioner, without the need for the medical practitioner to completely re-examine the ECG data.

[0075] The results are then validated and annotated by the medical practitioner at Operation 725. The doctors can also help the trained model to better focus on the important region(s) of the ECG by drawing in bounding boxes 905, 910, and 915. Each bounding box can be annotated (text not shown) to explain the anomaly or arrhythmia. FIG. 9 illustrates an example of annotated ECG data with bounding boxes. Here, the medical practitioner can either confirm the system’s finding and/or highlight (e.g., draw a bounding box) and annotate a missing diagnosis.

[0076] At Operation 730, images with manually inputted bounding boxes and corresponding auxiliary data (e.g., annotations) are fed back into the training algorithm to improve the IDL model. In some embodiments, the auxiliary data is provided during the training process (e.g., during online training) to adaptively improve the IDL model while the IDL model is being trained. In some embodiments, the IDL model is retrained after feedback information is collected and stored in a database. The feedback information is retrieved before the IDL model is retrained.

[0077] In some embodiments, the training system can provide a user interface to collect feedback information from medical personnel. For example, a web application can be provided to the doctors to allow them to highlight important regions in ECG signals. FIGS. 10A-10B illustrate example feedback collected for ECG model training using a web interface in accordance with one or more embodiments of the present technology. As shown in FIGS. 10A-10B, one or more regions 1005 are marked by the doctor as important regions in the ECG signals. An additional text input field (not shown) can be provided to collect feedback information in the form of natural language explanations. For example, Table 1 shows example textual feedback collected from the doctor for the set of ECG signals (I, II, III, aVR, aVL, aVF, V1-V6).

Table 1: Example textual feedback from a doctor.

[0078] The highlighted regions and the textual descriptions can be converted to signal importance masks and/or metadata for the training of the neural network engine. The information is then stored in a database and subsequently retrieved by the model training algorithm.
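As a simple illustration, highlighted regions could be converted into binary signal importance masks along the following lines; the region format (lead index with start and end sample) is a hypothetical representation, not the system's actual schema.

import numpy as np

def regions_to_mask(num_samples, num_leads, regions):
    """Convert expert-highlighted regions into a binary signal importance mask.
    regions: list of (lead_index, start_sample, end_sample) tuples (hypothetical format)."""
    mask = np.zeros((num_samples, num_leads), dtype=np.float32)
    for lead, start, end in regions:
        mask[start:end + 1, lead] = 1.0
    return mask

# Example: mark samples 120-180 of lead II (index 1) as an important region.
mask = regions_to_mask(300, 12, [(1, 120, 180)])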

[0079] FIGS. 11A-11B illustrate example boxplots of Fmax and macro AUC scores on a test dataset for a normal model and an example model trained with feedback information in accordance with one or more embodiments of the present technology. As shown in FIGS. 11A-11B, incorporating the feedback information in training the neural network engine 1105 results in superior performance as compared to a normal neural network engine 1110 trained without any feedback information.

[0080] FIGS. 12A-12B illustrate example interpretability maps for a normal model and an example model trained with feedback information in accordance with one or more embodiments of the present technology. The interpretability maps can be provided to medical practitioners to facilitate the validation and/or annotation process. To generate the interpretability maps, the gradients of the output with respect to the input signals are computed. Regions with large gradients are highlighted as important regions. As shown in FIG. 12A, only a small amount of gradient variation has been detected by the model trained without any feedback information, leading to only a few regions of interest 1205 available for the doctors to review and/or validate. The model trained with feedback information, on the other hand, can accurately identify the gradient variations in the ECG signals and detect the regions of importance 1215 for detecting anomalies. In addition to the graphical depiction of the classification results (e.g., interpretability maps), additional textual information can be included to describe the classification results. Such information can be provided to the doctors to indicate how the regions are identified by the model, thereby allowing the doctors to validate or correct the classification results without the need to completely re-evaluate the ECG data. When presented with the example information (e.g., interpretability maps such as shown in FIG. 12B) in a blind study, the medical team confirmed that the model trained with feedback information can correctly identify regions that normal models fail to detect.
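A minimal PyTorch sketch of this interpretability-map generation is shown below, assuming the per-label output score is differentiated with respect to the input and a quantile threshold flags the important regions; the quantile value and function name are illustrative assumptions.

import torch

def interpretability_map(model, x, label_index, quantile=0.95):
    """Gradient of one output score with respect to the ECG input; samples whose
    gradient magnitude falls above the chosen quantile are flagged as important."""
    x = x.clone().requires_grad_(True)
    score = model(x)[:, label_index].sum()
    grad, = torch.autograd.grad(score, x)
    magnitude = grad.abs()
    threshold = torch.quantile(magnitude, quantile)
    return magnitude, magnitude >= threshold  # saliency values and boolean importance flags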

[0081] FIG. 13 is a flowchart representation of a method for performing image classification in accordance with one or more embodiments of the present technology. The method 1300 includes, at operation 1310, receiving an input image having a feature of interest. The input image is associated with a mask identifying a region that includes the feature of interest. The method includes, at operation 1320, inputting the input image into a neural network model that is trained using an objective function having a first regularization parameter and a second, different regularization parameter. The first regularization parameter indicates a first degree of sensitivity associated with samples located within the mask, and the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the mask. The method 1300 includes, at operation 1330, identifying the feature of interest in the input image by classifying the input image using the neural network model.

[0082] In some embodiments, the method includes receiving feedback information in response to the identified feature. The feedback information is used to validate the identified feature of interest or correct the identified feature. The feedback information is further used to adaptively adjust the neural network engine (e.g., via online training or offline re-training). In some embodiments, the first regularization parameter and the second regularization parameter are greater than zero. In some embodiments, the objective function is based on gradient-based regularization using the first regularization parameter and the second regularization parameter.

[0083] FIG. 14 is a block diagram that illustrates an example of a computer system 1400 in which at least some operations described herein can be implemented. As shown, the computer system 1400 can include: one or more processors 1402, main memory 1406, non-volatile memory 1410, a network interface device 1412, a video display device 1418, an input/output device 1420, a control device 1422 (e.g., keyboard and pointing device), a drive unit 1424 that includes a storage medium 1426, and a signal generation device 930 that are communicatively connected to a bus 1416. The bus 1416 represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. Various common components (e.g., cache memory) are omitted for brevity. Instead, the computer system 1400 is intended to illustrate a hardware device on which components illustrated or described relative to the examples of the figures and any other components described in this specification can be implemented.

[0084] The computer system 1400 can take any suitable physical form. For example, the computing system 1400 can share a similar architecture to that of a server computer, personal computer (PC), tablet computer, mobile telephone, game console, music player, wearable electronic device, network-connected (“smart”) device (e.g., a television or home assistant device), AR/VR systems (e.g., head-mounted display), or any electronic device capable of executing a set of instructions that specify action(s) to be taken by the computing system 1400. In some implementations, the computer system 1400 can be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC), or a distributed system such as a mesh of computer systems, or include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1400 can perform operations in real-time, near real-time, or in batch mode.

[0085] The network interface device 1412 enables the computing system 1400 to mediate data in a network 1414 with an entity that is external to the computing system 1400 through any communication protocol supported by the computing system 1400 and the external entity. Examples of the network interface device 1412 include a network adaptor card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, bridge router, a hub, a digital media receiver, and/or a repeater, as well as all wireless elements noted herein.

[0086] The memory (e.g., main memory 1406, non-volatile memory 1410, machine-readable medium 1426) can be local, remote, or distributed. Although shown as a single medium, the machine-readable medium 1426 can include multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1428. The machine-readable (storage) medium 1426 can include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing system 1400. The machine-readable medium 1426 can be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium can include a device that is tangible, meaning that the device has a concrete physical form, although the device can change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state.

[0087] Although implementations have been described in the context of fully functioning computing devices, the various examples are capable of being distributed as a program product in a variety of forms. Examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and non-volatile memory devices 1410, removable flash memory, hard disk drives, optical disks, and transmission-type media such as digital and analog communication links.

[0088] In general, the routines executed to implement examples herein can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1404, 1408, 1428) set at various times in various memory and storage devices in computing device(s). When read and executed by the processor 1402, the instruction(s) cause the computing system 1400 to perform operations to execute elements involving the various aspects of the disclosure.

[0089] In one example aspect, a system (e.g., the computer system 1400 shown in FIG. 14) for training a neural network engine configured to detect a heart anomaly using electrocardiogram signals is disclosed. The system comprises a processor that is configured to receive a set of training electrocardiogram signals. At least one electrocardiogram signal in the set of training electrocardiogram signals is associated with metadata identifying a region of interest that includes a heart anomaly. The processor is configured to input the set of training electrocardiogram signals into the neural network engine. The neural network engine includes an objective function having a first regularization parameter and a second regularization parameter. The first regularization parameter indicates a first degree of sensitivity associated with samples located within the region of interest, and the second regularization parameter indicates a second degree of sensitivity associated with samples located outside of the region of interest. The processor is configured to operate the neural network engine to identify the heart anomaly by classifying the set of training electrocardiogram signals and to adaptively adjust the neural network engine based on the identified heart anomaly and the metadata.

[0090] In some embodiments, the first regularization parameter and the second regularization parameter are different. In some embodiments, the objective function is based on gradient-based regularization using the first regularization parameter and the second regularization parameter. In some embodiments, the processor is configured to adjust the neural network engine by storing the metadata associated with the set of training electrocardiogram signals and re-training the neural network engine using the stored metadata and the set of training electrocardiogram signals.

[0091] In some embodiments, the processor is configured to receive the metadata by querying a database configured to store expert feedback information. In some embodiments, the processor is further configured to provide a user interface to receive the metadata from an expert identifying the region that includes the heart anomaly. In some embodiments, the metadata comprises at least one or more bounding boxes marking boundaries of the region of interest and/or one or more annotations associated with the region of interest that includes the heart anomaly.

[0092] FIG. 15 is a flowchart representation of a method for detecting one or more heart anomalies in an electrocardiogram in accordance with one or more embodiments of the present technology. The method 1500 includes, at operation 1510, receiving a set of electrocardiogram signals. The method 1500 includes, at operation 1520, inputting the set of electrocardiogram signals into a neural network engine. The neural network engine includes an objective function having a regularization parameter that indicates a degree of sensitivity associated with samples located outside of a region of interest. The region of interest is determined based on gradient variation of the set of electrocardiogram signals. The method 1500 includes, at operation 1530, identifying one or more regions that include the one or more heart anomalies by classifying the set of electrocardiogram signals using the objective function. The method 1500 also includes, at operation 1540, generating metadata information to produce an annotated diagram corresponding to the set of electrocardiogram signals. The metadata information indicates the one or more regions that include the one or more heart anomalies annotated based on the metadata information.

[0093] In some embodiments, the method includes providing an interface to receive feedback information in response to the identified one or more heart anomalies, storing the feedback information in a database, and re-training the neural network engine based on at least the feedback information. In some embodiments, the regularization parameter is greater than zero.

[0094] In some embodiments, the annotated diagram includes a map generated based on gradient information of the set of electrocardiogram signals determined by the neural network engine. In some embodiments, the annotated diagram includes textual descriptions of the one or more regions that include the one or more heart anomalies. In some embodiments, the metadata information indicates how the one or more regions that include the one or more heart anomalies are identified by the neural network engine to enable a medical practitioner to validate or correct the one or more identified regions.

[0095] In another example aspect, a system (e.g., the computer system 1400 shown in FIG. 14) for facilitating detection of one or more heart anomalies in electrocardiogram is disclosed. The system includes a processor that is configured to receive information representing a set of electrocardiogram signals, and input the information representing the set of electrocardiogram signals into a neural network engine. The neural network engine is trained using an objective function having a regularization parameter that indicates a degree of sensitivity associated with samples located outside of a region of interest. The region of interest is determined based on gradient variation of the set of electrocardiogram signals. The processor is configured to identify one or more regions that include the one or more heart anomalies by classifying the set of electrocardiogram signals using the neural network engine and generate an annotated diagram corresponding to the set of electrocardiogram signals. The annotated diagram includes metadata information identifying the one or more regions that include the one or more heart anomalies.

[0096] In some embodiments, the processor is configured to provide an interface to receive feedback information in response to the identified one or more heart anomalies, store the feedback information in a database, and re-train the neural network engine based on at least the feedback information. In some embodiments, the regularization parameter is greater than zero.

[0097] In some embodiments, the annotated diagram includes a map generated based on gradient information of the set of electrocardiogram signals determined by the neural network engine. In some embodiments, the annotated diagram includes textual descriptions of the one or more regions that include the one or more heart anomalies. In some embodiments, the metadata information indicates how the one or more regions that include the one or more heart anomalies are identified by the neural network engine to enable a medical practitioner to validate or correct the one or more identified regions.

[0098] It is thus appreciated that the disclosed techniques can be implemented in various neural network engines to provide greater robustness and interpretability for image classification and ECG anomaly detection.

[0099] Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[00100] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[00101] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

[00102] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[00103] While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[00104] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

[00105] Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.