

Title:
CROSS-MODAL NEURAL NETWORKS FOR PREDICTION
Document Type and Number:
WIPO Patent Application WO/2019/166601
Kind Code:
A1
Abstract:
Various embodiments of the present disclosure are directed to a deep learning model employing lower layer neural networks of different architectures to independently learn the embedded feature representation of each data type of a partitioned multimodal electronic data including an encoded data (11), an embedded data (12) and a sampled data (13). In an exemplary embodiment, at a lower neural network layer, encoded data (11) is inputted into an encoded neural network (30) to produce an encoded feature vector (14), embedded data (12) is inputted into an embedded neural network (40) to output an embedded feature vector (15), and sampled data (13) is inputted into a sampled neural network (50) to output a sampled feature vector (16). At an upper neural network layer, the encoded feature vector (14), the embedded feature vector (15) and the sampled feature vector (16) are inputted into a convolutional neural network (60) to produce a prediction (17).

Inventors:
CHEUNG PATRICK (NL)
Application Number:
PCT/EP2019/055089
Publication Date:
September 06, 2019
Filing Date:
March 01, 2019
Assignee:
KONINKLIJKE PHILIPS NV (NL)
International Classes:
G06N3/04
Other References:
VELICKOVIC PETAR ET AL: "X-CNN: Cross-modal convolutional neural networks for sparse datasets", 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), IEEE, 6 December 2016 (2016-12-06), pages 1 - 8, XP033066384, DOI: 10.1109/SSCI.2016.7849978
SALVADOR AMAIA ET AL: "Learning Cross-Modal Embeddings for Cooking Recipes and Food Images", IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. PROCEEDINGS, IEEE COMPUTER SOCIETY, US, 21 July 2017 (2017-07-21), pages 3068 - 3076, XP033249653, ISSN: 1063-6919, [retrieved on 20171106], DOI: 10.1109/CVPR.2017.327
QI ZHANG ET AL: "Retweet Prediction with Attention-based Deep Neural Network", CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 24 October 2016 (2016-10-24), pages 75 - 84, XP058299441, ISBN: 978-1-4503-4073-1, DOI: 10.1145/2983323.2983809
SUNGYONG SEO ET AL: "Representation Learning of Users and Items for Review Rating Prediction Using Attention-based Convolutional Neural Network", 3RD INTERNATIONAL WORKSHOP ON MACHINE LEARNING METHODS FOR RECOMMENDER SYSTEMS (MLREC) (SDM'17), 29 April 2017 (2017-04-29), Houston, Texas, USA, pages 1 - 8, XP055507936
NGUYEN PHUOC ET AL: "Deepr: A Convolutional Net for Medical Records", IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, IEEE, PISCATAWAY, NJ, USA, vol. 21, no. 1, 31 January 2017 (2017-01-31), pages 22 - 30, XP011640117, ISSN: 2168-2194, [retrieved on 20170201], DOI: 10.1109/JBHI.2016.2633963
"Image Analysis and Recognition : 11th International Conference, ICIAR 2014, Vilamoura, Portugal, October 22-24, 2014, Proceedings, Part I; IN: Lecture notes in computer science , ISSN 1611-3349 ; Vol. 8814", vol. 8485, 16 June 2014, SPRINGER, Berlin, Heidelberg, ISBN: 978-3-642-17318-9, article YI ZHENG ET AL: "Time Series Classification Using Multi-Channels Deep Convolutional Neural Networks", pages: 298 - 310, XP055303306, 032548, DOI: 10.1007/978-3-319-08010-9_33
CHEUNG BING LEUNG PATRICK ET AL: "Deep learning from electronic medical records using attention-based cross-modal convolutional neural networks", 2018 IEEE EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL & HEALTH INFORMATICS (BHI), IEEE, 4 March 2018 (2018-03-04), pages 222 - 225, XP033345164, DOI: 10.1109/BHI.2018.8333409
Attorney, Agent or Firm:
DE HAAN, Poul, Erik (NL)
Claims:
CLAIMS

What is claimed is:

1. A cross-modal neural network controller (90) for processing multimodal electronic data including a plurality of different data types, the cross-modal neural network controller (90) comprising a processor (91) and a non-transitory memory (92) configured to:

at a lower neural network layer, at least two of:

input a first data type (211) into a first neural network (230) to produce a first feature vector (214),

input a second data type (212) into a second neural network (240) to output a second feature vector (215), and

input a third data type (213) into a third neural network (250) to output a third feature vector (216),

wherein the first neural network (230), the second neural network (240) and the third neural network (250) have different neural architectures; and

at an upper neural network layer, input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into a fourth neural network (260) to produce a prediction (217).

2. The cross-modal neural network controller (90) of claim 1,

wherein the first data type (211) is encoded data (11);

wherein the input of the first data type (211) into the first neural network (230) to produce the first feature vector (214) includes:

an input of the encoded data (11) into an encoded neural network (30) to produce an encoded feature vector (14); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the encoded feature vector (14), the second feature vector (215) and the third feature vector (216) into a convolutional neural network to produce the prediction (217).

3. The cross-modal neural network controller (90) of claim 1, wherein the input of the first data type (211) into the first neural network (230) to produce the first feature vector (214) includes:

an application of a deep learning network to the first data type (211) to generate the first feature vector (214); or

a convolution of an application of the deep learning network and an attention module to the first data type (211) to generate the first feature vector (214).

4. The cross-modal neural network controller (90) of claim 1,

wherein the second data type (212) is embedded data (12);

wherein the input of the second data type (212) into the second neural network (240) to produce the second feature vector (215) includes:

an input of the embedded data (12) into an embedded neural network (40) to produce an embedded feature vector (15); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the first feature vector (214), the embedded feature vector (15) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217).

5. The cross-modal neural network controller (90) of claim 1, wherein the input of the second data type (212) into the second neural network (240) to produce the second feature vector (215) includes:

an application of a one-stage convolutional neural network to the second data type (212) to generate the second feature vector (215); or

a convolution of an application of the one-stage convolutional neural network and an attention module to the second data type (212) to generate the second feature vector (215).

6. The cross-modal neural network controller (90) of claim 1,

wherein the third data type (213) is sampled data (13);

wherein the input of the third data type (213) into the third neural network (250) to produce the third feature vector (216) includes:

an input of the sampled data (13) into a sampled neural network (50) to produce a sampled feature vector (16); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the first feature vector (214), the second feature vector (215) and the sampled feature vector (16) into a convolutional neural network to produce the prediction (217).

7. The cross-modal neural network controller (90) of claim 1, wherein the input of the third data type (213) into the third neural network (250) to produce the third feature vector (216) includes:

an application of a two-stage convolutional neural network to the third data type (213) to generate the third feature vector (216); or

a convolution of an application of the two-stage convolutional neural network and an attention module to the third data type (213) to generate the third feature vector (216).

8. The cross-modal neural network controller (90) of claim 1, wherein the processor (91) and the non-transitory memory (92) are at least one of installed in and linked to at least one of a server, a client and a workstation.

9. A non-transitory machine-readable storage medium encoded with instructions for execution by a processor (91) for processing multimodal electronic data including an encoded data (11), an embedded data (12) and a sampled data (13), the non-transitory machine-readable storage medium comprising instructions to:

at a lower neural network layer, at least two of:

input a first data type (211) into a first neural network (230) to produce a first feature vector (214),

input a second data type (212) into a second neural network (240) to output a second feature vector (215), and

input a third data type (213) into a third neural network (250) to output a third feature vector (216),

wherein the first neural network (230), the second neural network (240) and the third neural network (250) have different neural architectures; and

at an upper neural network layer, input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into a fourth neural network (260) to produce a prediction (217).

10. The non-transitory machine-readable storage medium of claim 9,

wherein the first data type (211) is encoded data (11);

wherein the input of the first data type (211) into the first neural network (230) to produce the first feature vector (214) includes:

an input of the encoded data (11) into an encoded neural network (30) to produce an encoded feature vector (14); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the encoded feature vector (14), the second feature vector (215) and the third feature vector (216) into a convolutional neural network to produce the prediction (217).

11. The non-transitory machine-readable storage medium of claim 9, wherein the input of the first data type (211) into the first neural network (230) to produce the first feature vector (214) includes:

an application of a deep learning network to the first data type (211) to generate the first feature vector (214); or

a convolution of an application of the deep learning network and an attention module to the first data type (211) to generate the first feature vector (214).

12. The non-transitory machine-readable storage medium of claim 9, wherein the second data type (212) is embedded data (12);

wherein the input of the second data type (212) into the second neural network (240) to produce the second feature vector (215) includes:

an input of the embedded data (12) into an embedded neural network (40) to produce an embedded feature vector (15); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the first feature vector (214), the embedded feature vector (15) and the third feature vector (216) into a convolutional neural network to produce the prediction (217).

13. The non-transitory machine-readable storage medium of claim 9, wherein the input of the second data type (212) into the second neural network (240) to produce the second feature vector (215) includes:

an application of a one-stage convolutional neural network to the second data type (212) to generate the second feature vector (215); or

a convolution of an application of the one-stage convolutional neural network and an attention module to the second data type (212) to generate the second feature vector (215).

14. The non-transitory machine-readable storage medium of claim 9,

wherein the third data type (213) is sampled data (13);

wherein the input of the third data type (213) into the third neural network (250) to produce the third feature vector (216) includes:

an input of the sampled data (13) into a sampled neural network (50) to produce a sampled feature vector (16); and

wherein the input of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

an input of at least two of the first feature vector (214), the second feature vector (215) and the sampled feature vector (16) into a convolutional neural network to produce the prediction (217).

15. The non-transitory machine-readable storage medium of claim 9, wherein the input of the third data type (213) into the third neural network (250) to produce the third feature vector (216) includes:

an application of a two-stage convolutional neural network to the third data type (213) to generate the third feature vector (216); or

a convolution of an application of the two-stage convolutional neural network and an attention module to the third data type (213) to generate the third feature vector (216).

16. A method for processing multimodal electronic data including an encoded data (11), an embedded data (12) and a sampled data (13),

the method comprising:

at a lower neural network layer, at least two of:

inputting a first data type (211) into a first neural network (230) to produce a first feature vector (214),

inputting a second data type (212) into a second neural network (240) to output a second feature vector (215), and

inputting a third data type (213) into a third neural network (250) to output a third feature vector (216),

wherein the first neural network (230), the second neural network (240) and the third neural network (250) have different neural architectures; and

at an upper neural network layer, inputting at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into a fourth neural network (260) to produce a prediction (217).

17. The method of claim 16,

wherein the first data type (211) is encoded data (11);

wherein the inputting of the first data type (211) into the first neural network (230) to produce the first feature vector (214) includes:

inputting the encoded data (11) into an encoded neural network (30) to produce an encoded feature vector (14); and

wherein the inputting of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

inputting at least two of the encoded feature vector (14), the second feature vector (215) and the third feature vector (216) into a convolutional neural network to produce the prediction (217).

18. The method of claim 16,

wherein the second data type (212) is embedded data (12);

wherein the inputting of the second data type (212) into the second neural network (240) to produce the second feature vector (215) includes:

inputting the embedded data (12) into an embedded neural network (40) to produce an embedded feature vector (15); and

wherein the inputting of at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

inputting at least two of the first feature vector (214), the embedded feature vector (15) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217).

19. The method of claim 16,

wherein the third data type (213) is sampled data (13);

wherein the inputting of the third data type (213) into the third neural network (250) to produce the third feature vector (216) includes:

inputting the sampled data (13) into a sampled neural network (50) to produce a sampled feature vector (16); and

wherein the inputting of at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

inputting at least two of the first feature vector (214), the second feature vector (215) and the sampled feature vector (16) into a convolutional neural network to produce the prediction (217).

20. The method of claim 16, wherein the inputting of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216) into the fourth neural network (260) to produce the prediction (217) includes:

applying a sigmoid function to a convolving and a max pooling of the at least two of the first feature vector (214), the second feature vector (215) and the third feature vector (216).

Description:
CROSS-MODAL NEURAL NETWORKS FOR PREDICTION

TECHNICAL FIELD

[0001] Various embodiments described in the present disclosure relate to systems, controllers and methods for future event predictions computed by neural networks employing two or more lower layer neural networks for analyzing different data types, particularly attention-based lower layer neural networks.

BACKGROUND

[0002] Electronic records provide a wide range of heterogeneous information about a subject, and historically traditional machine learning methods (e.g., logistic regression, decision tree, support vector machine, gradient boosting machine) have been applied to electronic records to predict an occurrence or a nonoccurrence of a future event. Recently, deep learning models of a specific form of network architecture (e.g., convolutional neural networks, recurrent neural networks) have been shown to outperform traditional machine learning models in predicting an occurrence or a nonoccurrence of a future event. However, predictive outputs of such deep learning models have been difficult to interpret, because a classifier at a final layer of a neural network processes a compact latent predictive feature representation extracted by lower layers of the neural network that do not have an optimal architecture to process the heterogeneous information available in electronic records.

[0003] More particularly, as to hospital/clinical readmissions, electronic medical records provide a wide range of heterogeneous information in a variety of forms including, but not limited to, patient background information (e.g., demographics, social history and previous hospital/clinical readmissions), patient admission information (e.g., diagnosis, procedures, medication codes, free text from clinical notes) and patient physiological information (e.g., vital sign measurements and laboratory test results). An application of deep learning models with a specific form of neural network architecture to such electronic medical records may not generate an optimal analysis of the heterogeneous information for predicting an occurrence or a nonoccurrence of a patient hospital/clinical readmission because, again, a classifier at a final layer of the neural network architecture processes a compact latent predictive feature representation extracted by lower layers of the neural network architecture that do not have an optimal architecture to process the heterogeneous information.

[0004] One such known deep learning model involves (1) at a bottom layer, an extraction and sequencing of words from an electronic medical record (EMR) whereby each word is a discrete object or event (e.g., a diagnosis or a procedure) or a derived object (e.g., a time interval or a hospital transfer), (2) at a next layer, an embedding of the words into a Euclidean space, (3) on top of the embedding layer, a convolutional neural network for generating an EMR-level feature vector based on an identification, transformation and max-pooling of predictive motifs, and (4) at a final layer, an application of a classifier to the EMR-level feature vector to predict an occurrence or a nonoccurrence of a patient hospital/clinical readmission. This approach fails to generate an optimal analysis of the EMR for predicting an occurrence or a nonoccurrence of the patient hospital/clinical readmission, because the model does not have an optimal neural network architecture at the lower layers to process the differing data types of information available in EMRs.

[0005] The inventions of the present disclosure address a need for neural network systems, controllers and methods for processing differing data types of information available in an electronic record (e.g., an electronic medical record) to thereby generate an optimal analysis of the electronic record for predicting an occurrence or a nonoccurrence of a future event (e.g., a patient hospital/clinical readmission).

SUMMARY

[0006] Embodiments described in the present disclosure provide for a partitioning of electronic data. For example, an electronic medical record may be partitioned into three (3) categories. A first category is patient background information which is not associated with any specific hospital visit (e.g., patient demographics, social history and prior hospitalizations). A second category is patient admission information associated with patient encounters over multiple hospital/clinical visits which illustrates the past history of medical conditions of the patient (e.g., structured data such as diagnosis, procedure and medication codes, or unstructured data such as free text from clinical notes). A third category is patient physiological information from the patient's most recent hospital visit (e.g., a time series of vital sign measurements and laboratory test results).
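By way of a non-limiting illustration, such a three-way partitioning might be sketched in Python as follows; every field name below is an invented placeholder rather than a disclosed schema:

```python
# Hypothetical partitioning of a flat EMR dictionary into the three
# categories described above; all field names are illustrative only.
BACKGROUND_FIELDS = {"age", "gender", "social_history", "prior_admissions"}
ADMISSION_FIELDS = {"diagnosis_codes", "procedure_codes", "medication_codes", "clinical_notes"}
PHYSIOLOGY_FIELDS = {"heart_rate_series", "blood_pressure_series", "lab_result_series"}

def partition_emr(record: dict) -> tuple[dict, dict, dict]:
    """Split one EMR into background, admission and physiological parts."""
    background = {k: v for k, v in record.items() if k in BACKGROUND_FIELDS}
    admission = {k: v for k, v in record.items() if k in ADMISSION_FIELDS}
    physiology = {k: v for k, v in record.items() if k in PHYSIOLOGY_FIELDS}
    return background, admission, physiology
```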

[0007] The inventions of the present disclosure are premised on (1) a pre-processing of electronic data of different data types (e.g., the partitioned data categories), (2) an inputting of the pre-processed data into neural networks of different neural architectures selected for optimally extracting feature representations from the different data types and (3) a combining of feature vectors from the neural networks to produce a prediction, whereby the prediction is based on an extracted compact predictive feature representation derived from the different data types. As such, embodiments described in the present disclosure further provide novel and unique cross-modal neural network systems, controllers and methods for processing the partitioned electronic data. The cross-modal neural network systems, controllers and methods are based on a plurality of lower layer neural networks of different architectures to independently learn the feature representation of each category of the partitioned electronic data. The feature representations of the data from each category are then combined at an upper layer in order to generate a compact predictive feature representation of the partitioned electronic data as a whole. Additionally, an attention module may optionally be utilized at each lower layer neural network in order to promote model interpretability.

[0008] One embodiment of the inventions of the present disclosure is a controller for processing multimodal data including a plurality of different data types. The controller comprises a processor and a non-transitory memory configured to, at a lower neural network layer, at least two of (1) input a first data type into a first neural network to produce a first feature vector, (2) input a second data type into a second neural network to output a second feature vector, and (3) input a third data type into a third neural network to output a third feature vector, and, at an upper neural network layer, (4) input at least two of the first feature vector, the second feature vector and the third feature vector into a fourth neural network to produce a prediction. The neural networks have different neural architectures (e.g., the neural networks include different types of neural networks or the neural networks include different versions of the same type of neural network).

[0009] A second embodiment of the inventions of the present disclosure is a controller for processing multimodal electronic data including an encoded data, an embedded data and a sampled data. The controller comprises a processor and a non-transitory memory configured to, at a lower neural network layer, at least two of (1) input the encoded data into an encoded neural network to produce an encoded feature vector, (2) input the embedded data into an embedded neural network to output an embedded feature vector, and (3) input the sampled data into a sampled neural network to output a sampled feature vector, and, at an upper neural network layer, (4) input at least two of the encoded feature vector, the embedded feature vector and the sampled feature vector into a convolutional neural network to produce a prediction.

[0010] A third embodiment of the inventions of the present disclosure is a non-transitory machine-readable storage medium encoded with instructions for execution by a processor for processing multimodal electronic data including a plurality of data types. The non-transitory machine-readable storage medium comprises instructions to, at a lower neural network layer, at least two of (1) input a first data type into a first neural network to output a first feature vector, (2) input a second data type into a second neural network to output a second feature vector and (3) input a third data type into a third neural network to output a third feature vector, and, at an upper neural network layer, (4) input at least two of the first feature vector, the second feature vector and the third feature vector into a fourth neural network to produce a prediction. The first neural network, the second neural network and the third neural network have different neural architectures (e.g., the neural networks include different types of neural networks or the neural networks include different versions of the same type of neural network).

[0011] A fourth embodiment of the inventions of the present disclosure is a non-transitory machine-readable storage medium encoded with instructions for execution by a processor for processing multimodal electronic data including an encoded data, an embedded data and a sampled data. The non-transitory machine-readable storage medium comprises instructions to, at a lower neural network layer, at least two of (1) input the encoded data into an encoded neural network to output an encoded feature vector, (2) input the embedded data into an embedded neural network to output an embedded feature vector and (3) input the sampled data into a sampled neural network to output a sampled feature vector, and, at an upper neural network layer, (4) input at least two of the encoded feature vector, the embedded feature vector and the sampled feature vector into a convolutional neural network to produce a prediction.

[0012] A fifth embodiment of the inventions of the present disclosure is a method for processing multimodal electronic data including a plurality of different data types. The method comprises, at a lower neural network layer, at least two of (1) inputting a first data type into a first neural network to output a first feature vector, (2) inputting a second data type into a second neural network to output a second feature vector and (3) inputting a third data type into a third neural network to output a third feature vector, and, at an upper neural network layer, (4) inputting at least two of the first feature vector, the second feature vector and the third feature vector into a fourth neural network to produce a prediction, wherein the first neural network, the second neural network and the third neural network have different neural architectures (e.g., the neural networks include different types of neural networks or the neural networks include different versions of the same type of neural network).

[0013] A sixth embodiment of the inventions of the present disclosure is a method for processing multimodal electronic data including an encoded data, an embedded data and a sampled data. The method comprises, at a lower neural network layer, at least two of (1) inputting the encoded data into an encoded neural network to output an encoded feature vector, (2) inputting the embedded data into an embedded neural network to output an embedded feature vector and (3) inputting the sampled data into a sampled neural network to output a sampled feature vector, and, at an upper neural network layer, (4) inputting at least two of the encoded feature vector, the embedded feature vector and the sampled feature vector into a convolutional neural network to produce a prediction.

[0014] For purposes of describing and claiming the inventions of the present disclosure:

[0015] (1) the terms of the art of the present disclosure including, but not limited to, "electronic data", "electronic record", "pre-processing", "neural network", "deep learning network", "convolutional network", "attention module", "encoding", "embedding", "sampling", "convolution", "max pooling", "feature vector", "predictive feature representation" and "prediction", are to be broadly interpreted as known in the art of the present disclosure and exemplary described in the present disclosure;

[0016] (2) the term "encoded data" broadly encompasses electronic data encoded in accordance with neural network technology as understood in the art of the present disclosure and hereinafter conceived. Examples of encoded data in the context of electronic medical records include, but are not limited to, a one-hot encoding, a binary encoding and an autoencoding of categorical and numerical data informative of patient background information (e.g., demographics, social history and previous hospital/clinical readmissions);
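For instance, a one-hot encoding of a single categorical background variable might be sketched as follows; the category vocabulary is invented for illustration:

```python
import numpy as np

def one_hot(value: str, vocabulary: list[str]) -> np.ndarray:
    """One-hot encode a categorical value against a fixed vocabulary."""
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    vec[vocabulary.index(value)] = 1.0
    return vec

# Illustrative category set; real categories would come from the EMR schema.
smoking_status = ["never", "former", "current"]
print(one_hot("former", smoking_status))  # [0. 1. 0.]
```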

[0017] (3) the term "encoded neural network" broadly encompasses any neural network, as understood in the art of the present disclosure and hereinafter conceived, having an architecture exclusively designated by an embodiment of the present disclosure for learning predictive feature representations of encoded data;

[0018] (4) the term "encoded feature vector" broadly encompasses a neural network vector representative of predictive features of encoded data as understood in the art of the present disclosure and hereinafter conceived;

[0019] (5) the term "embedded data" broadly encompasses electronic data embedded in accordance with neural network technology as understood in the art of the present disclosure and hereinafter conceived. Examples of embedded data in the context of electronic medical records include, but are not limited to, a word embedding of discrete codes and words informative of patient admission information (e.g., diagnosis, procedures, medication codes and free text from clinical notes);
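Such a word embedding of discrete medical codes can be sketched as a learned lookup table; the vocabulary size and embedding dimension below are arbitrary assumptions, not disclosed values:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of 5000 medical codes embedded into 64 dimensions.
embedding = nn.Embedding(num_embeddings=5000, embedding_dim=64)

codes = torch.tensor([[17, 402, 4999, 3]])  # one admission with four codes
vectors = embedding(codes)                  # shape: (1, 4, 64)
print(vectors.shape)
```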

[0020] (6) the term "embedded neural network" broadly encompasses any neural network, as understood in the art of the present disclosure and hereinafter conceived, having an architecture exclusively designated by an embodiment of the present disclosure for learning feature representations of embedded data;

[0021] (7) the term "embedded feature vector" broadly encompasses a neural network vector representative of predictive features of embedded data as understood in the art of the present disclosure and hereinafter conceived;

[0022] (8) the term "sampled data" broadly encompasses a sampling of time series data, continuous or discontinuous, as understood in the art of the present disclosure and hereinafter conceived. Examples of sampled data in the context of electronic medical records include, but are not limited to, a sampling of time series data informative of patient physiological information (e.g., vital sign measurements and laboratory test results);
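One plausible reading of such sampling is a resampling of irregularly measured vital signs onto a fixed time grid; the disclosure does not commit to a method, so the linear interpolation below is only an assumption:

```python
import numpy as np

def sample_time_series(times, values, grid):
    """Resample an irregularly measured signal onto a regular grid by
    linear interpolation (an assumed, not disclosed, sampling scheme)."""
    return np.interp(grid, times, values)

# Heart-rate measurements at irregular minutes, resampled every 15 minutes.
t = np.array([0.0, 7.0, 40.0, 65.0])
v = np.array([82.0, 88.0, 79.0, 91.0])
print(sample_time_series(t, v, np.arange(0.0, 61.0, 15.0)))
```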

[0023] (9) the term "sampled neural network" broadly encompasses any neural network, as understood in the art of the present disclosure and hereinafter conceived, having an architecture exclusively designated by an embodiment of the present disclosure for learning feature representations of sampled data;

[0024] (10) the term "sampled feature vector" broadly encompasses a neural network vector representative of predictive features of sampled data as understood in the art of the present disclosure and hereinafter conceived;

[0025] (11) the phrase "different neural architectures" broadly encompasses each neural network differing from the other neural networks by at least one structural aspect. Examples of different neural architectures include, but are not limited to, the neural networks being different types of neural networks (e.g., a deep learning network and a convolutional neural network) or the neural networks having different structural versions of the same type of neural network (e.g., a one-stage convolutional neural network and a two-stage convolutional neural network). The phrase "different neural architectures" excludes neural networks of the same type and same version configured with different parameters;

[0026] (12) the term “controller” broadly encompasses all structural configurations, as understood in the art of the present disclosure and as exemplary described in the present disclosure, of an application specific main board or an application specific integrated circuit for controlling an application of various inventive principles of the present disclosure as subsequently described in the present disclosure. The structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer-readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s) and port(s);

[0027] (13) the term “module” broadly encompasses electronic circuitry/hardware and/or an executable program (e.g., executable software stored on non-transitory computer readable medium(s) and/or firmware) incorporated within or accessible by a controller for executing a specific application; and

[0028] (14) the descriptive labels for the term “module” herein facilitate a distinction between modules as described and claimed herein without specifying or implying any additional limitation to the term “module”; and

[0029] (15) “data” may be embodied in all forms of a detectable physical quantity or impulse (e.g., voltage, current, magnetic field strength, impedance, color) as understood in the art of the present disclosure and as exemplary described in the present disclosure for transmitting information and/or instructions in support of applying various inventive principles of the present disclosure as subsequently described in the present disclosure. Data communication encompassed by the inventions of the present disclosure may involve any communication method as known in the art of the present disclosure including, but not limited to, data transmission/reception over any type of wired or wireless datalink and a reading of data uploaded to a computer-usable/computer readable storage medium.

[0030] The foregoing embodiments and other embodiments of the inventions of the present disclosure as well as various features and advantages of the present disclosure will become further apparent from the following detailed description of various embodiments of the inventions of the present disclosure read in conjunction with the accompanying drawings. The detailed description and drawings are merely illustrative of the inventions of the present disclosure rather than limiting, the scope of the inventions of the present disclosure being defined by the appended claims and equivalents thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

[0031] In order to better understand various example embodiments, reference is made to the accompanying drawings, wherein:

[0032] FIG. 1 illustrates a first exemplary embodiment of a cross-modal neural network in accordance with the present disclosure for future event predictions;

[0033] FIG. 2 illustrates an exemplary embodiment of a cross-modal neural network in accordance with the present disclosure for patient hospital/clinical readmission predictions;

[0034] FIG. 3 illustrates an exemplary embodiment of a data preprocessor in accordance with the present disclosure;

[0035] FIG. 4A illustrates an exemplary embodiment of a deep neural network in accordance with the present disclosure;

[0036] FIG. 4B illustrates an exemplary embodiment of an attention-based deep neural network in accordance with the present disclosure;

[0037] FIG. 5A illustrates an exemplary embodiment of a one-stage convolutional neural network in accordance with the present disclosure;

[0038] FIG. 5B illustrates an exemplary embodiment of an attention-based one- stage convolutional neural network in accordance with the present disclosure;

[0039] FIG. 6A illustrates an exemplary embodiment of a two-stage convolutional neural network in accordance with the present disclosure;

[0040] FIG. 6B illustrates an exemplary embodiment of an attention-based two- stage convolutional neural network in accordance with the present disclosure;

[0041] FIG. 7 illustrates an exemplary embodiment of a sigmoid-based convolutional neural network in accordance with the present disclosure;

[0042] FIG. 8 illustrates a cross-modal neural network system in accordance with the present disclosure;

[0043] FIG. 9 illustrates an exemplary embodiment of a cross-modal neural network controller in accordance with the present disclosure; and

[0044] FIG. 10 illustrates a second exemplary embodiment of a cross-modal neural network in accordance with the present disclosure for future event predictions.

DETAILED DESCRIPTION

[0045] The description and drawings presented herein illustrate various principles. It will be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody these principles and are included within the scope of this disclosure. As used herein, the term “or” refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., “or else” or “or in the alternative”). Additionally, the various embodiments described in the present disclosure are not necessarily mutually exclusive and may be combined to produce additional embodiments that incorporate the principles described in the present disclosure.

[0046] The inventions of the present disclosure are premised on a pre-processing of different data types. For example, in the context of an electronic medical record, a first data type is patient background information which is not associated with any specific hospital visit (e.g., patient demographics, social history and prior hospitalizations), a second data type is patient admission information associated with patient encounters over multiple hospital/clinical visits which illustrates the past history of medical conditions of the patient (e.g., structured data such as diagnosis, procedure and medication codes, or unstructured data such as free text from clinical notes), and a third data type is patient physiological information from the patient's most recent hospital visit (e.g., a time series of vital sign measurements and laboratory test results).

[0047] The inventions of the present disclosure are further premised on an inputting of the pre-processed data into neural networks of different neural architectures selected for optimally extracting predictive feature representations from the different data types. For example, a first data type is pre-processed and inputted into a first neural network for extracting predictive feature representations from the first data type, a second data type is pre-processed and inputted into a second neural network for extracting predictive feature representations from the second data type, and a third data type is pre-processed and inputted into a third neural network for extracting predictive feature representations from the third data type, where the three (3) neural networks have different neural architectures (e.g., the neural networks include different types of neural networks or the neural networks include different versions of the same type of neural network). More particularly in the context of an electronic medical record, patient background information is encoded and inputted into an encoded neural network (e.g., a deep learning network or an attention-based deep learning network) for extracting predictive feature representations from the encoded data, patient admission information is embedded and inputted into an embedded neural network (e.g., a one-stage convolutional neural network or an attention-based one-stage convolutional neural network) for extracting predictive feature representations from the embedded data, and patient physiological information is sampled and inputted into a sampled neural network (e.g., a two-stage convolutional neural network or an attention-based two-stage convolutional neural network) for extracting predictive feature representations from the sampled data.

[0048] The inventions of the present disclosure are further premised on a combining of feature vectors from the neural networks having different neural architectures to produce a prediction, whereby the prediction is based on an extracted compact predictive feature representation derived from the different data types. For example, a fourth neural network inputs a first feature vector representing predictive feature representations of a first data type, a second feature vector representing predictive feature representations of a second data type and a third feature vector representing predictive feature representations of a third data type to produce a prediction, whereby the prediction is based on an extracted compact predictive feature representation derived from the different data types. More particularly in the context of an electronic medical record, a convolutional neural network (e.g., a sigmoid-based convolutional neural network) inputs an encoded feature vector representing predictive feature representations of encoded patient background information, an embedded feature vector representing predictive feature representations of embedded patient admission information and a sampled feature vector representing predictive feature representations of sampled patient physiological information to produce a patient hospital/clinical readmission prediction, whereby the prediction is based on an extracted compact predictive feature representation derived from the patient background information, the patient admission information and the patient physiological information.
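By way of a non-limiting illustration, the overall topology described above (three architecture-specific lower networks feeding one sigmoid-based fusion network) might be sketched in PyTorch as follows; every layer size, filter width and vocabulary size is an invented placeholder, and the sketch mirrors only the described arrangement, not any disclosed hyperparameters:

```python
import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    """Sketch of the described topology: three lower networks of different
    architectures feed one sigmoid-based fusion network. Every size here
    is an invented placeholder, not a disclosed hyperparameter."""
    def __init__(self, enc_dim=32, vocab=5000, emb_dim=64, n_series=4, n_conv=16):
        super().__init__()
        # Encoded branch: a plain deep network (multilayer perceptron).
        self.dnn = nn.Sequential(nn.Linear(enc_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_conv))
        # Embedded branch: a one-stage CNN over code embeddings.
        self.embed = nn.Embedding(vocab, emb_dim)
        self.cnn1 = nn.Sequential(nn.Conv1d(emb_dim, n_conv, kernel_size=3),
                                  nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        # Sampled branch: a two-stage CNN over multichannel time series.
        self.cnn2 = nn.Sequential(nn.Conv1d(n_series, n_conv, 5), nn.ReLU(),
                                  nn.MaxPool1d(2),
                                  nn.Conv1d(n_conv, n_conv, 5), nn.ReLU(),
                                  nn.AdaptiveMaxPool1d(1))
        # Upper fusion network: convolve across the K = 3 feature vectors,
        # then fully connect and squash with a sigmoid (cf. claim 20).
        self.fuse = nn.Sequential(nn.Conv1d(3, 8, kernel_size=1), nn.ReLU(),
                                  nn.Flatten(), nn.Linear(8 * n_conv, 1),
                                  nn.Sigmoid())

    def forward(self, encoded, codes, series):
        f1 = self.dnn(encoded)                                # (B, n_conv)
        f2 = self.cnn1(self.embed(codes).transpose(1, 2)).squeeze(-1)
        f3 = self.cnn2(series).squeeze(-1)                    # (B, n_conv)
        stacked = torch.stack([f1, f2, f3], dim=1)            # (B, 3, n_conv)
        return self.fuse(stacked)                             # probability

net = CrossModalNet()
p = net(torch.randn(2, 32),                 # encoded background data
        torch.randint(0, 5000, (2, 12)),    # embedded admission codes
        torch.randn(2, 4, 50))              # sampled physiological series
print(p.shape)  # torch.Size([2, 1])
```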

[0049] To facilitate an understanding of the inventions of the present disclosure, the following description of FIG. 1 teaches a cross-modal neural network of the present disclosure for future event predictions and FIG. 2 teaches a cross-modal neural network of the present disclosure for patient hospital/clinical readmission predictions. From the description of FIGS. 1 and 2, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of cross-modal neural networks of the present disclosure.

[0050] Referring to FIG. 1, a cross-modal neural network system of the present disclosure for future event predictions employs a data preprocessor 20, an encoded neural network 30, an embedded neural network 40, a sampled neural network 50 and a convolutional neural network 60.

[0051] In operation, data preprocessor 20 is a module having an architecture for extracting different data types from electronic record(s) 10 to produce encoded data 11 from a first data type, embedded data 12 from a second data type and sampled data 13 from a third data type.

[0052] Encoded neural network 30 is a module having a neural architecture trained for analyzing encoded data 11 to learn predictive features as related to an occurrence or a nonoccurrence of a future event and inputs encoded data 11 to produce an encoded feature vector 14 representative of the predictive features of encoded data 11.

[0053] Embedded neural network 40 is a module having a neural architecture trained for analyzing embedded data 12 to learn predictive features as related to the occurrence or the nonoccurrence of the future event and inputs embedded data 12 to produce an embedded feature vector 15 representative of the predictive features of embedded data 12.

[0054] Sampled neural network 50 is a module having a neural architecture trained for analyzing sampled data 13 to learn predictive features as related to the occurrence or the nonoccurrence of the future event and inputs sampled data 13 to produce a sampled feature vector 16 representative of the predictive features of sampled data 13.

[0055] Convolutional neural network 60 is a module having a neural architecture trained for combining encoded feature vector 14, embedded feature vector 15 and sampled feature vector 16 to produce a prediction 17 of the occurrence or the nonoccurrence of the future event.

[0056] In practice, encoded data 11 is a first data type of electronic record(s) 10 encoded by the data preprocessor 20 as known in the art of the present disclosure (e.g., a one-hot encoding, a binary encoding or an autoencoding), embedded data 12 is a second data type of electronic record(s) 10 embedded by the data preprocessor 20 as known in the art of the present disclosure (e.g., a word embedding), and sampled data 13 is a third data type of electronic record(s) 10 sampled by the data preprocessor 20.

[0057] Data preprocessor 20 may include a user interface for a manual loading of electronic record(s) 10 by data type or may be trained to identify the different data types of electronic record(s) 10 as known in the art of the present disclosure.

[0058] Further in practice, in view of the neural processing of different types of data, embodiments of the neural architectures of encoded neural network 30, embedded neural network 40 and sampled neural network 50 will differ by one, several or all stages of neural processing (e.g., encoded neural network 30, embedded neural network 40 and sampled neural network 50 will be different types of neural networks, or encoded neural network 30, embedded neural network 40 and sampled neural network 50 will be different versions of the same type of neural network).

[0059] Exemplary neural architectures of encoded neural network 30 include, but are not limited to, a deep learning network (e.g., multilayer perceptrons).

[0060] Exemplary neural architectures of embedded neural network 40 include, but are not limited to, a one-stage convolutional network (e.g., an inception architecture).

[0061] Exemplary neural architectures of sampled neural network 50 include, but are not limited to, a two-stage convolutional network (e.g., a recurrent neural network).

[0062] Also in practice, the neural architectures of encoded neural network 30, embedded neural network 40 and/or sampled neural network 50 may include an attention module as known in the art of the present disclosure.

[0063] Additionally in practice, the neural architecture of cross-modal convolutional neural network 60 may produce prediction 17 as a binary output delineating either a predictive occurrence or a predictive nonoccurrence of the future event, or a percentage output delineating a predictive probability of an occurrence of the future event.

[0064] Exemplary neural architectures of convolutional neural network 60 include, but are not limited to, a sigmoid-based convolutional neural network (e.g., multilayer perceptrons).

[0065] Even further in practice, electronic record(s) 10 may only include two (2) of the three (3) data types, and therefore only the corresponding neural networks 30, 40 and 50 will be utilized, or electronic record(s) 10 may include an additional different data type whereby an additional neural network having a neural architecture different from the architectures of neural networks 30, 40 and 50 will be utilized to produce a feature vector representative of predictive features of the additional different data type.

[0066] Referring to FIG. 2, a cross-modal neural network of the present disclosure for patient hospital/clinical readmission predictions employs data preprocessor 20 (FIG. 1) embodied as a data preprocessor 120, encoded neural network 30 (FIG. 1) embodied as a deep neural network 130, embedded neural network 40 (FIG. 1) embodied as a one-stage convolutional neural network 140, sampled neural network 50 (FIG. 1) embodied as a two-stage convolutional neural network 150 and convolutional neural network 60 (FIG. 1) embodied as a sigmoid-based convolutional neural network 160.

[0067] In operation, data preprocessor 120 is a module for extracting encoded data 111, embedded data 112 and sampled data 113 from one or more electronic medical records 110.

[0068] In one embodiment as shown in FIG. 3, electronic medical record(s) 110a include categorical and numerical data 118a (e.g., demographics, social history and previous hospital/clinical admissions), discrete codes and words 118b (e.g., diagnosis, procedure and medication) and time series data 118c (e.g., vital signs and lab results).

[0069] Data preprocessor 120 extracts and encodes categorical and numerical data 118a informative of patient background information into encoded data 111a, extracts and embeds discrete codes and words 118b informative of patient admission information into embedded data 112a, and extracts and samples time series data 118c informative of patient physiological information into sampled data 113a.

[0070] Deep neural network 130 is a module having a neural architecture trained for analyzing encoded data 111 to learn predictive features as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs encoded data 111 to produce an encoded feature vector 114 representative of the predictive features of encoded data 111.

[0071] In one embodiment as shown in FIG. 4A, a deep neural network 130a has a neural architecture employing a module including a flatten stage S131 and a deep neural network stage S132 trained to learn predictive features as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs encoded data 111a (FIG. 3) to produce an encoded feature vector 114a representative of the predictive features of encoded data 111a.

[0072] In an attention-based embodiment as shown in FIG. 4B, a deep neural network 130b has a neural architecture employing a DNN module including flatten stage S131 (FIG. 4A) and deep neural network stage S132 (FIG. 4A) trained to learn predictive features as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs encoded data 111a.

[0073] Still referring to FIG. 4B, the neural architecture of deep neural network 130b further employs an attention module including a convolution (attention) stage S134, a weighted embedded stage S135 and a summation/convolution stage S136 to produce an attention output for visualizing features of encoded data 111a that are considered to be important by the prediction model.

[0074] In practice, the architecture of the attention module is based on $u_i \in \mathbb{R}^{d \times 1}$ as the $i$-th input to deep neural network 130b, where $d$ is the number of encoding bits for the background data. Convolution stage S134 is performed on the sequence of inputs to generate an attention score $a_i$ in accordance with the following equations (1) and (2):

[0075] $e_i = g(W_{att} * u_i + b_{att})$ (1) and $a_i = \exp(e_i) / \sum_j \exp(e_j)$ (2)

[0076] where $W_{att} \in \mathbb{R}^{w \times d}$ is the weight matrix, $*$ is the convolution operation, $b_{att}$ is a bias term, $w$ is the filter length and $g$ is the sigmoid activation function. Attention scores for the input variables are used as weights to compute the context vector $c = \sum_i a_i u_i$ during weighted embedded stage S135. The context vector is then processed by summation/convolution stage S136 to generate the attention representation at the output of the attention module.

[0077] Still referring to FIG. 4B, the outputs of the DNN module and the attention module are then concatenated and convoluted at a stage S137 using $N_{conv}$ filters to produce an encoded feature vector 114b representative of the predictive features of encoded data 111a.
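Read this way, the attention computation of equations (1) and (2) can be sketched in a few lines of NumPy; since the original equation images are not reproduced in this text, the sigmoid scoring and the softmax normalization below are inferred from the surrounding definitions rather than quoted, and the same pattern would apply to stages S143-S145 and S154-S156:

```python
import numpy as np

def attention_context(U, W_att, b_att):
    """Sketch of the attention module per equations (1)-(2): score each
    input u_i with a shared filter and sigmoid g, normalize the scores,
    and return the context vector c = sum_i a_i u_i. The softmax in (2)
    is an inference, not a quotation; filter length w is fixed to 1 here
    for brevity, so the convolution reduces to a dot product per input."""
    e = 1.0 / (1.0 + np.exp(-(U @ W_att.ravel() + b_att)))  # eq (1), g = sigmoid
    a = np.exp(e) / np.exp(e).sum()                         # eq (2), normalize
    return (a[:, None] * U).sum(axis=0)                     # context vector c

U = np.random.rand(6, 8)                 # six inputs u_i with d = 8
c = attention_context(U, np.random.rand(1, 8), 0.1)
print(c.shape)                           # (8,)
```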

[0078] Referring back to FIG. 2, one-stage convolutional neural network 140 is a module having a neural architecture trained for analyzing embedded data 112 to learn predictive features as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs embedded data 112 to produce an embedded feature vector 115 representative of the predictive features of embedded data 112.

[0079] In one embodiment as shown in FIG. 5A, a one-stage convolutional neural network 140a has a neural architecture employing a module including a multiple convolutional neural network stage S141 applying convolution and max pooling with different filter widths for multi-level feature extraction and a fully connected stage S142 trained to learn predictive features of embedded data 112 (FIG. 3) as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs embedded data 112a to produce an embedded feature vector 115a representative of the predictive features of embedded data 112a.
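This multi-width convolution-and-max-pooling stage resembles a standard text CNN; a non-limiting sketch follows, in which all filter widths, filter counts and dimensions are illustrative assumptions rather than disclosed parameters:

```python
import torch
import torch.nn as nn

class OneStageCNN(nn.Module):
    """Sketch of stages S141/S142: parallel convolutions with different
    filter widths over embedded codes, max-pooled and concatenated, then
    a fully connected layer. All widths and sizes are illustrative."""
    def __init__(self, emb_dim=64, widths=(2, 3, 4), n_filters=16, out_dim=32):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), out_dim)

    def forward(self, x):                       # x: (B, emb_dim, seq_len)
        pooled = [conv(x).relu().max(dim=-1).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

print(OneStageCNN()(torch.randn(2, 64, 20)).shape)  # torch.Size([2, 32])
```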

[0080] In an attention-based embodiment as shown in FIG. 5B, a one-stage convolutional neural network 140b employs a convolutional module including multiple convolutional neural network stage S141 (FIG. 5A) applying convolution and max pooling with different filter widths for multi-level feature extraction trained to learn predictive features of embedded data 112 (FIG. 3) as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission.

[0081] Still referring to FIG. 5B, the neural architecture of one-stage convolutional neural network 140b further employs an attention module including a convolution (attention) stage S143, a weighted embedded stage S144 and a summation/convolution stage S145 to produce an attention output for visualizing features of embedded data 112a that are considered to be important by the prediction model.

[0082] In practice, the architecture of the attention module is based on $u_i \in \mathbb{R}^{d \times 1}$ as the $i$-th input to one-stage convolutional neural network 140b, where $d$ is the word embedding dimension of the discrete medical codes. Convolution stage S143 is performed on the sequence of inputs to generate an attention score $a_i$ in accordance with the following equations (1) and (2):

[0083] $e_i = g(W_{att} * u_i + b_{att})$ (1) and $a_i = \exp(e_i) / \sum_j \exp(e_j)$ (2)

[0084] where $W_{att} \in \mathbb{R}^{w \times d}$ is the weight matrix, $*$ is the convolution operation, $b_{att}$ is a bias term, $w$ is the filter length and $g$ is the sigmoid activation function. Attention scores for the input variables are used as weights to compute the context vector $c = \sum_i a_i u_i$ during stage S144. The context vector is then processed by a second convolution stage S145 to generate the attention representation at the output of the attention module.

[0085] Still referring to FIG. 5B, the outputs of the convolutional module and the attention module are then concatenated and convoluted at a stage S146 using $N_{conv}$ filters to produce an embedded feature vector 115b representative of the predictive features of embedded data 112a.

[0086] Referring back to FIG. 2, two-stage convolutional neural network 150 is a module having a neural architecture trained for analyzing sampled data 113 to learn predictive features as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission and inputs sampled data 113 to produce a sampled feature vector 116 representative of the predictive features of sampled data 113.

[0087] In one embodiment as shown in FIG. 6A, a two-stage convolutional neural network 150a has a neural architecture employing a module including two stacked convolutional neural network stages S151 and S152 applying convolution and max pooling for multi-level feature extraction and a fully connected stage S153 trained to learn predictive features of sampled data 113 (FIG. 3) as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission, and inputs sampled data 113a to produce a sampled feature vector 116a representative of the predictive features of sampled data 113a.

[0088] More particularly, each time series is considered as a channel input whereby stages S151 and S152 are denoted as C1(Size)-S1-C2(Size)-S2, where C1 and C2 are the numbers of convolutional filters in stages S151 and S152, Size is the kernel size and S1 and S2 are subsampling factors. Subsampling is implemented by a max pooling operation and subsampling factors are chosen such that a maximum value is obtained for each filter after stage S152.
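By way of a non-limiting illustration of the C1(Size)-S1-C2(Size)-S2 notation, the two-stage arrangement might be sketched in PyTorch as below; every filter count, kernel size and subsampling factor is invented, and the second subsampling is realized with a global max pool so that, as stated above, one maximum survives per filter:

```python
import torch
import torch.nn as nn

class TwoStageCNN(nn.Module):
    """Sketch of stages S151-S153 in C1(Size)-S1-C2(Size)-S2 notation:
    each time series is one input channel, and the second subsampling
    keeps a single maximum per filter. All numbers are illustrative."""
    def __init__(self, n_series=4, c1=8, c2=4, size=5, s1=2, out_dim=32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv1d(n_series, c1, size), nn.ReLU(),
                                    nn.MaxPool1d(s1))            # C1(Size)-S1
        self.stage2 = nn.Sequential(nn.Conv1d(c1, c2, size), nn.ReLU(),
                                    nn.AdaptiveMaxPool1d(1))     # C2(Size)-S2
        self.fc = nn.Linear(c2, out_dim)                         # stage S153

    def forward(self, x):                       # x: (B, n_series, T)
        return self.fc(self.stage2(self.stage1(x)).squeeze(-1))

print(TwoStageCNN()(torch.randn(2, 4, 60)).shape)  # torch.Size([2, 32])
```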

[0089] In an attention-based embodiment as shown in FIG. 6B, a two-stage convolutional neural network 150b employs a convolutional module including the two stacked convolutional neural network stages S151 and S152 (FIG. 6A) applying convolution and max pooling for multi-level feature extraction trained to learn predictive features of sampled data 113 (FIG. 3) as related to an occurrence or a nonoccurrence of a patient hospital/clinical readmission.

[0090] Again, each time series is considered as a channel input whereby stages S151 and S152 are denoted as C1(Size)-S1-C2(Size)-S2, where C1 and C2 are the numbers of convolutional filters in stages S151 and S152, Size is the kernel size, and S1 and S2 are subsampling factors. Subsampling is implemented by a max pooling operation, and the subsampling factors are chosen such that a maximum value is obtained for each filter after stage S152.

[0091] Still referring to FIG. 6B, the neural architecture of two-stage convolutional neural network 150b further employs an attention module including a convolution (attention) stage S154, a weighted embedded stage S155 and a summation/convolution stage S156 to produce an attention output for visualizing features of sampled data 113a that are considered to be important by the prediction model.

[0092] In practice, the architecture of the attention module is based on $u_i \in \mathbb{R}^{d \times 1}$ as the i-th input to two-stage convolutional neural network 150b, where d is a number of data points in a time series. Convolution stage S154 is performed on the sequence of inputs to generate an attention score $a_i$ in accordance with the following equations (1) and (2):

[0093]

$e_i = W_{att} * u_i + b_{att}$ (1)

$a_i = g(e_i)$ (2)

[0094] where $W_{att} \in \mathbb{R}^{w \times d}$ is the weight matrix, $*$ is the convolution operation, $b_{att}$ is a bias term, w is the filter length and g is the sigmoid activation function. Attention scores for the input variables are used as weights to compute the context vector $c = \sum_i a_i u_i$ during stage S155. The context vector is then processed by a second convolution stage S156 to generate the attention representation at the output of the attention module.

[0095] Still referring to FIG. 6B, the outputs of the convolutional module and the attention module are then concatenated and convoluted at a stage S157 using $N_{conv}$ filters to produce a sampled feature vector 116b representative of the predictive features of sampled data 113a.

[0096] Referring back to FIG. 2, convolutional neural network 160 is a module having a neural architecture trained for combining encoded feature vector 114, embedded feature vector 115 and sampled feature vector 116 to produce a prediction 117 of the occurrence or the nonoccurrence of a patient hospital/clinical readmission.

[0097] In one embodiment as shown in FIG. 7, cross-modal convolutional neural network 160a has a neural architecture employing a module including a convolutional stage S161, a fully connected stage S162 and a sigmoid function stage S163 trained to combine encoded feature vector 114a (FIG. 4A) or 114b (FIG. 4B), embedded feature vector 115a (FIG. 5A) or 115b (FIG. 5B) and sampled feature vector 116a (FIG. 6A) or 116b (FIG. 6B) to produce a prediction 117a of the occurrence or the nonoccurrence of a patient hospital/clinical readmission.

[0098] In practice, cross-modal convolutional neural network 160a is based on $x_k \in \mathbb{R}^{N_{conv} \times 1}$ being a feature vector for the k-th EMR category. The feature vectors 114, 115 and 116 from K = 3 modules are concatenated and convoluted at stage S161 with a matrix $W \in \mathbb{R}^{K \times N_{xconv}}$ and a bias $b \in \mathbb{R}^{N_{xconv} \times 1}$ in accordance with the following equations (3), (4) and (5):

[0099]

$X = (x_1, x_2, \ldots, x_K)^T$ (3)

$Y = f(W * X + b)$ (4)

$y^{(j)} = \mathrm{MAX}(Y(:,j))$ (5)

[00100] where $N_{xconv}$ is the number of filters and $f$ is a non-linear activation function. A max pooling operation in accordance with equation (5) is applied to each filter to extract a scalar $y^{(j)}$. The scalars from the $N_{xconv}$ filters are then concatenated to form a compact predictive feature vector which is then fed to a final fully connected network at stage S162 followed by a sigmoid function at stage S163 to produce prediction 117a.
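By way of a non-limiting numerical illustration of equations (3) through (5) as reconstructed above, the following sketch applies the cross-modal stage S161 to random stand-in feature vectors; treating the width-one convolution as a matrix product is an assumption, and the random data are placeholders:

import numpy as np

K, N_conv, N_xconv = 3, 20, 50  # three modalities, per paragraphs [0098] and [00106]
rng = np.random.default_rng(0)

x = [rng.standard_normal(N_conv) for _ in range(K)]  # stand-ins for vectors 114, 115, 116
X = np.stack(x)                                      # eq. (3): X = (x1, x2, ..., xK)^T

W = rng.standard_normal((K, N_xconv))                # W in R^{K x N_xconv}
b = rng.standard_normal((1, N_xconv))                # bias b
relu = lambda z: np.maximum(z, 0.0)                  # f, a non-linear activation
Y = relu(X.T @ W + b)                                # eq. (4): width-one convolution as a matrix product
y = Y.max(axis=0)                                    # eq. (5): y(j) = MAX(Y(:, j)), one scalar per filter
print(y.shape)                                       # (50,): the compact predictive feature vector
# Stages S162 and S163 would then apply a fully connected layer and a sigmoid
# to y to produce prediction 117a.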

[00101] To facilitate a further understanding of the inventions of the present disclosure, the following is a description of an exemplary implementation of an attention-based cross-modal neural network (AXCNN) of the present disclosure in practice employing deep neural network 130b (FIG. 4B), one-stage convolutional neural network 140b (FIG. 5B), two-stage convolutional neural network 150b (FIG. 6B) and sigmoid-based convolutional neural network 160a (FIG. 7).

[00102] Data Pre-Processing. The AXCNN was applied to 30-day unplanned readmission data for heart failure (HF) collected from a large hospital system in Arizona, United States. The dataset consisted of patient encounter information, collected between October 2015 and June 2017, for 6730 HF patients of age 18 or over (mean: 72.7, std: 14.4), 60% of whom were male. Among them, 853 patients had at least one readmission within 30 days after discharge, yielding an unplanned HF readmission rate of about 13%. For each patient, the last hospital visit among multiple visits in which the patient was diagnosed with heart failure was identified, and the AXCNN was used to predict whether the HF patient would be readmitted within the next 30 days. The following Table I shows the summary statistics of the dataset.

[00103] TABLE I SUMMARY STATISTICS OF THE EMR DATASET

[00104] From the EMRs, 19 background variables were selected from the patient demographics, social history and prior hospitalizations (e.g. race, tobacco use, number of prior inpatient admissions) as the input to deep neural network 130b (FIG. 4B). Unknown was assigned to missing data for nominal variables and 0 for ordinal and count variables. For the one-stage convolutional neural network 140b (FIG. 5B), level 3 ICD-10 CM and PCS codes were collected for diagnosis and procedure codes respectively, along with order catalog codes for medications, for each patient's encounters in the dataset up to and including the most recent hospital visit. The codes were transformed into a sentence with a time-gap word (e.g. 0-1m for a 0 to 1 month interval gap) inserted between two consecutive visits, and any codes appearing only once in the dataset were assigned to the code word rareword for robustness. Each sequence was then truncated to keep the last 100 words, which preserves 75% of the sequences in full. For the two-stage convolutional neural network 150b (FIG. 6B), five vital sign measurements (respiratory rate, systolic blood pressure, heart rate, blood oxygen saturation (SpO2) and temperature) and five laboratory test results (sodium, potassium, blood urea nitrogen (BUN), creatinine and the ratio of BUN to creatinine) were extracted from the last encounter before prediction for each patient. Each time series was normalized by z-score across all patients, and any patient laboratory test results not found in the EMRs were set to zeros. Furthermore, vital sign measurements and laboratory test results were resampled with backward filling for every hour and every 4 hours respectively, based on the hospital system's adult standard of care policy. Resampled time series with more than 100 points were truncated to keep the last 100 measurement points, while those with fewer than 100 points were zero-padded to maintain 100 points in length.
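By way of a non-limiting illustration of the time-series preparation just described, the following pandas/numpy sketch resamples a single series with backward filling, normalizes it and fixes its length at 100 points; the per-series z-score (the exemplary implementation normalizes across all patients), the hourly grid, the padding side and all names are simplifying assumptions:

import numpy as np
import pandas as pd

def preprocess_series(ts: pd.Series, freq: str = "1H", length: int = 100) -> np.ndarray:
    """Resample with backward filling, z-score, then truncate or zero-pad to `length`."""
    ts = ts.resample(freq).bfill()               # backward-filled hourly grid
    values = ts.to_numpy(dtype=float)
    std = values.std()
    if std > 0:
        values = (values - values.mean()) / std  # z-score (per series, for simplicity)
    if len(values) > length:
        return values[-length:]                  # keep the last `length` measurement points
    return np.pad(values, (length - len(values), 0))  # zero-pad shorter series at the front

# Hypothetical half-hourly heart-rate readings from one encounter
idx = pd.date_range("2017-01-01", periods=30, freq="30min")
heart_rate = pd.Series(np.random.default_rng(1).normal(80, 5, 30), index=idx)
channel = preprocess_series(heart_rate)          # shape (100,)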

[00105] After preprocessing, the dataset was randomly divided into training (70%), validation (10%) and test (20%) sets, with each set containing the same ratio of readmitted to non-readmitted patients. The validation set was used to fine tune the following hyper-parameters: the number of layers in DNN 130b, the number of neurons per layer in DNN 130b and the FC layers, the number of convolutional filters, and the dropout probability.

[00106] Parameter Setting. For the DNN in network 130b (FIG. 4B), 3 hidden layers with 64 neurons per layer were chosen. For network 140b (FIG. 5B), the discrete medical codes in the sentence were embedded using the word2vec skip-gram model with the embedding dimension set to 100. The three filter widths of the CNN were set to 3, 4 and 5, with 20 filters each. For the MC-DCNN in network 150b (FIG. 6B), C1(Size)-S1-C2(Size)-S2 was set equal to 10(5)-10-5(5)-10. For the attention modules, a filter length w = 5 was used for the first convolutional layer to generate the attention scores, and 10 filters with tanh were used for the second convolutional layer to generate the attention representation. $N_{conv}$ = 20 was selected for the convolutional layer of network 160a (FIG. 7) located at the output of networks 130b, 140b and 150b. For the final layer of network 160a, $N_{xconv}$ = 50 was chosen for the cross-modal convolutional layer with relu, along with 256 neurons for the FC layer. Dropouts with probability of 0.4 were utilized at the outputs of the hidden layers in the DNN and at both the inputs and outputs of the FC layers during training.
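For convenience, the foregoing parameter setting may be collected in a single configuration object, as in the following Python sketch; the key names are chosen for illustration only, while the values are those recited in this paragraph:

# Key names are illustrative; values are from paragraph [00106].
axcnn_config = {
    "dnn_130b": {"hidden_layers": 3, "neurons_per_layer": 64},
    "cnn_140b": {"embedding": "word2vec skip-gram", "embedding_dim": 100,
                 "filter_widths": [3, 4, 5], "filters_per_width": 20},
    "mc_dcnn_150b": {"C1": 10, "Size": 5, "S1": 10, "C2": 5, "S2": 10},
    "attention": {"score_filter_length": 5, "representation_filters": 10,
                  "representation_activation": "tanh"},
    "cross_modal_160a": {"N_conv": 20, "N_xconv": 50,
                         "activation": "relu", "fc_neurons": 256},
    "dropout_probability": 0.4,
}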

[00107] Implementation. The AXCNN was implemented using a deep learning library utilizing an Adadelta optimizer with the default parameter values and a batch size of 256 for training the model. Binary cross-entropy was used as the loss function to adjust the weights. Training was stopped when no further improvement on the validation loss was found after 25 epochs. The results provided an improvement over the prior art discussed in the present disclosure.
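By way of a non-limiting illustration of this training configuration, the following Keras-style sketch wires together the stated optimizer, loss, batch size and early-stopping criterion; the stand-in model and random data are placeholders for the full AXCNN and the EMR dataset:

import numpy as np
import tensorflow as tf

# Stand-in model and data; the real inputs are the preprocessed EMR modalities.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer=tf.keras.optimizers.Adadelta(),  # default parameter values
              loss="binary_crossentropy")                # loss used to adjust the weights

x_train = np.random.rand(512, 19).astype("float32")
y_train = np.random.randint(0, 2, 512)
x_val, y_val = x_train[:128], y_train[:128]

# Stop when the validation loss shows no further improvement after 25 epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=25)
model.fit(x_train, y_train, batch_size=256, epochs=500,
          validation_data=(x_val, y_val), callbacks=[early_stop], verbose=0)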

[00108] To facilitate a further understanding of the inventions of the present disclosure, the following description of FIG. 8 teaches an embodiment of a cross-modal neural network system of the present disclosure and FIG. 9 teaches a cross-modal neural network controller of the present disclosure. From the description of FIGS. 8 and 9, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of cross-modal neural network systems and cross-modal neural network controllers of the present disclosure.

[00109] Referring to FIG. 8, a cross-modal neural network controller 90 of the present disclosure is installed within an application server 80 accessible by a plurality of clients (e.g., a client 81 and a client 82 as shown) and/or is installed within a workstation 83 employing a monitor 84, a keyboard 85 and a computer 86.

[00110] In operation, cross-modal neural network controller 90 inputs electronic record(s) 10 (FIG. 1) from one or more data sources 70 (e.g., a database server 71 and a file server 72) to produce prediction 17 (FIG. 1) as previously described in the present disclosure. Prediction 17 is communicated by controller 90 to a variety of reporting sources including, but not limited to, a printer 101, a tablet 102, a mobile phone 103, a print server 104, an email server 105 and a file server 106.

[00111] In practice, cross-modal neural network controller 90 may be implemented as hardware/circuitry/software/firmware.

[00112] In one embodiment as shown in FIG. 9, a cross-modal neural network controller 90a includes a processor 91, a memory 92, a user interface 93, a network interface 94, and a storage 95 interconnected via one or more system bus(es) 96. In practice, the actual organization of the components 91-95 of controller 90a may be more complex than illustrated.

[00113] The processor 91 may be any hardware device capable of executing instructions stored in memory or storage or otherwise processing data. As such, the processor 91 may include a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar devices.

[00114] The memory 92 may include various memories such as, for example, L1, L2, or L3 cache or system memory. As such, the memory 92 may include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read only memory (ROM), or other similar memory devices.

[00115] The user interface 93 may include one or more devices for enabling communication with a user such as an administrator. For example, the user interface 93 may include a display, a mouse, and a keyboard for receiving user commands. In some embodiments, the user interface 93 may include a command line interface or graphical user interface that may be presented to a remote terminal via the network interface 94.

[00116] The network interface 94 may include one or more devices for enabling communication with other hardware devices. For example, the network interface 94 may include a network interface card (NIC) configured to communicate according to the Ethernet protocol. Additionally, the network interface 94 may implement a TCP/IP stack for communication according to the TCP/IP protocols. Various alternative or additional hardware or configurations for the network interface will be apparent.

[00117] The storage 95 may include one or more machine-readable storage media such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media. In various embodiments, the storage 95 may store instructions for execution by the processor 91 or data upon which the processor 91 may operate. For example, the storage 95 may store a base operating system (not shown) for controlling various basic operations of the hardware.

[00118] More particular to the present disclosure, storage 95 further stores control modules 97 including an embodiment of data pre-processor 20 (e.g., data pre-processor 120a of FIG. 3), an embodiment of encoded neural network 30 (e.g., deep learning network 130a of FIG. 4A, or attention-based deep learning network 130b of FIG. 4B), an embodiment of embedded neural network 40 (e.g., one-stage convolutional neural network 140a of FIG. 5A, or attention-based one-stage convolutional neural network 140b of FIG. 5B), an embodiment of sampled neural network 50 (e.g., two-stage convolutional neural network 150a of FIG. 6A, or attention-based two-stage convolutional neural network 150b of FIG. 6B) and an embodiment of convolutional neural network 60 (e.g., sigmoid-based convolutional neural network 160a of FIG. 7).

[00119] Referring back to FIG. 1, to facilitate an understanding of the inventions of the present disclosure, the embodiments described herein were directed to three (3) specific pre-processing techniques of electronic data including encoding (e.g., encoding of categorical and numerical data), embedding (e.g., embedding of discrete codes and words) and sampling (e.g., sampling of time series data). These pre-processing techniques were chosen in view of embodiments of pre-processing categorical and numerical data, discrete codes and words, and time series data as different data types of electronic medical records. Nonetheless, in practice, electronic records, particularly electronic medical records, may include additional types of data different from categorical and numerical data, discrete codes and words, and time series data. As stated earlier in the present disclosure, the inventions of the present disclosure are premised on (1) a pre-processing of electronic data of different data types, (2) an inputting of the pre-processed data into neural networks of different neural architectures selected for optimally extracting feature representations from the different data types and (3) a combining of feature vectors from the neural networks to produce a prediction, whereby the prediction is based on an extracted compact predictive feature representation derived from the different data types. Thus, the claims of the present disclosure should not be limited to embodiments of encoded neural networks, embedded neural networks and sampled neural networks unless a claim explicitly recites an encoded neural network, an embedded neural network and/or a sampled neural network.

[00120] Referring to FIGS. 1-9, those having ordinary skill in the art will appreciate the many benefits of the inventions of the present disclosure including, but not limited to, a cross-modal neural network that improves upon compact latent predictive feature representation as known in the art of the present disclosure by extracting a compact predictive feature representation derived from the different data types.

[00121] More particularly, those having ordinary skill in the art of the present disclosure will appreciate that the inventions of the present disclosure are premised on a pre-processing of different data types.

[00122] For example, FIG. 10 illustrates an electronic record 10 including a first data type 211, a second data type 212 and a third data type 213. In the context of electronic record 10 being an electronic medical record, the first data type 211 may be patient background information that is not associated with any specific hospital visit (e.g., patient demographics, social history and prior hospitalizations), the second data type 212 may be patient admission information associated with patient encounters over multiple hospital/clinical visits illustrating the past history of the medical conditions of the patient (e.g., structured data such as diagnosis, procedure and medication codes, or unstructured data such as free text from clinical notes), and the third data type 213 may be patient physiological information from the patient's most recent hospital visit (e.g., a time series of vital sign measurements and laboratory test results).

[00123] Still referring to FIG. 10, first data type 211 is pre-processed by a data pre-processor 220 and inputted into a first neural network 230 for extracting predictive feature representations from first data type 211 to produce a first feature vector 214, second data type 212 is pre-processed by data pre-processor 220 and inputted into a second neural network 240 for extracting predictive feature representations from second data type 212 to produce a second feature vector 215, and third data type 213 is pre-processed by data pre-processor 220 and inputted into a third neural network 250 for extracting predictive feature representations from third data type 213 to produce a third feature vector 216, where the three (3) neural networks 230, 240 and 250 have different neural architectures (e.g., the neural networks include different types of neural networks or the neural networks include different versions of the same type of neural network).

[00124] Still referring to FIG. 10, a fourth neural network 260 combines feature vectors 214, 215 and 216 from the neural networks 230, 240 and 250 to produce a prediction 217 that is based on an extracted compact predictive feature representation derived from the different data types 211, 212 and 213.

[00125] Furthermore, it will be apparent that various information described as stored in the storage may be additionally or alternatively stored in the memory. In this respect, the memory may also be considered to constitute a "storage device" and the storage may be considered a "memory." Various other arrangements will be apparent. Further, the memory and storage may both be considered to be "non-transitory machine-readable media." As used herein, the term "non-transitory" will be understood to exclude transitory signals but to include all forms of storage, including both volatile and non-volatile memories.

[00126] While the device is shown as including one of each described component, the various components may be duplicated in various embodiments. For example, the processor may include multiple microprocessors that are configured to independently execute the methods described in the present disclosure or are configured to perform steps or subroutines of the methods described in the present disclosure such that the multiple processors cooperate to achieve the functionality described in the present disclosure. Further, where the device is implemented in a cloud computing system, the various hardware components may belong to separate physical systems. For example, the processor may include a first processor in a first server and a second processor in a second server.

[00127] It should be apparent from the foregoing description that various example embodiments of the invention may be implemented in hardware or firmware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.

[00128] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[00129] Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.