


Title:
SYSTEMS AND METHODS FOR THE PREDICTION OF POST-OPERATIVE COGNITIVE DECLINE USING BLOOD-BASED INFLAMMATORY BIOMARKERS
Document Type and Number:
WIPO Patent Application WO/2024/064892
Kind Code:
A1
Abstract:
Embodiments herein describe systems and methods to generate a risk score for an individual to develop postoperative neurocognitive disorder (POND). Various embodiments obtain multi-omics data from an individual, such as genomics, transcriptomics, and proteomics. In certain embodiments, a machine learning algorithm is used to generate the risk score based on the multi-omics data. In further embodiments, clinical data is further used in the determination of the risk score.

Inventors:
GAUDILLIERE BRICE (US)
HEDOU JULIEN (FR)
VERDONK FRANCK (FR)
Application Number:
PCT/US2023/074903
Publication Date:
March 28, 2024
Filing Date:
September 22, 2023
Assignee:
UNIV LELAND STANFORD JUNIOR (US)
International Classes:
G06N20/20; G16H50/30; A61B5/00; A61N1/36; G01N33/68
Domestic Patent References:
WO2022152912A12022-07-21
WO2022067189A12022-03-31
Foreign References:
US20180236235A12018-08-23
Attorney, Agent or Firm:
LEE, Paul, J. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for determining risk for postoperative neurocognitive disorder (POND) for an individual following surgery, comprising: obtaining values of a plurality of features, wherein the plurality of features comprise a plurality of biological and clinical features from a plurality of omics; computing a risk score for POND for the individual based on the plurality of features using a model obtained via a machine learning technique; and providing an assessment of the individual’s risk for developing POND based on the computed risk score.

2. The method of claim 1, wherein obtaining the plurality of features comprises: obtaining a sample for analysis from the individual subject to surgery; and measuring the values of a plurality of omic biological and clinical features.

3. The method of claim 1 or 2, wherein the plurality of features further includes demographic features.

4. The method of any of claims 1-3, wherein the plurality of biological features comprise at least one feature of the group consisting of: a genomic feature, a transcriptomic feature, a proteomic feature, a cytomic feature, and a metabolomic feature.

5. The method of any of claims 1-4, wherein the machine learning model is trained using a bootstrap procedure on a plurality of individual data layers, wherein each data layer represents one type of data from the plurality of features and at least one artificial feature.

6. The method of claim 5, wherein each type is chosen among the group consisting of: genomic, transcriptomic, proteomic, cytomic, metabolomic, clinical and demographic.

7. The method of claim 5 or 6, wherein: each data layer comprises data for a population of individuals; wherein each feature includes feature values for all individuals in the population of individuals; and for a respective data layer, each artificial feature is obtained from a non-artificial feature among the plurality of features, via a mathematical operation performed on the feature values of the non-artificial feature.

8. The method of claim 7, wherein the mathematical operation is chosen among the group consisting of: a permutation, a sampling with replacement, a sampling without replacement, a combination, a knockoff and an inference.

9. The method of any of claims 5-8, wherein the model includes weights (βi) for a set of selected biological and clinical or demographic features; and during the machine learning and for each data layer, for every repetition of the bootstrap, initial weights (wj) are computed for the plurality of features and the at least one artificial feature associated with that data layer using an initial statistical learning technique, and at least one selected feature is determined for each data layer, based on a statistical criterion depending on the computed initial weights (wj).

10. The method of claim 9, wherein the initial statistical learning technique is selected from a regression technique and a classification technique.

11. The method of claim 9 or 10, wherein the initial statistical learning technique is selected from a sparse technique and a non-sparse technique.

12. The method of claim 11, wherein the sparse technique is selected from a Lasso technique and an Elastic Net technique.

13. The method of any of claims 9-12, wherein the statistical criterion depends on significant weights among the computed initial weights (wj).

14. The method of claim 13, wherein the significant weights are non-zero weights, when the initial statistical learning technique is a sparse regression technique.

15. The method of claim 13, wherein the significant weights are weights above a predefined weight threshold, when the initial statistical learning technique is a non-sparse regression technique.

16. The method of any of claims 9-15, wherein the initial weights (wj) are further computed for a plurality of values of a hyperparameter, wherein the hyperparameter is a parameter whose value is used to control the learning process.

17. The method of claim 16, wherein the hyperparameter is a regularization coefficient used according to a respective mathematical norm in the context of a sparse initial technique.

18. The method of claim 17, wherein the mathematical norm is a p-norm, with p being an integer.

19. The method of any of claims 16-18, together with claim 11, wherein the hyperparameter is an upper bound of the coefficient of the L1-norm of the initial weights (wj) when the initial statistical learning technique is the Lasso technique, wherein the L1-norm refers to a sum of all absolute values of the initial weights.

20. The method of any of claims 16-18, together with claim 11, wherein the hyperparameter is an upper bound applied to both the L1-norm of the initial weights (wj) and the L2-norm of the initial weights (wj) when the initial statistical learning technique is the Elastic Net technique, wherein the L1-norm refers to the sum of all absolute values of the initial weights, and the L2-norm refers to the square root of the sum of all squared values of the initial weights.

21. The method of any of claims 13-20, wherein the statistical criterion is based on an occurrence frequency of the significant weights.

22. The method of claim 21, together with any of claims 16-20, wherein for each feature, a unitary occurrence frequency is calculated for each hyperparameter value and is equal to a number of the significant weights related to said feature for the successive bootstrap repetitions divided by the number of bootstrap repetitions.

23. The method of claim 22, wherein the occurrence frequency is equal to the highest unitary occurrence frequency among the unitary occurrence frequencies calculated for the plurality of hyperparameter values.

24. The method of any of claims 21-23, wherein the statistical criterion is that each feature is selected when its occurrence frequency is greater than a frequency threshold, the frequency threshold being computed according to the occurrence frequencies obtained for the artificial features.

25. The method of any of claims 5-24, wherein the number of bootstrap repetitions is between 50 and 100,000.

26. The method of any of claims 16-23, together with claim 11, wherein the plurality of hyperparameter values is between 0.5 and 100 for the Lasso technique or the Elastic Net technique.

27. The method of any of claims 9-26, wherein during the machine learning, the weights (βi) of the model are further computed using a final statistical learning technique on the data associated with the set of selected features.

28. The method of claim 27, wherein the final statistical learning technique is selected from a regression technique and a classification technique.

29. The method of claim 27 or 28, wherein the final statistical learning technique is selected from a sparse technique and a non-sparse technique.

30. The method of claim 29, wherein the sparse technique is selected from a Lasso technique and an Elastic Net technique.

31. The method of any of claims 9-30, wherein during a usage phase subsequent to the machine learning, the risk score is computed based on measured values of the individual for the set of selected features.

32. The method of claim 31, wherein the risk score is a probability calculated according to a weighted sum of the measured values multiplied by the respective weights (βi) for the set of selected features, when the final statistical learning technique is the classification technique.

33. The method of claim 32, wherein the risk score is calculated according to the following equation:

P = Odd / (1 + Odd)

where P represents the risk score, and Odd is a term depending on the weighted sum.

34. The method of claim 33, wherein Odd is an exponential of the weighted sum.

35. The method of claim 31, wherein the risk score is a term depending on a weighted sum of the measured values multiplied by the respective weights (βi) for the set of selected features, when the final statistical learning technique is the regression technique.

36. The method of claim 35, wherein the risk score is equal to an exponential of the weighted sum.

37. The method of any one of claims 7-36, wherein during the machine learning, the method further comprises, before obtaining artificial features: generating additional values of the plurality of non-artificial features based on the obtained values and using a data augmentation technique; wherein the artificial features are then obtained according to both the obtained values and the generated additional values.

38. The method of claim 37, wherein the data augmentation technique is chosen among a non-synthetic technique and a synthetic technique.

39. The method of claim 37 or 38, wherein the data augmentation technique is chosen among the group consisting of: SMOTE technique, ADASYN technique and SVMSMOTE technique.

40. The method of any one of claims 37-39, wherein, for a given non-artificial feature, the fewer values that have been obtained, the more additional values are generated.

41. The method of any of claims 1-40, wherein the plurality of omics are selected from one or more of cytomic features, proteomic features, transcriptomic features, and metabolomic features.

42. The method of claim 41, wherein the cytomic features comprise single cell levels of surface and intracellular proteins in immune cell subsets; and the proteomic features comprise circulating extracellular proteins.

43. The method of any one of claims 2 to 42, wherein the sample comprises at least one sample obtained prior to surgery.

44. The method of claim 42, wherein the sample is obtained during the period of time from any time before surgery to the day of surgery, before a surgical incision is made.

45. The method of any one of claims 2 to 44, wherein the sample comprises at least one sample obtained after surgery.

46. The method of claim 45, wherein the after surgery sample is obtained approximately 24 hours after surgery.

47. The method of any one of claims 2 to 46, wherein the sample is a blood sample, a peripheral blood mononuclear cells (PBMC) fraction of a blood sample, a plasma sample, a serum sample, a urine sample, a saliva sample, or dissociated cells from a tissue sample.

48. The method of any one of claims 2 to 47, wherein the sample is contacted ex vivo with an activating agent in an effective dose and for a period of time sufficient to activate immune cells in the sample.

49. The method of any one of claims 2-48, wherein measuring or having measured the values comprises measuring single cell levels of surface or intracellular proteins in an immune cell subset by contacting the sample with isotope-labeled or fluorescent-labeled affinity reagents specific for the surface or intracellular proteins.

50. The method of claim 49, wherein the measuring of single cell levels of surface or intracellular proteins in an immune cell subset is performed by flow cytometry or mass cytometry.

51. The method of any one of claims 2-50, wherein measuring or having measured the values comprises analyzing circulating proteins by contacting the sample with a plurality of isotope-labeled or fluorescent-labeled affinity reagents specific for extracellular proteins.

52. The method of claim 51, wherein an affinity reagent is an antibody or an aptamer.

53. The method of any one of claims 1 to 52, wherein the demographic or clinical features comprise data selected from the group consisting of: age, sex, body mass index (BMI), functional status, emergency case, American Society of Anesthesiologists (ASA) class, steroid use for chronic condition, ascites, disseminated cancer, diabetes, hypertension, congestive heart failure, dyspnea, smoking history, history of severe COPD, dialysis, and acute renal failure.

54. The method of any one of claims 1 to 53, wherein the clinical features are obtained from a patient’s medical record using a machine learning algorithm.

55. The method of any one of claims 2-54, wherein measuring or having measured the values comprises contacting the sample ex vivo with an activating agent in an effective dose and for a period of time sufficient to activate immune cells in the sample, wherein the activating agent is one or a combination of a TLR4 agonist (such as LPS), interleukin (IL)-2, IL-4, IL-6, IL-1β, TNFα, IFNα, PMA/ionomycin.

56. The method of claim 55, wherein the period of time is from about 5 to about 240 minutes.

57. The method of any one of claims 55 to 56, wherein measuring or having measured the values comprises measuring single cell levels of surface or intracellular proteins in an immune cell subset by contacting the sample with isotope-labeled or fluorescent-labeled affinity reagents specific for the surface or intracellular proteins.

58. The method of claim 57, wherein immune cells are identified using single-cell surface or intracellular protein markers selected from the group consisting of CD235ab, CD61, CD45, CD66, CD7, CD19, CD45RA, CD11b, CD4, CD8, CD11c, CD123, TCRγδ, CD24, CD161, CD33, CD16, CD25, CD3, CD27, CD15, CCR2, OLMF4, HLA-DR, CD14, CD56, CRTH2, CCR2, and CXCR4.

59. The method of claims 57 or 58, wherein said single-cell intracellular proteins are selected from the group consisting of phospho (p) pMAPKAPK2 (pMK2), pP38, pERK1/2, p-rpS6, pNFκB, IκB, p-CREB, pSTAT1, pSTAT5, pSTAT3, pSTAT6, cPARP, FoxP3, and Tbet.

60. The method of any one of claims 57 to 59, wherein said intracellular protein levels are measured in immune cell subsets selected from the group consisting of neutrophils, granulocytes, basophils, CXCR4+neutrophils, OLMF4+neutrophils, CD14+CD16- classical monocytes (cMC), CD14-CD16+ nonclassical monocytes (ncMC), CD14+CD16+ intermediate monocytes (iMC), HLADR+CD11c+ myeloid dendritic cells (mDC), HLADR+CD123+ plasmacytoid dendritic cells (pDC), CD14+HLADR-CD11b+ monocytic myeloid derived suppressor cells (M-MDSC), CD3+CD56+ NK-T cells, CD7+CD19-CD3- NK cells, CD7+CD56loCD16hi NK cells, CD7+CD56hiCD16lo NK cells, CD19+ B-Cells, CD19+CD38+ Plasma Cells, CD19+CD38- non-plasma B-Cells, CD4+CD45RA+ naive T Cells, CD4+CD45RA- memory T cells, CD4+CD161+ Th17 cells, CD4+Tbet+ Th1 cells, CD4+CRTH2+ Th2 cells, CD3+TCRγδ+ γδT Cells, Th17 CD4+ T cells, CD3+FoxP3+CD25+ regulatory T Cells (Tregs), CD8+CD45RA+ naive T Cells, and CD8+CD45RA- memory T Cells.

61. The method of any one of claims 55 to 60, wherein the patient’s risk for developing POND correlates with increased pMAPKAPK2 (pMK2) in neutrophils, increased prpS6 in mDCs, or decreased IκB in neutrophils, decreased pNFκB in CD7+CD56hiCD16lo NK cells in response to ex vivo activation of a sample collected before surgery with LPS.

62. The method of any one of claims 55 to 61, wherein the patient’s risk for developing POND correlates with increased pSTAT3 in neutrophils, mDCs, or Tregs, increased prpS6 in CD56hiCD16lo NK cells or mDCs, increased pSTAT5 in mDCs or pDCs, or decreased IκB in CD4+Tbet+ Th1 cells, decreased pSTAT1 in pDCs, in response to ex vivo activation of a sample collected before surgery with IL-2, IL-4, and/or IL-6.

63. The method of any one of claims 55 to 62, wherein the patient’s risk for developing POND correlates with increased prpS6 in neutrophils or mDCs, increased pERK in M-MDSCs or ncMCs, increased pCREB in γδT Cells, or decreased IκB, pP38 or pERK in neutrophils, or decreased pCREB or pMAPKAPK2 in CD4+Tbet+ Th1 cells, or decreased pERK in CD4+CRTH2+ Th2 cells, in response to ex vivo activation of a sample collected before surgery with TNFα.

64. The method of any one of claims 55 to 63, wherein the patient’s risk for developing POND correlates with increased pSTAT3 in neutrophils, M-MDSCs, cMCs, or ncMCs, increased pSTAT5 in Tregs or CD45RA- memory CD4+ T cells, increased pMAPKAPK2 in mDCs, pCREB or IκB in CD4+Tbet+ Th1 cells, increased pSTAT6 in NKT cells, or decreased pERK in CD4+Tbet+ Th1 cells in unstimulated samples collected before and/or after surgery.

65. The method of any one of claims 55 to 64, wherein the patient’s risk for developing POND correlates with increased M-MDSC, G-MDSC, ncMC, Th17 cells, or decreased CD4+CRTH2+ Th2 cell frequencies collected before and/or after surgery.

66. The method of any one of claims 55 to 65, wherein the patient’s risk for developing POND correlates with increased IL-1β, ALK, WWOX, HSPH1, IRF6, CTNNA3, CCL3, sTREM1, ITM2A, TGFα, LIF, ADA, or decreased ITGB3, EIF5A, KRT19, NTproBNP collected before and/or after surgery.

67. A system comprising a processor and memory containing instructions, which when executed by the processor, direct the processor to perform the method of any of claims 1, 3-42, 53-54, and 61-66.

68. A non-transitory machine readable medium containing instructions that when executed by a computer processor direct the processor to perform the method of any of claims 1, 3-42, 53-54, and 61-66.

69. The method of any one of claims 1-66, further comprising treating the individual before surgery in accordance with the assessment of the individual’s risk for developing POND.

70. The method of claim 69, wherein the treatment before surgery is selected from the group consisting of: cognitive prehabilitation training, physical exercises, preoperative geriatric consultation, and combinations thereof.

71. The method of any one of claims 1-66 and 69-70, further comprising treating the individual during surgery in accordance with the assessment of the individual’s risk for developing POND.

72. The method of claim 71, wherein the treatment during surgery is selected from the group consisting of: multimodal pain management, opioid-sparing analgesia, and combinations thereof.

73. The method of any one of claims 1-66 and 69-72, further comprising treating the individual after surgery in accordance with the assessment of the individual’s risk for developing POND.

74. The method of claim 1, wherein the machine learning technique comprises: generating artificial features based on real features of an overall participating cohort; concatenating the artificial features to the real features to create an overall matrix; obtaining a plurality of subsets of features from the overall matrix; computing a plurality of models wherein each model is based on each of the obtained subsets of features; selecting stable features from each of the plurality of subsets of features; and combining the stable features from each subset into a set of stable features.

75. The method of claim 74, wherein the selecting of stable features from each of the plurality of subsets of features further comprises: fitting a model on each of the obtained subsets of features; extracting non-zero coefficients and associated features of each subset of features based on a set of hyperparameters; obtaining an occurrence frequency of the extracted non-zero coefficients and associated features; estimating a threshold of occurrence frequency; and selecting features with occurrence frequencies above the threshold.

Description:
SYSTEMS AND METHODS FOR THE PREDICTION OF POST-OPERATIVE COGNITIVE DECLINE USING BLOOD-BASED INFLAMMATORY BIOMARKERS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The current application claims the benefit of and priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 63/376,690 entitled “Systems and Methods to Predict a Risk Score for Postoperative Cognitive Decline and Uses Thereof,” filed September 22, 2022, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.

FIELD OF THE INVENTION

[0002] The present invention relates to predicting postoperative cognitive decline; more specifically, using a machine learning model to predict a patient’s risk of developing postoperative neurocognitive disorder (POND) from clinical and multi-omics data.

BACKGROUND

[0003] Over 300 million operations are performed annually worldwide, a number that is expected to increase. Approximately 25-30% of the operations involve patients aged 60 years or older, the rate of which is expected to quadruple over the next 30 years. Out of these patients, 35-55% will develop postoperative neurocognitive disorder (POND), which includes disorders of memory, language comprehension, visuo-spatial abstraction, attention, or concentration, and that may last up to 12 months after surgery. POND is associated with increased morbidity and mortality, resulting in a significant economic cost due to more frequent hospitalizations in care facilities, earlier retirement, and greater use of socioeconomic support.

[0004] Although there is no therapy for POND, several interventions have been shown to mitigate the risk of POND. However, there is currently a lack of biological and clinical markers for the accurate prediction of POND.

SUMMARY OF THE INVENTION

[0005] This summary is meant to provide some examples and is not intended to be limiting of the scope of the invention in any way. For example, any feature included in an example of this summary is not required by the claims, unless the claims explicitly recite the features. Various features and steps as described elsewhere in this disclosure may be included in the examples summarized here, and the features and steps described here and elsewhere can be combined in a variety of ways.

[0006] In some aspects, the techniques described herein relate to a method for determining the risk for postoperative neurocognitive disorder (POND) for an individual following surgery, including obtaining or having obtained values of a plurality of features, where the plurality of features includes omic biological features and clinical features, computing a risk score for POND for the individual based on the plurality of features using a model obtained via a machine learning technique, and providing an assessment of the patient's risk for developing POND based on the computed risk score.

[0007] In some aspects, the techniques described herein relate to a method, where obtaining or having obtained values of a plurality of features includes obtaining or having obtained a sample for analysis from the individual subject to surgery, and measuring or having measured the values of a plurality of omic biological and clinical features.

[0008] In some aspects, the techniques described herein relate to a method, where the plurality of features further includes demographic features.

[0009] In some aspects, the techniques described herein relate to a method, where omic biological features include at least one of a genomic feature, a transcriptomic feature, a proteomic feature, a cytomic feature, and a metabolomic feature.

[0010] In some aspects, the techniques described herein relate to a method, where the machine learning model is trained using a bootstrap procedure on a plurality of individual data layers, where each data layer represents one type of data from the plurality of features and at least one artificial feature.

[0011] In some aspects, the techniques described herein relate to a method, where each type is chosen among genomic, transcriptomic, proteomic, cytomic, metabolomic, clinical and demographic.

[0012] In some aspects, the techniques described herein relate to a method, where: each data layer includes data for a population of individuals, where each feature includes feature values for all individuals in the population of individuals, and for a respective data layer, each artificial feature is obtained from a non-artificial feature among the plurality of features, via a mathematical operation performed on the feature values of the non-artificial feature.

[0013] In some aspects, the techniques described herein relate to a method, where the mathematical operation is chosen among a permutation, a sampling with replacement, a sampling without replacement, a combination, a knockoff and an inference.
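
For illustration, the permutation operation named above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch (the application does not specify an implementation, and the function name is ours): each artificial feature is a column of a real data layer permuted across individuals, which preserves its marginal distribution while breaking any association with the outcome.

```python
import numpy as np

def add_artificial_features(layer, rng=None):
    """Append one permuted 'artificial' copy of each real feature to a data layer.

    `layer` is an (n_individuals, n_features) matrix for one omic data layer.
    Permuting a column across individuals preserves that feature's marginal
    distribution while breaking any association with the outcome.
    """
    rng = np.random.default_rng(rng)
    decoys = np.column_stack(
        [rng.permutation(layer[:, j]) for j in range(layer.shape[1])]
    )
    return np.hstack([layer, decoys])  # real features first, artificial after

# Example: a proteomic layer with 40 individuals and 5 features
layer = np.random.default_rng(0).normal(size=(40, 5))
print(add_artificial_features(layer, rng=1).shape)  # (40, 10)
```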

[0014] In some aspects, the techniques described herein relate to a method, where the model includes weights (βi) for a set of selected biological and clinical or demographic features, during the machine learning and for each data layer, for every repetition of the bootstrap, initial weights (wj) are computed for the plurality of features and the at least one artificial feature associated with that data layer using an initial statistical learning technique, and at least one selected feature is determined for each data layer, based on a statistical criterion depending on the computed initial weights (wj).

[0015] In some aspects, the techniques described herein relate to a method, where the initial statistical learning technique is selected from a regression technique and a classification technique.

[0016] In some aspects, the techniques described herein relate to a method, where the initial statistical learning technique is selected from a sparse technique and a non-sparse technique.

[0017] In some aspects, the techniques described herein relate to a method, where the sparse technique is selected from a Lasso technique and an Elastic Net technique.

[0018] In some aspects, the techniques described herein relate to a method, where the statistical criterion depends on significant weights among the computed initial weights (wj).

[0019] In some aspects, the techniques described herein relate to a method, where the significant weights are non-zero weights, when the initial statistical learning technique is a sparse regression technique.

[0020] In some aspects, the techniques described herein relate to a method, where the significant weights are weights above a predefined weight threshold, when the initial statistical learning technique is a non-sparse regression technique.

[0021] In some aspects, the techniques described herein relate to a method, where the initial weights (wj) are further computed for a plurality of values of a hyperparameter, where the hyperparameter is a parameter whose value is used to control the learning process.

[0022] In some aspects, the techniques described herein relate to a method, where the hyperparameter is a regularization coefficient used according to a respective mathematical norm in the context of a sparse initial technique.

[0023] In some aspects, the techniques described herein relate to a method, where the mathematical norm is a p-norm, with p being an integer.

[0024] In some aspects, the techniques described herein relate to a method, where the hyperparameter is an upper bound of the coefficient of the L1-norm of the initial weights (wj) when the initial statistical learning technique is the Lasso technique, where the L1-norm refers to the sum of all absolute values of the initial weights.

[0025] In some aspects, the techniques described herein relate to a method, where the hyperparameter is an upper bound applied to both the L1-norm of the initial weights (wj) and the L2-norm of the initial weights (wj) when the initial statistical learning technique is the Elastic Net technique, where the L1-norm refers to the sum of all absolute values of the initial weights, and the L2-norm refers to the square root of the sum of all squared values of the initial weights.

[0026] In some aspects, the techniques described herein relate to a method, where the statistical criterion is based on an occurrence frequency of the significant weights.

[0027] In some aspects, the techniques described herein relate to a method, where for each feature, a unitary occurrence frequency is calculated for each hyperparameter value and is equal to the number of significant weights related to said feature over the successive bootstrap repetitions divided by the number of bootstrap repetitions.

[0028] In some aspects, the techniques described herein relate to a method, where the occurrence frequency is equal to the highest unitary occurrence frequency among the unitary occurrence frequencies calculated for the plurality of hyperparameter values.

[0029] In some aspects, the techniques described herein relate to a method, where the statistical criterion is that each feature is selected when its occurrence frequency is greater than a frequency threshold, the frequency threshold being computed according to the occurrence frequencies obtained for the artificial features.
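
Paragraphs [0026]-[0029] can be illustrated with a short sketch. The following Python code, assuming a Lasso as the initial statistical learning technique and using scikit-learn (a library choice of ours, not the application's), computes unitary occurrence frequencies over bootstrap repetitions and a hyperparameter grid, takes the maximum per feature, and derives the selection threshold from the frequencies of the artificial (decoy) features. Names and grid values are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def occurrence_frequencies(X, y, n_real, alphas=(0.1, 0.5, 1.0), n_boot=200, seed=0):
    """For each feature, the fraction of bootstrap fits with a non-zero Lasso weight.

    X holds real features in columns [:n_real] and artificial (decoy) features
    after them. A unitary frequency is computed per hyperparameter value; the
    reported frequency is the maximum over the grid, as in paragraph [0028].
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    counts = np.zeros((len(alphas), X.shape[1]))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                 # bootstrap resample
        for a, alpha in enumerate(alphas):
            w = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx]).coef_
            counts[a] += (w != 0)                        # significant = non-zero
    freq = (counts / n_boot).max(axis=0)
    threshold = freq[n_real:].max()                      # from decoy frequencies
    selected = np.flatnonzero(freq[:n_real] > threshold)
    return freq, threshold, selected
```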

[0030] In some aspects, the techniques described herein relate to a method, where the number of bootstrap repetitions is between 50 and 100,000.

[0031] In some aspects, the techniques described herein relate to a method, where the plurality of hyperparameter values is between 0.5 and 100 for the Lasso technique or the Elastic Net technique.

[0032] In some aspects, the techniques described herein relate to a method, where during the machine learning, the weights (βi) of the model are further computed using a final statistical learning technique on the data associated with the set of selected features.

[0033] In some aspects, the techniques described herein relate to a method, where the final statistical learning technique is selected from a regression technique and a classification technique.

[0034] In some aspects, the techniques described herein relate to a method, where the final statistical learning technique is selected from a sparse technique and a non-sparse technique.

[0035] In some aspects, the techniques described herein relate to a method, where the sparse technique is selected from a Lasso technique and an Elastic Net technique.

[0036] In some aspects, the techniques described herein relate to a method, where during a usage phase subsequent to the machine learning, the risk score is computed according to the measured values of the individual for the set of selected features.

[0037] In some aspects, the techniques described herein relate to a method, where the risk score is a probability calculated according to a weighted sum of the measured values multiplied by the respective weights (βi) for the set of selected features, when the final statistical learning technique is the classification technique.

[0038] In some aspects, the techniques described herein relate to a method, where the risk score is calculated according to the equation

P = Odd / (1 + Odd)

where P represents the risk score, and Odd is a term depending on the weighted sum.

[0039] In some aspects, the techniques described herein relate to a method, where Odd is an exponential of the weighted sum.

[0040] In some aspects, the techniques described herein relate to a method, where the risk score is a term depending on a weighted sum of the measured values multiplied by the respective weights (βi) for the set of selected features, when the final statistical learning technique is the regression technique.

[0041] In some aspects, the techniques described herein relate to a method, where the risk score is equal to an exponential of the weighted sum.
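
As a worked example of paragraphs [0038]-[0041]: when Odd is an exponential of the weighted sum, the risk score P = Odd / (1 + Odd) is the logistic sigmoid of that sum. The sketch below uses made-up measured values and weights, not values from the application.

```python
import numpy as np

def pond_risk_score(values, weights, intercept=0.0):
    """Risk score per paragraphs [0038]-[0039]: Odd = exp(weighted sum),
    P = Odd / (1 + Odd), i.e., the logistic sigmoid of the weighted sum."""
    odd = np.exp(intercept + np.dot(values, weights))
    return odd / (1.0 + odd)

# Worked example with made-up measured values and weights (betas):
print(round(pond_risk_score([1.2, -0.5, 3.0], [0.8, 1.1, 0.2]), 3))  # ~0.733
```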

[0042] In some aspects, the techniques described herein relate to a method, where during the machine learning, the method further includes, before obtaining artificial features: generating additional values of the plurality of non-artificial features based on the obtained values and using a data augmentation technique, the artificial features being then obtained according to both the obtained values and the generated additional values.

[0043] In some aspects, the techniques described herein relate to a method, where the data augmentation technique is chosen among a non-synthetic technique and a synthetic technique.

[0044] In some aspects, the techniques described herein relate to a method, where the data augmentation technique is chosen among SMOTE technique, ADASYN technique and SVMSMOTE technique.

[0045] In some aspects, the techniques described herein relate to a method, where, for a given non-artificial feature, the fewer values that have been obtained, the more additional values are generated.
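
Paragraphs [0044]-[0045] can be illustrated with the imbalanced-learn package, one possible implementation of the SMOTE family (the application does not mandate a library). The minority class, for which fewer values have been obtained, receives more generated values until the classes balance.

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # ADASYN and SVMSMOTE are drop-in siblings

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))              # one data layer: 60 individuals, 8 features
y = np.array([0] * 50 + [1] * 10)         # POND cases under-represented

# SMOTE interpolates new minority-class rows between existing neighbours, so the
# class with fewer obtained values receives more generated values.
X_aug, y_aug = SMOTE(random_state=0).fit_resample(X, y)
print(X_aug.shape, np.bincount(y_aug))    # (100, 8) [50 50]
```

ADASYN and SVMSMOTE can be substituted for SMOTE with the same fit_resample interface.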

[0046] In some aspects, the techniques described herein relate to a method, where the omic biological features are selected from one or more of cytomic features, proteomic features, transcriptomic features, and metabolomic features.

[0047] In some aspects, the techniques described herein relate to a method, where the cytomic features include single cell levels of surface and intracellular proteins in immune cell subsets, and the proteomic features include circulating extracellular proteins.

[0048] In some aspects, the techniques described herein relate to a method, where the sample includes at least one sample obtained prior to surgery.

[0049] In some aspects, the techniques described herein relate to a method, where the sample is obtained during the period of time from any time before surgery to the day of surgery, before a surgical incision is made.

[0050] In some aspects, the techniques described herein relate to a method, where the sample includes at least one sample obtained after surgery.

[0051] In some aspects, the techniques described herein relate to a method, where the after surgery sample is obtained approximately 24 hours after surgery.

[0052] In some aspects, the techniques described herein relate to a method, where the sample is a blood sample, a peripheral blood mononuclear cells (PBMC) fraction of a blood sample, a plasma sample, a serum sample, a urine sample, a saliva sample, or dissociated cells from a tissue sample.

[0053] In some aspects, the techniques described herein relate to a method, where the sample is contacted ex vivo with an activating agent in an effective dose and for a period of time sufficient to activate immune cells in the sample.

[0054] In some aspects, the techniques described herein relate to a method, where measuring or having measured the values includes measuring single cell levels of surface or intracellular proteins in an immune cell subset by contacting the sample with isotope-labeled or fluorescent-labeled affinity reagents specific for the surface or intracellular proteins.

[0055] In some aspects, the techniques described herein relate to a method, where the measuring of single cell levels of surface or intracellular proteins in an immune cell subset is performed by flow cytometry or mass cytometry.

[0056] In some aspects, the techniques described herein relate to a method, where measuring or having measured the values includes analyzing circulating proteins by contacting the sample with a plurality of isotope-labeled or fluorescent-labeled affinity reagents specific for extracellular proteins.

[0057] In some aspects, the techniques described herein relate to a method, where an affinity reagent is an antibody or an aptamer.

[0058] In some aspects, the techniques described herein relate to a method, where the demographic or clinical features include data selected from age, sex, body mass index (BMI), functional status, emergency case, American Society of Anesthesiologists (ASA) class, steroid use for chronic condition, ascites, disseminated cancer, diabetes, hypertension, congestive heart failure, dyspnea, smoking history, history of severe COPD, dialysis, and acute renal failure.

[0059] In some aspects, the techniques described herein relate to a method, where the clinical features are obtained from a patient's medical record using a machine learning algorithm.

[0060] In some aspects, the techniques described herein relate to a method, where measuring or having measured the values includes contacting the sample ex vivo with an activating agent in an effective dose and for a period of time sufficient to activate immune cells in the sample, where the activating agent is one or a combination of a TLR4 agonist (such as LPS), interleukin (IL)-2, IL-4, IL-6, IL-1β, TNFα, IFNα, PMA/ionomycin.

[0061] In some aspects, the techniques described herein relate to a method, where the period of time is from about 5 to about 240 minutes.

[0062] In some aspects, the techniques described herein relate to a method, where measuring or having measured the values includes measuring single cell levels of surface or intracellular proteins in an immune cell subset by contacting the sample with isotope-labeled or fluorescent-labeled affinity reagents specific for the surface or intracellular proteins.

[0063] In some aspects, the techniques described herein relate to a method, where immune cells are identified using single-cell surface or intracellular protein markers selected from the group consisting of CD235ab, CD61, CD45, CD66, CD7, CD19, CD45RA, CD11b, CD4, CD8, CD11c, CD123, TCRγδ, CD24, CD161, CD33, CD16, CD25, CD3, CD27, CD15, CCR2, OLMF4, HLA-DR, CD14, CD56, CRTH2, CCR2, and CXCR4.

[0064] In some aspects, the techniques described herein relate to a method, where said single-cell intracellular proteins are selected from the group consisting of phospho (p) pMAPKAPK2 (pMK2), pP38, pERK1/2, p-rpS6, pNFκB, IκB, p-CREB, pSTAT1, pSTAT5, pSTAT3, pSTAT6, cPARP, FoxP3, and Tbet.

[0065] In some aspects, the techniques described herein relate to a method, where said intracellular protein levels are measured in immune cell subsets selected from the group consisting of neutrophils, granulocytes, basophils, CXCR4+neutrophils, OLMF4+neutrophils, CD14+CD16- classical monocytes (cMC), CD14-CD16+ nonclassical monocytes (ncMC), CD14+CD16+ intermediate monocytes (iMC), HLADR+CD11c+ myeloid dendritic cells (mDC), HLADR+CD123+ plasmacytoid dendritic cells (pDC), CD14+HLADR-CD11b+ monocytic myeloid derived suppressor cells (M-MDSC), CD3+CD56+ NK-T cells, CD7+CD19-CD3- NK cells, CD7+CD56loCD16hi NK cells, CD7+CD56hiCD16lo NK cells, CD19+ B-Cells, CD19+CD38+ Plasma Cells, CD19+CD38- non-plasma B-Cells, CD4+CD45RA+ naive T Cells, CD4+CD45RA- memory T cells, CD4+CD161+ Th17 cells, CD4+Tbet+ Th1 cells, CD4+CRTH2+ Th2 cells, CD3+TCRγδ+ γδT Cells, Th17 CD4+ T cells, CD3+FoxP3+CD25+ regulatory T Cells (Tregs), CD8+CD45RA+ naive T Cells, and CD8+CD45RA- memory T Cells.

[0066] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased pMAPKAPK2 (pMK2) in neutrophils, increased prpS6 in mDCs, or decreased IκB in neutrophils, decreased pNFκB in CD7+CD56hiCD16lo NK cells in response to ex vivo activation of a sample collected before surgery with LPS.

[0067] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased pSTAT3 in neutrophils, mDCs, or Tregs, increased prpS6 in CD56hiCD16lo NK cells or mDCs, increased pSTAT5 in mDCs or pDCs, or decreased IκB in CD4+Tbet+ Th1 cells, decreased pSTAT1 in pDCs, in response to ex vivo activation of a sample collected before surgery with IL-2, IL-4, and/or IL-6.

[0068] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased prpS6 in neutrophils or mDCs, increased pERK in M-MDSCs or ncMCs, increased pCREB in γδT Cells, or decreased IκB, pP38 or pERK in neutrophils, or decreased pCREB or pMAPKAPK2 in CD4+Tbet+ Th1 cells, or decreased pERK in CD4+CRTH2+ Th2 cells, in response to ex vivo activation of a sample collected before surgery with TNFα.

[0069] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased pSTAT3 in neutrophils, M-MDSCs, cMCs, or ncMCs, increased pSTAT5 in Tregs or CD45RA- memory CD4+ T cells, increased pMAPKAPK2 in mDCs, pCREB or IκB in CD4+Tbet+ Th1 cells, increased pSTAT6 in NKT cells, or decreased pERK in CD4+Tbet+ Th1 cells in unstimulated samples collected before and/or after surgery.

[0070] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased M-MDSC, G-MDSC, ncMC, Th17 cells, or decreased CD4+CRTH2+ Th2 cell frequencies collected before and/or after surgery.

[0071] In some aspects, the techniques described herein relate to a method, where the patient's risk for developing POND correlates with increased IL-1β, ALK, WWOX, HSPH1, IRF6, CTNNA3, CCL3, sTREM1, ITM2A, TGFα, LIF, ADA, or decreased ITGB3, EIF5A, KRT19, NTproBNP collected before and/or after surgery.

[0072] In some aspects, the techniques described herein relate to a system including a processor and memory containing instructions, which when executed by the processor, direct the processor to perform methods as described herein.

[0073] In some aspects, the techniques described herein relate to a non-transitory machine readable medium containing instructions that when executed by a computer processor, direct the processor to perform methods as described herein.

[0074] In some aspects, the techniques described herein relate to a method, further including treating the individual before surgery in accordance with the assessment of the individual's risk for developing POND.

[0075] In some aspects, the techniques described herein relate to a method, where the treatment before surgery is selected from cognitive prehabilitation training, physical exercises, preoperative geriatric consultation, and combinations thereof.

[0076] In some aspects, the techniques described herein relate to a method, further including treating the individual during surgery in accordance with the assessment of the individual's risk for developing POND.

[0077] In some aspects, the techniques described herein relate to a method, where the treatment during surgery is selected from multimodal pain management, opioid-sparing analgesia, and combinations thereof.

[0078] In some aspects, the techniques described herein relate to a method, further including treating the individual after surgery in accordance with the assessment of the individual's risk for developing POND.

[0079] In some aspects, the techniques described herein relate to a method, further including generating artificial features based on real features of an overall participating cohort, concatenating the artificial features to the real features to create an overall matrix, obtaining a plurality of subsets of features from the overall matrix, computing a plurality of models wherein each model is based on each of the obtained subsets of features, selecting stable features from each of the plurality of subsets of features, and combining the stable features from each subset into a set of stable features.

[0080] In some aspects, the techniques described herein relate to a method, further including fitting a model on each of the obtained subsets of features, extracting non-zero coefficients and associated features of each subset of features based on a set of hyperparameters, obtaining an occurrence frequency of the extracted non-zero coefficients and associated features, estimating a threshold of occurrence frequency, and selecting features with occurrence frequencies above the threshold.
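
The flow of paragraphs [0079]-[0080] (mirrored in claims 74-75) can be sketched end to end in Python. This is a simplified illustration under assumed defaults (a single hyperparameter value, a Lasso fit via scikit-learn, permutation decoys), not the MOB algorithm of FIGS. 5A-B: per layer, the artificial features are concatenated to the real ones, models are fit on bootstrap subsets, and the features whose non-zero frequency exceeds the best decoy frequency are kept as the stable set.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stable_set(X_real, y, rng, alpha=0.5, n_boot=100):
    """One layer: append permuted decoys, bootstrap a sparse fit, and keep the
    real features whose non-zero frequency beats the best decoy frequency."""
    n, p = X_real.shape
    decoys = np.column_stack([rng.permutation(X_real[:, j]) for j in range(p)])
    X = np.hstack([X_real, decoys])                     # overall matrix (claim 74)
    counts = np.zeros(2 * p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                # bootstrap subset
        counts += Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx]).coef_ != 0
    freq = counts / n_boot
    return np.flatnonzero(freq[:p] > freq[p:].max())    # decoy-derived threshold

rng = np.random.default_rng(1)
y = rng.normal(size=40)
layers = {"proteomic": rng.normal(size=(40, 6)), "cytomic": rng.normal(size=(40, 4))}
stable = {name: stable_set(X, y, rng) for name, X in layers.items()}
print(stable)  # per-layer stable features, combined downstream into one stable set
```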

[0081] Other features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0082] The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention.

[0083] FIG. 1 illustrates an exemplary method for the prediction of a patient’s clinical outcome after surgery using a machine learning algorithm that integrates multi-omic biological (e.g., single-cell immune responses and plasma proteomic data) and clinical data in accordance with various embodiments. Various embodiments provide for a method of guiding a surgeon or healthcare provider’s clinical decision using a Multi-Omic Bootstrap (MOB) machine learning algorithm to generate a predictive model for the probability of a patient developing postoperative neurocognitive disorder (POND).

[0084] FIG. 2 illustrates a process for generating stable features used to train a machine learning model to predict post-operative outcomes in accordance with an embodiment.

[0085] FIG. 3 illustrates a process for selecting stable features in accordance with an embodiment of the invention.

[0086] FIGS. 4A-C illustrate an exemplary methodology for the MOB machine learning model that integrates biological and clinical data for the prediction of POND in accordance with various embodiments.

[0087] FIGS. 5A-B illustrate exemplary pseudo-code for MOB algorithms in accordance with various embodiments.

[0088] FIG. 6 illustrates an exemplary workflow for the identification of a predictive model of POND in patients undergoing abdominal surgery in accordance with various embodiments.

[0089] FIG. 7 illustrates a block diagram of components of a processing system in a computing device that can be used to generate a risk score for POND in accordance with an embodiment of the invention.

[0090] FIG. 8 illustrates a network diagram of a distributed system to generate a risk score for POND in accordance with an embodiment of the invention.

[0091] FIGS. 9A-C illustrate an exemplary flowchart for an exemplary proof of concept experiment in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

[0092] Surgery is associated with significant tissue trauma, triggering a programmed inflammatory response that engages the innate and adaptive branches of the immune system. Within hours of surgical incision, a highly diverse network of innate immune cells (including monocytes, neutrophils and their subsets) is activated in response to circulating DAMPs (damage-associated molecular patterns) and inflammatory cytokines (e.g., HMGB1, TNFα, and IL-1β). Following the early innate immune response to surgery, a compensatory, anti-inflammatory adaptive immune response has traditionally been described. However, recent transcriptomic and mass cytometry analyses suggest that adaptive immune responses are mobilized jointly with innate immune responses and coincide with the activation of specialized immunosuppressive immune cell subsets, such as myeloid-derived suppressor cells (MDSCs). In the context of uncomplicated surgical recovery, innate and adaptive responses synergize to orchestrate the pro- and anti-inflammatory (pro-resolving) processes required for pathogen defense, tissue remodeling, and the resolution of pain and inflammation after injury. (See e.g., Stoecklein VM et al. J Leukoc Biol 2012; Gaudilliere B et al. Sci Transl Med 2014; 6(255):255ra131; the disclosures of which are hereby incorporated by reference herein in their entirety.)

[0093] As noted previously, there are currently no tools to predict or assess an individual’s risk for POND. Emerging evidence suggests that the peripheral immune response to surgical trauma contributes to important neuroinflammatory processes implicated in the pathogenesis of postoperative neurocognitive disorder (POND). As such, the comprehensive analysis of patients’ immune systems in peripheral blood is a powerful approach to searching for mechanistic and easily accessible predictive biomarkers of POND. Thus, the integration of biological parameters echoing mechanisms that drive the pathogenesis of POND is a highly plausible approach to increase risk prediction accuracy. Such predictive methods and systems can allow the development of personalized preventive interventions tailored to individual patients, including cognitive prehabilitation training, physical exercises, and preoperative geriatric consultation, any other effective intervention, and combinations thereof.

[0094] A major impediment has been the lack of high-content, functional assays that can characterize the complex, multicellular inflammatory response to surgery with single-cell resolution. In addition, analytical tools that can integrate single-cell immunological data with other 'omics and clinical data to predict the development of POND are lacking. Thus, there is a need for improved measures for the diagnosis, prognosis, treatment, management, and therapeutic development of POND after surgery.

[0095] High-throughput omics assays, including (but not limited to) metabolomic, proteomic, and cytometric immunoassay data, can potentially capture complex mechanisms of diseases and biological processes by providing thousands of measurements systematically obtained on each biological sample.

[0096] The analysis of mass cytometry immunoassays, as well as other omics assays, typically has two related goals addressed by dichotomous approaches. The first goal is to predict the outcome of interest and identify the biomarkers that are the best set of predictors of the considered outcome; the second goal is to identify potential pathways implicated in the disease, offering a better understanding of the underlying biology. The first goal is addressed by deploying machine learning methods and fitting a prediction model that typically selects a handful of the most informative biomarkers among thousands of measurements. The second goal is usually addressed by performing a univariate analysis of each measurement to determine the significance of that measurement with respect to the outcome by evaluating its p-value, which is then adjusted for multiple hypothesis testing.
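
The univariate arm described above amounts to testing each measurement against the outcome and adjusting the resulting p-values for multiple hypothesis testing. A minimal sketch follows; the library choices and the Mann-Whitney test plus Benjamini-Hochberg adjustment are ours, as the application does not prescribe a specific test or correction.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                      # 200 measurements, 40 samples
outcome = np.array([True] * 20 + [False] * 20)      # binary outcome of interest

# Univariate test of every measurement against the outcome
pvals = np.array([mannwhitneyu(X[outcome, j], X[~outcome, j]).pvalue
                  for j in range(X.shape[1])])

# Adjust for multiple hypothesis testing (Benjamini-Hochberg FDR here)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(int(reject.sum()), "measurements remain significant after adjustment")
```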

[0097] In the context of machine learning, omics data - characterized by a high number of features p and a much smaller number of samples n - fall in the scenario for which p ≫ n. The gold-standard machine learning methodology for this scenario is the use of regularized regression or classification methods, and specifically sparse linear models, such as the Lasso (see e.g., Tibshirani, Robert. "Regression shrinkage and selection via the lasso." Journal of the Royal Statistical Society: Series B (Methodological) 58.1 (1996): 267-288; the disclosure of which is hereby incorporated by reference herein in its entirety) and the Elastic Net (see e.g., Zou, Hui, and Trevor Hastie. "Regularization and variable selection via the elastic net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67.2 (2005): 301-320; the disclosure of which is hereby incorporated by reference herein in its entirety). Consider for instance the following linear model, given by:

Y = Xβ + ε

where X = (X1, ..., Xp) ∈ ℝ^(n×p) and Y = (Y1, ..., Yn) ∈ ℝ^n are respectively the input and the response variables; ε = (ε1, ..., εn) ∈ ℝ^n is the random noise with independent, identically distributed components; and β = (β1, ..., βp) ∈ ℝ^p are the coefficients associated with each feature, which need to be learned. Sparse linear models add a regularization of the model coefficients β, which allows for balancing the bias-variance tradeoff and prevents overfitting of models. The Lasso and the Elastic Net use L1-regularization in the model, inducing sparsity in the fit of the coefficients β. In the optimal fit of such models, we end up determining a subset S = {βk : βk ≠ 0}, with many of the coefficients β becoming zero, resulting in only a subset of features playing a role in the model.
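
To make the p ≫ n setting concrete, the following sketch fits the Lasso and Elastic Net of the cited references on synthetic data generated from the linear model above; the regularization parameters are illustrative, not tuned.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 500                                  # p >> n, typical for omics data
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                                  # only 5 truly informative features
y = X @ beta + rng.normal(size=n)               # Y = X*beta + noise, as above

lasso = Lasso(alpha=0.2).fit(X, y)              # L1 penalty induces sparsity
enet = ElasticNet(alpha=0.2, l1_ratio=0.5).fit(X, y)  # mixed L1/L2 penalty

print(np.count_nonzero(lasso.coef_), "of", p, "Lasso coefficients are non-zero")
print(np.count_nonzero(enet.coef_), "of", p, "Elastic Net coefficients are non-zero")
```

With the L1 penalty active, only a small subset of the 500 coefficients remains non-zero, which is the sparsity property exploited by the feature selection procedures described herein.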

[0098] Instability is an inherent problem in feature selection for machine learning models. Since the learning phase of the model is performed on a finite data sample, any perturbation of the data may yield a somewhat different set of selected variables. In settings where performance is evaluated via cross-validation, this implies that the Lasso yields a somewhat different set of chosen biomarkers in each fold, making any biological interpretation of the result impossible. Consistent feature selection with the Lasso is challenging, as it is achieved only under restrictive conditions. Most sparse techniques such as the Lasso can neither quantify how far the chosen model is from the correct one nor quantify the variability of the chosen features.

[0099] Another major limitation of existing methods is the difficulty of integrating different sources of biological information. Most machine learning algorithms use input data agnostically in the learning process of the models. The main challenge lies in integrating multiple sources of data, with their differences in modality, size, and signal-to-noise ratio, into the learning process. Current approaches typically yield biased assessments of the contribution of individual sources of data when those sources are juxtaposed as a single dataset. Finally, it is key to use the identified informative features from different layers together to optimize the predictive power of such algorithms. Most methods, when ensembling results from individual data sources, also lack the capacity to assess the individual interactions between features that are key to modeling the biological mechanisms at play.

[0100] Turning now to the drawings, systems and methods to generate a POND risk prediction and uses thereof are provided. In many embodiments, compositions and methods are provided for the prediction, classification, diagnosis, and/or theranosis of a clinical outcome following surgery in a subject based on the integration of multi-omic biological and clinical data using a machine learning model (e.g., Fig. 1). Many embodiments provide methods to generate a predictive model of a patient’s probability to develop POND. In many embodiments the predictive model is obtained by quantitating specific biological and clinical features, before and/or after surgery. Various embodiments use at least one omic (including, but not limited to, genomic, cytomic, proteomic, transcriptomic, and metabolomic) feature in combination with clinical data to generate the predictive model. Various embodiments utilize a machine learning model to integrate the various clinical and/or omic (e.g., cytomic, proteomic, transcriptomic, metabolomic, etc.) features to generate a predictive model. In some embodiments, the clinical outcome is the development of POND. A predictive model in accordance with many embodiments can indicate a patient’s risk for developing POND.

[0101] Once a classification or prognosis has been made, it can be provided to a patient or caregiver. The classification can provide prognostic information to guide the healthcare provider’s or surgeon’s clinical decision-making, such as delaying or adjusting the timing of surgery, adjusting the surgical approach, changing the medical approach (e.g., non-invasive, less invasive, and/or other therapy which avoids surgery), adjusting the type and timing of antibiotic and immune-modulatory regimens, personalizing or adjusting prehabilitation health optimization programs, planning for a longer time in the hospital before or after surgery or planning for spending time in a managed care facility, prehabilitative therapies (such as cognitive prehabilitation training, physical exercises, and preoperative geriatric consultation, any other effective intervention, and combinations thereof), and the like. Appropriate care can reduce the rate of POND, the length of hospital stays, and/or the rate of readmission for patients following surgery.

[0102] As illustrated in Fig. 1, various embodiments are directed to methods of predicting a clinical outcome for an individual undergoing surgery (e.g., patient). Many embodiments collect a patient sample at 102. Such samples can be collected at any time before surgery or after surgery. In some embodiments, the sample is collected up to a week (7 days) before or after surgery. In certain embodiments, the sample is collected 1 day, 2 days, 3 days, 4 days, 5 days, 6 days, or 7 days before surgery, while some embodiments collect a sample 1 day, 2 days, 3 days, 4 days, 5 days, 6 days, or 7 days after surgery. Additional embodiments collect a sample on the day of surgery, including before and/or after surgery, including immediately before and/or after surgery. Certain embodiments collect multiple samples before, after, or before and after surgery, anesthesia, and/or any other procedural step included within a particular surgical or operational protocol.

[0103] At 104, many embodiments obtain omic data (e.g., proteomic, cytomic, and/or any other omic data) from the sample. Certain embodiments combine multiple omic data — e.g., plasma proteomics (e.g., analysis of plasma protein expression levels) and single-cell cytomics (e.g., single-cell analysis of circulating immune cell frequency and signaling activities) — as multi-omic data. Certain embodiments obtain clinical data for the individual. Clinical data in accordance with various embodiments includes one or more of medical history, age, weight, body mass index (BMI), sex/gender, current medications/supplements, functional status, emergency case, steroid use for a chronic condition, ascites, disseminated cancer, diabetes, hypertension, congestive heart failure, dyspnea, smoking history, history of severe Chronic Obstructive Pulmonary Disease (COPD), dialysis, acute renal failure, and/or any other relevant clinical data. Clinical data can also be derived from clinical risk scores such as the American Society of Anesthesiologists (ASA) or the American College of Surgeons (ACS) risk score.

[0104] Additional embodiments generate a predictive model of surgical complications, such as POND, at 106. Many embodiments utilize a machine learning model, such as described herein. Various embodiments operate in a pipelined manner, such that data, obtained or collected, are immediately sent to a machine learning model to generate an integrated risk score for developing POND. Some embodiments house the machine learning model locally, such that the integrated risk score is generated without network communication, while some embodiments operate the machine learning model on a server or other remote device, such that clinical data and multi-omics data are transmitted via a network, and the integrated risk score for developing POND is returned to a medical professional/practitioner at their local institution, clinic, hospital, and/or other medical facility.

[0105] At 108, further embodiments adjust the treatment of the individual based on the integrated risk score for developing POND. In various embodiments, the adjustment can include preoperative interventions (cognitive prehabilitation training, physical exercises, preoperative geriatric consultation, and combinations thereof), peroperative interventions (multimodal pain management, opioid-sparing analgesia, and combinations thereof), and postoperative interventions to compensate for increased risk as identified by the risk score. With this approach, therapeutic regimens can be individualized and tailored according to the predicted probability for a patient to develop POND, thereby providing a regimen that is individually appropriate.

[0106] It should be noted that the embodiment illustrated in Fig. 1 is illustrative of various steps, features, and details that can be implemented in various embodiments and is not intended to be exhaustive or limiting on all embodiments. Additionally, various embodiments may include additional steps, which are not described herein, and/or fewer steps (e.g., omit certain steps) than illustrated and described. Various embodiments may also repeat certain steps where additional data, predictions, or procedures can be updated for an individual, such as repeating the generation of a predictive model at 106 to identify whether POND is more or less likely to develop in the individual and to update the risk score. Further embodiments may also obtain samples or clinical data from a third party, such as a collaborating, subordinate, or other individual, and/or may use a sample that has been stored or previously collected. Certain embodiments may even perform certain actions or features in a different order than illustrated or described and/or perform some actions or features simultaneously or relatively simultaneously (e.g., one action may begin before another action has finished).

Definitions

[0107] Most of the words used in this specification have the meaning that would be attributed to those words by one skilled in the art. Words specifically defined in the specification have the meaning provided in the context of the present teachings as a whole, as typically understood by those skilled in the art. In the event that a conflict arises between an art-understood definition of a word or phrase and a definition of the word or phrase as specifically taught in this specification, the specification shall control.

[0108] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.

[0109] It must be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.

[0110] The terms "subject," "individual," and "patient" are used interchangeably herein to refer to a vertebrate, preferably a mammal, more preferably a human. Mammalian species that provide samples for analysis include canines; felines; equines; bovines; ovines; etc. and primates, particularly humans. Animal models, particularly small mammals, e.g. murine, lagomorpha, etc. can be used for experimental investigations. The methods of the invention can be applied for veterinary purposes. The terms “biomarker,” “biomarkers,” “marker”, “features”, or “markers” for the purposes of the invention refer to, without limitation, proteins together with their related metabolites, mutations, variants, polymorphisms, phosphorylation, modifications, fragments, subunits, degradation products, elements, and other analytes or sample-derived measures. Markers can include expression levels of an intracellular protein or extracellular protein. Markers can also include combinations of any one or more of the foregoing measurements, including temporal trends and differences. Broadly used, a marker can also refer to an immune cell subset.

[0111] As used herein, the term “omic” or “-omic” data refers to data generated to quantify pools of biological molecules, or processes that translate into the structure, function, and dynamics of an organism or organisms. Examples of omic data include (but are not limited to) genomic, transcriptomic, proteomic, metabolomic, and cytomic data, among others.

[0112] As used herein the term “cytomic” data refers to omic data generated using a technology or analytical platform that allows quantifying biological molecules or processes at the single-cell level. Examples of cytomic data include (but are not limited to) data generated using flow cytometry, mass cytometry, single-cell RNA sequencing, and cell imaging technologies, among others.

[0113] The term “inflammatory response” refers to the development of a humoral (antibody-mediated) and/or a cellular response, in which the cellular response may be mediated by innate immune cells (such as neutrophils or monocytes) or by antigen-specific T cells or their secretion products. An "immunogen" is capable of inducing an immunological response against itself on administration to a mammal or due to autoimmune disease.

[0114] To “analyze” includes determining a set of values associated with a sample by measurement of a marker (such as, e.g., the presence or absence of a marker or constituent expression levels) in the sample and comparing the measurement against a measurement in a sample or set of samples from the same subject or other control subject(s). The markers of the present teachings can be analyzed by any of various conventional methods known in the art. To “analyze” can include performing a statistical analysis, e.g., normalization of data, determination of statistical significance, determination of statistical correlations, clustering algorithms, and the like.

[0115] A “sample” in the context of the present teachings refers to any biological sample that is isolated from a subject, generally a blood or plasma sample, which may comprise circulating immune cells. A sample can include, without limitation, an aliquot of body fluid, plasma, serum, whole blood, PBMC (white blood cells or leucocytes), tissue biopsies, dissociated cells from a tissue sample, a urine sample, a saliva sample, synovial fluid, lymphatic fluid, ascites fluid, and interstitial or extracellular fluid. "Blood sample" can refer to whole blood or a fraction thereof, including blood cells, plasma, serum, white blood cells, or leucocytes. Samples can be obtained from a subject by means including but not limited to venipuncture, biopsy, needle aspirate, lavage, scraping, surgical incision, intervention, or other means known in the art.

[0116] A “dataset” is a set of numerical values resulting from the evaluation of a sample (or population of samples) under a desired condition. The values of the dataset can be obtained, for example, by experimentally obtaining measures from a sample and constructing a dataset from these measurements; or alternatively, by obtaining a dataset from a service provider such as a laboratory, or from a database or a server on which the dataset has been stored. Similarly, the term “obtaining a dataset associated with a sample” encompasses obtaining a set of data determined from at least one sample. Obtaining a dataset encompasses obtaining a sample, and processing the sample to experimentally determine the data, e.g., via measuring antibody binding, or other methods of quantitating a signaling response. The phrase also encompasses receiving a set of data, e.g., from a third party that has processed the sample to experimentally determine the dataset.

[0117] “Measuring” or “measurement” in the context of the present teachings refers to determining the presence, absence, quantity, amount, or effective amount of a substance in a clinical or subject-derived sample, including the presence, absence, or concentration levels of such substances, and/or evaluating the values or categorization of a subject's clinical parameters based on a control, e.g. baseline levels of the marker.

[0118] Classification can be made according to predictive modeling methods that set a threshold for determining the probability that a sample belongs to a given class. The probability preferably is at least 50%, or at least 60%, or at least 70% or at least 80% or higher. Classifications also can be made by determining whether a comparison between an obtained dataset and a reference dataset yields a statistically significant difference. If so, then the sample from which the dataset was obtained is classified as not belonging to the reference dataset class. Conversely, if such a comparison is not statistically significantly different from the reference dataset, then the sample from which the dataset was obtained is classified as belonging to the reference dataset class.

[0119] The predictive ability of a model can be evaluated according to its ability to provide a quality metric, e.g. Area Under the Curve (AUC) or accuracy, of a particular value, or range of values. In some embodiments, a desired quality threshold is a predictive model that will classify a sample with an accuracy of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, at least about 0.95, or higher. As an alternative measure, a desired quality threshold can refer to a predictive model that will classify a sample with an AUC of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, or higher.

[0120] As is known in the art, the relative sensitivity and specificity of a predictive model can be “tuned” to favor either the specificity metric or the sensitivity metric, where the two metrics have an inverse relationship. The limits in a model as described above can be adjusted to provide a selected sensitivity or specificity level, depending on the particular requirements of the test being performed. One or both of sensitivity and specificity can be at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, or higher.

[0121] As used herein, the term "theranosis" refers to the use of results obtained from a prognostic or diagnostic method to direct the selection of, maintenance of, or changes to a therapeutic regimen, including but not limited to the choice of one or more therapeutic agents, changes in dose level, changes in dose schedule, changes in mode of administration, and changes in formulation. Diagnostic methods used to inform a theranosis can include any that provides information on the state of a disease, condition, or symptom.

[0122] The terms "therapeutic agent", "therapeutic capable agent" or "treatment agent" are used interchangeably and refer to a molecule, compound or any non-pharmacological regimen that confers some beneficial effect upon administration to a subject. The beneficial effect includes enablement of diagnostic determinations; amelioration of a disease, symptom, disorder, or pathological condition; reducing or preventing the onset of a disease, symptom, disorder or condition; and generally counteracting a disease, symptom, disorder or pathological condition.

[0123] As used herein, "treatment" or "treating," or "palliating" or "ameliorating" are used interchangeably. These terms refer to an approach for obtaining beneficial or desired results including but not limited to a therapeutic benefit and/or a prophylactic benefit. By therapeutic benefit is meant any therapeutically relevant improvement in or effect on one or more diseases, conditions, or symptoms under treatment. For prophylactic benefit, the compositions may be administered to a subject at risk of developing a particular disease, condition, or symptom, or to a subject reporting one or more of the physiological symptoms of a disease, even though the disease, condition, or symptom may not have yet been manifested.

[0124] The term "effective amount" or "therapeutically effective amount" refers to the amount of an agent that is sufficient to effect beneficial or desired results. The therapeutically effective amount will vary depending upon the subject and disease condition being treated, the weight and age of the subject, the severity of the disease condition, the manner of administration, and the like, which can readily be determined by one of ordinary skill in the art. The term also applies to a dose that will provide an image for detection by any one of the imaging methods described herein. The specific dose will vary depending on the particular agent chosen, the dosing regimen to be followed, whether it is administered in combination with other compounds, timing of administration, the tissue to be imaged, and the physical delivery system in which it is carried.

[0125] "Suitable conditions" shall have a meaning dependent on the context in which this term is used. That is, when used in connection with an antibody, the term shall mean conditions that permit an antibody to bind to its corresponding antigen. When used in connection with contacting an agent to a cell, this term shall mean conditions that permit an agent capable of doing so to enter a cell and perform its intended function. In one embodiment, the term "suitable conditions" as used herein means physiological conditions.

[0126] The term "antibody" includes full length antibodies and antibody fragments, and can refer to a natural antibody from any organism, an engineered antibody, or an antibody generated recombinantly for experimental, therapeutic, or other purposes as further defined below. Examples of antibody fragments known in the art include Fab, Fab', F(ab')2, Fv, scFv, and other antigen-binding subsequences of antibodies, whether produced by the modification of whole antibodies or synthesized de novo using recombinant DNA technologies. The term "antibody" comprises monoclonal and polyclonal antibodies. Antibodies can be antagonists, agonists, neutralizing, inhibitory, or stimulatory. They can be humanized, glycosylated, bound to solid supports, and possess other variations.

Machine learning methods for predicting surgical outcomes

[0127] To obtain a predictive model of a clinical outcome after surgery, many embodiments employ a machine learning method that integrates the single-cell analysis of immune cell responses using mass cytometry with the multiplex assessment of inflammatory plasma proteins in blood samples collected from patients before or after surgery. Many embodiments employ a Multi-Omic Bootstrap (MOB) machine learning method to predict the development of POND after surgery. MOB, in accordance with various embodiments, integrates one or more omic data categories (e.g., categories described herein) by extracting the most robust features from each data layer before combining these features, thereby ensuring the stability of the features selected during statistical modeling of omic datasets.

[0128] The development of the stability selection method (See e.g., Meinshausen, Nicolai, and Peter Bühlmann. Ann. Statist. 34(3): 1436-1462, June 2006; the disclosure of which is hereby incorporated by reference herein in its entirety) is a key element in the development of the MOB algorithm. While the problem of variability is inherent and cannot be completely overcome, stability selection can characterize this variation by considering the frequency at which each feature is chosen when multiple Lasso models are obtained on subsampled data. The selection frequency can offer a quantitative measure of the importance of each feature that is readily interpretable from the biological standpoint. It has been shown that stability selection requires much weaker assumptions for asymptotically consistent variable selection compared to the Lasso. Stated differently, stability selection, instead of selecting one model, subsamples the data repeatedly and selects stable variables, that is, variables that occur in a large fraction of the resulting models. The chosen stable variables are defined as those having a selection frequency above a chosen threshold:

$$\hat{S} = \left\{ k : \max_{\lambda \in \Lambda} \hat{\Pi}_k^{\lambda} \geq \pi_{thr} \right\},$$

where $\hat{\Pi}_k^{\lambda}$ is the selection frequency of feature $k$ for the regularization parameter $\lambda$.
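By way of illustration only, the following Python sketch shows the core of such a stability selection: a Lasso is refit on repeated subsamples, and the fraction of fits in which each feature receives a non-zero coefficient is recorded. The function name, the use of scikit-learn, and all parameter values are assumptions for exposition, not a prescribed implementation.

```python
# Minimal stability selection sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import Lasso

def selection_frequencies(X, y, lam=0.1, n_subsamples=200, frac=0.5, seed=0):
    """Fraction of subsampled Lasso fits in which each feature is selected."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # subsample
        fit = Lasso(alpha=lam, max_iter=10_000).fit(X[idx], y[idx])
        counts += np.abs(fit.coef_) > 1e-5  # "non-zero" per the specification
    return counts / n_subsamples  # estimate of Pi_k for this lambda

# Stable set: features whose frequency exceeds a chosen threshold pi_thr,
# e.g. stable = np.where(selection_frequencies(X, y) >= 0.6)[0]
```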

[0129] One of the difficulties of the previous method is that it is difficult to assess noise. As the goal is to discriminate noisy variables from predictive ones, the use of negative control features can be an appropriate approach to develop an internal noise filter in the learning process. Negative control features designate synthetically made noisy features. Systems and methods in accordance with embodiments of the invention can adapt the thresholds previously mentioned from the distribution of the artificial features in the stability selection process, thereby incorporating synthetic noise into the learning process. Two ways to generate these artificial features have been considered. Both techniques extend the initial input, ending up with an input matrix $[X \;\; \tilde{X}]$, where $\tilde{X}$ is the matrix of synthetic negative controls. In many embodiments, the first technique, called ‘decoy’, relies on a stochastic construction. Each synthetic feature may be built by random permutation of its original counterpart (the permutation is independent for each synthetic feature). This process is done before each subsampling of the data. It is then possible to define a threshold from the behavior of the decoy features in the stability selection, for instance:

$$\pi_{thr} = c \cdot \operatorname{mean}_{k}\left(\max_{\lambda \in \Lambda} \hat{\Pi}_{k+p}^{\lambda}\right),$$

where $c$ is a ratio set by the user and $\operatorname{mean}_{k}(\max_{\lambda} \hat{\Pi}_{k+p}^{\lambda})$ is the mean of the maximum selection frequency of the decoy features. The other technique uses model-X knockoffs (See e.g., Candès, Emmanuel, et al. "Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80.3 (2018): 551-577; the disclosure of which is hereby incorporated by reference herein in its entirety) to build the synthetic negative controls. The construction makes it possible to replicate the distribution of the original data (notably, the knockoff correlation structure mimics the original one) and guarantees that the distribution of $\tilde{X}$ is independent of the outcome $Y$ conditionally on $X$ ($\tilde{X} \perp Y \mid X$). It is then possible to compare each pair of true/knockoff variables after performing the stability selection and to select feature $k$ if:

$$\max_{\lambda \in \Lambda} \hat{\Pi}_{k}^{\lambda} \geq \max_{\lambda \in \Lambda} \hat{\Pi}_{k+p}^{\lambda} + cst,$$

where $\hat{\Pi}_{k}^{\lambda}$ and $\hat{\Pi}_{k+p}^{\lambda}$ are the selection frequencies of feature $k$ and of its knockoff counterpart, and $cst$ is a positive constant defined by the user.
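The two selection rules above reduce to a few lines of code. In the sketch below, a hypothetical frequency matrix holds the selection frequencies for each value of λ, with the real features in the first p columns and their synthetic counterparts (decoys or knockoffs) in the last p columns; this layout, the helper names, and the default constants are illustrative assumptions.

```python
import numpy as np

# freqs: array of shape (#lambdas, 2p); columns 0..p-1 are real features,
# columns p..2p-1 their decoy/knockoff counterparts (assumed layout).
def select_with_decoys(freqs, p, c=3.0):
    max_freq = freqs.max(axis=0)          # max selection frequency over lambda
    pi_thr = c * max_freq[p:].mean()      # pi_thr = c * mean_k(max Pi_{k+p})
    return np.where(max_freq[:p] >= pi_thr)[0]

def select_with_knockoffs(freqs, p, cst=0.1):
    max_freq = freqs.max(axis=0)
    # keep feature k if it beats its knockoff counterpart k+p by at least cst
    return np.where(max_freq[:p] >= max_freq[p:] + cst)[0]
```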

[0130] The machine learning model is typically trained, using among other steps, a bootstrap procedure on a plurality of individual data layers. A process for generating stable features used to train a machine learning model to predict post-operative outcomes in accordance with an embodiment is illustrated in Fig. 2. Process 200 generates (210) artificial features based on real features of an overall participating cohort. The generation of artificial features based on real features may be referred to as spiking of artificial features. In many embodiments, the artificial features are generated using a mathematical operation performed on the feature values of the real or non-artificial features. Process 200 concatenates (220) the artificial features to the real features to create an overall matrix. Process 200 obtains (230) a plurality of subsets of features from the overall matrix. In many embodiments, these subsets are the bootstraps in the bootstrap learning procedure.

[0131] Process 200 computes (240) a plurality of models based on each of the plurality of subsets of features. In numerous embodiments, each of the plurality of models is computed based on the values of each feature in each of the obtained subsets of features. The plurality of models can be based on a sparse technique such as the Lasso algorithm or the Elastic Net algorithm. In many embodiments, features that contribute to the evaluation of potential POND development have a non-zero coefficient when fitting a model such as the Lasso. Process 200 selects (250) stable features from each of the plurality of subsets of features. In several embodiments, stable features are selected if their selection frequency is above a computed threshold for each subset. Process 200 combines (260) the stable features from each of the plurality of subsets into a set of stable features for the type of data. In other words, the training process can extract the most relevant features in each omic and concatenate these features.
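A self-contained sketch of steps 210-260, under the assumptions already noted (permutation decoys, a scikit-learn Lasso base learner, and a decoy-calibrated cutoff; all names are illustrative), might read:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stable_features_for_layer(X, y, lam=0.1, n_boot=500, c=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(2 * p)
    for _ in range(n_boot):
        # Spike fresh artificial features by permuting each column (210),
        # and concatenate them with the real features (220).
        decoys = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
        Xa = np.hstack([X, decoys])
        idx = rng.choice(n, size=n, replace=True)      # bootstrap subset (230)
        fit = Lasso(alpha=lam, max_iter=10_000).fit(Xa[idx], y[idx])  # model (240)
        counts += np.abs(fit.coef_) > 1e-5
    freq = counts / n_boot
    thr = c * freq[p:].mean()                          # decoy-derived cutoff
    return np.where(freq[:p] >= thr)[0]                # stable features (250)

# Combining across layers (260), e.g. with layers = {"cytome": X1, "proteome": X2}:
# stable = {name: stable_features_for_layer(X, y) for name, X in layers.items()}
# X_final = np.hstack([layers[name][:, idx] for name, idx in stable.items()])
```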

[0132] Each data layer may represent one type of data from the plurality of possible features and at least one artificial feature. Each type is for example chosen among a group consisting of: genomic, transcriptomic, proteomic, cytomic, metabolomic, clinical and demographic data. Each data layer can include data for a population of individuals, and each feature can include feature values for all individuals in the population of individuals. During machine learning, for each data layer, the obtained feature values for the population of individuals are typically arranged in a matrix X with n rows and p columns, where each row corresponds to a respective individual and each column corresponds to a respective feature. In other words, the matrix X is a concatenation of p vectors, each one being related to a respective feature and containing n feature values, with typically one feature value for each individual.

[0133] For a respective data layer, each artificial feature may be obtained from a non-artificial feature among the plurality of features, via a mathematical operation performed on the feature values of the non-artificial feature. The mathematical operation is for example chosen among the group consisting of: a permutation, a sampling, a combination, a knockoff method and an inference. The permutation is for instance a total permutation without replacement of the feature values. The sampling is typically a sampling with replacement of some of the feature values or a sampling without replacement of the feature values. The combination is for instance a linear combination of the feature values. The knockoff method is for instance a model-X knockoff applied to the feature values. The inference is typically a fit of a statistical distribution to the feature values, such as a Gaussian distribution, an exponential distribution, a uniform distribution or a Poisson distribution, followed by sampling at random from the fitted distribution.
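For concreteness, each of these operations (except the model-X knockoff, which requires the full matrix X) can be applied to a single feature vector as follows; the variable names and values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.2, 0.4, 2.5, 0.9, 1.7])   # feature values for n = 5 individuals

perm     = rng.permutation(v)                          # total permutation
samp_wr  = rng.choice(v, size=v.size, replace=True)    # sampling with replacement
samp_wor = rng.choice(v, size=v.size, replace=False)   # sampling without replacement
combo    = 0.5 * v + 0.5 * rng.permutation(v)          # linear combination
inferred = rng.normal(v.mean(), v.std(), size=v.size)  # fit a Gaussian, then sample
```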

[0134] The model can include weights βi for a set of selected biological and clinical or demographic features, such weights βi being typically derived from initial weights Wj repeatedly modified during the machine learning of the model. During the machine learning and for each data layer, for every repetition of the bootstrap, the initial weights Wj may be computed for the plurality of features and the at least one artificial feature associated with that data layer, by using an initial statistical learning technique.

[0135] The initial statistical learning technique is typically a sparse technique or a non-sparse technique. The initial statistical learning technique is for example a regression technique or a classification technique. Accordingly, the initial statistical learning technique is preferably chosen from among the group consisting of: a sparse regression technique, a sparse classification technique, a non-sparse regression technique and a non-sparse classification technique.

[0136] As an example, the initial statistical learning technique is therefore chosen from among the group consisting of: a linear or logistic linear regression technique with L1 or L2 regularization, such as the Lasso technique or the Elastic Net technique (see e.g., Tibshirani, and Zou and Hastie, cited above); a model adapting linear or logistic linear regression techniques with L1 or L2 regularization, such as the Bolasso technique (see e.g., Bach, Francis R. "Bolasso: model consistent lasso estimation through the bootstrap." Proceedings of the 25th International Conference on Machine Learning, 2008; the disclosure of which is hereby incorporated by reference herein in its entirety), the relaxed Lasso (see e.g., Meinshausen, Nicolai. "Relaxed lasso." Computational Statistics & Data Analysis 52.1 (2007): 374-393; the disclosure of which is hereby incorporated by reference herein in its entirety), the random-Lasso technique (see e.g., Wang, Sijian, et al. "Random lasso." The Annals of Applied Statistics 5.1 (2011): 468; the disclosure of which is hereby incorporated by reference herein in its entirety), the grouped-Lasso technique (see e.g., Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. Applications of the Lasso and Grouped Lasso to the Estimation of Sparse Graphical Models. Technical report, Stanford University, 2010; the disclosure of which is hereby incorporated by reference herein in its entirety), and the LARS technique (see e.g., Eyraud, Remi, Colin De La Higuera, and Jean-Christophe Janodet. "LARS: A learning algorithm for rewriting systems." Machine Learning 66.1 (2007): 7-31; the disclosure of which is hereby incorporated by reference herein in its entirety); a linear or logistic linear regression technique without L1 or L2 regularization; a non-linear regression or classification technique with L1 or L2 regularization; a Decision Tree technique; a Random Forest technique; a Support Vector Machine technique, also called SVM technique; a Neural Network technique; and a Kernel Smoothing technique.

[0137] In many embodiments, at least one selected feature may be determined for each data layer, based on statistical criteria depending on the computed initial weights Wj. The statistical criteria depend on significant weights among the computed initial weights Wj. The significant weights are for example non-zero weights, when the initial statistical learning technique is a sparse regression technique, or weights above a predefined weight threshold, when the initial statistical learning technique is a non-sparse regression technique.

[0138] As an example, the significant weights are non-zero weights, when the initial statistical learning technique is chosen from among the group consisting of: a linear or logistic linear regression technique with L1 or L2 regularization, such as the Lasso technique or the Elastic Net technique; a model adapting linear or logistic linear regression techniques with L1 or L2 regularization, such as the Bolasso technique, the relaxed Lasso, the random-Lasso technique, the grouped-Lasso technique, the LARS technique; a non-linear regression or classification technique with L1 or L2 regularization; and a Kernel Smoothing technique.

[0139] “Non-zero weight” refers to a weight which is, in absolute value, greater than a predefined very low threshold, such as 10⁻⁵, also noted 1e-5. Accordingly, “non-zero weight” typically refers to a weight greater than 10⁻⁵ in absolute value.

[0140] Alternatively, the significant weights are weights above the predefined weight threshold, when the initial statistical learning technique is chosen from among the group consisting of: a linear or logistic linear regression technique without L1 or L2 regularization; a Decision Tree technique; a Random Forest technique; a Support Vector Machine technique; and a Neural Network technique. In the example of the Neural Network technique, the significant weights are weights above the predefined weight threshold on an initial layer of the corresponding neural network.

[0141] The skilled person will observe that the Support Vector Machine technique is considered a sparse technique with support vectors, and the technique leads to only keeping the support vectors. The skilled person will also note that for the Decision Tree technique, the aforementioned weight corresponds to the feature importance, and accordingly that the significant weights correspond to the features for which the split in the decision tree induces a certain decrease in impurity.

[0142] Optionally, the initial weights Wj are further computed for a plurality of values of a hyperparameter λ, the hyperparameter λ being a parameter whose value is used to control the learning process. The hyperparameter λ is typically a regularization coefficient used according to a respective mathematical norm in the context of a sparse initial technique. The mathematical norm is for example a P-norm, with P being an integer.

[0143] As an example, the hyperparameter λ is an upper bound on the L1-norm of the initial weights Wj when the initial statistical learning technique is the Lasso technique, where the L1-norm refers to the sum of all absolute values of the initial weights.

[0144] As another example, the hyperparameter λ is an upper bound on both the L1-norm of the initial weights Wj and the L2-norm of the initial weights Wj when the initial statistical learning technique is the Elastic Net technique, where the L1-norm is defined above and the L2-norm refers to the square root of the sum of all squared values of the initial weights.

[0145] While specific processes generating stable features used to train a machine learning model to predict post-operative outcomes are described above, any of a variety of processes can be utilized to generate stable features as appropriate to the requirements of specific applications. In certain embodiments, steps may be executed or performed in any order or sequence not limited to the order and sequence shown and described. In a number of embodiments, some of the above steps may be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. In some embodiments, one or more of the above steps may be omitted.

[0146] For the feature selection, the statistical criteria depend for example on an occurrence frequency of the significant weights. A process for selecting stable features in accordance with an embodiment of the invention is illustrated in Fig. 3. Process 300 fits (310) a model on each of the obtained subsets of features. Process 300 extracts (320) the non-zero coefficients and associated features of each subset of features based on a set of hyperparameters. Process 300 obtains (330) the occurrence frequency of the extracted non-zero coefficients and associated features. As an example, the statistical criteria are that each feature is selected when its occurrence frequency is greater than a frequency threshold. For each feature, to determine the occurrence frequency, a unitary occurrence frequency may be calculated for each value of the hyperparameter λ, the unitary occurrence frequency being equal to the number of significant weights related to said feature over the successive bootstrap repetitions divided by the number of bootstrap repetitions used for said feature. The occurrence frequency is then typically equal to the highest unitary occurrence frequency among the unitary occurrence frequencies calculated for all the values of the hyperparameter λ.
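The unitary occurrence frequencies and their maximum over the hyperparameter grid might be computed as sketched below (assumed names and grid; the counts matrix is, in effect, the stability-path matrix discussed later):

```python
import numpy as np
from sklearn.linear_model import Lasso

def occurrence_frequency(X, y, lambdas, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros((len(lambdas), p))
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)      # one bootstrap repetition
        for i, lam in enumerate(lambdas):
            fit = Lasso(alpha=lam, max_iter=10_000).fit(X[idx], y[idx])
            counts[i] += np.abs(fit.coef_) > 1e-5      # significant weights
    unitary = counts / n_boot     # unitary occurrence frequency per (lambda, k)
    return unitary.max(axis=0)    # occurrence frequency: max over the lambda grid
```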

[0147] Process 300 estimates (340) a threshold of the occurrence frequency for each subset of features. The frequency threshold is typically computed according to the occurrence frequencies obtained for the artificial features. This frequency threshold is for example 2 standard deviations over the mean or the median of the occurrence frequencies obtained for the artificial features. Alternatively, the frequency threshold is 3 times the mean of the occurrence frequencies obtained for the artificial features. Still alternatively, the frequency threshold is equal to the maximum between one of the aforementioned examples of the calculated frequency threshold and a predefined frequency threshold.

[0148] Process 300 selects (350) the features with occurrence frequencies above the threshold in each subset. In many embodiments, the feature selection can be operated for each layer based on the statistical criteria. For example, the selected feature(s) are the one(s) which have their occurrence frequency greater than the frequency threshold.

[0149] While specific processes selecting stable features are described above, any of a variety of processes can be utilized to select stable features as appropriate to the requirements of specific applications. In certain embodiments, steps may be executed or performed in any order or sequence not limited to the order and sequence shown and described. In a number of embodiments, some of the above steps may be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. In some embodiments, one or more of the above steps may be omitted.

[0150] As an example, each value of the hyperparameter λ is chosen according to a predefined scheme of values between the lower and upper bounds of the chosen value range for the hyperparameter λ. As a variant, the values of the hyperparameter λ are evenly distributed between the lower and upper bounds of the chosen value range for the hyperparameter λ. The hyperparameter λ is typically between 0.5 and 100 when the initial statistical learning technique is the Lasso technique or the Elastic Net technique.

[0151] For the bootstrapping process, the number of bootstrap repetitions is typically between 50 and 100,000; preferably between 500 and 10,000; still preferably equal to 10,000.

[0152] During the machine learning, after the feature selection, the weights βi of the model are further computed using a final statistical learning technique on the data associated with the set of selected features.

[0153] The final statistical learning technique is typically a sparse technique or a non-sparse technique. The final statistical learning technique is for example a regression technique or a classification technique. Accordingly, the final statistical learning technique is preferably chosen from among the group consisting of: a sparse regression technique, a sparse classification technique, a non-sparse regression technique and a non-sparse classification technique.

[0154] As an example, the final statistical learning technique is therefore chosen from among the group consisting of: a linear or logistic linear regression technique with L1 or L2 regularization, such as the Lasso technique or the Elastic Net technique; a model adapting linear or logistic linear regression techniques with L1 or L2 regularization, such as the Bolasso technique, the soft-Lasso technique, the random-Lasso technique, the grouped-Lasso technique, or the LARS technique; a linear or logistic linear regression technique without L1 or L2 regularization; a non-linear regression or classification technique with L1 or L2 regularization; a Decision Tree technique; a Random Forest technique; a Support Vector Machine technique, also called SVM technique; a Neural Network technique; and a Kernel Smoothing technique.

[0155] During a usage phase subsequent to the machine learning, the risk score for developing POND is computed according to the measured values of the individual for the set of selected features.

[0156] As an example, the risk score for developing POND is a probability calculated according to a weighted sum of the measured values multiplied by the respective weights βi for the set of selected features, when the final statistical learning technique is a respective classification technique.

[0157] According to this example, the risk score for developing POND is typically calculated with the following equation:

$$P = \frac{Odd}{1 + Odd},$$

where $P$ represents the risk score for developing POND, and $Odd$ is a term depending on the weighted sum.

[0158] As a further example, Odd is an exponential of the weighted sum. Odd is for instance calculated according to the following equation:

$$Odd = \exp\left(\beta_0 + \sum_{i=1}^{p_{stable}} \beta_i X_i\right),$$

where $\exp$ represents the exponential function, $\beta_0$ represents a predefined constant value, $\beta_i$ represents the weight associated with a respective feature in the set of selected features, $X_i$ represents the measured value of the individual associated with the respective feature, and $i$ is an index associated with each selected feature, $i$ being an integer between 1 and $p_{stable}$, where $p_{stable}$ is the number of selected features for the respective layer.

[0159] The skilled person will notice that in the previous equation, the weights βi and the measured values Xi may be negative values as well as positive values.

[0160] As another example, the risk score for developing POND is a term depending on a weighted sum of the measured values multiplied by the respective weights βi for the set of selected features, when the final statistical learning technique is a respective regression technique.

[0161] According to this other example, the risk score for developing POND is equal to an exponential of the weighted sum, typically calculated with the previous equation.
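Both scoring variants reduce to a short computation; in the sketch below, the weights and measured values are placeholders rather than trained values.

```python
import numpy as np

def pond_risk_score(x, beta, beta0=0.0, classification=True):
    """Odd = exp(beta0 + sum_i beta_i * x_i); for classification,
    P = Odd / (1 + Odd); for regression, the score is Odd itself."""
    odd = np.exp(beta0 + np.dot(beta, x))
    return odd / (1.0 + odd) if classification else odd

# Example with three illustrative selected features:
# pond_risk_score(np.array([1.4, -0.2, 0.8]), np.array([0.9, -1.1, 0.3]))
```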

[0162] As an optional addition, during the machine learning and before obtaining artificial features, additional values of the plurality of non-artificial features are generated based on the obtained values and using a data augmentation technique. According to this optional addition, the artificial features are then obtained according to both the obtained values and the generated additional values.

[0163] According to this optional addition, the data augmentation technique is typically a non-synthetic technique or a synthetic technique. The data augmentation technique is for example chosen among the group consisting of: SMOTE technique, ADASYN technique and SVMSMOTE technique.

[0164] According to this optional addition, for a given non-artificial feature, the fewer values have been obtained, the more additional values are generated.

[0165] According to this optional addition, this generation of additional values using the data augmentation technique is an optional additional step before the bootstrapping process. According to the above, this generation allows “augmenting” the initial input matrix X and the corresponding output vector Y with the data augmentation algorithm, namely increasing the respective sizes of the matrix X and the vector Y. If the matrix X is of size (n, p) and the vector Y is of size (n), this generation step leads to an augmented X of size (n’, p) and an augmented Y of size (n’), where n’ > n.

[0166] This generation is preferably more sophisticated than the bootstrapping process. The goal is to ‘augment’ the inputs by creating synthetic samples, built using the obtained ones, and not by random duplication of samples. Indeed, if the non-artificial feature values were simply duplicated, the augmentation would not be fundamentally different from the bootstrapping process, where non-artificial feature values may already be oversampled and/or duplicated. In the optional addition of data augmentation, the bootstrapping process will therefore be fed with new data points added to the original ones.

[0167] For classification, the data augmentation technique is for example the SMOTE technique, also called the SMOTE algorithm or SMOTE. SMOTE first selects a minority class instance A at random and finds its K nearest minority class neighbors (using K-nearest neighbors). A synthetic instance is then created by choosing one of the K nearest neighbors B at random and connecting A and B to form a line segment in the feature space. The synthetic instances are generated as convex combinations of the two chosen instances. The skilled person will notice that this technique is also a way of artificially balancing the classes. As a variant, the data augmentation technique is the ADASYN technique or the SVMSMOTE technique.
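By way of example, the imbalanced-learn package provides one common SMOTE implementation (the present teachings do not prescribe a particular library; the data below are random placeholders):

```python
import numpy as np
from imblearn.over_sampling import SMOTE

X = np.random.default_rng(0).normal(size=(41, 24))   # e.g., one data layer
y = np.array([1] * 8 + [0] * 33)                     # minority class: POND cases

X_aug, y_aug = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
# X_aug has n' > n rows: the original samples plus synthetic minority-class
# points built as convex combinations of nearest-neighbor pairs.
```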

[0168] In the case of complications, namely when the determined risk is POND, the algorithm is applied to each layer independently. The layers used for determining the risk for developing POND are for example the following: the immune cell frequency (containing 24 cell frequency features), the basal signaling activity of each cell subset (312 basal signaling features), the signaling response capacity to each stimulation condition (six data layers containing 312 features each), and the plasma proteome (276 proteomic features).

[0169] As an example, for each layer, there are 41 samples. In other words, the number n of feature values for each feature is equal to 41 in this example. Accordingly, for the immune frequency layer, the dimensions of the matrix X are 41 samples (n) by 24 features (p). In the case of basal signaling, the matrix X is of dimension 41 x 312. Y is the vector of outcome values, namely the occurrence of POND. This vector Y is in this case a vector of length 41. Accordingly, one respective outcome value, i.e. one POND value, is determined for each sample.
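These dimensions can be mirrored in a toy construction (random placeholders stand in for real measurements; only three of the layers quoted above are shown, and the six stimulation-response layers of 312 features each would be added in the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 41                                              # samples per layer
layers = {
    "cell_frequency":  rng.normal(size=(n, 24)),    # 24 cell frequency features
    "basal_signaling": rng.normal(size=(n, 312)),   # 312 basal signaling features
    "plasma_proteome": rng.normal(size=(n, 276)),   # 276 proteomic features
}
Y = rng.integers(0, 2, size=n)                      # one POND outcome per sample
```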

[0170] In this example, the number M of bootstrap repetitions is chosen equal to 10,000, which allows for enough sampling to derive an estimate of the frequency of selection over artificial features.

[0171] The chosen value range for the hyperparameter λ is between 0.5 and 100, with the statistical learning technique being the Lasso technique or the Elastic Net technique.

[0172] In this example, the frequency threshold is chosen equal to 3 times the mean of the occurrence frequencies obtained for the artificial features, so as to reduce variability and to allow stringent control over the choice of the features.

[0173] A skilled person will notice that the mathematical operation used to obtain artificial features is the permutation or the sampling, and will understand that other mathematical operations would also be applicable, including the other ones mentioned in the above description, namely combination, knockoff and inference. Similarly, the statistical learning techniques used to compute initial weights may include sparse regression techniques, such as the Lasso and the Elastic Net, and the skilled person will also understand that other statistical learning techniques would also be applicable, including the other ones mentioned in the above description, namely non-sparse techniques and classification techniques. In several embodiments, the significant weights are non-zero weights, and the skilled person will also understand that other significant weights would also be applicable, such as weights above the predefined weight threshold, in accordance with the type of the initial statistical learning technique, as explained above.

[0174] Figs. 4A-C illustrate graphically the MOB algorithm used in accordance with an embodiment of the invention. In such embodiments, at 402, subsets are obtained from an original cohort with a procedure using repeated sampling with or without replacement on individual data layers. In numerous embodiments, artificial features are included by random sampling from the distribution of the original sample or by permutation and added to the original dataset. At 404, on each of the subsets, individual models are computed using, for example, a Lasso algorithm, and features are selected based on their contribution to the model (in the case of Lasso, non-zero features are selected). At 406, using the features selected for each model and by hyperparameter, many embodiments obtain stability paths that display the frequency of selection of each contributing feature (artificial or not). The distribution of selection of the artificial features is then used to estimate the distribution of the noise within the dataset. A cutoff for relevant biological or clinical features is computed based on the estimated distribution of the noise in the dataset. The relevant features from each layer are then used and combined in a final model for the prediction of relevant surgical outcomes. At 408, the final integration of the model occurs, where each of the individual layers is combined with a selection process similar to the process described in 402-406. In 408, all the top features are combined and used as predictors in a final layer.

[0175] Figs. 5A-B illustrate exemplary pseudo-code for MOB algorithms of various embodiments. In many embodiments, the MOB uses a procedure of multiple resampling with or without replacement, called bootstrap, on individual data layers. In each data layer and for every repetition of the bootstrap, simulated features are spiked into the original dataset to estimate the robustness of selecting a biological feature compared to an artificial feature. An optimal cutoff for biological or clinical features is selected using the distribution of the artificial features, which is used to estimate the behavior of noise relative to the robustness of the biological or clinical features in the data layer. Then, the MOB algorithm selects the features above an optimal threshold calculated from the distribution of noise in each layer and builds a final model with the features from each data layer passing the optimal threshold of robustness. In many embodiments, performance is benchmarked and the stability of feature selection is evaluated on simulated data and biological data.

[0176] In the embodiments demonstrated in Figs. 5A-B, such embodiments initially obtain subsets from the original cohort with a procedure using repeated sampling with or without replacement on individual data layers. For each bootstrap, artificial features are built by selecting the features (vectors of size p), one by one, from the original data matrix. To build an artificial feature, such embodiments either perform a random permutation (equivalent to randomly drawing without replacement all the values of the vector) or a random sampling (building a new vector of size p by randomly drawing with replacement p elements of the original feature). The process is repeated independently on each feature. Such embodiments concatenate the artificial features with the real features and then draw samples with or without replacement from this concatenated dataset.

[0177] Next, for each of the subsets, individual models are computed using, for example, the Lasso algorithm (Tibshirani, R. (1996). Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267-288.), and features are selected based on their contribution to the model (in the case of Lasso, non-zero features are selected). At this stage of the process, a contributing feature has a non-zero coefficient when fitting the Lasso. This would be the same for any other technique inducing sparsity, such as Elastic Net. For non-sparse regression techniques, an arbitrary contribution threshold would have to be defined. The algorithm is adaptable to the machine learning technique used. Lasso is a well-known sparse regression technique, but other techniques that select a subset of the original features can be used. For instance, the Elastic Net (EN), as a combination of Lasso and Ridge, would also work (Zou, H., & Hastie, T. (2005). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320.).

[0178] Further, using the features selected for each model and by hyperparameter, stability paths can be obtained, which display the frequency of selection of each contributing feature (artificial or real). A stability path is, before any graphical transformation, the output matrix of the process. Its size is (p, #{Λ}). Each value (feature i, λj) corresponds to the frequency of selection of feature i using the regularization parameter λj. From this matrix, such embodiments are able to display the path of each feature (e.g., Fig. 4B, 406), where each line corresponds to the frequency of selection of a feature across all values of λ tested. The distribution of selection of the artificial features is then used to estimate the distribution of the noise within the dataset. A cutoff for relevant biological or clinical features is computed based on the estimated distribution of the noise in the dataset. Only the relevant features from each layer are then used and combined in a final model for the prediction of relevant surgical outcomes. In the embodiment of Fig. 5B, the final model uses the selected features obtained on each data layer. The input of the final model is therefore of size (n, p_stable), with p_stable being the number of selected features (all layers included). p_stable is significantly lower than the original feature space dimension. This reduced matrix is then used to train for the prediction of the outcome.

[0179] The exemplary embodiment illustrated in Fig. 5B provides a broader range of hyperparameters. For example, in the exemplary embodiment illustrated in Fig. 5A, the choice of the optimal parameters is determined based on an optimization of the parameters at each bootstrap by minimizing the loss $\min_{\beta} \lVert Y - X\beta \rVert_2$ subject to the constraint $\lVert \beta \rVert_1 \leq \lambda$ on a leave-one-out cross-validation fit, while the exemplary embodiment illustrated in Fig. 5B samples the results through various values of λ, hence allowing for the plot of a “stability path.”

[0180] Additionally, the exemplary embodiment of Fig. 5B allows the use of a selection threshold based on the distribution of all artificial features; specifically, the cutoff is defined based on the overall distribution of the artificial features. To define the cutoff, such embodiments take the maximum probability of selection of each artificial feature, then take the mean of these maxima. From this mean, such embodiments can build the threshold (e.g., 3 standard deviations from the mean). In contrast, only the artificial feature with the maximum frequency of selection can be used in the embodiment illustrated in Fig. 5A.

[0181] Furthermore, the exemplary embodiment of Fig. 5B allows the combination of artificial generation and bootstrap procedure to simplify the complexity of the algorithm.

[0182] In more detail, embodiments, such as illustrated in Fig. 5B, provide:

1. Iteration over the number of bootstrap repetitions to get a proper evaluation of the sampling possibilities of artificial feature selection. The index is tracked to see how sampling from the original distribution or via permutation behaves over multiple trials. This represents the first for loop in the algorithm and yields results in lines 10-13.

2. The permutation or random sampling is obtained from the original dataset, and the matrix generated is a juxtaposition of the original matrix and the new matrix of computed artificial features. The number of artificial features (p’) can vary but typically is chosen to match the number of original features included in the algorithm. For computational purposes, if p is very large, a smaller number can be chosen for p’.

3. In order to properly probe selection behavior over the chosen algorithm hyperparameters, a grid-search-like scheme is employed to evaluate different combinations of hyperparameters, which are then used to plot a curve of “stability paths” (see Fig. 4B). This step is also a way to avoid missing information if only a limited number of hyperparameters is tested. The range of tested hyperparameters can be probed thoroughly to avoid artifacts (e.g., testing λ = 0 for the Lasso will select all features for all bootstrap procedures, leading to the case where the maxima of the frequencies of selection are all equal to 1).

4-6. With a given number of spikes and for each chosen value of the hyperparameters, the resampling procedure allows for an estimate of the model fit behavior and the selection of features that are the most robust to small changes in the dataset. By model fit behavior, the model refers to the assessment of the probability of selection by the Lasso for a given value of the hyperparameters. The bootstrap (resampling procedure) induces small perturbations in the original dataset, and only the more robust features will be selected with a high frequency compared to others. The EN or Lasso algorithm tends to be very sensitive to small changes in the original cohort, especially in the sense that it can easily choose features that are not very robust, hence making biological interpretation and robustness over new cohorts difficult. In this setting, resampling creates small variations around the original cohort. This procedure can properly probe robustness in the feature selection.

7-9. Extraction of the coefficients, with the sparsity induced by L1 regularization, using a simple cutoff of non-zero coefficients (typically 1e-5 in absolute value) to select the top performing features at each step of the bootstrap procedure. This selection of top performing features at each iteration of the bootstrap procedure allows the model to derive a frequency of selection for each feature of the dataset.

10-12. Because the model includes spiked artificial features, the model can use the definition of the stability paths to estimate the distribution of typical “noise” in the dataset and use this distribution to compute a cutoff for relevant features. This cutoff is typically 2 standard deviations over the mean or median stability path of the artificial features, or 3 times the mean of the maximum probability of selection of the artificial features. An arbitrary fixed threshold can also be added, taking the maximum between the constructed threshold and the arbitrary fixed one. Some embodiments take the maximum probability of selection for each artificial feature and then take the mean of these maxima to build the threshold (2×, 3×, or a combination of this and an arbitrary fixed threshold).

[0183] Turning to Fig. 6, an exemplary method of generating multi-omic biological data and generating a predictive MOB model for POND that integrates multi-omic biological data and clinical data is illustrated. At 602, certain embodiments obtain biological samples from an individual. While Fig. 6 illustrates blood draws (whole blood and plasma), various embodiments obtain biological samples from other tissues, fluids, and/or another biological source. Biological samples can be obtained before surgery (including the day of surgery or “DOS”) and/or after surgery. Pre-surgery samples can be obtained 7 days, 6 days, 5 days, 4 days, 3 days, 2 days, 1 day, and/or 0 days (i.e., on the day of surgery and before first incision) before surgery, while post-surgery samples can be obtained within 24 hours after the surgery, including 0 hours, 1 hour, 3 hours, 6 hours, 8 hours, 10 hours, 12 hours, 16 hours, 18 hours, and/or 24 hours after surgery (i.e., Post-Operative Day 1, POD1). Multi-omic data is obtained from the biological sample at 604 of many embodiments. Such multi-omic data can include cytomic data obtained with mass cytometry and plasma protein expression data. Further embodiments utilize additional forms of omics data to identify cytomic, proteomic, transcriptomic, and/or genomic data as applicable for a particular embodiment. In certain embodiments, a predictive MOB model based on the omic (including multi-omic) data and/or clinical data is generated at 606, where such models can be generated by the methods described herein.

Methods for generating multi-omic biological data

[0184] In many embodiments, the methods for generating a predictive model of a surgical complication, such as POND, rely on the multi-omic analysis of biological samples (e.g., blood-based samples, tumor samples, and/or any other suitable biological sample) obtained from an individual before or after surgery to determine changes, e.g., in immune cell subset frequencies and signaling activities, and in plasma proteins.

[0185] The biological sample can be of any suitable type that allows for the analysis of one or more cells or proteins, and is preferably a blood sample. Samples can be obtained once or multiple times from an individual. Multiple samples can be obtained from different locations in the individual, at different times from the individual, or any combination thereof.

[0186] According to certain embodiments, at least one biological sample is obtained prior to surgery (including the day of surgery, or "DOS"). According to certain embodiments, at least one biological sample is obtained after surgery. According to certain embodiments, at least one biological sample is obtained prior to surgery and at least one biological sample is obtained after surgery. Pre-surgery biological samples can be obtained 7 days, 6 days, 5 days, 4 days, 3 days, 2 days, 1 day, and/or 0 days before surgery (i.e., on the day of surgery and before first incision). Post-surgery biological samples can be obtained within 24 hours after the surgery, including 0 hours, 1 hour, 3 hours, 6 hours, 8 hours, 10 hours, 12 hours, 16 hours, 18 hours, and/or 24 hours after surgery (i.e., POD1).

[0187] The biological samples can be from any source that contains immune cells. In some embodiments the biological sample(s) for analysis of immune cell responses is blood. However, the PBMC fraction of blood samples can also be utilized. In some embodiments the biological sample for proteomic analysis is the plasma fraction of a blood sample, however the serum fraction can also be utilized.

[0188] In some embodiments, samples are activated ex vivo, which, as used herein, refers to the contacting of a sample, e.g., a blood sample or cells derived therefrom, outside of the body with a stimulating agent (an example of which is illustrated in Fig. 6 at 604). In some embodiments whole blood is preferred. The sample may be diluted or suspended in a suitable medium that maintains the viability of the cells, e.g., minimal media, PBS, etc. The sample can be fresh or frozen. Stimulating agents of interest include those agents that activate innate or adaptive cells, e.g., one or a combination of a TLR4 agonist such as LPS and/or IL-1β, IL-2, IL-4, IL-6, TNFα, IFNα, or PMA/ionomycin. Generally, the activation of cells ex vivo is compared to a negative control, e.g., medium only, or an agent that does not elicit activation. The cells are incubated for a period of time sufficient for the activation of immune cells in the biological sample. For example, the time for activation can be up to about 1 hour, up to about 45 minutes, up to about 30 minutes, up to about 15 minutes, and may be up to about 10 minutes or up to about 5 minutes. In some embodiments the period of time is up to about 24 hours, or from about 5 to about 240 minutes. Following activation, the cells are fixed for analysis.

[0189] In many embodiments, cytomic and proteomic features are detected using affinity reagents. "Affinity reagent", or "specific binding member", may be used to refer to an affinity reagent, such as an antibody, ligand, etc., that selectively binds to a protein or marker of the invention. The term "affinity reagent" includes any molecule, e.g., peptide, nucleic acid, or small organic molecule. For some purposes, an affinity reagent selectively binds to a cell surface or intracellular marker, e.g., CD3, CD4, CD7, CD8, CD11b, CD11c, CD14, CD15, CD16, CD19, CD24, CD25, CD27, CD33, CD45, CD45RA, CD56, CD61, CD66, CD123, CD235ab, HLA-DR, CCR2, CCR7, TCRγδ, OLFM4, CRTH2, CXCR4, and the like. For other purposes an affinity reagent selectively binds to a cellular signaling protein, particularly one which is capable of detecting an activation state of a signaling protein over another activation state of the signaling protein. Signaling proteins of interest include, without limitation, pSTAT3, pSTAT1, pCREB, pSTAT6, pPLCγ2, pSTAT5, pSTAT4, pERK1/2, pP38, prpS6, pNF-κB (p65), pMAPKAPK2 (pMK2), pP90RSK, IκB, cPARP, FoxP3, and Tbet.

[0190] In some embodiments, proteomic features are measured and comprise measuring circulating extracellular proteins. Accordingly, other affinity reagents of interest bind to plasma proteins. Plasma protein targets of particular interest include IL-1β, ALK, WWOX, HSPH1, IRF6, CTNNA3, CCL3, sTREM1, ITM2A, TGFα, LIF, ADA, ITGB3, EIF5A, KRT19, and NT-proBNP.

[0191] In some embodiments, cytomic features are measured and comprise measuring single-cell levels of surface or intracellular proteins in an immune cell subset. Immune cell subsets include, for instance, neutrophils, granulocytes, basophils, monocytes, dendritic cells (DC) such as myeloid dendritic cells (mDC) or plasmacytoid dendritic cells (pDC), B cells, or T cells, such as regulatory T cells (Tregs), naive T cells, memory T cells, and NK-T cells. Immune cell subsets include more specifically neutrophils, granulocytes, basophils, CXCR4+ neutrophils, OLFM4+ neutrophils, CD14+CD16− classical monocytes (cMC), CD14−CD16+ nonclassical monocytes (ncMC), CD14+CD16+ intermediate monocytes (iMC), HLA-DR+CD11c+ myeloid dendritic cells (mDC), HLA-DR+CD123+ plasmacytoid dendritic cells (pDC), CD14+HLA-DR−CD11b+ monocytic myeloid-derived suppressor cells (M-MDSC), CD3+CD56+ NK-T cells, CD7+CD19−CD3− NK cells, CD7+CD56loCD16hi NK cells, CD7+CD56hiCD16lo NK cells, CD19+ B cells, CD19+CD38+ plasma cells, CD19+CD38− non-plasma B cells, CD4+CD45RA+ naive T cells, CD4+CD45RA− memory T cells, CD4+CD161+ Th17 cells, CD4+Tbet+ Th1 cells, CD4+CRTH2+ Th2 cells, CD3+TCRγδ+ γδ T cells, Th17 CD4+ T cells, CD3+FoxP3+CD25+ regulatory T cells (Tregs), CD8+CD45RA+ naive T cells, and CD8+CD45RA− memory T cells.
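
Where such subset definitions are handled programmatically, they can be encoded as a simple lookup of marker/sign conditions. The following is a minimal sketch; the gate names and the matches helper are hypothetical and cover only a few of the subsets listed above.

```python
# Illustrative encoding of a few of the gating definitions listed above.
GATES = {
    "cMC":   {"CD14": "+", "CD16": "-"},            # classical monocytes
    "ncMC":  {"CD14": "-", "CD16": "+"},            # nonclassical monocytes
    "iMC":   {"CD14": "+", "CD16": "+"},            # intermediate monocytes
    "mDC":   {"HLA-DR": "+", "CD11c": "+"},         # myeloid dendritic cells
    "pDC":   {"HLA-DR": "+", "CD123": "+"},         # plasmacytoid dendritic cells
    "Tregs": {"CD3": "+", "FoxP3": "+", "CD25": "+"},
    "naive_CD4_T": {"CD4": "+", "CD45RA": "+"},
}

def matches(cell_markers: dict, gate: dict) -> bool:
    """True if a cell's +/- marker calls satisfy every condition of a gate."""
    return all(cell_markers.get(m) == sign for m, sign in gate.items())
```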

[0192] In some embodiments both proteomic features and cytomic features are measured in a biological sample.

[0193] In some embodiments, the affinity reagent is a peptide, polypeptide, oligopeptide or a protein, particularly antibodies, or an oligonucleotide, particularly aptamers, and specific binding fragments and variants thereof. The peptide, polypeptide, oligopeptide or protein can be made up of naturally occurring amino acids and peptide bonds, or synthetic peptidomimetic structures. Thus "amino acid", or "peptide residue", as used herein, includes both naturally occurring and synthetic amino acids. Proteins including non-naturally occurring amino acids can be synthesized or, in some cases, made recombinantly; see van Hest et al., FEBS Lett. 428(1-2):68-70, May 22, 1998, and Tang et al., Abstr. Pap. Am. Chem. Soc. 218:U138, Part 2, Aug. 22, 1999, both of which are expressly incorporated by reference herein.

[0194] Many antibodies have been produced that specifically bind to the phosphorylated isoform of a protein but do not specifically bind to a non-phosphorylated isoform of that protein; many are commercially available (for example, from Cell Signaling Technology, www.cellsignal.com, or Becton Dickinson, www.bd.com). Many such antibodies have been produced for the study of signal-transducing proteins which are reversibly phosphorylated. Particularly, many such antibodies have been produced which specifically bind to phosphorylated, activated isoforms of protein and plasma proteins. Examples of proteins that can be analyzed with the methods described herein include, but are not limited to, phospho (p) rpS6, pNF-κB (p65), pMAPKAPK2 (pMK2), pSTAT5, pSTAT1, pSTAT3, etc.

[0195] The methods of the invention may utilize affinity reagents comprising a label, labeling element, or tag. By label or labeling element is meant a molecule that can be directly (i.e., a primary label) or indirectly (i.e., a secondary label) detected; for example, a label can be visualized and/or measured or otherwise identified so that its presence or absence can be known.

[0196] A compound can be directly or indirectly conjugated to a label which provides a detectable signal, e.g., non-radioactive isotopes, radioisotopes, fluorophores, enzymes, antibodies, oligonucleotides, particles such as magnetic particles, chemiluminescent molecules, molecules that can be detected by mass spectrometry, or specific binding molecules, etc. Specific binding molecules include pairs, such as biotin and streptavidin, digoxin and anti-digoxin, etc. Examples of labels include, but are not limited to, metal isotopes, optical fluorescent and chromogenic dyes, label enzymes, and radioisotopes. In some embodiments of the invention, these labels can be conjugated to the affinity reagents. In some embodiments, one or more affinity reagents are uniquely labeled.

[0197] Labels include optical labels such as fluorescent dyes or moieties. Fluorophores can be either "small molecule" fluors or proteinaceous fluors (e.g., green fluorescent proteins and all variants thereof). In some embodiments, activation state-specific antibodies are labeled with quantum dots as disclosed by Chattopadhyay et al. (2006) Nat. Med. 12, 972-977. Quantum dot-labeled antibodies can be used alone or they can be employed in conjunction with organic fluorochrome-conjugated antibodies to increase the total number of labels available. As the number of labeled antibodies increases, so does the ability for subtyping known cell populations.

[0198] Antibodies can be labeled using chelated or caged lanthanides as disclosed by Erkki et al. (1988) J. Histochemistry Cytochemistry 36:1449-1451, and U.S. Patent No. 7,018,850. Other labels are tags suitable for Inductively Coupled Plasma Mass Spectrometer (ICP-MS) analysis as disclosed in Tanner et al. (2007) Spectrochimica Acta Part B: Atomic Spectroscopy 62(3):188-195. Isotope labels suitable for mass cytometry may be used, for example as described in published application US 2012-0178183.

[0199] Alternatively, detection systems based on fluorescence resonance energy transfer (FRET) can be used. FRET finds use in the invention, for example, in detecting activation states that involve clustering or multimerization, wherein the proximity of two FRET labels is altered due to activation. In some embodiments, at least two fluorescent labels are used which are members of a FRET pair.

[0200] When using fluorescent labeled components in the methods and compositions of the present invention, it will be recognized that different types of fluorescent monitoring systems, e.g., cytometric measurement device systems, can be used to practice the invention. In some embodiments, flow cytometric systems are used, or systems dedicated to high-throughput screening, e.g., 96-well or greater microtiter plates. Methods of performing assays on fluorescent materials are well known in the art and are described in, e.g., Lakowicz, J. R., Principles of Fluorescence Spectroscopy, New York: Plenum Press (1983); Herman, B., Resonance energy transfer microscopy, in: Fluorescence Microscopy of Living Cells in Culture, Part B, Methods in Cell Biology, vol. 30, ed. Taylor, D. L. & Wang, Y.-L., San Diego: Academic Press (1989), pp. 219-243; and Turro, N. J., Modern Molecular Photochemistry, Menlo Park: Benjamin/Cummings Publishing Co., Inc. (1978), pp. 296-361.

[0201] The detecting, sorting, or isolating step of the methods of the present invention can entail fluorescence-activated cell sorting (FACS) techniques, where FACS is used to select cells from the population containing a particular surface marker, or the selection step can entail the use of magnetically responsive particles as retrievable supports for target cell capture and/or background removal. A variety of FACS systems are known in the art and can be used in the methods of the invention (see, e.g., WO99/54494, filed Apr. 16, 1999; U.S. Ser. No. 2001/0006787, filed Jul. 5, 2001, each expressly incorporated herein by reference).

[0202] In some embodiments, a FACS cell sorter (e.g., a FACSVantage™ Cell Sorter, Becton Dickinson Immunocytometry Systems, San Jose, Calif.) is used to sort and collect cells based on their activation profile (positive cells) in the presence or absence of an increase in activation level of a signaling protein in response to a modulator. Other flow cytometers that are commercially available include the LSR II and the Canto II, both available from Becton Dickinson. See Shapiro, Howard M., Practical Flow Cytometry, 4th Ed., John Wiley & Sons, Inc., 2003 for additional information on flow cytometers.

[0203] In some embodiments, the cells are first contacted with labeled activation state-specific affinity reagents (e.g., antibodies) directed against specific activation states of specific signaling proteins. In such an embodiment, the amount of bound affinity reagent on each cell can be measured by passing droplets containing the cells through the cell sorter. By imparting an electromagnetic charge to droplets containing the positive cells, the cells can be separated from other cells. The positively selected cells can then be harvested in sterile collection vessels. These cell-sorting procedures are described in detail, for example, in the FACSVantage™ Training Manual, with particular reference to sections 3-11 to 3-28 and 10-1 to 10-17, which is hereby incorporated by reference in its entirety. See the patents, applications, and articles referred to and incorporated above for detection systems.

[0204] In some embodiments, the activation level of an intracellular protein is measured using an Inductively Coupled Plasma Mass Spectrometer (ICP-MS). An affinity reagent that has been labeled with a specific element binds to a marker of interest. When the cell is introduced into the ICP, it is atomized and ionized. The elemental composition of the cell, including the labeled affinity reagent that is bound to the signaling protein, is measured. The presence and intensity of the signals corresponding to the labels on the affinity reagent indicate the level of the signaling protein on that cell (Tanner et al., Spectrochimica Acta Part B: Atomic Spectroscopy, 2007 Mar;62(3):188-195).

[0205] Mass cytometry, e.g., as described in the Examples provided herein, finds use in analysis. Mass cytometry, or CyTOF (DVS Sciences), is a variation of flow cytometry in which antibodies are labeled with heavy metal ion tags rather than fluorochromes. Readout is by time-of-flight mass spectrometry. This allows for the combination of many more antibody specificities in single samples, without significant spillover between channels. For example, see Bodenmiller et al. (2012) Nature Biotechnology 30:858-867.

[0206] One or more cells or cell types or proteins can be isolated from body samples. The cells can be separated from body samples by red cell lysis, centrifugation, elutriation, density gradient separation, apheresis, affinity selection, panning, FACS, centrifugation with Hypaque, solid supports (magnetic beads, beads in columns, or other surfaces) with attached antibodies, etc. By using antibodies specific for markers identified with particular cell types, a relatively homogeneous population of cells can be obtained. Alternatively, a heterogeneous cell population can be used, e.g., circulating peripheral blood mononuclear cells.

[0207] In some embodiments, a phenotypic profile of a population of cells is determined by measuring the activation level of a signaling protein. The methods and compositions of the invention can be employed to examine and profile the status of any signaling protein in a cellular pathway, or collections of such signaling proteins. Single or multiple distinct pathways can be profiled (sequentially or simultaneously), or subsets of signaling proteins within a single pathway or across multiple pathways can be examined (sequentially or simultaneously).

[0208] In some embodiments, the basis for classifying cells is that the distribution of activation levels for one or more specific signaling proteins will differ among different phenotypes. A certain activation level, or more typically a range of activation levels for one or more signaling proteins seen in a cell or a population of cells, is indicative that that cell or population of cells belongs to a distinctive phenotype. Other measurements, such as cellular levels (e.g., expression levels) of biomolecules that may not contain signaling proteins, can also be used to classify cells in addition to activation levels of signaling proteins; it will be appreciated that these levels also will follow a distribution. Thus, the activation level or levels of one or more signaling proteins, optionally in conjunction with the level of one or more biomolecules that may or may not contain signaling proteins, of a cell or a population of cells can be used to classify a cell or a population of cells into a class. It is understood that activation levels can exist as a distribution and that an activation level of a particular element used to classify a cell can be a particular point on the distribution but more typically can be a portion of the distribution. In addition to activation levels of intracellular signaling proteins, levels of intracellular or extracellular biomolecules, e.g., proteins, can be used alone or in combination with activation states of signaling proteins to classify cells. Further, additional cellular elements, e.g., biomolecules or molecular complexes such as RNA, DNA, carbohydrates, metabolites, and the like, can be used in conjunction with activation states or expression levels in the classification of cells encompassed here.

[0209] In some embodiments of the invention, different gating strategies can be used in order to analyze a specific cell population (e.g., only CD4+ T cells) in a sample of mixed cell population. These gating strategies can be based on the presence of one or more specific surface markers. A first gate can differentiate between dead cells and live cells, and the subsequent gating of live cells classifies them into, e.g., myeloid blasts, monocytes, and lymphocytes. A clear comparison can be carried out by using two-dimensional contour plot representations, two-dimensional dot plot representations, and/or histograms. A hedged sketch of such a sequential gating strategy follows.
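
The sketch below assumes per-cell marker intensities in a pandas DataFrame; the column names and thresholds are illustrative assumptions, not a validated panel.

```python
# Sketch of a sequential gating strategy over per-cell marker intensities.
import pandas as pd

def gate_populations(df: pd.DataFrame) -> dict:
    """Apply a live/dead gate, then classify live cells into broad subsets."""
    live = df[df["viability_dye"] < 0.5]                 # first gate: live vs. dead
    lymphocytes = live[(live["CD45"] > 1.0) & (live["CD66"] < 0.5)]
    monocytes = live[live["CD14"] > 1.0]
    cd4_t = lymphocytes[(lymphocytes["CD3"] > 1.0) & (lymphocytes["CD4"] > 1.0)]
    return {"live": live, "lymphocytes": lymphocytes,
            "monocytes": monocytes, "CD4_T": cd4_t}
```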

[0210] In numerous embodiments, the immune cells are analyzed for the presence of an activated form of a signaling protein of interest. Signaling proteins of interest include, without limitation, pMAPKAPK2 (pMK2), pP38, prpS6, pNF-κB (p65), IκB, pSTAT3, pSTAT1, pCREB, pSTAT6, pSTAT5, and pERK. To determine if a change is significant, the signal in a patient's baseline sample can be compared to a reference scale from a cohort of patients with known outcomes.

[0211] Samples may be obtained at one or more time points. Where a sample at a single time point is used, comparison is made to a reference “base line” level for the feature, which may be obtained from a normal control, a pre-determined level obtained from one or a population of individuals, from a negative control for ex vivo activation, and the like.

[0212] In some embodiments, the methods include the use of liquid handling components. The liquid handling systems can include robotic systems comprising any number of components. In addition, any or all of the steps outlined herein can be automated; thus, for example, the systems can be completely or partially automated. See U.S. Ser. No. 61/048,657. As will be appreciated by those in the art, there are a wide variety of components which can be used, including, but not limited to, one or more robotic arms; plate handlers for the positioning of microplates; automated lid or cap handlers to remove and replace lids for wells on non-cross-contamination plates; tip assemblies for sample distribution with disposable tips; washable tip assemblies for sample distribution; 96-well loading blocks; cooled reagent racks; microtiter plate pipette positions (optionally cooled); stacking towers for plates and tips; and computer systems.

[0213] Fully robotic or microfluidic systems can include automated liquid-, particle-, cell- and organism-handling, including high-throughput pipetting to perform all steps of screening applications. This includes liquid, particle, cell, and organism manipulations such as aspiration, dispensing, mixing, diluting, washing, and accurate volumetric transfers; retrieving and discarding of pipet tips; and repetitive pipetting of identical volumes for multiple deliveries from a single sample aspiration. These manipulations are cross-contamination-free liquid, particle, cell, and organism transfers. Such an instrument performs automated replication of microplate samples to filters, membranes, and/or daughter plates, high-density transfers, full-plate serial dilutions, and high-capacity operation.

[0214] In some embodiments, platforms for multi-well plates, multi-tubes, holders, cartridges, minitubes, deep-well plates, microfuge tubes, cryovials, square well plates, filters, chips, optic fibers, beads, and other solid-phase matrices or platforms with various volumes are accommodated on an upgradable modular platform for additional capacity. This modular platform includes a variable-speed orbital shaker and multi-position work decks for source samples, sample and reagent dilution, assay plates, sample and reagent reservoirs, pipette tips, and an active wash station. In some embodiments, the methods of the invention include the use of a plate reader.

[0215] In some embodiments, interchangeable pipet heads (single or multi-channel) with single or multiple magnetic probes, affinity probes, or pipetters robotically manipulate the liquid, particles, cells, and organisms. Multi-well or multi-tube magnetic separators or platforms manipulate liquid, particles, cells, and organisms in single or multiple sample formats.

[0216] In some embodiments, the instrumentation will include a detector, which can be any of a wide variety of different detectors, depending on the labels and assay. In some embodiments, useful detectors include microscopes with multiple channels of fluorescence; plate readers to provide fluorescent, ultraviolet, and visible spectrophotometric detection with single- and dual-wavelength endpoint and kinetics capability, fluorescence resonance energy transfer (FRET), luminescence, quenching, two-photon excitation, and intensity redistribution; CCD cameras to capture and transform data and images into quantifiable formats; and a computer workstation.

[0217] In some embodiments, the robotic apparatus includes a central processing unit which communicates with a memory and a set of input/output devices (e.g., keyboard, mouse, monitor, printer, etc.) through a bus. Again, as outlined below, this can be in addition to or in place of the CPU for the multiplexing devices of the invention. The general interaction between a central processing unit, a memory, input/output devices, and a bus is known in the art. Thus, a variety of different procedures, depending on the experiments to be run, are stored in the CPU memory.

[0218] The differential presence of these markers is shown to provide for prognostic evaluations to detect individuals at risk of developing a postoperative complication such as POND. In general, such prognostic methods involve determining the presence or level of activated signaling proteins in an individual sample of immune cells. Detection can utilize one or a panel of specific binding members, e.g., a panel or cocktail of binding members specific for one, two, three, four, five, or more markers.

[0219] The present invention incorporates information disclosed in other applications and texts. The following patents and other publications are hereby incorporated by reference in their entireties: Alberts et al., The Molecular Biology of the Cell, 4th Ed., Garland Science, 2002; Vogelstein and Kinzler, The Genetic Basis of Human Cancer, 2d Ed., McGraw Hill, 2002; Michael, Biochemical Pathways, John Wiley and Sons, 1999; Weinberg, The Biology of Cancer, 2007; Janeway et al., Immunobiology, 7th Ed., Garland; and Leroith and Bondy, Growth Factors and Cytokines in Health and Disease, A Multi-Volume Treatise, Volumes 1A and 1B, Growth Factors, 1996.

[0220] Unless otherwise apparent from the context, all elements, steps or features described herein can be used in any combination with other elements, steps or features.

[0221] General methods in molecular and cellular biochemistry can be found in such standard textbooks as Molecular Cloning: A Laboratory Manual, 3rd Ed. (Sambrook et al., Cold Spring Harbor Laboratory Press 2001); Short Protocols in Molecular Biology, 4th Ed. (Ausubel et al. eds., John Wiley & Sons 1999); Protein Methods (Bollag et al., John Wiley & Sons 1996); Nonviral Vectors for Gene Therapy (Wagner et al. eds., Academic Press 1999); Viral Vectors (Kaplitt & Loewy eds., Academic Press 1995); Immunology Methods Manual (I. Lefkovits ed., Academic Press 1997); and Cell and Tissue Culture: Laboratory Procedures in Biotechnology (Doyle & Griffiths, John Wiley & Sons 1998). Reagents, cloning vectors, and kits for genetic manipulation referred to in this disclosure are available from commercial vendors such as BioRad, Stratagene, Invitrogen, Sigma-Aldrich, and Clontech.

Data Analysis

[0222] In many embodiments, the methods for generating a predictive model for POND employ the MOB algorithm described herein, which integrates multi-omic biological and/or clinical data. In other embodiments, a predictive model of POND can be generated from a biological sample using any convenient protocol, for example as described below. The readout can be a mean, average, median, or the variance or other statistically or mathematically derived value associated with the measurement. The marker readout information can be further refined by direct comparison with the corresponding reference or control pattern. A binding pattern can be evaluated on a number of points: to determine if there is a statistically significant change at any point in the data matrix relative to a reference value; whether the change is an increase or decrease in the binding; whether the change is specific for one or more physiological states; and the like. The absolute values obtained for each marker under identical conditions will display a variability that is inherent in live biological systems and also reflects the variability inherent between individuals.

[0223] Following obtainment of the signature pattern from the sample being assayed, the signature pattern can be compared with a reference or baseline profile to make a prognosis regarding the phenotype of the patient from which the sample was obtained or derived. Additionally, a reference or control signature pattern can be a signature pattern that is obtained from a sample of a patient known to have had a normal postoperative course (e.g., no POND).

[0224] In certain embodiments, the obtained signature pattern is compared to a single reference/control profile to obtain information regarding the phenotype of the patient being assayed. In yet other embodiments, the obtained signature pattern is compared to two or more different reference/control profiles to obtain more in-depth information regarding the phenotype of the patient. For example, the obtained signature pattern can be compared to a positive and negative reference profile to obtain confirmed information regarding whether the patient has the phenotype of interest.

[0225] Samples can be obtained from the tissues or fluids of an individual. For example, samples can be obtained from whole blood, tissue biopsy, serum, etc. Other sources of samples are body fluids such as lymph, cerebrospinal fluid, and the like. Also included are derivatives and fractions of such cells and fluids.

[0226] In order to identify profiles that are indicative of responsiveness, a statistical test can provide a confidence level for a change in the level of markers between the test and reference profiles to be considered significant. The raw data can be initially analyzed by measuring the values for each marker, usually in duplicate, triplicate, quadruplicate or in 5-10 replicate features per marker. A test dataset is considered to be different than a reference dataset if one or more of the parameter values of the profile exceeds the limits that correspond to a predefined level of significance.

[0227] To provide significance ordering, the false discovery rate (FDR) can be determined. First, a set of null distributions of dissimilarity values is generated. In one embodiment, the values of observed profiles are permuted to create a sequence of distributions of correlation coefficients obtained by chance, thereby creating an appropriate set of null distributions of correlation coefficients (see Tusher et al. (2001) PNAS 98:5116-21, herein incorporated by reference). This analysis algorithm is currently available as a software "plug-in" for Microsoft Excel known as Significance Analysis of Microarrays (SAM). The set of null distributions is obtained by: permuting the values of each profile for all available profiles; calculating the pairwise correlation coefficients for all profiles; calculating the probability density function of the correlation coefficients for this permutation; and repeating the procedure N times, where N is a large number, usually 300. Using the N distributions, one calculates an appropriate measure (mean, median, etc.) of the count of correlation coefficient values that exceed the value (of similarity) obtained from the distribution of experimentally observed similarity values at a given significance level.
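
The following is a minimal sketch of this permutation scheme, assuming NumPy; the toy profile matrix, the number of permutations, and the correlation cutoff are illustrative assumptions.

```python
# Permutation-based null distribution of pairwise correlations, with an
# FDR estimate at a chosen cutoff (expected false positives / observed hits).
import numpy as np

rng = np.random.default_rng(0)
profiles = rng.normal(size=(20, 100))                # 20 markers x 100 samples (toy)

def pairwise_corrs(mat):
    """Upper-triangle Pearson correlations between all row pairs."""
    c = np.corrcoef(mat)
    return c[np.triu_indices_from(c, k=1)]

observed = pairwise_corrs(profiles)
null = np.concatenate([
    pairwise_corrs(np.apply_along_axis(rng.permutation, 1, profiles))
    for _ in range(300)                              # N permutations, typically ~300
])

cutoff = 0.4
expected_false = (null > cutoff).mean() * observed.size
significant = (observed > cutoff).sum()
fdr = expected_false / max(significant, 1)
print(f"FDR at r > {cutoff}: {fdr:.3f}")
```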

[0228] The FDR is the ratio of the number of expected falsely significant correlations (estimated from the correlations greater than the selected Pearson correlation in the set of randomized data) to the number of correlations greater than the selected Pearson correlation in the empirical data (significant correlations). This cut-off correlation value can be applied to the correlations between experimental profiles.

[0229] For SAM, Z-scores represent another measure of variance in a dataset, and are equal to the value of X minus the mean of X, divided by the standard deviation, i.e., Z = (X − mean(X)) / SD. A Z-score tells how a single data point compares to the normal data distribution: it demonstrates not only whether a datapoint lies above or below average, but also how unusual the measurement is. The standard deviation is the average distance between each value in the dataset and the mean of the values in the dataset.

[0230] Using the aforementioned distribution, a level of confidence is chosen for significance. This is used to determine the lowest value of the correlation coefficient that exceeds the result that would have been obtained by chance. Using this method, one obtains thresholds for positive correlation, negative correlation or both. Using this threshold(s), the user can filter the observed values of the pairwise correlation coefficients and eliminate those that do not exceed the threshold(s). Furthermore, an estimate of the false positive rate can be obtained for a given threshold. For each of the individual “random correlation” distributions, one can find how many observations fall outside the threshold range. This procedure provides a sequence of counts. The mean and the standard deviation of the sequence provide the average number of potential false positives and its standard deviation. Alternatively, any convenient method of statistical validation can be used.

[0231] The data can be subjected to non-supervised hierarchical clustering to reveal relationships among profiles. For example, hierarchical clustering can be performed, where the Pearson correlation is employed as the clustering metric. One approach is to consider a patient disease dataset as a "learning sample" in a problem of "supervised learning". CART is a standard in applications to medicine (Singer (1999) Recursive Partitioning in the Health Sciences, Springer), which can be modified by transforming any qualitative features to quantitative features; sorting them by attained significance levels, evaluated by sample reuse methods for Hotelling's T2 statistic; and suitable application of the lasso method. Problems in prediction are turned into problems in regression without losing sight of prediction, indeed by making suitable use of the Gini criterion for classification in evaluating the quality of regressions.

[0232] Other methods of analysis that can be used include logistic regression. One method of logic regression is described in Ruczinski (2003) Journal of Computational and Graphical Statistics 12:475-512. Logic regression resembles CART in that its classifier can be displayed as a binary tree. It is different in that each node has Boolean statements about features that are more general than the simple "and" statements produced by CART.

[0233] Another approach is that of nearest shrunken centroids (Tibshirani (2002) PNAS 99:6567-72). The technique is k-means-like, but has the advantage that by shrinking cluster centers, one automatically selects features (as in the lasso) so as to focus attention on small numbers of those that are informative. The approach is available as Prediction Analysis of Microarrays (PAM) software, a software "plug-in" for Microsoft Excel, and is widely used. Two further sets of algorithms are random forests (Breiman (2001) Machine Learning 45:5-32) and MART (Hastie (2001) The Elements of Statistical Learning, Springer). These two methods are "committee methods": they involve predictors that "vote" on outcome. Several of these methods are based on the "R" software, which provides a statistical framework that is continuously improved and updated.
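
As a hedged illustration, both approaches have readily available open-source counterparts. The sketch below uses scikit-learn's NearestCentroid, whose shrink_threshold parameter implements centroid shrinkage in the spirit of PAM, alongside a random forest "committee"; the synthetic dataset and parameter values are assumptions for illustration only.

```python
# Nearest shrunken centroids vs. a random forest on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

X, y = make_classification(n_samples=120, n_features=60, n_informative=8,
                           random_state=0)

pam_like = NearestCentroid(shrink_threshold=0.5)    # shrinks centroids, drops features
forest = RandomForestClassifier(n_estimators=500, random_state=0)

for name, clf in [("shrunken centroids", pam_like), ("random forest", forest)]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: CV accuracy {score:.2f}")
```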

[0234] Other statistical analysis approaches include principal components analysis, recursive partitioning, predictive algorithms, Bayesian networks, and neural networks.

[0235] These tools and methods can be applied to several classification problems. For example, methods can be developed from the following comparisons: i) all cases versus all controls; ii) all cases versus nonresponsive controls; iii) all cases versus responsive controls.

[0236] In a second analytical approach, variables chosen in the cross-sectional analysis are separately employed as predictors. Given the specific outcome, the random lengths of time each patient will be observed, and the selection of proteomic and other features, a parametric approach to analyzing responsiveness can be better than the widely applied semi-parametric Cox model. A Weibull parametric fit of survival permits the hazard rate to be monotonically increasing, decreasing, or constant, and also has a proportional-hazards representation (as does the Cox model) and an accelerated failure-time representation. All the standard tools available in obtaining approximate maximum likelihood estimators of regression coefficients and functions of them are available with this model.

[0237] In addition, Cox models can be used, especially since reducing the number of covariates to a manageable size with the lasso will significantly simplify the analysis, allowing the possibility of an entirely nonparametric approach to survival. A minimal sketch of both fits is shown below.
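
The sketch assumes the lifelines package and its bundled Rossi dataset as a stand-in; the column names, penalty value, and l1_ratio are illustrative assumptions, not the analysis actually performed.

```python
# Weibull accelerated failure-time fit vs. a lasso-penalized Cox fit (lifelines).
from lifelines import WeibullAFTFitter, CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                    # stand-in survival dataset

aft = WeibullAFTFitter()                             # parametric, AFT representation
aft.fit(df, duration_col="week", event_col="arrest")

cox = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)       # L1 penalty shrinks covariates
cox.fit(df, duration_col="week", event_col="arrest")

aft.print_summary()
cox.print_summary()
```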

[0238] The analysis and database storage can be implemented in hardware or software, or a combination of both. In one embodiment of the invention, a machine-readable storage medium is provided, the medium comprising a data storage material encoded with machine-readable data which, when using a machine programmed with instructions for using said data, is capable of displaying any of the datasets and data comparisons of this invention. Such data can be used for a variety of purposes, such as patient monitoring, initial diagnosis, and the like. Preferably, the invention is implemented in computer programs executing on programmable computers, comprising a processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code is applied to input data to perform the functions described above and generate output information. The output information is applied to one or more output devices, in known fashion. The computer can be, for example, a personal computer, microcomputer, or workstation of conventional design.

[0239] Each program is preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language can be a compiled or interpreted language. Each such computer program is preferably stored on a storage media or device readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. The system can also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.

[0240] A variety of structural formats for the input and output means can be used to input and output the information in the computer-based systems of the present invention. One format for an output means presents test datasets possessing varying degrees of similarity to a trusted profile. Such presentation provides a skilled artisan with a ranking of similarities and identifies the degree of similarity contained in the test pattern.

[0241] The signature patterns and databases thereof can be provided in a variety of media to facilitate their use. “Media” refers to a manufacture that contains the signature pattern information of the present invention. The databases of the present invention can be recorded on computer readable media, e.g. any medium that can be read and accessed directly by a computer. Such media include, but are not limited to: magnetic storage media, such as floppy discs, hard disc storage medium, and magnetic tape; optical storage media such as CD-ROM; electrical storage media such as RAM and ROM; and hybrids of these categories such as magnetic/optical storage media. One of skill in the art can readily appreciate how any of the presently known computer readable mediums can be used to create a manufacture comprising a recording of the present database information. "Recorded" refers to a process for storing information on computer readable medium, using any such methods as known in the art. Any convenient data storage structure can be chosen, based on the means used to access the stored information. A variety of data processor programs and formats can be used for storage, e.g. word processing text file, database format, etc.

Computer Executed Embodiments

[0242] Processes that provide the methods and systems for generating a risk score for developing POND in accordance with some embodiments are executed by a computing device or computing system, such as a desktop computer, tablet, mobile device, laptop computer, notebook computer, server system, and/or any other device capable of performing one or more features, functions, methods, and/or steps as described herein. The relevant components in a computing device that can perform the processes in accordance with some embodiments are shown in Fig. 7. One skilled in the art will recognize that computing devices or systems may include other components that are omitted for brevity without departing from the described embodiments. A computing device 700 in accordance with such embodiments comprises a processor 710 and at least one memory 730. Memory 730 can be a non-volatile memory and/or a volatile memory, and the processor 710 is a processor, microprocessor, controller, or a combination of processors, microprocessors, and/or controllers that performs instructions stored in memory 730. Such instructions stored in the memory 730, such as prediction application 732, can direct the processor to use data stored in patient data memory 734 and model data memory 736 to perform one or more features, functions, methods, and/or steps as described herein. Any input information or data can be stored in the memory 730, either in the same memory or in another memory. In accordance with various other embodiments, the computing device 700 may have hardware and/or firmware that can include the instructions and/or perform these processes.

[0243] Certain embodiments can include a network interface 720 to allow communication (wired, wireless, etc.) with another device, such as through a network, near-field communication, Bluetooth, infrared, radio frequency, and/or any other suitable communication system. Such systems can be beneficial for receiving data, information, or input (e.g., omic and/or clinical data) from another computing device and/or for transmitting data, information, or output (e.g., a risk score) to another device.

[0244] Although a specific example of a computing device is illustrated in this figure, any of a variety of computing devices can be utilized to make predictions of post-operative outcomes similar to those described herein as appropriate to the requirements of specific applications in accordance with embodiments of the invention.

[0245] Turning to Fig. 8, a network diagram of a distributed system of computing devices in accordance with an embodiment of the invention is illustrated. Such embodiments may be useful where sufficient computing power is not available at a local level, and a central computing device (e.g., a server) performs one or more features, functions, methods, and/or steps described herein. In such embodiments, a computing device 802 (e.g., a server) is connected to a network 804 (wired and/or wireless), where it can receive inputs from one or more computing devices, including clinical data from a records database or repository 806, omic data provided from a laboratory computing device 808, and/or any other relevant information from one or more other remote devices 810. Once computing device 802 performs one or more features, functions, methods, and/or steps described herein, any outputs can be transmitted to one or more computing devices 806, 808, 810 for entering into records or taking medical action, including (but not limited to) prehabilitation, delaying surgery, providing antibiotics, and/or any other action relevant to a risk score. Such actions can be transmitted directly to a medical professional (e.g., via messaging, such as email, SMS, or voice/vocal alert) for such action and/or entered into medical records.

[0246] In accordance with still other embodiments, the instructions for the processes can be stored in any of a variety of non-transitory computer readable media appropriate to a specific application.

EXEMPLARY EMBODIMENTS

[0247] Although the following embodiments provide details on certain embodiments of the invention, it should be understood that these are only exemplary in nature and are not intended to limit the scope of the invention.

Example 1 : Integrated modeling of multi-omic biological and clinical data before surgery predicts postoperative neurocognitive disorder (POND)

[0248] Background: The ability to identify which patients will (or are likely to) develop POND before surgery (i.e., on the DOS) is of utmost clinical interest, as it will allow risk stratification prior to surgery and personalization of pre-operative interventions. Based on clinical, biological, and cognitive data from a prospective, multicenter, randomized, double-blinded, superiority clinical trial conducted between 2016 and 2019 (NCT02892916), this example aimed to identify peripheral immunological events associated with POND in patients aged > 60 years undergoing elective major orthopedic surgery.

[0249] Methods: Patients' cognition was assessed (Montreal Cognitive Assessment, Trail Making Test A/B) at different time points (baseline, 1 day, 7 days, and 90 days after surgery) (Figs. 9A-C). An absolute difference of Z-score > 1 in at least one cognitive test (MoCA, TMT A, TMT B, or TMT A-B) between the tests done before surgery and postoperative timepoints defined POND. For a subset of patients (n = 26), whole blood samples for mass cytometry analysis were obtained. Blood collected before surgery was either left unstimulated or stimulated with a series of receptor-specific ligands chosen to simulate key signaling responses implicated in the surgical immune response (including CpG/LPS and IL-2/4/12). The distribution and intracellular signaling responses (including pSTAT1, pSTAT3, pSTAT5, pSTAT6, pERK, pMK2, prpS6, pCREB, pNF-κB, and total IκB) of all major innate and adaptive immune cell subsets were quantified using a high-dimensional mass cytometry assay. In parallel, the concentrations of over 1,400 plasma inflammatory proteins were quantified using an antibody-based platform (Somalogic). An integrated stacked-generalization approach that combined the high-dimensional mass cytometry, proteomic, and clinical data (including demographics and medical history) was applied to derive a multivariate model predicting the occurrence of POND within 7 days of surgery. The statistical significance of the model was established using a leave-one-out cross-validation method to ensure the robustness and reproducibility of the results.
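
For illustration, a stacked-generalization model over per-omic "layers" with leave-one-out cross-validation can be sketched with scikit-learn as below. The synthetic data, the layer sizes (25 cytometry, 4 proteomic, and 5 clinical features, mirroring the feature counts reported below), and the learner choices are assumptions, not the trial's actual model.

```python
# Stacked generalization over per-omic data layers with leave-one-out CV.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 26                                               # cohort size from this example
X_cytof = rng.normal(size=(n, 25))                   # mass cytometry features (toy)
X_prot = rng.normal(size=(n, 4))                     # proteomic features (toy)
X_clin = rng.normal(size=(n, 5))                     # clinical features (toy)
X = np.hstack([X_cytof, X_prot, X_clin])
y = (X_cytof[:, 0] + X_prot[:, 0] > 0).astype(int)   # toy POND outcome labels

def layer(cols):
    """One sparse base learner restricted to a single data layer."""
    take = ColumnTransformer([("take", "passthrough", cols)])
    return make_pipeline(take, StandardScaler(),
                         LogisticRegression(penalty="l1", solver="liblinear"))

stack = StackingClassifier(
    estimators=[("cytof", layer(slice(0, 25))),
                ("prot", layer(slice(25, 29))),
                ("clin", layer(slice(29, 34)))],
    final_estimator=LogisticRegression(),            # meta-learner over layer outputs
)

acc = cross_val_score(stack, X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```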

[0250] Results: In the ITT analysis, one hundred and four patients (35.6%) were diagnosed with POND after surgery. Among the 26 patients for whom whole blood was collected, a multivariate model integrating mass cytometry, proteomic, and clinical data collected before surgery accurately predicted the occurrence of POND (AUC = 0.96, p < 10e-5, cross-validation). Model features included 25 single-cell mass cytometry features (10 cell frequencies and 15 intracellular signaling responses, AUC 0.73-0.91), 4 proteomic features (AUC 0.67), and 5 clinical features (AUC 0.63).

[0251] Conclusions: The pre-operative assessment of single-cell and plasma proteomic biomarkers combined with patients' clinical data provides a novel method for accurately predicting POND in patients undergoing surgery. The integrative model's predictive power was superior to that of existing risk assessment tools that are based solely on the assessment of clinical variables. The 34 features identified by the integrative model can be easily measured in patients' blood using a simple venipuncture and clinically approved flow cytometry and proteomic platforms, providing a set of non-invasive biomarkers for the prediction of POND. Importantly, these predictive biomarkers pointed to biologically plausible immune mechanisms underlying the pathogenesis of POND, including dysfunctional MyD88 signaling responses in innate immune cells and JAK/STAT signaling responses in adaptive immune cells.

DOCTRINE OF EQUIVALENTS

[0252] Having described several embodiments, it will be recognized by those skilled in the art that various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the invention. Additionally, a number of well- known processes and elements have not been described in order to avoid unnecessarily obscuring the present invention. Accordingly, the above description should not be taken as limiting the scope of the invention.

[0253] Those skilled in the art will appreciate that the foregoing examples and descriptions of various preferred embodiments of the present invention are merely illustrative of the invention as a whole, and that variations in the components or steps of the present invention may be made within the spirit and scope of the invention. Accordingly, the present invention is not limited to the specific embodiments described herein, but, rather, is defined by the scope of the appended claims.