Title:
SYSTEMS AND METHODS FOR CONTACTLESS RESPIRATORY MONITORING
Document Type and Number:
WIPO Patent Application WO/2022/120017
Kind Code:
A1
Abstract:
The present disclosure provides systems, devices, and methods for contactless respiratory monitoring to assess respiratory events using data collected from sensors. The system for monitoring a subject may comprise: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire multi-mode cardiopulmonary data of the subject; and computer processors operatively coupled to the plurality of sensors, wherein the computer processors are configured to (i) receive the multi-mode cardiopulmonary data of the subject from the plurality of sensors, and (ii) process the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

Inventors:
ZHONG ERHENG (US)
LIU NATHAN (US)
DU NAN (US)
Application Number:
PCT/US2021/061559
Publication Date:
June 09, 2022
Filing Date:
December 02, 2021
Assignee:
DAWNLIGHT TECH INC (US)
International Classes:
A61B5/00; A61B5/0205; A61B5/0215
Foreign References:
US20190021607A9 (2019-01-24)
US20040015058A1 (2004-01-22)
US20170035622A1 (2017-02-09)
US20050200486A1 (2005-09-15)
US20110245633A1 (2011-10-06)
US20110112442A1 (2011-05-12)
US20080103403A1 (2008-05-01)
US20130123600A1 (2013-05-16)
US20120083700A1 (2012-04-05)
US20080082018A1 (2008-04-03)
US20100198283A1 (2010-08-05)
Other References:
RAMI N. KHUSHABA, SARATH KODAGODA, DIKAI LIU, and GAMINI DISSANAYAKE: "Electromyogram (EMG) based fingers movement recognition using Neighborhood Preserving Analysis with QR-decomposition," 2011 Seventh International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2011), Adelaide, Australia, 6-9 December 2011, IEEE, Piscataway, NJ, pages 1-6. XP032111813. ISBN: 978-1-4577-0675-2. DOI: 10.1109/ISSNIP.2011.6146512
Attorney, Agent or Firm:
SUPNEKAR, Neil (US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire multi-mode cardiopulmonary data of the subject; and one or more computer processors operatively coupled to the plurality of sensors, wherein the one or more computer processors are configured to (i) receive the multi-mode cardiopulmonary data of the subject from the plurality of sensors, and (ii) process the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

2. The system of claim 1, wherein the subject is a human subject.

3. The system of claim 2, wherein the human subject is a newborn or a neonate subject.

4. The system of claim 2, wherein the human subject is an adult subject.

5. The system of claim 4, wherein the adult subject is being provided intensive care.

6. The system of claim 4, wherein the adult subject is an elderly subject.

7. The system of claim 1, wherein the one or more computer processors are configured to further determine a breathing state of the subject, and generate the output indicative of the respiratory event of the subject based at least in part on the breathing state.

8. The system of claim 7, wherein the breathing state is related to a cardiopulmonary function of the subject.

9. The system of claim 8, wherein the multi-mode cardiopulmonary data comprises at least one of a heart rate, a respiratory rate, and a breath sound.

10. The system of claim 7, wherein the breathing state comprises at least one of a frequency of breathing, a magnitude of breathing, and any combination thereof.

11. The system of claim 1, wherein the multi-mode cardiopulmonary data comprises one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.

12. The system of claim 11, wherein the multi-mode cardiopulmonary data comprises at least one of the heart rate, the systolic blood pressure, the diastolic blood pressure, the respiratory rate, and any combination thereof.

13. The system of claim 1, wherein the plurality of contactless sensors comprises at least one of: audio sensors configured to acquire audio data, image sensors configured to acquire image data, video sensors configured to acquire video data, radar sensors configured to acquire radar data, and any combination thereof.

14. The system of claim 13, wherein the plurality of contactless sensors comprises the audio sensors configured to acquire the audio data.

15. The system of claim 14, wherein the audio sensors comprise one or more members selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.

16. The system of claim 13, wherein the plurality of contactless sensors comprises the image sensors configured to acquire the image data.

17. The system of claim 16, wherein the image sensors comprise one or more members selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

18. The system of claim 13, wherein the plurality of contactless sensors comprises the video sensors configured to acquire the video data.

19. The system of claim 18, wherein the video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

20. The system of claim 13, wherein the plurality of contactless sensors further includes contactless multi-mode cardiopulmonary sensors configured to acquire the multi-mode cardiopulmonary data.

21. The system of claim 20, wherein the contactless multi-mode cardiopulmonary sensors comprise one or more members selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, an electromyography machine, and any combination thereof.

22. The system of claim 21, wherein the contactless multi-mode cardiopulmonary sensors comprise at least one of the heart rate monitor, the blood pressure monitor, the respiratory rate monitor, and any combination thereof.

23. The system of claim 1, wherein generating the output indicative of the respiratory event comprises identifying early signs of the respiratory event.

24. The system of claim 23, wherein identifying the early signs comprises determining a likelihood that the subject is experiencing at least one of a seizure, difficulty feeding, difficulty breathing, hypotonia, organ damage, organ failure, acidemia, an abnormal response to light, and any combination thereof.

25. The system of claim 1, wherein the respiratory event comprises an adverse event.

26. The system of claim 1, wherein the respiratory event comprises a neonatal hypoxia or a sleep apnea.

27. The system of claim 1, further comprising a transceiver operatively coupled to the one or more computer processors, wherein the transceiver is configured to transmit at least one of the breathing state of the subject and the output indicative of the respiratory event of the subject over a network.

28. The system of claim 27, wherein the transceiver comprises a wireless transceiver.

29. The system of claim 28, wherein the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

30. The system of claim 1, wherein the one or more computer processors are configured to further store the acquired multi-mode cardiopulmonary data in a database.

31. The system of claim 30, wherein the database comprises a cloud-based database.

32. The system of claim 1, wherein the one or more computer processors are configured to further generate an alert based at least in part on the generated output.

33. The system of claim 32, wherein the one or more computer processors are configured to further transmit the alert over a network to a health care provider or caretaker of the subject.

34. The system of claim 32 or 33, wherein the alert comprises instructions to administer care or treatment to the subject.

35. The system of claim 34, wherein administering the treatment comprises providing a medication to the subject.

36. The system of claim 32 or 33, wherein the alert comprises an audible alarm.

37. The system of claim 32 or 33, wherein the alert comprises a visible alarm.

38. The system of claim 37, wherein the visible alarm is produced by lights.

39. The system of claim 37, wherein the visible alarm is displayed on an electronic display.

40. The system of claim 33, wherein the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.

41. The system of claim 1, wherein (ii) comprises fusing the multi-mode cardiopulmonary data.

42. The system of claim 1, wherein (ii) comprises signal processing the multi-mode cardiopulmonary data.

43. The system of claim 1, wherein the trained algorithm comprises a machine learning-based classifier configured to process the multi-mode cardiopulmonary data to determine the breathing state of the subject.

44. The system of claim 43, wherein the machine learning-based classifier comprises one or more members selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.

45. The system of claim 1, wherein the subject has received a clinical treatment or procedure.

46. The system of claim 45, wherein the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.

47. The system of claim 46, wherein the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.

48. The system of any one of claims 1-47, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 70%.

49. The system of any one of claims 1-47, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 70%.

50. The system of any one of claims 1-47, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 70%.

51. The system of any one of claims 1-47, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 70%.

52. The system of any one of claims 1-47, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.70.

53. The system of any one of claims 1-52, wherein the one or more computer processors are configured to perform (i) and (ii) in real time or substantially in real time.

54. A method for monitoring a subject, comprising:

(a) obtaining multi-mode cardiopulmonary data of the subject acquired from a plurality of sensors, which plurality of sensors comprises a plurality of contactless sensors; and

(b) computer processing the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

55. The method of claim 54, wherein the subject is a human subject.

56. The method of claim 55, wherein the human subject is a newborn or a neonate subject.

57. The method of claim 55, wherein the human subject is an adult subject.

58. The method of claim 57, wherein the adult subject is being provided intensive care.

59. The method of claim 57, wherein the adult subject is an elderly subject.

60. The method of claim 54, further comprising determining a breathing state of the subject, and generating the output indicative of the respiratory event of the subject based at least in part on the breathing state.

61. The method of claim 60, wherein the breathing state is related to a cardiopulmonary function of the subject.

62. The method of claim 61, wherein the multi-mode cardiopulmonary data comprises at least one of a heart rate, a respiratory rate, and a breath sound.

63. The method of claim 60, wherein the breathing state comprises at least one of a frequency of breathing, a magnitude of breathing, and any combination thereof.

64. The method of claim 54, wherein the multi-mode cardiopulmonary data comprises one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.

65. The method of claim 64, wherein the multi-mode cardiopulmonary data comprises at least one of the heart rate, the systolic blood pressure, the diastolic blood pressure, the respiratory rate, and any combination thereof.

66. The method of claim 54, wherein the plurality of contactless sensors comprises at least one of: audio sensors configured to acquire audio data, image sensors configured to acquire image data, video sensors configured to acquire video data, radar sensors configured to acquire radar data, and any combination thereof.

67. The method of claim 66, wherein the plurality of contactless sensors comprises the audio sensors configured to acquire the audio data.

68. The method of claim 67, wherein the audio sensors comprise one or more members selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.

69. The method of claim 66, wherein the plurality of contactless sensors comprises the image sensors configured to acquire the image data.

70. The method of claim 69, wherein the image sensors comprise one or more members selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

71. The method of claim 66, wherein the plurality of contactless sensors comprises the video sensors configured to acquire the video data.

72. The method of claim 71, wherein the video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

73. The method of claim 66, wherein the plurality of contactless sensors further comprises contactless multi-mode cardiopulmonary sensors configured to acquire the multi-mode cardiopulmonary data.

74. The method of claim 73, wherein the contactless multi-mode cardiopulmonary sensors comprise one or more members selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, an electromyography machine, and any combination thereof.

75. The method of claim 74, wherein the contactless multi-mode cardiopulmonary sensors comprise at least one of the heart rate monitor, the blood pressure monitor, the respiratory rate monitor, and any combination thereof.

76. The method of claim 54, wherein generating the output indicative of the respiratory event comprises identifying early signs of the respiratory event.

77. The method of claim 76, wherein identifying the early signs comprises determining a likelihood that the subject is experiencing at least one of a seizure, difficulty feeding, difficulty breathing, hypotonia, organ damage, organ failure, acidemia, an abnormal response to light, and any combination thereof.

78. The method of claim 54, wherein the respiratory event comprises an adverse event.

79. The method of claim 54, wherein the respiratory event comprises a neonatal hypoxia or a sleep apnea.

80. The method of claim 54, further comprising using a transceiver to transmit at least one of the breathing state of the subject and the output indicative of the respiratory event of the subject over a network.

81. The method of claim 80, wherein the transceiver comprises a wireless transceiver.

82. The method of claim 81, wherein the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

83. The method of claim 54, further comprising storing the acquired multi-mode cardiopulmonary data in a database.

84. The method of claim 83, wherein the database comprises a cloud-based database.

85. The method of claim 54, further comprising generating an alert based at least in part on the generated output.

86. The method of claim 85, further comprising transmitting the alert over a network to a health care provider or caretaker of the subject.

87. The method of claim 85 or 86, wherein the alert comprises instructions to administer care or treatment to the subject.

88. The method of claim 87, wherein administering the treatment comprises providing a medication to the subject.

89. The method of claim 85 or 86, wherein the alert comprises an audible alarm.

90. The method of claim 85 or 86, wherein the alert comprises a visible alarm.

91. The method of claim 90, wherein the visible alarm is produced by lights.

92. The method of claim 90, wherein the visible alarm is displayed on an electronic display.

93. The method of claim 86, wherein the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.

94. The method of claim 54, wherein (b) comprises fusing the multi-mode cardiopulmonary data.

95. The method of claim 54, wherein (b) comprises signal processing the multi-mode cardiopulmonary data.

96. The method of claim 54, wherein the trained algorithm comprises a machine learning-based classifier configured to process the multi-mode cardiopulmonary data to determine the breathing state of the subject.

97. The method of claim 96, wherein the machine learning-based classifier comprises one or more members selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.

98. The method of claim 54, wherein the subject has received a clinical treatment or procedure.

99. The method of claim 98, wherein the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.

100. The method of claim 99, wherein the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.

101. The method of any one of claims 54-100, further comprising processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 70%.

102. The method of any one of claims 54-100, further comprising processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 70%.

103. The method of any one of claims 54-100, further comprising processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 70%.

104. The method of any one of claims 54-100, further comprising processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 70%.

105. The method of any one of claims 54-100, further comprising processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.70.

106. The method of any one of claims 54-105, further comprising performing (a) and (b) in real time or substantially in real time.

107. A non-transitory computer-readable medium comprising machine-executable instructions that, upon execution by one or more computer processors, implement a method for monitoring a subject, the method comprising:

(a) obtaining multi-mode cardiopulmonary data of the subject acquired from a plurality of sensors, which plurality of sensors comprises a plurality of contactless sensors; and

(b) processing the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

Description:
SYSTEMS AND METHODS FOR CONTACTLESS RESPIRATORY MONITORING

CROSS-REFERENCE

[0001] This application claims the benefit of U.S. Provisional Application No. 63/120,817, filed on December 3, 2020; the contents of which are incorporated by reference herein in their entirety.

BACKGROUND

[0002] In the neonatal ward, continuous and effective monitoring (e.g., cardiopulmonary function monitoring) may be vital to the lives and health of newborns. Devices and systems for neonatal cardiopulmonary function monitoring may comprise sensor devices such as an electrocardiograph (ECG) or a pulse oximeter.

SUMMARY

[0003] In current clinical practice, medical institutions may have a generally insufficient number of medical staff. Further, devices and systems for neonatal cardiopulmonary function monitoring may present challenges. For example, monitoring devices and systems may provide data from only one source, which may not be enough to cover all complicated clinical situations and may miss critical events. As another example, contact breathing monitoring devices and systems may be susceptible to interference from sources such as neonatal crying, cable pulling, limb twisting, and other limb movements, which may degrade the quality of data collection. Neonatal breathing may therefore be difficult to monitor, resulting in a large number of false alarms. As another example, in these devices and systems, the alarm may be triggered only when abnormal breathing causes the heart rate to slow and the blood oxygen saturation to drop, which produces an inevitable lag (hysteresis). Therefore, continuous neonatal monitoring may be difficult, as contact cardiopulmonary function monitoring equipment may be technically limited and subject to hysteresis, and collected data may be susceptible to interference.

[0004] Recognized herein is a need for systems, devices, and methods for contactless respiratory monitoring. The present disclosure provides contactless respiratory monitoring systems, devices, and methods that may adopt a non-contact collection of multiple signals and neural network-based respiratory analysis algorithms to provide real-time and effective continuous neonatal cardiopulmonary function monitoring. Such contactless respiratory monitoring systems, devices, and methods may be based on artificial intelligence and intelligent perception sensing techniques.

[0005] Systems, devices, and methods of the present disclosure may use intelligent early warning to detect neonatal hypoxic adverse events. When the system detects early characteristics of the adverse event, a healthcare provider or caretaker may be alerted in time to intervene early to avoid neonatal hypoxic brain injury. Systems, devices, and methods of the present disclosure may advantageously provide improved timeliness, accuracy, and standardization of neonatal respiratory management.

[0006] Systems, devices, and methods of the present disclosure may perform real-time dynamic collection of neonatal cardiopulmonary-related multi-dimensional data, including heart rate, respiratory rate, and breath sounds. Based on this collected data, the system may use a trained artificial intelligence (AI) model to provide cardiopulmonary function monitoring. The system may perform signal processing on the data to increase the signal strength, produce representations of the data, fuse the data, and then implement one or more neural network layers to produce a prediction of an adverse respiratory event.
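By way of non-limiting illustration, the following sketch shows one way the pipeline described above could be organized: each modality (e.g., heart rate, respiratory rate, breath-sound features) is encoded into a common representation, the representations are fused, and one or more neural network layers produce a prediction of an adverse respiratory event. All module names, feature dimensions, and values are illustrative assumptions, not the disclosed implementation; upstream signal processing (denoising, normalization) is assumed to have already occurred.

```python
# Illustrative sketch only: a small multi-modal fusion network in PyTorch.
# Dimensions and names are hypothetical, not from the disclosure.
import torch
import torch.nn as nn

class FusionRespiratoryModel(nn.Module):
    def __init__(self, dims=(1, 1, 64)):  # heart rate, respiratory rate, breath-sound features
        super().__init__()
        # One small encoder per modality maps it to a shared 16-d representation.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in dims
        )
        # Fused representation -> probability of an adverse respiratory event.
        self.head = nn.Sequential(
            nn.Linear(16 * len(dims), 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, modalities):
        # `modalities`: one tensor per sensor stream, already signal-processed.
        fused = torch.cat(
            [enc(x) for enc, x in zip(self.encoders, modalities)], dim=-1
        )
        return self.head(fused)

model = FusionRespiratoryModel()
hr = torch.tensor([[120.0]])   # heart rate (bpm)
rr = torch.tensor([[45.0]])    # respiratory rate (breaths/min)
snd = torch.randn(1, 64)       # e.g., spectral features of breath sounds
risk = model([hr, rr, snd])    # probability in [0, 1]
```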

[0007] In an aspect, the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire multi-mode cardiopulmonary data of the subject; and one or more computer processors operatively coupled to the plurality of sensors, wherein the one or more computer processors are configured to (i) receive the multi-mode cardiopulmonary data of the subject from the plurality of sensors, (ii) process the multi-mode cardiopulmonary data using a trained algorithm to determine a breathing state of the subject, and (iii) generate an output indicative of a respiratory event of the subject based at least in part on the breathing state determined in (ii).
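A minimal sketch of how steps (i)-(iii) might be arranged as a loop is shown below; the sensor-reading and model functions are injected placeholders, and all names and the polling period are assumptions for illustration only.

```python
# Illustrative sketch only: a (i)-(iii) monitoring loop with injected components.
import time
from typing import Callable, Sequence

def monitor(read_sensors: Callable[[], Sequence],   # (i) acquire multi-mode data
            assess: Callable[[Sequence], float],    # (ii) trained algorithm -> breathing state/risk
            emit_output: Callable[[float], None],   # (iii) output indicative of a respiratory event
            period_s: float = 1.0) -> None:
    while True:
        data = read_sensors()     # (i)
        risk = assess(data)       # (ii)
        emit_output(risk)         # (iii)
        time.sleep(period_s)      # approximates real-time operation
```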

[0008] In some embodiments, the subject is a human subject. In some embodiments, the human subject is a newborn or a neonate subject. In some embodiments, the human subject is an adult subject. In some embodiments, the adult subject is being provided intensive care. In some embodiments, the adult subject is an elderly subject.

[0009] In some embodiments, the one or more computer processors are configured to further determine a breathing state of the subject, and generate the output indicative of the respiratory event of the subject based at least in part on the breathing state. In some embodiments, the breathing state is related to a cardiopulmonary function of the subject. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of a heart rate, a respiratory rate, and a breath sound. In some embodiments, the breathing state comprises at least one of a frequency of breathing, a magnitude of breathing, and any combination thereof.
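For illustration, a breathing state comprising a frequency and a magnitude of breathing might be estimated from a contactless respiratory waveform as sketched below; the minimum peak spacing and the synthetic test signal are assumptions, and the waveform is assumed to be extracted upstream (e.g., from radar or video data).

```python
# Illustrative sketch only: breathing frequency and magnitude from a waveform.
import numpy as np
from scipy.signal import find_peaks

def breathing_state(waveform: np.ndarray, fs: float):
    """Return (breaths per minute, mean peak magnitude above baseline)."""
    # Neonates can exceed 60 breaths/min, so require only ~0.3 s between
    # inhalation peaks (illustrative value).
    peaks, _ = find_peaks(waveform, distance=int(0.3 * fs))
    if len(peaks) < 2:
        return 0.0, 0.0
    duration_min = (peaks[-1] - peaks[0]) / fs / 60.0
    freq = (len(peaks) - 1) / duration_min                    # breaths/min
    magnitude = float(np.mean(waveform[peaks]) - np.mean(waveform))
    return freq, magnitude

fs = 20.0                                  # samples per second
t = np.arange(0, 30, 1 / fs)
sig = np.sin(2 * np.pi * 0.75 * t)         # synthetic ~45 breaths/min signal
print(breathing_state(sig, fs))            # approximately (45.0, 1.0)
```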

[0010] In some embodiments, the multi-mode cardiopulmonary data comprises one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of the heart rate, the systolic blood pressure, the diastolic blood pressure, the respiratory rate, and any combination thereof.

[0011] In some embodiments, the plurality of contactless sensors comprises at least one of: audio sensors configured to acquire audio data, image sensors configured to acquire image data, video sensors configured to acquire video data, radar sensors configured to acquire radar data, and any combination thereof.

[0012] In some embodiments, the plurality of contactless sensors comprises the audio sensors configured to acquire the audio data. In some embodiments, the audio sensors comprise one or more members selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.

[0013] In some embodiments, the plurality of contactless sensors comprises the image sensors configured to acquire the image data. In some embodiments, the image sensors comprise one or more members selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0014] In some embodiments, the plurality of contactless sensors comprises the video sensors configured to acquire the video data. In some embodiments, the video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0015] In some embodiments, the plurality of contactless sensors further includes contactless multi-mode cardiopulmonary sensors configured to acquire the multi-mode cardiopulmonary data. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise one or more members selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, an electromyography machine, and any combination thereof. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise at least one of the heart rate monitor, the blood pressure monitor, the respiratory rate monitor, and any combination thereof.

[0016] In some embodiments, generating the output indicative of the respiratory event comprises identifying early signs of the respiratory event. In some embodiments, identifying the early signs comprises determining a likelihood that the subject is experiencing at least one of a seizure, difficulty feeding, difficulty breathing, hypotonia, organ damage, organ failure, acidemia, an abnormal response to light, and any combination thereof. In some embodiments, the respiratory event comprises an adverse event. In some embodiments, the respiratory event comprises a neonatal hypoxia or a sleep apnea.

[0017] In some embodiments, the system further comprises a transceiver operatively coupled to the one or more computer processors, wherein the transceiver is configured to transmit at least one of the breathing state of the subject and the output indicative of the respiratory event of the subject over a network. In some embodiments, the transceiver comprises a wireless transceiver. In some embodiments, the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

[0018] In some embodiments, the one or more computer processors are configured to further store the acquired multi-mode cardiopulmonary data in a database. In some embodiments, the database comprises a cloud-based database.

[0019] In some embodiments, the one or more computer processors are configured to further generate an alert based at least in part on the generated output. In some embodiments, the one or more computer processors are configured to further transmit the alert over a network to a health care provider or caretaker of the subject. In some embodiments, the alert comprises instructions to administer care or treatment to the subject. In some embodiments, administering the treatment comprises providing a medication to the subject. In some embodiments, the alert comprises an audible alarm. In some embodiments, the alert comprises a visible alarm. In some embodiments, the visible alarm is produced by lights. In some embodiments, the visible alarm is displayed on an electronic display. In some embodiments, the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
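The alert path described above might look like the following sketch, in which a risk score crossing a threshold triggers transmission of an alert, optionally carrying care instructions, over a network. The endpoint URL, threshold, and payload fields are hypothetical assumptions, not part of the disclosure.

```python
# Illustrative sketch only: threshold-based alert generation and transmission.
import json
import urllib.request

ALERT_THRESHOLD = 0.8  # illustrative risk cutoff

def maybe_send_alert(risk: float, subject_id: str,
                     endpoint: str = "https://example.invalid/alerts") -> None:
    if risk < ALERT_THRESHOLD:
        return  # no alert for low-risk outputs
    payload = {
        "subject": subject_id,
        "risk": risk,
        "message": "Possible respiratory event detected",
        "instructions": "Assess airway and breathing; notify physician",  # example care instructions
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # transmit the alert over the network

maybe_send_alert(0.5, "bed-12")  # below threshold: nothing is transmitted
```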

[0020] In some embodiments, (ii) comprises fusing the multi-mode cardiopulmonary data. In some embodiments, (ii) comprises signal processing the multi-mode cardiopulmonary data.

[0021] In some embodiments, the trained algorithm comprises a machine learning-based classifier configured to process the multi-mode cardiopulmonary data to determine the breathing state of the subject. In some embodiments, the machine learning-based classifier comprises one or more members selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
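As a concrete (and purely illustrative) instance of one listed classifier family, the sketch below trains a random forest on fused multi-mode feature windows; the features and event labels here are synthetic stand-ins for sensor-derived features and clinical annotations.

```python
# Illustrative sketch only: training one of the listed classifier families.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))        # fused multi-mode features per window (synthetic)
y = rng.integers(0, 2, size=1000)     # 1 = window preceding a respiratory event (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
risk_scores = clf.predict_proba(X_te)[:, 1]   # per-window event probabilities
```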

[0022] In some embodiments, the subject has received a clinical treatment or procedure. In some embodiments, the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof. In some embodiments, the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.

[0023] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 70%.

[0024] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 70%.

[0025] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 70%.

[0026] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 70%.

[0027] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99. In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.70.

[0028] In some embodiments, the one or more computer processors are configured to perform (i), (ii), and (iii) in real time or substantially in real time.
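The performance measures recited above (sensitivity, specificity, positive predictive value, negative predictive value, and AUC) could be checked on held-out predictions as sketched below; the labels and scores are synthetic placeholders, not reported results.

```python
# Illustrative sketch only: computing the recited performance measures.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)                         # held-out event labels (synthetic)
scores = np.clip(y_true * 0.6 + rng.random(500) * 0.5, 0, 1)  # synthetic risk scores

y_pred = (scores >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
auc = roc_auc_score(y_true, scores)
print(sensitivity, specificity, ppv, npv, auc)
```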

[0029] In another aspect, the present disclosure provides a method for monitoring a subject, comprising: (a) obtaining multi-mode cardiopulmonary data of the subject acquired from a plurality of sensors, which plurality of sensors comprises a plurality of contactless sensors; and (b) computer processing the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

[0030] In some embodiments, the subject is a human subject. In some embodiments, the human subject is a newborn or a neonate subject. In some embodiments, the human subject is an adult subject. In some embodiments, the adult subject is being provided intensive care. In some embodiments, the adult subject is an elderly subject.

[0031] In some embodiments, the method further comprises determining a breathing state of the subject, and generating the output indicative of the respiratory event of the subject based at least in part on the breathing state. In some embodiments, the breathing state is related to a cardiopulmonary function of the subject. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of a heart rate, a respiratory rate, and a breath sound. In some embodiments, the breathing state comprises at least one of a frequency of breathing, a magnitude of breathing, and any combination thereof.

[0032] In some embodiments, the multi-mode cardiopulmonary data comprises one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of the heart rate, the systolic blood pressure, the diastolic blood pressure, the respiratory rate, and any combination thereof.

[0033] In some embodiments, the plurality of contactless sensors comprises at least one of audio sensors configured to acquire audio data, image sensors configured to acquire image data, video sensors configured to acquire video data, radar sensors configured to acquire radar data, and any combination thereof.

[0034] In some embodiments, the plurality of contactless sensors comprises the audio sensors configured to acquire the audio data. In some embodiments, the audio sensors comprise one or more members selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.

[0035] In some embodiments, the plurality of contactless sensors comprises the image sensors configured to acquire the image data. In some embodiments, the image sensors comprise one or more members selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0036] In some embodiments, the plurality of contactless sensors comprises the video sensors configured to acquire the video data. In some embodiments, the video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0037] In some embodiments, the plurality of contactless sensors further includes contactless multi-mode cardiopulmonary sensors configured to acquire the multi-mode cardiopulmonary data. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise one or more members selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, an electromyography machine, and any combination thereof. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise at least one of the heart rate monitor, the blood pressure monitor, the respiratory rate monitor, and any combination thereof.

[0038] In some embodiments, generating the output indicative of the respiratory event comprises identifying early signs of the respiratory event. In some embodiments, identifying the early signs comprises determining a likelihood that the subject is experiencing at least one of a seizure, difficulty feeding, difficulty breathing, hypotonia, organ damage, organ failure, acidemia, an abnormal response to light, and any combination thereof. In some embodiments, the respiratory event comprises an adverse event. In some embodiments, the respiratory event comprises a neonatal hypoxia or a sleep apnea.

[0039] In some embodiments, the method further comprises using a transceiver to transmit at least one of the breathing state of the subject and the output indicative of the respiratory event of the subject over a network. In some embodiments, the transceiver comprises a wireless transceiver. In some embodiments, the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

[0040] In some embodiments, the method further comprises storing the acquired multi-mode cardiopulmonary data in a database. In some embodiments, the database comprises a cloud-based database.

[0041] In some embodiments, the method further comprises generating an alert based at least in part on the generated output. In some embodiments, the method further comprises transmitting the alert over a network to a health care provider or caretaker of the subject. In some embodiments, the alert comprises instructions to administer care or treatment to the subject. In some embodiments, administering the treatment comprises providing a medication to the subject. In some embodiments, the alert comprises an audible alarm. In some embodiments, the alert comprises a visible alarm. In some embodiments, the visible alarm is produced by lights. In some embodiments, the visible alarm is displayed on an electronic display. In some embodiments, the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.

[0042] In some embodiments, (b) comprises fusing the multi-mode cardiopulmonary data. In some embodiments, (b) comprises signal processing the multi-mode cardiopulmonary data.

[0043] In some embodiments, the trained algorithm comprises a machine learning-based classifier configured to process the multi-mode cardiopulmonary data to determine the breathing state of the subject. In some embodiments, the machine learning-based classifier comprises one or more members selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.

[0044] In some embodiments, the subject has received a clinical treatment or procedure. In some embodiments, the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof. In some embodiments, the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.

[0045] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 70%.

[0046] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 70%.

[0047] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 70%.

[0048] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 70%.

[0049] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.70.

[0050] In some embodiments, the method further comprises performing (a) and (b) in real time or substantially in real time.

[0051] In another aspect, the present disclosure provides a non-transitory computer-readable medium comprising machine-executable code that, upon execution by one or more computer processors, implements a method for monitoring a subject, the method comprising: (a) obtaining multi-mode cardiopulmonary data of the subject acquired from a plurality of sensors, which plurality of sensors comprises a plurality of contactless sensors; and (b) computer processing the multi-mode cardiopulmonary data using a trained algorithm to generate an output indicative of a respiratory event of the subject.

[0052] In some embodiments, the subject is a human subject. In some embodiments, the human subject is a newborn or a neonate subject. In some embodiments, the human subject is an adult subject. In some embodiments, the adult subject is being provided intensive care. In some embodiments, the adult subject is an elderly subject.

[0053] In some embodiments, the method further comprises determining a breathing state of the subject, and generating the output indicative of the respiratory event of the subject based at least in part on the breathing state. In some embodiments, the breathing state is related to a cardiopulmonary function of the subject. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of a heart rate, a respiratory rate, and a breath sound. In some embodiments, the breathing state comprises at least one of a frequency of breathing, a magnitude of breathing, and any combination thereof.

[0054] In some embodiments, the multi-mode cardiopulmonary data comprises one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals. In some embodiments, the multi-mode cardiopulmonary data comprises at least one of the heart rate, the systolic blood pressure, the diastolic blood pressure, the respiratory rate, and any combination thereof.

[0055] In some embodiments, the plurality of contactless sensors comprises at least one of: audio sensors configured to acquire audio data, image sensors configured to acquire image data, video sensors configured to acquire video data, radar sensors configured to acquire radar data, and any combination thereof.

[0056] In some embodiments, the plurality of contactless sensors comprises the audio sensors configured to acquire the audio data. In some embodiments, the audio sensors comprise one or more members selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.

[0057] In some embodiments, the plurality of contactless sensors comprises the image sensors configured to acquire the image data. In some embodiments, the image sensors comprise one or more members selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0058] In some embodiments, the plurality of contactless sensors comprises the video sensors configured to acquire the video data. In some embodiments, the video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, a Quanta Image Sensor (QIS), and any combination thereof.

[0059] In some embodiments, the plurality of contactless sensors further includes contactless multi-mode cardiopulmonary sensors configured to acquire the multi-mode cardiopulmonary data. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise one or more members selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, an electromyography machine, and any combination thereof. In some embodiments, the contactless multi-mode cardiopulmonary sensors comprise at least one of the heart rate monitor, the blood pressure monitor, the respiratory rate monitor, and any combination thereof.

[0060] In some embodiments, generating the output indicative of the respiratory event comprises identifying early signs of the respiratory event. In some embodiments, identifying the early signs comprises determining a likelihood that the subject is experiencing at least one of a seizure, difficulty feeding, difficulty breathing, hypotonia, organ damage, organ failure, acidemia, an abnormal response to light, and any combination thereof. In some embodiments, the respiratory event comprises an adverse event. In some embodiments, the respiratory event comprises a neonatal hypoxia or a sleep apnea.

[0061] In some embodiments, the method further comprises using a transceiver to transmit at least one of the breathing state of the subject and the output indicative of the respiratory event of the subject over a network. In some embodiments, the transceiver comprises a wireless transceiver. In some embodiments, the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

[0062] In some embodiments, the method further comprises storing the acquired multi-mode cardiopulmonary data in a database. In some embodiments, the database comprises a cloud-based database.

[0063] In some embodiments, the method further comprises generating an alert based at least in part on the generated output. In some embodiments, the method further comprises transmitting the alert over a network to a health care provider or caretaker of the subject. In some embodiments, the alert comprises instructions to administer care or treatment to the subject. In some embodiments, administering the treatment comprises providing a medication to the subject. In some embodiments, the alert comprises an audible alarm. In some embodiments, the alert comprises a visible alarm. In some embodiments, the visible alarm is produced by lights. In some embodiments, the visible alarm is displayed on an electronic display. In some embodiments, the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.

[0064] In some embodiments, (ii) comprises fusing the multi-mode cardiopulmonary data. In some embodiments, (ii) comprises performing signal processing on the multi-mode cardiopulmonary data.

[0065] In some embodiments, the trained algorithm comprises a machine learning-based classifier configured to process the multi-mode cardiopulmonary data to determine the breathing state of the subject. In some embodiments, the machine learning-based classifier comprises one or more members selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.

[0066] In some embodiments, the subject has received a clinical treatment or procedure. In some embodiments, the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof. In some embodiments, the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.

[0067] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a sensitivity of at least about 70%.

[0068] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a specificity of at least about 70%.

[0069] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a positive predictive value of at least about 70%.

[0070] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with a negative predictive value of at least about 70%.

[0071] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99. In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the respiratory event of the subject with an Area-Under-the-Curve (AUC) of at least about 0.70.

[0072] In some embodiments, the method further comprises performing (i), (ii), and (iii) in real time or substantially in real time.

[0073] Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.

[0074] Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.

[0075] Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

INCORPORATION BY REFERENCE

[0076] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.

BRIEF DESCRIPTION OF THE DRAWINGS

[0077] The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings (also “Figure” and “FIG.” herein), of which:

[0078] FIG. 1 illustrates an example of a contactless respiratory monitoring system 100.

[0079] FIG. 2 illustrates an example of a sequence 200 of functions of a contactless respiratory monitoring system.

[0080] FIG. 3 illustrates an example of a radar processing layer 300.

[0081] FIG. 4 illustrates an example of an audio processing layer 400.

[0082] FIG. 5 illustrates an example of a sensor fusion layer 500.

[0083] FIG. 6 illustrates an example of a block diagram 600 for detection of a respiratory event using data fusion.

[0084] FIG. 7 shows a computer system that is programmed or otherwise configured to implement methods provided herein.

DETAILED DESCRIPTION

[0085] While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.

[0086] Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description.

[0087] Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.

[0088] Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.

[0089] The term “subject,” as used herein, generally refers to a human such as a patient. The subject may be a person (e.g., a patient) with a disease or disorder, or a person that has been treated for a disease, disorder, or condition; or a person that is being monitored for recurrence of a disease, disorder, or condition; or a person that is suspected of having the disease, disorder, or condition; or a person that does not have or is not suspected of having the disease, disorder, or condition. The disease or disorder may be an infectious disease, an immune disorder or disease, a cancer, a genetic disease, a degenerative disease, a lifestyle disease, an injury, a rare disease, or an age-related disease. The infectious disease may be caused by bacteria, viruses, fungi, and/or parasites. For example, the disease, disorder, or condition may comprise adverse respiratory conditions, atrial fibrillation, stroke, heart attack, and other preventable outpatient illnesses. For example, the disease, disorder, or condition may comprise deterioration or recurrence of a disease, disorder, or condition for which the subject has previously been treated.

[0090] In the neonatal ward, continuous and effective monitoring (e.g., cardiopulmonary function monitoring) may be vital to the lives and health of newborns. Devices and systems for neonatal cardiopulmonary function monitoring may comprise sensor devices such as an electrocardiogram (ECG) monitor or a pulse oximeter.

[0091] In current clinical practice, there may be a generally insufficient number of medical staff in medical institutions. Further, devices and systems for neonatal cardiopulmonary function monitoring may present challenges. For example, monitoring devices and systems may provide data from only one source, which may not be enough to cover all the complicated clinical situations and may miss some critical events. As another example, contact breathing monitoring devices and systems may be susceptible to interference from sources such as neonatal crying, cable pulling, limb twisting, and other limb movements, which may affect the quality of data collection. It may be difficult to monitor neonatal breathing, resulting in a large number of false alarms. As another example, in these devices and systems, the alarm may be triggered only when the abnormal breathing causes the heart rate to slow down and the blood oxygen saturation to drop, which produces an inevitable hysteresis (lag). Therefore, continuous neonatal monitoring may be difficult, as contact cardiopulmonary function monitoring equipment may be technically limited and may produce hysteresis, and collected data may be susceptible to interference.

[0092] Recognized herein is a need for systems, devices, and methods for contactless respiratory monitoring. The present disclosure provides contactless respiratory monitoring systems, devices, and methods that may adopt non-contact collection of multiple signals and neural network-based respiratory analysis algorithms to provide real-time and effective continuous neonatal cardiopulmonary function monitoring. Such contactless respiratory monitoring systems, devices, and methods may be based on artificial intelligence and intelligent perception sensing techniques.

[0093] Systems, devices, and methods of the present disclosure may use intelligent early warning to detect neonatal hypoxic adverse events. When the system detects early characteristics of the adverse event, a healthcare provider or caretaker may be alerted in time to intervene early to avoid neonatal hypoxic brain injury. Systems, devices, and methods of the present disclosure may advantageously provide improved timeliness, accuracy, and standardization of neonatal respiratory management.

[0094] Systems, devices, and methods of the present disclosure may perform real-time dynamic collection of neonatal cardiopulmonary-related multi-dimensional data, including heart rate, respiration rate, and breath sounds. Based on this collected data, the system may use a trained artificial intelligence (AI) model to provide cardiopulmonary function monitoring. The system may perform signal processing on the data to increase the signal strength, produce representations of the data, fuse the data, and then implement one or more neural network layers to produce a prediction of an adverse respiratory event.

[0095] Systems, devices, and methods of the present disclosure may allow patients with elevated risk of a disease, disorder, or condition to be accurately monitored inside or outside of a clinical setting, thereby improving the accuracy of detection, reducing mortality rates and clinical health care costs, and improving patients’ quality of life. For example, such systems, devices, and methods may produce accurate detections or predictions of likelihood of occurrence or recurrence of a disease, disorder, or complication that are clinically actionable by physicians (or other health care workers) toward deciding whether to discharge patients from a hospital for monitoring in a home setting, thereby reducing clinical health care costs. As another example, such systems, devices, and methods may enable in-home patient monitoring, thereby increasing patients’ quality of life compared to remaining hospitalized or making frequent visits to clinical care sites. A goal of patient monitoring (e.g., in-home) may include preventing hospital re-admissions for a discharged patient.

[0096] The collected and transmitted health data may be aggregated, for example, by batching and uploading to a computer server (e.g., a secure cloud database), where artificially intelligent algorithms may analyze the data in a continuous or real-time manner. If an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease, disorder, or condition, or occurrence of a complication) is detected or predicted, the computer server may send a real-time alert to a health care provider (e.g., a general practitioner and/or treating physician). The health care provider may subsequently perform follow-up care. In some embodiments, the follow-up care may be administered to the subject in a clinical setting. In some embodiments, the follow-up care may comprise alerting the patient to receive further treatment or clinical inspection (e.g., monitoring, diagnosis, or prognosis). Alternatively or in combination, the health care provider may prescribe a treatment or a clinical procedure to be administered to the patient based on the real-time alert.

[0097] In an aspect, the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire multi-mode cardiopulmonary data of the subject; and one or more computer processors operatively coupled to the plurality of sensors, wherein the one or more computer processors are configured to (i) receive the multi-mode cardiopulmonary data of the subject from the plurality of sensors, (ii) process the multi-mode cardiopulmonary data using a trained algorithm to determine a breathing state of the subject, and (iii) generate an output indicative of a respiratory event of the subject based at least in part on the breathing state determined in (ii).

[0098] Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.

[0099] Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.

[00100] Systems, devices, and methods of the present disclosure may perform real-time dynamic collection of neonatal cardiopulmonary-related multi-dimensional data, including heart rate, respiration rate, and breath sounds. Based on this collected data, the disclosed system may use a trained artificial intelligence (AI) model to predict adverse respiratory events.

[00101] Systems, devices, and methods of the present disclosure may use data collected from many sensor sources rather than from one source. This may provide more comprehensive information for the model training, increasing the accuracy of predictions produced from the monitoring. Sensors may include radar and audio sensors (e.g., microphones). Further, systems, devices, and methods of the present disclosure may collect data in a contactless manner that is more comfortable for the subject. This may greatly improve subject compliance and lower data noise from subject movements. The system may be placed in many environments, including in patient care facilities and subjects’ homes (e.g., in a subject’s bedroom). Further, systems, devices, and methods of the present disclosure may use a trained model for early detection of adverse events, such as neonatal hypoxic events. The system may construct an intelligent recognition algorithm to predict early signs of adverse events based on dynamic real-time multi-respiration signals collected by intelligent hardware. In some embodiments, when the system detects early signs of hypoxia, it may issue an alert or alarm to observing medical personnel.

[00102] The system may detect respiratory events using the following process. The system may collect multi-mode cardiopulmonary data using the sensors. The system may perform signal processing on the multi-mode data. The system may then produce latent representations of the signal-processed cardiopulmonary data. The system may then fuse the multi-mode cardiopulmonary data, and a detection layer may process the fused data to predict a respiratory event, as described below.
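
By way of illustration, this staged process may be organized as in the following Python sketch. The stage callables (signal_process, encode, fuse, classify) are hypothetical placeholders standing in for the signal processing, representation learning, data fusion, and detection layers described elsewhere herein; they are not components defined by the present disclosure.

```python
# Hypothetical sketch of the staged detection pipeline: collect ->
# signal-process -> encode latent representations -> fuse -> detect.
import numpy as np


def detect_respiratory_event(radar_samples: np.ndarray,
                             audio_samples: np.ndarray,
                             signal_process, encode, fuse, classify) -> bool:
    """Run one detection pass over freshly collected sensor data.

    The four callables are supplied by the radar/audio signal processing,
    representation-learning, fusion, and detection layers, respectively.
    """
    processed = [signal_process(radar_samples), signal_process(audio_samples)]
    latents = [encode(x) for x in processed]  # latent representations
    fused = fuse(latents)                     # e.g., concatenation
    return bool(classify(fused))              # True indicates a detected event
```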

[00103] FIG. 1 illustrates an example of a contactless respiratory monitoring system 100 for monitoring a subject. The system 100 comprises a plurality of sensors 120, a link 140, and computing devices 160. The subject may be a human subject. The subject may be a neonate or newborn. The subject may be an adult subject performing an activity, such as driving. The subject may be an elderly subject. The sensors may comprise contactless sensors configured to collect data from a human subject without contacting the subject’s body. In some embodiments, the sensors 120 may comprise one or more audio sensors, one or more radar sensors, and/or one or more radio-frequency identification (RFID) sensors. The audio sensors may comprise a microphone array.

[00104] The link 140 may connect the sensors 120 to the computing devices 160. The link 140 may be a wired link, such as a USB, serial, or Ethernet connection, or a wireless link, such as a Bluetooth link, a WiFi link, a radio frequency (RF) link, or a Zigbee link. The link 140 may be implemented using a wireless transceiver, such as a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver. The link 140 may be part of a network, such as a local area network (LAN), a wide area network (WAN), or a cloud-based network. The computing devices may perform analysis of captured sensor data. The computing devices 160 may comprise one or more computing devices. The computing devices may comprise an edge device used to detect vital signs and abnormal events, and a server containing a database and components for data analysis. In some embodiments, these functions may be implemented on one computing device.

[00105] FIG. 2 illustrates an example of a sequence 200 of functions of the contactless respiratory monitoring system 100. The contactless respiratory monitoring system 100 may continuously perform data collection 210 from the human subjects using the sensors. The data may include multimodal respiratory data. The multimodal respiratory data may include heart rates, respiration rates, and breath sounds, as well as body temperatures and blood pressures. The contactless respiratory monitoring system may perform pre-processing 220 (e.g., signal processing 310, 410) on the collected sensor data. Pre-processing 220 may include signal processing, representation learning, and data fusion.

[00106] The data collection may be performed in real-time or substantially real-time. The data collection may be performed in regular intervals. For example, the regular intervals may be about 1 second, about 5 seconds, about 10 seconds, about 15 seconds, about 20 seconds, about 30 seconds, about 1 minute, about 2 minutes, about 5 minutes, about 10 minutes, about 20 minutes, about 30 minutes, about 60 minutes, about 90 minutes, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, or about 24 hours, thereby providing real-time or near real-time updates of data collected from the subject.

[00107] The regular intervals may be adjustable by the user or in response to battery consumption requirements. For example, intervals may be extended in order to decrease battery consumption. The data may be localized without leaving the user’s device. The local database may be encrypted to prevent the exposure of sensitive data (e.g., in the event that the user’s phone becomes lost). The local database may require authentication (e.g., by password or biometrics) by the user to grant access to the clinical health data and profiles.

[00108] The system or device may comprise a software application configured to allow a user to pair with, control, and view collected data. For example, the software application may be configured to allow a user to use a computer processor or a mobile device (e.g., a smartphone, a tablet, a laptop, a smart watch, or smart glasses) to pair with the contactless monitoring device (e.g., through a wireless transceiver such as a Bluetooth transceiver) for transmission of data and/or control signals. The software application may comprise a graphical user interface (GUI) to allow the user to view trends, statistics, and/or alerts generated based on their measured, collected, or recorded data (e.g., currently measured data, previously collected or recorded data, or a combination thereof). For example, the GUI may allow the user to view historical or average trends of a set of data over a period of time (e.g., on an hourly basis, on a daily basis, on a weekly basis, or on a monthly basis). The software application may further communicate with a web-based software application, which may be configured to store and analyze the recorded data. For example, the recorded data may be stored in a database (e.g., a computer server or on a cloud network) for real-time or future processing and analysis.

[00109] Signal processing algorithms may increase the signal strengths of data collected by the sensors. The signal processing algorithms used may include adaptive filtering, spectral estimation, and beamforming algorithms. Representation learning may transform sensor data into compressed formats which retain features learned from the data. Data fusion may include concatenating representations of the vital sign data and the multi-mode respiratory data.

[00110] The contactless respiratory monitoring system 100 may perform intelligent learning 230 on the collected data. The intelligent learning 230 may be machine learning analysis (e.g., representation learning 320/420, data fusion 510, detection 520, multilayer perceptron analysis 630A-C). The machine learning analysis may be implemented using a neural network, such as a convolutional neural network, recurrent neural network, a transformer, or combinations thereof. The analysis may be used to detect 250 respiratory events (e.g., hypoxia events) or monitor 240 breathing.

[00111] Health care providers, such as physicians and treating teams of a patient may have access to patient alerts, data (e.g., respiratory data), and/or predictions or assessments generated from such data. Such access may be provided by a web-based dashboard (e.g., a GUI). The web-based dashboard may be configured to display, for example, patient metrics, recent alerts, and/or prediction of health outcomes (e.g., likelihood of adverse respiratory events). Using the web-based dashboard, health care providers may determine clinical decisions or outcomes based at least in part on such displayed alerts, data, and/or predictions or assessments generated from such data.

[00112] For example, a physician may instruct that the patient undergo one or more clinical tests at the hospital or other clinical site, based at least in part on patient metrics or on alerts detecting or predicting an adverse respiratory condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication) in the subject over a period of time. The monitoring system may generate and transmit such alerts to health care providers when a certain pre-determined criterion is met (e.g., a minimum threshold for a likelihood of deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication such as adverse respiratory events).

[00113] Such a minimum threshold may be, for example, at least about a 5% likelihood, at least about a 10% likelihood, at least about a 20% likelihood, at least about a 25% likelihood, at least about a 30% likelihood, at least about a 35% likelihood, at least about a 40% likelihood, at least about a 45% likelihood, at least about a 50% likelihood, at least about a 55% likelihood, at least about a 60% likelihood, at least about a 65% likelihood, at least about a 70% likelihood, at least about a 75% likelihood, at least about an 80% likelihood, at least about a 85% likelihood, at least about a 90% likelihood, at least about a 95% likelihood, at least about a 96% likelihood, at least about a 97% likelihood, at least about a 98% likelihood, or at least about a 99% likelihood.

[00114] As another example, a physician may prescribe a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing to be administered to the patient based at least in part on patient metrics or on alerts detecting or predicting an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication such as adverse respiratory events) in the subject over a period of time. For example, the physician may prescribe an anti-inflammatory therapeutic in response to an indication of inflammation in the patient, or an analgesic therapeutic in response to an indication of pain in the patient. Such a prescription of a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing may be determined without requiring an in-person clinical appointment with the prescribing physician.

[00115] FIG. 3 illustrates an example of a radar processing layer 300, in accordance with an embodiment. The radar processing layer 300 receives input data from a radar sensor, performs signal processing 310 to produce additional inputs for data fusion, and creates a radar representation for fusion with a thermal representation, an audio representation, or both.

[00116] As shown in FIG. 3, a signal processing layer may perform signal processing 310 in the following sequence: clutter removal, beamforming, phase unwrapping, and adaptive filtering. As used herein, a layer generally refers to a set of related processes executing on the processor. For example, a signal processing layer may include various filtering methods, while a machine learning layer may include several machine learning algorithms executed in sequence. Following adaptive filtering, the system 100 may estimate a heart rate and a respiration rate from the processed sensor data by performing bandpass filtering and spectrum estimation. The adaptively filtered signal may be further processed by the representation learning 320 block for radar data to create a radar latent representation 330 of the radar data. The radar processing layer 300 may perform phase unwrapping to overcome phase discontinuities, enabling the system to perform additional signal processing operations (e.g., bandpass filtering).

[00117] In some embodiments, processing data generated by radar comprises performing one or more signal processing operations. Processing data generated by radar may involve background modeling and removal. As shown in FIG. 3, background clutter may be mostly static and can be detected and removed using, for example, a moving average. The moving average may be produced by averaging signal strengths over successive time periods. Clutter removal may remove a direct current (DC) offset from the signal. Multiple radar antennas in a radar sensor may be arranged in a configuration that enables beamforming, in which radar signals transmitted from individual antennas constructively interfere to enhance the generated radar signal from the radar sensor configuration. The system 100 may remove random body motions using adaptive filters, such as a Kalman filter. The system 100 may use bandpass filtering to separate heartbeat and respiration components from the radar sensor data. The system 100 may perform time frequency analysis on the sensor data using a wavelet transform and a short-time Fourier transform to produce a spectrogram. Spectrum estimation enables the system 100 to determine bodily functions, such as heart rate and respiration rate, by forming a representation of the power spectral density of the reflected radar signals and extracting feature information from this alternate representation of the signal.
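
A minimal Python sketch of such a radar chain is given below, assuming the raw input is a one-dimensional complex baseband signal sampled at fs Hz; the moving-average window, filter band, and spectral parameters are illustrative assumptions rather than values specified by the present disclosure.

```python
# Sketch of the radar chain: phase unwrapping, moving-average clutter/DC
# removal, respiration-band bandpass filtering, and spectrum estimation.
import numpy as np
from scipy.signal import butter, filtfilt, welch


def respiration_rate_from_radar(iq: np.ndarray, fs: float = 20.0) -> float:
    phase = np.unwrap(np.angle(iq))                    # phase unwrapping
    clutter = np.convolve(phase, np.ones(64) / 64, mode="same")
    motion = phase - clutter                           # remove static clutter / DC offset
    b, a = butter(4, [0.1, 0.7], btype="band", fs=fs)  # ~6-42 breaths per minute
    resp = filtfilt(b, a, motion)                      # isolate respiration component
    freqs, psd = welch(resp, fs=fs, nperseg=min(512, len(resp)))
    return 60.0 * freqs[np.argmax(psd)]                # dominant frequency, breaths/min
```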

[00118] Machine learning algorithms may process the spectrogram to predict the heart rate and respiratory rate from the radar sensor data. In some embodiments, the machine learning algorithms include any combination of a neural network, a linear regression, a support vector machine, and any other machine learning algorithm(s).

[00119] The representation learning 320 for radar data may use machine learning to create a latent radar representation 330, reconfiguring the processed sensor data into a form that preserves the unique features of the data and enables it to be fused with either the thermal data or the audio data, or both. Representation learning may include removing information about extraneous attributes of the data that are not features analyzed by the machine learning algorithms (compression).
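
One common way to realize such compression is an autoencoder trained on reconstruction error; the following PyTorch sketch is an illustrative assumption (layer widths and latent size are arbitrary), not an architecture specified by the present disclosure.

```python
# Illustrative autoencoder for radar representation learning; the encoder
# output serves as the latent radar representation.
import torch
import torch.nn as nn


class RadarAutoencoder(nn.Module):
    def __init__(self, n_in: int = 256, n_latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)     # compressed latent representation
        return self.decoder(z)  # reconstruction, used only for training


# Training against reconstruction error (e.g., nn.MSELoss()(model(x), x))
# pressures the encoder to keep informative features and discard
# extraneous attributes of the data.
```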

[00120] FIG. 4 illustrates an example of an audio processing layer 400. The audio processing layer 400 receives input data from one or more audio sensors (e.g., among the sensors 120), performs signal processing 410 to produce additional inputs for data fusion, and creates a latent audio representation.

[00121] The audio processing layer 400 may perform signal processing 410 on the audio signal received through the microphone. The audio signal may be a sound waveform. The system 100 may perform resampling (to reduce the processing cost of computation), bandpass filtering, and a mel-spectrum transform to process the signal. The mel-spectrum transform may make auditory features more prominent, as the mel-spectrum transform closely approximates the response of the human auditory system. Bandpass filtering may be performed to better isolate sounds associated with sleep states (e.g., coughing, wheezing, and snoring). The signal-processed audio data may be analyzed by a representation learning 420 for audio data algorithm. The latent audio representation 430 may be processed to determine cough amplitude and frequency using a cough detection algorithm, and snoring amplitude and duration may be predicted using a snoring detection algorithm.
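
The resample/bandpass/mel sequence might be sketched as follows using the librosa and SciPy libraries; the sample rates, band edges, and mel parameters are illustrative assumptions.

```python
# Sketch of the audio chain: resampling, breath-band bandpass filtering,
# and a mel-spectrum (log-mel spectrogram) transform.
import librosa
import numpy as np
from scipy.signal import butter, filtfilt


def audio_features(waveform: np.ndarray, sr: int = 48000) -> np.ndarray:
    y = librosa.resample(waveform, orig_sr=sr, target_sr=16000)  # cut compute cost
    b, a = butter(4, [100, 2000], btype="band", fs=16000)  # assumed breath-sound band
    y = filtfilt(b, a, y)
    mel = librosa.feature.melspectrogram(y=y, sr=16000, n_mels=64)
    return librosa.power_to_db(mel)  # log-mel features for representation learning
```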

[00122] The representation learning 420 for audio data stage may use machine learning to create a latent space representation 430, reconfiguring the processed sensor data into a form that preserves the unique features of the data and enables it to be fused with either the radar data or the thermal data, or both. Following representation learning, the system may perform breathing sound detection, predicting amplitudes (sound volumes of breaths) and frequencies (paces of breaths).

[00123] FIG. 5 illustrates an example of a sensor fusion layer 500. The sensor fusion layer 500 combines the audio and radar representations into fused data. Then, the sensor fusion layer 500 uses machine learning to detect the presence of hypoxia in neonates.

[00124] The data fusion layer 510 processes a combination of representations from the sensors 120. The data fusion layer 510 may merge the representations together (for example, by concatenation, pooling, computing a product, or another method), train classifiers on the merged representations, and produce predictions using the trained classifiers. The fusion layer may include a classifier configured to receive the audio latent representation 430 and the radar latent representation 330. In some embodiments, outputs produced by the sensors are processed in real time in order to provide real-time respiratory event alerts. In other embodiments, the contactless respiratory monitoring system 100 is configured to use a combination of real-time data and historical data generated by the sensors to predict the sleep states. Additionally, the data fusion layer 510 may incorporate and analyze physiology data 540, which may include vital sign measurements collected by the sensors as well as intermediate predictions (e.g., motion, position, and audio event data). The physiology data 540 may also be placed in a representation before being incorporated in the data fusion layer 510.
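
For two equal-length latent vectors, the merge options named above reduce to a few NumPy one-liners, sketched below with random stand-ins for the latent representations.

```python
# The merge options named above, applied to stand-in latent vectors.
import numpy as np

radar_z = np.random.rand(32)  # stand-in for the radar latent representation 330
audio_z = np.random.rand(32)  # stand-in for the audio latent representation 430

fused_concat = np.concatenate([radar_z, audio_z])  # concatenation
fused_pool = np.maximum(radar_z, audio_z)          # element-wise (max) pooling
fused_product = radar_z * audio_z                  # element-wise product
```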

[00125] Using a sensor fusion approach may enable a greater confidence level in detecting sleep states associated with a user. Using a single sensor may increase a probability associated with incorrect predictions, especially when there is an occlusion, a blind spot, a long range or multiple people in a scene as observed by the sensor. Using multiple sensors in combination and combining data processing results from processing discrete sets of quantitative data generated by the various sensors may produce a more accurate prediction, as different sensing modalities may complement each other in their capabilities.

[00126] The system 100 may perform the data fusion and detection processes on one or more computing devices. A computing unit (e.g., a single board computer) may perform the representation learning, data fusion, and detection tasks. The detection result (e.g., the presence of hypoxia) may be stored in a database (e.g., on a server, such as a cloud-based server) and provided to additional computing units or to health care providers via a user interface or alerting system. The alerting system may provide an audible alarm or a visible alarm (e.g., using lights such as light-emitting diodes).

[00127] The detection layer 520 uses machine learning to predict the presence of a respiratory condition (e.g., hypoxia or sleep apnea). The detection layer may predict the presence of a respiratory condition by predicting the presence of one or more early signs of the respiratory condition. Such early signs may include seizures, difficulty feeding, breathing problems, hypotonia, organ damage, organ failure, acidemia, and an abnormal response to light. The classifier may be a binary classifier. The classification algorithm may be trained by analyzing ground truth data from sleep measurement devices (e.g., polysomnography (PSG) devices) collecting data from a control group (e.g., neonates without hypoxia) and an experimental group (e.g., neonates with hypoxia). Classification algorithms may include decision trees, support vector machines, neural networks (including convolutional and recurrent neural networks (CNNs and RNNs), such as long short-term memory (LSTM) networks), logistic regressions, or a combination thereof.

[00128] FIG. 6 illustrates an example of a block diagram 600 for detection of a respiratory event using data fusion. The system concatenates the radar latent representation and the audio latent representation into concatenation 610A. The system 100 may implement multilayer perceptron (MLP) algorithms 630A-C or other neural network algorithms (e.g., convolutional neural networks, transformers, recurrent neural networks, or combinations of neural networks) on the concatenation and on physiological data 620. Then, the system may concatenate the outputs of the MLPs into concatenation 610B. Implementing fusion in this manner may provide increased dimensionality, which may enhance the detector’s predictive ability. The system may process the concatenation using one or more additional MLP layers 630C before making a prediction of a respiratory condition 640 (e.g., hypoxia).
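
One possible PyTorch reading of the FIG. 6 topology is sketched below: one MLP over the concatenated latents, one over the physiology features, their outputs concatenated and passed through a final MLP head; all layer widths are assumptions.

```python
# Hedged sketch of the FIG. 6 fusion topology.
import torch
import torch.nn as nn


def mlp(n_in: int, n_out: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))


class FusionDetector(nn.Module):
    def __init__(self, n_radar: int = 32, n_audio: int = 32, n_physio: int = 8):
        super().__init__()
        self.latent_mlp = mlp(n_radar + n_audio, 32)  # MLP 630A over concatenation 610A
        self.physio_mlp = mlp(n_physio, 16)           # MLP 630B over physiological data 620
        self.head = mlp(32 + 16, 1)                   # MLP 630C over concatenation 610B

    def forward(self, radar_z, audio_z, physio):
        h1 = self.latent_mlp(torch.cat([radar_z, audio_z], dim=-1))
        h2 = self.physio_mlp(physio)
        logit = self.head(torch.cat([h1, h2], dim=-1))
        return torch.sigmoid(logit)  # probability of the respiratory condition 640
```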

[00129] Machine learning classifiers

[00130] The contactless monitoring system may utilize or access external capabilities of artificial intelligence techniques to develop signatures for various respiratory states. The web-based software may further use these signatures to accurately predict adverse respiratory states (e.g., minutes, hours, or days earlier than with traditional clinical care). Using such a predictive capability, health care providers (e.g., physicians) may be able to make informed, accurate risk-based decisions, thereby improving quality of care and monitoring provided to patients.

[00131] The contactless monitoring system may analyze acquired health data from a subject (patient) to generate a likelihood of the subject having an adverse respiratory condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication). For example, the system may apply a trained (e.g., prediction) algorithm to the acquired health data to generate the likelihood of the subject having an adverse respiratory condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication). The trained algorithm may comprise an artificial intelligence based classifier, such as a machine learning based classifier, configured to process the acquired health data to generate the likelihood of the subject having the adverse respiratory condition. The machine learning classifier may be trained using clinical datasets from one or more cohorts of patients, e.g., using clinical health data of the patients (e.g., respiratory data) as inputs and known clinical health outcomes (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication) of the patients as outputs to the machine learning classifier.

[00132] The machine learning classifier may comprise one or more machine learning algorithms. Examples of machine learning algorithms may include a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network (such as a deep neural network (DNN), a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) recurrent neural network (RNN), or a gated recurrent unit (GRU) recurrent neural network (RNN)), deep learning, or other supervised learning algorithm or unsupervised learning algorithm for classification and regression. The machine learning classifier may be trained using one or more training datasets corresponding to patient data.
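
Such supervised training can be sketched in a few lines with scikit-learn; the random feature matrix and labels below are hypothetical stand-ins for cohort data, and the random forest is only one of the listed classifier types.

```python
# Illustrative supervised training: clinical features in, known outcomes out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 12))         # stand-in features (e.g., heart rate, SpO2, ...)
y = rng.integers(0, 2, size=500)  # stand-in known clinical outcomes (labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = clf.predict_proba(X[:5])[:, 1]  # likelihood of the adverse respiratory condition
```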

[00133] Training datasets may be generated from, for example, one or more cohorts of patients having common clinical characteristics (features) and clinical outcomes (labels). Training datasets may comprise a set of features and labels corresponding to the features. Features may correspond to algorithm inputs comprising patient demographic information derived from electronic medical records (EMR) and medical observations. Features may comprise clinical characteristics such as, for example, certain ranges or categories of multi-mode cardiopulmonary data or health data measurements, such as heart rate, heart rate variability, blood pressure (e.g., systolic and diastolic), respiratory rate, blood oxygen concentration (SpO2), carbon dioxide concentration in respiratory gases, a hormone level, sweat analysis, blood glucose, body temperature, impedance (e.g., bioimpedance), conductivity, capacitance, resistivity, electromyography, galvanic skin response, neurological signals (e.g., electroencephalography), immunology markers, and other physiological measurements. Features may comprise patient information such as patient age, patient medical history, other medical conditions, current or past medications, and time since the last observation. For example, a set of features collected from a given patient at a given time point may collectively serve as a signature, which may be indicative of a health state or status of the patient at the given time point.

[00134] For example, ranges of multi-mode cardiopulmonary data and other health measurements may be expressed as a plurality of disjoint continuous ranges of continuous measurement values, and categories of vital sign and other health measurements may be expressed as a plurality of disjoint sets of measurement values (e.g., {“high”, “low”}, {“high”, “normal”}, {“low”, “normal”}, {“high”, “borderline high”, “normal”, “low”}, etc.). Clinical characteristics may also include clinical labels indicating the patient’s health history, such as a diagnosis of a disease or disorder, a previous administration of a clinical treatment (e.g., a drug, a surgical treatment, chemotherapy, radiotherapy, immunotherapy, etc.), behavioral factors, or other health status (e.g., hypertension or high blood pressure, hyperglycemia or high blood glucose, hypercholesterolemia or high blood cholesterol, history of allergic reaction or other adverse reaction, etc.).

[00135] Labels may comprise clinical outcomes such as, for example, a presence, absence, diagnosis, or prognosis of an adverse respiratory condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication) in the subject (e.g., patient). Clinical outcomes may include a temporal characteristic associated with the presence, absence, diagnosis, or prognosis of the adverse respiratory condition in the patient. For example, temporal characteristics may be indicative of the patient having had an occurrence of the adverse respiratory condition within a certain period of time after a previous clinical outcome (e.g., being discharged from the hospital, undergoing an organ transplantation or other surgical operation, undergoing a clinical procedure, etc.). Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.

[00136] Input features may be structured by aggregating the data into bins or alternatively using a one-hot encoding with the time since the last observation included. Inputs may also include feature values or vectors derived from the previously mentioned inputs, such as cross-correlations calculated between separate cardiopulmonary or other measurements over a fixed period of time, and the discrete derivative or the finite difference between successive measurements. Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.

[00137] Training records may be constructed from sequences of observations. Such sequences may comprise a fixed length for ease of data processing. For example, sequences may be zero-padded or selected as independent subsets of a single patient’s records.
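
The derived inputs mentioned above might be computed as follows with pandas and NumPy; the column names and values are invented for illustration.

```python
# Sketch of derived input features: one-hot encoding, finite differences,
# and a (zero-lag, Pearson) cross-correlation between two measurements.
import numpy as np
import pandas as pd

df = pd.DataFrame({"heart_rate": [72, 75, 74, 80],
                   "resp_rate": [14, 15, 15, 18],
                   "bp_category": ["normal", "high", "normal", "high"]})

one_hot = pd.get_dummies(df["bp_category"], prefix="bp")  # one-hot encoding
hr_diff = df["heart_rate"].diff()                         # finite difference
xcorr = np.corrcoef(df["heart_rate"], df["resp_rate"])[0, 1]
```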

[00138] The machine learning classifier algorithm may process the input features to generate output values comprising one or more classifications, one or more predictions, or a combination thereof. For example, such classifications or predictions may include a binary classification of a healthy/normal respiratory state or adverse respiratory state, a classification between a group of categorical labels (e.g., ‘no adverse respiratory condition’, ‘apparent adverse respiratory condition’, and ‘likely adverse respiratory condition’), a likelihood (e.g., relative likelihood or probability) of developing a particular adverse respiratory condition, a score indicative of a presence of adverse respiratory condition, a score indicative of a level of systemic inflammation experienced by the patient, a ‘risk factor’ for the likelihood of mortality of the patient, a prediction of the time at which the patient is expected to have developed the adverse respiratory condition, and a confidence interval for any numeric predictions. Various machine learning techniques may be cascaded such that the output of a machine learning technique may also be used as input features to subsequent layers or subsections of the machine learning classifier.

[00139] In order to train the machine learning classifier model (e.g., by determining weights and correlations of the model) to generate real-time classifications or predictions, the model can be trained using datasets. Such datasets may be sufficiently large to generate statistically significant classifications or predictions. For example, datasets may comprise: databases of de-identified data including vital sign and other measurements, cardiopulmonary and other measurements from a hospital or other clinical setting, cardiopulmonary and other measurements collected using an FDA-approved monitoring device, fitness tracker, or other monitoring device, and vital sign and other measurements collected using contactless monitoring systems and devices of the present disclosure.

[00140] Datasets may be split into subsets (e.g., discrete or overlapping), such as a training dataset, a development dataset, and a test dataset. For example, a dataset may be split into a training dataset comprising 80% of the dataset, a development dataset comprising 10% of the dataset, and a test dataset comprising 10% of the dataset. The training dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. The development dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. The test dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. Training sets (e.g., training datasets) may be selected by random sampling of a set of data corresponding to one or more patient cohorts to ensure independence of sampling. Alternatively, training sets (e.g., training datasets) may be selected by proportionate sampling of a set of data corresponding to one or more patient cohorts to ensure proportional representation of the cohorts.
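
An 80/10/10 split can be realized with two successive random splits, as in the following scikit-learn sketch; the record list and random seed are stand-ins.

```python
# Two successive random splits yielding 80% train / 10% dev / 10% test.
from sklearn.model_selection import train_test_split

records = list(range(1000))  # stand-in for patient records
train, holdout = train_test_split(records, test_size=0.2, random_state=0)
dev, test = train_test_split(holdout, test_size=0.5, random_state=0)
# len(train) == 800, len(dev) == 100, len(test) == 100
```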

[00141] To improve the accuracy of model predictions and reduce overfitting of the model, the datasets may be augmented to increase the number of samples within the training set. For example, data augmentation may comprise rearranging the order of observations in a training record. To accommodate datasets having missing observations, methods to impute missing data may be used, such as forward-filling, back-filling, linear interpolation, and multi-task Gaussian processes. Datasets may be filtered to remove confounding factors. For example, within a database, a subset of patients may be excluded.
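
Three of the four named imputation strategies map directly onto pandas operations, as sketched below on an invented series with gaps (multi-task Gaussian processes require a dedicated library and are omitted).

```python
# Imputation of missing observations: forward-fill, back-fill, interpolate.
import numpy as np
import pandas as pd

spo2 = pd.Series([98.0, np.nan, np.nan, 94.0, np.nan, 96.0])  # stand-in readings
forward = spo2.ffill()       # forward-filling
backward = spo2.bfill()      # back-filling
linear = spo2.interpolate()  # linear interpolation
```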

[00142] The machine learning classifier may comprise one or more neural networks, such as a neural network, a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a transformer, or a deep RNN. The recurrent neural network may comprise units which can be long short-term memory (LSTM) units or gated recurrent units (GRU). For example, the machine learning classifier may comprise an algorithm architecture comprising a neural network with a set of input features such as vital sign and other measurements, patient medical history, and/or patient demographics. Neural network techniques, such as dropout or regularization, may be used during training of the machine learning classifier to prevent overfitting.

[00143] When the machine learning classifier generates a classification or a prediction of an adverse respiratory condition, an alert or alarm may be generated and transmitted to a health care provider, such as a physician, nurse, or other member of the patient’s treating team within a hospital. Alerts may be transmitted via an automated phone call, a short message service (SMS) or multimedia message service (MMS) message, an e-mail, or an alert within a dashboard. The alert may comprise output information such as a prediction of an adverse respiratory condition, a likelihood of the predicted adverse respiratory condition, a time until an expected onset of the adverse respiratory condition, a confidence interval of the likelihood or time, or a recommended course of treatment for the adverse respiratory condition. The neural network may comprise a plurality of sub-networks, each of which is configured to generate a classification or prediction of a different type of output information (e.g., which may be combined to form an overall output of the neural network).

[00144] To validate the performance of the machine learning classifier model, different performance metrics may be generated. For example, an area under the receiver-operating curve (AUROC) may be used to determine the diagnostic capability of the machine learning classifier. For example, the machine learning classifier may use classification thresholds which are adjustable, such that specificity and sensitivity are tunable, and the receiver-operating curve (ROC) can be used to identify the different operating points corresponding to different values of specificity and sensitivity.
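
Computing the AUROC and reading operating points off the ROC curve can be sketched with scikit-learn as follows; the labels and scores are invented validation outputs.

```python
# AUROC and ROC operating points for a hypothetical validation set.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                     # stand-in ground truth
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90])  # stand-in classifier scores

auroc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Each threshold is an operating point trading specificity (1 - fpr)
# against sensitivity (tpr).
```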

[00145] In some cases, such as when datasets are not sufficiently large, cross-validation may be performed to assess the robustness of a machine learning classifier model across different training and testing datasets.

[00146] In some cases, while a machine learning classifier model may be trained using a dataset of records which are a subset of a single patient’s observations, the performance of the classifier model’s discrimination ability (e.g., as assessed using an AUROC) is calculated using the entire record for a patient. To calculate performance metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), AUPRC, AUROC, or similar, the following definitions may be used. A “false positive” may refer to an outcome in which an alert or alarm has been incorrectly or prematurely activated (e.g., before the actual onset of, or without any onset of, an adverse respiratory condition). A “true positive” may refer to an outcome in which an alert or alarm has been activated at the correct time (within a predetermined buffer or tolerance), and the patient’s record indicates the adverse respiratory condition. A “false negative” may refer to an outcome in which no alert or alarm has been activated, but the patient’s record indicates the adverse respiratory condition. A “true negative” may refer to an outcome in which no alert or alarm has been activated, and the patient’s record does not indicate the adverse respiratory condition.
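
Given the four outcome counts defined above, the standard performance metrics follow directly, as in this small sketch (the counts are illustrative).

```python
# Diagnostic metrics from true/false positive and negative counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }


print(diagnostic_metrics(tp=80, fp=10, tn=95, fn=15))
```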

[00147] The machine learning classifier may be trained until certain pre-determined conditions for accuracy or performance are satisfied, such as having minimum desired values corresponding to diagnostic accuracy measures. For example, the diagnostic accuracy measure may correspond to prediction of a likelihood of occurrence of an adverse respiratory condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a respiratory complication) in the subject. As another example, the diagnostic accuracy measure may correspond to prediction of a likelihood of deterioration or recurrence of an adverse respiratory condition for which the subject has previously been treated. For example, a diagnostic accuracy measure may correspond to prediction of likelihood of recurrence of a breathing difficulty in a subject who has previously been treated for the breathing difficulty. Examples of diagnostic accuracy measures may include sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, area under the precision-recall curve (AUPRC), and area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) corresponding to the diagnostic accuracy of detecting or predicting an adverse respiratory condition.

[00148] For example, such a pre-determined condition may be that the sensitivity of predicting the adverse respiratory condition comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00149] As another example, such a pre-determined condition may be that the specificity of predicting the adverse respiratory condition comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00150] As another example, such a pre-determined condition may be that the positive predictive value (PPV) of predicting the adverse respiratory condition comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00151] As another example, such a pre-determined condition may be that the negative predictive value (NPV) of predicting the adverse respiratory condition comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00152] As another example, such a pre-determined condition may be that the area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of predicting the adverse respiratory condition comprises a value of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.

[00153] As another example, such a pre-determined condition may be that the area under the precision-recall curve (AUPRC) of predicting the adverse respiratory condition comprises a value of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
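
One way such pre-determined stopping conditions might be realized in practice is an incremental training loop that halts once validation metrics clear the chosen thresholds. The sketch below is only an illustration of that mechanism; the classifier, data, and threshold values are placeholders, not the disclosed implementation:

```python
# Illustrative sketch: train incrementally until pre-determined
# sensitivity/specificity conditions are met (all values hypothetical).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8)); y_train = rng.integers(0, 2, 500)
X_val = rng.normal(size=(200, 8));   y_val = rng.integers(0, 2, 200)

MIN_SENSITIVITY, MIN_SPECIFICITY = 0.80, 0.80  # placeholder thresholds

clf = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(100):
    clf.partial_fit(X_train, y_train, classes=np.array([0, 1]))
    pred = clf.predict(X_val)
    tp = np.sum((pred == 1) & (y_val == 1)); fn = np.sum((pred == 0) & (y_val == 1))
    tn = np.sum((pred == 0) & (y_val == 0)); fp = np.sum((pred == 1) & (y_val == 0))
    sens = tp / max(tp + fn, 1)
    spec = tn / max(tn + fp, 1)
    # Stop as soon as both pre-determined conditions are satisfied.
    if sens >= MIN_SENSITIVITY and spec >= MIN_SPECIFICITY:
        print(f"conditions met at epoch {epoch}: sens={sens:.2f} spec={spec:.2f}")
        break
```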

[00154] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00155] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00156] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with a positive predictive value (PPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00157] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with a negative predictive value (NPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.

[00158] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with an area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.

[00159] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition with an area under the precision-recall curve (AUPRC) of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.

[00160] In some embodiments, the trained classifier may be trained or configured to predict the adverse respiratory condition over a period of time before the actual occurrence or recurrence of the adverse respiratory condition (e.g., a period of time including a window beginning about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 7 hours, about 8 hours, about 9 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 36 hours, about 48 hours, about 72 hours, about 96 hours, about 120 hours, about 6 days, or about 7 days prior to the onset of the adverse respiratory condition, and ending at the onset of the adverse respiratory condition).
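
To train for such early prediction, time-stamped samples are typically labeled positive only within the chosen look-back window before onset. A minimal sketch of such window labeling, assuming pandas and hypothetical timestamps and window length:

```python
# Illustrative sketch: label time-stamped samples positive inside a
# fixed window before event onset (timestamps and window are placeholders).
import pandas as pd

samples = pd.DataFrame({
    "time": pd.date_range("2021-01-01 00:00", periods=12, freq="h"),
})
onset = pd.Timestamp("2021-01-01 10:00")  # hypothetical event onset
window = pd.Timedelta(hours=4)            # e.g., predict 4 hours ahead

# Positive label for samples in [onset - window, onset); 0 elsewhere.
samples["label"] = ((samples["time"] >= onset - window)
                    & (samples["time"] < onset)).astype(int)
print(samples)
```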

[00161] Computer systems

[00162] The present disclosure provides computer systems that are programmed to implement methods of the disclosure. FIG. 7 shows a computer system 701 that is programmed or otherwise configured to perform machine learning analysis on data collected from sensors. The computer system 701 can regulate various aspects of sensor data analysis of the present disclosure, such as receiving multi-mode cardiopulmonary data of a subject from a plurality of sensors, processing the multi-mode cardiopulmonary data using a trained algorithm to determine a breathing state of the subject, generating an output indicative of a respiratory event of the subject based at least in part on the breathing state, and predicting a respiratory event. The computer system 701 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device. The electronic device can be a mobile electronic device.

[00163] The computer system 701 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 705, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The computer system 701 also includes memory or memory location 710 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 715 (e.g., hard disk), communication interface 720 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 725, such as cache, other memory, data storage and/or electronic display adapters. The memory 710, storage unit 715, interface 720 and peripheral devices 725 are in communication with the CPU 705 through a communication bus (solid lines), such as a motherboard. The storage unit 715 can be a data storage unit (or data repository) for storing data. The computer system 701 can be operatively coupled to a computer network (“network”) 730 with the aid of the communication interface 720. The network 730 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet. The network 730 in some cases is a telecommunication and/or data network. The network 730 can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network 730, in some cases with the aid of the computer system 701, can implement a peer-to-peer network, which may enable devices coupled to the computer system 701 to behave as a client or a server.

[00164] The CPU 705 can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory 710. The instructions can be directed to the CPU 705, which can subsequently program or otherwise configure the CPU 705 to implement methods of the present disclosure. Examples of operations performed by the CPU 705 can include fetch, decode, execute, and writeback.

[00165] The CPU 705 can be part of a circuit, such as an integrated circuit. One or more other components of the system 701 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC).

[00166] The storage unit 715 can store files, such as drivers, libraries and saved programs. The storage unit 715 can store user data, e.g., user preferences and user programs. The computer system 701 in some cases can include one or more additional data storage units that are external to the computer system 701, such as located on a remote server that is in communication with the computer system 701 through an intranet or the Internet.

[00167] The computer system 701 can communicate with one or more remote computer systems through the network 730. For instance, the computer system 701 can communicate with a remote computer system of a user (e.g., a health care provider). Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PC’s (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants. The user can access the computer system 701 via the network 730.

[00168] Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 701, such as, for example, on the memory 710 or electronic storage unit 715. The machine executable or machine readable code can be provided in the form of software. During use, the code can be executed by the processor 705. In some cases, the code can be retrieved from the storage unit 715 and stored on the memory 710 for ready access by the processor 705. In some situations, the electronic storage unit 715 can be precluded, and machine-executable instructions are stored on memory 710.

[00169] The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime. The code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.

[00170] Aspects of the systems and methods provided herein, such as the computer system 701, can be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.

[00171] Hence, a machine readable medium, such as computer-executable code, may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.

[00172] The computer system 701 can include or be in communication with an electronic display 735 that comprises a user interface (UI) 740 for providing, for example, respiratory event alerts, multi-mode cardiopulmonary data, and breathing states of a subject. Examples of UIs include, without limitation, a graphical user interface (GUI) and web-based user interface.

[00173] Methods and systems of the present disclosure can be implemented by way of one or more algorithms. An algorithm can be implemented by way of software upon execution by the central processing unit 705. The algorithm can, for example, receive multi-mode cardiopulmonary data of a subject from a plurality of sensors, process the multi-mode cardiopulmonary data using a trained algorithm to determine a breathing state of the subject, generate an output indicative of a respiratory event of the subject based at least in part on the breathing state, and predict a respiratory event.
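
As a purely illustrative sketch of how such an algorithm might be wired together in software (the model, feature shapes, and alert threshold below are hypothetical placeholders, not the disclosed implementation):

```python
# Illustrative end-to-end sketch: sensor features -> trained model ->
# breathing-state estimate -> respiratory-event output (all names and
# threshold values are hypothetical placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a model trained offline on multi-mode cardiopulmonary data.
rng = np.random.default_rng(0)
model = RandomForestClassifier(random_state=0).fit(
    rng.normal(size=(300, 4)), rng.integers(0, 2, 300))

ALERT_THRESHOLD = 0.8  # placeholder decision threshold

def monitor_step(features: np.ndarray) -> dict:
    """One monitoring step: score fused sensor features, emit event output."""
    risk = model.predict_proba(features.reshape(1, -1))[0, 1]
    return {
        "breathing_state": "abnormal" if risk >= 0.5 else "normal",
        "respiratory_event_alert": bool(risk >= ALERT_THRESHOLD),
        "risk_score": float(risk),
    }

print(monitor_step(rng.normal(size=4)))
```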

[00174] While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.