


Title:
LEARNING OF TEMPORAL DYNAMICS FOR FEATURE EXTRACTION AND COMPRESSION OF PHYSIOLOGICAL WAVEFORMS
Document Type and Number:
WIPO Patent Application WO/2020/148163
Kind Code:
A1
Abstract:
A method for determining the parameters of a sparse coding method wherein the parameters include a set of filters, the method including: defining an objective function that produces the set of filters and a set of associated weights based upon a set of input signals, a set of white-noise processes, and a regularization parameter; determining the set of associated weights and set of filters that produce an optimal solution of the objective function by iterating the following steps until convergence: optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals; and optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals, wherein the input signal is reconstructed based upon the set of associated weights, the set of filters, and the set of white noise processes.

Inventors:
CONROY BRYAN (NL)
RAHMAN ASIF (NL)
Application Number:
PCT/EP2020/050473
Publication Date:
July 23, 2020
Filing Date:
January 10, 2020
Assignee:
KONINKLIJKE PHILIPS NV (NL)
International Classes:
H03M7/30; A61B5/00; A61B5/04; G16H50/20
Foreign References:
US20160242690A1 (2016-08-25)
EP3132742A1 (2017-02-22)
US20160335224A1 (2016-11-17)
KR20160098960A (2016-08-19)
Other References:
BRIAN BOOTH: "Sparse Coding: An Overview", 12 November 2013 (2013-11-12), XP055678733, Retrieved from the Internet [retrieved on 20200323]
Attorney, Agent or Firm:
PHILIPS INTELLECTUAL PROPERTY & STANDARDS (NL)
Claims:
What is claimed is:

1. A method for determining the parameters of a sparse coding method wherein the parameters include a set of filters, the method comprising:

defining an objective function that produces the set of filters and a set of associated weights based upon a set of input signals, a set of white-noise processes, and a regularization parameter;

determining the set of associated weights and set of filters that produce an optimal solution of the objective function by iterating the following steps until convergence:

optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals; and

optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals,

wherein the input signal is reconstructed based upon the set of associated weights, the set of filters, and the set of white noise processes.

2. The method of claim 1, wherein the set of filters are an inverse fast Fourier transform (IFFT) of the power spectral density of a stationary process.

3. The method of claim 1, wherein optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals includes solving a set of decoupled sparse regression problems using a convex optimization algorithm.

4. The method of claim 1, wherein optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals includes using a cyclical block-coordinate descent algorithm.

5. The method of claim 1, further comprising computing a specific set of weights for a specific input by solving a sparse regression problem based upon the specific input, the optimized set of filters, and the set of white noise processes.

6. A method for determining the parameters of a sparse coding method wherein the parameters include a set of filters s_1, s_2, ..., s_K, the method comprising:

defining an objective function that produces the set of filters s_1, s_2, ..., s_K and a set of associated weights a_1, a_2, ..., a_N based upon a set of input signals X_i, white-noise processes w_ik, and a regularization parameter λ;

determining the set of associated weights a_1, a_2, ..., a_N and set of filters s_1, s_2, ..., s_K that produce an optimal solution of the objective function by iterating the following steps until convergence:

optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i; and

optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals X_i,

wherein the input signal is reconstructed as:

X_i = Σ_{k=1}^{K} a_ik (s_k * w_ik)

where * denotes convolution.

7. The method of claim 6, wherein the set of filters s_1, s_2, ..., s_K are s_k = IFFT(ŝ_k), where IFFT is the inverse fast Fourier transform and ŝ_k is the power spectral density of a stationary process.

8. The method of claim 6, wherein optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i includes solving a set of decoupled sparse regression problems using a convex optimization algorithm.

9. The method of claim 6, wherein optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals includes using a cyclical block-coordinate descent algorithm.

10. The method of claim 9, wherein optimizing the set of filters s_1, s_2, ..., s_K includes optimizing the function:

min_{s_1, ..., s_K} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2

11. The method of claim 10, further comprising iteratively minimizing for each of the K filters s_1, s_2, ..., s_K until convergence the function:

min_{s_j} Σ_{i=1}^{N} || X̃_ij - a_ij (s_j * w_ij) ||_2^2

where

X̃_ij = X_i - Σ_{k≠j} a_ik (s_k * w_ik)

is the current approximation residual for the ith example when the reconstruction excludes the jth filter.

12. The method of claim 11, further comprising solving the function for s_j by marginalizing out the white-noise processes w_ij by equating each to the Fourier transform of the phase response of X̃_ij and then updating the filter s_j by solving a quadratic optimization problem.

13. The method of claim 6, wherein the objective function is:

min_{s_1, ..., s_K, a_1, ..., a_N} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2 + λ Σ_{i=1}^{N} || a_i ||_1

14. The method of claim 6, further comprising computing a specific set of weights a for a specific input X by solving a sparse regression problem defined as:

min_a || X - Σ_{k=1}^{K} a_k (s_k * w_k) ||_2^2 + λ || a ||_1

15. A non-transitory machine-readable storage medium encoded with instructions for determining the parameters of a sparse coding method wherein the parameters include a set of filters, comprising:

instructions for defining an objective function that produces the set of filters and a set of associated weights based upon a set of input signals, a set of white-noise processes, and a regularization parameter;

instructions for determining the set of associated weights and set of filters that produce an optimal solution of the objective function by iterating the following steps until convergence:

instructions for optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals; and

instructions for optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals,

wherein the input signal is reconstructed based upon the set of associated weights, the set of filters, and the set of white noise processes.

16. The non-transitory machine-readable storage medium of claim 15, wherein the set of filters are an inverse fast Fourier transform (IFFT) of the power spectral density of a stationary process.

17. The non-transitory machine-readable storage medium of claim 15, wherein instructions for optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals include instructions for solving a set of decoupled sparse regression problems using a convex optimization algorithm.

18. The non-transitory machine-readable storage medium of claim 15, wherein instructions for optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals include using a cyclical block-coordinate descent algorithm.

19. The non-transitory machine-readable storage medium of claim 15, further comprising instructions for computing a specific set of weights for a specific input by solving a sparse regression problem based upon the specific input, the optimized set of filters, and the set of white noise processes.

20. A non-transitory machine-readable storage medium encoded with instructions for determining the parameters of a sparse coding method wherein the parameters include a set of filters s_1, s_2, ..., s_K, comprising:

instructions for defining an objective function that produces the set of filters s_1, s_2, ..., s_K and a set of associated weights a_1, a_2, ..., a_N based upon a set of input signals X_i, white-noise processes w_ik, and a regularization parameter λ;

instructions for determining the set of associated weights a_1, a_2, ..., a_N and set of filters s_1, s_2, ..., s_K that produce an optimal solution of the objective function by iterating the following steps until convergence:

instructions for optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i; and

instructions for optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals X_i,

wherein the input signal is reconstructed as:

X_i = Σ_{k=1}^{K} a_ik (s_k * w_ik)

where * denotes convolution.

21. The non-transitory machine-readable storage medium of claim 20, wherein the set of filters s_1, s_2, ..., s_K are s_k = IFFT(ŝ_k), where IFFT is the inverse fast Fourier transform and ŝ_k is the power spectral density of a stationary process.

22. The non-transitory machine-readable storage medium of claim 20, wherein instructions for optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i include instructions for solving a set of decoupled sparse regression problems using a convex optimization algorithm.

23. The non-transitory machine-readable storage medium of claim 20, wherein instructions for optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals include using a cyclical block-coordinate descent algorithm.

24. The non-transitory machine-readable storage medium of claim 23, wherein instructions for optimizing the set of filters s_1, s_2, ..., s_K include instructions for optimizing the function:

min_{s_1, ..., s_K} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2

25. The non-transitory machine-readable storage medium of claim 24, further comprising instructions for iteratively minimizing for each of the K filters s_1, s_2, ..., s_K until convergence the function:

min_{s_j} Σ_{i=1}^{N} || X̃_ij - a_ij (s_j * w_ij) ||_2^2

where

X̃_ij = X_i - Σ_{k≠j} a_ik (s_k * w_ik)

is the current approximation residual for the ith example when the reconstruction excludes the jth filter.

26. The non-transitory machine-readable storage medium of claim 25, further comprising instructions for solving the function for s_j by marginalizing out the white-noise processes w_ij by equating each to the Fourier transform of the phase response of X̃_ij and then updating the filter s_j by solving a quadratic optimization problem.

27. The non-transitory machine-readable storage medium of claim 20, wherein the objective function is:

min_{s_1, ..., s_K, a_1, ..., a_N} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2 + λ Σ_{i=1}^{N} || a_i ||_1

28. The non-transitory machine-readable storage medium of claim 20, further comprising instructions for computing a specific set of weights a for a specific input X by solving a sparse regression problem defined as:

min_a || X - Σ_{k=1}^{K} a_k (s_k * w_k) ||_2^2 + λ || a ||_1

Description:
LEARNING OF TEMPORAL DYNAMICS FOR FEATURE EXTRACTION AND COMPRESSION OF PHYSIOLOGICAL WAVEFORMS

TECHNICAL FIELD

[0001] Various exemplary embodiments disclosed herein relate generally to a system and method for learning of temporal dynamics for feature extraction and compression of physiological waveforms.

BACKGROUND

[0002] Continuous-time physiological signals, such as the electrocardiogram (EKG), photoplethysmogram (PPG), and arterial blood pressure (ABP) signals, may be very useful in characterizing patient condition, particularly as part of a larger clinical decision support algorithm. An important challenge is how to extract meaningful features from these waveforms for various down-stream tasks that assist the clinician in better identifying and managing acute patient conditions.

SUMMARY

[0003] A summary of various exemplary embodiments is presented below. Some simplifications and omissions may be made in the following summary, which is intended to highlight and introduce some aspects of the various exemplary embodiments, but not to limit the scope of the invention. Detailed descriptions of an exemplary embodiment adequate to allow those of ordinary skill in the art to make and use the inventive concepts will follow in later sections.

[0004] Various embodiments relate to a method for determining the parameters of a sparse coding method wherein the parameters include a set of filters, the method including: defining an objective function that produces the set of filters and a set of associated weights based upon a set of input signals, a set of white-noise processes, and a regularization parameter; determining the set of associated weights and set of filters that produce an optimal solution of the objective function by iterating the following steps until convergence: optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals; and optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals, wherein the input signal is reconstructed based upon the set of associated weights, the set of filters, and the set of white noise processes.

[0005] Various embodiments are described, wherein the set of filters are an inverse fast Fourier transform (IFFT) of the power spectral density of a stationary process.

[0006] Various embodiments are described, wherein optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals includes solving a set of decoupled sparse regression problems using a convex optimization algorithm.

[0007] Various embodiments are described, wherein optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals includes using a cyclical block-coordinate descent algorithm.

[0008] Various embodiments are described, further including computing a specific set of weights for a specific input by solving a sparse regression problem based upon the specific input, the optimized set of filters, and the set of white noise processes.

[0009] Further various embodiments relate to a method for determining the parameters of a sparse coding method wherein the parameters include a set of filters s_1, s_2, ..., s_K, the method including: defining an objective function that produces the set of filters s_1, s_2, ..., s_K and a set of associated weights a_1, a_2, ..., a_N based upon a set of input signals X_i, white-noise processes w_ik, and a regularization parameter λ; determining the set of associated weights a_1, a_2, ..., a_N and set of filters s_1, s_2, ..., s_K that produce an optimal solution of the objective function by iterating the following steps until convergence: optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i; optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals X_i, wherein the input signal is reconstructed as:

X_i = Σ_{k=1}^{K} a_ik (s_k * w_ik)

[0010] Various embodiments are described, wherein the set of filters s_1, s_2, ..., s_K are s_k = IFFT(ŝ_k), where IFFT is the inverse fast Fourier transform and ŝ_k is the power spectral density of a stationary process.

[0011] Various embodiments are described, wherein optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i includes solving a set of decoupled sparse regression problems using a convex optimization algorithm.

[0012] Various embodiments are described, wherein optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals includes using a cyclical block-coordinate descent algorithm.

[0013] Various embodiments are described, wherein optimizing the set of filters s_1, s_2, ..., s_K includes optimizing the function:

min_{s_1, ..., s_K} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2

[0014] Various embodiments are described, further including iteratively minimizing for each of the K filters s_1, s_2, ..., s_K until convergence the function:

min_{s_j} Σ_{i=1}^{N} || X̃_ij - a_ij (s_j * w_ij) ||_2^2

where

X̃_ij = X_i - Σ_{k≠j} a_ik (s_k * w_ik)

is the current approximation residual for the ith example when the reconstruction excludes the jth filter.

[0015] Various embodiments are described, further including solving the function for s_j by marginalizing out the white-noise processes w_ij by equating each to the Fourier transform of the phase response of X̃_ij and then updating the filter s_j by solving a quadratic optimization problem.

[0016] Various embodiments are described, wherein the objective function is:

min_{s_1, ..., s_K, a_1, ..., a_N} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2 + λ Σ_{i=1}^{N} || a_i ||_1

[0017] Various embodiments are described, further including computing a specific set of weights a for a specific input X by solving a sparse regression problem defined as:

min_a || X - Σ_{k=1}^{K} a_k (s_k * w_k) ||_2^2 + λ || a ||_1

[0018] Further various embodiments relate to a non-transitory machine-readable storage medium encoded with instructions for determining the parameters of a sparse coding method wherein the parameters include a set of filters, including: instructions for defining an objective function that produces the set of filters and a set of associated weights based upon a set of input signals, a set of white-noise processes, and a regularization parameter; instructions for determining the set of associated weights and set of filters that produce an optimal solution of the objective function by iterating the following steps until convergence: instructions for optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals; and instructions for optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals, wherein the input signal is reconstructed based upon the set of associated weights, the set of filters, and the set of white noise processes.

[0019] Various embodiments are described, wherein the set of filters are an inverse fast Fourier transform (IFFT) of the power spectral density of a stationary process.

[0020] Various embodiments are described, wherein instructions for optimizing the set of associated weights while holding the set of filters fixed based upon the set of input signals include instructions for solving a set of decoupled sparse regression problems using a convex optimization algorithm.

[0021] Various embodiments are described, wherein instructions for optimizing the set of filters while holding the set of associated weights fixed based upon the set of input signals include using a cyclical block-coordinate descent algorithm.

[0022] Various embodiments are described, further including instructions for computing a specific set of weights for a specific input by solving a sparse regression problem based upon the specific input, the optimized set of filters, and the set of white noise processes.

[0023] Further various embodiments relate to a non-transitory machine-readable storage medium encoded with instructions for determining the parameters of a sparse coding method wherein the parameters include a set of filters s_1, s_2, ..., s_K, including: instructions for defining an objective function that produces the set of filters s_1, s_2, ..., s_K and a set of associated weights a_1, a_2, ..., a_N based upon a set of input signals X_i, white-noise processes w_ik, and a regularization parameter λ; instructions for determining the set of associated weights a_1, a_2, ..., a_N and set of filters s_1, s_2, ..., s_K that produce an optimal solution of the objective function by iterating the following steps until convergence: instructions for optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i; instructions for optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals X_i, wherein the input signal is reconstructed as:

X_i = Σ_{k=1}^{K} a_ik (s_k * w_ik)

[0024] Various embodiments are described, wherein the set of filters s_1, s_2, ..., s_K are s_k = IFFT(ŝ_k), where IFFT is the inverse fast Fourier transform and ŝ_k is the power spectral density of a stationary process.

[0025] Various embodiments are described, wherein instructions for optimizing the set of associated weights a_1, a_2, ..., a_N while holding the set of filters s_1, s_2, ..., s_K fixed based upon the set of input signals X_i include instructions for solving a set of decoupled sparse regression problems using a convex optimization algorithm.

[0026] Various embodiments are described, wherein instructions for optimizing the set of filters s_1, s_2, ..., s_K while holding the set of associated weights a_1, a_2, ..., a_N fixed based upon the set of input signals include using a cyclical block-coordinate descent algorithm.

[0027] Various embodiments are described, wherein instructions for optimizing the set of filters s_1, s_2, ..., s_K include instructions for optimizing the function:

min_{s_1, ..., s_K} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2

[0028] Various embodiments are described, further including instructions for iteratively minimizing for each of the K filters s_1, s_2, ..., s_K until convergence the function:

min_{s_j} Σ_{i=1}^{N} || X̃_ij - a_ij (s_j * w_ij) ||_2^2

where

X̃_ij = X_i - Σ_{k≠j} a_ik (s_k * w_ik)

is the current approximation residual for the ith example when the reconstruction excludes the jth filter.

[0029] Various embodiments are described, further including instructions for solving the function for s_j by marginalizing out the white-noise processes w_ij by equating each to the Fourier transform of the phase response of X̃_ij and then updating the filter s_j by solving a quadratic optimization problem.

[0030] Various embodiments are described, wherein the objective function is:

min_{s_1, ..., s_K, a_1, ..., a_N} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2 + λ Σ_{i=1}^{N} || a_i ||_1

[0031] Various embodiments are described, further including instructions for computing a specific set of weights a for a specific input X by solving a sparse regression problem defined as:

min_a || X - Σ_{k=1}^{K} a_k (s_k * w_k) ||_2^2 + λ || a ||_1

BRIEF DESCRIPTION OF THE DRAWINGS

[0032] In order to better understand various exemplary embodiments, reference is made to the accompanying drawings, wherein:

[0033] FIG. 1 illustrates a flow chart of the method of producing the parameters for the sparse encoding method; and

[0034] FIG. 2 illustrates feature extraction from an input signal.

[0035] To facilitate understanding, identical reference numerals have been used to designate elements having substantially the same or similar structure and/or substantially the same or similar function.

DETAILED DESCRIPTION

[0036] The description and drawings illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Additionally, the term "or," as used herein, refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., "or else" or "or in the alternative"). Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.

[0037] Continuous-time physiological signals, such as the electrocardiogram (EKG), photoplethysmogram (PPG), and arterial blood pressure (ABP) signals, may be very useful in characterizing patient condition, particularly as part of a larger clinical decision support algorithm. An important challenge is how to extract meaningful features from these waveforms for various down-stream tasks that assist the clinician in better identifying and managing acute patient conditions.

[0038] Described herein are embodiments of a sparse coding method that learns a representation of a class of physiological signals (e.g., EKG) by learning a dictionary of temporal dynamic processes that are expressive of the physiological signal examples from a patient dataset. Each dictionary element is an auto-regressive model, and each example signal is expressed as a sparse linear combination of autoregressive models. The linear combination weighting assigned to each signal may be used as a feature vector for downstream machine learning tasks. The representation is useful in that it provides invariance to phase distortions (including translation invariance), and is less sensitive to the particular type of EKG lead.

[0039] The sparse encoding method seeks to learn a representation for continuous-time physiological signals such as, for example, an EKG. A common approach to such a problem is to apply a neural network architecture (e.g., convolutional neural network) to extract features from temporal filters applied to the signals. Convolutional networks provide shift invariance to the representation.

[0040] The sparse encoding method described herein provides additional invariances to the derived representation by allowing each temporal filter to contour to the phase response profile of the time-series signal. In addition to shift invariance, the representation allows for nonlinear phase distortions. One benefit of this is that the representation may be common to different types of EKG leads or variations in other sorts of measurement sensors. This is because it is known that the temporal dynamics (power spectral density) are quite stable across different EKG leads. This may be true of other sorts of measurement sensors as well.

[0041] In addition, each element of the representation may be associated with a distinct power spectral density, which may provide additional interpretability for a clinician.

[0042] The sparse coding method includes an algorithm that may be implemented as a software package that may integrate with a patient monitoring system collecting continuous-time physiological signals. The sparse coding method includes multiple components, which are briefly mentioned here and then described in more detail in the following sections: 1) a representation learning method; and 2) feature extraction and downstream machine learning.

[0043] The representation learning method involves learning a representation of the physiological time-series data that will enable features to be extracted for downstream processing. The representation seeks to learn the temporal dynamics of the time-series by decomposing each into a sum of auto-regressive processes.

[0044] Specifically, let X_1, X_2, ..., X_N denote a collection of N time-series signals, which are allowed to be of different lengths. In most applications, each X_i will include one or more segmented beats from an EKG signal. While EKG signals are used as an example herein, the sparse encoding method may be applied to other measured physiological signals as well. The goal of the representation learning is to learn a dictionary of K stationary processes that may be used to reconstruct the original collection of the time-series. In order for the representation to be meaningful, usually K ≪ N.

[0045] Each stationary process may be a wide-sense stationary process that may be fully characterized by its magnitude response (or power spectral density), and to each dictionary element a canonical filter s_k = IFFT(ŝ_k) is assigned, where ŝ_k is the power spectral density of the stationary process (as a function of frequency), and IFFT denotes the inverse discrete Fourier transform. Given this representation, the original time-series X_1, X_2, ..., X_N are to be reconstructed by expressing their temporal dynamics as a sparse linear combination of dictionary stationary processes. Specifically, X_i is to be reconstructed as:

X_i = Σ_{k=1}^{K} a_ik (s_k * w_ik)

where w_ik is a realization from a white-noise stationary process, * denotes convolution, and a_ik is the associated weighting. Intuitively, a_ik characterizes the extent to which the temporal dynamics of X_i resemble the temporal dynamics of the kth stationary process. Also note that the white-noise process w_ik allows the phase response of the filter to be contoured to each individual example without altering the magnitude spectrum. This gives the a_ik coefficients invariance to a wide range of phase distortions (including linear phase distortions, which yields translation invariance). This is in contrast to a typical convolutional network, which uses a fixed template filter and matches it along temporal shifts of the input example.
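As a purely illustrative sketch of this generative model (not part of the claimed disclosure), the following NumPy snippet builds canonical filters from hypothetical Gaussian-bump power spectral densities and synthesizes one example as a sparse combination of filtered white noise; the signal length, dictionary size, and centre frequencies are all assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 256, 4                       # signal length and dictionary size (illustrative)

# Hypothetical power spectral densities: Gaussian bumps at K centre
# frequencies, one per dictionary stationary process.
freqs = np.fft.rfftfreq(T)
psds = [np.exp(-0.5 * ((freqs - f0) / 0.02) ** 2)
        for f0 in (0.05, 0.10, 0.20, 0.30)]

# Canonical filter s_k = IFFT(psd_k): a time-domain template whose
# magnitude spectrum follows the k-th stationary process.
filters = [np.fft.irfft(p, n=T) for p in psds]

# Reconstruction model: X_i = sum_k a_ik (s_k * w_ik), with a_i sparse.
a = np.zeros(K)
a[1] = 1.0                          # a single active process -> sparse code
w = [rng.standard_normal(T) for _ in range(K)]
X = sum(a[k] * np.convolve(filters[k], w[k], mode="same") for k in range(K))
```

Because only one coefficient is active, the synthesized X carries the temporal dynamics of a single dictionary process, while the white-noise realization sets its phase.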

[0046] FIG. 1 illustrates a flow chart of the method of producing the parameters for the sparse encoding method 100.

[0047] The goal of the representation learning is to simultaneously learn the K global dictionary filters s_1, s_2, ..., s_K and the process weights (a_i1, a_i2, ..., a_iK) for each input example X_i. This may be accomplished by defining a variant of sparse coding, which formulates the overall objective function 110 as:

min_{s_1, ..., s_K, a_1, ..., a_N} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2 + λ Σ_{i=1}^{N} || a_i ||_1

The first term of the optimization is a model-fitting term that measures how well each example is reconstructed from the dictionary of stationary processes. The second term is an ℓ1-norm on the coefficient weighting vectors, which biases the solution to produce sparse solutions (e.g., each a_i contains many zero entries). The two terms are balanced by a regularization parameter λ, which may be tuned by a variety of means (e.g., cross-validation).
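The two terms of this objective can be evaluated directly for any candidate solution. The helper below is a minimal, illustrative sketch (all names and data layouts are assumptions, not from the disclosure) that scores a set of filters, weights, and noise realizations:

```python
import numpy as np

def sparse_coding_objective(signals, filters, weights, noises, lam):
    """Model-fit term plus l1 penalty of the sparse-coding objective.

    signals[i] : input example X_i
    weights[i] : coefficient vector a_i of length K
    noises[i]  : the K white-noise realizations w_i1..w_iK for example i
    lam        : regularization parameter (lambda)
    """
    fit = 0.0
    for X, a, w in zip(signals, weights, noises):
        # Reconstruction: sum_k a_ik (s_k * w_ik), * = convolution.
        recon = sum(a[k] * np.convolve(filters[k], w[k], mode="same")
                    for k in range(len(filters)))
        fit += float(np.sum((X - recon) ** 2))
    l1 = sum(float(np.sum(np.abs(a))) for a in weights)
    return fit + lam * l1
```

With a perfect reconstruction the model-fit term vanishes and only the λ-weighted ℓ1 penalty remains, which is how the parameter trades reconstruction quality against sparsity.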

[0048] The above optimization problem may be solved by an alternating minimization strategy which iterates between two steps until convergence 125: 1) the canonical filters are held fixed while optimizing for the process coefficient vectors (this results in N decoupled sparse regression problems which may be solved via a variety of convex optimization algorithms) 115; and 2) holding the process coefficient vectors fixed and optimizing for the canonical filters 120. The second step 120 may be achieved by a cyclical block-coordinate descent algorithm. In detail, when the process coefficient vectors are held fixed, the optimization for the canonical filters simplifies to:

min_{s_1, ..., s_K} Σ_{i=1}^{N} || X_i - Σ_{k=1}^{K} a_ik (s_k * w_ik) ||_2^2

The above may be optimized by iteratively minimizing for each of the K canonical filters until convergence. When optimizing for the canonical filter j ∈ {1, ..., K}, a minimization of the following results:

min_{s_j} Σ_{i=1}^{N} || X̃_ij - a_ij (s_j * w_ij) ||_2^2     (1)

where

X̃_ij = X_i - Σ_{k≠j} a_ik (s_k * w_ik)

is the current approximation residual for the ith example when the reconstruction excludes the jth canonical filter.
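The first alternating step, one of the N decoupled sparse regressions, can be sketched with a plain iterative soft-thresholding (ISTA) routine. This is only one of the many convex solvers the disclosure allows, and the design matrix D, whose columns are assumed to hold the filtered noise regressors s_k * w_ik, is an illustrative way of assembling the problem:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weight_step(X, D, lam, iters=200):
    """Solve min_a ||X - D a||_2^2 + lam * ||a||_1 by ISTA.

    D's columns are the per-process regressors (s_k * w_ik), so one call
    handles a single example's sparse regression with filters held fixed.
    """
    a = np.zeros(D.shape[1])
    # Step size from the Lipschitz constant 2 * sigma_max(D)^2.
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ a - X)
        a = soft_threshold(a - step * grad, step * lam)
    return a
```

For a dictionary with orthonormal columns this converges to the familiar shrinkage solution a_k = soft(d_k·X, λ/2), which makes the routine easy to sanity-check.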

[0049] Equation (1) may then be solved by first marginalizing out the white noise process by equating it to the inverse Fourier transform of the phase response of X̃_ij. The canonical filter may then be updated by solving a quadratic optimization problem.
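A minimal sketch of the phase-marginalization idea, an illustrative reading rather than the disclosure's exact derivation: keep the filter's magnitude spectrum but borrow the phase of the target signal, so the contoured atom matches the example's phase profile while its power spectral density is untouched:

```python
import numpy as np

def contour_phase(filt, target):
    """Combine the filter's magnitude response with the target's phase.

    filt and target are assumed to have the same length.  The returned
    atom has exactly the filter's magnitude spectrum, which is what makes
    the resulting representation invariant to phase distortions.
    """
    mag = np.abs(np.fft.rfft(filt))
    phase = np.angle(np.fft.rfft(target))
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(target))
```

Because only the phase is borrowed, shifting or phase-distorting the target changes the atom's shape but never its magnitude spectrum, mirroring the invariance claimed for the a_ik coefficients.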

[0050] The result of the representation learning stage is the set of canonical filters s_1, ..., s_K, whose power spectral densities characterize the temporal dynamics of the K dictionary stationary processes. These will be used for feature extraction in the next stage of the method.

[0051] Feature extraction and downstream machine learning focuses on using the learned representation from the previous section for feature selection for a variety of downstream tasks. FIG. 2 illustrates feature extraction from an input signal. An input signal 205 is input to the representation learning feature extraction model 210, and a set of features a 215 describing the input signal is output. In this stage, the learned dictionary stationary processes are held fixed (as learned by previous training and optimization) and the representation is sought for a new example time-series X. The corresponding feature vector to extract is the coefficient vector "a", which weights the contribution of each stationary process to the reconstruction. This may be achieved by, after marginalizing out the white noise process analogously to that described above with respect to equation (1), solving a sparse regression problem of the form:

min_a || X - Σ_{k=1}^{K} a_k (s_k * w_k) ||_2^2 + λ || a ||_1
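Putting the pieces together, a toy, self-contained sketch of this extraction stage: fixed filters are phase-contoured to the new example, then an ISTA-style sparse regression recovers the feature vector. Every name, shape, and solver choice here is an illustrative assumption, not the disclosure's implementation:

```python
import numpy as np

def extract_features(X, filters, lam, iters=300):
    """Return the K-dimensional feature vector "a" for a new example X.

    Each dictionary filter is first phase-contoured to X (white noise
    marginalized out), then "a" is found by l1-regularized regression.
    """
    def atom(f):
        # Filter magnitude response combined with the example's phase.
        mag = np.abs(np.fft.rfft(f, n=len(X)))
        ph = np.angle(np.fft.rfft(X))
        return np.fft.irfft(mag * np.exp(1j * ph), n=len(X))

    D = np.stack([atom(f) for f in filters], axis=1)
    a = np.zeros(D.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)
    for _ in range(iters):
        a = a - step * 2.0 * D.T @ (D @ a - X)          # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrink
    return a
```

The output "a" is the fixed-length feature vector fed to downstream regression or classification models; raising λ drives more of its components to exactly zero.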

[0052] The result of the above is a K-dimensional feature vector "a", where each component measures how closely the time-series X resembles the dynamics of each dictionary stationary process.

[0053] This results in a transformation from a time-series signal X to a fixed-length (K-dimensional) feature vector "a". In a supervised learning setting, each signal X will often be associated with a clinical variable of interest (for example, whether or not the patient is suffering from an acute condition). Thus, standard machine learning regression/classification algorithms may be applied to predict the clinical variable of interest from the extracted feature vector "a".
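Any standard classifier can consume the fixed-length feature vectors. As one minimal, self-contained example (the patent does not prescribe a particular algorithm), the sketch below trains a logistic regression by gradient descent to predict a binary clinical variable from a matrix A of extracted features; all names here are hypothetical.

```python
import numpy as np

def train_logistic(A, y, lr=0.5, n_iter=500):
    """Logistic regression on extracted feature vectors A (n_samples x K)
    predicting a binary clinical variable y in {0, 1}."""
    Ab = np.hstack([A, np.ones((A.shape[0], 1))])  # append a bias column
    w = np.zeros(Ab.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Ab @ w))          # predicted probabilities
        w -= lr * Ab.T @ (p - y) / len(y)          # mean cross-entropy gradient
    return w

def predict(A, w):
    """Hard 0/1 predictions for a feature matrix A."""
    Ab = np.hstack([A, np.ones((A.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-Ab @ w)) > 0.5).astype(int)
```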

[0054] The sparse coding method may be useful for a wide range of clinical decision support algorithms related to physiological signals. The representation learning stage may be used as a first-stage feature extraction engine for training predictive clinical or therapy decision support algorithms.

[0055] The sparse coding method described herein solves various technological problems. The representation it produces may be particularly useful in cases in which the EKG lead types are unknown, or in which there is wide variability in the EKG leads collected across a population of patients; the same applies to other types of measurement sensors. The sparse coding method provides a representation of an input signal that is invariant to phase distortions (including translation invariance) and that accommodates nonlinear phase distortions.

[0056] In addition, because the representation learning stage learns a compressed representation of the physiological signal, the sparse coding method may also be used for lossy/lossless compression of physiological signals. Another potential application of the sparse coding method is denoising.
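The compression use case in paragraph [0056] follows from the representation itself: a T-sample signal is stored as a mostly-zero K-dimensional coefficient vector, and decompression is a reconstruction from the shared dictionary. The sketch below is again a simplified, atom-based illustration (no white-noise convolution), with hypothetical names.

```python
import numpy as np

def compress(x, D, lam=0.2, n_iter=200):
    """Lossy compression: encode x as its sparse coefficient vector
    (ISTA on 0.5*||x - D a||^2 + lam*||a||_1)."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def decompress(a, D):
    """Reconstruct the signal from its coefficients and the shared dictionary."""
    return D @ a
```

Because the sparse code keeps only the dominant dictionary contributions, the same encode/decode pair also acts as a simple denoiser: components of x that no dictionary process explains are dropped from the reconstruction.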

[0057] The embodiments described herein may be implemented as software running on a processor with an associated memory and storage. The processor may be any hardware device capable of executing instructions stored in memory or storage or otherwise processing data. As such, the processor may include a microprocessor, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), a specialized neural network processor, a cloud computing system, or other similar devices.

[0058] The memory may include various memories such as, for example, L1, L2, or L3 cache or system memory. As such, the memory may include static random-access memory (SRAM), dynamic RAM (DRAM), flash memory, read-only memory (ROM), or other similar memory devices.

[0059] The storage may include one or more machine-readable storage media such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media. In various embodiments, the storage may store instructions for execution by the processor or data upon which the processor may operate. This software may implement the various embodiments described above.

[0060] Further, such embodiments may be implemented on multiprocessor computer systems, distributed computer systems, and cloud computing systems. For example, the embodiments may be implemented as software on a server, a specific computer, a cloud computing platform, or another computing platform.

[0061] Any combination of specific software running on a processor to implement the embodiments of the invention constitutes a specific dedicated machine.

[0062] As used herein, the term "non-transitory machine-readable storage medium" will be understood to exclude a transitory propagation signal but to include all forms of volatile and non-volatile memory.

[0063] Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.