Title:
TEMPLATE ADAPTION METHOD FOR ANALYSING 2D QUASI-PERIODIC BIOMEDICAL SIGNALS
Document Type and Number:
WIPO Patent Application WO/2022/120427
Kind Code:
A1
Abstract:
A method for the adaptation (matching) of arbitrary quasi-periodic time series data, such as ECG and PPG signals, is described. An iterative deterministic annealing approach is used to estimate the transformation of a template signal to a target signal using an energy function in which the error term is weighted by a band matrix and the correspondence matrix is a doubly stochastic matrix which provides correspondence probabilities between the points in the template signal and target signal, allows non-binary correspondence between the points, and omits outlier slack variables. The iterative method uses an alternating approach, under which the correspondence matrix is updated and then the subsequent deformation is obtained to adapt the template data to the target data.

Inventors:
KARISIK FILIP (AU)
BAUMERT MATHIAS (AU)
Application Number:
PCT/AU2021/051469
Publication Date:
June 16, 2022
Filing Date:
December 09, 2021
Assignee:
UNIV ADELAIDE (AU)
International Classes:
G06K9/00; A61B5/35; A61B5/36; G06F17/17; G06K9/62; G06T3/00; G06T7/38
Other References:
Chui, H. et al., "A new point matching algorithm for non-rigid registration," Computer Vision and Image Understanding, vol. 89, 2003, pp. 114-141. DOI: 10.1016/S1077-3142(03)00009-2
Karisik, F. et al., "Inhomogeneous template adaptation of temporal quasi-periodic three-dimensional signals," IEEE Transactions on Signal Processing, vol. 67, no. 23, 2019, pp. 6067-6077. DOI: 10.1109/TSP.2019.2951229
Rangarajan, A., Gold, S. and Mjolsness, E., "A novel optimizing network architecture with applications," Neural Computation, vol. 8, no. 5, July 1996, pp. 1041-1060. DOI: 10.1162/neco.1996.8.5.1041
Gold, S. et al., "New algorithms for 2D and 3D point matching: pose estimation and correspondence," Pattern Recognition, vol. 31, 1998, pp. 1019-1031. DOI: 10.1016/S0031-3203(98)80010-1
Ma, J. et al., "Non-rigid point set registration by preserving global and local structures," IEEE Transactions on Image Processing, vol. 25, no. 1, 2015, pp. 53-64. DOI: 10.1109/TIP.2015.2467217
Attorney, Agent or Firm:
MADDERNS PTY LTD (AU)
Claims:

CLAIMS

1. A method for estimating a transformation of a template time-series dataset to a target time-series dataset, for use with a target time-series dataset comprising a plurality of quasi-periodic features, the method comprising:
normalizing a template dataset and a target dataset to a first scale from an original scale;
obtaining an estimate of a transformation of a lattice of control points in a local coordinate system upon which the template dataset is embedded by using deterministic annealing to iteratively minimize an energy function comprising:
an error measure term which estimates a difference between the target dataset and the template dataset and which is weighted by a band matrix;
an overfitting reduction term;
an entropy barrier function to constrain the values of a correspondence matrix to a predetermined range; and
a convergence term to prevent zero matches in the correspondence matrix so as to satisfy a Sinkhorn-Knopp algorithm condition,
wherein each iteration of the deterministic annealing comprises:
a free form deformation parameterisation step in which an input template dataset is embedded onto the lattice of control points, wherein in a first iteration the input template dataset is the normalised template dataset and in subsequent iterations the input template dataset is an adapted template output during the previous iteration; and
an alternating update step comprising alternating between estimation of a correspondence matrix and estimation of the transformation of the control points, wherein the correspondence matrix is a doubly stochastic matrix which provides correspondence probabilities between the points in the template dataset and the target dataset, allows non-binary correspondence between the points, and omits outlier slack variables; and
de-normalising the template dataset using the updated transformation to obtain an adapted template dataset in the original scale.

2. The method as claimed in claim 1, wherein the free form deformation parameterisation is formulated as an incremental deformation process comprising an initial parameterised template obtained by embedding the input template dataset onto the lattice of control points, and an incremental deformation term corresponding to a shift of the control points.

3. The method as claimed in claim 2, wherein the free form deformation parameterisation is formulated as:

$f(\hat{Y}_a, \Theta) = \hat{Y}_a + \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s_a)\,B_j(t_a)\,\Theta_{i,j},$

where $\hat{Y}$ is an initial parameterised template, $B_i$ and $B_j$ are basis polynomials evaluated at the local lattice coordinates $(s_a, t_a)$ of the $a$-th sample, and $\Theta$ is an incremental deformation of the control points.

4. The method as claimed in claim 3, wherein free form deformation parameterisation is based on Bernstein polynomials.

5. The method as claimed in any one of claims 1 to 4, wherein the energy function has the form of Eq. (7) herein, where $M = [m_{a,b}]$ is the correspondence matrix, $\hat{Q}$ is a normalized target dataset, $\hat{Y}$ is a normalized template dataset, $z_{a,b}$ is a band matrix, $\Theta$ is an incremental deformation of the control points, $\tau$ is a multiplier, $\xi$ is a positive non-zero value, and $m_{a,b}$ fulfils $\sum_a m_{a,b} = 1$ and $\sum_b m_{a,b} = 1$.

6. The method as claimed in claim 5, wherein the band matrix is a tridiagonal band matrix with the form $z_{a,b} = 1$ if $|a - b| \le 1$ and $z_{a,b} = 0$ otherwise.

7. The method as claimed in claim 5 or 6, wherein estimation of the correspondence matrix is performed by differentiating the energy function with respect to $m_{a,b}$ and setting the result to zero.

8. The method as claimed in claim 7, wherein the estimation of the correspondence matrix is performed by solving:

$m_{a,b} = \exp\!\left(-\frac{z_{a,b}\,\|\hat{Q}_b - f(\hat{Y}_a, \Theta)\|^2}{\tau} - 1\right) + \xi,$

where the value of $\xi$ is set as the minimum value of $m_{a,b}$ obtained with $\xi$ set to zero.

9. The method as claimed in any one of claims 5 to 8, wherein estimation of the transformation of the control points is performed by taking the derivative of the energy function with respect to Θ and solving using an adaptive gradient descent method.

10. The method as claimed in claim 9, wherein the gradient update for any given parameter $\Theta_{i,j}$ is obtained using

$\Theta_{i,j}^{step+1} = \Theta_{i,j}^{step} - \frac{\eta}{\sqrt{v_{i,j}^{step}} + \varepsilon}\, g_{i,j}^{step},$

where $\eta$ denotes a general learning rate, $\varepsilon$ is a small constant to prevent division by zero, $step$ is the iteration number of the gradient descent method, and $v_{i,j}^{step}$ is the exponentially decaying average of the previous squared gradients given by:

$v_{i,j}^{step} = \alpha\, v_{i,j}^{step-1} + (1 - \alpha)\left(g_{i,j}^{step}\right)^2,$

where $\alpha$ denotes a momentum value and $g_{i,j}^{step}$ is the partial derivative of the energy function with respect to $\Theta_{i,j}$ at iteration $step$.

11. The method as claimed in claim 9 or 10 wherein the adaptive gradient descent method is the RMSprop adaptive gradient descent method.

12. The method as claimed in any one of claims 5 to 11, wherein the method further comprises an initialisation step prior to normalisation, comprising initialising the parameters $\tau$, $M$, $\delta$ and a bandwidth.

13. The method as claimed in claim 12, wherein the bandwidth of the band matrix is set as a percentage of a length of the target dataset.

14. The method as claimed in any one of claims 5 to 13, wherein the method further comprises updating $\tau$ and $M$ after the alternating update step.

15. The method as claimed in any one of claims 1 to 14, wherein the band matrix is a binary matrix where a bandwidth determines a range of template samples evaluated against the target samples.

16. The method as claimed in any one of claims 1 to 15, wherein the overfitting reduction term is an L2 ridge regularisation term.

17. The method as claimed in any one of claims 1 to 16, wherein the entropy barrier function constrains the values of the correspondence matrix to the range [0,1].

18. The method as claimed in any one of claims 1 to 17, wherein the convergence term comprises a positive non-zero convergence value which ensures that each element of the correspondence matrix has a positive non-zero value so as to satisfy a Sinkhorn-Knopp algorithm condition.

19. The method as claimed in any one of claims 1 to 18, further comprising receiving one or more biomedical signals obtained from a biomedical sensor, obtaining a target time-series dataset for each of the one or more biomedical signals, using the method of any one of claims 1 to 18 to obtain an adapted template dataset for each, analysing the one or more adapted template datasets to identify whether a pathophysiological variation of the one or more biomedical signals exists, and, if a pathophysiological variation exists, generating an alarm or report.

20. A computer program product, comprising computer readable instructions for causing a processor to perform the method of any one of claims 1 to 19.

21. A computing apparatus comprising at least one processor operatively coupled to at least one memory, wherein the at least one processor is configured to perform the method of any one of claims 1 to 19.

22. The computing apparatus as claimed in claim 21, further comprising a biomedical sensor for measuring one or more biomedical signals, wherein the memory stores a set of rules or thresholds indicative of pathophysiological variations of the biomedical signal, and the at least one processor is further configured to obtain a target time-series dataset for each of the one or more biomedical signals, use the method of any one of claims 1 to 19 to obtain an adapted template dataset for each, analyse the one or more adapted template datasets to identify whether a pathophysiological variation of the one or more biomedical signals exists using the stored set of rules or thresholds, and, if a pathophysiological variation exists, generate an alarm or report.

Description:
TEMPLATE ADAPTION METHOD FOR ANALYSING 2D QUASI-PERIODIC BIOMEDICAL SIGNALS

PRIORITY DOCUMENT

[0001] The present application claims priority from Australian Provisional Patent Application No. 2020904577 titled “TEMPLATE ADAPTION METHOD FOR ANALYSING 2D QUASIPERIODIC BIOMEDICAL SIGNALS” and filed on 9 December 2020, the content of which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to signal analysis of biomedical signals. In a particular form the present disclosure relates to methods for analysing quasi-periodic signals such as electrocardiogram (ECG) and photoplethysmogram (PPG) signals.

BACKGROUND

[0003] Many biomedical signals, such as the electrocardiogram (ECG) and photoplethysmogram (PPG), exhibit quasi-periodicity. The ECG and PPG are amongst the most extensively studied physiological signals [45], [27], [13], [14]. The ECG represents variations in the summed electrical potential generated by heart muscles, whilst the PPG describes volumetric changes of blood circulation using a photodetector at the surface of the skin. An important ECG feature is the QT interval, which represents the period of ventricular depolarization and repolarization of a cardiac cycle. QT interval variability (QTV) is a non-linear beat-to-beat process and is algorithmically difficult to quantify. Tracking of beat-to-beat variability is essential for the robust study of cardiac control, abnormalities and diseases, as elevated QT variability has been found to be a predictor of heart disease and mortality [19], [3]. Additionally, elevated QTV has been observed in sleep apnoea [4], [42], demonstrating the broader importance of this feature. There is thus a significant need for QT interval measurement techniques to robustly capture pathophysiological variations. Similarly, robust tracking of the dicrotic notch in PPG is of importance to assess properties of the arterial vascular system [32] and thus is also of clinical importance.

[0004] Under normal conditions, beat-to-beat QT interval changes are minimal, detectable by computerized high-resolution ECG. However, accurate delineation of the T wave end (T_end) is challenging, and most commercial systems measure the average, rate-corrected QT interval and QT dynamicity utilising simple tangent and derivative-threshold methods. See for example Porta et al., "Performance assessment of standard algorithms for dynamic R-T interval measurement: Comparison between R-T(apex) and R-T(end) approach," Med. Biol. Eng. Comput., vol. 36, no. 1, pp. 35-42, Jan. 1998, which examined several conventional derivative-based methods. Although such techniques have been used for QTV analysis, their accuracy appears insufficient and it has been recommended that other QT analysis methods be developed.

[0005] In particular, the use of robust semi-automated or automated template matching methods has been recommended. Template adaptation (or template matching) is a signal processing method used to match two patterns, and aims to deform a parameterized template signal to a target signal by optimizing a cost function. One approach, referred to as Template Stretch, was proposed by Berger et al. (R. D. Berger, E. K. Kasper, K. L. Baughman, E. Marban, H. Calkins, and G. F. Tomaselli, "Beat-to-beat QT interval variability: Novel evidence for repolarization lability in ischemic and nonischemic dilated cardiomyopathy," Circulation, vol. 96, no. 5, pp. 1557-1565, Sep. 1997). Template Stretch matches the stretched or compressed ST-T segments of consecutive beats with a user-defined template, obtaining ST-T segment duration changes relative to the template duration. The QRS interval is assumed constant. Naturally, this is not always fully accurate, as rate-dependent changes in activation sequence also exist. Variation in the metrics used for template matching might thus also be erroneously interpreted as primary repolarization variation when in fact secondary variations due to activation sequence modulations should also be considered. Another approach, referred to as Template Shift, was developed by Starc and Schlegel (V. Starc and T. T. Schlegel, "Real-time multichannel system for beat-to-beat QT interval variability," J. Electrocardiol., vol. 39, no. 4, pp. 358-367, Oct. 2006). Template Shift performs time matching of a template of the T wave descending part within consecutive beats together with beat-to-beat Q onset detection. Further details may be found in Baumert M, Starc V, Porta A (2012), "Conventional QT Variability Measurement vs. Template Matching Techniques: Comparison of Performance Using Simulated and Real ECG," PLoS ONE 7(7): e41920. https://doi.org/10.1371/journal.pone.0041920

[0006] More recently, an approach known as two-dimensional signal warping (2DSW) has been proposed by Schmidt et al. (M. Schmidt, M. Baumert, A. Porta, H. Malberg, and S. Zaunseder, "Two-dimensional warping for one-dimensional signals - conceptual framework and application to ECG processing," IEEE Transactions on Signal Processing, vol. 62, no. 21, pp. 5577-5588, 2014). This is a local template adaptation method which seeks to obtain deformations capturing subtle variations unique to subsets of the data, and proposes two-dimensional warping of the entire QT interval based on dynamic time warping and correlation optimised warping.

[0007] In signal processing, warping is understood as a method to match two patterns. It originated in the field of audio processing for the comparison of variable-speed time series, for example to account for varying lengths in the pronunciation of identical phonemes. By allowing certain variations to one pattern's shape, warping accounts for temporal shifts occurring within patterns. A set of predefined rules determines which variations are allowed. A cost function guides the search for the optimal variation, i.e. the one that results in closely matched patterns. Dynamic time warping (DTW) is probably the most commonly employed warping algorithm. Data are compared sample to sample between a reference and target sequence. A rectangular matrix map is generated by analysing a Euclidean distance cost function. The algorithm is then tasked with the dynamic programming problem of obtaining the least-cost path. Many variations and constraints have been applied to DTW, with the Sakoe-Chiba band being amongst the most popular. The Sakoe-Chiba band limits the permissible warping path across the rectangular matrix. Further variants include correlation optimized warping, derivative DTW and FastDTW.
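To make the dynamic programming formulation above concrete, the following is a minimal sketch of DTW with a Sakoe-Chiba band; the function and parameter names are illustrative only and are not part of the disclosure.

```python
import numpy as np

def dtw_distance(ref, target, band=None):
    """Minimal DTW sketch: cumulative sample-to-sample cost with an
    optional Sakoe-Chiba band limiting the permissible warping path."""
    n, m = len(ref), len(target)
    band = band if band is not None else max(n, m)  # no constraint by default
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        # restrict columns to the Sakoe-Chiba band around the diagonal
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            cost = abs(ref[i - 1] - target[j - 1])  # per-sample cost
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return D[n, m]
```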

[0008] DTW and related algorithms are all linked by a common goal: to capture one-dimensional shifts between time series, and such warping algorithms are still commonly employed in time series analysis. However, standard time warping methods are incapable of capturing features in two dimensions and are therefore of limited use in quasi-periodic time series such as those found in physiological systems. Two-dimensional signal warping (2DSW) addressed this by using two-dimensional piece-wise stretching of a warping grid on which the waveform is superimposed, thus allowing for temporal shifts and changes in magnitude at the same time. In particular, the algorithm uses sequential execution, passive shifting and local cost functions to estimate the optimal warping of a candidate point in the warping grid. Sequential execution defines the order of processing of the warping points. Once the optimum warping of the current point is found, the subsequent remaining points in the sequence (to be processed) are each passively shifted by the optimum warping of the current point. Searching for the optimum warping of the current point is guided by minimising a cost function which maximizes the similarity between the waveform to be adapted and its reference. An iterative version of this method, referred to as 2DSWi, was also developed.

[0009] However, a problem with 2DSW and 2DSWi is that they have difficulty capturing features of 2D data. Most previous research addressed the problem of template adaptation by pre-aligning an established and prominent feature of the quasi-periodic pattern. By imposing an initial alignment on the template, prior works have been limited by assuming a correspondence between the remaining samples of the template and target signals with respect to the reference feature. While this approximation is robust when a singular quasi-periodic feature is of concern, it is of limited value for quasi-periodic signals containing multiple features. For instance, where physiological signals such as ECG are of concern, in excess of five features may manifest for any given cycle.

[0010] There is thus a need to provide improved methods for template matching of 2D quasi-periodic signals, such as for robustly estimating QT interval changes in ECG signals, or to at least provide a useful alternative to existing methods.

SUMMARY

[0011] According to a first aspect, there is provided a method for estimating a transformation of a template time-series dataset to a target time-series dataset, for use with a target time-series dataset comprising a plurality of quasi-periodic features, the method comprising: normalizing a template dataset and a target dataset to a first scale from an original scale; obtaining an estimate of a transformation of a lattice of control points in a local coordinate system upon which the template dataset is embedded by using deterministic annealing to iteratively minimize an energy function comprising: an error measure term which estimates a difference between the target dataset and the template dataset and which is weighted by a band matrix; an overfitting reduction term; an entropy barrier function to constrain the values of a correspondence matrix to a predetermined range; and a convergence term to prevent zero matches in the correspondence matrix so as to satisfy a Sinkhorn-Knopp algorithm condition, wherein each iteration of the deterministic annealing comprises: a free form deformation parameterisation step in which an input template dataset is embedded onto the lattice of control points, wherein in a first iteration the input template dataset is the normalised template dataset and in subsequent iterations the input template dataset is an adapted template output during the previous iteration; and an alternating update step comprising alternating between estimation of a correspondence matrix and estimation of the transformation of the control points, wherein the correspondence matrix is a doubly stochastic matrix which provides correspondence probabilities between the points in the template dataset and the target dataset, allows non-binary correspondence between the points, and omits outlier slack variables; and de-normalising the template dataset using the updated transformation to obtain an adapted template dataset in the original scale.

[0012] Each of the target time-series datasets may be obtained from a biomedical signal obtained from a biomedical sensor, and is analysed using the method of the first aspect to obtain an adapted template dataset. The adapted template datasets may then be analysed to identify if a pathophysiological variation of the one or more biomedical signals exists, and if a pathophysiological variation exists then an alarm or report may be generated. In some embodiments the biomedical sensor is an electrocardiogram (ECG) sensor (or sensor apparatus) and the method is used to analyse QT interval variability (QTV) of the patient. In another embodiment the biomedical sensor is a photoplethysmogram (PPG) sensor (or sensor apparatus) and the method is used to track the dicrotic notch in a PPG signal.

[0013] In one form, the free form deformation parameterisation is formulated as an incremental deformation process comprising an initial parameterised template obtained by embedding the input template dataset onto the lattice of control points, and an incremental deformation term corresponding to a shift of the control points.

[0014] In a further form, the free form deformation parameterisation is formulated as:

$f(\hat{Y}_a, \Theta) = \hat{Y}_a + \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s_a)\,B_j(t_a)\,\Theta_{i,j},$

where $\hat{Y}$ is an initial parameterised template, $B_i$ and $B_j$ are basis polynomials evaluated at the local lattice coordinates $(s_a, t_a)$ of the $a$-th sample, and $\Theta$ is an incremental deformation of the control points.

[0015] In a further form, the free form deformation parameterisation is based on Bernstein polynomials.

[0016] In one form, the energy function has the form given in Eq. (7) below, where $M = [m_{a,b}]$ is the correspondence matrix, $\hat{Q}$ is a normalized target dataset, $\hat{Y}$ is a normalized template dataset, $z_{a,b}$ is a band matrix, $\Theta$ is an incremental deformation of the control points, $\tau$ is a multiplier, $\xi$ is a positive non-zero value, and $m_{a,b}$ fulfils $\sum_a m_{a,b} = 1$ and $\sum_b m_{a,b} = 1$.

[0017] In a further form, the band matrix is a tridiagonal band matrix with the form $z_{a,b} = 1$ if $|a - b| \le 1$ and $z_{a,b} = 0$ otherwise.

[0018] In a further form, the estimation of the correspondence matrix is performed by differentiating the energy function with respect to $m_{a,b}$ and setting the result to zero.

[0019] In a further form, the estimation of the correspondence matrix is performed by solving:

$m_{a,b} = \exp\!\left(-\frac{z_{a,b}\,\|\hat{Q}_b - f(\hat{Y}_a, \Theta)\|^2}{\tau} - 1\right) + \xi,$

where the value of $\xi$ is set as the minimum value of $m_{a,b}$ obtained with $\xi$ set to zero.

[0020] In a further form, estimation of the transformation of the control points is performed by taking the derivative of the energy function with respect to Θ and solving using an adaptive gradient descent method.

[0021] In a further form, the gradient update for any given parameter $\Theta_{i,j}$ is obtained using

$\Theta_{i,j}^{step+1} = \Theta_{i,j}^{step} - \frac{\eta}{\sqrt{v_{i,j}^{step}} + \varepsilon}\, g_{i,j}^{step},$

where $\eta$ denotes a general learning rate, $\varepsilon$ is a small constant to prevent division by zero, $step$ is the iteration number of the gradient descent method, and $v_{i,j}^{step}$ is the exponentially decaying average of the previous squared gradients given by:

$v_{i,j}^{step} = \alpha\, v_{i,j}^{step-1} + (1 - \alpha)\left(g_{i,j}^{step}\right)^2,$

where $\alpha$ denotes a momentum value and $g_{i,j}^{step}$ is the partial derivative of the energy function with respect to $\Theta_{i,j}$ at iteration $step$.

[0022] In a further form, the adaptive gradient descent method is the RMSprop adaptive gradient descent method.

[0023] In a further form, the method further comprises an initialisation step prior to normalisation, comprising initialising the parameters $\tau$, $M$, $\delta$ and a bandwidth.

[0024] In a further form, the bandwidth of the band matrix is set as a percentage of a length of the target dataset.

[0025] In a further form, the method further comprises updating $\tau$ and $M$ after the alternating update step.

[0026] In one form, the band matrix is a binary matrix where a bandwidth determines a range of template samples evaluated against the target samples.

[0027] In one form, the overfitting reduction term is an L2 ridge regularisation term.

[0028] In one form, the entropy barrier function constrains the values of the correspondence matrix to the range [0,1].

[0029] In one form, the convergence term comprises a positive non-zero convergence value which ensures that each element of the correspondence matrix has a positive non-zero value so as to satisfy a Sinkhorn-Knopp algorithm condition.

[0030] According to a second aspect, there is provided a computer program product, comprising computer readable instructions for causing a processor to perform the method of the first aspect.

[0031] According to a third aspect, there is provided a computing apparatus comprising at least one processor operatively coupled to at least one memory, wherein the processor is configured to perform the method of the first aspect. In a further form the computing apparatus further comprises a biomedical sensor for measuring one or more biomedical signals, and the memory may store a set of rules or thresholds indicative of pathophysiological variations of the biomedical signal. The biomedical signals are analysed using the method of the first aspect and the stored set of rules or thresholds indicative of pathophysiological variations of the biomedical signal.

BRIEF DESCRIPTION OF DRAWINGS

[0032] Embodiments of the present disclosure will be discussed with reference to the accompanying drawings wherein:

[0033] Figure 1A is a flowchart of a method for estimating a transformation of a template time-series dataset to a target time-series dataset according to an embodiment;

[0034] Figure 1B is a schematic illustration of an embodiment of a method for estimating a transformation of a template time-series dataset to a target time-series dataset;

[0035] Figure 2 is a plot of an ECG template (dashed fill line) and beat (solid fill line) parameterized to a Free Form Deformation (FFD) lattice of 6 × 6 control points according to an embodiment;

[0036] Figure 3A is a plot of an ECG template (dashed fill line) adapted (dotted fill line) to a noisy beat (solid black line) on the first iteration according to an embodiment;

[0037] Figure 3B is a plot of an ECG template (dashed fill line) adapted (dotted fill line) to a noisy beat (solid black line) on the second iteration according to an embodiment;

[0038] Figure 3C is a plot of an ECG template (dashed fill line) adapted (dotted fill line) to a noisy beat (solid black line) on the fifth iteration according to an embodiment;

[0039] Figure 4A plots QTV under Gaussian noise across the zero-QTV simulated ECG data using an embodiment of the method and five other prior art published techniques;

[0040] Figure 4B plots QTV under baseline wander across the zero-QTV simulated ECG data using an embodiment of the method and five other prior art published techniques;

[0041] Figure 4C plots QTV under amplitude modulation across the zero-QTV simulated ECG data using an embodiment of the method and five other prior art published techniques;

[0042] Figure 5 plots QTV results obtained across the PTB database using an embodiment of the proposed method and three other prior art methods, with the results summarized across healthy subjects (grey) and MI patients (black) in the form of the mean (bar) plus standard deviation (line segment).

[0043] Figure 6A plots five beats of a first subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0044] Figure 6B plots five beats of a second subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0045] Figure 6C plots five beats of a third subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0046] Figure 6D plots five beats of a fourth subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0047] Figure 6E plots five beats of a fifth subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0048] Figure 6F plots five beats of a sixth subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment;

[0049] Figure 6G plots five beats of a seventh subject in the BIDMC database showing how the template (dashed fill line) is adapted to the beat (solid black line) producing the resultant deformation (dotted fill line) according to an embodiment; and

[0050] Figure 7 is a schematic diagram of a computing apparatus according to an embodiment.

[0051] In the following description, like reference characters designate like or corresponding parts throughout the figures.

DESCRIPTION OF EMBODIMENTS

[0052] Referring now to Figure 1A, there is shown a flowchart of a method 100 for estimating a transformation 120 of a template signal (i.e. a time-series dataset) to a target signal (i.e. a time-series dataset) according to an embodiment. The method 100 begins with normalizing the template dataset and the target dataset to a first (normalised) scale from an original scale 110. We then obtain an estimate of a transformation of a lattice of control points in a local coordinate system upon which the template dataset is embedded by using deterministic annealing to iteratively minimize an energy function. At step 130 we de-normalise the template dataset using the updated transformation to obtain an adapted template dataset in the original scale.

[0053] The template adaptation method 100 described herein is inspired by registration techniques but is specifically adapted to time series deformations. As will be explained below, the method has particular application to target signals (time series) comprising a plurality of quasi-periodic features such as ECG and PPG signals, and embodiments of the method are able to capture subtle variations in such signals. Thus in some embodiments the template signal is a biomedical signal obtained from a biomedical sensor (or sensor apparatus) such as an ECG or PPG.

[0054] Registration is the process of aligning two sets of data in two-dimensional and three-dimensional applications. In point set registration and image registration (where a prior correspondence may exist, but is increasingly distorted) a broad field of research exists. Our work primarily draws inspiration from point set registration techniques due to their general reliance on Cartesian coordinates; image registration methods hold a preference toward image intensities. Specifically, the registration problem seeks to establish a bijective or strictly injective (not bijective) mapping between two sets of data, permitting sample rejection. In time series data, focusing on injective mappings would be a theoretically flawed approach; that is, permitting the rejection of samples would undermine the sampling process. Similarly, focusing on bijective mappings, that is, a one-to-one correspondence between samples, may yield insufficient adaptations due to inhomogeneous temporal variations. Consequently, insufficient samples would exist in local regions to amply match features between the template and target signals. Thus the problem of template adaptation in time series data needs to permit data interpolation (non-binary correspondences). As such, our work seeks to combine assumptions specific to the adaptation of time series data with the foundations of traditional registration techniques. We proceed to describe related registration techniques and their relevant features.

[0055] Based on global deformations, the iterative closest point (ICP) algorithm is amongst the simplest and earliest point set registration methods proposed [7]. In ICP the model dataset is iteratively rotated and translated to match the target dataset; the process involves the evolving estimation of a correspondence variable and universal transformations. ICP is commonly employed in the registration domain due to its ease of implementation and performance on simple problems. Many variants of ICP have been proposed to improve robustness [35]. However, even amongst these variants the subpar performance of ICP in the presence of noise limits its applicability in time series analysis. Although ICP is inadequate for the problem of time series adaptation due to its reliance on global adaptation and sensitivity to noise, our method draws motivation from the iterative nature of the process.

[0056] To address the need for inhomogeneous adaptations we explore a family of algorithms for the local parameterization of data. In point set registration, one such popular parameterization technique is thin-plate splines (TPS) [8]. TPS and related methods require explicitly determining two sets of corresponding points to retrieve the deformations pertaining to the rest of the shape. TPS has an inherent analogy to the physical bending energy of a thin metal plate whilst providing smooth deformations. The most popular TPS-based registration method is that of Chui et al. [11], based on earlier works [38], [17]. The technique jointly estimates the correspondence matrix and thin plate spline parameters by an alternating and iterative process. Optimization is performed by deterministic annealing and demonstrates strong performance in the presence of few outliers (e.g. points in one dataset that have no corresponding matching point in the other dataset). Although TPS registration methods are not appropriate for time series adaptations due to their underlying bijective mapping assumption, we draw inspiration from the deterministic annealing optimization inherent to the process.

[0057] Another family of registrations pertaining to local parameterizations are free form deformations (FFD) [44], [23]. Under FFD, data are embedded in a lattice of control points where consequent shifting of a control point results in locally weighted deformations. Free form deformations similarly provide smoothness guarantees. Traditional FFD offers global support across control points, meaning the entire lattice can be shifted. Contemporary FFD techniques provide local support, which is sufficient for minute variations in shape but cumbersome for a unitary lattice offset. In contrast to TPS, FFD parameterizations provide no capacity guarantee to retrieve the exact target shape by deformation of the source shape under noisy conditions. In signal processing, where signal-to-noise components are often indeterminate, exact shape retrieval would be akin to overfitting. Thus, a deformation model with a strong mapping prior, such as cubic or Bernstein polynomial based methods, is suitable in meeting this balance between obtaining a correspondence matrix (with probabilities which aren't necessarily binary) and shape fitting. As such, our method employs FFD to achieve non-binary probabilities.

[0058] Whilst the method draws upon techniques used in the registration field, we propose a novel methodology with assumptions and adaptations suited to time series deformations. First and foremost, this is motivated by our efforts to propose a solution to the incomplete correspondence assumption observed in the template adaptation literature. Furthermore, we address the problem utilizing a well-defined geometric parameterization to achieve guaranteed mathematical smoothness and a continuous cost function. The intent of these secondary contributions is to address further limitations observed in signal processing literature related to template adaptation. Lastly, and perhaps most importantly, our technique aims to illustrate its potential as a general framework for the adaptation of arbitrarily shaped quasi-periodic data by extending the analysis to the application of PPG. Although our method is not exclusive to the biomedical domain, we focus on physiological applications due to the prevalence of quasi-periodic data in the field.

[0059] As shown in Figure 1A, the method 100 begins with normalizing the template dataset and the target dataset to a first (normalised) scale from an original scale 110. In one embodiment the data is first mapped into a unit box by min-max normalization. This is performed to generalize parameter selection across the model. Consider Y to be the length-N template signal and Q to be the length-N target signal. We denote the normalized template data $\hat{Y} = [\hat{Y}_a]$, where $\hat{Y}_a = (\hat{Y}_a^x, \hat{Y}_a^y)$ denotes the $a$-th sample value across the x (temporal) and y (amplitude) axes, respectively. Similarly, we denote the normalized target data $\hat{Q} = [\hat{Q}_b]$, where $\hat{Q}_b = (\hat{Q}_b^x, \hat{Q}_b^y)$ denotes the $b$-th sample value across the x (temporal) and y (amplitude) axes, respectively.

[0060] For example, each sample $a$ of the N × 2 template data matrix Y can be normalized across its respective x and y dimension in the following manner:

$\hat{Y}_a^x = \frac{Y_a^x - \min_k Y_k^x}{\max_k Y_k^x - \min_k Y_k^x}, \qquad \hat{Y}_a^y = \frac{Y_a^y - \min_k Y_k^y}{\max_k Y_k^y - \min_k Y_k^y}.$

[0061] Similarly, consider $Q_b$ to be the $b$-th sample of the N × 2 target data matrix Q; therefore:

$\hat{Q}_b^x = \frac{Q_b^x - \min_k Q_k^x}{\max_k Q_k^x - \min_k Q_k^x}, \qquad \hat{Q}_b^y = \frac{Q_b^y - \min_k Q_k^y}{\max_k Q_k^y - \min_k Q_k^y}.$

[0062] As illustrated in Figure 1B, the input 112 to the normalisation step 110 is the target signal (i.e. time series dataset) and the template signal (i.e. time series dataset), and the output 114 is the normalised target signal and the normalised template signal 122. It will be understood by the person of ordinary skill in the art that a signal received from a measuring device can be represented as a time-series dataset. Typically the time-series dataset is obtained by digitising a sensor signal, for example by a microprocessor or similar signal processing hardware, or is directly output by a sensor module. Thus both the template signal and target signal can be represented by time-series datasets, and in the following discussion the terms "signal", "time series dataset" and simply "dataset" will be used interchangeably. Note that often the time series dataset will be obtained by sampling/digitising the signal at a constant rate so there is a constant time interval between adjacent points. However, it will be understood that the method is not limited to this case and non-uniform sampling may be used, as well as non-uniform intervals between data points.
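As an illustration of the normalisation step 110 and the inverse de-normalisation step 130, a minimal sketch is given below, assuming a signal is stored as an (N, 2) array of (time, amplitude) samples; the helper names are illustrative only.

```python
import numpy as np

def normalize(signal):
    """Min-max normalize an (N, 2) time series (columns: time, amplitude)
    into the unit box, returning the scale parameters for later inversion."""
    mins, maxs = signal.min(axis=0), signal.max(axis=0)
    return (signal - mins) / (maxs - mins), (mins, maxs)

def denormalize(signal, scale):
    """Invert the min-max mapping to recover the original scale."""
    mins, maxs = scale
    return signal * (maxs - mins) + mins
```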

[0063] With the normalised template and target signals, we then obtain an estimate of a transformation of a lattice of control points in a local coordinate system upon which the template dataset is embedded by using deterministic annealing to iteratively minimize an energy function 120.

[0064] The method employs traditional free form deformations (FFD) [44] to place a prior on the method such that adaptations yield constrained results modelled by a product of Bernstein polynomials, in turn reducing the effects of noise. Under this process the template signal 20 is embedded in a lattice of control points, and subsequent shifting of control points then results in locally weighted deformations of the data.

[0065] In FFD, a local coordinate system is first imposed on the data:

$X = X_0 + s\,S\,e_x + t\,T\,e_y, \qquad (1)$

where $X_0$ denotes the origin of an $l \times m$ lattice of control points $P_{i,j}$, and $\{S, T\}$ the embedding lengths of the lattice along the x-axis and y-axis, respectively. The $(s, t)$ coordinates of a sample point can be obtained in the following manner:

$s_a = \frac{\hat{Y}_a^x - P_{\min}^x}{P_{\max}^x - P_{\min}^x}, \qquad (2)$

$t_a = \frac{\hat{Y}_a^y - P_{\min}^y}{P_{\max}^y - P_{\min}^y}, \qquad (3)$

where $\{P_{\min}, P_{\max}\}$ denotes the set containing the minimum and maximum control point values along the x-axis and y-axis, respectively. In this coordinate system, the resultant deformation of shifting the control points $P^0$ to $P^1$ is defined by the tensor product of Bernstein polynomials:

$X_{ffd} = \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s)\,B_j(t)\,P_{i,j}^1, \qquad (4)$

where $B_i(s) = \binom{l-1}{i}(1-s)^{l-1-i}\,s^i$. Given Eq. 4, FFD can be reformulated as an incremental process, i.e. $P_{i,j}^1 = P_{i,j}^0 + \delta P_{i,j}$, where $P^0$ denotes the initial control points and $\delta P$ their incremental shifts. Thus, the deformation process can be re-written as:

$X_{ffd} = \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s)\,B_j(t)\,P_{i,j}^0 + \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s)\,B_j(t)\,\delta P_{i,j}. \qquad (5)$

[0066] Under this formulation the first term of the deformation process returns the initial parameterized template, based on the linear precision of Bernstein polynomials. The second term, $\delta P_{i,j}$, corresponds to a shift of the control points and the resultant shape deformations they produce. Thus, writing $\Theta = \delta P$, we can reformulate Eq. 5 to:

$f(\hat{Y}_a, \Theta) = \hat{Y}_a + \sum_{i=0}^{l-1} \sum_{j=0}^{m-1} B_i(s_a)\,B_j(t_a)\,\Theta_{i,j}. \qquad (6)$

[0067] This is illustrated in Figure 2, which shows an ECG signal parameterized onto a 6 × 6 FFD lattice. The target beat signal 10 (solid black line) and ECG template signal 20 (dashed fill line) are plotted on an FFD lattice 30 comprising 6 × 6 control points (example point 32 illustrated), with the x-axis representing normalised time and the y-axis representing normalised amplitude.
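The following sketch illustrates the incremental Bernstein-polynomial FFD of Eq. 6, assuming the template has already been normalized so that its coordinates can serve directly as the lattice coordinates (s, t); the function names and the lattice handling are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(u, degree):
    """Bernstein polynomial basis values B_i(u), i = 0..degree, for u in [0, 1]."""
    i = np.arange(degree + 1)
    return comb(degree, i) * (1.0 - u) ** (degree - i) * u ** i

def ffd_deform(template, theta):
    """Apply incremental FFD (Eq. 6): each sample is shifted by the tensor
    product of Bernstein weights over the control-point offsets theta (l x m x 2)."""
    l, m = theta.shape[:2]
    s, t = template[:, 0], template[:, 1]  # normalized (s, t) coordinates
    Bs = np.stack([bernstein_basis(u, l - 1) for u in s])  # (N, l)
    Bt = np.stack([bernstein_basis(u, m - 1) for u in t])  # (N, m)
    # weighted sum of control-point shifts, added to the parameterized template
    shift = np.einsum('ni,nj,ijk->nk', Bs, Bt, theta)
    return template + shift
```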

[0068] We will now explain the formulation of the adaptation energy (cost) function used in deterministic annealing and demonstrate how to estimate the incremental deformation Θ.

[0069] Correspondence Weighting & Deformation

[0070] To account for localized temporal shifts in quasi-periodic data, we introduce a correspondence matrix M. We relax the traditional restrictions placed on the correspondence matrix where the assignment problem seeks to obtain a strictly injective (not bijective) or bijective mapping; instead we permit the solution to interpolate temporal and amplitude values by allowing for a non-binary correspondence between template and target data. Furthermore, we abstain from incorporating outlier slack variables in the correspondence matrix; that is, we omit any outlier slack variables. In one embodiment the correspondence matrix M is defined (or constructed) such that there is no additional row and column used to store outlier slack variables (see Chui et al. [11] for an example of a correspondence matrix incorporating a row and column for outlier slack variables). In point set registration problems, outliers related to injective mappings manifest when inappropriate samples have been drawn in the pre-processing feature extraction step (and thus there is no matching point). In signal processing, each time step corresponds to a deterministic interval, thus proposing an algorithm permitting injective mappings where certain time stamps can be discarded would be a naive approach under which the removal of information would be permitted.

[0071] The correspondence matrix formulation is based on the work of Rangarajan et al. [38] on an optimising network architecture developed for vision, learning, pattern recognition and combinatorial optimisation applications. This approach has been specifically adapted to the signal processing domain by removing the binary constraint between template and target samples, and introducing a band matrix to minimize the permissible search space of the optimization algorithm. The reduced search space aims to prevent the minimization process returning a solution corresponding to an undesirable deformation. The energy function we seek to minimize in this work has the form:

(7)

where $m_{a,b}$ fulfils $\sum_a m_{a,b} = 1$ and $\sum_b m_{a,b} = 1$, and $M = [m_{a,b}]$ holds the correspondence values between each template and target sample (i.e. is the correspondence matrix). The first term corresponds to the least squares error measure between all samples of the template and the target with a band matrix incorporated. In the example of a tridiagonal band matrix, z has the form:

$z_{a,b} = 1$ if $|a - b| \le 1$, and $z_{a,b} = 0$ otherwise,

where the notion of a band matrix similarly extends to a matrix with a bandwidth greater than one (greater than the tridiagonal case). The matrix $z_{a,b}$ serves as a binary matrix where the bandwidth determines the range of template samples evaluated against the target samples. This helps to reduce the solution space. In one embodiment the bandwidth is set as a percentage of the signal length. The second term is the standard L2 ridge regularization term observed in statistical modelling and is incorporated to reduce overfitting. The third term is an entropy barrier function serving to constrain the values of $m_{a,b}$ to the range [0,1]. The multiplier $\tau$ enforced on the barrier function is employed to permit the process of annealing in the optimization step of the algorithm. Annealing, specifically simulated annealing (SA), is a commonly employed optimization technique that treats the objective function as a system energy and is analogous to the annealing process of solids. SA searches for the function minimum by decreasing the system temperature; the search is more stochastic at higher temperatures and gradually becomes more deterministic as the temperature parameter is lowered.
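A sketch of the binary band matrix described above, generalizing the tridiagonal case to an arbitrary bandwidth; the function name is illustrative.

```python
import numpy as np

def band_matrix(n_template, n_target, bandwidth):
    """Binary band matrix z: z[a, b] = 1 only when template sample a lies
    within `bandwidth` samples of target sample b (tridiagonal when
    bandwidth == 1), restricting which pairs the error term evaluates."""
    a = np.arange(n_template)[:, None]
    b = np.arange(n_target)[None, :]
    return (np.abs(a - b) <= bandwidth).astype(float)
```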

[0072] Deterministic annealing (DA) is a closely related derivative of simulated annealing under which the minimization of the objective function is treated as the minimization of the free energy of the system. DA considers the minimum energy at each temperature and in turn deterministically optimizes the objective function (i.e. Eq. 7) [33]. Annealing is incorporated into the network to ensure that through an iterative process an improved local suboptimal solution can be obtained. An in-depth discussion relating to the motivation for annealing-based optimization is provided by Rangarajan et al. [38]. The fourth term is incorporated to prevent zero matches in the correspondence matrix; the value is set close to zero in order to satisfy the Sinkhorn-Knopp algorithm condition: $m_{a,b} > 0$. The Sinkhorn-Knopp method is utilized to meet the doubly stochastic constraints imposed on the cost function in Eq. 7 and is justified in [38]; it operates by alternating between row and column normalizations until a guaranteed convergence transpires [26].

[0073] Similar to the work of Chui et al. [11], the proposed technique operates on the principle of alternating estimations between the correspondence matrix and transformation (control point shifts); this is performed repeatedly and in conjunction with deterministic annealing. We again note that in this embodiment the correspondence matrix allows for a non-binary correspondence between template and target data and the matrix does not contain a row and column for outlier slack variables (unlike Chui et al. [11]). In the first step of the alternating technique we take the derivative of Eq. 7 with respect to $m_{a,b}$ and equate it to zero; from this we can obtain the following analytic result:

$m_{a,b} = \exp\!\left(-\frac{z_{a,b}\,\|\hat{Q}_b - f(\hat{Y}_a, \Theta)\|^2}{\tau} - 1\right) + \xi. \qquad (8)$
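The row/column normalization loop can be sketched as follows, assuming a square matrix of strictly positive entries (as the xi floor guarantees); the iteration count and tolerance are illustrative choices, not values from the disclosure.

```python
import numpy as np

def sinkhorn_knopp(M, n_iters=100, tol=1e-6):
    """Alternate row and column normalizations until M is (approximately)
    doubly stochastic; requires every entry of M to be strictly positive."""
    M = M.copy()
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # row normalization
        M /= M.sum(axis=0, keepdims=True)  # column normalization
        if (np.abs(M.sum(axis=1) - 1.0).max() < tol
                and np.abs(M.sum(axis=0) - 1.0).max() < tol):
            break
    return M
```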

[0074] The value of $\xi$ is set using a heuristic under which we take the minimum value obtained in Eq. 8 with $\xi$ excluded. The intuition behind assigning this specific positive non-zero value is that it represents the minimum observable correspondence between the template and target, thus serving as the lowest data-driven (observable) value which could be employed. We set the value in this fashion to ensure numerical stability and prevent the overestimation of this constant incorrectly influencing the Sinkhorn-Knopp normalization. To obtain the optimal control point shift estimation (second step), we differentiate Eq. 7 with respect to $\Theta$ and set the result to zero:

$\frac{\partial E}{\partial \Theta_{i,j}} = 0. \qquad (9)$

[0075] Eq. 9 is solved utilizing an adaptive gradient descent method due to the substantially varying magnitude of gradient sizes between different control points and directional derivatives. This technique is employed to obtain the transformation estimate in a timely manner and to discourage the parameter solution settling at a saddle point for extended iterations. In one embodiment we utilize the RMSprop adaptive gradient descent method to achieve faster convergence compared to traditional gradient descent [21]. This may be implemented, for example, using the RMSprop implementation provided in the Keras API, a deep learning API written in Python running on top of the machine learning platform TensorFlow (https://keras.io/api/optimizers/rmsprop/). In RMSprop the learning rate for a weight is divided by a running average of the magnitudes of recent gradients for that weight. For parameter $\Theta_{i,j}$, the related partial derivative (i.e. the left-hand side of Eq. 9) at iteration $step$ of the gradient descent is:

$g_{i,j}^{step} = \frac{\partial E}{\partial \Theta_{i,j}}\bigg|_{\Theta = \Theta^{step}}. \qquad (10)$

[0076] Therefore, the gradient update for any given parameter $\Theta_{i,j}$ can be obtained by the following memory-based two-step process:

$v_{i,j}^{step} = \alpha\, v_{i,j}^{step-1} + (1 - \alpha)\left(g_{i,j}^{step}\right)^2, \qquad (11)$

where $\alpha$ denotes the momentum value and $v_{i,j}^{step}$ the exponentially decaying average of the previous squared gradients. Thus the $(i, j)$-th parameter is updated by:

$\Theta_{i,j}^{step+1} = \Theta_{i,j}^{step} - \frac{\eta}{\sqrt{v_{i,j}^{step}} + \varepsilon}\, g_{i,j}^{step}, \qquad (12)$

where $\eta$ denotes the general learning rate and $\varepsilon$ a small constant to prevent division by zero. The update rule accumulates the previous gradient in some proportion, which prevents rapid growth in $v$ and encourages the optimizer to continue converging.

[0077] The derivatives from Eq. 9 with respect to the FFD control point parameters can be easily obtained. For an $l \times m$ grid of control points, each parameter is denoted by $\Theta_{i,j}$, where $i = 1, \ldots, l$ and $j = 1, \ldots, m$. Under this notation, consistent with the methodology section, for the $(i, j)$-th control point the following holds:

$\frac{\partial f(\hat{Y}_a, \Theta)}{\partial \Theta_{i,j}} = B_i(s_a)\,B_j(t_a).$
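A minimal sketch of the RMSprop update of Eqs. 11 and 12, written directly rather than via the Keras optimizer; the default argument values mirror those reported in paragraph [0080], but the function signature itself is an illustrative assumption.

```python
import numpy as np

def rmsprop_step(theta, grad, v, lr=0.001, momentum=0.9, eps=1e-7):
    """One RMSprop update for the control-point offsets: v accumulates an
    exponentially decaying average of squared gradients (Eq. 11), and each
    parameter is scaled by the inverse root of its own average (Eq. 12)."""
    v = momentum * v + (1.0 - momentum) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(v) + eps)
    return theta, v
```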

[0078] De-normalisation

[0079] To obtain the adapted template in the original scale we perform de-normalisation, which is the inverse of the normalization step 110 across the x-axis and y-axis, respectively:

$Y_a^x = \hat{Y}_a^x \left(\max_k Y_k^x - \min_k Y_k^x\right) + \min_k Y_k^x, \qquad Y_a^y = \hat{Y}_a^y \left(\max_k Y_k^y - \min_k Y_k^y\right) + \min_k Y_k^y. \qquad (13)$

[0080] In the current work we determined values to suffice for a coarse optimization utilizing deterministic annealing and RMSprop across an external ECG database. In doing so we set $I_0 = 3$, $I_1 = 5$ and $I_2 = 20$. These values were selected under the condition that no gradient magnitude was larger than 0.001 normalized units across a subset of the external ECG database. M was initialized by solving Eq. 8 under the starting pose of the data being matched and the incremental control point shifts, $\delta$, being initialized to zero. Annealing followed a schedule of an approximately tenfold reduction in the magnitude of $\tau$ at each update. This was determined to be an amply low (yet coarse) reduction rate through trial and error. The band matrix was initialized with a bandwidth of 0.75% of the template length and updated at the same rate upon each iteration of $I_0$, as observed in line 11 of Algorithm 1. The intent of widening the matrix bandwidth in this manner was to permit the incremental adaptation of the template signal. The penalty factor $\lambda$ was set to 0.05. To tune the $\lambda$ and bandwidth hyperparameters, we used a grid-search method to minimize the error of the cost function across a subset of the external database. Following standard optimization values, $\eta$, $\alpha$ and $\varepsilon$ were respectively set to 0.001, 0.9 and $1 \times 10^{-7}$ [47]. Additionally, the template and target signals were normalized to the maximum temporal length and amplitude, respectively, to permit generalized hyper-parameter estimation.

[0081] Figures 3A to 3C are plots of an ECG template 20 (dashed fill line) adapted 40 (dotted fill line) to a noisy beat 10 (solid black line) on the first (a), second (b) and fifth (c) iterations. Figures 3A to 3C also illustrate warping of the grid 30 with each iteration. The first two iterations are illustrated with the data normalized, whilst the final iteration (c) demonstrates the data scaled back to pre-normalization values.

[0082] Using the above, we can thus define a general method for estimating a transformation of a template time-series dataset to a target time-series dataset, for use with a target time-series dataset comprising a plurality of quasi-periodic features. A flowchart of this method is illustrated in Figure 1A and is further illustrated in Figure 1B. The method 100 begins with normalizing the template dataset and the target dataset to a first (normalised) scale from an original scale 110. Following normalisation 110, the method proceeds to obtain an updated estimate of the transformation of a lattice of control points using deterministic annealing to iteratively minimize an energy function 120.

[0083] As outlined above, the energy function comprises at least the following four terms. The first term is an error measure term which estimates a difference between the target dataset and the template dataset and which is weighted by a band matrix. The second term is an overfitting reduction term (i.e. ridge regularization). The third term is an entropy barrier function to constrain the values of the correspondence matrix in a predetermined range, such as the range [0,1]. The fourth term is a convergence term to prevent zero matches in the correspondence matrix so as to satisfy the Sinkhorn-Knopp algorithm requirement. As outlined above, each iteration of the deterministic annealing comprises a free form deformation parameterisation step 122 and an alternating update step 124. In the free form deformation (FFD) parameterisation step 122 an input template dataset is embedded onto a lattice of control points in a local coordinate system. In the first iteration the input template dataset is the normalised template dataset 114 and in subsequent steps the input template dataset is the adapted template 128 output during the previous iteration. In the alternating update step 124 we alternate between estimation of a correspondence matrix 125 and estimation of the FFD transformation of the control points 127. The correspondence matrix is a doubly stochastic matrix which provides correspondence probabilities between the points in the template dataset and target dataset, allows non-binary correspondence between the points, and notably omits any outlier slack variables. Estimation of the correspondence matrix uses the Sinkhorn-Knopp normalisation method 126, which operates by alternating between row and column normalisations until convergence occurs. A sketch of the overall loop is given below.
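Drawing the pieces together, the following sketch shows one plausible arrangement of the overall loop of Figures 1A and 1B, built on the helper sketches given earlier (normalize/denormalize, band_matrix, bernstein_basis, ffd_deform, sinkhorn_knopp, rmsprop_step) and assuming template and target of equal length. The iteration counts, the correspondence formula and the small constant standing in for xi are assumptions for illustration, not the claimed method.

```python
import numpy as np

def adapt_template(template, target, tau=1.0, n_outer=3, n_alt=5, n_grad=20,
                   bandwidth=1, lam=0.05, anneal=0.1, lattice=(6, 6)):
    """High-level sketch of the alternating deterministic-annealing loop:
    normalize, then repeatedly (i) re-embed the current template in the FFD
    lattice, (ii) alternate correspondence and control-point updates, and
    (iii) cool tau; finally map the adapted template back to original scale."""
    Y, y_scale = normalize(template)
    Q, _ = normalize(target)
    z = band_matrix(len(Y), len(Q), bandwidth)
    l, m = lattice
    for _ in range(n_outer):                          # annealing iterations
        Bs = np.stack([bernstein_basis(u, l - 1) for u in Y[:, 0]])
        Bt = np.stack([bernstein_basis(u, m - 1) for u in Y[:, 1]])
        theta = np.zeros((l, m, 2))                   # incremental shifts
        v = np.zeros_like(theta)                      # RMSprop running average
        for _ in range(n_alt):                        # alternating updates
            # (i) correspondence estimate, then Sinkhorn-Knopp normalization;
            # out-of-band pairs receive only a small xi-like floor
            d = ((Q[None, :, :] - ffd_deform(Y, theta)[:, None, :]) ** 2).sum(-1)
            M = sinkhorn_knopp(np.where(z > 0, np.exp(-d / tau - 1.0), 0.0) + 1e-12)
            # (ii) control-point estimate by adaptive gradient descent on the
            # weighted least-squares term plus the L2 ridge term
            for _ in range(n_grad):
                resid = Q[None, :, :] - ffd_deform(Y, theta)[:, None, :]
                g = (-2.0 * np.einsum('ab,abk,ai,aj->ijk', M * z, resid, Bs, Bt)
                     + 2.0 * lam * theta)
                theta, v = rmsprop_step(theta, g, v)
        Y = ffd_deform(Y, theta)                      # adapted template feeds next pass
        tau *= anneal                                 # ~tenfold reduction per update
    return denormalize(Y, y_scale)
```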

[0084] At step 130 we de-normalise the template dataset using the updated transformation to obtain an adapted template dataset in the original scale. The adapted template dataset can then be analysed, for example to identify the existence of a pathophysiological variation of a biomedical signal (or set of signals) such as ECG or PPG signals.

[0085] Table 1 depicts pseudo code describing the method illustrated in Figures 1A and 1B for template adaptation given a template signal and target signal.

TABLE 1

Algorithm 1: Pseudo code for an embodiment of a method for template adaptation.

[0086] EXAMPLES

[0087] A qualitative study of embodiments of the method described above compared to previously published algorithms is presented below, along with a graphic gallery of adaptations. It should be noted that, for each instance in which a feature is evaluated (Q-onset, T-end and dicrotic notch), the template is manually annotated. The subsequent temporal location of the annotated index/indices denotes the adapted template location for the given feature. In addition, template generation is performed via detection of the signal signature peaks and a predefined number of selected samples on either side of these apexes.

[0088] The first example focuses on the analysis of simulated ECG Data.

[0089] To determine the performance of embodiments of the proposed method with respect to common ECG artefacts known to corrupt the QT interval [3], we employed data previously described by Porta et al. [37] and evaluated in several related studies. In summary, a single ECG beat of a healthy 26-year-old subject was extracted from lead II at a sample rate of 1000 Hz and 12-bit amplitude resolution. Subsequently, the T-wave amplitude, $W_T$, of the reference beat was lowered from $W_T$ to $0.1 W_T$ in decrements of $0.1 W_T$. The ten cardiac beats were replicated 500 times, forming ten synthetic recordings consisting of 500 beats each. Thus, each recording was characterized by a varying T-wave amplitude while maintaining a QTV of zero. Additive white Gaussian noise (AWGN), baseline wander (BW) or sinusoidal amplitude modulation (AM) were introduced to model further distortions, in turn producing 30 recordings overall. The synthetic ECG data is characterized by a constant QT interval; therefore an ideal detection system would yield zero QTV.

TABLE 2

Summary of the QTV across an embodiment of the present method and five other prior art methods.

[0090] Table 2 and Figures 4A to 4C illustrate the QTV results across the synthetic ECG dataset for the proposed algorithm and five other methods. Higher accuracy is indicated by a QTV closer to zero. QTV under Gaussian noise (Figure 4A), baseline wander (Figure 4B) and amplitude modulation (Figure 4C) is illustrated. Observing Table 2, it is evident that the two most simplistic algorithms, Conventional and Template Stretch, yield a significantly higher QTV in contrast to their counterparts. In particular, the two algorithms present rather poor results across the baseline wander test, indicated by the high standard deviation in Table 2. Furthermore, in the presence of Gaussian noise (Figure 4A) or baseline wander (Figure 4B) the performance of the two algorithms is dependent on the T-wave amplitude. This is illustrated by the decreasing QTV as the amplitude acquisition range increases in Figures 4A and 4B.

[0091] In the second tier of performance are the 2DSW and Template Shift algorithms, respectively. Observing Table 2, both algorithms obtained a QTV above 1 ms with a comparatively lower standard deviation than the aforementioned algorithms. The results of Figures 4A to 4C suggest a weakly inverse relationship between the T-wave amplitude acquisition range and the QTV of these two algorithms.

[0092] Producing state-of-the-art performance are i2DSW and an embodiment of the proposed method. Observing Table 2, i2DSW yields a QTV of 0.84 ± 0.51 ms across the synthetic data. Notably, the superior performance of i2DSW relative to previous works is itself surpassed by the proposed method, which obtained a QTV of 0.3 ± 0.07 ms across the dataset. Furthermore, Figures 4A to 4C demonstrate that both algorithms are agnostic to the T-wave amplitude, presenting a near constant QTV across various amplitudes.

[0093] The second example examines QT variability tracking using the PTB database.

[0094] Tracking of beat-to-beat variability is essential for the robust study of cardiac control, abnormalities and diseases [36], [2], [5]. Thus it is highly desirable for QT interval measurement techniques to robustly capture pathophysiological variations. As such, we evaluate the sensitivity of the proposed technique against that of several existing methods. We utilize the readily available PTB Diagnostic ECG Database [9] containing 79 patients with acute myocardial infarction (22 female, mean age 63 ± 12 years; 57 male, mean age 57 ± 10 years) and 69 control subjects (17 female, mean age 42 ± 18 years; 52 male, mean age 40 ± 13 years). Approximately two minutes of sampled data were extracted for each subject. Previous efforts have examined the PTB database to determine if post myocardial infarction patients possess statistically different QT variability when compared to healthy controls [20]. Furthermore, Hasan et al. also concluded that ECG Lead II distinguished the two groups most effectively. We utilized Lead II in this work.

[0095] The data were pre-processed to meet the requirements of the proposed algorithm. For each subject we extracted the beats using QRS annotations and the beat rejection criteria from Schmidt et al. [43]. A beat was excluded if the normalized Manhattan distance exceeded 1 μV/sample across the 2DSW algorithm. This criterion was imposed to ensure that identical beats across subjects were evaluated between the proposed method and previous methods. Next, the template length was obtained by taking the mean beat length across the recording, and all beats were linearly interpolated (shrunk or stretched) to this template length. Template QT annotations were manually marked.
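
A minimal sketch of this length-equalisation step, assuming `beats` is a list containing one 1-D array per accepted beat (stand-in data shown); the names are illustrative only.

```python
import numpy as np

def resample_to_length(beat, length):
    """Linearly shrink or stretch one beat to the template length."""
    x_old = np.linspace(0.0, 1.0, beat.size)
    x_new = np.linspace(0.0, 1.0, length)
    return np.interp(x_new, x_old, beat)

# Stand-in beats of unequal length, one array per extracted cardiac beat.
beats = [np.sin(np.linspace(0, 2 * np.pi, n)) for n in (820, 790, 805)]
template_len = int(round(np.mean([b.size for b in beats])))
beats_eq = [resample_to_length(b, template_len) for b in beats]
```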

[0096] Figure 5 illustrates the mean and standard deviation obtained by applying an embodiment of the method described herein and several other popular prior art algorithms. Each algorithm showed a statistically significant difference in QTV (p-value < 0.05, unpaired Student's t-test) between control subjects and MI patients. As shown in Table 3, the proposed method yields the lowest coefficient of variation across healthy subjects in the PTB database. Furthermore, the present method and i2DSW produced a notably lower coefficient of variation across MI patients compared to the template stretch and 2DSW algorithms. This is further evidence supportive of the decreased dispersion obtained across the synthetic dataset and, hence, superior performance achieved by the proposed method.

TABLE 3

Summary of the coefficient of variation for an embodiment of the proposed method and other comparative methods.

[0097] QTDB

[0098] To study the performance of our algorithm in tracking beat-to-beat QTV, compared to manual annotations, we utilized the QT Database (QTDB) [29]. This database contains 105 subject recordings with manual annotations pertaining to clinically important ECG features (onset, peak and end markers for P, QRS, T and, where present, U waves). Each recording contains at least 30 annotated cardiac cycles. The QTDB contains a variety of ECG morphologies and thus provides for a robust analysis of QT detection. The template for each recording was obtained by averaging the entirety of the equal-length cardiac cycles for each subject. Furthermore, beat rejection was employed via the Hausdorff distance, a commonly employed metric measuring how far two subsets of a metric space are from one another. The cut-off criterion for the Hausdorff distance was obtained by selecting the value of the 95th percentile of the supervised two lead Hausdorff distance values (i.e. rejection of the bottom 5%); for comparative purposes, the 5% rejection value was selected to closely match the minimum beat retention rate of 94.7% for 2DSW, as observed in Table 4.
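
The Hausdorff-based rejection step may be sketched as follows using SciPy's directed Hausdorff measure; the 95th-percentile cut-off follows the text, while the stand-in data and the treatment of beats as 2-D point sets of (sample index, amplitude) are assumptions of the sketch.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two beats treated as
    2-D point sets of (sample index, amplitude)."""
    A = np.column_stack([np.arange(a.size), a])
    B = np.column_stack([np.arange(b.size), b])
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# Stand-in data: a template and a list of equal-length beats.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 400))
beats_eq = [template + rng.normal(0, 0.05, 400) for _ in range(100)]

d = np.array([hausdorff(template, b) for b in beats_eq])
cutoff = np.percentile(d, 95)            # keep 95%, reject the worst 5%
kept = [b for b, di in zip(beats_eq, d) if di <= cutoff]
```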

[0099] To account for the two commonly employed strategies for evaluating the QTDB we have analysed the performance of our method based on a single lead and two leads. This is important to note as the associated manual annotations are based on two leads. In single lead analysis, each lead is independently delineated and the resultant features are compared to the reference annotations (single lead evaluation). Alternatively, in two lead analysis, each lead is independently delineated and the reference annotation is utilized to determine the algorithmic annotation that most closely matches the reference value for each beat.
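
A minimal sketch of the supervised two-lead selection rule described above, assuming per-beat annotation arrays for a manual reference and two independently delineated leads; the names are illustrative only.

```python
import numpy as np

def two_lead_errors(ref, lead1, lead2):
    """Per-beat error under supervised two-lead evaluation: for every beat,
    retain whichever lead's annotation lies closest to the manual reference."""
    ref, lead1, lead2 = map(np.asarray, (ref, lead1, lead2))
    picks = np.where(np.abs(lead1 - ref) <= np.abs(lead2 - ref), lead1, lead2)
    return picks - ref   # the standard deviation of this series is of interest
```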

[00100] Table 4 illustrates the QRS-onset and T-end results of the proposed method and several other methods across the QTDB. For each algorithm, we present the mean and standard deviation across the database. Since we are evaluating beat-to-beat variations (QTV), the standard deviation is of primary interest whilst the mean presents little value. Observing the single lead evaluation, we see that our algorithm yields comparable performance to the state-of-the-art algorithms. Furthermore, the supervised two lead evaluation results demonstrate that our algorithm yields the lowest standard deviation across the T-end compared to all algorithms. Similarly, we can observe that the Q-onset standard deviation holds the second lowest value.

TABLE 4

Summary of the QTV across the QTDB of an embodiment of the proposed method and seven other prior art methods.

[00101] Simulated PPG Data

[00102] In order to demonstrate the potential of the proposed method in PPG applications, we evaluated the performance of the algorithm in tracking the dicrotic notch across simulated data containing common factors known to affect PPG feature measurements. Some previous works have utilized this physiological feature to improve systolic blood pressure estimation [18] and to study athletic differences via PPG pulse shape [51]. To the best of our knowledge there are no expertly (manually) annotated databases to evaluate the performance of dicrotic notch detection algorithms. To overcome this limitation in the literature, we have used a synthetic PPG generation tool [10]. Briefly, a set of PPG beats were replicated to generate simulated signals consisting of trains of PPG beats, each of 210 seconds duration sampled at 500 Hz. Subsequently, distortions in the form of amplitude modulation or baseline wander were introduced, leading to two distorted recordings. The distortions were repeated for a range of 29 physiologically plausible RR intervals, resulting in 58 recordings overall. Additionally, for the undistorted 210 second recording, additive white Gaussian noise was introduced and the distortion was repeated 29 times at levels ranging from 20 dB to 10 dB. Thus, the database contained 87 recordings in total. Since the RR interval is constant across the recordings, the dicrotic notch variability is zero. Therefore, algorithm performance is evaluated based on the obtained dicrotic notch variability and its proximity to zero. We have attached the modified simulated database in the supplementary material.
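
Under this evaluation protocol, the figure of merit reduces to the spread of detected notch locations, as in the following sketch; `toy_detect_notch` is a deliberately naive placeholder detector introduced for illustration, not the delineator evaluated here.

```python
import numpy as np

def toy_detect_notch(beat):
    """Naive placeholder detector: deepest sample after the systolic peak."""
    peak = int(np.argmax(beat))
    return peak + int(np.argmin(beat[peak:]))

def notch_variability(beats, detect_notch=toy_detect_notch):
    """Standard deviation of detected dicrotic notch positions; an ideal
    detector on constant-RR data yields zero variability."""
    idx = np.array([detect_notch(b) for b in beats])
    return float(idx.std())
```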

[00103] Furthermore, we evaluated the performance against the only open-source dicrotic notch detection algorithm, proposed by Li et al. [30]. The results are summarised in Table 5. Similar to the QT interval, we are interested in the beat-to-beat tracking ability of the algorithm and therefore focus on the standard deviation. In the instance of AWGN and BW our method outperforms the other evaluated method. Regarding AM distortion, our method produced competitive results.

TABLE 5

Summary of the dicrotic notch variability across the proposed algorithm versus Li et al.

[00104] BIDMC PPG Adaptations

[00105] To further illustrate potential applications of the proposed algorithm, we have included a visual gallery of adaptations on PPG data. The publicly available BIDMC PPG and Respiration Dataset [34], [16], [15] contains 53 eight-minute recordings of PPG sampled at 125 Hz; the data were previously utilized to benchmark algorithms for estimating the respiratory rate from the PPG. For a particular subject, a template PPG was obtained by averaging the data across a thirty second interval, where the signal apex (systolic peak) was taken as the reference point to delineate each beat. In this work, we extracted template PPGs for seven subjects across the BIDMC database and subsequently plotted five adaptations for each subject. Figures 6A to 6G plot five beats for each of the seven subjects in the BIDMC database, showing how the template (dashed line) is adapted to the beat (solid black line), producing the resultant deformation (dotted line). For each subject, five beats were considered from left to right, and Figures 6A to 6G illustrate the effectiveness of the template adaptation achieved by an embodiment of the proposed method.

[00106] DISCUSSION

[00107] The proposed method provides a general framework for 2D template adaptations. Embodiments of the method extend to quasi-periodic signals of various morphologies and noise levels. In comparison to previously proposed template adaptation algorithms in the signal processing literature, our method is based on a robust mathematical foundation. By using free form deformations and a quadratic cost function we obtain mathematically smooth and continuously differentiable adaptations.
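
As background, a one-dimensional free form deformation driven by uniform cubic B-splines can be sketched as below; this illustrates the general FFD idea of Sederberg and Parry [44] rather than the specific parameterisation claimed herein, and the names are illustrative only.

```python
import numpy as np

def bspline_basis(t):
    """Uniform cubic B-spline blending functions for local coordinate t in [0, 1]."""
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def ffd_1d(x, control):
    """Displace positions x in [0, 1] using a 1-D lattice of control offsets;
    the result is a C2-continuous (smooth) deformation of the input."""
    n = control.size - 3                   # number of lattice cells
    out = np.empty_like(x, dtype=float)
    for k, xi in enumerate(x):
        cell = min(int(xi * n), n - 1)     # lattice cell containing xi
        t = xi * n - cell                  # local coordinate within the cell
        out[k] = xi + bspline_basis(t) @ control[cell:cell + 4]
    return out

# Zero control offsets leave the positions unchanged (identity deformation).
x = np.linspace(0.0, 1.0, 11)
assert np.allclose(ffd_1d(x, np.zeros(8)), x)
```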

Embodiments of the method use a doubly stochastic correspondence matrix, which provides correspondence probabilities between two sets of signals. Under the proposed model, binary correspondences were persistent for pronounced features such as the R-peak of the ECG. For less evident features, the algorithm obtained non-binary probabilities which acted as temporal interpolation points. The advantage of permitting non-binary correspondences in the doubly stochastic matrix stems from the fact that contraction and expansion of localized regions in quasi-periodic data may not match up with the number of relevant samples in the given region. Non-binary correspondences are permissible because a spline interpolation method is utilized, under which there is no guarantee that the deformation model can retrieve an exact adaptation to the target data; that is, the model provides no guarantee that template data can perfectly match noisy samples. This is a desirable property as it restricts the proposed algorithm from overfitting. Whilst the method draws intuition from the registration field, it was specifically designed for time series deformation. The important modifications include the use of a band matrix applied to the error measure term in the optimization energy (cost) function to limit the search space, and a correspondence matrix whose form allows non-binary correspondence between the points and omits an outlier slack variable.
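
The band-matrix weighting of the error term may be illustrated as follows: only correspondences within a given temporal bandwidth of the (rescaled) diagonal contribute to the cost, limiting the search space. The binary band and the names used are assumptions of the sketch.

```python
import numpy as np

def band_matrix(n, m, bandwidth):
    """1/0 weights keeping only correspondences within `bandwidth`
    samples of the (rescaled) diagonal."""
    i = np.arange(n)[:, None]
    j = np.arange(m)[None, :]
    return (np.abs(i * m / n - j) <= bandwidth).astype(float)

def banded_error(template, target, C, bandwidth):
    """Band-weighted, correspondence-weighted squared error between signals."""
    D = (template[:, None] - target[None, :]) ** 2   # pairwise squared errors
    W = band_matrix(template.size, target.size, bandwidth)
    return float(np.sum(W * C * D))
```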

[00108] Additionally, embodiments of the method employ an iterative process to obtain the optimal transformation. This is achieved through a coarse deterministic annealing process, where the temperature parameter in Eq. 7 is reduced toward zero. As the temperature parameter approaches zero, the correspondences approach their binary limiting values. Furthermore, the use of deterministic annealing reduces the need to estimate an appropriate Lagrangian multiplier for the entropy term in Eq. 7, as multiple solutions are iterated over to yield a final adaptation. Similarly, the band matrix is gradually widened to reduce the need for the approximation of a valid domain and to permit gradual deformations. In turn, this manifests as the algorithm capturing minimal variations in early iterations and expanding the search space in later stages to capture large scale variations. In other embodiments, variations of deterministic annealing, including variants of simulated annealing, could be used with equivalent constraints to those used for the deterministic annealing case. That is, the optimisation problem could be reformulated as a simulated annealing optimisation method in which the energy (cost) function is subject to similar constraints on the correspondence matrix and uses band weighting to limit the search space.
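
Schematically, this coarse-to-fine schedule can be expressed as below; the cooling rate and band increment are assumptions for illustration only.

```python
# Assumed schedule constants for illustration only.
T, bandwidth = 1.0, 5
while T > 1e-3:
    # ... alternate correspondence and transformation updates here ...
    T *= 0.9          # cooling: correspondences harden toward binary values
    bandwidth += 2    # widening: later iterations admit larger deformations
```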

[00109] We have demonstrated the ability of the algorithm to perform well on several databases. Compared to previously proposed QTV algorithms, embodiments of the method described herein deliver superior results across the baseline wander and amplitude modulation synthetic ECG tests. Furthermore, the method achieved comparable results to the best performing algorithm across the white Gaussian noise test. The high level of performance on QTV analysis is further supported by the results obtained on the PTB database. An embodiment of the method detected statistically significant differences in QTV between myocardial infarction patients and normal subjects. Furthermore, an embodiment of the method yielded a similarly low coefficient of variation to i2DSW for MI patients across the PTB database, thus further illustrating its ability to robustly estimate QTV. Regarding the QTDB, our method produced competitive single lead results compared to the state-of-the-art and yielded superior results under supervised two-lead analysis. This suggests that the proposed method may produce state-of-the-art QTV tracking under appropriate channel selection. By applying our proposed method to the BIDMC database we provided a visually intuitive representation of the algorithm's prowess and applicability to PPG data. Importantly, we performed the adaptations utilizing the same hyperparameters used across the two ECG datasets, in turn demonstrating the generalisation ability of our framework. The results were further backed by a beat-to-beat analysis of the dicrotic notch of the PPG. Our algorithm demonstrated superior results to a state-of-the-art PPG delineator.

[00110] Embodiments of the methods described herein provide a 2D template adaptation framework with a robust theoretical foundation. Embodiments of the method are able to detect subtle features in noisy quasi-periodic time series and, as noted above, provide superior results on synthetic and real ECG data, as well as PPG data. It is of interest to note that the hyperparameters were kept constant across experiments. This suggests that embodiments of the method may be an important tool in various applications where 2D quasi-periodic time series are of interest.

[00111] Embodiments of the method may be integrated into a range of medical equipment or medical sensor apparatus used to measure ECG, PPG and other quasi-periodic signals. This includes both clinical devices for measuring ECG, including in health care and aged care settings, as well as consumer and sporting devices such as smartwatches and exercise equipment that integrate sensors for measurement of ECG and similar quasi-periodic signals. Integration in a wide range of devices would thus provide increased monitoring and potential detection of adverse health outcomes including heart disease. Such devices could store a set of rules or thresholds indicative of a pathophysiological variation or adverse condition and be configured to generate an alarm (audio, visual, electronic message, etc.) or report if analysis of measured signals adapted/matched using embodiments of the method described herein triggers the rules or thresholds. Use of embodiments of the method described herein can improve the robustness of such analysis to enable more sensitive detection of pathophysiological variations of the measured signals.

[00112] An embodiment of a computer system 700 for implementing the methods described herein is illustrated in Figure 7. Sensors 702 (or sensor apparatus), such as ECG or PPG sensors, collect target signals which are sent to a computing apparatus 710 for processing according to embodiments of the method described herein. Alternatively the sensor signals may be collected and stored in an external storage device 704, including cloud storage, for later analysis by the computing apparatus 710. The computing apparatus 710 comprises a central processing unit (CPU) 720, a memory 730, and an Input/Output (or Communications) interface 740, and may include a graphical processing unit (GPU) 750, and input and output devices 760. The CPU 720 may comprise an Arithmetic and Logic Unit (ALU) 722 and a Control Unit and Program Counter element 724. The memory 730 may comprise solid state memory 732 and secondary storage 734 such as a hard disk. The Input/Output Interface 740 may comprise a network interface and/or communications module for communicating with an equivalent communications module in another apparatus using a predefined communications protocol (e.g. Bluetooth, Zigbee, IEEE 802.15, IEEE 802.11, TCP/IP, UDP, etc.). Input and output devices may be connected via wired or wireless connections. The Input/Output interface 740 may be in communication with the storage device 704 and save results from the analysis to the external storage device. Input and output devices 760 may comprise a keyboard, a mouse, and a display apparatus such as a flat screen display (e.g. LCD, LED, plasma, touch screen, etc.), a projector, CRT, etc.

[00113] The computing apparatus 710 may comprise a single CPU (core) or multiple CPUs (multiple cores), or multiple processors. The computing apparatus may be a server, desktop, portable computer (including tablet), a smart phone, a smart watch, or a medical device incorporating a processor. The processor may be a parallel processor or a vector processor, or may be part of a distributed (cloud) computing apparatus. The memory 730 is operatively coupled to the processor(s) 720 and may comprise RAM and ROM components 732, and secondary storage components such as solid state disks and hard disks 734, which may be provided within or external to the device. The external storage device 704 may be a directly connected external device, or a network storage device (i.e. connected over a network interface) external to the computing apparatus 710, including cloud based storage devices. The memory may comprise instructions to cause the processor to execute a method described herein. The memory 730 may be used to store the operating system and additional software modules or instructions. The processor(s) 720 may be configured to load and execute the software modules or instructions stored in the memory 730. The computing apparatus may be configured to continuously analyse input sensor signals, and may be configured to generate reports or alarms if pathophysiological variations are detected. For example, the computing device may store a set of rules or thresholds indicative of pathophysiological variations against which the results of the analysis can be compared. The computing apparatus may further comprise a sensor for measuring one or more biomedical signals which are analysed using the stored method; the memory may also store a set of rules or thresholds indicative of pathophysiological variations of the biomedical signal, and the apparatus may be further configured to generate an alarm or report if analysis of a received biomedical signal or signals using the stored method triggers the set of rules or exceeds a threshold.

[00114] Those of skill in the art would understand that information and signals may be represented using any of a variety of technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[00115] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software or instructions, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[00116] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For a hardware implementation, processing may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof. Software modules, also known as computer programs, computer codes, or instructions, may contain a number of source code or object code segments or instructions, and may reside in any computer readable medium such as a RAM memory, flash memory, ROM memory, EPROM memory, registers, hard disk, a removable disk, a CD-ROM, a DVD-ROM, a Blu-ray disc, or any other form of computer readable medium. In some aspects the computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. In another aspect, the computer readable medium may be integral to the processor. The processor and the computer readable medium may reside in an ASIC or related device. The software codes may be stored in a memory unit and the processor may be configured to execute them. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.

[00117] Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a computing device. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a computing device can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

[00118] In one form the invention may comprise a computer program product for performing the method or operations presented herein. For example, such a computer program product may comprise a computer (or processor) readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.

[00119] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

[00120] As used herein, the terms “determining”, “obtaining” and “estimating” encompass a wide variety of actions. For example, “determining”, “obtaining” and “estimating” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” and “obtaining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining”, “obtaining” and “estimating” may include resolving, selecting, choosing, establishing and the like.

[00121] It will be understood that the terms “comprise” and “include” and any of their derivatives (e.g. comprises, comprising, includes, including) as used in this specification are to be taken to be inclusive of the features to which the term refers, and are not meant to exclude the presence of any additional features unless otherwise stated or implied.

[00122] In some cases, a single embodiment may, for succinctness and/or to assist in understanding the scope of the disclosure, combine multiple features. It is to be understood that in such a case, these multiple features may be provided separately (in separate embodiments), or in any other suitable combination. Alternatively, where separate features are described in separate embodiments, these separate features may be combined into a single embodiment unless otherwise stated or implied. This also applies to the claims, which can be recombined in any combination; that is, a claim may be amended to include a feature defined in any other claim. Further, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

[00123] The following references are referred to in the above description. However the reference to any prior art in this specification is not, and should not be taken as, an acknowledgement of any form of suggestion that such prior art forms part of the common general knowledge of the person skilled in the art or is otherwise well known:

[1] M Åström, Elena Carro Santos, L Sörnmo, Pablo Laguna, and Björn Wohlfart. Vectorcardiographic loop alignment and the measurement of morphologic beat-to-beat variability in noisy signals. IEEE Transactions on Biomedical Engineering, 47(4):497-506, 2000.

[2] Mathias Baumert, Elisabeth Lambert, Gautam Vaddadi, Carolina Ika Sari, Murray Esler, Gavin Lambert, Prashanthan Sanders, and Eugene Nalivaiko. Cardiac repolarization variability in patients with postural tachycardia syndrome during graded head-up tilt. Clinical Neurophysiology, 122(2):405-409, 2011.

[3] Mathias Baumert, Alberto Porta, Marc A. Vos, Marek Malik, Jean Philippe Couderc, Pablo Laguna, Gianfranco Piccirillo, Godfrey L. Smith, Larisa G. Tereshchenko, and Paul G.A. Volders. QT interval variability in body surface ECG: Measurement, physiological basis, and clinical value: Position statement and consensus guidance endorsed by the European Heart Rhythm Association jointly with the ESC Working Group on Cardiac Cellular Electrophysiology. Europace, 2016.

[4] Mathias Baumert, Janet Smith, Peter Catcheside, R Douglas McEvoy, Derek Abbott, Prashanthan Sanders, and Eugene Nalivaiko. Variability of QT interval duration in obstructive sleep apnea: an indicator of disease severity. Sleep, 2008.

[5] Mathias Baumert, Janet Smith, Peter Catcheside, R Douglas McEvoy, Derek Abbott, Prashanthan Sanders, and Eugene Nalivaiko. Variability of QT interval duration in obstructive sleep apnea: an indicator of disease severity. Sleep, 31(7):959-956, 2008.

[6] Ronald D. Berger, Edward K. Kasper, Kenneth L. Baughman, Eduardo Marban, Hugh Calkins, and Gordon F. Tomaselli. Beat-to-beat QT interval variability: Novel evidence for repolarization lability in ischemic and nonischemic dilated cardiomyopathy. Circulation, 1997.

[7] Paul J. Besl and Neil D. McKay. A Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.

[8] Fred L. Bookstein. Principal Warps: Thin-Plate Splines and the Decomposition of Deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567-585, 1989.

[9] R. Bousseljot, D. Kreiseler, and A. Schnabel. Nutzung der EKG-Signaldatenbank CARDIODAT der PTB über das Internet. Biomedizinische Technik, 1995.

[10] Peter H Charlton, Jorge Mariscal Harana, Samuel Vennin, Ye Li, Phil Chowienczyk, and Jordi Alastruey. Modeling arterial pulse waves in healthy aging: a database for in silico evaluation of hemodynamics and pulse wave indexes. American journal of physiology. Heart and circulatory physiology, 317(5):H1062-H1085, nov 2019.

[11] Haili Chui and Anand Rangarajan. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding, 89(2):114-141, 2003.

[12] Remi Dubois, Pierre Maison-Blanche, Brigitte Quenet, and Gerard Dreyfus. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators. Computer Methods and Programs in Biomedicine, 88(3):217-233, 2007.

[13] Jose Garcia, Galen Wagner, Leif Sörnmo, Salvador Olmos, Paul Lander, and Pablo Laguna. Temporal evolution of traditional versus transformed ECG-Based indexes in patients with induced myocardial ischemia. Journal of Electrocardiology, 33(1):37-47, 2000.

[14] E Gil, R Bailon, J M Vergara, and P Laguna. PTT Variability for Discrimination of Sleep Apnea Related Decreases in the Amplitude Fluctuations of PPG Signal in Children. IEEE Transactions on Biomedical Engineering, 57(5): 1079-1088, 2010.

[15] Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch. Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet. Circulation, 2012.

[16] A L Goldberger, L A Amaral, L Glass, J M Hausdorff, P C Ivanov, R G Mark, J E Mietus, G B Moody, C K Peng, and H E Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 2000.

[17] Steven Gold, Anand Rangarajan, Chien-Ping Lu, Suguna Pappu, and Eric Mjolsness. New algorithms for 2D and 3D point matching: pose estimation and correspondence. Pattern Recognition, 31(8):1019-1031, 1998.

[18] W B Gu, C C Y Poon, and Y T Zhang. A novel parameter from PPG dicrotic notch for estimation of systolic blood pressure using pulse transit time. In 2008 5th International Summer School and Symposium on Medical Devices and Biosensors, pages 86-88, 2008.

[19] Muhammad A. Hasan, Derek Abbott, and Mathias Baumert. Beat-to-Beat Vectorcardiographic Analysis of Ventricular Depolarization and Repolarization in Myocardial Infarction. PLoS ONE, 7(11): 1-10, 2012.

[20] M. A. Hasan, D. Abbott, and M. Baumert. Beat-to-beat QT interval variability and T-wave amplitude in patients with myocardial infarction. Physiological Measurement, 34(9): 1075-1083, 2013.

[21] Geoffrey E. Hinton, Nitish Srivastava, and Kevin Swersky. Neural Networks for Machine Learning Lecture 6a Overview of mini-batch gradient descent. COURSERA: Neural Networks for Machine Learning, 2012.

[22] B Huang and W Kinsner. ECG frame classification using dynamic time warping. In IEEE CCECE2002. Canadian Conference on Electrical and Computer Engineering. Conference Proceedings (Cat. No.02CH37373), volume 2, pages 1105-1110 vol.2, 2002.

[23] Xiaolei Huang, Nikos Paragios, and Dimitris N. Metaxas. Shape registration in implicit spaces using information theory and free form deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8):1303-1318, 2006.

[24] F Karisik and M Baumert. Inhomogeneous Template Adaptation of Temporal Quasi-Periodic Three-Dimensional Signals. IEEE Transactions on Signal Processing, 67(23):6067-6077, 2019.

[25] Eamonn J. Keogh and Michael J. Pazzani. Derivative Dynamic Time Warping. 2001.

[26] Philip A. Knight. The Sinkhorn-Knopp algorithm: Convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261-275, 2008.

[27] P Laguna, J P Martinez Cortes, and E Pueyo. Techniques for Ventricular Repolarization Instability Assessment From the ECG. Proceedings of the IEEE, 104(2): 392-415, 2016.

[28] P Laguna, R Jane, and P Caminal. Automatic detection of wave boundaries in multilead ECG signals: validation with the CSE database. Computers and biomedical research, an international journal, 27(1):45-60, feb 1994.

[29] P Laguna, R G Mark, A Goldberger, and G B Moody. A database for evaluation of algorithms for measurement of QT and other waveform intervals in the ECG. In Computers in Cardiology 1997, pages 673-676, 1997.

[30] Bing Nan Li, Ming Chui Dong, and Mang I Vai. On an automatic delineator for arterial blood pressure waveforms. Biomedical Signal Processing and Control, 5(1):76-81, 2010.

[31] J P Martinez, R Almeida, S Olmos, A P Rocha, and P Laguna. A wavelet-based ECG delineator: evaluation on standard databases. IEEE Transactions on Biomedical Engineering, 51(4):570-581, 2004.

[32] Paul M Middleton, Collin H H Tang, Gregory S H Chan, Sarah Bishop, Andrey V Savkin, and Nigel H Lovell. Peripheral photoplethysmography variability analysis of sepsis patients. Medical & biological engineering & computing, 49(3):337-347, mar 2011.

[33] Makoto Yasuda. Deterministic Annealing: A Variant of Simulated Annealing and its Application to Fuzzy Clustering, Ch. 1. In Hossein Peyvandi (Ed.). IntechOpen, Rijeka, 2017.

[34] M A F Pimentel, A E W Johnson, P H Charlton, D Birrenkott, P J Watkinson, L Tarassenko, and D A Clifton. Toward a Robust Estimation of Respiratory Rate From Pulse Oximeters. IEEE Transactions on Biomedical Engineering, 64(8): 1914-1923, 2017.

[35] Francois Pomerleau, Francis Colas, and Roland Siegwart. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Foundations and Trends in Robotics, 4(1): 1-104, 2015.

[36] Alberto Porta, Giulia Girardengo, Vlasta Bari, Alfred L George Jr, Paul A Brink, Althea Goosen, Lia Crotti, and Peter J Schwartz. Autonomic control of heart rate and QT interval variability influences arrhythmic risk in long QT syndrome type 1. Journal of the American College of Cardiology, 65(4):367-374, feb 2015.

[37] A. Porta, G. Baselli, F. Lombardi, S. Cerutti, R. Antolini, M. Del Greco, F. Ravelli, and G. Nollo. Performance assessment of standard algorithms for dynamic R-T interval measurement: Comparison between R-T(apex) and R-T(end) approach. Medical and Biological Engineering and Computing, 1998.

[38] Anand Rangarajan, Steven Gold, and Eric Mjolsness. A Novel Optimizing Network Architecture with Applications. Neural Computation, 8(5): 1041-1060, 1996.

[39] F Rincon, J Recas, N Khaled, and D Atienza. Development and Evaluation of Multilead Wavelet-Based ECG Delineation Algorithms for Embedded Wireless Sensor Nodes. IEEE Transactions on Information Technology in Biomedicine, 15(6):854-863, 2011.

[40] Stan Salvador and Philip Chan. Toward accurate dynamic time warping in linear time and space. Intelligent Data Analysis, 2007.

[41] Martin Schmidt, Mathias Baumert, Hagen Malberg, and Sebastian Zaunseder. Iterative two-dimensional signal warping - towards a generalized approach for adaption of one-dimensional signals. Biomedical Signal Processing and Control, 43:311-319, 2018.

[42] Martin Schmidt, Mathias Baumert, Thomas Penzel, Hagen Malberg, and Sebastian Zaunseder. Nocturnal ventricular repolarization lability predicts cardiovascular mortality in the Sleep Heart Health Study. American Journal of Physiology-Heart and Circulatory Physiology, 316(3):H495-H505, dec 2018.

[43] M Schmidt, M Baumert, A Porta, H Malberg, and S Zaunseder. Two-dimensional warping for one-dimensional signals - conceptual framework and application to ECG processing. IEEE Transactions on Signal Processing, 62(21):5577-5588, 2014.

[44] Thomas W. Sederberg and Scott R. Parry. Free-form deformation of solid geometric models. Proceedings of the 13th annual conference on Computer graphics and interactive techniques - SIGGRAPH '86, 20(4):151-160, 1986.

[45] Leif Sörnmo and Pablo Laguna. Electrocardiogram (ECG) Signal Processing, apr 2006.

[46] Vito Starc and Todd T. Schlegel. Real-time multichannel system for beat-to-beat QT interval variability. Journal of Electrocardiology, 39(4):358-367, oct 2006.

[47] Keras Team. Keras documentation: RMSprop, jul 2020.

[48] Giorgio Tomasi, Frans Van Den Berg, and Claus Andersson. Correlation optimized warping and dynamic time warping as preprocessing methods for chromatographic data. Journal of Chemometrics, 2004.

[00124] It will be appreciated by those skilled in the art that the disclosure is not restricted in its use to the particular application or applications described. Neither is the present disclosure restricted in its preferred embodiment with regard to the particular elements and/or features described or depicted herein. It will be appreciated that the disclosure is not limited to the embodiment or embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the scope as set forth and defined by the following claims.




 