


Title:
METHODS AND DEVICES USING META-FEATURES EXTRACTED FROM ACCELEROMETRY SIGNALS FOR SWALLOWING IMPAIRMENT DETECTION
Document Type and Number:
WIPO Patent Application WO/2018/158219
Kind Code:
A1
Abstract:
A method can classify cervical accelerometry data acquired for a swallowing event to identify a possible swallowing impairment in a candidate. The method can include receiving axis-specific vibrational data for an anterior-posterior (A-P) axis and a superior-inferior (S-I) axis and representative of the swallowing event, for example from an accelerometer operatively coupled to a processing module that is a local or remote computing device. The method can include extracting one or more specific meta-features from the data and then outputting from the processing module a classification of the swallowing event based on the extracted meta-features, for example a first classification indicative of normal swallowing or a second classification indicative of possibly impaired swallowing.

Inventors:
PAJULA JUHA (FI)
PÖLÖNEN HARRI (FI)
Application Number:
PCT/EP2018/054749
Publication Date:
September 07, 2018
Filing Date:
February 27, 2018
Assignee:
NESTEC SA (CH)
International Classes:
A61B5/00
Foreign References:
US 7749177 B2 (2010-07-06)
US 8267875 B2 (2012-09-18)
US 9138171 B2 (2015-09-22)
US 2014/0228714 A1 (2014-08-14)
Other References:
MAMUN KHONDAKER A ET AL: "Swallowing accelerometry signal feature variations with sensor displacement", MEDICAL ENGINEERING & PHYSICS, vol. 37, no. 7, 2015, pages 665 - 673, XP029211923, ISSN: 1350-4533, DOI: 10.1016/J.MEDENGPHY.2015.04.007
MOHAMMAD S NIKJOO ET AL: "Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier", BIOMEDICAL ENGINEERING ONLINE, BIOMED CENTRAL LTD, LONDON, GB, vol. 10, no. 1, 15 November 2011 (2011-11-15), pages 100, XP021093703, ISSN: 1475-925X, DOI: 10.1186/1475-925X-10-100
MEREY CELESTE ET AL: "Quantitative classification of pediatric swallowing through accelerometry", JOURNAL OF NEUROENGINEERING AND REHABILITATION, BIOMED CENTRAL, LONDON, GB, vol. 9, no. 1, 9 June 2012 (2012-06-09), pages 1 - 8, XP021116692, ISSN: 1743-0003, DOI: 10.1186/1743-0003-9-34
ERVIN SEJDIC ET AL: "Classification of Penetration--Aspiration Versus Healthy Swallows Using Dual-Axis Swallowing Accelerometry Signals in Dysphagic Subjects", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE SERVICE CENTER, PISCATAWAY, NJ, USA, vol. 60, no. 7, 1 July 2013 (2013-07-01), pages 1859 - 1866, XP011515753, ISSN: 0018-9294, DOI: 10.1109/TBME.2013.2243730
ZORATTO D C B ET AL: "Hyolaryngeal excursion as the physiological source of swallowing accelerometry signals; The physiological source of swallowing accelerometry signals", PHYSIOLOGICAL MEASUREMENT, INSTITUTE OF PHYSICS PUBLISHING, BRISTOL, GB, vol. 31, no. 6, 1 June 2010 (2010-06-01), pages 843 - 855, XP020175861, ISSN: 0967-3334
OLUBANJO TEMILOLUWA ET AL: "Real-time swallowing detection based on tracheal acoustics", 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 4 May 2014 (2014-05-04), pages 4384 - 4388, XP032617197, DOI: 10.1109/ICASSP.2014.6854430
LAGARDE, MARLOES LJ; KAMALSKI, DIGNA MA; VAN DEN ENGEL-HOEK, LENIE: "The reliability and validity of cervical auscultation in the diagnosis of dysphagia: A systematic review", CLINICAL REHABILITATION, vol. 30, no. 2, March 2015 (2015-03-01), pages 1 - 9
Attorney, Agent or Firm:
CHAUTARD, Cécile (CH)
Claims:
CLAIMS

The invention is claimed as follows:

1. A device for identifying a possible swallowing impairment in a candidate during execution of a swallowing event, the device comprising:

an accelerometer configured to acquire axis-specific vibrational data along an anterior-posterior (A-P) axis and a superior-inferior (S-I) axis of the candidate's throat, the axis-specific vibrational data is representative of the swallowing event; and

a processing module that is a local or remote computing device operatively coupled to the accelerometer, the processing module configured for processing the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals,

the processing module configured to classify the swallowing event as one of a plurality of classifications based on the meta-features extracted from the vibrational data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing.

2. The device of Claim 1, wherein the processing module is configured to automatically extract the meta-features from the data.

3. The device of Claim 1, wherein the processing module is configured to automatically use the meta-features extracted from the data to classify the swallowing event.

4. The device of Claim 1, wherein the processing module is configured to compare the meta-features extracted from the data to preset classification criteria to classify the swallowing event.

5. The device of Claim 4, wherein the preset classification criteria are defined for each of swallowing safety and swallowing efficiency.

6. The device of claim 5, wherein the preset classification criteria are defined by features previously extracted and classified from a known training data set.

7. The device of claim 1, wherein the second classification is indicative of at least one of a swallowing safety impairment or a swallowing efficiency impairment.

8. The device of claim 1, wherein the second classification is indicative of at least one of penetration or aspiration, and the processing module is configured to further classify the swallowing event as a first event indicative of a safe event or a second event indicative of an unsafe event.

9. The device of claim 1, wherein the processing module is configured to classify multiple successive swallowing events by classifying the data for each of the successive swallowing events as indicative of one of the first classification or the second classification.

10. The device of claim 1, wherein the processing module displays the classification.

11. A method for classifying cervical accelerometry data acquired for a swallowing event to identify a possible swallowing impairment in a candidate, the method comprising:

receiving axis-specific vibrational data for an anterior-posterior (A-P) axis and a superior-inferior (S-I) axis and representative of the swallowing event, a processing module that is a local or remote computing device operatively coupled to an accelerometer receives the axis-specific vibrational data from the accelerometer;

processing the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals; and

outputting a classification of the swallowing event as one of a plurality of classifications based on the meta-features extracted from the data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing, and the processing module outputs the classification.

12. The method of Claim 11, wherein the processing module automatically extracts the meta-features from the data.

13. The method of Claim 11, wherein the processing module automatically uses the meta-features extracted from the data to classify the swallowing event.

14. The method of Claim 11, comprising comparing the meta-features extracted from the data to preset classification criteria to classify the swallowing event on the processing module.

15. The method of Claim 14, wherein the preset classification criteria are defined for each of swallowing safety and swallowing efficiency.

16. The method of claim 15, wherein the preset classification criteria are defined by features previously extracted and classified from a known training data set.

17. The method of claim 11, wherein the second classification is indicative of at least one of a swallowing safety impairment or a swallowing efficiency impairment.

18. The method of claim 11, wherein the second classification is indicative of at least one of penetration or aspiration, and the method comprises further classifying the swallowing event as a first event indicative of a safe event or a second event indicative of an unsafe event.

19. The method of claim 11, comprising classifying successive swallowing events by classifying the data for each of the successive swallowing events as indicative of one of the first classification or the second classification.

20. The method of claim 11, comprising displaying the classification on the processing device.

21. A method of treating dysphagia in a patient, the method comprising:

positioning a sensor externally on the throat of the patient, the sensor acquiring vibrational data representing swallowing activity and associated with an anterior-posterior axis and a superior-inferior axis of the throat, the sensor operatively connected to a processing module configured to process the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals, the processing module configured to classify the swallowing event as one of a plurality of classifications based on the meta-features extracted from the vibrational data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing; and

adjusting a feeding administered to the patient based on the classification.

22. The method of Claim 21, wherein the adjusting of the feeding is selected from the group consisting of: changing a consistency of the feeding, changing a type of food in the feeding, changing a size of a portion of the feeding administered to the patient, changing a frequency at which portions of the feeding are administered to the patient, and combinations thereof.

Description:
TITLE

METHODS AND DEVICES USING META-FEATURES EXTRACTED FROM ACCELEROMETRY SIGNALS FOR SWALLOWING IMPAIRMENT DETECTION

BACKGROUND

[0001] The present disclosure generally relates to methods and devices for using meta-features extracted from accelerometry signals for swallowing impairment detection, whereby a candidate executes one or more swallowing events and dual axis accelerometry data is acquired representative thereof. More specifically, the present disclosure relates to specific meta-features extracted from axis-specific accelerometry signals.

[0002] Dysphagia is characterized by impaired involuntary motor control of the swallowing process and can cause "penetration," which is the entry of foreign material into the airway. The airway invasion can be accompanied by "aspiration," in which the foreign material enters the lungs and can lead to serious health risks.

[0003] The three phases of swallowing activity are oral, pharyngeal and esophageal. The pharyngeal phase is typically compromised in patients with dysphagia. The impaired pharyngeal phase of swallowing in dysphagia is a prevalent health condition (38% of the population above 65 years) and may result in prandial aspiration (entry of food into the airway) and/or pharyngeal residues, which in turn can pose serious health risks such as aspiration pneumonia, malnutrition, dehydration, and even death. Swallowing aspiration can be silent (i.e., without any overt signs of swallowing difficulty such as cough), especially in children with dysphagia and patients with acute stroke, rendering detection via clinical perceptual judgement difficult.

[0004] The current gold standard for tracking swallowing activities is videofluoroscopy, which enables clinicians to monitor barium-infused foodstuff during swallowing via moving x-ray images. However, the videofluoroscopy swallowing study (VFSS) cannot be performed routinely because the procedure is expensive, requires specialized personnel, and exposes the patient to a substantial amount of harmful radiation. Another invasive assessment is the flexible endoscopic evaluation of swallowing, which also requires trained personnel and is expensive. Non-invasive alternatives for swallow monitoring include surface electromyography, pulse oximetry, cervical auscultation (listening to the breath sounds near the larynx) and swallowing accelerometry.

[0005] Despite the introduction of different non-invasive approaches, reliable bedside detection of swallowing abnormalities remains a challenging task. For example, a recent systematic review of cervical auscultation studies suggests that the reliability of the approach is insufficient and that it cannot be used as a stand-alone instrument to diagnose dysphagia. Lagarde, Marloes LJ; Kamalski, Digna MA; and van den Engel-Hoek, Lenie, "The reliability and validity of cervical auscultation in the diagnosis of dysphagia: A systematic review," Clinical Rehabilitation 30(2): 1-9 (3/2015). Furthermore, perceptual clinical screening of dysphagia has been shown to lack agreement between different speech-language pathologists, possibly due to the subjective nature of the judgement as well as the presence of a variety of environmental artifacts.

[0006] Over the past two decades, researchers have reported on various swallowing screening tools, among which those driven by swallowing sounds are the most popular. Swallowing sounds are either captured acoustically using a microphone or mechanically using an accelerometer placed on the patient's neck measuring cervical epidermal vibrations. Reports on discriminative analysis of swallowing auscultation signals vary in terms of the screening tool used, target swallowing problem (aspiration, penetration, pharyngeal residue), sample size, patient population and medical conditions, and validation approach, which makes a direct comparison between these studies virtually impossible.

[0007] Swallowing accelerometry harnesses the hyoid and laryngeal movements during swallowing activities, which are manifested as epidermal vibrations measurable at the neck by an accelerometer. Vibrations in both the anterior-posterior (A-P) and superior-inferior (S-I) anatomical directions are found to contain distinct information about the underlying swallowing activities.

[0008] Nevertheless, the development of a fully automated, accurate swallowing screening tool remains an elusive challenge.

SUMMARY

[0009] In a general embodiment, the present disclosure provides a device for identifying a possible swallowing impairment in a candidate during execution of a swallowing event. The device comprises: an accelerometer configured to acquire axis-specific vibrational data along an anterior-posterior (A-P) axis and a superior-inferior (S-I) axis of the candidate's throat, the axis-specific vibrational data is representative of the swallowing event; and a processing module that is a local or remote computing device operatively coupled to the accelerometer, the processing module configured for processing the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals. The processing module is configured to classify the swallowing event as one of a plurality of classifications based on the meta-features extracted from the vibrational data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing.

[0010] In an embodiment, the processing module is configured to automatically extract the meta-features from the data. The processing module can be configured to automatically use the meta-features extracted from the data to classify the swallowing event.

[0011] In an embodiment, the processing module is configured to compare the meta-features extracted from the data to preset classification criteria to classify the swallowing event. The preset classification criteria can be defined for each of swallowing safety and swallowing efficiency and/or defined by features previously extracted and classified from a known training data set.

[0012] In an embodiment, the second classification is indicative of at least one of a swallowing safety impairment or a swallowing efficiency impairment.

[0013] In an embodiment, the second classification is indicative of at least one of penetration or aspiration, and the processing module is configured to further classify the swallowing event as a first event indicative of a safe event or a second event indicative of an unsafe event.

[0014] In an embodiment, the processing module is configured to classify multiple successive swallowing events by classifying the data for each of the successive swallowing events as indicative of one of the first classification or the second classification.

[0015] In an embodiment, the processing module displays the classification.

[0016] In another general embodiment, the present disclosure provides a method for classifying cervical accelerometry data acquired for a swallowing event to identify a possible swallowing impairment in a candidate. The method comprises: receiving axis-specific vibrational data for an anterior-posterior (A-P) axis and a superior-inferior (S-I) axis and representative of the swallowing event, a processing module that is a local or remote computing device operatively coupled to an accelerometer receives the axis-specific vibrational data from the accelerometer; processing the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals; and outputting a classification of the swallowing event as one of a plurality of classifications based on the meta-features extracted from the data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing, and the processing module outputs the classification.

[0017] In an embodiment, the processing module automatically extracts the meta-features from the data. The processing module can automatically use the meta-features extracted from the data to classify the swallowing event on the processing module.

[0018] In an embodiment, the method comprises comparing the meta-features extracted from the data to preset classification criteria to classify the swallowing event on the processing module. The preset classification criteria can be defined for each of swallowing safety and swallowing efficiency and/or defined by features previously extracted and classified from a known training data set.

[0019] In an embodiment, the second classification is indicative of at least one of a swallowing safety impairment or a swallowing efficiency impairment.

[0020] In an embodiment, the second classification is indicative of at least one of penetration or aspiration, and the method comprises further classifying the swallowing event as a first event indicative of a safe event or a second event indicative of an unsafe event.

[0021] In an embodiment, the method comprises classifying successive swallowing events by classifying the data for each of the successive swallowing events as indicative of one of the first classification or the second classification.

[0022] In an embodiment, the method comprises displaying the classification on the processing device.

[0023] In yet another general embodiment, the present disclosure provides a method of treating dysphagia in a patient. The method comprises: positioning a sensor externally on the throat of the patient, the sensor acquiring vibrational data representing swallowing activity and associated with an anterior-posterior axis and a superior-inferior axis of the throat, the sensor operatively connected to a processing module configured to process the axis-specific data to extract meta-features from the data, one or more of the meta-features associated with an approach selected from the group consisting of (i) swallow segmentation using spectrogram, (ii) sound direction for a non-segmented spectrogram, (iii) difference between SI and AP signals regarding correlation coefficient between residual and basic signals for a non-segmented spectrogram, (iv) difference between SI and AP signals regarding residual peaking feature for a non-segmented spectrogram, (v) velocity and position for integrating velocity and position from sensor signals, (vi) basic signal statistics for segmented spectrogram, (vii) spectral entropy at different bandwidths for segmented spectrogram, (viii) direction of focus of spectrogram components for segmented spectrogram, (ix) spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies for segmented spectrogram, (x) PCA from spectrogram, measuring percentage of variance explained by 1st or 2nd PCA component, either in time or frequency axis for segmented spectrogram, (xi) texture features from spectrogram image for specture, and (xii) signal entropy for head and swallow signals, the processing module configured to classify the swallowing event as one of a plurality of classifications based on the meta-features extracted from the vibrational data, the plurality of classifications comprising a first classification indicative of normal swallowing and a second classification indicative of possibly impaired swallowing; and adjusting a feeding administered to the patient based on the classification.

[0024] In an embodiment, the adjusting of the feeding is selected from the group consisting of: changing a consistency of the feeding, changing a type of food in the feeding, changing a size of a portion of the feeding administered to the patient, changing a frequency at which portions of the feeding are administered to the patient, and combinations thereof.

BRIEF DESCRIPTION OF THE FIGURES

[0025] FIG. 1 is a diagram showing the axes of acceleration in the anterior-posterior and superior-inferior directions.

[0026] FIG. 2 is a schematic diagram of an embodiment of a swallowing impairment detection device in operation.

[0027] FIG. 3 is a schematic diagram of an embodiment of a method of discriminating swallowing aspiration-penetration.

DETAILED DESCRIPTION

[0028] As used in this disclosure and the appended claims, the singular forms "a," "an" and "the" include plural referents unless the context clearly dictates otherwise. As used herein, "about" is understood to refer to numbers in a range of numerals, for example the range of -10% to +10% of the referenced number, preferably -5% to +5% of the referenced number, more preferably -1% to +1% of the referenced number, most preferably -0.1% to +0.1% of the referenced number. Moreover, all numerical ranges herein should be understood to include all integers, whole or fractions, within the range.

[0029] The words "comprise," "comprises" and "comprising" are to be interpreted inclusively rather than exclusively. Likewise, the terms "include," "including" and "or" should all be construed to be inclusive, unless such a construction is clearly prohibited from the context. A disclosure of a device "comprising" several components does not require that the components are physically attached to each other in all embodiments.

[0030] Nevertheless, the devices disclosed herein may lack any element that is not specifically disclosed. Thus, a disclosure of an embodiment using the term "comprising" includes a disclosure of embodiments "consisting essentially of" and "consisting of" the components identified. Similarly, the methods disclosed herein may lack any step that is not specifically disclosed herein. Thus, a disclosure of an embodiment using the term "comprising" includes a disclosure of embodiments "consisting essentially of" and "consisting of" the steps identified.

[0031] The term "and/or" used in the context of "X and/or Y" should be interpreted as "X," or "Y," or "X and Y." Where used herein, the terms "example" and "such as," particularly when followed by a listing of terms, are merely exemplary and illustrative and should not be deemed to be exclusive or comprehensive. Any embodiment disclosed herein can be combined with any other embodiment disclosed herein unless explicitly stated otherwise.

[0032] As used herein, a "bolus" is a single sip or mouthful of a food or beverage. As used herein, "aspiration" is entry of food or drink into the trachea (windpipe) and lungs and can occur during swallowing and/or after swallowing (post-deglutitive aspiration). Post-deglutitive aspiration generally occurs as a result of pharyngeal residue that remains in the pharynx after swallowing.

[0033] An aspect of the present disclosure is a method of processing dual-axis accelerometry signals to classify one or more swallowing events. A non-limiting example of such a method classifies each of the one or more swallowing events as a swallow with aspiration-penetration or a swallow without aspiration-penetration. Another aspect of the present disclosure is a device that implements one or more steps of the method.

[0034] In an embodiment, the method can further comprise classifying the patient as having safe swallowing or unsafe swallowing. For example, a patient can be classified as having unsafe swallowing if the one or more swallowing events comprise an amount or percentage of aspiration-penetration events that exceeds a threshold. In such an embodiment, the threshold can be zero such that the presence of any aspiration-penetration events classifies the patient as having unsafe swallowing. Of course, in other such embodiments, the threshold can be greater than zero.
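
By way of non-limiting illustration only, the following minimal sketch (in Python) shows this patient-level thresholding. The function name classify_patient, the label strings and the default threshold of zero are hypothetical choices for the example and are not taken from the disclosure.

def classify_patient(event_labels, threshold=0.0):
    """event_labels: per-swallow results, e.g. 'normal' or 'impaired' (illustrative labels)."""
    if not event_labels:
        raise ValueError("no classified swallowing events")
    impaired_fraction = sum(label == "impaired" for label in event_labels) / len(event_labels)
    # With threshold=0.0, a single impaired event flags unsafe swallowing.
    return "unsafe swallowing" if impaired_fraction > threshold else "safe swallowing"

print(classify_patient(["normal", "impaired", "normal"]))  # -> unsafe swallowing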

[0035] In some embodiments, the method and the device can be employed in the apparatus and/or the method for detecting aspiration disclosed in U.S. Patent No. 7,749,177 to Chau et al., the method and/or the system of segmentation and time duration analysis of dual-axis swallowing accelerometry signals disclosed in U.S. Patent No. 8,267,875 to Chau et al., the system and/or the method for detecting swallowing activity disclosed in U.S. Patent No. 9,138,171 to Chau et al., or the method and/or the device for swallowing impairment detection disclosed in U.S. Patent App. Publ. No. 2014/0228714 to Chau et al., each of which is incorporated herein by reference in its entirety.

[0036] As discussed in greater detail hereafter, the device may include a sensor configured to produce signals indicating swallowing activities (e.g., a dual axis accelerometer). The sensor may be positioned externally on the neck of a human, preferably anterior to the cricoid cartilage of the neck. A variety of means may be applied to position the sensor and to hold the sensor in such position, for example double-sided tape. Preferably the positioning of the sensor is such that the axes of acceleration are aligned to the anterior-posterior and superior-inferior directions, as shown in FIG. 1. As used herein, the anterior-posterior (A-P) axis and the superior-inferior (S-I) axis are relative to the candidate's throat.

[0037] FIG. 2 generally illustrates a non-limiting example of a device 100 for use in swallowing impairment detection. The device 100 can comprise a sensor 102 (e.g., a dual axis accelerometer) to be attached in a throat area of a candidate for acquiring dual axis accelerometry data and/or signals during swallowing, for example illustrative S-I acceleration signal 104. Accelerometry data may include, but is not limited to, throat vibration signals acquired along the anterior-posterior axis (A-P) and/or the superior-inferior axis (S-I). The sensor 102 can be any accelerometer known to one of skill in this art, for example a single axis accelerometer (which can be rotated on the patient to obtain dual-axis vibrational data) such as an EMT 25-C single axis accelerometer or a dual axis accelerometer such as an ADXL322 or ADXL327 dual axis accelerometer, and the present disclosure is not limited to a specific embodiment of the sensor 102.

[0038] The sensor 102 does not necessarily need to be fixed exactly in A-P and S-I orientation. In this regard, the sensor 102 can merely measure in two perpendicular directions along the sagittal plane of the subject, and the acceleration vectors in both S-I and A-P directions can be extracted from the two sensor signals.
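
As a non-limiting illustration of this extraction, the sketch below (Python/NumPy) rotates two perpendicular sensor channels lying in the sagittal plane into A-P and S-I components. The function name, the channel variables and the assumption that the mounting angle theta is known or has been estimated are all hypothetical for the example.

import numpy as np

def rotate_to_anatomical(ch1, ch2, theta):
    """ch1, ch2: perpendicular sensor channels (1-D arrays); theta: mounting angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    ap = c * ch1 - s * ch2  # anterior-posterior component
    si = s * ch1 + c * ch2  # superior-inferior component
    return ap, si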

[0039] The sensor 102 can be operatively coupled to a processing module 106 configured to process the acquired data for swallowing impairment detection, for example aspiration-penetration detection and/or detection of other swallowing impairments such as swallowing inefficiencies. The processing module 106 can be a distinctly implemented device operatively coupled to the sensor 102 for communication of data thereto, for example, by one or more data communication media such as wires, cables, optical fibers, and the like and/or by one or more wireless data transfer protocols. In some embodiments, the processing module 106 may be implemented integrally with the sensor 102.

[0040] Generally, the processing of the dual-axis accelerometry signals comprises at least one of (i) a process in which at least a portion of the A-P signal and at least a portion of the S-I signal are analyzed individually by calculating the meta-features of each signal separately from the other channel or (ii) a process combining at least a portion of the axis-specific vibrational data for the A-P axis with at least a portion of the axis-specific vibrational data for the S-I axis and then extracting meta-features from the combined data. Then the swallowing event can be classified based on the extracted meta-features. In applying this approach, the swallowing events may be effectively classified as normal swallowing events or potentially impaired swallowing events (e.g., unsafe and/or inefficient). Preferably the classification is automatic such that no user input is needed for the dual-axis accelerometry signals to be processed and used for classification of the swallow.

[0041] FIG. 3 illustrates a non-limiting embodiment of a method 500 for classifying a swallowing event. At Step 502, dual-axis accelerometry data for both the S-I axis and the A-P axis is acquired or provided for one or more swallowing events, for example dual-axis accelerometry data from the sensor 102.

[0042] At Step 504, the dual-axis accelerometry data can optionally be processed to condition the accelerometry data and thus facilitate further processing thereof. For example, the dual-axis accelerometry data may be filtered, denoised, and/or processed for signal artifact removal ("preprocessed data"). In an embodiment, the dual-axis accelerometry data is subjected to an inverse filter, which may include various low-pass, band-pass and/or high-pass filters, followed by signal amplification. A denoising subroutine can then be applied to the inverse filtered data, preferably processing signal wavelets and iterating to find a minimum mean square error.
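
For illustration only, a simplified conditioning step is sketched below in Python (SciPy/PyWavelets): a zero-phase band-pass filter followed by wavelet soft-threshold denoising. The cutoff frequencies, wavelet choice, decomposition level and assumed 10 kHz sampling rate are assumptions, and the sketch does not reproduce the inverse-filter design or the iterative minimum mean square error search described above.

import numpy as np
from scipy.signal import butter, filtfilt
import pywt

def condition_signal(x, fs=10000.0, low=0.1, high=3000.0, wavelet="db4", level=5):
    # Zero-phase band-pass filtering (stand-in for the low/band/high-pass stage).
    nyq = 0.5 * fs
    b, a = butter(4, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, x)
    # Wavelet soft-threshold denoising with a universal threshold.
    coeffs = pywt.wavedec(filtered, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(filtered)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]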

[0043] In an embodiment, the preprocessing may comprise a subroutine for the removal of movement artifacts from the data, for example, in relation to head movement by the patient. Additionally or alternatively, other signal artifacts, such as vocalization and blood flow, may be removed from the dual-axis accelerometry data. Nevertheless, the method 500 is not limited to a specific embodiment of the preprocessing of the accelerometry data, and the preprocessing may comprise any known method for filtering, denoising and/or removing signal artifacts.

[0044] At Step 506, the accelerometry data (either raw or preprocessed) can then be automatically or manually segmented into distinct swallowing events. Preferably the accelerometry data is automatically segmented. In an embodiment, the segmentation is automatic and energy-based. Additionally or alternatively, manual segmentation may be applied, for example by visual inspection of the data. The method 500 is not limited to a specific process of segmentation, and the process of segmentation can be any segmentation process known to one skilled in this art.
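
A non-limiting sketch of energy-based segmentation follows (Python/NumPy); the moving-window length, the median-based threshold factor and the function name are assumptions for the example, not parameters taken from the disclosure.

import numpy as np

def energy_segments(x, fs, win_s=0.05, k=3.0):
    """Return (start_sample, end_sample) pairs where short-time energy exceeds a threshold."""
    win = max(1, int(win_s * fs))
    energy = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    mask = energy > k * np.median(energy)
    segments, start = [], None
    for i, above in enumerate(mask):
        if above and start is None:
            start = i
        elif not above and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(x)))
    return segments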

[0045] At Step 508, meta-feature based representation of the accelerometry data can be performed. For example, one or more time-frequency domain features can be calculated for each axis-specific data set. Combinations of extracted features may be considered herein without departing from the general scope and nature of the present disclosure. Preferably different features are extracted for each axis-specific data set, but in some embodiments the same features may be extracted in each case. Furthermore, other features may be considered for feature extraction, for example, including one or more time, frequency and/or time-frequency domain features (e.g., mean, variance, center frequency, etc.).
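
For illustration, a minimal per-axis feature vector (mean, variance and spectral centre frequency) is sketched below in Python/SciPy. The spectrogram parameters are assumptions, and the feature set contemplated above is considerably broader than this example.

import numpy as np
from scipy.signal import spectrogram

def basic_axis_features(segment, fs):
    f, t, Sxx = spectrogram(segment, fs=fs, nperseg=256)
    power = Sxx.mean(axis=1)
    centre_frequency = np.sum(f * power) / (np.sum(power) + 1e-12)
    return np.array([segment.mean(), segment.var(), centre_frequency])

# Axis-specific vectors can then be concatenated, e.g.:
# features = np.concatenate([basic_axis_features(ap_seg, fs), basic_axis_features(si_seg, fs)])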

[0046] At Step 510 (which is optional), a subset of the meta-features may be selected for classification, for example based on the previous analysis of similar extracted feature sets derived during classifier training and/or calibration. For example, in one embodiment, the most prominent features or feature components/levels extracted from the classifier training data set are retained as most likely to provide classifiable results when applied to new test data, and are thus selected to define a reduced feature set for training the classifier and ultimately enabling classification. For instance, in the context of wavelet decompositions or other such signal decompositions, techniques such as linear discriminant analysis, principal component analysis or other techniques effectively implemented to qualify the quantity and/or quality of information available from a given decomposition level may be used on the training data set to preselect the feature components or levels most likely to provide the highest level of usable information in classifying newly acquired signals. Such preselected feature components/levels can then be used to train the classifier for subsequent classifications. Ultimately, these preselected features can be used in characterizing the classification criteria for subsequent classifications.

[0047] Accordingly, where the device has been configured to operate from a reduced feature set, such as described above, this reduced feature set can be characterized by a predefined feature subset or feature reduction criteria that resulted from the previous implementation of a feature reduction technique on the classifier training data set. Newly acquired data can thus proceed through the various pre-processing and segmentation steps described above (steps 504, 506), the various swallowing events so identified then processed for feature extraction at step 508 (e.g., full feature set), and those features corresponding with the preselected subset retained at step 510 for classification at step 512.

[0048] While the above exemplary approach contemplates a discrete selection of the most prominent features, other techniques may also readily apply. For example, in some embodiments, the results of the feature reduction process may rather be manifested in a weighted series or vector for association with the extracted feature set in assigning a particular weight or level of significance to each extracted feature component or level during the classification process. In particular, selection of the most prominent feature components to be used for classification can be implemented via linear discriminant analysis (LDA) on the classifier training data set. Consequently, feature extraction and reduction can be effectively used to distinguish safe swallows from potentially unsafe swallows, and efficient swallows from potentially inefficient swallows. In this regard, the extraction of the selected features from new test data can be compared to preset classification criteria established as a function of these same selected features as previously extracted and reduced from an adequate training data set, to classify the new test data as representative of a normal vs. impaired swallow (e.g., safe swallows vs. unsafe swallows, and/or efficient swallows vs. inefficient swallows). As will be appreciated by the skilled artisan, other feature sets such as frequency, time and/or time-frequency domain features may be used.
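
As a non-limiting sketch of this training-then-classification flow, the Python example below uses scikit-learn's linear discriminant analysis as the classifier. The X_train, y_train and X_new arrays are placeholders for a previously labelled training set of meta-feature vectors and newly acquired meta-features; they are not data from the disclosure.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_train = np.random.randn(40, 6)            # placeholder meta-feature matrix (40 swallows x 6 features)
y_train = np.array([0, 1] * 20)             # placeholder labels: 0 = normal, 1 = impaired
classifier = LinearDiscriminantAnalysis().fit(X_train, y_train)

X_new = np.random.randn(3, 6)               # meta-features of newly acquired swallows (placeholder)
print(classifier.predict(X_new))            # 0 = normal swallow, 1 = possibly impaired swallow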

[0049] In a preferred embodiment, one or more of the extracted meta-features can be associated with segmentation preprocessing such as swallow segmentation using a spectrogram. An example of such a meta-feature is segment length.

[0050] In a preferred embodiment, one or more of the extracted meta-features can be associated with non-segmented spectrogram preprocessing, for example, for analyzing the audio frequency band of the acceleration spectra, the number of detected peaks in the sound power measurement, measures of sound direction with respect to the sensor location, the number of swallows based on spectrogram methods, detected noise artifacts on the signal and/or the difference between the S-I and A-P signals (residual) (one or more correlation coefficients between the residual and basic signals, or a residual peaking feature). In an embodiment, the accelerometer can also detect voices.
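
Two of the residual-based meta-features named above are sketched below (Python/NumPy): the correlation coefficients between the S-I/A-P residual and each basic signal, and a simple residual peaking measure. The peak-to-standard-deviation definition of "peaking" used here is an assumption for the example.

import numpy as np

def residual_meta_features(si, ap):
    residual = si - ap
    corr_with_si = np.corrcoef(residual, si)[0, 1]
    corr_with_ap = np.corrcoef(residual, ap)[0, 1]
    peaking = np.max(np.abs(residual)) / (np.std(residual) + 1e-12)
    return corr_with_si, corr_with_ap, peaking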

[0051] In a preferred embodiment, one or more of the extracted meta-features can be associated with preprocessing that integrates velocity and position from the sensor signals, for example with measures like maximum values, standard deviations, between-signal difference and intra-signal difference between different segments of the signal (for example the first third in comparison to the last third of the position/velocity signal) and the amount of change in the velocity or position in comparison to the initial movement or location at the beginning of the measurement; searching the number of signal peaks in the S-I or A-P velocity measures or in combined velocity measures of both signals; or the regression line and zero-crossings of the residual between the S-I and A-P signals.

[0052] In a preferred embodiment, one or more of the extracted meta-features can be associated with segmented spectrogram preprocessing, for example basic signal statistics from the spectrogram (e.g., variance or standard deviation calculated over the spectrogram for different bandwidths, which can be calculated separately for each of the S-I and A-P signal channels, or as common to both); spectral entropy at different bandwidths; direction of focus of spectrogram components; spectral entropy for the spectrogram taken as a difference between low-frequencies and high-frequencies; or a principal component model (PCA) from the spectrogram, measuring the percentage of variance explained by the 1st or 2nd PCA component, either in the time or frequency axis.
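
Two of the segmented-spectrogram meta-features listed above are sketched below in Python (SciPy/scikit-learn): spectral entropy within a chosen frequency band, and the percentage of variance explained by the first PCA component of the spectrogram along the time axis. The band edges and spectrogram parameters are assumptions for the example.

import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

def band_spectral_entropy(segment, fs, f_lo, f_hi):
    f, t, Sxx = spectrogram(segment, fs=fs, nperseg=256)
    band_power = Sxx[(f >= f_lo) & (f <= f_hi)].mean(axis=1)
    p = band_power / (band_power.sum() + 1e-12)
    return float(-np.sum(p * np.log2(p + 1e-12)))

def pca_variance_explained(segment, fs):
    _, _, Sxx = spectrogram(segment, fs=fs, nperseg=256)
    pca = PCA(n_components=2).fit(Sxx.T)  # time frames treated as observations
    return 100.0 * pca.explained_variance_ratio_[0]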

[0053] In a preferred embodiment, one or more of the extracted meta-features can be associated with "specture," i.e., texture features of an intensity image formed from visualization of the spectrogram.

[0054] In a preferred embodiment, one or more of the extracted meta-features can be associated with segmented head and swallow signals preprocessing, such as signal entropy.

[0055] In a preferred embodiment, one or more of the extracted meta-features can be the mean absolute value of the acceleration signals within a swallow segment, the number of detected swallows, the waveform length of the swallow segment, the distance between a swallow and detected vocalization/speech, or the number of detected coughs.
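
For illustration, the two signal-shape meta-features named above can be computed as in the common electromyography/accelerometry definitions sketched below (Python/NumPy); the disclosure does not prescribe these exact formulas.

import numpy as np

def mean_absolute_value(segment):
    return float(np.mean(np.abs(segment)))

def waveform_length(segment):
    return float(np.sum(np.abs(np.diff(segment))))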

[0056] In a preferred embodiment, one or more of the extracted meta-features can be associated with a head signal, which is computed from the low-frequency trends of the A-P and S-I signals for tracking of the head motion.

[0057] In a preferred embodiment, one or more of the extracted meta-features can be associated with a swallow signal, which is computed from the mid-frequency range of the A-P and S-I signals for tracking of the swallow motion.
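
A non-limiting sketch of deriving the head and swallow signals described in the two preceding paragraphs is shown below (Python/SciPy) as a low-pass/band-pass split of an axis-specific channel. The cutoff frequencies and filter order are assumptions for the example; the disclosure does not specify them.

import numpy as np
from scipy.signal import butter, filtfilt

def split_head_and_swallow(x, fs, head_cutoff=1.0, swallow_band=(1.0, 100.0)):
    nyq = 0.5 * fs
    b, a = butter(4, head_cutoff / nyq, btype="low")
    head_signal = filtfilt(b, a, x)          # low-frequency trend tracking head motion
    b, a = butter(4, [swallow_band[0] / nyq, swallow_band[1] / nyq], btype="band")
    swallow_signal = filtfilt(b, a, x)       # mid-frequency content tracking swallow motion
    return head_signal, swallow_signal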

[0058] In a preferred embodiment, one or more of the extracted meta-features can be associated with cough methods, e.g., measuring the average or maximum signal energy within the detected cough period.

[0059] At Step 512, feature classification can be implemented. Extracted features (or a reduced/weighted subset thereof) of acquired swallow-specific data can be compared with preset classification criteria to classify each data set as representative of a normal swallowing event or a potentially impaired swallowing event.

[0060] In an embodiment, the method 500 can optionally comprise a training/validation subroutine Step 516 in which a data set representative of multiple swallows is processed such that each swallow-specific data set ultimately experiences the preprocessing, feature extraction and feature reduction disclosed herein. A validation loop can be applied to the discriminant analysis-based classifier using a cross-validation test. After all events have been classified and validated, output criteria may be generated for future classification without necessarily applying further validation to the classification criteria. Alternatively, routine validation may be implemented either to refine the statistical significance of the classification criteria, or as a measure to accommodate specific equipment and/or protocol changes (e.g., recalibration of specific equipment, for example upon replacing the accelerometer with the same or a different accelerometer type/model, changing operating conditions, new processing modules such as further preprocessing subroutines, artifact removal, additional feature extraction/reduction, etc.).
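
A non-limiting sketch of such a cross-validation loop is shown below (Python/scikit-learn). The X and y arrays are placeholders for the labelled training meta-features, and the five-fold split is an assumption for the example.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.random.randn(60, 6)                  # placeholder training meta-features
y = np.array([0, 1] * 30)                   # placeholder labels: 0 = normal, 1 = impaired
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))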

[0061] The classification can be used to determine and output which swallowing event represented a normal swallowing event as compared to a penetration, an aspiration, a swallowing safety impairment and/or a swallowing efficiency impairment at Step 514. In some embodiments, the swallowing event can be further classified as a safe event or an unsafe event.

[0062] For example, the processing module 106 and/or a device associated with the processing module 106 can comprise a display that identifies a swallow or an aspiration using images such as text, icons, colors, lights turned on and off, and the like. Alternatively or additionally, the processing module 106 and/or a device associated with the processing module 106 can comprise a speaker that identifies a swallow or an aspiration using auditory signals. The present disclosure is not limited to a specific embodiment of the output, and the output can be any means by which the classification of the swallowing event is identified to a user of the device 100, such as a clinician or a patient.

[0063] The output may then be utilized in screening/diagnosing the tested candidate and providing appropriate treatment, further testing, and/or proposed dietary or other related restrictions thereto until further assessment and/or treatment may be applied. For example, adjustments to feedings can be based on changing the consistency or type of food and/or the size and/or frequency of mouthfuls being offered to the patient.

[0064] Alternative types of vibration sensors other than accelerometers can be used, with appropriate modifications, as the sensor 102. For example, a sensor can measure displacement (e.g., a microphone), while the processing module 106 records displacement signals over time. As another example, a sensor can measure velocity, while the processing module 106 records velocity signals over time. Such signals can then be converted into acceleration signals and processed as disclosed herein and/or by other techniques of feature extraction and classification appropriate for the type of received signal.
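
For illustration only, such a conversion can be approximated by numerical differentiation as sketched below (Python/NumPy). The function names are hypothetical, and in practice the recorded signals would typically be low-pass filtered first, since differentiation amplifies high-frequency noise.

import numpy as np

def velocity_to_acceleration(velocity, fs):
    return np.gradient(velocity, 1.0 / fs)

def displacement_to_acceleration(displacement, fs):
    velocity = np.gradient(displacement, 1.0 / fs)
    return np.gradient(velocity, 1.0 / fs)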

[0065] Another aspect of the present disclosure is a method of treating dysphagia. The term "treat" includes both prophylactic or preventive treatment (that prevents and/or slows the development of dysphagia) and curative, therapeutic or disease-modifying treatment, including therapeutic measures that cure, slow down, lessen symptoms of, and/or halt progression of dysphagia; and treatment of patients at risk of dysphagia, for example patients having another disease or medical condition that increases their risk of dysphagia relative to a healthy individual of similar characteristics (age, gender, geographic location, and the like). The term does not necessarily imply that a subject is treated until total recovery. The term "treat" also refers to the maintenance and/or promotion of health in an individual not suffering from dysphagia but who may be susceptible to the development of dysphagia. The term "treat" also includes the potentiation or otherwise enhancement of one or more primary prophylactic or therapeutic measures. The term "treat" further includes the dietary management of dysphagia or the dietary management for prophylaxis or prevention of dysphagia. A treatment can be conducted by a patient, a clinician and/or any other individual or entity.

[0066] The method of treating dysphagia comprises using any embodiment of the device 100 disclosed herein and/or performing any embodiment of the method 500 disclosed herein. The method can further comprise adjusting a feeding administered to the patient based on the classification, for example by changing a consistency of the feeding, changing a type of food in the feeding, changing a size of a portion of the feeding administered to the patient, changing a frequency at which portions of the feeding are administered to the patient, or combinations thereof.

[0067] In an embodiment, the method prevents aspiration pneumonia from dysphagia. In an embodiment, the dysphagia is oral pharyngeal dysphagia associated with a condition selected from the group consisting of cancer, cancer chemotherapy, cancer radiotherapy, surgery for oral cancer, surgery for throat cancer, a stroke, a brain injury, a progressive neuromuscular disease, neurodegenerative diseases, an elderly age of the patient, and combinations thereof. As used herein, an "elderly" human is a person with a chronological age of 65 years or older.

[0068] It should be understood that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.