Title:
A SYSTEM AND METHOD FOR CONTROLLING AN ASSISTIVE DEVICE
Document Type and Number:
WIPO Patent Application WO/2022/149165
Kind Code:
A1
Abstract:
The present invention relates to a system and method for controlling an assistive device. The assistive device is controlled based on pattern recognition of body signals of a wearer wearing the assistive device. By adapting to the body signals of the wearer, the pattern-recognition-based assistive device cuts down the post-fitment training period needed by the user or wearer of the assistive device. Said system does not require the wearer to generate definite impulses; it receives input data from multiple data points and seamlessly controls the assistive device with multiple degrees of freedom.

Inventors:
BASU SUBHOJIT (IN)
BHALERAO PRATIK (IN)
PATIL VISHAL (IN)
Application Number:
PCT/IN2022/050014
Publication Date:
July 14, 2022
Filing Date:
January 07, 2022
Assignee:
DEEDEE LABS PRIVATE LTD (IN)
International Classes:
A61H3/00; A61F2/72; G06F3/01
Foreign References:
US20160107309A12016-04-21
US20200275895A12020-09-03
CN110695959A2020-01-17
Attorney, Agent or Firm:
JOSHI, Archana (IN)
Claims:
CLAIMS

We claim,

1. A system for controlling assistive devices to perform a desired task comprises: a plurality of electrodes (110) mounted at various predetermined positions of a user’s residual limb configured to receive surface electromyography signals from the active muscle sites of the residual limb; at least one motion sensor (140) to capture motion change; at least one orientation sensor (150) to capture change in orientation of the residual limb with respect to the assistive device (10) for accurate detection of motion and orientation of the residual limb/healthy muscle; a sensing module (120) configured to amplify the received electromyography signals, reduce noise, filter the received signals and digitize the said signals; a processing unit (130) configured to receive digital electromyography signal data from said sensing module (120), and classify, analyze and store the said data; a machine learning module (170) configured to predict correct gestures and false motions; and a plurality of actuators (160) configured to receive signals from said machine learning module (170) and perform the gesture intended to be performed by the user.

2. The system for controlling assistive devices to perform a desired task as claimed in claim 1, wherein said electrodes (110) are non-invasive, passive electrodes mounted on the user’s skin surface either in the form of a flexible, adjustable socket or worn around the limb as a wearable band.

3. The system for controlling assistive devices to perform a desired task as claimed in claim 1, wherein said motion (140) and orientation (150) sensors are selected from the group consisting of accelerometer, gyroscope, and magnetometer.

4. The system for controlling assistive devices to perform a desired task as claimed in claim 1, further comprising force or pressure feedback sensors, overcurrent limit sensors and finger movement limiting sensors for providing feedback of parameters to the system to modify or alter control signals to control the assistive device (10) and prevent the assistive device or its actuator from damage.

5. A method for identification of gestures/actions and undesired motions by controlling an assistive device (10), comprising the steps of:

> capturing the surface electromyography signals from the active muscle sites of the wearer by plurality of non-invasive, passive electrodes (110);

> sensing of the said signals by a sensing module (120);

> amplifying, filtering and digitizing the electromyography signals;

> receiving digital electromyography signals by a processing unit (130);

> storing, analyzing and classifying digital electromyography signals by the processing unit (130);

> determining energy levels of said signals and comparing the energy level with a pre-determined or threshold energy level to determine the presence of valid, intentionally performed gestures;

> analyzing and processing of the gesture signal by the processing unit (130);

> collecting motion and pose parameters from the motion sensor (140) and the orientation sensors (150);

> predicting the accurate gesture signals and controlling actuator by machine learning module (170) to perform a gesture intended to be performed by the user.

6. A method for identification of gestures/actions and undesired motions by controlling an assistive device (10) as claimed in claim 5, wherein said classification parameters are selected from the group consisting of Mean, Variance, Correlation, Mode, Quartiles, Median, Standard Deviation, Minimum, Maximum, Slope sign changes, Zero crossing, Structure, and Length.

Description:
TITLE OF THE INVENTION

A system and method for controlling an assistive device

FIELD OF THE INVENTION

[001] The present invention relates to a system and method for controlling an assistive device.

[002] More particularly, the present invention pertains to a system for controlling limb prosthesis devices, upper limb exoskeletal devices or neuro-rehabilitation devices, and the method of controlling thereof.

BACKGROUND OF THE INVENTION

[003] Functional assistive devices play an important role in enhancing the quality of life of people with locomotor disability, partial or full loss of a limb by improving mobility and the ability to manage activities of daily living.

[004] Functional or powered assistive devices employ multiple parts to exhibit an intended gesture or action or to perform tasks. Most of the commercially available powered prosthetic devices use sensors to capture surface electromyographic (EMG) signals or muscle electrical impulses from the residual limb of a wearer. It is critical to place the sensors relative to the active muscle sites of a wearer, as the sensors non-invasively measure and amplify the electric impulses generated by the active muscle. The functionality of the assistive device is thus dependent upon the electric impulses captured by the sensors. Typically, assistive devices comprise one or two sensors and support only one degree of freedom (DoF). In particular, conventional myoelectric prosthetic hand systems with one or two sensors support one DoF, primarily opening and closing the hand. Furthermore, for the assistive device to function accurately, the wearer has to learn to trigger the prosthetic with a definite signal by intentionally making specific types of muscle movements in order to instruct the assistive device to perform a desired action, gesture or task. For example, the user needs to trigger a specific group of muscles to perform a power grip and trigger a different set of muscles to perform a relaxed hand grip. The wearer thus needs to master the art of triggering muscles separately for the different actions, gestures or tasks. This necessitates that the wearer be put through an extensive post-fitment training program, not only to adapt to the externally fixated assistive device but also to learn to control it. This is a cumbersome process, and it takes a long time for the wearer to become trained and accustomed to the device.

[005] A notable limitation to all currently available myoelectric control systems is the inability to simultaneously control independent multiple DoFs of the prosthetic devices.

[006] Further, systems in the prosthetic devices are such that if the wearer experiences a mild jerk in the device while moving around or travelling, it leads to an undesired gesture triggering.

[007] Also, to support more gestures, a conventional prosthesis requires the user to change gesture modes by generating a specific type of electrical impulse by triggering muscles. A gesture mode consists of a group of a few predefined types of gestures, e.g. power grip and open together form one gesture mode, while lateral pinch and point form another gesture mode. All the gestures and the modes need configuration via an external device like a mobile app, external switch based control, RFID based tags or voice control input. This complicates use by requiring the person to remember multiple modes and impulses.

[008] In view of the above, there is a need in the art for a system for the management of prosthetic devices which addresses at least the aforementioned problems.

OBJECTS OF THE INVENTION

[009] The main object of the present invention is to provide a system for controlling the assistive device which eliminates the need of an external/additional device for performing a particular gesture.

[010] Another object of the present invention is to provide an intuitive system for the user to control the assistive device through a smart machine learning module with real-time surface pattern recognition, thereby reducing muscle fatigue.

[011] Another object of the present invention is to provide a system for controlling assistive devices which can simultaneously control multiple degrees of freedom, thereby minimizing undesired triggering.

SUMMARY OF THE INVENTION

[012] Present invention relates to a system and method for controlling assistive devices such as limb prosthesis, exoskeleton devices or any other electromechanical rehabilitation devices for paraplegic patients. The invention also describes a method of controlling assistive devices.

[013] In accordance with one aspect of the present invention, there is provided a system for controlling assistive devices which eliminates the need of an external/additional device for performing a particular gesture, overcomes issues of undesired triggering, and is user friendly, simple to operate and intuitive in nature, wherein the system comprises: a plurality of electrodes mounted at various positions of a user’s residual limb configured to receive surface electromyography signals from the active muscle sites of the residual limb; at least one motion sensor to capture change in motion; at least one orientation sensor to capture change in orientation of the residual limb with respect to the assistive device for accurate detection of motion and orientation of the residual limb/healthy muscle; a sensing module configured to amplify the received electromyography signals, reduce noise, filter the received signals and digitize the said signals to provide digital electromyography signals to the processing unit; a processing unit configured to receive digital electromyography signal data from said sensing module, store the data, extract various parameters, and classify and analyze them; a supervised machine learning module configured to identify correct gestures and false motions; and a plurality of actuators configured to receive signals from said supervised machine learning module and perform the gesture intended to be performed by the user.

[014] In accordance with another aspect of the present invention, there is provided a method for controlling assistive devices which avoids undesired triggering of an assistive device without the need of an external controlling device, wherein the method comprises the steps of: mounting a plurality of non-invasive electrodes on the active muscle sites of a wearer; capturing the electromyography signals through said electrodes; capturing motion and orientation of the residual limb and the assistive device independently by a motion sensor and an orientation sensor respectively; amplifying the received electromyography signals, reducing noise, filtering the received signals and digitizing the said signals by a sensing module to provide digital electromyography signals to a processing unit; receiving the digital electromyography signal data from said sensing module; storing the data, extracting various parameters, and classifying and analyzing said data by the processing unit; generating training parameters and storing the parameters in memory; identifying correct gestures and false motions by a machine learning module; and sending control signals to a plurality of actuators configured to receive signals from said supervised machine learning module to perform the gesture intended by the wearer.

BRIEF DESCRIPTION OF THE DRAWINGS

[015] Reference will be made to embodiments of the invention, examples of which may be illustrated in accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.

Figure 1 is a block diagram of a system for controlling an assistive device in accordance with an embodiment of the invention.

Figure 2 is a block diagram of a system for controlling an assistive device in accordance with an embodiment of the invention.

Figure 3 is a block diagram of a method for controlling an assistive device in accordance with an embodiment of the invention.

DESCRIPTION OF THE INVENTION

[016] The present invention is directed towards a system and method for controlling an assistive device. The assistive device is controlled based on pattern recognition of body signals of a wearer wearing the assistive device. By adapting to the body signals of the wearer, the pattern-recognition-based assistive device cuts down the post-fitment training period needed by the user or wearer of the assistive device. Accordingly, the system does not require the wearer to generate definite impulses; it receives input data from multiple data points and seamlessly controls the assistive device with multiple degrees of freedom (DoFs), thereby obviating the problems associated with the prior art.

[017] Figure 1 shows a system (100) for controlling an assistive device (10) in accordance with an embodiment of the invention. The assistive device can be an externally powered multi-functional or multiple-DoF assistive device, like a prosthetic arm having a hand assembly, or an upper limb rehabilitation device. The assistive devices are configured to perform several actions or tasks, and the system and method of the present invention provide control signals to an actuator or a drive mechanism (160) of the assistive device to perform multiple tasks or actions.

[018] Said system comprises a plurality of non-invasive electrodes (110) to be mounted on the external skin surface of a user to capture surface electromyography (EMG) signals or muscle electrical impulses from the active muscle sites of a residual limb of a wearer. The electrodes are customized medical grade passive electrodes meant for capturing EMG signals from the surface of the skin as a result of contraction of muscle groups when the user intends to perform a gesture. These electrodes may not have any pre-amplification electronic module.

[019] The plurality of electrodes (110) may be placed nearly equidistant from each other such that the electrodes are in contact with the circumference of the residual limb at different locations. The electrodes can be placed in a flexible, elastic, inelastic or adjustable socket made for a wearer, or can be worn around the hand as a wearable band having multiple EMG sensors. Though linear equidistant placement of electrodes is preferred, precise placement of electrodes at a specific location or at a specific orientation is not essential. Further, if required for placement of electrodes, as in trauma cases, site identification can be done to check for the presence of EMG signals.

[020] In one embodiment, the system for controlling assistive devices further comprises at least one orientation sensor (150) and at least one motion sensor (140) mounted either on the residual limb or the assistive device or both, for independently capturing direction and orientation related movements of a limb. The motion and orientation sensors are selected from, but not limited to, accelerometer, gyroscope, magnetometer, etc. The motion and orientation sensors may be placed on the residual or healthy limb and/or the assistive device that perform independent motion relative to the joint they are connected with, to estimate any motion being performed by the limb or change in pose of the limb. The motion and orientation sensors reduce incorrect/inaccurate detection for different poses or when performing motion of the assistive device or any body part that can directly or indirectly affect the EMG signals at the residual limb.

[021] In another embodiment, the system for controlling assistive devices to eliminate the need of an external/additional device for performing a particular gesture further comprises a sensing module (120) for sensing micro-volt EMG signals through the plurality of electrodes and amplifying, filtering and converting the signals into machine readable digital format. The sensing module is an analog electronic circuit which senses the EMG signals and then amplifies, filters and digitizes them using an ADC to provide a digital EMG signal.
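The amplify-filter-digitize chain described above can be sketched as follows. This is a minimal illustrative model only: the gain, the moving-average window (a stand-in for the analog filter stage), the reference voltage and the ADC resolution are assumptions not specified in the application.

```python
def amplify(samples, gain=1000.0):
    """Amplify micro-volt EMG samples (the gain is a hypothetical value)."""
    return [s * gain for s in samples]

def moving_average(samples, window=3):
    """Crude noise-reduction filter: average over up to `window` recent samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def digitize(samples, v_ref=3.3, bits=12):
    """Quantize to ADC counts, clamped to the converter's input range."""
    full_scale = (1 << bits) - 1
    counts = []
    for s in samples:
        s = min(max(s, 0.0), v_ref)  # clamp to 0..v_ref volts
        counts.append(round(s / v_ref * full_scale))
    return counts

# Raw surface-EMG samples in volts (micro-volt scale, illustrative values)
raw = [50e-6, 120e-6, 300e-6, 180e-6, 60e-6]
digital = digitize(moving_average(amplify(raw)))
```

In a real sensing module the amplification and filtering would be dedicated analog stages ahead of the ADC; the functions above only mirror the order of operations the paragraph describes.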

[022] Said sensing module may be configured for lead-off detection of each electrode, and in case of detachment of any of the electrodes from the user’s skin surface, the user is informed by audio-visual alarms.

[023] In yet another embodiment, the system for controlling assistive devices to eliminate the need of an external/additional device for performing a particular gesture further comprises a processing unit (130) configured to acquire signals from the electrodes (110) and sensors (140, 150), wherein the digital EMG signals are stored and/or analyzed. The processing unit (130) is configured to determine the energy level of the digital EMG signals. In this regard, the processing unit (130) is configured to compare the energy level with a pre-determined or threshold energy level. In case the energy level is greater than the pre-determined energy level, the digital signals are classified as a gesture signal until the energy level falls below the pre-determined/threshold energy level. Accordingly, valid digital EMG signals are identified by the system.

[024] The digital EMG signal has two phases: a start trigger, when the wearer starts moving his hand, and an end trigger, when the movement of the hand by the wearer ends.

[025] Said processing unit (130) is configured to detect the presence of motion. When some motion is observed, it generates a start trigger and an end trigger, and the digital EMG data is captured between the triggers.
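The start-trigger/end-trigger behaviour described in paragraphs [023]-[025] can be sketched as a simple threshold crossing over per-window energy values. The energy values and the threshold below are illustrative assumptions, not figures from the application.

```python
def segment_gesture(energy, threshold):
    """Return (start_index, end_index) of the first above-threshold run,
    or None if the energy never crosses the threshold."""
    start = None
    for i, e in enumerate(energy):
        if start is None and e > threshold:
            start = i                 # start trigger: energy rises above threshold
        elif start is not None and e <= threshold:
            return (start, i)         # end trigger: energy falls back below
    return (start, len(energy)) if start is not None else None

# Illustrative per-window average-energy values for one gesture
energy = [0.1, 0.2, 1.5, 2.0, 1.8, 0.3, 0.1]
print(segment_gesture(energy, threshold=1.0))  # → (2, 5)
```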

[026] Said digital EMG signals may be classified as a gesture signal and further analyzed/processed by the processing unit (130). The processing unit (130) is configured to extract parameters such as Mean; Variance; Correlation; Mode; Quartiles; Median; Standard Deviation; Minimum; Maximum, but need not be limited to these statistical parameters. Some other parameters like Slope sign changes; Zero crossing; Structure; Length are determined for every channel of digital signals. Also, values from the motion and orientation sensors are collected and processed for motion and pose parameters.
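The per-channel parameter extraction above can be sketched with a few of the listed statistics (mean, variance, minimum, maximum, zero crossings, slope sign changes), using only the standard library. The sample values are illustrative.

```python
import statistics

def zero_crossings(x):
    """Count sign changes between consecutive samples."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0)

def slope_sign_changes(x):
    """Count sign changes in the first difference of the signal."""
    diffs = [b - a for a, b in zip(x, x[1:])]
    return sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)

def channel_features(x):
    """Feature vector for one digital EMG channel."""
    return [
        statistics.mean(x),       # Mean
        statistics.pvariance(x),  # Variance
        min(x),                   # Minimum
        max(x),                   # Maximum
        zero_crossings(x),        # Zero crossing
        slope_sign_changes(x),    # Slope sign changes
    ]

channel = [0.0, 0.5, -0.3, 0.8, -0.6, 0.2]  # illustrative digitized samples
features = channel_features(channel)
```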

[027] Based on any combination of some or all of the parameters mentioned, a feature vector for the gesture is created. Each feature vector is assigned to a gesture performed by the user and thereby the assistive device. Accordingly, a plurality of feature vectors is determined for each gesture or task that the user wishes to perform. Also, feature vectors are determined when the person is not performing any gesture, or is performing some kind of motion which might correspond to false detection as a gesture.

[028] In an embodiment, the system comprises a supervised machine learning module, wherein the plurality of feature vectors and their respective ascribed gestures and false gestures are inputs for training the supervised machine learning module. The supervised machine learning module may generate some training parameters, such as, but not limited to, weights and distances, after the training process has completed. Training parameters, if any, are stored in a volatile or non-volatile memory and used during a gesture identification phase as needed. After training has been completed, the system and method of the present invention can identify gestures/actions that the user intends to perform, as well as motions falsely detected as gestures, and control the actuator or the assistive device accordingly.
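The training step above can be illustrated with a nearest-centroid classifier over feature vectors. The application does not name a specific learning algorithm, so this stand-in, its two-dimensional feature vectors and its class labels are purely hypothetical; here the per-class centroids play the role of the stored "training parameters", and a "rest" class captures false motions so they map to no movement.

```python
import math

def train(samples):
    """samples: list of (feature_vector, gesture_label) pairs.
    Returns per-class centroids (the 'training parameters')."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))

# Hypothetical training data: feature vectors labelled with gestures,
# plus "rest" examples representing false motions.
training_data = [
    ([0.9, 0.1], "close"), ([0.8, 0.2], "close"),
    ([0.1, 0.9], "open"),  ([0.2, 0.8], "open"),
    ([0.1, 0.1], "rest"),  ([0.0, 0.2], "rest"),
]
centroids = train(training_data)
print(predict(centroids, [0.85, 0.15]))  # → close
```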

[029] Once the system is trained, as and when the wearer intends to perform one or more tasks, the electrodes obtain EMG signals. These EMG signals are then amplified, filtered and digitized by the ADC to provide a digital EMG signal.

[030] The digital EMG signals are thereafter received by the processing unit, wherein the digital EMG signals are stored and/or analyzed by the processing unit to determine whether the digital EMG signals correspond to a gesture/action intended to be performed by the wearer. The processing unit is configured to determine the energy level of the digital EMG signals. In this regard, the processing unit is configured to compare the energy level with a pre-determined or threshold energy level. In case the energy level is greater than the pre-determined energy level, the digitized signals are classified as the start of a gesture signal until the energy level falls below the pre-determined/threshold energy level. Accordingly, valid digitized signals are identified by the system.

[031] The digital EMG signals classified as a gesture signal are further analyzed/processed by the processing unit to determine a gesture associated with the digital EMG signal and to generate a control signal. The processing unit is configured to extract parameters such as Mean; Variance; Correlation; Mode; Quartiles; Median; Standard Deviation; Minimum; Maximum, but need not be limited to these statistical parameters. Some other parameters like Slope sign changes; Zero crossing; Structure; Length are determined for every channel of EMG samples. Also, values from the motion and orientation sensors are collected and processed for motion and pose parameters.

[032] Based on any combination of some or all of the parameters mentioned, a feature vector for a detected gesture is created. In an embodiment, each feature vector is an input to a trained supervised machine learning module which generates a control signal specifying the gesture that has been performed by the wearer of the assistive device, or no motion when a false gesture is detected. The gesture or the control signal identified by the trained supervised machine learning module is communicated to the actuator, which controls the assistive device to perform the given task.

[033] In an embodiment, the present invention provides a method for controlling an assistive device. The method is carried out on the system discussed hereinbefore. The method starts with capturing surface EMG signals from the active muscle sites of the wearer. The EMG signals are then amplified, filtered and converted to a digital signal. The digital EMG signals are thereafter analyzed to determine the energy level of the digital EMG signals. Upon determining the energy level, the energy level is compared with a pre-determined or threshold energy level. In case the energy level is greater than the pre-determined energy level, the digital signals are classified as the start of a gesture signal until the energy level falls below the pre-determined/threshold energy level. The digital signals classified as a gesture signal are further analyzed/processed, whereby parameters such as Mean; Variance; Correlation; Mode; Quartiles; Median; Standard Deviation; Minimum; Maximum, but not limited to these statistical parameters, are extracted. Some other parameters like Slope sign changes; Zero crossing; Structure; Length are determined for every channel of digital signals. Also, values from the motion and orientation sensors are collected and processed for motion and pose parameters.

[034] Based on any combination of some or all of the parameters mentioned, a feature vector for the gesture is created, and to each feature vector a gesture performed by the user is ascribed. Accordingly, a plurality of feature vectors is determined for each gesture or task to be performed by the assistive device. Also, feature vectors are determined when the person is not performing any gesture, or is performing some kind of motion which might correspond to false detection as a gesture.

[035] In an embodiment, the plurality of feature vectors and their respective ascribed gestures are inputs for training a supervised machine learning module. The supervised machine learning module may generate some training parameters, such as, but not limited to, weights and distances, after the training process has completed. Training parameters, if any, are stored in a volatile or non-volatile memory and used during a gesture identification phase as needed. After training has been completed, the system and method of the present invention can identify the gestures/actions that the user intends to perform and control the assistive device accordingly. If motions are falsely detected as gestures, the assistive device performs no motion.

[036] Once the method is trained, as and when the wearer intends to perform one or more tasks, surface EMG signals are captured from the active muscle sites of the wearer. The EMG signals are then amplified, filtered and converted to a digital EMG signal. The digital EMG signals are thereafter analyzed to determine whether the digital EMG signals correspond to a gesture/action intended to be performed by the wearer. The method determines the energy level of the digital signals. Upon determining the energy level, the energy level is compared with a pre-determined or threshold energy level. In case the energy level is greater than the pre-determined energy level, the digital signals are classified as a gesture signal until the energy level falls below the threshold.

[037] The digital signals classified as a gesture signal are further analyzed/processed, whereby parameters such as Mean; Variance; Correlation; Mode; Quartiles; Median; Standard Deviation; Minimum; Maximum, but not limited to these statistical parameters, are extracted. Some other parameters like Slope sign changes; Zero crossing; Structure; Length are determined for every channel of EMG samples. Also, values from the motion and orientation sensors are collected and processed for motion, pose or orientation parameters.

[038] Based on any combination of some or all of the parameters mentioned, a feature vector for a detected gesture is created. In an embodiment, each feature vector is an input to a trained supervised machine learning module which generates a control signal specifying the gesture that has been performed by the wearer of the assistive device. The gesture or the control signal identified by the trained supervised machine learning module is communicated to the actuator, which controls the assistive device to perform the given task. If motions are falsely detected as gestures, the assistive device performs no motion.
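The final control step, including the no-motion outcome for falsely detected gestures, can be sketched as a simple mapping from the predicted label to an actuator command. The command names and the "rest" label below are hypothetical, chosen only to illustrate the flow.

```python
# Hypothetical actuator command codes for the four example gestures
GESTURE_COMMANDS = {
    "close": "ACTUATE_CLOSE",
    "open": "ACTUATE_OPEN",
    "point": "ACTUATE_POINT",
    "pinch": "ACTUATE_PINCH",
}

def control_signal(predicted_label):
    """Return the actuator command for a valid gesture, or None (no motion)
    when the classifier reports a falsely detected gesture ('rest')."""
    return GESTURE_COMMANDS.get(predicted_label)

print(control_signal("close"))  # → ACTUATE_CLOSE
print(control_signal("rest"))   # → None
```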

[039] Figure 2 shows a system (200) for controlling an assistive device (10) in accordance with an embodiment of the invention. The system is similar to the system illustrated in Figure 1, and further comprises one or more sensors which provide feedback of parameters of the assistive device. The sensors can include force or pressure feedback sensors, overcurrent limit sensors, finger movement limiting sensors, etc. Based on input from the sensors, the system may modify or alter the control signals controlling the assistive device to prevent the assistive device or its actuator from damage.

[040] Advantageously, with the present invention a wearer of the assistive device can perform, or control the assistive device to perform, a task or action by thinking of it, just as the wearer would perform the action with a healthy hand. Further, the present invention, by adapting to the body signals of a person, cuts down on the training needed to control the assistive device.

[041] Figure 3 shows a method for controlling an assistive device (10) as another embodiment of the present invention.

[042] A method for controlling assistive devices which avoids undesired triggering of an assistive device, is intuitive in nature, and operates without the need of an external controlling device, wherein the method comprises the steps of: mounting a plurality of non-invasive electrodes (110) on the active muscle sites of a wearer at predetermined positions; capturing the electromyography signals through said electrodes (110); capturing motion and orientation of the residual limb and the assistive device independently by a motion sensor (140) and an orientation sensor (150) respectively; amplifying the received electromyography signals, reducing noise, filtering the received signals and digitizing the said signals by a sensing module (120) to provide digital electromyography signals to a processing unit (130); receiving the digital electromyography signal data from said sensing module (120); storing the data, extracting various parameters, and classifying and analyzing said data by the processing unit (130); generating training parameters and storing the parameters in memory; identifying correct gestures and false motions by a machine learning module (170); and sending control signals to a plurality of actuators (160) configured to receive signals from said supervised machine learning module and perform the gesture intended by the wearer.

The following experimental results illustrate the valid gesture detection and classification of EMG signals in accordance with the system and method of the present invention.

A. Valid Gesture Detection (for example: Open, Close, Point, Pinch gestures)

Figure 1: Surface EMG data plot of 4 electrodes placed on the user’s skin at 4 different locations. The blue line plot represents raw EMG data and the red line plot represents the energy level of the valid gesture.

Figure 1 shows digitized surface electromyography (EMG) signal data from four EMG electrodes placed across forearm muscle sites of a normal hand user. In the next step, the average energy of all the sampled data is calculated, and the plot of the average energy of the signal is shown as a red line.

The Y-axis represents the amplitude of the EMG signals and their energy.

The X-axis represents the number of samples captured from an individual sensor.

Figure 2: Combined average energy plot of 4 EMG sensors.

For example, when the hand is in a resting state, all four EMG signals have steady values near the zero baseline. When the user performs a gesture, e.g. closing of the hand, all four EMG signals show variations which are a measure of the magnitude of muscle force, as shown in Figure 1. The muscle force generated at different muscle sites is different, and it also varies with different gestures. Hence the pattern of the close hand gesture will be different from the pattern of the open hand gesture.

Figure 3 shows a magnified view of the energy plot in Figure 2. The red line represents the threshold level and the blue line plot represents the average energy of the sampled data of a particular gesture.

In the next step, the combined average energy of the sampled data is calculated; an example plot of the combined average energy is shown in Figure 2 and Figure 3.

When the user performs a gesture, e.g. the close of hand gesture, and the average energy value goes above the predetermined threshold value, it is a valid gesture; otherwise it is a steady hand state. When a user performs a gesture, the energy goes above the threshold, indicating the start of the gesture, which continues until the gesture is completed, after which the energy goes below the threshold, indicating the end of the gesture, as shown in Figure 3.

B. Training and Classification of the Machine Learning Module

In the experiment carried out in the lab with a normal user wearing four surface EMG sensors across the forearm, the machine learning model was trained on the sampled data of valid gestures with 18 sets of close-open gestures, 18 sets of pinch-open gestures, and 18 sets of point-open gestures.

The table below depicts experimental results regarding the accuracy of recognition of four gestures (Open, Close, Point, Pinch) by the classifier.

The total accuracy of the training algorithm over the 18 sets of close-open, pinch-open and point-open sequences is 92.5694.

[043] Advantages of the Invention:

> The user does not need exhaustive training

> User can perform gestures with assistive devices with minimum help from healthcare provider/physiotherapist

> No external device or app is needed for performing a gesture

> Minimizes false signals due to undesired movements

> Control over multiple degrees of freedom can be achieved