

Title:
PRESSURE-BASED FORCE MYOGRAPHY (pFMG) SYSTEM FOR DETERMINING BODY MOVEMENT
Document Type and Number:
WIPO Patent Application WO/2023/220775
Kind Code:
A1
Abstract:
An apparatus (1) for determining body movement includes a chamber (2) having an internal volume, wherein the chamber is configured to vary in internal volume in response to body movement. The apparatus (1) includes a sensor module (3) configured to detect variances in the internal volume of the chamber (2).

Inventors:
ALICI GURSEL (AU)
TAWK CHARBEL (AU)
ZHOU HAO (AU)
ZAGDOUN MARINE (AU)
YOUNG SAM (AU)
Application Number:
PCT/AU2023/050399
Publication Date:
November 23, 2023
Filing Date:
May 12, 2023
Assignee:
UNIV WOLLONGONG (AU)
International Classes:
A61B5/11; A61B5/00; A61F2/68; A61F2/70; A61F4/00; G06F3/01; G06N3/02; G06N20/00
Foreign References:
CN111061368A2020-04-24
KR20150121938A2015-10-30
Other References:
S. YOUNG ET AL.: "Pattern Recognition for Prosthetic Hand User's Intentions using EMG Data and Machine Learning Techniques", 2019 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, pages 544-550, XP033630068, DOI: 10.1109/AIM.2019.8868766
C. TAWK ET AL.: "3D Printed Soft Pneumatic Bending Sensing Chambers for Bilateral and Remote Control of Soft Robotic Systems", 2020 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2020, pages 922-927, XP033807604, DOI: 10.1109/AIM43001.2020.9158959
Attorney, Agent or Firm:
SPRUSON & FERGUSON (AU)
Claims:
CLAIMS

1. An apparatus for determining body movement, the apparatus including: a chamber having an internal volume, wherein the chamber is configured to vary in internal volume in response to body movement; and a sensor module configured to detect variances in the internal volume of the chamber.

2. The apparatus for determining body movement according to claim 1, wherein the chamber includes a deformable portion configured to be contactable with the body of a user thereby to vary the internal volume.

3. The apparatus for determining body movement according to claim 1 or claim 2, wherein the sensor module includes a pressure sensor configured to detect variances of pressure within the internal volume of the chamber.

4. The apparatus for determining body movement according to any one of the preceding claims, further including a plurality of chambers, each chamber having a respective internal volume and being configured to respectively vary in internal volume in response to body movement and wherein the sensor module is configured to detect variances in the respective internal volume of each chamber.

5. The apparatus for determining body movement according to claim 4, wherein the sensor module includes a pressure sensor for each chamber and each pressure sensor is configured to detect variances of pressure within the respective internal volume of each chamber.

6. The apparatus for determining body movement according to claim 4 or claim 5, further including at least five chambers.

7. The apparatus for determining body movement according to any one of claims 4 to 6, wherein the chambers are configured to be worn around a forearm of a user.

8. The apparatus for determining body movement according to claim 7, wherein the chambers are configured to be uniformly circumferentially disposed around the forearm of the user.

9. The apparatus for determining body movement according to any one of the preceding claims, further including a controller configured to acquire a bio-signal from the sensor module, wherein the bio-signal is indicative of variances in the internal volume of the or each chamber.

10. The apparatus for determining body movement according to claim 9, wherein the controller is configured to process the bio-signal to identify one or more features within the bio-signal.

11. The apparatus for determining body movement according to claim 10, wherein the one or more features of the bio-signal are time domain features.

12. The apparatus for determining body movement according to claim 10 or claim 11, wherein the features are one or more of: Root Mean Square (RMS); Integrated Absolute Value (IAV); Simple Square Integral (SSI); Sample Variance (Var); Mean Absolute Value (MAV); Modified Mean Absolute Value type 1 (MAV1); Modified Mean Absolute Value type 2 (MAV2); and Modified Mean Absolute Value type 3 (MAV3).

13. The apparatus for determining body movement according to any one of claims 10 to 12, wherein the controller is configured to process the one or more features to predict a gesture corresponding to the features.

14. The apparatus for determining body movement according to claim 13, wherein the controller is configured to implement a machine learning algorithm to predict the gesture.

15. The apparatus for determining body movement according to claim 13, wherein the controller is configured to implement a classifier to predict the gesture.

16. The apparatus for determining body movement according to claim 15, wherein the classifier is based on one or more of the following classifier models: Decision Tree, k-Nearest Neighbour, Random Forest, AdaBoost, Gradient Boosting, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Classifier (RBF and Nu), Gaussian (Naive Bayes and Process) and Neural Network.

17. The apparatus for determining body movement according to any one of claims 13 to 16, wherein the controller is configured to make multiple predictions of a gesture based on the same bio-signal and to post-process the multiple predictions to determine a final predicted gesture.

18. The apparatus for determining body movement according to any one of claims 9 to 17, wherein the controller is configured to acquire the bio-signal as a rolling window of samples of the variances in internal volume detected by the sensor module.

19. The apparatus for determining body movement according to any one of claims 9 to 18, wherein the controller is configured to output a control signal in response to the bio-signal.

20. An apparatus for determining body movement and controlling a device, the apparatus including: at least one deformable chamber; a sensor module configured to detect pressure information based on the pressure within the deformable chamber; and a controller configured to: acquire the pressure information from the sensor module; identify one or more time domain features within the pressure information; implement a classifier to predict a gesture corresponding to the pressure information based on the one or more time domain features; and output to the device a control signal corresponding to the gesture.

21. A method for determining body movement and controlling a device, the method including the steps of: detecting pressure information within a deformable chamber worn by a user; identifying one or more time domain features within the pressure information; implementing a classifier to predict a gesture corresponding to the pressure information based on the one or more time domain features; outputting to the device a control signal corresponding to the gesture.

Description:
PRESSURE-BASED FORCE MYOGRAPHY (pFMG) SYSTEM FOR DETERMINING BODY MOVEMENT

FIELD OF THE INVENTION

[0001] The present invention relates to a pressure-based force myography (pFMG) system.

[0002] The invention has been developed primarily for use as a pFMG system for determining body movement and in particular for detecting gestures for human-machine interaction, for example, for prosthetics. However, it will be appreciated that the invention is not limited to this particular field of use. For example, the invention may be used to detect various movements; may be integrated with multi-modal sensing systems; and may be used in various human-machine interaction applications.

BACKGROUND

[0003] Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of the common general knowledge in the field.

[0004] The loss of a limb, particularly an upper body limb (unilateral or bilateral), poses a major problem that must be solved to restore mobility. Where a limb is lost, for example due to amputation, traumatic accident or congenital disease, prosthetic devices are used to replace the limb. Prosthetic devices are one example of a range of devices involving body movement to facilitate human-machine interactions. In this field, gesture recognition systems are highly desired as an effective means of human-machine interfacing for such devices.

[0005] Broadly speaking, there are three main kinds of prosthetic devices: cosmetic devices; passive body powered devices; and active myoelectric devices. Cosmetic prostheses provide life-like representations of lost limbs but are passive and lack utility. Body powered prostheses utilise the mechanical power of other parts of the body to partially emulate the utility of a lost limb. Active prostheses are externally powered and require a command signal to operate. Myography, the measurement of forces produced by muscles under contraction, has been utilised to facilitate human-machine interaction with active prosthetic devices. Typically, myoelectric prostheses measure electrical activity of muscle fibres to determine the intentions of a user and operate the prostheses in accordance with those intentions.

[0006] Overall, traditional prostheses are known to have significant rejection rates due to unacceptable aesthetics, poor performance, high costs and discomfort. Whilst myoelectric prostheses are the most advanced, they are expensive and also have further difficulties and limitations.

[0007] More specifically, myoelectric prostheses use surface electromyography (sEMG) to measure bioelectric signals in the electrical activity of the muscle fibres’ action potential via electrodes placed directly on the skin of a user’s residual limb. This electrical activity is then used as a control signal to operate an externally powered prosthesis. The sEMG electrodes are relatively effective in some cases but are not an ideal solution for everyday use due to the complexity of the electrodes and their placement and the fact that the signal can be easily perturbed by sweat, muscle fatigue and changes in arm movement and position. Moreover, the learning process for amputees to adapt to such sEMG systems is long and tedious.

[0008] As a partial solution to the limitations of surface electromyography, various devices incorporating multiple sEMG electrodes have been attempted and machine learning algorithms have been utilised to augment the interpretation of sEMG signals. However, the sensitivity of sEMG electrodes and their ergonomics remain imperfect. Additionally, both the precise placement of the electrodes and the limb position significantly affect the signal, exacerbating the above difficulties.

[0009] Force-based myography systems have also been attempted. However, such systems rely on force sensitive resistors (FSR), which require ‘normal’ force or pressure acting perpendicularly on the surface of the FSR to generate a mechanical bio-signal from the physical displacement of the FSR. These FSR-based bio-signals are generally unstable and unreliable, with continuous change in the baseline reading. Existing force-based myography systems thus rely on sensors that suffer limitations similar to the sEMG electrodes discussed above. Additionally, motion tracking and vision-based systems for determining body movement and gesture recognition are unsuitable for many applications as such systems often require expensive and bulky equipment, wearable markers, clear open workspaces without occlusion, and favourable lighting.

[0010] Thus, existing gesture recognition and body movement determination techniques have shortcomings in one or more of accuracy; sensitivity; ergonomics; cost; accessibility; complexity; and difficulties in customisation, fabrication and adaptability. Accordingly, having identified the above limitations of existing solutions, the inventors of the present invention determined that there remains a need for robust, ergonomic and cost-effective systems and methods for accurately, conveniently and adaptively determining body movements.

SUMMARY OF THE INVENTION

[0011] It is an object of the preferred embodiments of the present invention to overcome or ameliorate at least one of the disadvantages of the prior art, or to provide a useful alternative.

[0012] It is an object of the invention in one particular form to provide a robust, ergonomic and cost-effective system and method for accurately and conveniently determining body movements. Particularly preferred embodiments of the invention provide a customisable pFMG system which: is easily fabricated or 3D printable; is sweat resistant; can be worn over clothes; is durable; is resilient; is soft and deformable and is not affected by large mechanical loads or disturbances; does not require direct contact with skin; is able to be combined with other sensing modalities; requires minimal signal processing; is integratable into portable wearable devices; requires minimal calibration; and/or undergoes minimal data drifting.

[0013] According to an aspect of the present invention, there is provided a system for identifying an intention of a user of a device requiring a control command and controlling said device according to the intention of the user, the system including: one or more pressure-based force myography (pFMG) units wearable by the user, wherein each pFMG unit includes: a pressure sensitive chamber (PSC) (capable of undergoing a mechanical deformation) having a pressure sensor configured to measure the pressure within the chamber; and a control module configured to: receive pressure data indicative of the pressure measured by the pressure sensor of each of the one or more pFMG units; detect physical movement of the user on the basis of the pressure measured within the chamber of each of the one or more pFMG units; determine the intention of the user on the basis of the physical movement; and provide the control command to the device on the basis of the intention.

[0014] According to an aspect of the present invention there is provided an apparatus for determining body movement, the apparatus including: a chamber having an internal volume, wherein the chamber is configured to vary in internal volume in response to body movement; and a sensor module configured to detect variances in the internal volume of the chamber.

[0015] In some embodiments, the chamber includes a deformable portion configured to be contactable with the body of a user thereby to vary the internal volume. In some embodiments, the apparatus further includes a plurality of chambers, each chamber having a respective internal volume and being configured to respectively vary in internal volume in response to body movement and wherein the sensor module is configured to detect variances in the respective internal volume of each chamber. In some embodiments, the apparatus includes at least five chambers.

[0016] In some embodiments, the sensor module includes a pressure sensor configured to detect variances of pressure within the internal volume of the chamber. In some embodiments, the sensor module includes a pressure sensor for each chamber and each pressure sensor is configured to detect variances of pressure within the respective internal volume of each chamber.

[0017] In some embodiments, the chambers are configured to be worn around a forearm of a user. In some embodiments, the chambers are configured to be uniformly circumferentially disposed around the forearm of the user.

[0018] In some embodiments, the apparatus further includes a controller configured to acquire a bio-signal from the sensor module, wherein the bio-signal is indicative of variances in the internal volume of the or each chamber. In some embodiments, the controller is configured to process the bio-signal to identify one or more features within the bio-signal. In some embodiments, the one or more features of the bio-signal are time domain features. In some embodiments, the features are one or more of: Root Mean Square (RMS); Integrated Absolute Value (IAV); Simple Square Integral (SSI); Sample Variance (Var); Mean Absolute Value (MAV); Modified Mean Absolute Value type 1 (MAV1); Modified Mean Absolute Value type 2 (MAV2); and Modified Mean Absolute Value type 3 (MAV3).

[0019] In some embodiments, the controller is configured to process the one or more features to predict a gesture corresponding to the features. In some embodiments, the controller is configured to implement a machine learning algorithm to predict the gesture. In some embodiments, the controller is configured to implement a classifier to predict the gesture. In some embodiments, the classifier is based on one or more of the following classifier models: Decision Tree, k-Nearest Neighbour, Random Forest, AdaBoost, Gradient Boosting, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Classifier (RBF and Nu), Gaussian (Naive Bayes and Process) and Neural Network. In some embodiments, the controller is configured to make multiple predictions of a gesture based on the same bio-signal and to post-process the multiple predictions to determine a final predicted gesture. In some embodiments, the controller is configured to acquire the bio-signal as a rolling window of samples of the variances in internal volume detected by the sensor module. In some embodiments, the controller is configured to output a control signal in response to the bio-signal.

[0020] According to another aspect of the invention, there is provided an apparatus for determining body movement and controlling a device, the apparatus including: at least one deformable chamber; a sensor module configured to detect pressure information based on the pressure within the deformable chamber; and a controller configured to: acquire the pressure information from the sensor module; identify one or more time domain features within the pressure information; implement a classifier to predict a gesture corresponding to the pressure information based on the one or more time domain features; and output to the device a control signal corresponding to the gesture.

[0021] According to another aspect of the invention, there is provided a method for determining body movement and controlling a device, the method including the steps of: detecting pressure information within a deformable chamber worn by a user; identifying one or more time domain features within the pressure information; implementing a classifier to predict a gesture corresponding to the pressure information based on the one or more time domain features; outputting to the device a control signal corresponding to the gesture.

BRIEF DESCRIPTION OF THE DRAWINGS

[0022] Preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:

[0023] Figure 1(A) is a perspective view of a pressure-based force myography (pFMG) chamber according to an embodiment of the invention;

[0024] Figure 1(B) is a perspective view of the cross section of the pFMG chamber of Figure 1(A);

[0025] Figure 1(C) is a side view of the pFMG chamber of Figure 1(A);

[0026] Figure 1(D) is an end view of the pFMG chamber of Figure 1(A);

[0027] Figure 2(A) is a graph of measured pressure as a function of force applied to a pFMG sensor unit according to an embodiment of the invention;

[0028] Figure 2(B) is a scaled view of the graph of Figure 2(A) with a line showing the linear fitting of pressure as a function of force for small force values;

[0029] Figure 3 is a graph of measured pressure as a function of increasing force versus decreasing force applied to a pFMG sensor unit according to an embodiment of the invention;

[0030] Figure 4 is a graph of measured pressure according to a plurality of measurements of the same force applied to a pFMG sensor unit according to an embodiment of the invention;

[0031] Figure 5 is a graph of measured pressure over time for a constant force applied to a pFMG sensor unit according to an embodiment of the invention;

[0032] Figure 6(A) is a perspective front view of a pFMG armband including five pFMG chambers and a control housing according to an embodiment of the invention;

[0033] Figure 6(B) is a perspective side view of the pFMG armband of Figure 6(A);

[0034] Figure 7 is a perspective view of a pFMG armband according to an embodiment of the invention when worn on a forearm of a user;

[0035] Figure 8 is a perspective view of the anterior; side; and posterior part of a left forearm showing positioning of the pFMG chambers of a pFMG armband according to an embodiment of the invention;

[0036] Figure 9 is a perspective view of common gestures performed in myography-based gesture recognition;

[0037] Figure 10 is a characterisation of the gestures of Figure 9 as determined by a pFMG system according to an embodiment of the invention;

[0038] Figure 11 is a graph of filtered data versus raw data acquired by a pFMG system according to an embodiment of the invention;

[0039] Figure 12(A) is a graph of featurised pFMG data acquired by a pFMG system according to an embodiment of the invention based on the Mean Absolute Value time domain feature;

[0040] Figure 12(B) is a graph similar to Figure 12(A) but based on the Variance time domain feature;

[0041] Figure 12(C) is a graph similar to Figure 12(A) but based on the Root Mean Square time domain feature;

[0042] Figure 12(D) is a graph similar to Figure 12(A) but based on the Log Detector time domain feature;

[0043] Figure 12(E) is a graph similar to Figure 12(A) but based on the Waveform Length time domain feature;

[0044] Figure 13 is a representation of sliding window data acquisition by a pFMG system according to an embodiment of the invention;

[0045] Figure 14 is a collection of graphs showing classifier results for gesture prediction by a pFMG system according to an embodiment of the invention;

[0046] Figure 15 is an indicative normalised confusion matrix for an LDA classifier implementation of a pFMG system according to an embodiment of the invention;

[0047] Figure 16 is a flowchart presenting a simplified overview of the framework process of a pFMG method according to an embodiment of the invention;

[0048] Figure 17 is a series of side views of a right forearm comparing dynamic gestures and quasi-dynamic gestures;

[0049] Figure 18 is a perspective view of an implementation of an extended framework of a pFMG system according to an embodiment of the invention, shown using quasi-dynamic gestures to control a lighting strip;

[0050] Figure 19(A) is a perspective view of the components of a hybrid EMG and pFMG sensor according to an embodiment of the invention;

[0051] Figure 19(B) is a perspective view of the assembled hybrid EMG and pFMG sensor of Figure 19(A);

[0052] Figure 20(A) is a perspective view of a hybrid microphone-based MMG sensor and pFMG sensor according to an embodiment of the invention; and

[0053] Figure 20(B) is a perspective view of an MMG sensor and a near-infrared spectroscopy (NIRS) sensor for implementation in a hybrid sensor similar to Figure 20(A).

[0054] For ease of reference in the figures and throughout the description, corresponding features have been given corresponding reference numerals.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0055] The pressure-based force myography (pFMG) system 1 according to embodiments of the invention is a skin- and/or garment-mounted non-invasive system consisting of one or more pressure sensitive air chambers 2 configured to convert the mechanical activities of the muscles, tendons and other parts of the human arm or body, and any other physical movement associated with the human body and its limbs, into bio-signals. Bio-signals are typically bioelectric signals; however, the pFMG system 1 is able to acquire a mechanical bio-signal, which is intrinsically more stable than electrical bio-signals, particularly in the frequency ranges relevant to body movement applications.

[0056] When users are interacting with devices, for example active prosthetic devices, the devices may be configured to respond to control signals. Control signals can be provided by users via various input devices. Gestures and other expressive body movements are convenient and intuitive motions for users. In order to interface such gestures with devices, there is a need to measure the body movement of the user and interpret the gesture in a machine-readable format. This interfacing involves acquiring measurements of the body movement, referred to as bio-signals, and identifying meaningful gestures within those bio-signals.

[0057] Referring generally to all the Figures, there is illustrated a pressure-based force myography (pFMG) system 1 for determining body movement. The following description of the system 1 is made primarily with reference to controlling prosthetic devices, specifically hand prostheses. However, the system 1 is applicable to a vast array of devices including various human-computer interaction systems, sign language recognition systems, musical creation systems, and robotics control and entertainment/virtual reality interfaces. The system 1 includes one or more pressure sensitive deformable chambers 2 configured to be worn by a user. Each pressure sensitive deformable chamber 2 includes a sensor 3 for measuring variances in pressure within the chamber. The measured variances in pressure are used to transform bodily movements caused by muscular contractions of the user into bio-signals. The bio-signals are processed by a controller 4 to identify a defined set of movements, and in particular to recognise gestures performed by the user. The identified gestures may then be used to correspondingly control prosthetic devices and the like.

[0058] The one or more pressure sensitive chambers 2 are at least partially formed from a soft, deformable material designed to be mechanically displaced under a minimal input force when in contact, directly or indirectly, with the body of a user. The chambers 2 may be soft pneumatic sensing chambers that allow the identification of a user’s intentions and the control of prosthetic hands. The chambers 2 are made of a soft and flexible material that deforms when muscles are actuated. The chambers 2 are optimised to achieve a large mechanical displacement under a minimal force input while deforming only the part of the chamber in contact with the skin.

[0059] Referring to FIGURES 1 (A) to 1 (D), there is illustrated a pressure-based force myography (pFMG) sensor unit 2 in the form of a pressure sensitive deformable chamber. The chamber 2 includes a body 5 defining a hollow interior 6; and a deformable portion having a contact point 7 for varying the pressure within the hollow interior when the contact point is displaced. The chamber 2 is a substantially rectangular prism 5 that is 32.2 millimetres (mm) wide, 50mm long and 7.4mm thick. In alternative embodiments, the body 5 of the chamber 2 may be an alternative shape. The contact point 7 is disposed on a broad face of the prism 5. The contact point 7 is substantially circular, approximately 23.32mm in diameter, and is elevated from the face of the main body 5 of the chamber 2 by a substantially cylindrical extension 8 of approximately 7mm giving the chamber 2 a total thickness of approximately 14.4mm. In alternative embodiments, the contact point 7 may be of a shape other than circular and the extension 8 may be of a shape other than cylindrical. The cylindrical extension 8 extends from a central portion 9 of the face of the prism 5. The central portion 9 is symmetrically positioned on the face of the prism 5 and rises up to approximately 3.66mm from the face of the prism in a longitudinal arc having a radius of approximately 56.73mm. The central portion 9 is deformable and is configured to be deformable along a continuous range of positions between a raised configuration and a depressed configuration. The central portion 9 includes a seam 10 to facilitate the central portion of the face of the chamber 2 folding between the raised and depressed configurations.

[0060] In use, the chamber 2 is sufficiently sealed or airtight to enable changes in pressure when the chamber is deformed thereby to alter the internal volume 6 of the chamber. The system 1 may include one or more chambers 2 for respectively determining body movement at respective points. Each chamber 2 may include one or more mounting apertures 11 along the lateral side faces of the prism 5 to facilitate joining or connecting the respective chambers. In the illustrated embodiments, the mounting apertures 11 are approximately 3.2mm in diameter. The above referenced shapes, geometries and dimensions of the illustrated embodiments are representative dimensions only and may be varied according to particular users and applications. For example, in alternative embodiments, the chamber 2 may have alternatively shaped portions, alternative geometries and alternative dimensions depending on the size/type of muscles and tendons or body part on which the chamber will be placed. Additionally, when multiple chambers 2 are used, not all chambers need or should necessarily have the same geometry, shape and/or dimensions. The geometry, shape and dimensions of the chambers 2 are optimised using finite element analysis to provide the essential characteristics for the system 1 to function. That is, the chambers 2 are linear, present no hysteresis, are stable over time, and their measurements are repeatable. The shape and dimensions of the chambers 2 may also be customised to fit the particular application and user as required.

[0061] The contact point or tip 7 of the elevated central portion 9 is configured such that, in use, a maximum mechanical displacement under a minimal input force is achieved while only deforming the portion of the overall chamber 2 that is in contact with the user. For example, when the pFMG unit 2 is worn around the forearm of a user, the chamber is configured such that the circular flat contact point 7 touches the skin of the forearm of the user to maximise the volume change upon muscle contraction. That is, under muscle contraction, the circular contact point 7 is depressed into the remaining volume 6 of the chamber 2, causing a pressure change that is linear with the input force owing to the advantageous configuration of the chamber. In order to facilitate ease of manufacturing, the chambers 2 are preferably formed with all or substantially all of the portions formed from the same deformable material, thereby to enable mechanical forces to deform the chamber and vary the internal volume and pressure. In alternative embodiments one or more portions of the chamber 2 are deformable; for example, one or more of the cylindrical extension 8, the central portion 9 or the body 5 of the chamber may be deformable with the remaining portions being substantially rigid. In other words, the contact point 7, or other deformable portion, is used to cause localised deformation in the chamber 2 as per muscle and/or body movement.

[0062] To measure accurate pressure changes, each chamber 2 is formed from a material which is soft and flexible enough to be stretched when the muscles contract. The material is also configured to be strong enough to sustain repeated deformations over long periods. The chambers may be, for example, formed from a thermoplastic polyurethane (TPU). Alternatively, other suitable soft and stretchable materials may be used, for example, silicone or rubber. Advantageously, the chambers may be 3D printed, for example using a fused deposition modelling 3D printing process which is readily available, relatively cost efficient and simple. Additionally, 3D printable chambers can be easily customized and tailored to prosthetic hand users based on their needs and use case.

[0063] Each chamber 2 is a soft, strong and deformable pneumatic sensing chamber. Advantageously, the chambers 2 provide a comfortable experience for the user, biocompatibility for safe skin contact and interaction, and are resistant to sweat and wet conditions. The chambers 2 are arranged in an armband 12 configuration which is adaptive to the forearm of a user, smooth enough to not scratch the skin, comfortable to wear for extended periods of more than 8 hours, lightweight, resistant to sweat and hair presence, and free of heating elements. The design of the chambers 2 of the armband 12 is optimised to provide a linear response wherein the relationship between the pressure output and the force input is linear; to have negligible hysteresis; to provide repeatable output signals; to provide stable signals over time; to be highly sensitive to forces in the order of 0.05N; and to have a long life wherein a single chamber 2 can sustain more than 1,300,000 actuation cycles.

[0064] Each chamber 2 includes a sensor module 13. The sensor module 13 is configured to measure mechanical bio-signals in the body movement induced deformation of the chambers. The sensor module 13 is configured to measure the pressure, and variances in pressure, within the hollow interior 6 of the chamber 2 as the interior changes in volume. The sensor module 13 includes the sensor 3 placed inside the chamber 2 to detect and measure the pressure changes due to mechanical deformations. Each sensor 3 is a through-hole type sensor for easy integration into the chambers 2, with sufficient contacts and input/output functions to provide power and to connect to a controller 4. Each sensor 3 is preferably calibrated for a pressure range with the smallest possible resolution. Advantageously, the higher resolution of pressure provides greater accuracy and sensitivity when determining body movements. The output of each sensor 3 is analogue for compatibility with various controllers 4. Each sensor 3 is configured to operate at five volts for safety purposes and also to provide compatibility with a micro-controller 4.

[0065] Advantageously, the configuration of the chambers 2 enables the use of readily available pressure sensors 3 without requiring substantial customisation. In the illustrated embodiments the commercially available Honeywell ABPDANT005PGAA5 pressure sensor 3 was chosen as a suitable pressure sensor. The response time of the pressure sensors 3 is 1ms and they require a very small amount of energy in the order of 13.5mW of power consumption per sensor. Each chamber 2 includes a sensor aperture 14 to provide access for the pressure sensor 3 to the interior 6 of the chamber 2 whilst retaining a substantially sealed configuration. The sensor aperture 14 is preferably provided on a longitudinal end face of the body 5 of each chamber 2. Advantageously, a pressure sensor 3 can then be mounted to the chamber 2, through the sensor aperture 14 thereby to measure the pressure within the chamber and to substantially seal the aperture. In the illustrated embodiments, the sensor aperture 14 has a diameter of approximately 2.34mm suitable for receiving, for example, an ABPDANT005PGAA5 pressure sensor 3. Alternatively, the pressure sensor 3 may be mounted outside of the chamber 2 at a location remote from the chamber and may be connected to the chamber via a tube or suitable channel.

[0066] Referring to FIGURES 2 to 5, the linearity, nominal hysteresis, stability, and repeatability of each of the chambers 2 is demonstrated. In addition to the data discussed below, a linear motor was used to push on the surface of a chamber 2 with a frequency of 1 hertz (Hz), that is, once per second. The chamber 2 was pushed/activated by the motor to achieve the same pressure difference each time of approximately 1.4 kilopascals (kPa), corresponding to the largest difference occurring in the expression of common hand gestures. The pressure difference and the reference pressure remained unchanged during the experiment. The experiment was stopped at 1,500,000 cycles, without any changes in the pressure values or in the structure of the tested pressure sensitive chamber 2. Accordingly, the chambers 2 are robust with a long lifetime, considered to be able to resist more than 876,000 actuation cycles, which is estimated to correspond to a full two years of utilisation, for example, to control a prosthetic hand 8 hours per day at 1,200 cycles per day.

[0067] To demonstrate the linearity of each pressure sensitive chamber 2, a range of forces was applied to the surface of the chamber and the associated pressure output was measured. In FIGURES 2(A) and 2(B), pressure in kilopascals (kPa) is plotted 15 on the y axis against force in newtons (N) on the x axis. Figure 2(A) shows a range of forces from 0 to 4.5N against a range of pressures between -0.5 and 2.5 kPa. In order to determine the sensitivity of the chambers 2, relatively small forces under 0.0981N were applied to the surface of the chambers. A change in pressure was observed with an applied force of 0.049N, resulting in a measured pressure difference of 0.027 kPa. This correlates with the linear fitting equation (y = 0.53x - 0.19) of the curve when looking at measurements for small force values in the range of 0 to 0.2N against a range of pressures between -0.2 and -0.08 kPa, as shown in Figure 2(B) with the measured values in broken lines and a solid line of best fit to account for small forces. From these figures, it can be seen that the chambers 2 are substantially linear in their output.
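
As a worked check of this reported linearity, the following minimal sketch fits a line to force/pressure samples generated around the fit reported above; the sample values are illustrative assumptions, not measured data from the patent.

```python
import numpy as np

# Hypothetical samples in the small-force range of Figure 2(B) (0 to 0.2 N),
# generated around the reported fit y = 0.53*x - 0.19 with a little noise.
rng = np.random.default_rng(0)
force_n = np.array([0.000, 0.049, 0.098, 0.147, 0.196])          # newtons
pressure_kpa = 0.53 * force_n - 0.19 + rng.normal(0, 0.002, 5)   # kilopascals

# Least-squares linear fit; the slope and intercept should recover
# approximately 0.53 kPa/N and -0.19 kPa, matching paragraph [0067].
slope, intercept = np.polyfit(force_n, pressure_kpa, 1)
print(f"pressure ~ {slope:.2f} * force + {intercept:.2f} kPa")

# A 0.049 N force then corresponds to a pressure change of about
# 0.53 * 0.049 = 0.026 kPa, consistent with the reported 0.027 kPa.
```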

[0068] To assess the hysteresis exhibited by the chambers 2, the mechanical deformation applied to each chamber was ramped up and down by applying increasing weight/force to the chambers and then decreasing said weight/force. In FIGURE 3, pressure between -0.5 and 2.5 kPa is plotted 16 on the y axis against force between 0 and 4.5 N on the x axis. The measurements of pressure relative to applied force are illustrated with the measurements corresponding to an increasing range of force superimposed with the measurements corresponding to a decreasing range of force. As can be seen, the increasing and decreasing forces result in substantially identical pressures at equivalent forces, demonstrating that the chambers 2 exhibit only insignificant hysteresis.

[0069] FIGURE 4 shows the repeatability, reliability and consistency of the chambers 2. The chambers 2 were repeatedly activated by applying the same weight of 200 grams (g) on the chamber once every 30 minutes, collecting 5 pressure values and taking the mean value. Figure 4 shows pressure in the range of 0.61 to 0.65 kPa plotted 17 on the y axis against 16 such measurements along the x axis. As can be seen, the maximum measured value is just above approximately 0.645 kPa and the minimum measured value is just below approximately 0.615 kPa. This represents a variance of approximately 4.954%, which is considered non-significant; additionally, the mean value for all measurements lies substantially between the minimum and maximum measurements. Accordingly, the chambers 2 can be seen to provide repeatable, reliable and consistent measurements of pressure for equivalent forces.

[0070] Referring to FIGURE 5, the chamber 2 was activated for a period of 30 minutes by placing a weight of 200g on its surface and a pressure value was measured and saved once every second. Pressure between 0.46 and 0.6 kPa is plotted 18 on the y axis against time along the x axis. As can be seen, the average internal pressure of the chamber remains unchanged throughout the activation period. Thus, the chambers 2 are stable over time further demonstrating that the 3D printable chambers are stable and substantially airtight.

[0071] Advantageously the chambers 2 provide linear, stable, repeatable and consistent measurements of pressure in response to force with negligible hysteresis which enables the use of such chambers for the accurate pressure-based force myography determination of body movement.

[0072] Referring to FIGURES 6(A) and 6(B) and FIGURE 7, five chambers 2 are provided in a pFMG armband 12 configuration for force myography applications. In Figures 6(A) and 6(B) the armband 12 is displayed with the contact points 7 of the chambers 2 outwardly directed for ease of viewing. In use, the armband 12 is worn around the forearm with the contact points 7 inwardly directed towards the forearm, as shown in Figure 7. A control housing 19 is provided as a sixth component in the armband 12, of substantially similar size to each of the chambers 2. The chambers 2 and control housing 19 are joined together by joining members or links 20 to form the armband 12. The control housing 19 accommodates a control module in the form of a controller 4, for example a micro-controller, and the necessary connectivity to the pressure sensors 3. The micro-controller 4 may be, for example, an Arduino Mega2560 or another suitable controller. The control module 4 is configured to receive pressure data from each of the chambers 2. The control housing 19 and joining members 20 may be formed from the same soft material as the chambers 2 and may also be 3D printed to improve manufacturing efficiency, comfort and ergonomics. The generally flexible nature of the armband 12 also allows the armband to be fully customisable and stretchable, increasing comfort as it can be worn over clothes. The armband 12 is configured such that, in use, the five chambers 2 and the control housing 19 are arranged uniformly radially around the forearm. Advantageously, in use, the five pressure sensors 3 of the chambers 2 can encompass the dominant flexor/extensor muscles of the forearm regardless of its orientation. For most users five chambers 2 are sufficient to fully encompass the forearm tightly; however, the armband 12 is also modular and can be configured with more or fewer chambers where required. Additionally, the diameter of the armband 12 can be increased or decreased by varying the size of the joining members 20 to provide scalability for individual users.
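
On the host side, pressure data from the five sensors could be collected from such a micro-controller over a serial link. The sketch below is a minimal, assumption-laden example: it assumes the micro-controller prints one comma-separated line of five readings per sample; the port name, baud rate and line format are illustrative and are not specified in the patent.

```python
import serial  # pyserial

# Assumed framing: one line per sample, "p1,p2,p3,p4,p5\n", one raw
# analogue reading per chamber. Port, baud and framing are assumptions.
PORT, BAUD, N_CHAMBERS = "/dev/ttyACM0", 115200, 5

def read_samples(n_samples):
    """Collect n_samples five-element pressure vectors from the armband."""
    samples = []
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while len(samples) < n_samples:
            fields = link.readline().decode(errors="ignore").strip().split(",")
            if len(fields) == N_CHAMBERS:
                samples.append([float(f) for f in fields])
    return samples

# e.g. one gesture recording of 2000 samples (about 2.7 s at the
# 1.35 ms sampling period described later in the specification)
# recording = read_samples(2000)
```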

[0073] Referring to FIGURE 8, the armband 12 is configured to be worn around the forearm of a user, encompassing the dominant flexor/extensor muscles of the forearm. The chambers 2 are intended to be positioned at positions p1 and p3 to p6 of the forearm, with the control housing 19 at position p2, slightly medial to the anconeus, about the posterior aspect of the elbow joint. The position p2 is of lesser significance when capturing muscle activity; accordingly, it is suitable for placement of the control housing 19. The positioning of the chambers 2 about positions p1 and p3 to p6 is sufficient to determine the holistic muscle deformations which occur during movement of the forearm, even over clothing. If necessary, the tightness of the armband 12 may be adjusted for sufficient contact pressure by varying the dimensions of the links 20 to suit a particular user. These positions represent a single positioning possibility and the pFMG armband 12 is also robust enough to accurately determine body movements at various alternative or additional positions with respective pFMG sensor chambers 2. Unlike surface EMG sensors, the pFMG armband 12 does not require any gel, is not perturbed by bad electrode placement and is not affected by sweat, hair, or clothing.

[0074] The effectiveness of the armband 12 in determining body movements is demonstrated particularly with respect to gestures. Referring to FIGURE 9, there are shown several gestures which are relatively prevalent in myography-based gesture recognition, including: wave-in 21; wave-out 22; close-fist 23; spread-fingers 24; and pinch 25. Additional gestures used in the following description of the functionality of the system include: thumb up; index-point; index-down; tripod-grip; key-grip; open palm; and okay. These gestures are all discernible enough to be suitably detected by myography; are distinguished from each other in that they use differing muscle groups during actuation and are sufficiently different from a resting position; are simple to understand; are recognisable; are comfortable to perform; and are intuitive to the user in performing operations and/or actuating devices. These gestures are only a subset of all possible gestures and a vast array of suitable gestures may be measured and recognised by the system 1. Many of the gestures considered were selected for their prevalence in the use of EMG-based gesture recognition systems used by existing devices, and additional gestures may also be made possible by the pFMG system 1.

[0075] The pFMG armband 12 is configured to acquire a mechanical bio-signal in the form of pressure information in response to force induced by muscle activity deforming the chambers 2. Referring to FIGURE 10, in the case of an armband 12 including five pressure sensitive chambers 2, the pressure measured by each chamber throughout a variety of hand gestures was recorded. For each chamber 2, the pressure is plotted across a range of 0 to 2000 pascals (Pa) on the y axis of each respective chamber’s graph. The x axis denotes 2000 samples per gesture, each of which was held for 2.7 seconds for data acquisition. Prior to data acquisition of the gestures, a control armband output value was attained by placing the armband 12 on the ground with no disturbances to determine the base pressure already contained within each of the chambers 2. The base pressure within each of the five chambers 2 was substantially identical and differed only insignificantly, due to the small values involved and because the output pressures for classification are relative to initial user pressure. The gestures wave-in 21; wave-out 22; closed-fist 23; spread-fingers 24; and pinch 25 correspond to those illustrated in Figure 9. Figure 10 shows that the pressure output values from each of the chambers 2 were relatively stable and showed little hysteresis effects.

[0076] Observation of the pressure data from the five chambers 2 enables the identification of optimal chamber area placements to allow for highly distinguished gestures. For example, in Figure 10, Chamber 1 had a clear discrepancy between the spread-fingers 24 gesture and the closed-fist 23 gesture whereas Chamber 2 had virtually no discrepancy between the gestures. To understand the reasoning behind this drastic change it must be noted that Chamber 2 sits approximately where the pronator teres muscle lies in the forearm. Neither the closed-fist 23 nor the spread-fingers 24 gesture requires movement of the pronator teres, and thus both remain at relatively the same pressure level. In fact, Chamber 2 remains at relatively the same pressure level across all five gestures. This does not necessarily indicate that Chamber 2 is a redundant chamber (which would allow for similar gesture recognition accuracies with only four chambers). Rather, alternative gestures and a greater quantity of more varied gestures may utilise Chamber 2. For example, a gesture that requires pronation of the forearm would increase the variability of Chamber 2 data and thus increase its contribution to gesture definitions. Advantageously, the armband 12 is able to acquire sufficient data to differentiate the five gestures tested in Figure 10 with ample room to expand for further gestures.

[0077] The pressure data is acquired by the controller 4 from the pressure sensitive chambers 2. The controller 4 can implement a filter on the raw data to remove noise. For example, a median filter or bandpass filter may be used. Referring to FIGURE 11, a comparison between raw pressure data and filtered pressure data is illustrated. The signals in Figure 11 represent the sum of pressure measured by five pFMG chambers 2 of a pFMG armband 12 worn by a user while performing five gestures (closed fist 23; thumb up 26; index point 27; index down 28; and pinch 25), holding each gesture for 5 seconds and then resting for 5 seconds between each gesture.

[0078] As can be seen in Figure 11, the filtering removes initial spikes and generally smooths the received pressure data. In alternative embodiments, filtering and rectification is not required for the pressure data. Advantageously, the stable signal and minimal hysteresis provided by the chambers 2 mean that additional filtering and rectification stages can be forgone where desirable, further reducing cost and complexity. As a precautionary measure to further ensure accurate data and to prevent sampling error, the controller 4 may include a 5th-order digital bandpass filter to filter the pressure data. The controller 4 may further include a threshold function to alert the user if there is an abnormal spike in pressure data. In experimental testing, the threshold was defined as a 20% change in pressure value over a period of less than a second, throughout the duration of a single static gesture. No alerts were recorded and thus the bandpass filter was deemed not necessary for this data set but was included as a fail-safe and error detection mechanism.
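
A minimal sketch of such a filtering stage and spike alert follows, assuming the 740 Hz sampling rate described later in the specification; the bandpass cut-off frequencies are illustrative assumptions, as the patent does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 740.0  # Hz; sampling rate reported later in the specification

def bandpass(signal, low_hz=0.5, high_hz=20.0, order=5):
    """5th-order digital bandpass filter as in [0078].
    The cut-off frequencies here are illustrative assumptions."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=FS)
    return filtfilt(b, a, signal)

def spike_alert(signal, window_s=1.0, threshold=0.20):
    """Flag a >20% pressure change occurring within less than one second."""
    w = int(window_s * FS)
    x = np.asarray(signal, dtype=float)
    for i in range(len(x) - w):
        base = abs(x[i]) or 1e-9  # guard against division by zero
        if np.max(np.abs(x[i:i + w] - x[i])) / base > threshold:
            return True
    return False
```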

[0079] In order to determine gestures from the filtered or raw pFMG data, the data from each sensor 3 is processed into features. Featurisation is the process of transforming raw pFMG signals obtained from the armband 12 into features that better represent the data for further processing. This increases the predictive capabilities of machine learning models and improves their accuracy when predicting gestures on the basis of unseen data. The features can be based on various time domain features, for example, one or more of: Mean Absolute Value (MAV); Modified Mean Absolute Value type 1 (MAV1); Modified Mean Absolute Value type 2 (MAV2); Modified Mean Absolute Value type 3 (MAV3); Root Mean Square (RMS); Sample Variance (Var); Log Detector (LOG); Waveform Length (WL); Integrated Absolute Value (IAV); Slope Sign Changes (SSC); Zero Crossing (ZC); Willison Amplitude (WAMP); Difference Absolute Standard Deviation Value (DASDV); Average Amplitude Change (AAC); Slope Sign Change Integrated Absolute Value (SSC IAV); Simple Square Integral (SSI); third, fourth, or fifth Temporal Moment (TM3, TM4, TM5); and Myopulse Percentage Rate (MYOP).
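
A minimal sketch of computing a subset of these time-domain features over one window of single-chamber samples is given below; the formulas follow their standard definitions in the myography literature rather than any definitions given in the patent.

```python
import numpy as np

def time_domain_features(window):
    """Compute a subset of the time-domain features listed in [0079]
    for one window of pressure samples from a single chamber."""
    x = np.asarray(window, dtype=float)
    return {
        "MAV": np.mean(np.abs(x)),                # Mean Absolute Value
        "RMS": np.sqrt(np.mean(x ** 2)),          # Root Mean Square
        "VAR": np.var(x, ddof=1),                 # Sample Variance
        "IAV": np.sum(np.abs(x)),                 # Integrated Absolute Value
        "SSI": np.sum(x ** 2),                    # Simple Square Integral
        "WL":  np.sum(np.abs(np.diff(x))),        # Waveform Length
        # Log Detector; small epsilon avoids log(0) on flat windows
        "LOG": np.exp(np.mean(np.log(np.abs(x) + 1e-9))),
    }
```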

[0080] The above listed features provide a comprehensive mathematical description of data from a single sensor 3 over the course of a defined time window, providing a single value per pressure chamber 2 per period. In practice, RMS, VAR, IAV, SSI and MAV were found to be particularly distinguishing between the tested gestures, with less distinguishing features including SSC, ZC, WAMP and MYOP. In alternative embodiments, alternative or additional features may be used, including various alternative time-domain features and/or frequency domain features and/or other features.

[0081] Referring to FIGURES 12(A) to 12(E), the data shown in Figure 11, of 11,000 data points, has been divided and processed utilising Mean Absolute Value, Variance, Root Mean Square, Log Detector and Waveform Length featurisation respectively. The plots of Figures 12(A) to 12(E) represent the sum of pressures measured by five pFMG chambers. The signal durations corresponding to each gesture are divided into sections of 50 points and the features calculated for each segment of 50 points. In Figures 12(A) to 12(D), the featurised data of 14 segments corresponding to each of the five gestures (closed fist 23; thumb up 26; index point 27; index down 28; and pinch 25) and a rest position 29 have been superimposed over each other to illustrate the distinction between the various features of each gesture. In these figures, the data corresponding to rest 29 is the lowermost line and then in ascending order is index down 28, index point 27, pinch 25, thumb up 26 and finally closed fist 23 at the top of each graph. Figure 12(A) shows Mean Absolute Value featurisation; Figure 12(B) shows Variance featurisation; Figure 12(C) shows Root Mean Square featurisation; and Figure 12(D) shows Log Detector featurisation. In these figures, the red plotted line is closed fist 23; blue is thumb up 26; green is pinch 25; pink is index point 27; cyan is index down 28; and yellow is rest 29. Figure 12(E) shows Waveform Length featurisation for the five gestures across the 11,000 data points. As can be seen in Figures 12(A) to 12(E), the gestures are well differentiated and easily discriminated across the featurised data acquired from the armband 12.

[0082] In order to perform the final gesture determination, the featurised pressure sensor data is analysed by the controller 4 to identify the gesture being performed by the user. A classifier is used to select the gesture based on the features in the sensor data. For example, the featurised data may be fed into an implemented Machine Learning (ML) algorithm. The classifier may be based on one or more of the following classifier models: Decision Tree, k-Nearest Neighbour, Random Forest, AdaBoost, Gradient Boosting, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Classifier (RBF and Nu), Gaussian (Naive Bayes and Process) and Neural Network. In practical implementations of the system, the inventors found the LDA model to typically have the highest accuracy score for gesture recognition, although in some implementations the Random Forest model outperformed it in execution time and log loss. The classifier is trained on previously collected data sets and utilises machine learning techniques to make the gesture selection.
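
As an illustrative sketch (not the patent’s own implementation), two of the named models can be compared on featurised windows using scikit-learn; the scaler and cross-validation set-up are assumptions.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y):
    """X: one row per window (features x chambers); y: gesture labels.
    Compares two of the classifier models named in [0082]."""
    for name, model in [
        ("LDA", LinearDiscriminantAnalysis()),
        ("Random Forest", RandomForestClassifier(n_estimators=100)),
    ]:
        pipe = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```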

[0083] The output of the classifier may be subject to post-processing, for example based on popularity history, multilevel debouncing, or inertial popularity. The inertial popularity post-processing involves assigning a counter to every possible gesture in the expected set; initially assuming a gesture and setting the corresponding counter to a maximum value; for every prediction that does not match the current gesture, incrementing the predicted gesture’s counter and simultaneously decrementing all other counters (down to a minimum value), including the current assumed gesture’s counter; conversely, if an observed prediction matches the current gesture, the current gesture’s counter is incremented (up to the maximum value) and simultaneously all other counters are decreased. The current assumed gesture will change when the corresponding counter reaches a predetermined threshold. Whichever non-current gesture counter reaches the threshold first will be assigned as the new current gesture. If, however, the current gesture counter reaches zero before this happens, the algorithm will decide that it is unlikely to still be the current gesture; since this is indeterminate of which other gesture it should be, an unknown gesture is assigned. The main advantage of this method is that an accidental prediction is immediately rectified with the next prediction, as the incorrect gesture counter will increase once and immediately be decreased at the next prediction. This creates a much more stable gesture prediction output.
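
A minimal sketch of the inertial popularity scheme described in [0083] follows; the counter limits and threshold are illustrative assumptions, as the patent does not specify their values.

```python
class InertialPopularity:
    """Counter-based post-processing of classifier predictions ([0083]).
    The max_count and threshold values are illustrative assumptions."""

    def __init__(self, gestures, max_count=10, threshold=7):
        self.max_count, self.threshold = max_count, threshold
        self.counters = {g: 0 for g in gestures}
        self.current = gestures[0]                # initially assumed gesture
        self.counters[self.current] = max_count   # set to the maximum value

    def update(self, predicted):
        # Increment the predicted gesture's counter; decrement all others.
        for g in self.counters:
            if g == predicted:
                self.counters[g] = min(self.counters[g] + 1, self.max_count)
            else:
                self.counters[g] = max(self.counters[g] - 1, 0)
        # A non-current gesture that reaches the threshold takes over.
        if predicted != self.current and self.counters[predicted] >= self.threshold:
            self.current = predicted
        # If the current gesture's counter empties first, assign "unknown".
        elif self.counters.get(self.current, 0) == 0:
            self.current = "unknown"
        return self.current
```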

[0084] In order to demonstrate live gesture recognition, the above described armband 12 of system 1 was utilised to capture body movement of a user when performing gestures identified as the most functional and useful grips for activities of daily life, including: power grip; tripod grip; finger point; and key grip. Additionally, a relaxed hand and an open palm gesture were also included in the group of gestures for this testing. The pFMG armband 12 was used for all data acquisition; the featurisation was performed based on RMS, VAR, IAV, SSI, WL and MAV; and a Random Forest classifier model pipeline with inertial popularity was used as the training and classifier algorithm. In the testing, ten healthy subjects (5 females and 5 males, aged from 22 to 58) and 4 armband sizes were used. The subjects were asked to perform the six gestures in the following order: relaxed hand, open palm, power grip, tripod grip, finger point (non-opposed) and key grip. The Random Forest algorithm was trained three times with a record length of 500 datapoints. Each gesture was maintained for at least five seconds and the gestures were performed sequentially (i.e. in the same recording of live prediction). The global confusion matrix for all predictions in this test is presented in Table 1. The obtained global accuracy for this test is 98.3%.

Table 1: Global Confusion Matrix

[0085] Another test was performed on one subject in order to study how the position of the arm affects the predictions. For this test, three gestures were performed: rest, close fist and okay. The same algorithm was trained three times with the subject’s elbow on the table. During the live prediction session, the subject was asked to first do the gestures with the elbow on the table (i.e. as during the training), then with the arm up in the air and finally with the arm under the table. The three gestures were maintained for at least five seconds and this test was performed five times. Tables 2, 3 and 4 present the confusion matrices for the three arm positions and summarise the results of the five tests.

Table 2: Confusion matrix for the elbow on the table

Table 3: Confusion matrix for the arm up in the air

Table 4: Confusion matrix for arm under the table

[0086] The global accuracy for these tests with varying arm position is 99.5%, which leads to the conclusion that the arm position does not substantially affect the signal, as the gestures are still well discriminated.

[0087] A further implementation of the armband 12 will now be outlined in detail to demonstrate the base framework process of data acquisition, featurisation, classification and gesture determination. A pFMG armband 12 including five pressure sensitive deformable sensor chambers 2 and a controller 4 was utilised. In the implementation, the pFMG armband 12 was placed over the belly of the forearm (slightly distal to the elbow). The orientation of the armband 12 was adjusted to ensure the base control housing 19 was situated slightly medial to the anconeus muscle. The base 19 contains no pressure chambers 2 and was placed on this region as it has little muscle deformation relative to other sites. Proper orientation was not crucial for gesture recognition, but the same orientation between sessions was required unless a user chose to define new gestures or redefine existing ones.

[0088] Raw FMG data was recorded by the pFMG armband 12 as a stream of arrays at a sampling period of 1.35 msec (740 Hz), each array containing five elements (one for each pneumatic chamber 2). A single gesture recording was defined as obtaining 2000 FMG data samples, which equated to approximately 2.7 seconds of data. The raw FMG data for this implementation is as in Figure 10. This amount of data was deemed sufficiently long to encompass a static pose whilst not causing fatigue effects during training sessions when the user would have to repeat gestures multiple times in a row. Each of the 2.7 second performances of the five gestures (wave-in; wave-out; close-fist; spread-fingers; and pinch) was repeated three times. Three separate recordings of each gesture for approximately 3 seconds were used, with other gestures interspersed, to increase the variability in the subject data; reduce the likelihood of overfitting; and reduce muscle fatigue as compared to a single nine second gesture recording. Users invariably made discrete adjustments to their actuation of a gesture upon re-initiation.

[0089] In preparation for featurisation, the acquired gesture data was segmented by defining a 'window' 30 of 100 samples (approximately 135 milliseconds worth of acquired data) and stepping through the data with a stride length of 50 samples (allowing for 50% overlap), as depicted in FIGURE 13. For a given gesture recording length, this produces (N/S - 1) windows 30, where N is the number of samples and S is the stride length. Due to the consistent, steady nature of the pressure and orientation readings, it was identified that little rectification or smoothing was required prior to featurisation. As a precautionary measure, a 5th order digital bandpass filter was utilised to limit a 10% sampling rate margin of error. This would ensure any lag or lead effects would be disregarded and alert the user of invalid data. No such alerts were recorded during the process of experimentation.
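A brief Python sketch of this segmentation, under the stated window and stride values, may clarify the (N/S - 1) relationship; the array shapes are assumptions for illustration.

import numpy as np

def segment_windows(recording, window=100, stride=50):
    # recording: (samples x channels) array; returns (windows x window x channels).
    starts = range(0, recording.shape[0] - window + 1, stride)
    return np.stack([recording[s:s + window] for s in starts])

recording = np.zeros((2000, 5))        # 2000 samples from five chambers
windows = segment_windows(recording)
print(windows.shape)                    # (39, 100, 5): N/S - 1 = 2000/50 - 1 = 39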

[0090] The actual featurisation process was achieved through the application of mathematical operations to each window 30 of gesture data. A total of seventeen time-domain features were applied, including: IAV, WL, AAC, SSC, SSI, RMS, MAV, MAV1, MAV2, ZC, TM3, TM4, TM5, WAMP, DASDV and MYOP.
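For illustration, a few of the named time-domain features are sketched below in Python using their standard definitions from the myography literature; the specification itself does not give explicit expressions, so these formulas are assumptions and only a subset of the feature set is shown.

import numpy as np

# Each function maps a (samples x channels) window to one value per channel.
def rms(w): return np.sqrt(np.mean(w ** 2, axis=0))            # root mean square
def mav(w): return np.mean(np.abs(w), axis=0)                  # mean absolute value
def iav(w): return np.sum(np.abs(w), axis=0)                   # integrated absolute value
def ssi(w): return np.sum(w ** 2, axis=0)                      # simple square integral
def var(w): return np.var(w, axis=0, ddof=1)                   # variance
def wl(w):  return np.sum(np.abs(np.diff(w, axis=0)), axis=0)  # waveform length

FEATURES = (rms, mav, iav, ssi, var, wl)

def featurise_window(window):
    # Returns a (channels x features) 'featured window' as described in the
    # following paragraphs.
    return np.column_stack([f(window) for f in FEATURES])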

[0091] Feature construction transforms a single window 30 of gesture data into a 'featured window', which is a comprehensive mathematical description of approximately 135 msec of data from a single sensor. A 'featured frame' is an amalgamation of the featured windows 30 of all sensors. A single gesture can therefore be effectively defined as an array of multi-dimensional frames. For example, if five pFMG sensor chambers are utilised, then the featured frame is a five-dimensional array, one dimension for each chamber, and each chamber dimension in turn contains a multi-dimensional array corresponding to the number of features. The respective features of each chamber for each frame are then provided to the classifier for processing.

[0092] Intra-gesture variation of features was determined by applying element-wise relative standard deviation (coefficient of variation) tests between the features of the same gesture from the same chamber over time. Inter-gesture variation of features was obtained by applying coefficient of variation tests on the intra-gesture means, that is, the respective mean values of each feature for a gesture from the same chamber over time (mean values between the frames of all compiled gestures). Feature selection was based on the results of the intra-gesture and inter-gesture variances. A feature was considered highly effective if it was characterised by both low intra-gesture variability and high inter-gesture variability. This demonstrated that the gesture was consistent during its execution and/or between trials but sufficiently different from other defined gestures to be easily distinguished. This method of feature selection follows a filter-based approach, that is, it chooses the optimal features without involving the final classifiers in the process (as is the case for wrapper-based methods). This prevents overfitting errors from influencing feature performance. Furthermore, a filter-based approach has a significantly reduced processing time when compared to wrapper-based approaches, because the latter must iterate the selection process over every combination of feature and classifier model (an exponential search).
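A minimal sketch of this filter-based ranking follows; the array shapes and the rule for combining the two variability measures into a single score are assumptions, as the specification describes the selection criteria qualitatively.

import numpy as np

def coeff_of_variation(a, axis=0):
    # Relative standard deviation; the epsilon guards against division by zero.
    return np.std(a, axis=axis) / (np.abs(np.mean(a, axis=axis)) + 1e-12)

def rank_features(frames_by_gesture):
    """frames_by_gesture: {gesture: (frames x chambers x features) array}."""
    # Intra-gesture: variation over frames of the same gesture, averaged over gestures.
    intra = np.mean([coeff_of_variation(f, axis=0)
                     for f in frames_by_gesture.values()], axis=0)
    # Inter-gesture: variation across the per-gesture mean feature values.
    means = np.stack([f.mean(axis=0) for f in frames_by_gesture.values()])
    inter = coeff_of_variation(means, axis=0)
    # Average over chambers; prefer high inter- and low intra-gesture variation.
    score = inter.mean(axis=0) - intra.mean(axis=0)
    return np.argsort(score)[::-1]          # feature indices, best first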

[0093] Based on the intra-gesture and inter-gesture variance ranking method, the top 50% of features were chosen as being the most useful. This included (in order) the features: RMS, IAV, SSI, VAR, MAV1, MAV2 and MAV3. Following feature selection, the data was formatted into a dimensionality suitable for feeding into the classifier models.

[0094] The featurised data was then fed into a classifier model for training and subsequent live gesture determination using machine learning. The classifier may use one or more of the following classifier models: Decision Tree, Random Forest, k-Nearest Neighbour, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Nu-Support Vector Classification (NuSVC), and Gaussian Naive Bayes.
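The seven named models are all available in scikit-learn; a sketch of how such a model set might be assembled is shown below, with default hyperparameters assumed since the specification does not state them.

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.svm import NuSVC
from sklearn.naive_bayes import GaussianNB

MODELS = {
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "k-Nearest Neighbour": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "NuSVC": NuSVC(probability=True),       # probabilities enable log loss scoring
    "Gaussian Naive Bayes": GaussianNB(),
}

# With X as flattened featured frames and y as gesture labels:
# for name, model in MODELS.items():
#     model.fit(X_train, y_train)
#     print(name, model.score(X_test, y_test))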

[0095] For model evaluation purposes the trial datasets were split into training and testing sets in the ratio 2:1. Therefore, for every three trials recorded, one trial was hidden from the classifier for future testing purposes. Every unique combination of hidden trials was used to determine the overall classifier accuracy for the given data set. The overall classifier accuracy is therefore the average of the accuracy scores over the number of hidden-trial combinations given by the binomial coefficient equation C(n, r) = n!/(r!(n-r)!) for r hidden trials from a set of n trials.

[0096] As a total of six trials were recorded per user in the experimental procedure, the classifier accuracy was taken as the average of 15 unique accuracy scores. This method of estimating the skill of the classifier models is essentially a customised version of a k-fold cross-validation method. The primary difference between the two is that, due to the relatively small data collection size and use of static gestures, complete segregation of the trials was necessary to avoid overfitting.
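A sketch of this evaluation scheme follows: with six trials and two held out per split there are C(6, 2) = 15 unique splits. The helper names and data layout are assumptions for illustration.

from itertools import combinations
import numpy as np

def evaluate(trial_features, trial_labels, make_model, n_hidden=2):
    # trial_features/trial_labels: lists of per-trial arrays, kept fully segregated.
    n = len(trial_features)
    scores = []
    for hidden in combinations(range(n), n_hidden):   # 15 splits for n=6, r=2
        train = [i for i in range(n) if i not in hidden]
        model = make_model()
        model.fit(np.concatenate([trial_features[i] for i in train]),
                  np.concatenate([trial_labels[i] for i in train]))
        scores.append(model.score(np.concatenate([trial_features[i] for i in hidden]),
                                  np.concatenate([trial_labels[i] for i in hidden])))
    return float(np.mean(scores))     # average of the unique accuracy scores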

[0097] Due to the high-frequency sampling rate and small data window sizes, the classifiers were able to produce approximately 10 gesture predictions per second. Post-processing was performed on the system 1 to condense these predictions into a single prediction, which increases the stability and accuracy of the results. A prediction queue was established and, after it was filled to a maximum of 10 predictions, the mode value was taken as the final prediction. Note that if the mode value did not constitute at least 75% of the queue, the predictions were deemed too random and the gesture was labelled as 'Unknown'. If desired, the last confident prediction can be used as the current gesture during these 'Unknown' gesture periods. This is useful for scenarios where the user is experiencing fatigue as a result of transmitting repeat codes over an extended period of time.
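A minimal sketch of this queue-based condensation is given below; the queue length and 75% confidence threshold come from the text above, while the hold-last fall-back implements the optional variant described.

from collections import Counter, deque

class PredictionQueue:
    def __init__(self, length=10, confidence=0.75, hold_last=False):
        self.queue = deque(maxlen=length)
        self.confidence = confidence
        self.hold_last = hold_last       # optionally reuse the last confident gesture
        self.last_confident = None

    def condense(self, prediction):
        self.queue.append(prediction)
        if len(self.queue) < self.queue.maxlen:
            return "Unknown"                          # queue not yet filled
        gesture, count = Counter(self.queue).most_common(1)[0]
        if count / self.queue.maxlen >= self.confidence:
            self.last_confident = gesture             # mode is confident enough
            return gesture
        if self.hold_last and self.last_confident is not None:
            return self.last_confident                # too random: hold last gesture
        return "Unknown"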

[0098] The gesture recognition accuracy component of the above framework was evaluated by utilising 12 able-bodied subjects (9 males, 3 females, age 23.1 ± 2.2 years; mean ± SD). Each subject was asked to wear the pFMG armband 12 on the belly of their dominant hand's forearm, positioning the base chamber 19 (containing no sensors 3) slightly medial to the anconeus. If the armband was deemed to have insufficient contact pressure (perhaps due to a smaller forearm circumference) then the base chamber 19 was detached to reduce the armband circumference.

[0099] The performance of the gesture recognition component of the base framework was evaluated by taking the average accuracy from the results of the 12 participant subjects. A single trial consisted of performing the five gestures (wave-in, wave-out, spread-fingers, closed-fist and pinch) in a random sequence for approximately 3 seconds each. The trial was then repeated twice more for a total of three trials per session, giving approximately 9 seconds worth of data per gesture. The entire session was then repeated, resulting in a total of 18 seconds worth of data per gesture.

[00100] The above framework defined optimal features to be those that increased the inter-gesture variance of the data and simultaneously decreased the intra-gesture variance of the data. This meant that the data of a given gesture was repeatable and consistent within itself whilst still being effectively distinguished from other recorded gestures. Initially, a mask was used to highlight which features produced an intra-gesture variance of <10% and an inter-gesture variance of >20%. These percentage thresholds were also varied but ultimately this method proved to be ineffective, as a feature could be deemed effective for one chamber 2 but not another, even within the same gesture. Instead, the thresholds were removed and features were ranked by the intra-gesture and inter-gesture variance average scores across the chambers 2. This process highlighted a potential alternative for classifier selection in future work. In alternative embodiments, it may be beneficial to have a classifier model allocated to each chamber 2 (in parallel), with the amalgamation of the predictions then fed into an overall classifier (in series) to decide the final gesture based on the constituent predictions.

[00101] Based on the intra-gesture and inter-gesture variance ranking method, the top 50% of features were chosen as being the most useful. This included (in order) the features: RMS, IAV, SSI, VAR, MAV1, MAV2 and MAV3. It was observed that feeding the classifier models these features (thereby reducing the feature set by half, from 14 to 7) resulted in improvements in prediction accuracy of approximately 2-3%. This may initially seem insignificant; however, it is to be noted that this was for a pre-established effective/customised feature set and thus it demonstrated a significant improvement over standardised feature sets. In fact, removal of the features altogether resulted in an accuracy drop of between 16% and 37%, despite the relative stability of the pre-processed pressure values (it is believed this drop would be even more significant for EMG-based systems due to the relatively higher variation in EMG signals). This drop ultimately signifies the importance of the feature selection process.

[00102] A factor that should be considered in the results obtained is that the effectiveness of these features was tested independently of each other; that is, feature combinations were not analysed. This could result in two or more features being classed as effective when, independently, they would increase the prediction accuracy but, combined, would not further increase the prediction accuracy because they describe similar class differentiations. It is hypothesised that this may be the case for features such as MAV1, MAV2 and MAV3. The ability to distinguish effective features was additionally hindered by the difference in rankings of intra-gesture and inter-gesture variances. Some features were ranked highly as having a large inter-gesture variance (a positive attribute) but simultaneously had a low-ranking score due to their large intra-gesture variance (a negative attribute), for example, TM3. For these edge cases, higher scores in the inter-gesture variance rankings were weighted more heavily than intra-gesture variance scores. It was ascertained that two gestures could be significantly varied within themselves and still be sufficiently distanced from each other for easy differentiation.

[00103] The seven classifier models (Decision Tree, Random Forest, k-Nearest Neighbour, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Nu-Support Vector Classification (NuSVC), and Gaussian Naive Bayes) were evaluated on their performance with respect to accuracy, log loss and training/prediction times. Firstly, the classifier models were fed the participant sessions individually to determine the inter-trial gesture recognition results.

[00104] The average results of all participants for the inter-trial gesture recognition capabilities are summarised in Table 5. The LDA classifier model is highlighted as the optimal model because its accuracy score was the highest recorded on average and had the smallest standard deviation. The training/prediction time for LDA was not the smallest recorded of all the models; that was achieved by the GaussianNB model. It was decided, however, that accuracy takes precedence, as the times are so small that the difference between the majority of the models is insignificant from the user's perspective.

Table 5: Inter-trial quantitative results for classifier models

[00105] The average results of all participants for the inter-session gesture recognition capabilities are summarised in Table 6. The LDA classifier model is again highlighted as the optimal model because its accuracy score was the highest recorded on average and had the smallest standard deviation. Similar to the inter-trial gesture recognition results, the Gaussian NB model was recorded as having the smallest training/prediction time but, in practice, was imperceptibly different from the LDA model and thus the latter was chosen as the optimal classifier model.

Table 6: Classifier model inter-sessional quantitative results

[00106] A sample of the above participant classifier model results with respect to accuracy, log loss and training/prediction time is presented in FIGURE 14. In Figure 14, classifier accuracy is illustrated in the top row of graphs; classifier log loss in the middle row of graphs; and model training and prediction time in the bottom row of graphs. The results for the inter-trial testing are presented in the left column of graphs and the inter-session results in the right column of graphs, noting the varied scaling. For the inter-trial results, classifier accuracy is marked on the x axis from 0 to 80% in 20% increments, whereas for the inter-session results classifier accuracy is marked on the x axis from 0 to 100% in 20% increments. For the inter-trial results, classifier log loss is marked on the x axis from 0 to 0.5 in increments of 0.1, whereas for the inter-session results classifier log loss is marked on the x axis from 0 to 0.175 in increments of 0.025. For the inter-trial results, model training and prediction time is marked on the x axis from 0 to 100 milliseconds in increments of 20 milliseconds, whereas for the inter-session results time is marked on the x axis from 0 to 300 milliseconds in increments of 50 milliseconds. In all graphs, the y axis lists the classifiers from top to bottom as: Gaussian Naive Bayes; Nu-Support Vector Classification (NuSVC); Quadratic Discriminant Analysis (QDA); Linear Discriminant Analysis (LDA); Random Forest; k-Nearest Neighbour; and Decision Tree.

[00107] The final classifier model used in the framework gesture recognition system 1 was the LDA model. To summarise the final results, the gesture recognition system was able to achieve an inter-trial accuracy score of 88.93 ± 8.70%. The inter-trial dataset consisted of approximately 6 seconds worth of training data and 3 seconds worth of testing data. The inter-session gesture recognition accuracy, based on training and testing datasets of 12 and 6 seconds of data per gesture respectively, was found to be 92.20 ± 9.35%. It should be noted that the accuracy scores reported are for the individual gesture predictions made by the LDA classifier. In further implementations of the system 1, post-processing gathers as many as 10 predictions per second to create a final gesture prediction that is greater in accuracy than its constituent predictions.

[00108] Following the selection of the LDA classifier model as the primary machine learning model, normalised confusion matrices were generated for each participant to visualise the certainty of the predictions. Referring to FIGURE 15, an indicative normalised confusion matrix is shown for a given participant sample with the LDA classifier model. In Figure 15, the true gesture is labelled along the y axis and the predicted gesture along the x axis, with percentage accuracy graded within the matrix. Qualitative observations of these matrices highlighted some interesting conclusions that could be drawn from the data. There were multiple cases of minor labelling confusions between the 'Wave Out' and 'Spread Fingers' gestures. It is postulated that these minor confusions may be the result of common muscle groups being displaced in the posterior forearm (e.g. extensor digitorum and extensor carpi ulnaris). Furthermore, the 'wave out' gesture could understandably be perceived as a continuous form of 'spread fingers', i.e. there is no significant change in finger/arm movements between the two. If a participant is feeling fatigued or bored, their 'wave out' gesture may become slackened to look more like a 'spread fingers'.

[00109] Another interesting feature to note is the difference between the participant confusion matrices as a whole. Some participants had exceptionally high scores (97%+) across all gesture labels, while those with poor label predictions tended to overpredict a single gesture rather than distributing incorrect labels evenly. It was noted that those with the exceptional scores tended to have the armband pressed tighter against their forearm. This intuitively makes sense, as the pressure sensor chambers 2 of the armband 12 are able to be displaced with greater magnitude and be affected more by deep muscles. The prototype armband 12 used in this body of work had two circumference sizes, through detachment of the non-sensing base/controller chamber 19, and did not include stretching. In alternative embodiments, the links 20 between the chambers 2 and the control housing 19 are stretchable or adjustable to provide the required sizing. Additionally, the nature of the 3D printed construction of the armband 12 enables customised armbands to be designed and manufactured to fit each individual's forearm.

[00110] In summary of the results, the optimal features for use in the pFMG gesture recognition system 1 were found to be RMS, IAV, SSI, VAR, MAV1, MAV2 and MAV3. The optimal classifier model for both inter-trial and inter-session accuracy was found to be the LDA model. For a dataset containing 6 seconds worth of pFMG data per gesture, the recognition system was able to achieve an inter-trial accuracy score of 88.93 ± 8.70%. For a dataset containing 12 seconds worth of pFMG data per gesture, the recognition system was able to achieve an inter-session accuracy score of 92.20 ± 9.35%.

[00111] Put another way, the pFMG system 1 is able to identify an intention of a user of a device requiring a control command and to control the device according to the intention of the user. The intentions can be identified via gesture recognition or the determination of other body movements. The devices can be prosthetic devices, commercial electronics, robotics or any suitable device requiring a control signal. The pFMG system 1 includes one or more pressure-based force myography (pFMG) units 2 wearable by the user, wherein each pFMG unit includes a pressure sensitive chamber (PSC) capable of undergoing a mechanical deformation, the PSC 2 including a pressure sensor 3 configured to measure the pressure within the chamber. The measured pressure within the chamber 2 is thus a mechanical bio-signal resulting from the body movements of the user. The system further includes a control module 4 configured to: receive pressure data indicative of the pressure measured by the pressure sensor 3 of each of the one or more pFMG units 2; detect physical movement of the user on the basis of the pressure measured within the chamber 2 of each of the one or more pFMG units; determine the intention of the user on the basis of the physical movement; and provide the control command to the device on the basis of the intention. Advantageously, the system 1, pFMG units 2 and controller 4 are rapidly configurable and calibratable for various users and gestures to control various devices with high accuracy and utility.
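At this level of description, the control module can be pictured as a simple acquisition-to-command loop. The Python sketch below is illustrative only: the sensor, classifier and device objects, the placeholder featurisation and the helper names are assumptions, not an API defined by the specification.

import numpy as np

def control_loop(sensor, classifier, postprocess, device, command_map,
                 window=100, stride=50):
    buffer = []
    while True:
        buffer.append(sensor.read())            # one pressure sample per chamber
        if len(buffer) < window:
            continue
        w = np.array(buffer[-window:])
        # Placeholder featurisation: mean and RMS per chamber for this window.
        frame = np.concatenate([w.mean(axis=0), np.sqrt((w ** 2).mean(axis=0))])
        raw = classifier.predict(frame.reshape(1, -1))[0]      # raw prediction
        gesture = postprocess(raw)              # e.g. inertial popularity filter
        if gesture in command_map:
            device.send(command_map[gesture])   # mapped control command
        del buffer[:stride]                     # advance by the stride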

[00112] Direct comparison of the above outlined framework to existing systems is made difficult by the fact that conventional systems do not provide a device working on the same principle as the pFMG armband 12. Comparisons are further complicated by varying parameters, including differing numbers of gestures; differing data acquisition methods and devices; differing machine learning systems; differing recording times per gesture; and variabilities in participant pools. Benchmarks that have been used as points of comparison include the Myo armband system (developed by Thalmic Labs) and other EMG and myography-based systems as outlined in the background section.

[00113] When compared to existing systems incorporating FMG-based or EMG-based gesture recognition with similar gesture classes, the pFMG system 1 meets or even exceeds their results with respect to recognition accuracy and prediction speed. That is, the above pFMG framework either meets or exceeds the recognition capabilities, the accuracy, and the pre-processing and post-processing speeds deemed acceptable in the literature. It should be noted that even better accuracy scores are achieved in the pFMG system 1 when incorporating post-processing functionalities. The pFMG armband 12 uses fewer sensors (five) than most of the frameworks analysed in the literature. One of the main features of the pFMG system 1 is the turnaround time between users picking up the armband 12 for the first time and ultimately being able to control devices using gesture recognition. This can be done effectively within a matter of minutes, and this time can be shortened even further if the user has previously saved their gesture data. Aside from its recognition accuracies, the implemented pFMG system 1 has the benefit of being very customisable, affordable, lightweight, 3D printed, impervious to sweat and does not require direct skin contact.

[00114] A method of controlling devices according to the pFMG system 1 for determining body movement will now be described. In overview, the method involves the use of the pFMG system 1, worn by a user, for determining body movement; transforming the bodily movements caused by muscular contractions of the user into bio-signals; processing the bio-signals to identify one or more of a defined set of gestures; and providing corresponding control signals to a device in response to the identification of the performance of a gesture by the user. The implementation of the system in this method involves the phases of device characterisation; FMG data acquisition; pre-processing; feature construction and selection; classifier model selection; gesture mapping; post-processing; and device actuation, with reference to the above framework, each of which will be discussed in greater detail below.

[00115] Referring to FIGURE 16, there is illustrated a flowchart presenting a simplified overview of the framework process of the pFMG method with reference to consumer electronic devices based on infrared (IR) remote control. At the beginning of a new session of a user using the pFMG system to control a device, the system determines whether there is relevant pre-recorded data of the user, their gestures and/or the device for control. If relevant pre-recorded data exists, it is retrieved from stored data. If the external device for control is new or a new functionality is desired, the relevant IR remote control signal for such control is acquired and stored by the system. If a new gesture is desired, the system acquires pFMG data of the user from the armband when performing the gesture. The pFMG data corresponding to the gesture is filtered and rectified; feature extraction and reduction is performed on the filtered pFMG data; and the classifier model processes the features of the pFMG data to fit the gesture for future classification and prediction. Once the device functionality and user gestures are configured, the gestures are mapped to the device functionality and all relevant data is stored in the data store for future sessions. In use, the pFMG armband 12 acquires pFMG data of the user as they perform gestures, the pFMG system 1 performs online prediction to determine gestures and, when a gesture is determined, the corresponding IR control codes are transmitted to actuate the corresponding device functionality. The pFMG system 1 can also be configured to run offline for testing and calibration purposes and all pFMG data and predictions can be stored for troubleshooting, customisation, and report generation.

[00116] The phases of the method will now be described in greater detail with reference to the control of consumer electronic devices as well as prosthetic devices by way of example.

[00117] The device which is intended to be controlled by the pFMG system 1 is characterised. In particular, the control signals required by the device are determined and the pFMG system 1 is interfaced with the device to provide the control signals as required. In the case of prosthetic devices, the control signals required to actuate a prosthesis are acquired and stored by the controller 4 of the pFMG system 1. For consumer electronics, for example television units, air-conditioning units, computers, sound systems, video players and the like, input and output signals are acquired and stored by the controller 4 of the pFMG system 1. In the case that a device utilises infrared (IR) remote control via an IR receiver, the pFMG system 1 can include an IR transceiver to broadcast corresponding IR control signals. In such cases, the pFMG system 1 can be configured to also record IR remote signals to emulate IR control for additional devices. Similarly, the pFMG system 1 can be interfaced with the relevant device or devices via a direct wire connection; over Wi-Fi, Bluetooth or other wireless connectivity; or via another suitable interface or connection. The key requirement for the interfacing is that the pFMG system 1 is able to provide control signals to the device.

[00118] The pFMG system 1 is calibrated to recognise particular body movements and/or gestures for a particular user and/or users. This process involves the required users performing the desired body movements or gestures while wearing the pFMG armband 12 in a training mode such that the pFMG system 1 is able to acquire the pFMG pressure sensor data; featurise the data; and classify the featurised data corresponding to the desired gestures in training for future gesture recognition. The base pFMG framework outlined above describes various examples of pFMG calibration processes.

[00119] The desired body movements and/or gestures are mapped to respective control signals for controlling the interfaced device in response to the recognition of the corresponding gesture by the pFMG system 1.

[00120] After both gestures and control signals are configured, the gestures are mapped to control signals as desired. In the case of prosthetic devices, the gestures corresponding to the actuation of certain forearm muscles may be mapped to a hand or limb prosthesis such that the prosthesis performs the corresponding gesture. For consumer electronics, the gestures may be mapped as desired by the user, who may be given the option to map gestures to any number of devices and/or IR codes. Assigning the same gesture to multiple codes allows for the teleoperation of multiple devices using the same gesture. This is beneficial when a gesture is intuitive for multiple cases; for example, an outwards wave gesture could allow for both turning up the volume on a speaker and turning up the temperature on an air-conditioner. Mapping multiple IR codes to the same gesture also allows for synchronous teleoperation of multiple devices. This technique is, however, limited by the line-of-sight transmission angle that is characteristic of infrared communication. A possible solution to this limitation is incorporating IR repeaters and/or extenders around the environment. A single gesture could therefore, in theory, turn off all infrared devices when leaving the house (compatible lights, televisions, air conditioners, speakers etc.). Alternatively, other wireless communication protocols not dependent on line-of-sight may be used. For example, in a Wi-Fi connected smart home, various devices could be simultaneously turned on or off in response to a single gesture.
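A sketch of such a mapping, including the change-of-state gating described in the following paragraph, might look as follows; the gesture names, code identifiers and transmit function are hypothetical placeholders.

# Hypothetical gesture-to-IR-code map; one gesture may drive several devices.
GESTURE_MAP = {
    "wave_out": ["SPEAKER_VOL_UP", "AIRCON_TEMP_UP"],   # intuitive for both devices
    "closed_fist": ["TV_POWER"],
}

def transmit(code):
    print("broadcast IR code:", code)       # placeholder for the IR transceiver

last_gesture = None

def on_gesture(gesture):
    global last_gesture
    # Edge-triggered gating: act only on a change of gesture state, so a held
    # gesture does not retransmit (preventing unintended repeat actuation).
    if gesture == last_gesture:
        return
    last_gesture = gesture
    for code in GESTURE_MAP.get(gesture, []):
        transmit(code)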

[00121] Once the desired devices are characterised and the pFMG system 1 and gestures calibrated and mapped, the system can be used to control the devices. The pFMG armband 12 is worn by the user and they perform gestures to control the device. The determination of gestures by the pFMG system 1, based on the final prediction of the classifier having processed the featurised pFMG sensor data, is used to control devices according to the gesture mapping setup. For example, with a prosthetic hand device, a user may actuate their forearm muscles to control the hand prosthesis to open or close. For consumer electronic devices, after a gesture is recognised by the pFMG system 1, the IR codes corresponding to the given gesture may be transmitted to the targeted external device. For example, a user may wave their hand out to increase the volume of their TV and/or sound system or may point to an application to select the application on a computer. For any such actuations, the repetition of control can be configured as appropriate. For example, turning a device on or off can be configured to only occur when a change in state of the gesture is determined; that is, repeat recognition of the same gesture may not result in repeat transmission of the control signal, to prevent unintended actuation and/or debouncing-characteristic behaviour (as in the state-change gating sketched above). Alternatively, if desired, for relevant gestures a command signal can be maintained while the gesture is maintained. For example, volume could be progressively increased whilst maintaining a wave-out gesture. The utility and function of the pFMG system 1 for controlling various devices using various gestures is not limited to these examples and it will be appreciated that a vast array of applications is possible.

[00122] The benefit of the above phases is that, depending on the gestures desired to be implemented, the number of devices to be controlled and the number of users envisioned, various phases can be repeated or omitted to scale the system. For example, once a device has been characterised, the same characterisation for the same device can be used for the gesture mapping to a pFMG system 1 for various users. Similarly, once a pFMG system 1 has been calibrated for particular gestures by a particular user, the same gestures can be mapped to various control signals for various devices in different applications.

[00123] The above outlined framework and applications primarily relate to the determination of body movement in respect of relatively static gestures. The pFMG system 1 can also be scaled and used to determine alternative body movements based on muscle actuation. Additionally, an extended pFMG framework is provided with essentially the same general phases as the base framework and, additionally, with spatial augmentation of the actuation capabilities. That is, the pFMG system 1 can determine body movements and actuate devices in response to such determinations and, additionally, the extended framework can augment the determination of body movement and the corresponding actuations based on spatial considerations. This extended framework is provided to achieve recognition of a gesture across a dynamically variable orientation, that is, a quasi-dynamic gesture recognition functionality. This extended framework is an extension to the base framework and does not alter the foundation of the base framework. Thus, the augmentation capabilities can be turned on and off seamlessly in real-time as desired without affecting performance of the base framework.

[00124] Spatial augmentation allows for quasi-dynamic gestures to be recognised in the extended framework. A quasi-dynamic gesture differs from a dynamic gesture in that the former actuates a constant gesture (no transient motion) with variable orientation and the latter actuates a transformative gesture (with transient motion). Referring to FIGURE 17, an exemplary dynamic gesture 31 is shown in the top row and an exemplary quasi-dynamic gesture 32 shown in the bottom row.

[00125] Quasi-dynamic gesture recognition was implemented in the extended framework rather than fully dynamic gesture recognition due to the processing delays and complications that are necessarily imposed by dynamic gestures. One of the key components of the pFMG system 1 is its ability to perform pFMG data acquisition, pre-processing, featurisation and a classification gesture prediction in a matter of milliseconds (approximately 10 predictions can be made per second). This speed is necessary for the substantially real-time use of the system. Such speed is made possible as a result of the real-time sliding window implementation with small packets of data, essentially taking many snapshots of the data to understand discrepancies in motion. This implementation is not as well suited to dynamic control as it is to quasi-dynamic control. Dynamic control requires a lagging window and holding off on final predictions to ensure a gesture is completed first. Without any modifications, the base framework would potentially identify the dynamic wave gesture 31 in the top row of Figure 17 as a series of discrete gestures, quickly transitioning between Wave in 21, Rest 29 and Wave out 22 (with perhaps 'Unknown' in-between states). Another consideration for dynamic gestures would be the temporal parameter of movement speed. There is a virtually infinite number of speeds with which the gesture could be actuated, and thus the movement would have to be scaled in real-time (with dynamic time warping algorithms) to measure similarities between the trained and recorded temporal sequences.

[00126] Quasi-dynamic gestures 32 do not require any scaling as the gesture relative to the armband is constant. Advantageously, this significantly reduces the processing power required and processing time until a final prediction as no lagging is necessary. The quasi-dynamic gesture 32 in the bottom row of Figure 17 is still considered to be a static closed fist 23 gesture in the eyes of the classifiers but the orientation change allows for additional functionalities to be added during the post-processing stage.

[00127] Spatial data was added as part of the extended framework by incorporating a spatial sensor into the sensor module 13 of the pFMG armband 12, for example an orientation sensor, accelerometer, gyroscope and/or magnetometer. In particular, the controller 4 can include a 9-DOF sensor (BNO055) integrated with the armband 12. The extended framework focuses on the orientation sensor for efficiency of processing, although further implementations may incorporate information such as angular velocity and acceleration vectors in the spatial augmentation. In order to provide fast and convenient utilisation by end users, the extended framework utilises Euler vectors as a means of spatial augmentation. Sampled at a rate consistent with the base framework (1.35 msec), a Euler vector provides absolute orientation information in three axes (x, y and z).

[00128] In early prototypes, the orientation data was included within the featurisation process; however, this immediately presented a significant issue. Defining gestures with integrated orientation values meant that every new orientation was defined as a different gesture. This made the method impractical as there are infinite orientations in 3D space. A potential fix would be to place a sensor onto the torso of the user to give an initial frame of reference and allow for relative orientational data. This, however, poses issues of its own: the extra sensor increases the bulkiness of the final device, requires an attachment mechanism, and increases the overall cost and technical complexity, and furthermore the user's torso would have to remain still. An alternative solution was devised by removing the orientation data from the feature and classification process and instead using it within the post-processing phase of the framework. This meant that orientation data was only considered, in a complementary form, after a gesture had been predicted. This had three primary benefits:

1. As a complementary approach, no significant modifications to the base framework were necessary.

2. Spatial gestures were now relative to the starting position of the gesture (as opposed to the infinite starting positions relative to 3D orientational space).

3. Gestures were now more stable across different orientations. For example, prior to spatial augmentation, slight muscle deviations caused by changing the orientation while holding a static gesture may have influenced the prediction of a gesture. With spatial augmentation, the gestures were more ‘locked in’ - i.e. it was recognised that the user had moved whilst holding a gesture (less likely to be perceived as a new gesture altogether).

[00129] Proportional control is a linear feedback process control technique in which a correction is applied to a given variable with a magnitude that is proportional to the difference between a desired and a recorded value. In the extended framework, proportional control is made possible by changing the orientation of the arm whilst a gesture is being held. The setpoint orientation is established during initiation of the gesture and subsequent movements determine the process variable. The difference between the instantaneous orientational values of the initial and current poses influences the magnitude of the controller output in a linear fashion. Depending on the gesture being performed, only one axis may be considered. Alternatively, multiple axes may be considered, either independently or in combination, depending on the nature of the control desired.
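A minimal sketch of this proportional scheme is shown below; the axis selection, gain and output clamping are assumptions for illustration, not values from the specification.

class ProportionalControl:
    def __init__(self, gain=1.0, out_min=0.0, out_max=100.0, axis=0):
        self.gain, self.out_min, self.out_max = gain, out_min, out_max
        self.axis = axis             # e.g. rotation about the longitudinal axis
        self.setpoint = None

    def start(self, euler):
        # Setpoint orientation captured at initiation of the gesture.
        self.setpoint = tuple(euler)

    def output(self, euler):
        # Controller output is linear in the orientation difference, clamped
        # to the output range.
        error = euler[self.axis] - self.setpoint[self.axis]
        return max(self.out_min, min(self.out_max, self.gain * error))

# e.g. while a closed fist is held:
#   ctrl.start(euler_at_onset); brightness = ctrl.output(current_euler)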

[00130] For example, the pFMG system 1 under the extended framework can be configured to control the brightness of lighting such as an LED strip 33. A closed fist 23 gesture may be used to commence control of the brightness of the LED strip 33 and rotation of the fist may be used to correspondingly vary the brightness of the LED strip 33. To test this application, a closed fist 23 gesture that controls the LED strip 33 was actuated in an anterior orientation before the arm was laterally rotated. In this example, the system was designed to compute the difference in the orientation about the longitudinal (craniocaudal) axis. The magnitude of lateral rotation from the starting setpoint was then subsequently used to set the brightness of the LED strip 33. Where IR codes or signal broadcast are used to control external devices, the proportional control can be configured to determine the rate of sending discrete packets (e.g. increase television volume faster when laterally rotated); send a signal in a continuous stream; or incrementally send a series of respective control signals (increase television channel number when laterally rotated). These examples of quasi-dynamic gestures and proportional control are not limiting, and a wide variety of control is possible with the extended framework pFMG system 1.

[00131] To test and verify the extended pFMG framework, the pFMG system 1 was implemented with the spatial augmentation outlined above. Augmented versions of five common gestures used in the base framework were used for the extended framework testing. These quasi-dynamic gestures were: closed fist with supination/pronation rotation; pinch with medial/lateral rotation; spread fingers with arm flexion/extension; wave in with flexion/extension; and wave out with medial/lateral rotation. These covered rotations about the longitudinal (craniocaudal), anteroposterior (dorsoventral) and horizontal frontal (mediolateral) axes.

[00132] For the testing of the extended framework, 3 participant subjects (1 male, 2 females, age 32.3 ± 13.2 years; mean ± SD) wore the pFMG armband 12 and performed quasi-dynamic gestures. Using the same procedure as the base framework, the raw pFMG data was recorded from the upper forearm by the pFMG armband 12 at a sampling period of 1.35 msec. 2000 pFMG samples (2.7 seconds of data) were obtained per gesture for a single trial, with 3 trials recorded in total for a user session. The only adjustment was that users were asked to perform the orientational motions slowly during the 2.7 s recording time, taking care to keep the gesture consistent and to ensure the pressure chambers were not pressed against the body during rotation.

[00133] The spatial augmentation of the system 1 was deemed sufficient for determining quasi-dynamic gestures. Qualitative analysis of the extended framework demonstrated its potential efficacy and usefulness. All gestures used in the base framework were able to be augmented with spatial data and with proportional control, providing extended capabilities when controlling devices, for example teleoperating consumer IR devices. Users were able to perform a static gesture and then extend the functionality of that gesture by modifying their arm orientation. As demonstrated in FIGURE 18, a 'closed fist' gesture 23 was able to turn an LED strip 33 on and, with longitudinal orientation of the arm, modify the LED colours during the post-prediction phase. Lateral and medial rotation of the arm changed the LED strip 33 to red and blue, respectively. In the top row of Figure 18, the LED strip 33 is initially green 34 in the first frame; in the second frame lateral rotation controls the LED strip to be red 35; and in the third frame medial rotation controls the LED strip to be blue 36. Figure 18 also demonstrates how this augmented functionality is a post-prediction process, that is, if a gesture is not identified the device is not 'locked on' and therefore orientation does not control the device. That is, in the bottom row of Figure 18, the orientation of the arm is rotated laterally and medially between the three frames; however, the LED strip is not controlled to change colour and remains blue 36 despite the rotation of the user's arm, because the closed fist 23 gesture is not held during the variation in orientation. Advantageously, the post-prediction augmentation of quasi-dynamic gestures enables users to intuitively and conveniently control the application of the gestures to external devices.

[00134] Figure 18 showcases a discrete implementation of the spatial augmentation capabilities, that is, the LED strip 33 can be in the colour state of green 34, red 35 or blue 36 with no intermediate colours between states. Alternatively, a continuous gradient of colour change could be defined depending on the nature of the control desired. The implemented proportional control system allowed for a more continuous variable that was proportional to the difference between the orientated gesture position and the initial gesture position. This variable produced a spectrum of possible boundaries, with the resolution determined by the user's requirements. For example, a DVD player's fast-forwarding functionality was tested using proportional control: the more a 'Wave Out' gesture was laterally rotated, the faster the fast-forwarding function would operate. Where the system is reliant on broadcasting IR codes to control external devices, the varying orientation of a quasi-dynamic gesture may impede the IR transmitter's line of sight to the device. In such cases, IR extenders/boosters or alternative placements or control signal systems may be used.

[00135] The base pFMG framework is able to perform a vast array of body movement and gesture recognition as well as control and actuate external devices, for example devices in the prostheses, consumer electronic and/or teleoperation space. Advantageously, the base framework is modular in fashion allowing for expansion and further frameworks.

[00136] Further frameworks of the pFMG system may also be configured particularly for seniors or particular types of amputees, where the sensitivity of the chambers and the pressure thresholds may be calibrated to account for factors such as muscle tone degradation in ways that existing systems are unsuitable for. Whilst the above frameworks primarily refer to infrared or wired communication protocols, the pFMG system 1 may alternatively communicate with and control devices using radiofrequency (RF), Bluetooth or Internet Protocol (IP) transmission methods.

[00137] The modular architecture of the base framework enables the expansion of additional data acquisition methods and augmentations. That is, the system 1 is capable of interchanging entire components of the system with minimal modification required. The system utilises pFMG data as the primary means of gesture recognition using the pFMG armband 12. The system 1, however, is also able to accept any type of two-dimensional, continuously streamed data, as sketched below. Accordingly, other data acquisition sensors and data types can be utilised. This includes using dry or wet electrodes to capture EMG data; FSR sensors for FMG or MMG data; and microphones for acoustic-myography (AMG) data, among others. Furthermore, with few modifications, combinations of data sensors and data types could be utilised to generate a more holistic insight into the muscular activities of the forearm to establish application-specific human-machine interfaces. The data acquisition phase of the framework is not the only component that could benefit from incorporating additional sensors. The extended framework has presented the spatial augmentation of pFMG data primarily based on orientation data from a 9-DOF sensor. Additionally, further frameworks are configurable to implement linear/angular velocity vectors, linear/angular acceleration vectors and magnetic field strength vectors. Inclusion of these data types enables greater possibilities and increases the performance of augmented gestures. For example, quasi-dynamic gestures could be established based on arm movement speed, thereby having a temporal quality to them.
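By way of a sketch, the interchangeability described above can be expressed as a common streaming interface; the Protocol and class names below are illustrative assumptions rather than structures defined by the specification.

from typing import Protocol
import numpy as np

class BioSignalSource(Protocol):
    channels: int
    def read(self) -> np.ndarray: ...      # returns one (channels,) sample

class PFMGArmbandSource:
    channels = 5                            # five pressure chambers
    def read(self) -> np.ndarray:
        return np.zeros(self.channels)      # placeholder for real sensor I/O

# An EMG, FSR (FMG/MMG) or microphone (AMG) source implementing the same
# interface could be substituted without modifying the featurisation and
# classification stages downstream.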

[00138] The above frameworks of the pFMG system 1 have been in relation to myography based on pressure sensors 3, in particular pressure sensitive deformable chambers 2. In addition to the pressure sensitive deformable chambers 2, additional sensors can be incorporated into the sensor module 13 of the system 1 to augment and extend functionality. A wide variety of sensor modalities for detecting body movement may be incorporated into the system, for example including one or more of: electromyography (EMG), mechano-myography (MMG), acoustic-myography (AMG); sono-myography (SMG), and the like. The additional sensors may be combined with the pFMG sensors 2 in a sensor fusion implementation for determining body movements, target tracking, anomaly detection and the like. Advantageously, the pFMG chambers 2 do not require direct skin contact which enables the chambers to be readily combined with other sensing modalities which do require direct skin contact, such as sEMG sensors.

[00139] Compared to existing commercial force sensitive resistive (FSR) sensors, the pFMG sensor chambers 2 have numerous advantages including low drift, a stable signal and minimal calibration requirements. Additionally, the pFMG chambers 2 are advantageous as one part of a hybrid-modality sensor due to their easier customisation (3D-printability) and greater softness than FSR sensors. The robustness and reliability of human-machine interfaces may thus be improved by the integration of the pressure-based FMG (pFMG) sensor 2 and one or more other sensor modalities. For example, referring to FIGURE 19(A) and FIGURE 19(B), there is illustrated a hybrid EMG and pFMG sensor 37. The respective components are shown in Figure 19(A) and the assembled sensor 37 for determining body movement of a forearm muscle is shown in Figure 19(B). For such an implementation, conductive ink is sprayed onto the surface of a pneumatic chamber 2 to work as dry EMG electrodes. With flexible EMG electrodes, the softness of the pneumatic chamber 2 can be maximally maintained for the pFMG sensor to detect force/pressure from multiple directions, sensitive not only to compressive but also to tangential stresses and forces. In addition to pFMG and EMG hybrids 37, the pFMG sensor 2 can additionally or alternatively be integrated with other modalities, such as a microphone-based MMG sensor 38 attached to the top of a pneumatic chamber 2, as shown in FIGURE 20(A), or a near-infrared spectroscopy (NIRS) sensor 39, as shown in FIGURE 20(B).

[00140] Additionally, other sensor modalities and transductions (e.g., capacitive and piezoresistive sensors) can also be either placed on the surface of the pFMG unit 2 or located inside the pneumatic chamber. Advantageously, the chambers 2 which are configured to deform in response to force induced by muscle activity during body movement enables the implementation of a wide variety of sensors 3 and combinations of sensors in the sensor module 13 to determine such body movement by sensing such deformation using one or more sensor modalities. In sensor fusion implementations the corresponding sensor fusion architecture and framework is configured to leverage each sensing modality’s strength to achieve robust and stable user intention recognition.

[00141] The pFMG system 1 is a skin and/or garment-mounted non-invasive sensor system consisting of pressure sensitive deformable air chambers 2 configured to convert the mechanical activities of the muscles/tendons and other parts of the human arm or body, and any other physical movement associated with the human body and its limbs, into bio-signals. The chambers 2 and sensors 3 record the arm muscle/tendon/organ activities arising from finger, wrist or hand movements making various hand gestures, to identify the user intention through measuring the pFMG signals of the arm muscles and subsequently control external devices. Advantageously, the pFMG signals are stable and reliable, without continuous change in their baseline readings. Accordingly, the pFMG system 1 is able to provide human-machine interfacing with greater accuracy and reliability, as well as greater comfort, when compared to existing EMG and/or FSR systems.

[00142] The optimisation of the pFMG chambers 2 of the pFMG armband 12 is adaptable to an individual user's anatomy, allowing the design of pFMG chambers that provide the characteristics essential for the armband to be functional. That is, the chambers 2 are configurable to be linear, present no hysteresis and be stable over time, and the measurements are repeatable. Gestures are well discriminated when wearing the armband 12. The pFMG chambers 2 and armband 12 are reliable, biocompatible, flexible, resistant, comfortable, lightweight and fully customisable. Advantageously, because the pFMG armband 12 can be 3D printed, the dimensions, the placement of the chambers 2 and even the colours of the armband can be readily customised for each user.

[00143] The pFMG system 1 is envisioned as particularly suitable for control of prosthetic devices as a replacement for existing EMG operated prostheses. The pFMG armband 12 is soft, lightweight, low cost, compact, non-invasive, highly sensitive, robust, responsive, not sensitive to sweating, electrode location and/or impedance changes, and is highly modular and customisable. The armband 12 can also be worn directly on the skin or over garments without any degradation in performance, which greatly increases the comfort of the user.

[00144] The pFMG armband 12 can be formed from low cost 3D printable components and enables the use of readily available pressure sensor 3 electronics for highly sensitive body movement determination applications. The response time of these sensors 3 is 1ms and they require a very small amount of energy in the order of less than 13.5mW per sensor. Advantageously, the pFMG chambers 2 are conveniently manufacturable; relatively inexpensive; customisable; and effective acquisition devices for user intent and device communication. Advantageously, the pFMG system 1 provides a solution that is modular, adapted to the individual user and is sufficiently fast to allow for ‘plug and play’ capabilities. The pFMG system 1 is able to be combined with other sensing modalities; requires minimal signal processing; is integratable into portable wearable devices; requires minimal calibration; and/or undergoes minimal data drifting.

[00145] Advantageously, the modular nature and customisable configuration of the pFMG system 1 enables the application of the system not only for prosthetic devices, but also other applications requiring human-machine interfacing, for example, wearable robotic systems, gaming applications, and teleoperation and remote control of various devices and appliances.

[00146] The mean accuracy of implementations of the pFMG armband 12 system 1 was 98.3% for the tests performed on able-bodied subjects. Advantageously, the pFMG armband 12 system 1 meets or exceeds the accuracy of conventional EMG techniques. Compared with existing EMG techniques, the signal processing burden for the pFMG system 1 is significantly lower, making pFMG more portable and lower-cost. Additionally, as a mechanical bio-signal, pFMG signals are intrinsically more stable than EMG signals (electrical bio-signals) and are more suitable for low-frequency (≤10 Hz) hand gesture recognition applications. pFMG signals are thus stable, repeatable and consistent, with minimal noise. Compared to resistive polymer thick film (RPTF) sensors (FSR series and Flexiforce), pFMG sensors require minimal calibration. Advantageously, this enables the implementation of machine learning algorithms with pFMG sensors which result in high accuracy. The gesture recognition accuracy of the pFMG system 1 is very high (>90%) based on the developed machine learning algorithms.

[00147] The pFMG armband 12 is also relatively lightweight, at less than approximately 100 grams, which means it can be used for extended periods of time. The pFMG armband 12 can be worn either directly on the forearm or over a shirt without affecting the signal. The pFMG armband 12 can be customised for each subject, leading to comfort and enhanced performance. The pFMG armband 12 is robust and soft, adaptive to the forearm of each subject and sensitive enough to sense the muscle activities. Advantageously, the overall performance of the armband is not affected by sweat, hair or skin, which is a key point given the fact that robotic prosthetic hands tend to heat up when used.

[00148] The pFMG system 1 is relatively agnostic to the positioning of the armband and to variance between users, making the system substantially universal and enabling machine learning classifiers to be particularly effective. Advantageously, the pFMG system 1 can be adapted for various users to recognise various body movements with minimal modification required. The easy training process also makes the armband an easy plug and play solution. Additionally, the pFMG armband 12 is modular, wherein a failed chamber or module can be easily and quickly replaced at a relatively low cost.

[00149] The pFMG system 1 is relatively simple in that complex materials, manufacturing techniques or processing are not required. However, the system 1 is also innovative, effective and robust. Moreover, the pFMG chambers 2 enable the integration of pressure sensors with other types of bio-signal sensors (for example, EMG, MMG, NIR and the like) to create a hybrid sensor 37 for a more complete picture of muscular information.

[00150] The pFMG system 1 being accurate, efficient, affordable, sensitive, adaptable, modular, robust, stable, interference resistant, responsive, and customisable enables the use of the system in a vast array of applications. For example, the pFMG system 1 can be used to directly control prosthetic hands. Therefore, it can be an integral part of a prosthetic device in which it makes control more intuitive, just like a natural limb. Moreover, as an effective human machine interface system, the pFMG system 1 also has applications including controlling robotic devices, wheelchairs, drones, home devices, and virtual reality environments. This technology can also bring benefits to rehabilitation and protective facilities (i.e., adaptive and sensible flooring) since it can work as a dual-mode device with active actuation and passive sensing simultaneously. The pFMG system 1 is low enough in cost for applications in the entertainment and commercial goods areas as well as accurate and effective enough for applications in medical fields and high-end robotics.

[00151] The results in the specification were substantially derived from: Sam Young, "Gesture intention framework for infrared device teleoperation using novel pressure myography soft armband", Honours Thesis in Biomedical Engineering, University of Wollongong, Australia, June 2020, supervised by Gursel Alici (unpublished and confidential thesis); and Marine Zagdoun, "Development and Performance Evaluation of a Soft and Stretchable Armband to Identify the Intention of Prosthetic Hand Users", Practicum Project Report, University of Wollongong, Australia, 04/02/2019-02/08/2019, supervised by Gursel Alici (unpublished and confidential report); the contents of which are incorporated herein by reference.

[00152] It will be appreciated that the illustrated embodiments of the invention provide a robust, ergonomic and cost-effective system and method for robustly, accurately and conveniently determining body movements.

[00153] Although the invention has been described with reference to specific examples, it will be appreciated by those skilled in the art that the invention may be embodied in many other forms.