

Title:
WEIGHT BEARING IMAGING CALIBRATION BASED ON PRESSURE SENSING
Document Type and Number:
WIPO Patent Application WO/2023/154546
Kind Code:
A1
Abstract:
Systems and methods are provided for weight bearing images. A pressure sensor has a surface for receiving the feet of a patient while the patient is standing and generating pressure distribution map having a first region associated with a first foot of the patient and a second region associated with a second foot of the patient. An imager captures an image of a region of interest associated with the patient while the patient is standing on the sensor interface. A predictive model receives a representation of the first region and a representation of the second region and determines a value representing a stance of the patient. An output interface provides the value representing the stance of the patient to one of a display, the imager, and an image post-processing model.

Inventors:
SCHWAB JOSEPH H (US)
GHAEDNIA HAMID (US)
ASHKANI-ESFANI SOHEIL (US)
DIGIOVANNI CHRISTOPHER W (US)
TASEH ATTA (US)
DETELS KELSEY (US)
Application Number:
PCT/US2023/012998
Publication Date:
August 17, 2023
Filing Date:
February 14, 2023
Assignee:
MASSACHUSETTS GEN HOSPITAL (US)
International Classes:
A61B5/103; A43D1/02; G06V40/60
Domestic Patent References:
WO2018192933A1 (2018-10-25)
WO2017062530A1 (2017-04-13)
Foreign References:
US9414781B2 (2016-08-16)
Attorney, Agent or Firm:
WESORICK, Richard S. (US)
Claims:
CLAIMS

What is claimed is:

1. A system, comprising: a pressure sensor having a surface for receiving the feet of a patient while the patient is standing and generating a pressure distribution map having a first region associated with a first foot of the patient and a second region associated with a second foot of the patient; an imager that captures an image of a region of interest associated with the patient while the patient is standing on the pressure sensor; a predictive model that receives a representation of the first region and a representation of the second region and determines a value representing a stance of the patient; and an output interface that provides the value representing the stance of the patient to one of a display, the imager, and an image post-processing model.

2. The system of claim 1, wherein the value representing the stance of the patient is a categorical parameter representing whether the stance of the patient represents a balance in weight support between the first foot and the second foot.

3. The system of claim 2, wherein the output interface provides the value to the imager, the imager being configured to capture an image of a region of interest of the patient when the value representing the stance of the patient assumes a first value.

4. The system of claim 1, wherein the value representing the stance of the patient is a set of pressure parameters representing the stance of the patient, the set of pressure parameters being provided to the image post-processing model.

5. The system of claim 4, wherein the image post-processing model further receives a first image from the imaging system as an input, the image post-processing model providing a second image, representing an expected appearance of the first image if the stance of the patient had been balanced.

6. The system of claim 4, wherein the set of pressure parameters representing the stance of the patient include at least two of a total applied force, a total contact area on the sensor, a center of the pressure, a center of the contact area, a maximum pressure, a minimum pressure, a standard value distribution of the pressure at each pixel, and a weight bearing ratio between the contralateral sides.

7. The system of claim 4, wherein the image post-processing model further receives an image from the imaging system as an input and generates a continuous or categorical parameter representing a disorder of the lower extremities of the patient.

8. The system of claim 4, wherein the image post-processing model generates a matrix of correction values that can be applied to a set of clinical parameters extracted from the image from the imaging system to correct the set of clinical parameters for a deviation of the stance of the patient from a balanced stance.

9. A method for weight-bearing imaging comprising: generating a pressure distribution map having a first region associated with a first foot of a patient and a second region associated with a second foot of the patient while the patient is standing on a pressure sensor; determining a value representing a stance of the patient from a representation of the first region and a representation of the second region; capturing an image of a region of interest associated with the patient at an imager while the patient is standing on the pressure sensor; and providing the value representing the stance of the patient to one of a display, the imager, and an image post-processing model.

10. The method of claim 9, further comprising giving the patient access to a smart application that asks the patient a set of questions about common symptoms associated with disorders in the lower extremities while the value representing the stance of the patient is determined.

11. The method of claim 10, wherein the set of questions includes questions about at least two of pain, decreased range of motion, bruising, tenderness to touch, and changes in pain with activity.

12. The method of claim 9, wherein the image of the region of interest is captured at the imager automatically in response to the value representing the stance of the patient.

13. The method of claim 9, wherein the value representing the stance of the patient is a set of pressure parameters representing the stance of the patient, and providing the value representing the stance of the patient to one of the display, the imager, and the image post-processing model comprises providing the set of pressure parameters to the image post-processing model.

14. The method of claim 13, further comprising: providing the image of the region of interest as an input to the image post-processing model; and generating a corrected image at the image post-processing model, representing an expected appearance of the captured image if the stance of the patient had been balanced, from the set of pressure parameters and the image of the region of interest.

15. The method of claim 13, further comprising: providing the image of the region of interest as an input to the image post-processing model; and generating a continuous or categorical parameter representing a disorder of the lower extremities of the patient from the image of the region of interest and the set of pressure parameters.

16. The method of claim 13, further comprising generating a matrix of correction values that can be applied to a set of clinical parameters extracted from the image of the region of interest to correct the set of clinical parameters for a deviation of the stance of the patient from a balanced stance.

17. A method for weight-bearing imaging comprising: instructing a patient to stand on a pressure sensor; giving a patient access to a smart application that asks the patient a set of questions about common symptoms associated with disorders in the lower extremities while the patient is standing on the pressure sensor; generating a pressure distribution map at the pressure sensor and capturing an image of a region of interest while the patient is interacting with the smart application; and displaying the captured image and the pressure distribution map to a clinician.

18. The method of claim 17, wherein the pressure distribution map has a first region associated with a first foot of a patient and a second region associated with a second foot of the patient, the method further comprising determining a value representing a stance of the patient from a representation of the first region and a representation of the second region.

19. The method of claim 18, wherein the value representing the stance of the patient is a set of pressure parameters representing the stance of the patient, the method further comprising providing the set of pressure parameters to an image post-processing model.

20. The method of claim 17, wherein the set of questions includes questions about at least two of pain, decreased range of motion, bruising, tenderness to touch, and changes in pain with activity.

Description:
WEIGHT BEARING IMAGING CALIBRATION BASED ON PRESSURE SENSING

RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application Serial No. 63/310,081, filed February 14, 2022, and entitled “Weight Bearing Imaging Calibration Based on Foot Pressure and Patient Reported Outcome Recorder Tool”, the entirety of which is incorporated by reference herein.

TECHNICAL FIELD

[0002] The present invention relates generally to sensor systems, and specifically to calibration of weight bearing imaging using pressure sensing.

BACKGROUND

[0003] Patients referred to orthopaedic clinics often need to undergo clinical imaging. When the trauma involves a weight-bearing axis of the body, such as the spine, hip, knee, ankle, or foot joints, imaging under the physiologic weight load provides a more accurate view of the joints. The congruence of the joints and ligaments is better and more realistically shown in weight-bearing mode.

Furthermore, any abnormality in the congruency, alignments, diastases, and minor defects will be more precisely appreciated in weight-bearing images. Weight-bearing imaging using radiographs, fluoroscopy, and computed tomography (CT) has been used widely as one of the most important imaging techniques in detecting orthopaedic conditions, particularly in the lower extremities. The reliability and accuracy of weight-bearing imaging increase particularly when one side is compared to the contralateral side as a control. All the measurements and alignments can be compared between the bilateral joints, and a clinician can decide if abnormalities are present based on the differences. However, one of the main limitations of this imaging, which can sometimes mislead the technician and reduce the precision of the technique, is a failure of the patient to balance weight evenly while the images are being obtained. This imbalance might be due to various reasons such as pain, musculoskeletal deformities, neuropathies, weakness, or even lack of experience among the technicians. Since many of these subtle deformities require meticulous observation, measurement, and comparison of different parameters in the joint with the contralateral side, lack of calibration can lead to missed diagnosis or even misdiagnosis of the patient’s injury.

SUMMARY

[0004] In one example, a system includes a pressure sensor with a surface for receiving the feet of a patient while the patient is standing and generating a pressure distribution map having a first region associated with a first foot of the patient and a second region associated with a second foot of the patient. An imager captures an image of a region of interest associated with the patient while the patient is standing on the pressure sensor. A predictive model receives a representation of the first region and a representation of the second region and determines a value representing a stance of the patient. An output interface provides the value representing the stance of the patient to one of a display, the imager, and an image post-processing model.

[0005] In another example, a method is provided for weight-bearing imaging. A pressure distribution map having a first region associated with a first foot of a patient and a second region associated with a second foot of the patient is generated while the patient is standing on a pressure sensor. A value representing a stance of the patient is determined from a representation of the first region and a representation of the second region. An image of a region of interest associated with the patient is captured at an imager while the patient is standing on the pressure sensor. The value representing the stance of the patient is provided to one of a display, the imager, and an image post-processing model.

[0006] In a further example, a method is provided for weight-bearing imaging. A patient is instructed to stand on a pressure sensor. The patient is given access to a smart application that asks the patient a set of questions about common symptoms associated with disorders in the lower extremities while the patient is standing on the pressure sensor. A pressure distribution map is generated at the pressure sensor and an image of a region of interest is captured while the patient is interacting with the smart application. The captured image and the pressure distribution map are displayed to a clinician.

[0007] Other objects and advantages and a fuller understanding of the invention will be had from the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 illustrates a system for performing weight bearing imaging in accordance with an aspect of the present invention;

[0009] FIG. 2 illustrates an example of a pressure sensor that can be used with the system of FIG. 1;

[0010] FIG. 3 illustrates another example of a system for performing weight bearing imaging in accordance with an aspect of the present invention;

[0011] FIG. 4 illustrates one example of a method for weight bearing imaging;

[0012] FIG. 5 illustrates one example of a method for capturing an image of a region of interest of a patient;

[0013] FIG. 6 illustrates one example of a method for capturing an image of a region of interest of a patient during acquisition of a patient’s reported outcome data; and

[0014] FIG. 7 is a schematic block diagram illustrating an exemplary system of hardware components capable of implementing examples of the systems and methods disclosed herein.

DETAILED DESCRIPTION

[0015] As used herein, a “matrix” is an array of values in which at least one of the number of rows and the number of columns in the array is greater than one. Accordingly, a matrix, as used herein, should be read to encompass values traditionally referred to as row vectors or column vectors, but should not be read to encompass scalar values.

[0016] The systems and methods disclosed herein utilize a high-resolution pressure analysis device to analyze features of a patient’s stance, including the weight applied on each side of the body, leaning, the shape of the foot, valgus and varus positions, and planus and cavus positions, and to correlate the parameters obtained from the pressure of the patient's feet and the distribution of that pressure with the weight bearing imaging technique, including weight bearing radiography, fluoroscopy, and weight bearing computed tomography (CT). This allows for balancing and levelling the images of the patient so that, at the time of comparison, the injured side can be at the same level and position as the uninjured contralateral side, removing the bias caused by an imbalanced and/or mispositioned stance. This enables the clinician to see a specific location in both extremities bilaterally at the same time. For example, if an axial image of bilateral ankles is provided via weight bearing CT and the tibial plafond is depicted on one side, the clinician will see the tibial plafond at the same level and with a similar orientation on the contralateral side as well.

[0017] FIG. 1 illustrates a system 100 for performing weight bearing imaging in accordance with an aspect of the present invention. The illustrated system 100 includes a pressure sensor 102 having a surface for receiving the feet of a patient while the patient is standing. The pressure sensor 102 generates a pressure distribution map with a first region associated with a first foot of the patient and a second region associated with a second foot of the patient. It will be appreciated that the pressure sensor 102 can be implemented as any appropriate sensor for generating a pressure distribution map over a region large enough to accommodate a standing patient, such as a pressure mat or similar device.

[0018] FIG. 2 illustrates an example of a pressure sensor 200 that can be used with the system 100 of FIG. 1. While the example sensor 200 is described below, further details can be found in PCT Publication WO 2022/232676, published February 11, 2022, and entitled “System and Method for Optical Pressure Sensing of Body Part,” the entire contents of which are hereby incorporated by reference. In one instance, the pressure sensor 200 is configured as a frustrated total internal reflection (“FTIR”) system. Frustrated total internal reflection occurs when light travels from a material with a higher refractive index toward a material with a lower refractive index at an angle greater than its critical angle. The reflection becomes “frustrated” when a third object comes into contact with the surface and alters the way the waves propagate, and capturing this altered propagation can be used to produce a surface pattern. FTIR is able to detect the interface/contact area at a very high resolution through image processing. Through software, measured light intensity can be sorted into a gradient of high-to-low intensity pixels.

[0019] The system 200 includes a platform or frame 220 having a series of legs 222 supporting a sensor interface 230. The sensor interface 230 includes a planar panel having a contact surface facing upwards that is intended to be stood upon by a patient during imaging. An imaging system 254 is configured to capture images of multiple wavelength light reflected off the individual’s feet 256 while engaging the sensor interface 230. A computing device 270 can then extract data from the reflected light images and, either analytically or with machine learning, transform the light intensity data into a real time pressure distribution map for visualizing the same and making diagnoses/assessments therefrom.
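
The intensity-sorting step described above can be sketched in a few lines. This is an illustrative example only, not the implementation from the referenced publication; the bin count and the 0-255 intensity range are assumptions.

```python
# Hedged sketch: sorting measured FTIR light intensities into a gradient of
# high-to-low intensity bins. Brighter contact regions map to higher bins.
# The 8-bin quantization and 0-255 input range are assumed for illustration.

def intensity_to_bins(image, n_bins=8):
    """Quantize a 2-D grid of light intensities (0-255) into n_bins levels."""
    return [[min(v * n_bins // 256, n_bins - 1) for v in row] for row in image]

frame = [
    [0, 10, 200, 255],
    [5, 120, 240, 250],
]
print(intensity_to_bins(frame))
```

A real system would calibrate these bins against known applied loads rather than using a fixed linear quantization.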

[0020] A light source 250 is provided for emitting light through the cross-section of the panel. In other words, the light is emitted between the surfaces defining the thickness of the panel. The light source 250 can be formed as a series of light emitting diodes (LEDs) arranged in a predefined pattern. The light source 250 can emit light from one or more wavelengths. In one example, the light source emits red, green, and blue light (RGB). Alternatively, the light source emits ultraviolet and/or infrared light. As shown, the light source 250 is provided on opposite sides of the frame 220 such that the LEDs all emit light towards the center of the panel. Since the light source 250 emits multiple wavelength light into the sensor interface 230, extracted images can include not only different light intensity but also different light color. It will be appreciated that human body parts are not homogenous and therefore different portions of, for example, the foot can exhibit different surface roughness, Young’s Modulus, elasticity, etc. Different wavelength light reacts differently to these different material properties and, thus, imaging body parts with multiple wavelength light enables the sensor 200 to spatially map the material properties of the body part.

[0021] An imaging system 254 is positioned within the frame and includes one or more cameras. As shown, the imaging system comprises a single camera 254 having a field of view facing upwards towards the sensor interface 230 for capturing images of or around the contact surface. A projection material (not shown) can be applied to the underside of the panel for displaying an image to be captured and scattering light. A controller or control system 260 is connected to the light source 250, a set of pressure sensors 252, and the camera 254. The controller 260 is configured to control the light source 250, e.g., control the color(s), intensity, and/or duration of the light emitted by the light source. The controller 260 is also configured to receive signals from the camera 254 indicative of the images taken thereby and signals from the pressure sensors 252 indicative of the pressure exerted on the contact surface. A computer or computing device 270 having a display 272 is connected to the control system 260. The computing device 270 can be, for example, a desktop computer, smart phone, tablet, etc.

[0022] The sensor interface 230 is configured to cooperate with the light source 250, pressure sensors 252, camera 254, and controller 260 to analyze interactions between a subject/individual and the sensor interface. In particular, the feet 256 of the individual interact with the contact surface of the sensor interface 230 over an interface zone 258. The imaging system 254 provides image data that includes a plurality of image frames based on a frame rate at which the frames are acquired by the imaging system. The computing device 270 can be programmed with instructions executable by one or more processors of the computing device, including lighting controls programmed to provide instructions to the controller 260 for controlling operating parameters of the light source 250. The computing device 270 can be coupled to the sensor interface 230 and controller 260 by a physical connection, e.g., electrically conductive wire, optical fibers, or by a wireless connection, e.g., Wi-Fi, Bluetooth, near field communication, or the like. In one example, the controller 260 and computing device 270 can be external to the sensor interface 230 and be coupled therewith via a physical and/or wireless link.

Alternatively, the controller 260 and computing device 270 can be integrated into the sensor interface 230.

[0023] When the system 200 is operating, the light source 250 emits multiple wavelength light into the cross-section of the panel. This light is trapped within the panel and travels therethrough as denoted generally by the representative light lines. When the individual, or an article worn by the individual, such as a shoe (not shown), makes contact with the contact surface, the total internal reflection is frustrated at the interface zone 258 and directed generally towards the camera 254.

[0024] The camera 254 images this reflection and sends the image data to the computing device 270, which can generate a composite light reflection map across the entire contact surface. In other words, the computing device 270 receives image signals from the camera 254, interprets/analyzes those signals, and generates a composite map illustrating the reflection of light off the foot 256 and along the contact surface 242. Since multiple wavelengths of light are emitted by the light source 250, the composite map illustrates how multiple wavelengths of light, e.g., RGB light, interact with the foot 256 on the contact surface.

[0025] At the same time, the pressure sensors 252 send signals to the computing device 270 indicative of the pressure exhibited by each foot 256 on the contact surface. The individual can perform multiple interactions with the sensor interface 230 such that multiple data sets are compiled. The interactions can be repeating the same motion, e.g., standing on one foot 256, standing on the other foot, jumping, stepping on and/or off the contact surface, etc., or performing multiple, different motions.

[0026] Returning to FIG. 1, an imager 104 captures an image of a region of interest associated with the patient while the patient is standing on the pressure sensor 102. It will be appreciated that the imager 104 can be distinct from any imager associated with the pressure sensor 102, such as the imaging system 254 of FIG. 2. In practice, the imager 104 can include, for example, a radiographic imaging system, an angiographic imaging system, or a computed tomography (CT) imaging system. It will be appreciated that the imager 104 can be positioned to allow for a clear view of the region of interest while the patient is standing on the pressure sensor. In one implementation, using the pressure sensor 200 described in FIG. 2, X-ray images can be taken through the glass panel of the pressure sensor. In one example, to encourage a more natural stance from the patient, the patient can be encouraged, during the procedure, to interact with an application that queries the patient about current symptoms and other relevant medical data to provide a set of patient reported outcome data. The application can be loaded onto a mobile device belonging to the patient in advance of the imaging, provided to the patient via a mobile device, such as a tablet, or displayed or projected on a screen within view of the patient, with the patient encouraged to interact via a wireless input device or vocal responses.

[0027] A predictive model 106 receives a representation of the first region of the pressure distribution map and a representation of the second region of the pressure distribution map and determines a value representing a stance of the patient. A representation of a given region of the pressure distribution map can include, for example, intensity values associated with some or all of the pixels within the given region or values extracted from the intensity values associated with the pixels within the given region, such as descriptive statistics, histogram bins of the intensity values, and other local or global image features. In one implementation, the value representing the stance of the patient is a categorical parameter indicating if the patient is in a balanced stance, that is, if the patient’s weight is supported substantially equally by both feet. It will be appreciated that a patient’s weight is supported “substantially equally” when it would be expected that the impact of the load through each foot on respective images taken on the corresponding side of the body would be within a predetermined tolerance.
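
The region representation described above can be illustrated with a short sketch that computes descriptive statistics and histogram bins from per-pixel pressure values. The feature names, bin count, and pressure range are illustrative assumptions, not taken from this document.

```python
# Hedged sketch: building a "representation" of one pressure-map region as
# descriptive statistics plus histogram bins of per-pixel values.
# Feature names and the assumed 0-100 pressure range are illustrative only.
import statistics

def region_features(pressures, n_bins=4, p_max=100.0):
    """Summarize a flat list of per-pixel pressure values for one region."""
    hist = [0] * n_bins
    for p in pressures:
        idx = min(int(p / p_max * n_bins), n_bins - 1)  # clamp top bin
        hist[idx] += 1
    return {
        "mean": statistics.mean(pressures),
        "stdev": statistics.pstdev(pressures),
        "max": max(pressures),
        "hist": hist,
    }

left_region = region_features([10.0, 20.0, 80.0, 90.0])
print(left_region)
```

The predictive model would receive one such feature dictionary (or vector) per foot.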

[0028] Alternatively, the value representing the patient’s stance can be a categorical value representing whether the patient is in a natural stance. To determine a natural stance for the patient, the patient can be distracted, for example, by a request for patient reported outcome data, while standing on the pressure sensor 102, and the natural stance of the patient can be measured as a baseline. The categorical value can then indicate the similarity of a later stance of the patient with this measured baseline, for example, to allow an image to be captured at the imager 104 in the patient’s natural stance. In another implementation, the value representing a patient’s stance can be a matrix of parameters extracted from the pressure map and used for correction of an image captured at the imager 104 or correction of values extracted from the image to account for imbalance or a variation from a natural baseline in the patient’s stance.
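
The baseline comparison described above can be sketched as a simple similarity check between a later stance and the measured natural-stance baseline. The per-foot force-share representation and the 5% tolerance are assumptions for illustration.

```python
# Hedged sketch of the natural-stance comparison: a later stance matches the
# measured baseline when each foot's share of the load is within a tolerance
# of the baseline share. The tolerance value is an assumed example.

def matches_baseline(baseline, current, tolerance=0.05):
    """baseline/current: per-foot force shares, e.g. (left_share, right_share)."""
    return all(abs(b - c) <= tolerance for b, c in zip(baseline, current))

baseline = (0.46, 0.54)                          # measured natural stance
print(matches_baseline(baseline, (0.48, 0.52)))  # close to baseline
print(matches_baseline(baseline, (0.30, 0.70)))  # far from baseline
```

The categorical value provided to the imager could then simply be whether this check passes.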

[0029] It will be appreciated that the predictive model 106 can utilize one or more pattern recognition algorithms, implemented, for example, as classification and regression models, each of which analyze data from the pressure sensor 102 to assign the value or values to the patient. Where multiple classification and regression models are used, the predictive model 106 can include an arbitration element that can be utilized to provide a coherent result from the various algorithms. Depending on the outputs of the various models, the arbitration element can simply select a class from a model having a highest confidence, select a plurality of classes from all models meeting a threshold confidence, select a class via a voting process among the models, or assign a numerical parameter or parameters based on the outputs of the multiple models. Alternatively, the arbitration element can itself be implemented as a classification model that receives the outputs of the other models as features and generates one or more output classes or continuous parameters for the patient.
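
Two of the arbitration strategies listed above (highest confidence and voting) can be sketched as follows, assuming each constituent model reports a (class, confidence) pair. The function names are illustrative, not from this document.

```python
# Hedged sketch of two arbitration strategies over multiple model outputs,
# each output assumed to be a (class_label, confidence) pair.
from collections import Counter

def arbitrate_highest(outputs):
    """Select the class from the model reporting the highest confidence."""
    return max(outputs, key=lambda o: o[1])[0]

def arbitrate_vote(outputs):
    """Select the class chosen by the largest number of models."""
    return Counter(cls for cls, _ in outputs).most_common(1)[0][0]

outputs = [("balanced", 0.70), ("imbalanced", 0.55), ("balanced", 0.60)]
print(arbitrate_highest(outputs))
print(arbitrate_vote(outputs))
```

A stacked-classifier arbitration element, as mentioned in the alternative, would instead learn a mapping from these outputs to a final class.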

[0030] The predictive model 106, as well as any constituent models, can be trained on training data associated with known patient stances. Training data can include, for example, the representations of the first and second regions of the pressure map for a plurality of patients, each labeled with a stance assumed by the patient. The training process of the predictive model will vary with its implementation, but training generally involves a statistical aggregation of training data into one or more parameters associated with the output classes. For rule-based models, such as decision trees, domain knowledge, for example, as provided by one or more human experts, can be used in place of or to supplement training data in selecting rules for classifying a user using the input data. Any of a variety of techniques can be utilized for the models, including support vector machines, regression models, self-organized maps, k-nearest neighbor classification or regression, fuzzy logic systems, data fusion processes, boosting and bagging methods, rule-based systems, or artificial neural networks.
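
The "statistical aggregation of training data into parameters associated with the output classes" can be illustrated with a toy nearest-centroid classifier over labeled stance feature vectors. This is a deliberately simple stand-in for the model families listed above (SVMs, k-NN, neural networks, etc.), with made-up feature values.

```python
# Hedged sketch: training aggregates labeled feature vectors into one
# centroid per stance class; prediction assigns the nearest centroid.

def train_centroids(samples, labels):
    """Average the feature vectors belonging to each stance class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in feature space."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy features: (left force share, right force share) per training sample.
centroids = train_centroids(
    [[0.50, 0.50], [0.48, 0.52], [0.30, 0.70], [0.25, 0.75]],
    ["balanced", "balanced", "imbalanced", "imbalanced"],
)
print(predict(centroids, [0.49, 0.51]))
```

A production model would use one of the richer techniques named in the paragraph, but the train/predict split is the same.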

[0031] In one example, the predictive model 106 performs a comparison between the regions of the pressure distribution map representing the individual’s feet via a two-dimensional (2-D) cross correlation method, such as a 2-D fast Fourier transform (FFT) correlation method, performed on the mirrored image of one foot and the original image of the contralateral foot to match the contact areas while the subject is standing on one foot or both feet simultaneously. The mechanical properties, contact area patterns, and pressure distribution patterns can then be compared between the two feet to generate the value representing the stance of the patient.
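
The FFT-based cross-correlation described above can be sketched with NumPy: one foot's region is mirrored and correlated against the contralateral region, and the peak of the correlation gives the best-matching alignment of the contact areas. The array shapes and contact patch are toy values, not a clinical pressure map.

```python
# Hedged sketch of 2-D FFT cross-correlation between a mirrored foot region
# and the contralateral region. Assumes numpy; data here is synthetic.
import numpy as np

def match_feet(left, right):
    """Return the (row, col) shift that best aligns mirror(left) with right,
    using circular cross-correlation computed via 2-D FFTs."""
    mirrored = np.fliplr(left)  # mirror one foot across the vertical axis
    corr = np.fft.ifft2(np.fft.fft2(right) * np.conj(np.fft.fft2(mirrored))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

left = np.zeros((8, 8))
left[2:5, 1:3] = 1.0                                   # toy contact patch
right = np.roll(np.fliplr(left), (1, 2), axis=(0, 1))  # shifted mirror image
print(match_feet(left, right))
```

Once the alignment is found, the pressure patterns at matched locations can be compared directly between the two feet.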

[0032] Regardless of the specific model employed, the value or values generated at the predictive model 106 can be provided to an output interface 108 that provides the value representing the stance of the patient to one of a user, an imaging system, and an image post-processing model. In one example, the output interface 108 provides the value to a user, such as a clinician performing the imaging, at an associated display (not shown). In another example, the output interface 108 provides the value to the imager 104 to instruct the imager to capture the image of the region of interest. For example, the imager 104 can be configured to automatically capture the image when the categorical parameter assumes a value indicating that the patient is in either a balanced or natural stance.

[0033] FIG. 3 illustrates another example of a system 300 for performing weight bearing imaging in accordance with an aspect of the present invention. The system 300 includes a pressure sensor 302 with a surface for receiving the feet of a patient while the patient is standing. The pressure sensor 302 generates a pressure distribution map having a first region associated with a first foot of the patient and a second region associated with a second foot of the patient. The pressure sensor 302 can be implemented as an optical pressure sensor, such as that described in FIG. 2, a pressure mat, or a similar device. An imager 304 is configured to capture an image of a region of interest associated with the patient while the patient is standing on the sensor interface. The imager 304 can include, for example, a radiological imaging system, an angiographic imaging system, or a computed tomography (CT) imaging system.

[0034] A predictive model 306 receives a representation of the first region and a representation of the second region and determines a set of pressure parameters for use in post-processing of the image that are provided, via an output interface 308, to an image post-processing model 310. In one implementation, the set of pressure parameters represents a difference between the first and second regions of the pressure distribution map. In another implementation, the set of pressure parameters represents a deviation of the patient’s stance from a baseline value, which can be either a baseline specific to the patient or an average baseline determined from a training set of healthy patients. In a further implementation, the set of pressure parameters includes a series of derived values representing either the pressure distribution map globally or the individual regions, including a total contact area, a total force, locations representing the center of pressure, center of force, and the center of the contact area, and standard deviations of the contact area, pressure, and force.
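
A few of the derived values listed above (total contact area, total force, and center of pressure) can be sketched directly from a per-pixel pressure map. Units, the contact threshold, and the dictionary keys are assumptions for illustration.

```python
# Hedged sketch of derived pressure parameters from a 2-D pressure map:
# total force, total contact area (pixels above a threshold), and the
# pressure-weighted center of pressure in (row, col) coordinates.

def pressure_parameters(pmap, threshold=0.0):
    """pmap: 2-D list of per-pixel pressures; returns global parameters."""
    total_force = 0.0
    contact_area = 0
    m_row = m_col = 0.0
    for r, row in enumerate(pmap):
        for c, p in enumerate(row):
            if p > threshold:
                contact_area += 1
                total_force += p
                m_row += r * p  # pressure-weighted moments
                m_col += c * p
    cop = (m_row / total_force, m_col / total_force)
    return {"force": total_force, "area": contact_area, "center": cop}

pmap = [
    [0.0, 2.0, 0.0],
    [0.0, 2.0, 4.0],
]
print(pressure_parameters(pmap))
```

Computing the same quantities per region, and their deviation from a baseline, yields the kind of parameter set the post-processing model consumes.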

[0035] The image post-processing model 310 supplements the image or a set of values extracted from the image to account for the patient’s stance according to the set of pressure parameters received from the predictive model. The image post-processing model 310 is implemented as one or more pattern recognition algorithms, each of which analyzes some or all of the extracted set of pressure parameters and provides an output that supplements the image generated at the imager 304. Where multiple classification or regression models are used, an arbitration element can be utilized to provide a coherent result from the plurality of models. The training process of a given classifier will vary with its implementation, but training generally involves a statistical aggregation of training data into one or more parameters associated with the output class. The training process can be accomplished on a remote system and/or on a local device, such as a wearable or portable device. The training process can be achieved in a federated or non-federated fashion. For rule-based models, such as decision trees, domain knowledge, for example, as provided by one or more human experts or extracted from existing research data, can be used in place of or to supplement training data in selecting rules for classifying a user using the extracted features. Any of a variety of techniques can be utilized for the classification algorithm, including support vector machines, regression models, self-organized maps, fuzzy logic systems, data fusion processes, boosting and bagging methods, rule-based systems, or artificial neural networks.

[0036] An SVM classifier can utilize a plurality of functions, referred to as hyperplanes, to define decision boundaries in the N-dimensional feature space, where each of the N dimensions represents one associated feature of the feature vector. The boundaries define a range of feature values associated with each class. Accordingly, an output class and an associated confidence value can be determined for a given input feature vector according to its position in feature space relative to the boundaries. In one implementation, the SVM can be implemented via a kernel method using a linear or non-linear kernel.

[0037] An ANN classifier comprises a plurality of nodes having a plurality of interconnections. The values from the feature vector are provided to a plurality of input nodes. The input nodes each provide these input values to layers of one or more intermediate nodes. A given intermediate node receives one or more output values from previous nodes. The received values are weighted according to a series of weights established during the training of the classifier. An intermediate node translates its received values into a single output according to a transfer function at the node. For example, the intermediate node can sum the received values and subject the sum to a binary step function. A final layer of nodes provides the confidence values for the output classes of the ANN, with each node having an associated value representing a confidence for one of the associated output classes of the classifier. In another example, an autoencoder can be used as an anomaly detector to identify when various clinical parameters are outside their normal range for an individual.
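A minimal sketch of the forward pass just described, using the binary step transfer function as the example above suggests (the weights here are illustrative, not trained):

```python
def step(value):
    # binary step transfer function applied at an intermediate node
    return 1.0 if value >= 0.0 else 0.0

def forward(features, hidden_layer, output_layer):
    """Illustrative sketch of an ANN forward pass. Each layer is a list of
    (weights, bias) pairs, one per node; the final layer emits one
    confidence value per output class."""
    hidden = [step(sum(w * f for w, f in zip(weights, features)) + bias)
              for weights, bias in hidden_layer]
    return [sum(w * h for w, h in zip(weights, hidden)) + bias
            for weights, bias in output_layer]
```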

[0038] Many ANN classifiers are fully connected and feedforward. A convolutional neural network, however, includes convolutional layers in which nodes from a previous layer are only connected to a subset of the nodes in the convolutional layer. Recurrent neural networks are a class of neural networks in which connections between nodes form a directed graph along a temporal sequence. Unlike a feedforward network, recurrent neural networks can incorporate feedback from states caused by earlier inputs, such that an output of the recurrent neural network for a given input can be a function of not only the input but one or more previous inputs. As an example, Long Short-Term Memory (LSTM) networks are a modified version of recurrent neural networks that makes it easier to retain information from earlier inputs over long sequences.

[0039] A rule-based classifier applies a set of logical rules to the extracted features to select an output class. Generally, the rules are applied in order, with the logical result at each step influencing the analysis at later steps. The specific rules and their sequence can be determined from any or all of training data, analogical reasoning from previous cases, or existing domain knowledge. One example of a rule-based classifier is a decision tree algorithm, in which the values of features in a feature set are compared to corresponding thresholds in a hierarchical tree structure to select a class for the feature vector. A random forest classifier is a modification of the decision tree algorithm using a bootstrap aggregating, or “bagging,” approach. In this approach, multiple decision trees are trained on random samples of the training set, and an average (e.g., mean, median, or mode) result across the plurality of decision trees is returned. For a classification task, the result from each tree would be categorical, and thus a modal outcome can be used.
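The decision tree traversal and the modal bagging vote described above can be sketched as follows (a minimal illustration with hypothetical tree structures and class labels, not the specification's implementation):

```python
from collections import Counter

def tree_predict(features, node):
    """Walk a decision tree given as nested dicts of the form
    {"feature": index, "threshold": value, "left": ..., "right": ...},
    where a leaf is simply a class label."""
    while isinstance(node, dict):
        branch = "left" if features[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node

def forest_predict(features, trees):
    """Bagging: return the modal (most common) class across an ensemble
    of decision trees, as described for the classification task above."""
    votes = Counter(tree_predict(features, tree) for tree in trees)
    return votes.most_common(1)[0][0]
```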

[0040] In one example, the image post-processing model 310 receives the set of pressure parameters and generates a matrix of parameters that correct a set of clinical values, such as areas and volumes of structures and regions of clinical interest, extracted from the image. In this implementation, the image post-processing model 310 can be trained on sets of pressure parameters taken from healthy patients assuming various stances and appropriate correction values for images taken of those patients. In one example, the correction can be applied via multiplication of the matrix of parameters to a vector comprised of the extracted clinical values. In another implementation, each of the image from the imager 304 and the set of pressure parameters is provided to the image post-processing model 310, which is implemented as a deep learning system, such as a convolutional neural network. In this implementation, the image post-processing model 310 can be trained on images of patients assuming various stances, with accompanying sets of pressure parameters representing those stances, paired with images of the patients in a natural or balanced stance. Example stances can include the natural stance, a stance with most or all of the patient’s weight on the right foot, a stance with most or all of the patient’s weight on the left foot, a stance with the patient leaning to the right, a stance with the patient leaning to the left, a stance with the patient leaning forward, a stance with the patient leaning backward, a stance simulating genu valgum, a stance simulating genu varum, a stance with flexion of one or both legs, a stance simulating ankle valgum, a stance simulating ankle varum, a stance with ankle inversion, and a stance with ankle eversion. The resulting output can be one of an image of the patient corrected for deviations from a natural or balanced stance or a set of clinical parameters for the image that are corrected for any deviation in the patient’s stance.
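The matrix correction described in the first example above reduces to a matrix-vector multiplication, sketched here with illustrative names and values (the actual correction matrix would be produced by the trained model, not hand-written):

```python
def correct_clinical_values(correction_matrix, clinical_values):
    """Illustrative sketch: apply a stance-correction matrix, generated
    by the post-processing model, to a vector of clinical values (e.g.,
    areas and volumes extracted from the image) via matrix-vector
    multiplication."""
    return [sum(m * v for m, v in zip(row, clinical_values))
            for row in correction_matrix]
```

For instance, a diagonal correction matrix would simply rescale each extracted clinical value independently to compensate for the measured stance deviation.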
[0041] In another example, the image post-processing model 310 can receive the image from the imager 304 and the set of pressure parameters and assign a continuous or categorical parameter representing the presence or absence of a disorder of the patient’s lower extremities. For example, the assigned parameter can be a continuous parameter that corresponds to a likelihood that the patient has some disorder of the lower extremities, a likelihood that the patient has a specific disorder, a severity of a disorder, a current or predicted response to treatment for a disorder, or a progression of an existing disorder. In another example, the image post-processing model 310 can assign a categorical parameter that corresponds to ranges of the likelihoods described above, the presence or predicted presence of a specific disorder, categories representing changes in symptoms associated with a disorder (e.g., “improving”, “stable”, “worsening”), or categories representing a current or predicted response to treatment. In this implementation, the image post-processing model 310 can be trained on images of the patient and sets of pressure parameters taken from patients with known status relative to a given disorder or set of disorders.

[0042] Disorders that can be evaluated in this implementation can include diabetic neuropathy, foot vascular insufficiency, diabetic wound, early stage neuropathies and vascular insufficiencies, rigid and flaccid flat foot, cavus foot, cavovarus deformities, peroneal tendon injuries and impingement, tibial tendon injuries, posterior tibial tendon insufficiency, progressive flat foot deformities, Charcot-Marie-Tooth deformities, Achilles tendon ruptures, subtle Lisfranc instability, syndesmotic instability, plantar fasciitis, plantar plate injuries, turf toe, L5 and S1 nerve injuries, drop foot including peroneal nerve injuries and disc herniation, 3-D alterations in the syndesmotic joint, 3-D alterations in the Lisfranc joint including the medial cuboid-second metatarsus and medial cuboid-intermediate cuboid, and valgus and varus deformities in the knee. The generated parameter can be stored in a non-transitory computer readable medium, for example, as part of a record in an electronic health records database, or displayed to a clinician at a display 312 and used to suggest a treatment or course of action to the patient.

[0043] In view of the foregoing structural and functional features described above, an example method will be better appreciated with reference to FIGS. 4-6. While, for purposes of simplicity of explanation, the example methods of FIGS. 4-6 are shown and described as executing serially, it is to be understood and appreciated that the present examples are not limited by the illustrated order, as some actions could in other examples occur in different orders, multiple times and/or concurrently from that shown and described herein. Moreover, it is not necessary that all described actions be performed to implement a method in accordance with the invention.

[0044] FIG. 4 illustrates one example of a method 400 for weight bearing imaging. At 402, a pressure distribution map is generated for a patient standing on a pressure sensor. The pressure distribution map has a first region associated with a first foot of a patient and a second region associated with a second foot of the patient. At 404, a value representing a stance of the patient is determined from a representation of the first region and a representation of the second region. In one example, the value representing the stance of the patient is a set of pressure parameters representing the stance of the patient. In another example, the value is a categorical value indicating whether the patient’s stance is balanced.

[0045] At 406, an image of a region of interest associated with the patient is captured at an imager while the patient is standing on the sensor interface. In one example, the image is captured at the imager automatically in response to the value representing the stance of the patient, for example, if the value indicates that the patient is in a balanced stance. At 408, the value representing the stance of the patient is provided to one of a display, the imager, and an image post-processing model. For example, each of a set of pressure parameters and the image can be provided to an image post-processing model to generate a corrected image at the image post-processing model, representing an expected appearance of the image if the stance of the patient had been balanced. Alternatively, each of the set of pressure parameters and the image can be provided to the image post-processing model to generate a continuous or categorical parameter representing a disorder of the lower extremities of the patient from the image of the region of interest and the set of pressure parameters. In another example, the image post-processing model can receive the set of pressure parameters and generate a matrix of correction values that can be applied to a set of clinical parameters extracted from the image of the region of interest to correct the set of clinical parameters for a deviation of the stance of the patient from a balanced stance. Regardless of the example, the output of the post-processing component, if any, can be displayed to a clinician with any of the pressure distribution map, the image, and the value or values generated at the predictive model.

[0046] FIG. 5 illustrates one example of a method 500 for capturing an image of a region of interest of a patient. At 502, a pressure distribution map representing the patient’s stance is generated. For example, the patient can be instructed to stand on a pressure mat or an optical pressure sensor such as that described in FIG. 2. At 504, the pressure distribution map is analyzed at a predictive model to determine if the patient is standing with a balanced stance. If the patient’s stance is not balanced (N), the patient is instructed to adjust their stance at 506, and the method returns to 502 to capture a new pressure distribution map. If the patient’s stance is balanced (Y), the method advances to 508, where an image of the region of interest is captured at an associated imager. In one example, a clinician is alerted that a balanced stance has been achieved, and the clinician operates the imager to capture the image of the region of interest. In another example, the imager receives a signal from the predictive model or an associated system indicating that the patient’s stance is balanced and captures the image automatically in response to the determination that the patient’s stance is balanced.
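The control flow of method 500 can be sketched as a simple acquisition loop. This is an illustrative outline only: the callables standing in for the pressure sensor, predictive model, imager, and patient-instruction step are assumed names, not interfaces defined by the specification.

```python
def acquire_weight_bearing_image(read_pressure_map, is_balanced,
                                 capture_image, instruct_patient,
                                 max_attempts=10):
    """Illustrative sketch of method 500: read a pressure distribution
    map (502), check for a balanced stance (504), instruct the patient
    and retry if unbalanced (506), and capture the image once a balanced
    stance is detected (508)."""
    for _ in range(max_attempts):
        pressure_map = read_pressure_map()
        if is_balanced(pressure_map):
            return capture_image()
        instruct_patient()
    return None  # balanced stance not achieved within the attempt limit
```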

[0047] An important adjunct in the diagnosis of subtle foot and ankle conditions is the physical examination and patient-reported outcomes. Often, patients might have certain symptoms such as pain, tenderness, decreased range of motion, and stiffness. Reporting these signs and symptoms can help clinicians pay more attention to specific locations on the images and assess them with greater care. While patients are undergoing weightbearing imaging studies, they are standing still for a few minutes. During this time, they can be asked about their symptoms and any specific complaints they might have. Moreover, distracting the patient from his or her posture and stance, and from the way he or she places the feet on the sensors, can help obtain less biased inputs that are closer to the patient’s natural stance. In other words, while the patient is being interviewed, he or she will be distracted from the way of standing and any compensatory posture he or she might use to reduce pain or discomfort, allowing him or her to assume a more natural stance. Accordingly, during imaging, the patient can be provided with access to a smart application that will automatically ask the patient about common symptoms he or she might have in the lower extremities.

[0048] FIG. 6 illustrates one example of a method 600 for capturing an image of a region of interest of a patient during acquisition of patient-reported outcome data. At 602, a patient is instructed to stand on a pressure sensor, such as a pressure mat or an optical pressure sensor such as that described in FIG. 2. At 604, the patient is given access to a smart application for collecting data about the patient’s symptoms in the lower extremities, for example, via the patient’s mobile device, a provided mobile device, or a display in the room with the imaging system. In one example, the questions ask the patient about various symptoms, which can include pain (e.g., scored 1-10), decreased range of motion (e.g., mild/moderate/severe), bruising (e.g., yes/no), tenderness to touch (e.g., yes/no), and changes in pain with activity (e.g., increases with activity/irrelevant to activity (constant)). At 606, a pressure distribution map for the patient and an image of the region of interest can be captured while the patient is distracted by interacting with the smart application, and thus presumably assuming a natural stance. The captured image and pressure distribution map can then be displayed to a clinician at 608.
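The symptom questionnaire described above could be encoded as a simple mapping from symptoms to allowed response formats, with validation of each answer. This is a minimal sketch with assumed key names; the specification does not define a data format for the smart application.

```python
# Hypothetical encoding of the questionnaire from the example above:
# each symptom maps to its set or range of valid responses.
SYMPTOM_QUESTIONS = {
    "pain": range(1, 11),                                # scored 1-10
    "decreased range of motion": {"mild", "moderate", "severe"},
    "bruising": {"yes", "no"},
    "tenderness to touch": {"yes", "no"},
    "pain change with activity": {"increases with activity", "constant"},
}

def validate_response(symptom, response):
    """Return True if the response is a valid answer for the symptom."""
    return response in SYMPTOM_QUESTIONS[symptom]
```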

[0049] FIG. 7 is a schematic block diagram illustrating an exemplary system 700 of hardware components capable of implementing examples of the systems and methods disclosed herein. The system 700 can include various systems and subsystems. The system 700 can be a personal computer, a laptop computer, a workstation, a computer system, an appliance, an application-specific integrated circuit (ASIC), a server, a server BladeCenter, a server farm, etc.

[0050] The system 700 can include a system bus 702, a processing unit 704, a system memory 706, memory devices 708 and 710, a communication interface 712 (e.g., a network interface), a communication link 714, a display 716 (e.g., a video screen), and an input device 718 (e.g., a keyboard, touch screen, and/or a mouse). The system bus 702 can be in communication with the processing unit 704 and the system memory 706. The additional memory devices 708 and 710, such as a hard disk drive, server, standalone database, or other non-volatile memory, can also be in communication with the system bus 702. The system bus 702 interconnects the processing unit 704, the memory devices 706-710, the communication interface 712, the display 716, and the input device 718. In some examples, the system bus 702 also interconnects an additional port (not shown), such as a universal serial bus (USB) port.

[0051] The processing unit 704 can be a computing device and can include an application-specific integrated circuit (ASIC). The processing unit 704 executes a set of instructions to implement the operations of examples disclosed herein. The processing unit can include a processing core.

[0052] The memory devices 706, 708, and 710 can store data, programs, instructions, database queries in text or compiled form, and any other information that may be needed to operate a computer. The memories 706, 708 and 710 can be implemented as computer-readable media (integrated or removable), such as a memory card, disk drive, compact disk (CD), or server accessible over a network. In certain examples, the memories 706, 708 and 710 can comprise text, images, video, and/or audio, portions of which can be available in formats comprehensible to human beings. Additionally or alternatively, the system 700 can access an external data source or query source through the communication interface 712, which can communicate with the system bus 702 and the communication link 714.

[0053] In operation, the system 700 can be used to implement one or more parts of a system for performing weight bearing imaging, such as those illustrated in FIGS. 1 -3. Computer executable logic for implementing the system resides on one or more of the system memory 706, and the memory devices 708 and 710 in accordance with certain examples. The processing unit 704 executes one or more computer executable instructions originating from the system memory 706 and the memory devices 708 and 710. The term "computer readable medium" as used herein refers to a medium that participates in providing instructions to the processing unit 704 for execution. This medium may be distributed across multiple discrete assemblies all operatively connected to a common processor or set of related processors.

[0054] Implementation of the techniques, blocks, steps, and means described above can be done in various ways. For example, these techniques, blocks, steps, and means can be implemented in hardware, software, or a combination thereof. For a hardware implementation, the processing units can be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, other electronic units designed to perform the functions described above, and/or a combination thereof.

[0055] Also, it is noted that the embodiments can be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart can describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations can be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in the figure. A process can correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.

[0056] Furthermore, embodiments can be implemented by hardware, software, scripting languages, firmware, middleware, microcode, hardware description languages, and/or any combination thereof. When implemented in software, firmware, middleware, scripting language, and/or microcode, the program code or code segments to perform the necessary tasks can be stored in a machine-readable medium such as a storage medium. A code segment or machine-executable instruction can represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a script, a class, or any combination of instructions, data structures, and/or program statements. A code segment can be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, and/or memory contents. Information, arguments, parameters, data, etc. can be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, ticket passing, network transmission, etc.

[0057] For a firmware and/or software implementation, the methodologies can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions can be used in implementing the methodologies described herein. For example, software codes can be stored in a memory. Memory can be implemented within the processor or external to the processor. As used herein the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other storage medium and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.

[0058] Moreover, as disclosed herein, the term "storage medium" can represent one or more memories for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine-readable mediums for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and/or various other storage mediums capable of storing, containing, or carrying instruction(s) and/or data.

[0059] What have been described above are examples. It is, of course, not possible to describe every conceivable combination of components or methodologies, but one of ordinary skill in the art will recognize that many further combinations and permutations are possible. Accordingly, the disclosure is intended to embrace all such alterations, modifications, and variations that fall within the scope of this application, including the appended claims. As used herein, the term "includes" means includes but not limited to, and the term "including" means including but not limited to. The term "based on" means based at least in part on.