Title:
PATIENT RISK STRATIFICATION BASED ON BODY COMPOSITION DERIVED FROM COMPUTED TOMOGRAPHY IMAGES USING MACHINE LEARNING
Document Type and Number:
WIPO Patent Application WO/2019/051358
Kind Code:
A1
Abstract:
A system and method are provided for determining patient risk stratification based on body composition derived from computed tomography images using segmentation with machine learning. The system may enable real-time segmentation for facilitating clinical application of body morphological analysis sets. A fully-automated deep learning system may be used for the segmentation of skeletal muscle cross sectional area (CSA). Whole-body volumetric analysis may also be performed. The fully-automated deep segmentation model may be derived from an extended implementation of a Fully Convolutional Network with weight initialization of a pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis.

Inventors:
DO SYNHO (US)
FINTELMANN FLORIAN (US)
LEE HYUNKWANG (US)
Application Number:
PCT/US2018/050176
Publication Date:
March 14, 2019
Filing Date:
September 10, 2018
Assignee:
MASSACHUSETTS GEN HOSPITAL (US)
International Classes:
G06F19/00
Foreign References:
US20100088264A12010-04-08
Other References:
LEE ET AL.: "Pixel-level deep segmentation: artificial intelligence quantifies muscle on computed tomography for body morphometric analysis", JOURNAL OF DIGITAL IMAGING, vol. 30, no. 4, 26 June 2017 (2017-06-26) - 1 August 2017 (2017-08-01), pages 487 - 498, XP036288400, Retrieved from the Internet [retrieved on 20181113], DOI: 10.1007/s10278-017-9988-z
Attorney, Agent or Firm:
LARSON, Michael, B. (US)
Claims:
CLAIMS

1. A method for generating a report of a risk stratification for a subject using medical imaging, the method comprising:

a) acquiring an image of a region of interest of the subject;

b) segmenting the image using a deep segmentation artificial intelligence network to generate a segmented image that distinguishes between at least two different materials in the image at a pixel-level;

c) calculating a morphological age of the subject using the segmented image; and

d) generating a report of risk stratification of the subject at least based upon the calculated morphological age.

2. The method of claim 1 wherein the artificial intelligence network is a fully convolutional neural network adapted to segment the image at a pixel-level.

3. The method of claim 1 further comprising measuring at least one of a Dice Similarity Coefficient (DSC) or a cross sectional area (CSA) error between a ground truth and a predicted segmentation of the image and selecting the artificial intelligence network from a plurality of networks based on a minimizing of the DSC or CSA.

4. The method of claim 3 wherein the artificial intelligence network is a fully convolutional neural network (FCN) and the plurality of networks include different layered FCNs including at least an FCN of 32 layers, an FCN of 16 layers, an FCN of 8 layers, an FCN of 4 layers, and an FCN of 2 layers.

5. The method of claim 1 wherein the image is a computed tomography (CT) image.

6. The method of claim 1 further comprising converting the image to a grayscale image by converting from Hounsfield Units (HU) of the image using a grayscale.

7. The method of claim 6 wherein the converting includes setting grayscale values below a selected window width to zero and setting grayscale values above a window width to a maximum (2^BIT - 1), wherein BIT represents the available number of bits per pixel, and wherein grayscale values within the window width are determined using a slope ((2^BIT - 1)/window width).

8. The method of claim 1 wherein calculating the morphological age of the subject includes quantifying a cross sectional area of the segmented image.

9. The method of claim 1 wherein generating the report of risk stratification of the subject includes determining at least one of a preoperative risk, a prediction of death, a tolerance to major surgery or chemotherapy, morbidity, mortality, or an expected length of hospital stay.

10. The method of claim 1 wherein the artificial intelligence network is trained with at least one labelled image.

11. The method of claim 1 wherein the segmenting is performed in real-time.

12. A system for generating a report of a risk stratification for a subject from medical images, the system comprising:

a computer system configured to:

i) acquire a medical image of a region of interest of the subject;

ii) generate a segmented image by segmenting the image using a pixel-level deep artificial intelligence network to distinguish between at least two different materials in the image;

iii) calculate a morphological age of the subject using the segmented image; and

iv) generate a report of risk stratification of the subject based upon the calculated morphological age.

13. The system of claim 12 wherein the artificial intelligence network is a fully convolutional neural network adapted to segment the image at a pixel-level.

14. The system of claim 12 further comprising measuring at least one of a Dice Similarity Coefficient (DSC) or a cross sectional area (CSA) error between a ground truth and a predicted segmentation of the image and selecting the artificial intelligence network from a plurality of networks based on a minimizing of the DSC or CSA.

15. The system of claim 14 wherein the artificial intelligence network is a fully convolutional neural network (FCN) and the plurality of networks include different layered FCNs including at least an FCN of 32 layers, an FCN of 16 layers, an FCN of 8 layers, an FCN of 4 layers, and an FCN of 2 layers.

16. The system of claim 12 wherein the image is a computed tomography (CT) image.

17. The system of claim 12 further comprising converting the image to a grayscale image by converting from Hounsfield Units (HU) of the image using a grayscale.

18. The system of claim 17 wherein the converting includes setting grayscale values below a selected window width to zero and setting grayscale values above a window width to a maximum (2^BIT - 1), wherein BIT represents the available number of bits per pixel, and wherein grayscale values within the window width are determined using a slope ((2^BIT - 1)/window width).

19. The system of claim 12 wherein calculating the morphological age of the subject includes quantifying a cross sectional area of the segmented image.

20. The system of claim 12 wherein generating the report of risk stratification of the subject includes determining at least one of a preoperative risk, a prediction of death, a tolerance to major surgery or chemotherapy, morbidity, mortality, or an expected length of hospital stay.

21. The system of claim 12 wherein the artificial intelligence network is trained with at least one labelled image.

22. The system of claim 12 wherein the computer system is configured to perform the segmenting in real-time.

Description:
PATIENT RISK STRATIFICATION BASED ON BODY COMPOSITION DERIVED FROM COMPUTED TOMOGRAPHY IMAGES USING MACHINE LEARNING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/555,771, filed on Sept. 8, 2017, and entitled "Patient risk stratification based on body composition derived from computed tomography images using machine learning."

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH

[0002] N/A

BACKGROUND

[0003] Deep learning has demonstrated enormous success in improving diagnostic accuracy, speed of image interpretation, and clinical efficiency for a wide range of medical tasks, ranging from interstitial pattern detection on chest CT to bone age classification on hand radiographs. In particular, a data-driven approach with deep neural networks has been actively utilized for several medical image segmentation applications, ranging from segmenting brain tumors on magnetic resonance images and organs of interest on CT to segmenting the vascular network of the human eye on fundus photography. This success is attributed to the capability of such networks to learn representative and hierarchical image features from data, rather than relying on manually engineered features based on knowledge from domain experts.

[0004] Image segmentation, also known as pixel-level classification, is the process of partitioning all pixels in an image into a finite number of semantically non-overlapping segments. In medical imaging, image segmentation has been considered a fundamental process for various medical applications including disease diagnosis, prognosis, and treatment. In particular, muscle segmentation on computed tomography (CT) for body composition analysis has emerged as a clinically useful risk stratification tool in oncology, radiation oncology, intensive care, and surgery. Cadaver studies have established muscle cross sectional area (CSA) at the level of the third lumbar vertebral body (L3) as a surrogate marker for lean body muscle mass. These studies applied semi-automated threshold-based segmentation with pre-defined Hounsfield Unit (HU) ranges to separate lean muscle mass from fat. However, segmentation errors require manual correction based on visual analysis by highly-skilled radiologists. As a result, semi-automated body composition analysis on large datasets is impractical due to the expense and time required. Thus, there is a role for automated tissue segmentation in order to bring body composition analysis into clinical practice.

[0005] Adipose tissue segmentation on CT images may be performed as fat can be thresholded with a consistent HU range (-190 to -30). Muscle segmentation is less straightforward because muscle and neighboring organs have overlapping HU values (-29 to 150). Few published strategies exist for automated muscle segmentation, and the existing approaches vary. One series of publications focused on segmentation of a single muscle (psoas major). Another studied segmentation using a deformable shape model of ideal muscle appearance, fitted with a statistical deformation model (SDM). Another study attempted to segment a 3D body CT data set into 7 segmentation classes, including fat and muscle, by classifying each class using random forest classifiers given 16 image features extracted from statistical information and filter responses. All of these attempts require sophisticated hand-crafted features to define knowledge-based parameters and select constraints for well-formed statistical shape and appearance models. As a result, these approaches do not generalize well.

[0006] Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of morphometric age is time intensive and requires highly trained experts.

SUMMARY OF THE DISCLOSURE

[0007] The present disclosure addresses the aforementioned drawbacks by providing a system and method for determining patient risk stratification based on body composition derived from computed tomography images using machine learning. The system may enable real-time segmentation of muscle and fat tissue, for example, facilitating clinical application of body morphological analysis sets. In one configuration, a fully-automated deep learning system for the segmentation of skeletal muscle cross sectional area (CSA) is proposed. In an example configuration, the system may use an axial computed tomography (CT) image taken at the third lumbar vertebra (L3) for the CSA. A fully-automated deep segmentation model may be derived from an extended implementation of a Fully Convolutional Network with weight initialization of a pre-trained model, such as an ImageNet model, followed by post processing to eliminate intramuscular fat for a more accurate analysis.

[0008] In one configuration, a method for generating a report of a risk stratification for a subject using medical imaging is provided. The method includes acquiring an image of a region of interest of the subject and segmenting the image using a deep segmentation artificial intelligence network to generate a segmented image that distinguishes between at least two different materials in the image at a pixel-level. A morphological age may be calculated for the subject using the segmented image and a report of risk stratification may be generated for the subject at least based upon the calculated morphological age.

[0009] In some configurations, the artificial intelligence network may be a fully convolutional neural network adapted to segment the image at a pixel-level. The method may include measuring at least one of a Dice Similarity Coefficient (DSC) or a cross sectional area (CSA) error between a ground truth and a predicted segmentation of the image and selecting the artificial intelligence network from a plurality of networks based on a minimizing of the DSC or CSA. The plurality of networks may include different layered FCNs including at least an FCN of 32 layers, an FCN of 16 layers, an FCN of 8 layers, an FCN of 4 layers, and an FCN of 2 layers. The image may be a computed tomography (CT) image. The image may be converted to a grayscale image by converting from Hounsfield Units (HU) of the image using a grayscale. Converting may include setting grayscale values below a selected window width to zero and setting grayscale values above a window width to a maximum (2^BIT - 1), where BIT represents the available number of bits per pixel, and where grayscale values within the window width are determined using a slope ((2^BIT - 1)/window width).

[0010] In some configurations, the method includes calculating a morphological age of the subject by quantifying a cross sectional area of the segmented image. A report of risk stratification of the subject may be generated by determining at least one of a preoperative risk, a prediction of death, a tolerance to major surgery or chemotherapy, morbidity, mortality, or an expected length of hospital stay. The artificial intelligence network may also be trained with at least one labelled image and the segmenting may be performed in real-time.

[0011] In one configuration, a system is provided for generating a report of a risk stratification for a subject from medical images. The system includes a computer system configured to: i) acquire a medical image of a region of interest of the subject; ii) generate a segmented image by segmenting the image using a pixel-level deep artificial intelligence network to distinguish between at least two different materials in the image; iii) calculate a morphological age of the subject using the segmented image; and iv) generate a report of risk stratification of the subject based upon the calculated morphological age.

[0012] The foregoing and other aspects and advantages of the present disclosure will appear from the following description. In the description, reference is made to the accompanying drawings that form a part hereof, and in which there is shown by way of illustration a preferred embodiment. This embodiment does not necessarily represent the full scope of the invention, however, and reference is therefore made to the claims and herein for interpreting the scope of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] FIGS. 1A and 1B illustrate an example CT system that can be configured to operate one configuration of the present disclosure.

[0014] FIG. 2 is a flowchart depicting one configuration of the present disclosure, in which a pixel-level deep segmentation Artificial Intelligence network is deployed.

[0015] FIG. 3 is a schematic for one configuration of the artificial intelligence network in the form of a fully convolutional neural network that may feature different layers.

[0016] FIG. 4 is a graph of one configuration for converting between Hounsfield Units and grayscale.

[0017] FIG. 5A is a graph of Dice Similarity Coefficient (DSC) from one example application of the present disclosure.

[0018] FIG. 5B is a graph of cross sectional area (CSA) error for the example with FIG. 5A.

[0019] FIG. 6A is a graph of Dice Similarity Coefficient (DSC) from one example application of the present disclosure.

[0020] FIG. 6B is a graph of cross sectional area (CSA) error for the example with FIG. 6A.

DETAILED DESCRIPTION

[0021] Muscle cross sectional area has been shown to correlate with a wide range of posttreatment outcomes. However, integration of muscle CSA measurements into clinical practice has been limited by the time required to generate these data. By dropping calculation time from 1800 seconds to less than one second, research into new applications for morphometric analysis may be drastically sped up. CT is an essential tool in the modern healthcare arena, with approximately 82 million CT examinations performed in the US in 2016. In lung cancer in particular, the current clinical paradigm has focused on lesion detection and disease staging with an eye toward treatment selection. However, accumulating evidence suggests that CT body composition data could provide objective biological markers to help lay the foundation for the future of personalized medicine. Aside from preoperative risk stratification for surgeons, morphometric data may be used to predict death in radiation oncology and medical oncology patients.

[0022] In one configuration, a system and method for determining patient risk stratification is provided based on body composition derived from computed tomography images using machine learning. The system may enable real-time segmentation of muscle and fat tissue, for example, facilitating clinical application of body morphological analysis sets. In one configuration, a fully-automated deep learning system for the segmentation of skeletal muscle cross sectional area (CSA) is proposed. In an example configuration, the system may use an axial computed tomography (CT) image taken at the third lumbar vertebra (L3) for the CSA. In another configuration, whole-body volumetric analysis may be performed. A fully-automated deep segmentation model may be derived from an extended implementation of a Fully Convolutional Network with weight initialization of a pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis.

[0023] In some configurations, varying window level (WL), window width (WW), and bit resolutions may be performed with post processing in order to better understand the effects of the parameters on the model performance. An example model, fine-tuned on 250 training images and ground-truth labels, achieves 0.93±0.02 Dice Similarity Coefficient (DSC) and 3.68±2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. The fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D data sets.

[0024] In some configurations, muscle segmentation maps may be used for ground-truth labeling during training, testing, and verification. Muscle segmentation maps may be manually tuned and reformatted into acceptable input for convolutional neural networks (CNN). The axial images and their corresponding color-coded images showing segmentation between muscle, internal space, and background may serve as original input data and ground truth labels, respectively. The main challenge for muscle segmentation is accurate differentiation of muscle tissue from neighboring organs due to their overlapping HU ranges. A boundary between organs and muscle may be manually drawn, or an automated routine may be used, setting the inside region as an additional segmentation class ("Inside") in an effort to train the neural network to learn distinguishing features of muscle for precise segmentation from adjacent organs. In one configuration, color-coded label images may be used and assigned to predefined label indices, including 0 (black) for "Background", 1 (red) for "Muscle", and 2 (green) for "Inside", before passing through CNNs for training, as illustrated in the sketch below.
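The following is a minimal sketch of how such color-coded label images could be converted into integer index maps for CNN training. The exact RGB values and the image file format are assumptions for illustration, not details specified by the disclosure.

```python
import numpy as np
from PIL import Image

# Hypothetical color coding for the ground-truth label images described above:
# black = "Background" (0), red = "Muscle" (1), green = "Inside" (2).
COLOR_TO_INDEX = {
    (0, 0, 0): 0,        # Background
    (255, 0, 0): 1,      # Muscle
    (0, 255, 0): 2,      # Inside
}

def label_image_to_index_map(path):
    """Convert a color-coded label image into a 2-D array of class indices."""
    rgb = np.array(Image.open(path).convert("RGB"))
    index_map = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, index in COLOR_TO_INDEX.items():
        mask = np.all(rgb == color, axis=-1)
        index_map[mask] = index
    return index_map
```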

[0025] Referring particularly now to FIGS. 1A and 1B, an example of an x-ray computed tomography ("CT") imaging system 100 is illustrated. The CT system includes a gantry 102, to which at least one x-ray source 104 is coupled. The x-ray source 104 projects an x-ray beam 106, which may be a fan-beam or cone-beam of x-rays, towards a detector array 108 on the opposite side of the gantry 102. The detector array 108 includes a number of x-ray detector elements 110. Together, the x-ray detector elements 110 sense the projected x-rays 106 that pass through a subject 112, such as a medical patient or an object undergoing examination, that is positioned in the CT system 100. Each x-ray detector element 110 produces an electrical signal that may represent the intensity of an impinging x-ray beam and, hence, the attenuation of the beam as it passes through the subject 112. In some configurations, each x-ray detector 110 is capable of counting the number of x-ray photons that impinge upon the detector 110. In some configurations the system can include a second x-ray source and a second x-ray detector (not shown) operable at a different energy level than x-ray source 104 and detector 110. Any number of x-ray sources and corresponding x-ray detectors operable at different energies may be used, or a single x-ray source 104 may be operable to emit different energies that impinge upon detector 110. During a scan to acquire x-ray projection data, the gantry 102 and the components mounted thereon rotate about a center of rotation 114 located within the CT system 100.

[0026] The CT system 100 also includes an operator workstation 116, which typically includes a display 118; one or more input devices 120, such as a keyboard and mouse; and a computer processor 122. The computer processor 122 may include a commercially available programmable machine running a commercially available operating system. The operator workstation 116 provides the operator interface that enables scanning control parameters to be entered into the CT system 100. In general, the operator workstation 116 is in communication with a data store server 124 and an image reconstruction system 126. By way of example, the operator workstation 116, data store server 124, and image reconstruction system 126 may be connected via a communication system 128, which may include any suitable network connection, whether wired, wireless, or a combination of both. As an example, the communication system 128 may include proprietary or dedicated networks, as well as open networks, such as the internet.

[0027] The operator workstation 116 is also in communication with a control system 130 that controls operation of the CT system 100. The control system 130 generally includes an x-ray controller 132, a table controller 134, a gantry controller 136, and a data acquisition system (DAS) 138. The x-ray controller 132 provides power and timing signals to the x-ray source 104 and the gantry controller 136 controls the rotational speed and position of the gantry 102. The table controller 134 controls a table 140 to position the subject 112 in the gantry 102 of the CT system 100.

[0028] The DAS 138 samples data from the detector elements 110 and converts the data to digital signals for subsequent processing. For instance, digitized x-ray data is communicated from the DAS 138 to the data store server 124. The image reconstruction system 126 then retrieves the x-ray data from the data store server 124 and reconstructs an image therefrom. The image reconstruction system 126 may include a commercially available computer processor, or may be a highly parallel computer architecture, such as a system that includes multiple-core processors and massively parallel, high-density computing devices. Optionally, image reconstruction can also be performed on the processor 122 in the operator workstation 116. Reconstructed images can then be communicated back to the data store server 124 for storage or to the operator workstation 116 to be displayed to the operator or clinician.

[0029] The CT system 100 may also include one or more networked workstations 142. By way of example, a networked workstation 142 may include a display 144; one or more input devices 146, such as a keyboard and mouse; and a processor 148. The networked workstation 142 may be located within the same facility as the operator workstation 116, or in a different facility, such as a different healthcare institution or clinic.

[0030] The networked workstation 142, whether within the same facility or in a different facility as the operator workstation 116, may gain remote access to the data store server 124 and/or the image reconstruction system 126 via the communication system 128. Accordingly, multiple networked workstations 142 may have access to the data store server 124 and/or image reconstruction system 126. In this manner, x-ray data, reconstructed images, or other data may be exchanged between the data store server 124, the image reconstruction system 126, and the networked workstations 142, such that the data or images may be remotely processed by a networked workstation 142. This data may be exchanged in any suitable format, such as in accordance with the transmission control protocol ("TCP"), the internet protocol ("IP"), or other known or suitable protocols.

[0031] Referring to FIG. 2, a flowchart for one configuration of a fully-automated deep segmentation system for image segmentation is depicted. The process begins with images being acquired at step 210. Images may be acquired with a CT system as described in FIGS. 1A and 1B. The acquired images may be converted from Hounsfield Units (HU) to grayscale at step 220. Labelled images, in which muscle and organ or "inside" pixels have been identified, may be used at step 230 to train the artificial intelligence segmentation network prior to processing the acquired images. A test image may be processed for verification or quality control purposes at 240, which may also be converted from HU to grayscale at 250. The test subset may be visually analyzed to identify any needed corrections. The trained pixel-level deep segmentation artificial intelligence network is used to segment the acquired images at step 260. An optimal combination of window settings and bit depth per pixel may be established with post processing to correct for any erroneous segmentation at step 270. "Optimal" may be based upon a desired level of maximizing image contrast or minimizing noise level, where the ability to distinguish between objects or between pixels that represent different materials (such as muscle or organs) is optimized for a user. A cross sectional area quantification is determined at step 280 based upon the processed images. This cross sectional area quantification data is used to calculate a morphological age for a patient at step 290, which may be reported or displayed for a user with a risk stratification for the patient at step 295. A minimal sketch of this workflow is shown below.
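The sketch below mirrors the steps of FIG. 2 at a high level. The callable arguments (to_grayscale, segment, remove_fat, quantify_csa, estimate_morphological_age) are hypothetical stand-ins supplied by the caller, not functions defined by the disclosure.

```python
def risk_stratification_pipeline(ct_slices, to_grayscale, segment, remove_fat,
                                 quantify_csa, estimate_morphological_age):
    """Minimal sketch of the FIG. 2 workflow; all callables are hypothetical stand-ins."""
    reports = []
    for hu_image in ct_slices:                    # step 210: acquired CT image (HU)
        gray = to_grayscale(hu_image)             # steps 220/250: HU -> grayscale
        mask = segment(gray)                      # step 260: pixel-level segmentation
        mask = remove_fat(mask, hu_image)         # step 270: post processing
        csa = quantify_csa(mask)                  # step 280: cross sectional area
        age = estimate_morphological_age(csa)     # step 290: morphological age
        reports.append({"muscle_csa_cm2": csa, "morphological_age": age})
    return reports                                # step 295: report for risk stratification
```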

[0032] In one configuration, the segmentation model may be based on a fully convolutional network (FCN). Segmentation may be performed on muscle, organ tissue, background, adipose tissue, and the like. The advantages of an FCN are, first, that a set of convolutional structures enables learning highly representative and hierarchical abstractions from whole image input without excessive use of trainable parameters, owing to the use of shared weights. Second, fine-tuning the trainable parameters of the FCN after weights are initialized with a pretrained model from a large-scale dataset allows the network to find the global optimum with fast convergence of the cost function when given a small training dataset. Third, the FCN may intentionally fuse different levels of layers by combining coarse semantic information and fine appearance information to maximize hierarchical features learned from earlier and later layers. However, a person of ordinary skill in the art will appreciate that other deep learning networks may be used, with several having been validated for natural image segmentation applications.

[0033] Error correction may be performed by subsequent FCN layers, such as to correct for incomplete muscle segmentation, incorrect organ segmentation and subcutaneous edema mischaracterized as muscle. Incomplete muscle segmentation may result from muscle being partly excluded from the appropriate region of the image. Incorrect organ segmentation may result from organs being partly included in the muscle region. Errors may be caused by overlapping HUs between muscle and adjacent organs and variable organ textural appearance. Edema may be mischaracterized when the radiographic appearance of edema, particularly in obese patients, has a similar HU range to muscle, leading to higher CSA than expected. Extensive edema tends to occur in critically ill patients, leading to potentially falsely elevated CSA in patients actually at higher risk for all interventions.

[0034] Referring to FIG. 3, FCN-32s, FCN-16s, and FCN-8s fuse coarse-grained and fine-grained features and upsample them at strides 32, 16, and 8, respectively, for further precision. Prior implementations of FCN describe further fusion of earlier layers beyond pool3; however, this may only result in minor performance gains. Since muscle segmentation requires finer precision than stride 8, the process may be extended to FCN-4s and FCN-2s by fusing earlier layers further to meet the required detail and precision, as sketched below.
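The following is a minimal sketch of this kind of progressive skip fusion extended down to stride 2, written in PyTorch for illustration. The backbone, channel counts, and bilinear upsampling are assumptions for the sketch, not the specific architecture claimed by the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipFusionHead(nn.Module):
    """Fuse backbone features from strides 2..32 into a full-resolution prediction."""

    def __init__(self, channels, num_classes=3):
        # `channels` lists feature channels at strides 2, 4, 8, 16, 32 (finest first).
        super().__init__()
        self.scores = nn.ModuleList(nn.Conv2d(c, num_classes, kernel_size=1) for c in channels)

    def forward(self, features):
        # `features` are backbone maps at strides 2, 4, 8, 16, 32 (finest first).
        fused = self.scores[-1](features[-1])                 # coarsest prediction (stride 32)
        for score, feat in zip(reversed(self.scores[:-1]), reversed(features[:-1])):
            fused = F.interpolate(fused, size=feat.shape[-2:], mode="bilinear",
                                  align_corners=False)        # upsample to the next finer level
            fused = fused + score(feat)                       # fuse finer-grained features
        return F.interpolate(fused, scale_factor=2, mode="bilinear",
                             align_corners=False)             # stride 2 -> full resolution

# Example with dummy feature maps for a 512x512 slice.
feats = [torch.randn(1, c, 512 // s, 512 // s)
         for c, s in zip([64, 128, 256, 512, 512], [2, 4, 8, 16, 32])]
logits = SkipFusionHead([64, 128, 256, 512, 512])(feats)      # -> shape (1, 3, 512, 512)
```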

[0035] Referring to FIG. 4, one configuration for converting from HU to grayscale is depicted. Medical images contain 12 to 16 bits per pixel, ranging from 4,096 to 65,536 shades of gray per pixel. A digital CT image has a dynamic range of 4,096 gray levels per pixel (12 bits per pixel), far beyond the limits of human perception; the human eye is only capable of differentiating roughly 32 shades of gray. Displays used for diagnostic CT interpretation support at most 8 bits per pixel, corresponding to 256 gray levels per pixel. To compensate for these inherent physiologic and technical limitations, images displayed on computer monitors can be adjusted by changing the window level (WL) 410 and window width (WW) 420, followed by assigning values outside the window range to the minimum (0) or maximum (2^BIT - 1) value 430. WL 410, the center of the window range, determines which HU values are converted into gray levels. WW determines how many HU values are assigned to each gray level, related to the slope ((2^BIT - 1)/WW) 440 of the linear transformation shown in FIG. 4. Bit depth per pixel (BIT) is the available number of bits per pixel and determines how many shades of gray are available per pixel. The optimal window setting configuration depends on the HUs of the region of interest (ROI) and the intrinsic image contrast and brightness. These settings are ultimately workarounds for the constraints of human perception. However, computer vision does not necessarily have these limitations.
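A minimal sketch of this linear HU-to-grayscale windowing is shown below. The default window setting (WL=40, WW=400) and 8-bit depth are example values drawn from the text, not fixed requirements.

```python
import numpy as np

def hu_to_grayscale(hu_image, wl=40, ww=400, bit=8):
    """Map HU values to gray levels with the linear windowing of FIG. 4: values below the
    window become 0, values above become 2^BIT - 1, and values inside the window are
    scaled with slope (2^BIT - 1) / WW."""
    hu_image = np.asarray(hu_image, dtype=np.float64)
    max_gray = 2 ** bit - 1
    lower = wl - ww / 2.0                              # bottom of the window range
    gray = (hu_image - lower) * (max_gray / ww)        # linear ramp within the window
    return np.clip(np.round(gray), 0, max_gray).astype(np.uint16)

# Example: lean muscle (-29 to 150 HU) falls inside the (40,400) window.
slice_hu = np.array([[-200, -29, 40, 150, 500]])
print(hu_to_grayscale(slice_hu))    # -> [[  0  84 128 198 255]]
```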

[0036] CT images have been converted to grayscale previously with the commonly-used HU range for the target tissue or organ without studying the effect of window settings on the performance of their algorithms. While recent work has identified that image quality distortions limit the performance of neural networks in computer vision systems, the effect of window setting and bit resolution on image quality is often overlooked in medical imaging machine learning. In one configuration, the effects of window and BIT settings on segmentation performance are accounted for by sweeping different combinations of window configurations and bit depth per pixel.

[0037] In one configuration, a comparison measure is used that utilizes the Dice Similarity Coefficient (DSC) to compare the degree of overlap between the ground truth segmentation mask and the FCN-derived mask, calculated as Eq. 1:

DSC = 2 x |ground_truth ∩ predict| / (|ground_truth| + |predict|)   (1)

An additional comparison measure may be the cross sectional area (CSA) error, calculated as Eq. 2. This represents a standardized measure of the percentage difference in area between the ground-truth segmentation mask and the FCN-derived mask:

CSA error = | |predict| - |ground_truth| | / |ground_truth|   (2)

where |·| denotes the area (pixel count) of a segmentation mask.
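The numerator of Eq. 2 is partially garbled in the source text and is reconstructed above from its description as a percentage area difference. Under that assumption, the two measures could be computed from binary masks roughly as follows:

```python
import numpy as np

def dice_similarity_coefficient(ground_truth, predict):
    """Eq. 1: DSC = 2|ground_truth ∩ predict| / (|ground_truth| + |predict|)."""
    ground_truth = np.asarray(ground_truth, dtype=bool)
    predict = np.asarray(predict, dtype=bool)
    intersection = np.logical_and(ground_truth, predict).sum()
    return 2.0 * intersection / (ground_truth.sum() + predict.sum())

def csa_error_percent(ground_truth, predict):
    """Eq. 2 (numerator reconstructed): | |predict| - |ground_truth| | / |ground_truth| x 100,
    with |.| taken as the mask pixel count."""
    gt_area = np.asarray(ground_truth, dtype=bool).sum()
    predict_area = np.asarray(predict, dtype=bool).sum()
    return abs(float(predict_area) - float(gt_area)) / float(gt_area) * 100.0
```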

[0038] Muscle tissue HUs do not overlap with adipose tissue HUs. As a result, a binary image of fat regions extracted using HU thresholding can be utilized to remove intramuscular fat incorrectly segmented as muscle.
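A minimal sketch of this post-processing step, assuming a binary muscle mask and the original HU image on the same pixel grid:

```python
import numpy as np

def remove_intramuscular_fat(muscle_mask, hu_image, fat_range=(-190, -30)):
    """Remove pixels from a predicted muscle mask whose HU values fall in the adipose
    range (-190 to -30 HU), as described above."""
    hu_image = np.asarray(hu_image)
    fat = (hu_image >= fat_range[0]) & (hu_image <= fat_range[1])
    return np.logical_and(np.asarray(muscle_mask, dtype=bool), ~fat)
```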

[0039] In one configuration, the segmentation model may be trained by a stochastic gradient descent (SGD) routine with momentum. A minibatch size may be selected to optimize training efficiency. In one example, a minibatch size of 8 may be selected, with a fixed learning rate of 10^-10, momentum of 0.9, and a weight decay of 10^-12. Training and validation losses may converge if the models are trained for a sufficient number of epochs, such as 500 epochs. The model may be selected to evaluate performance on a held-out test subset. The model may be trained using any appropriate system, such as a Devbox (NVIDIA Corp, Santa Clara, CA), which has four TITAN X GPUs with 12 GB of memory per GPU.
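A minimal training-loop sketch using these hyperparameters is shown below. The disclosure does not specify a framework; the PyTorch API, the cross-entropy loss, and the `model`/`train_dataset` arguments are assumptions for illustration.

```python
import torch

def train_segmentation_model(model, train_dataset, epochs=500):
    """Illustrative SGD training loop with minibatch size 8, learning rate 1e-10,
    momentum 0.9, and weight decay 1e-12, as quoted above."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-10, momentum=0.9, weight_decay=1e-12)
    loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, shuffle=True)
    criterion = torch.nn.CrossEntropyLoss()        # pixel-level classification loss
    for _ in range(epochs):                        # train until losses converge (~500 epochs)
        for images, labels in loader:              # grayscale slices and integer label maps
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```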

[0040] After training, segmentation may be performed using any appropriate hardware, such as a single TITAN X GPU. The time for performing segmentation may vary based upon the number of images and other parameters. For example, one segmentation took 25 seconds on average for 150 test images, corresponding to only 0.17 seconds per image. This represents a substantial time advantage over conventional segmentation routines. Accurate segmentation of muscle tissue by the semi-automated HU thresholding method requires roughly 20-30 minutes per slice on average. Other algorithms have been proposed that require between 1 and 3 minutes per slice. The ultra-fast deployment time of the present disclosure can allow for real-time segmentation in clinical practice. "Real-time" includes processing an image stack or set in a period of time, such as less than one minute, less than 30 seconds, or less than 25 seconds, that allows the segmentation results to be displayed along with the original images for a user.

[0041] In some configurations, descriptive data may be presented as percentages for categorical variables and as means with standard deviation (SD) for continuous variables. In one example, a two-tailed statistical test with the alpha level set at 0.05 may be used. A Student's t-test for normally distributed values may also be used. Dichotomous variables may be compared using any appropriate test, such as the Mann-Whitney U test, and ordinal variables may be compared using a test such as the Kruskal-Wallis test. Inter-analyst agreement may be quantified with intraclass correlation coefficients (ICC). All statistical analyses may be performed using any appropriate software package, such as the STATA software (version 13.0, StataCorp, College Station, Texas).
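Illustrative calls for the named tests are sketched below using SciPy; the sample values are placeholders rather than study data, and the choice of SciPy is an assumption (the text names STATA as one suitable package).

```python
from scipy import stats

# Placeholder values standing in for a continuous characteristic (e.g., weight in kg)
# in two groups being compared; not study data.
cohort = [77, 81, 69, 90, 74, 83, 78, 72]
subset = [79, 85, 70, 88, 76, 80]

t_stat, p_t = stats.ttest_ind(cohort, subset)       # Student's t-test, two-tailed
u_stat, p_u = stats.mannwhitneyu(cohort, subset)    # Mann-Whitney U test
h_stat, p_k = stats.kruskal(cohort, subset)         # Kruskal-Wallis test
significant = min(p_t, p_u, p_k) < 0.05             # alpha level of 0.05
```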

[0042] The fully convolutional neural network may be adapted to suit the desired level of granularity, or the extent to which the original image is subdivided. To identify the best performing fully convolutional network, multiple models may be trained and evaluated. In one example, five models of increasing granularity (FCN-32s, FCN-16s, FCN-8s, FCN-4s, and FCN-2s) were trained and evaluated using a test dataset at (40,400) and 8 bits per pixel by measuring the Dice Similarity Coefficient (DSC) and CSA error between ground truth and predicted muscle segmentation. These results may be compared to the HU thresholding method, such as by selecting HU ranging from -29 to 150 to represent lean muscle CSA.

[0043] In one configuration, segmenting adipose tissue may be performed in a similar manner as described above, since fat can be thresholded within a unique HU range [-190 to -30]. Previously, an outer-muscle boundary has been used to segment HU thresholded adipose tissue into Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT). However, precise boundary generation is dependent on accurate muscle segmentation. Using the current segmentation method with a subsequent adipose tissue thresholding system could quickly and accurately provide VAT and SAT values in addition to muscle CSA. Visceral adipose tissue has been implicated in cardiovascular outcomes and metabolic syndrome, and accurate fat segmentation would increase the utility of segmentation systems beyond cancer prognostication.
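A minimal sketch of how thresholded fat could be split into VAT and SAT using the muscle boundary described above. The hole-filling heuristic for the region enclosed by muscle is an assumption for illustration, not the boundary-generation method specified by the disclosure.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def split_adipose_tissue(hu_image, muscle_mask, fat_range=(-190, -30)):
    """Threshold adipose tissue in its HU range, then split it into visceral fat
    (inside the filled outer-muscle boundary) and subcutaneous fat (outside)."""
    hu_image = np.asarray(hu_image)
    fat = (hu_image >= fat_range[0]) & (hu_image <= fat_range[1])
    inside = binary_fill_holes(np.asarray(muscle_mask, dtype=bool))  # region enclosed by muscle
    vat = fat & inside            # visceral adipose tissue
    sat = fat & ~inside           # subcutaneous adipose tissue
    return vat, sat
```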

[0044] In an example study, 400 patients with an abdominal CT and lung cancer treated with either surgery or systemic therapy between 2007 and 2015 were identified in an institutional database. The surgical cohort (tumor stages I, II, and III) represented a cross section of all patients who underwent lung cancer resection, while the medical cohort comprised patients who received chemotherapy (tumor stage IV). Only examinations with intravenous contrast were included to ensure consistency of HU values. 400 examinations of 200 female and 200 male patients were included in the study, as detailed in Table 1. A test subset of 150 cases was created for evaluating network performance. Patient characteristics of the entire cohort (n = 400) and the test subset (n = 150) are shown in Table 1. Note that there is no statistically significant difference between the entire cohort and the test subset.

Table 1

Characteristic                     Entire cohort (n = 400)   Test subset (n = 150)   P value
Height, mean (SD), cm              168 (10)                  168 (10)                0.70
Weight, mean (SD), kg              77 (18)                   79 (19)                 0.16
Lung cancer treatment, no. (%)                                                       0.78
  Systemic therapy                 227 (57)                  86 (57)
  Surgery                          173 (43)                  64 (43)
Lung cancer stage, no. (%)                                                           0.84
  I                                102 (26)                  38 (25)
  II                               33 (8)                    10 (7)
  III                              38 (10)                   16 (11)
  IV                               227 (57)                  86 (57)

[0045] In the present example, images were acquired for routine clinical care with a tube current of approximately 360 mA and a tube voltage of approximately 120 kV. Scanners were calibrated daily using manufacturer-supplied phantoms to ensure consistency in attenuation measurements in accordance with manufacturer specifications. Full resolution 512x512 pixel diagnostic quality CT examinations were loaded without downsampling onto a research workstation running OsiriX (Pixmeo, Bernex, Switzerland). Segmentation maps of skeletal muscle CSA at the level of L3 were created on a single axial image using semi-automated threshold-based segmentation (thresholds -29 to +150 HU). Analyzed muscles included the transversus abdominis, external and internal abdominal obliques, rectus abdominis, erector spinae, psoas major and minor, and quadratus lumborum. A research assistant blinded to all other data created the segmentation maps, which were corrected as necessary by a fellowship-trained board-certified radiologist. A subset of the images was randomly selected and then re-analyzed by a second research assistant, with an inter-analyst agreement of 0.998. These muscle segmentation maps were used for ground-truth labeling during training, testing, and verification.

[0046] Differences in body habitus could represent a confounding feature if the network were presented with unbalanced examples, particularly because prior work has demonstrated that obese patients have higher image noise. To minimize this possibility, patients in the example study were categorized into eight groups based on gender and body mass index (BMI). We randomly selected 25 male and 25 female patients from each of the normal weight, overweight, and obese groups in order to create a subset of 150 cases to be withheld for testing. All underweight cases were included in the training dataset without being used for testing due to their small number. The other 250 cases were used for training. We chose the best model out of several trained models by selecting the last model after the loss converged over a sufficiently long period of training time, approximately 500 epochs. The best CNN was evaluated using the held-out test datasets to determine how much the predicted muscle regions overlap with the ground truth. In order to make a fair comparison, the same seed value was used for the random selection from the test dataset for each experiment.
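A minimal sketch of this kind of stratified hold-out selection is shown below. The case representation (dicts with 'gender' and 'bmi_category' keys) and the group labels are assumptions for illustration, not details specified by the disclosure.

```python
import random
from collections import defaultdict

def stratified_test_split(cases, n_per_group=25, seed=0):
    """Hold out 25 male and 25 female cases from each of the normal weight, overweight,
    and obese groups (150 cases total); keep everything else, including all underweight
    cases, for training."""
    rng = random.Random(seed)                       # fixed seed for a reproducible selection
    groups = defaultdict(list)
    for case in cases:
        groups[(case["gender"], case["bmi_category"])].append(case)
    test = []
    for gender in ("male", "female"):
        for category in ("normal", "overweight", "obese"):
            test.extend(rng.sample(groups[(gender, category)], n_per_group))
    test_ids = {id(case) for case in test}
    train = [case for case in cases if id(case) not in test_ids]
    return train, test
```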

[0047] In the current example study, to identify the best performing fully convolutional network, five models of increasing granularity (FCN-32s, FCN-16s, FCN-8s, FCN-4s, and FCN-2s) were trained and evaluated using the test dataset at (40,400) and 8 bits per pixel by measuring the Dice Similarity Coefficient (DSC) and CSA error between ground truth and predicted muscle segmentation. These results were compared to the HU thresholding method, selecting HU ranging from -29 to 150 to represent lean muscle CSA.

[0048] Subsequently, the performance of the best FCN model (FCN-2s) was compared across seven different combinations of window settings, expressed as (WL,WW), for each bit depth per pixel: (40,400), (40,240), (40,800), (40,1200), (-40,400), (100,400), and (160,400), at 8, 6, and 4 bit resolutions per pixel. The selected window ranges cover the HU range of lean tissue [-29 to 150] for a fair comparison to see whether partial image information loss degrades model performance. These window settings contain extreme window ranges as well as typical ones. For example, the window setting (40,240) has a range of -80 to 160 HU, which corresponds almost exactly to the HU range of lean muscle, while the configuration (40,1200) converts all HU values between -560 and 640 into shades of gray, resulting in low image contrast.

[0049] Referring to FIGS. 5A and 5B, the five different FCN models were compared to the previously described HU thresholding method. All numbers are reported as mean±SD. Performance was evaluated using the Dice Similarity Coefficient (DSC) in FIG. 5A and muscle CSA error in FIG. 5B. Even the most coarse-grained FCN model (FCN-32s) achieved a DSC of 0.79±0.06 and a CSA error of 18.27±9.77%, markedly better than the HU thresholding method without human tuning. Performance increased as features from more layers were fused. The most fine-grained FCN model achieved a mean DSC of 0.93 and a mean CSA error of 3.68%, representing a 59% improvement in DSC and an 80% decrease in CSA error when compared to the most coarse-grained model.

[0050] Referring to FIGS. 6A and 6B, results of the systematic experiment comparing seven different combinations of window settings for each bit depth per pixel are presented, showing the performance of FCN-2s when input images are generated with different window settings (WL,WW) for each bit depth per pixel (BIT). The selected window settings were (40,400), (-40,400), (100,400), (160,600), (40,240), (40,800), and (40,1200). DSC, shown in FIG. 6A, and CSA error, shown in FIG. 6B, were not meaningfully influenced by changes in window ranges as long as 256 gray levels per pixel (8 bits) were available. When 6-bit depth per pixel was used, performance was similar to that of the 8-bit cases. However, model performance deteriorated when 8-bit pixels were compressed to 4-bit pixels.

[0051] The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.