Title:
AUTOMATIC IDENTIFICATION AND SEGMENTATION OF TARGET REGIONS IN PET IMAGING USING DYNAMIC PROTOCOL AND MODELING
Document Type and Number:
WIPO Patent Application WO/2017/106837
Kind Code:
A1
Abstract:
A continuous dynamic positron emission tomography (PET) assembly for imaging a target region of a subject. The assembly includes a radioactive tracer isotope injector configured to administer a radioactive isotope into the subject and a scintillator crystal configured to absorb ionizing radiation from the subject and emit scintillator light. The scintillator crystal undertakes the absorption substantially at the same time of the start of administering the radioactive isotope. The assembly also includes a photo detector in communication with the scintillator crystal, wherein the photodetector is configured to detect the emitted scintillation light as input and provide electrical signals as output. The assembly further includes a signal digitizing circuitry converting the output electrical signals into digital data. Moreover, the assembly includes a processor configured to receive the digital data and implement a model to convert the digital data into a three dimensional, tomographic image reconstruction.

Inventors:
LI YINLIN (US)
KUNDU BIJOY (US)
MAJEWSKI STANISLAW (US)
Application Number:
PCT/US2016/067535
Publication Date:
June 22, 2017
Filing Date:
December 19, 2016
Assignee:
UNIV VIRGINIA PATENT FOUNDATION (US)
International Classes:
A61B6/03; G01T1/00; G01T1/20; G06T15/00; G06T19/00
Domestic Patent References:
WO1993006560A11993-04-01
WO2015022354A12015-02-19
Foreign References:
US8000773B22011-08-16
US7490085B22009-02-10
US8909325B22014-12-09
US20110309256A12011-12-22
Attorney, Agent or Firm:
DECKER, Robert J. (US)
Claims:
CLAIMS

We claim:

1. A continuous dynamic positron emission tomography (PET) assembly for imaging a target region of a subject, said assembly comprising:

a radioactive tracer isotope injector configured to administer a radioactive isotope into the subject;

a scintillator crystal configured to absorb ionizing radiation from the subject and emit scintillator light, wherein said scintillator crystal undertakes the absorption substantially at the same time of the start of administering the radioactive isotope;

a photo detector in communication with said scintillator crystal, wherein the photodetector is configured to detect the emitted scintillation light and provide electrical signals as output;

a signal digitizing circuitry converting the output electrical signals into digital data; and

a processor configured to receive the digital data and implement a model to convert the digital data into a three dimensional, tomographic image reconstruction.

2. The PET assembly of claim 1, wherein said model comprises an artificial Neural Network (ANN).

3. The PET assembly of claim 1, wherein undertaking the absorption, by the scintillator crystal, substantially at same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering;

about 0 to 30 minutes after the start of administering;

about 25 to 30 minutes after the start of administering;

about 1 minute to about 10 minutes after the start of administering; or

about 60 minutes after the start of administering.

4. The PET assembly of claim 1, wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

5. The PET assembly of claim 1, wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

6. The PET assembly of claim 5, wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

7. A method for continuous dynamic positron emission tomography (PET) imaging of a target region of a subject, said method comprising:

administering a radioactive tracer isotope into the subject;

absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope;

detecting the emitted scintillation light and providing electrical signals as output;

converting the output electrical signals into digital data; and

receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

8. The PET imaging method of claim 7, wherein said model comprises an artificial Neural Network (ANN).

9. The PET imaging method of claim 7, wherein undertaking the absorption, by the scintillator crystal, substantially at same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering; or

about 60 minutes after the start of administering.

10. The PET imaging method of claim 7, wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

11. The PET imaging method of claim 7, wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

12. The PET imaging method of claim 11, wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

13. A non-transitory computer readable medium having computer program logic that when implemented enables one or more processors in a positron emission tomography (PET) assembly to generate continuous dynamic positron emission tomography (PET) images of a target region of a subject, said computer program logic comprising:

administering a radioactive tracer isotope into the subject;

absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope;

detecting the emitted scintillation light and providing electrical signals as output;

converting the output electrical signals into digital data; and

receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

14. The non-transitory computer readable medium of claim 13, wherein said model comprises an artificial Neural Network (ANN).

15. The non-transitory computer readable medium of claim 13, wherein undertaking the absorption, by the scintillator crystal, substantially at same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering; or

about 60 minutes after the start of administering.

16. The non-transitory computer readable medium of claim 13, wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

17. The non-transitory computer readable medium of claim 13, wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

18. The non-transitory computer readable medium of claim 17, wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

Description:
Automatic Identification and Segmentation of Target Regions in PET Imaging Using Dynamic Protocol and Modeling

RELATED APPLICATIONS

The present application claims benefit of priority under 35 U.S.C § 119(e) from U.S. Provisional Application Serial No. 62/269,628, filed December 18, 2015, entitled "Dynamic TOFPET Breast Imager with Full Breast View and Related Methods Thereof," U.S. Provisional Application Serial No. 62/362,914, filed July 15, 2016, entitled "Automatic Identification and Segmentation of Breast Tumors in PET Imaging Using Dynamic Protocol and Artificial Neural Network (ANN)," and U.S. Provisional Application Serial No. 62/435,645, filed December 16, 2016, entitled "Dynamic TOFPET Breast Imager with Full Breast View and Related Methods Thereof;" the disclosures of which are hereby incorporated by reference herein in their entirety.

FIELD OF THE INVENTION

The present invention relates generally to the field of automatic identification and segmentation of target region images in positron emission tomography (PET) imaging, whereby the imaging commences substantially at the same time of the start of administering the radioactive isotope to the subject, and more particularly while using a dynamic protocol and modeling.

INTRODUCTION

According to the American Cancer Society, breast cancer is the most common cancer among American women. About 1 in 8 (12%) women in the US will develop invasive breast cancer during their lifetime. In 2015, it was estimated that among U.S. women there would be 231,840 new cases of invasive breast cancer and 40,290 breast cancer deaths. Breast cancers that are found because they are palpable tend to be larger and are more likely to have already spread beyond the breast. Earlier-stage detection of breast cancer increases the chances of cure and survival. The mammogram is the main test recommended by the American Cancer Society to find breast cancer early, before it starts to cause symptoms. Still, some breast cancers are not found by mammogram, because radiologists can reach a negative diagnosis either under the burden of the large number of mammograms to be analyzed or when coping with inconclusive cases while inspecting noise-contaminated images. In light of the above, a need arises for automatic diagnosis of breast cancer. The present inventors submit that automatic diagnosis of breast cancer could improve the accuracy and speed of disease screening based on mammographic images while reducing the workload. Similarly, automatic imaging of other target regions could improve the accuracy, speed, and workload of other diagnoses. For that purpose, segmentation of breast tumors (or other target regions) can play a significant role.

OVERVIEW

An aspect of an embodiment of the present invention provides, among other things, a method, system, and computer readable medium comprising an Artificial Neural Network (ANN) combined with time activity curves (TAC) of each voxel to segment breast tumors (or other target regions) in vivo using the dedicated MAMmography with Molecular Imaging (MAMMI) positron emission tomography (PET) scanner.

An aspect of an embodiment includes the capability to segment the tumor from the background of healthy tissue, reduce the image noise, and provide a reliable, accurate, and efficient tumor segmentation tool that could be used in a clinic. In an approach of an embodiment, the method first obtained the time activity curves (TACs) of typical pixels in tumor and non-tumor regions, from regions of interest (ROI) drawn on positron emission tomography (PET) images, using an averaging filter and least-squares fitting to improve the signal-to-noise ratio (SNR). Second, the ANN was established and trained using the normalized feature vectors obtained from the known typical TACs. Then the trained ANN was used to segment the tumor regions of other PET images, from the same patient or other patients, after data filtering and smoothing. The related experimental results show good performance of the ANN with the back-propagation (BP) algorithm in image segmentation of breast tumor and background.
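
By way of a rough, non-limiting illustration of the workflow summarized above (and not the patented implementation itself), the following Python sketch builds per-pixel TAC feature vectors, trains a small back-propagation network on labeled tumor and non-tumor curves, and classifies every pixel of a test slice. The synthetic uptake curves, array shapes, and the scikit-learn classifier are assumptions chosen for illustration only.

```python
# Illustrative sketch only: TAC-based pixel classification with a small
# back-propagation network (scikit-learn MLP); all data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

def pixel_tacs(frames):
    """frames: (n_frames, ny, nx) dynamic PET slice stack.
    Returns (n_pixels, n_frames): one time activity curve per pixel."""
    return frames.reshape(frames.shape[0], -1).T

def normalize(tacs):
    """Scale each TAC to unit peak so the feature vector encodes curve shape."""
    return tacs / np.maximum(tacs.max(axis=1, keepdims=True), 1e-12)

rng = np.random.default_rng(0)
t = np.linspace(1.0, 25.0, 13)                                # 13 frame mid-times (min)

# Synthetic "typical pixel" TACs standing in for clinically confirmed ROIs.
tumor = 1.0 - np.exp(-0.20 * t) + 0.05 * rng.standard_normal((40, t.size))
backgr = 0.3 * (1.0 - np.exp(-0.05 * t)) + 0.05 * rng.standard_normal((40, t.size))
X = normalize(np.vstack([tumor, backgr]))
y = np.array([1] * 40 + [0] * 40)                             # 1 = tumor, 0 = non-tumor

ann = MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",
                    max_iter=3000, random_state=0)
ann.fit(X, y)

# Segment a small synthetic test slice pixel by pixel.
ny, nx = 8, 8
test = np.zeros((t.size, ny, nx)) + 0.3 * (1.0 - np.exp(-0.05 * t))[:, None, None]
test[:, 2:5, 2:5] = (1.0 - np.exp(-0.20 * t))[:, None, None]  # implant a "lesion"
mask = ann.predict(normalize(pixel_tacs(test))).reshape(ny, nx)
print(mask)
```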

An aspect of an embodiment of the present invention provides, among other things, a continuous dynamic positron emission tomography (PET) assembly for imaging a target region of a subject. The assembly may comprise: a radioactive tracer isotope injector configured to administer a radioactive isotope into the subject; a scintillator crystal configured to absorb ionizing radiation from the subject and emit scintillator light, wherein said scintillator crystal undertakes the absorption substantially at the same time of the start of administering the radioactive isotope; a photo detector in communication with said scintillator crystal, wherein the photodetector is configured to detect the emitted scintillation light as input and provide electrical signals as output; a signal digitizing circuitry converting the output electrical signals into digital data; and a processor configured to receive the digital data and implement a model to convert the digital data into a three dimensional, tomographic image reconstruction.

The model of an embodiment of the PET assembly may comprise an Artificial Neural Network (ANN).

In an embodiment of the PET assembly, the undertaking of the absorption, by the scintillator crystal, substantially at the same time of the start of administering the radioactive isotope may include at least one of any combination of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering;

about 0 to 30 minutes after the start of administering;

about 25 to 30 minutes after the start of administering;

about 1 minute to about 10 minutes after the start of administering; or

about 60 minutes after the start of administering.

It is noted that the start time may be greater than 60 minutes.

In an embodiment of the PET assembly, the target region may comprise: breast, brain, head/neck, heart, liver, prostate, or extremities, etc.

In an embodiment of the PET assembly, the implementing the model may comprise using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

In an embodiment of the PET assembly, the ANN may be trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

The term "about," as used herein, means approximately, in the region of, roughly, or around. When the term "about" is used in conjunction with a numerical range, it modifies that range by extending the boundaries above and below the numerical values set forth. In general, the term "about" is used herein to modify a numerical value above and below the stated value by a variance of 10%. In one aspect, the term "about" means plus or minus 10% of the numerical value of the number with which it is being used. Therefore, about 50% means in the range of 45%-55%. Numerical ranges recited herein by endpoints include all numbers and fractions subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.90, 4, 4.24, and 5). It is also to be understood that all numbers and fractions thereof are presumed to be modified by the term "about."

An aspect of an embodiment of the present invention provides, among other things, a method for continuous dynamic positron emission tomography (PET) imaging of a target region of a subject. The method may comprise: administering a radioactive tracer isotope into the subject; absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope; detecting the emitted scintillation light and providing electrical signals as output; converting the output electrical signals into digital data; and receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

An aspect of an embodiment of the present invention provides, among other things, a non-transitory computer readable medium having computer program logic that when implemented enables one or more processors in a positron emission tomography (PET) assembly to generate continuous dynamic positron emission tomography (PET) images of a target region of a subject. The computer program logic may comprise: administering a radioactive tracer isotope into the subject; absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope; detecting the emitted scintillation light and providing electrical signals as output; converting the output electrical signals into digital data; and receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

Any of the components or modules referred to with regards to any of the present invention embodiments of the device discussed herein, may be integrally or separately formed with one another. Further, redundant functions or structures of the components or modules may be implemented.

Any of the components or modules may be a variety of widths and lengths as desired or required for operational purposes.

It should be appreciated that various sizes, dimensions, contours, rigidity, shapes, flexibility and materials of any of the components or portions of components in the various embodiments of the device discussed throughout may be varied and utilized as desired or required. Similarly, locations and alignments of the various components may vary as desired or required. Moreover, modes and mechanisms for connectivity or interchangeability may vary.

It should be appreciated that the device and related components of the device discussed herein may take on all shapes along the entire continual geometric spectrum of manipulation of x, y and z planes to provide and meet the environmental, and structural demands and operational requirements. Moreover, locations, connections and alignments of the various components may vary as desired or required.

These and other objects, along with advantages and features of various aspects of embodiments of the invention disclosed herein, will be made more apparent from the description, drawings and claims that follow.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other objects, features and advantages of the present invention, as well as the invention itself, will be more fully understood from the following description of preferred embodiments, when read together with the accompanying drawings.

The accompanying drawings, which are incorporated into and form a part of the instant specification, illustrate several aspects and embodiments of the present invention and, together with the description herein, serve to explain the principles of the invention. The drawings are provided only for the purpose of illustrating select embodiments of the invention and are not to be construed as limiting the invention.

FIG. 1 provides reconstructed images showing Tumor region, Non-tumor region 1, and Non-Tumor 2 from experimental Dataset 1 of Frame 14, Slice 55.

FIG. 2 graphically represents the uptake value, measured in micro Ci/cc, of the region or target over time in minutes from experimental Dataset 1.

FIG. 3 provides reconstructed images showing Tumor region, Non-tumor region 1, and Non-Tumor 2 from experimental Dataset 2 of Frame 14, Slice 91.

FIG. 4 graphically represents the uptake value measured in micro Ci/cc of the region or target over time in minutes from experimental Dataset 2.

FIG. 5 is a schematic diagram of one neuron structure of ANN.

FIGS. 6-7 provide reconstructed images showing the segmentation result of Dataset 2 with training on Dataset 1. In particular, FIG. 6 shows segmented images of slice 79 and FIG. 7 shows segmented images of slice 91. The regions surrounded by the white contours were identified as tumors.

FIGS. 8-9 provide reconstructed images showing the segmentation result of Dataset 1 with training on Dataset 2. In particular, FIG. 8 shows segmented images of slice 43 and FIG. 9 shows segmented images of slice 55. The regions surrounded by the white contours were identified as tumors.

FIG. 10 is a schematic diagram of hardware on which an aspect of an embodiment of the present invention can be implemented.

FIG. 11 provides a schematic illustration of an embodiment of a positron emission tomography (PET) assembly for an imaging system.

FIG. 12 provides a flow chart of a technique for using an embodiment of a positron emission tomography (PET) assembly for an imaging system.

FIG. 13 illustrates a block diagram of an example machine upon which one or more embodiments (e.g., discussed methodologies) can be implemented (e.g., run).

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

An aspect of an embodiment of the present invention provides, among other things, a system and method for applying an Artificial Neural Network (ANN) algorithm to dynamic positron emission tomography (PET) imaging of a target region of a subject. An aspect of an embodiment of the present invention, for example, demonstrates applying (1) a novel clinical protocol (imaging from the time of imaging agent injection) and (2) an implementation of the Artificial Neural Network (ANN) algorithm. The present inventors have shown that, by providing a system and method applying (1) the novel clinical protocol (e.g., imaging from the time of imaging agent injection) and (2) the implementation of the Artificial Neural Network (ANN) algorithm, segmentation of cancerous breast lesions from the background, based on the time activity curves (TAC) obtained from dynamic FDG breast PET imaging, was possible as early as 15-20 minutes after fluorodeoxyglucose (FDG) administration to the patient.

In an embodiment, the system and method include dynamic imaging in which the images were acquired from the point of injection up to 25-30 minutes post injection. The data are acquired in "list-mode", which means "time-stamped" data. When the system and method reconstruct the image, the first image may correspond, for example, to the first 10 seconds of data, and so on.
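
As a hedged illustration of what "list-mode" acquisition implies for reconstruction, the sketch below bins time-stamped events into consecutive frames (e.g., a first image from the first 10 seconds of data). The event arrays and frame boundaries are hypothetical.

```python
# Hedged sketch: binning "list-mode" (time-stamped) events into dynamic frames,
# e.g. a first image from the first 10 seconds of data. All data are hypothetical.
import numpy as np

def bin_list_mode(event_times_s, frame_edges_s):
    """event_times_s: 1-D array of per-event time stamps (seconds after injection).
    frame_edges_s: increasing frame boundaries, e.g. [0, 10, 20, ...].
    Returns one index array per frame, selecting that frame's events."""
    frames = []
    for start, stop in zip(frame_edges_s[:-1], frame_edges_s[1:]):
        in_frame = (event_times_s >= start) & (event_times_s < stop)
        frames.append(np.nonzero(in_frame)[0])
    return frames

rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 60.0, size=5000))   # fake event time stamps (seconds)
edges = np.arange(0.0, 61.0, 10.0)                   # six 10-second frames
per_frame = bin_list_mode(times, edges)
print([len(idx) for idx in per_frame])               # events available per frame
```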

In an approach, for example, the scans were performed on the European Union- and FDA-approved dedicated breast PET imager, MAMmography with Molecular Imaging (MAMMI). The present inventors set forth that there is initial strong evidence that this is indeed a powerful adjunct diagnostic technique and system, in addition to standard static images, improving separation of cancer vs. benign vs. normal tissue.

An aspect of an embodiment of the present invention provides, among other things, a method, system, and computer readable medium for an automatic identification and segmentation of target regions in PET imaging using dynamic protocol and modeling algorithms.

An aspect of an embodiment of the present invention provides, among other things, a method, system, and computer readable medium for an automatic identification and segmentation of breast tumors in PET imaging using dynamic protocol and artificial neural network (ANN) algorithms.

EXAMPLES

Practice of an aspect of an embodiment (or embodiments) of the invention will be still more fully understood from the following examples and experimental results, which are presented herein for illustration only and should not be construed as limiting the invention in any way.

Example and/or Experimental Results Set No. 1

PET Imaging:

In an embodiment, the breast-dedicated MAMMI PET has a ring shape composed of 12 detector heads. Each module contains a trapezoidal monolithic LYSO scintillator of 12 mm thickness, black painted and coupled to an H8500 PSPMT. The transaxial field of view (FOV) is about 170 mm and the static axial FOV is 40 mm. The FDA-approved MAMMI PET is used in several clinical institutions not only for early breast cancer diagnosis but also for therapy monitoring.

Patients were instructed to fast for at least 6 hours before an injection of 18F-FDG. An intravenous cannula was placed in the back of the hand, preferably in the arm opposite to the affected breast, and a blood sample was drawn before the administration to confirm acceptable blood glucose; blood glucose levels were required to be lower than 10 mmol/l. Depending on the body mass index (BMI), a dose varying from 215 to 230 MBq was administered. Tracer injection was performed with the patient in prone position and with the affected breast hanging, centered in the aperture of the scanner. Vertical MAMMI-PET ring positioning for the dynamic scan was based on previously performed mammography or MRI. Only one ring position (±40 mm) is possible in the dynamic mode. For selective visualization of the breast lesions, ten 60-second frames and three 300-second frames after radiotracer injection were reconstructed initially.
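
For illustration only, the frame schedule described above (ten 60-second frames followed by three 300-second frames) can be turned into frame edges and mid-times, which later serve as the time axis of the TACs; the code below is a minimal sketch of that bookkeeping.

```python
# Illustrative bookkeeping for the dynamic frame schedule described above:
# ten 60-second frames followed by three 300-second frames.
import numpy as np

durations_s = np.array([60.0] * 10 + [300.0] * 3)          # thirteen dynamic frames
edges_s = np.concatenate(([0.0], np.cumsum(durations_s)))  # frame boundaries (seconds)
mid_times_min = (edges_s[:-1] + edges_s[1:]) / 2.0 / 60.0  # TAC time axis (minutes)

print(edges_s[-1] / 60.0)            # total dynamic scan length: 25.0 minutes
print(np.round(mid_times_min, 2))    # frame mid-times used to plot/fit the TACs
```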

Feature Extraction of Time Activity Curves:

A time activity curve (TAC) vector, consisting of the tracer concentration of a pixel on a slice in the order of the scanning frame sequence, was first generated. In order to minimize the image noise, an averaging filter was used, summing the value of each pixel over approximately 10 slices, to improve the signal-to-noise ratio (SNR). After the curves were fitted using a least-squares method, the TAC of each pixel was generated for the whole scan duration. Each TAC reflects the rate of FDG uptake of each pixel. After filtering and smoothing the curve, the feature vector P of each pixel was formed from the normalized uptake value at every frame time of the TAC; the length of P equals the frame number, which determined the number of inputs of the ANN.
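
A minimal sketch of this preprocessing is shown below, assuming a dynamic volume of shape (frames, slices, rows, columns); the use of a low-order polynomial for the least-squares fit is an assumption, as the filing does not specify the fitted model.

```python
# Hedged sketch of TAC preprocessing: average each pixel over neighbouring slices
# to raise SNR, least-squares fit the curve (polynomial form is an assumption),
# then normalize to build the feature vector P for the ANN.
import numpy as np

def slice_averaged_tac(volume, z, y, x, half_width=5):
    """volume: (n_frames, nz, ny, nx) dynamic PET volume.
    Averages pixel (y, x) over ~10 slices centred on slice z for every frame."""
    z0, z1 = max(0, z - half_width), min(volume.shape[1], z + half_width)
    return volume[:, z0:z1, y, x].mean(axis=1)

def fit_and_normalize(tac, frame_times, degree=3):
    """Least-squares polynomial fit, evaluated at the frame times, peak-normalized."""
    coeffs = np.polyfit(frame_times, tac, degree)
    smoothed = np.polyval(coeffs, frame_times)
    return smoothed / max(smoothed.max(), 1e-12)

# Hypothetical usage on a synthetic dynamic volume.
rng = np.random.default_rng(2)
frame_times = np.linspace(1.0, 25.0, 13)
volume = rng.random((13, 40, 64, 64))
feature_vector = fit_and_normalize(
    slice_averaged_tac(volume, z=20, y=32, x=32), frame_times)
print(feature_vector.shape)   # length equals the number of frames (ANN inputs)
```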

There were two datasets of PET images obtained from two different clinical cases. After drawing ROIs and preprocessing the data with filtering, smoothing and normalization, pixel TACs from tumor and non-tumor regions of the two datasets were extracted and are shown in FIGS. 1-4. In FIG. 2, the pixel TACs from the tumor region (solid line denoted as "tumor") have higher values of FDG uptake over time and the curves have relatively steep upslopes. But in non-tumor regions (dotted line denoted as "non-tumor 1" and dashed line denoted as "non-tumor 2"), the TACs have relatively gentler slopes and some of them even have a downward trend (as seen with the dashed line denoted as "non-tumor 2"). In fact, the FDG uptake rates of tumor and non-tumor regions are clearly different. But in the other dataset, shown in FIG. 4, the curves for non-tumor 1 (dotted line denoted as "non-tumor 1") overlap with those of the tumor region (solid line denoted as "tumor"), and it is difficult to distinguish the tumor and non-tumor regions with a single coefficient of FDG uptake.

However, the present inventors observe slight differences in the TAC amplitude, peak and valley values, standard deviations, and frame distribution. The features of tumor or non-tumor regions represented by the TACs cannot simply be quantified by one or a few parameters. So the feature vector, which signified the relationship between time and FDG uptake, was extracted from the TACs and used as the input to the BP-ANN to segment the tumor region.

For an aspect of an embodiment of the proposed method, two kinds of data are needed: training data and test data. Training data consists of the typical pixels of tumor and non-tumor regions as confirmed by clinical diagnosis. Test data consists of, for example, pixel TACs from other 2D slices of the reconstructed 3D PET image of the breast.

Artificial Neural Network:

An ANN is a mathematical model emulating a biological neural network. Considering the intended purpose of the ANN, feed-forward neural networks with error back-propagation (BP) as a learning rule have extensive application and substantial advantages in comparison to other structures. A BP neural network, applied extensively in pattern recognition problems, has an input layer, an output layer, and at least one hidden layer.

A neuron of an ANN has a number of inputs P ([x1, x2, ..., x15]) and an output (Y). The input vector elements are multiplied by a weights vector (W), and the weighted values are fed to the summing junction. Their sum is the dot product (W·P). The neuron has a bias b, which is summed with the weighted inputs. The sum n is fed to an activation function f whose output determines the neuron output, as illustrated in FIG. 5 and Eqs. 1-2. The activation function used for all neurons was the tan-sigmoid function shown in Eq. 3, and training minimized the error function of Eq. 4:

n = W · P + b (1)

Y = f(n) = f(W · P + b) (2)

f(n) = (e^n − e^(−n)) / (e^n + e^(−n)) (3)

E_p = ½ Σ_i (t_pi − O_pi)² (4)

where t_pi is the expected output and O_pi is the calculated output of the i-th neuron.
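
The following short numerical example illustrates Eqs. (1)-(4) for a single tan-sigmoid neuron; the weights, bias, and inputs are made up for illustration.

```python
# Numeric illustration of Eqs. (1)-(4) for one tan-sigmoid neuron; the input
# vector, weights, and bias below are made-up values, not trained parameters.
import numpy as np

def tansig(n):
    """Tan-sigmoid activation (equivalent to tanh)."""
    return (np.exp(n) - np.exp(-n)) / (np.exp(n) + np.exp(-n))

P = np.array([0.1, 0.4, 0.9, 0.7, 0.3])    # input feature vector (normalized TAC values)
W = np.array([0.5, -0.2, 0.8, 0.1, -0.4])  # weight vector
b = 0.05                                   # bias

n = W @ P + b                              # Eq. (1): weighted sum plus bias
Y = tansig(n)                              # Eqs. (2)-(3): neuron output

t_expected = 1.0                           # expected (training) output for this pattern
E_p = 0.5 * (t_expected - Y) ** 2          # Eq. (4): pattern error minimized by BP
print(round(float(Y), 4), round(float(E_p), 4))
```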

Results:

One dataset was chosen to train the ANN in order to obtain a series of parameters from which the segmentation of the tumor region could be achieved. Only a few dozen typical pixels were used as the ANN training sample, and the rest of the pixels in all frames and slices were segmented accordingly.

Firstly, dataset 1 was used as a training sample to segment dataset 2. By training the BP network with the training database, the weights and the biases of the network were determined. Dataset 2, a clinical PET image (40×170 pixels) from a breast tumor patient, was then input into the designed BP network to test its segmentation performance. The segmentation results of dataset 2 are shown in FIGS. 6-7.
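
As a hedged sketch of how per-pixel ANN outputs for a 40x170 slice could be turned into white tumor contours like those in FIGS. 6-7, the code below thresholds a probability map, removes small spurious regions, and extracts a one-pixel-wide outline; the threshold and cleanup steps are illustrative assumptions, not steps stated in the filing.

```python
# Hedged sketch: from per-pixel ANN tumor probabilities on a 40 x 170 slice to a
# one-pixel-wide outline, analogous to the white contours shown in the figures.
import numpy as np
from scipy import ndimage

def tumor_contour(prob_map, threshold=0.5, min_pixels=10):
    """prob_map: (40, 170) array of per-pixel tumor probabilities from the ANN."""
    mask = prob_map >= threshold
    mask = ndimage.binary_opening(mask)               # drop isolated noisy pixels
    labels, n_regions = ndimage.label(mask)
    for lab in range(1, n_regions + 1):               # remove implausibly small blobs
        if (labels == lab).sum() < min_pixels:
            mask[labels == lab] = False
    contour = mask & ~ndimage.binary_erosion(mask)    # one-pixel-wide outline
    return mask, contour

# Hypothetical probability map with one high-uptake region.
prob = np.zeros((40, 170))
prob[15:25, 60:90] = 0.9
mask, contour = tumor_contour(prob)
print(int(mask.sum()), int(contour.sum()))
```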

Then, the training and testing databases were interchanged. The segmentation results of dataset 1 in slices 43 and 55 are shown in FIGS. 8-9, with the training data selected from ROIs of dataset 2.

As shown in FIGS. 6-9, the suspected tumor regions were segmented as indicated by white contours in different 2D slices. The performance of the proposed method has shown promising results, and the method could successfully segment patient lesions.

Conclusions:

An aspect of an embodiment includes, among other things, a method and system (and related computer readable medium) that provides an innovative dynamic approach for image segmentation of breast cancer. An aspect of an embodiment demonstrates, among other things, an automatic method and system for accurately and quickly diagnosing breast tumors. The proposed method and system using FDG has shown promising results, whereby the method and system can successfully segment lesions and identify the tumor region in 3D space based on segmentation obtained in different 2D slices of the tissue volume.

An aspect of an embodiment includes, among other things, a method and system (and related computer readable medium) utilizing one or a few typical data sets that will be treated as training samples; an ANN model will then be developed to help physicians automatically locate tumor regions in other patients.

Example and/or Experimental Results Set No. 2

In an approach, a time activity curve (TAC) vector, consisting of the tracer concentration in selected image pixels on 2D slices generated from the 3D reconstructed imaging agent (i.e., tracer) volume distribution, may be obtained in the order of the scanning frame sequence. The pixels for the analysis may be selected based on the tracer concentration map: high intensity for potential cancer and low intensity as background reference. In order to minimize the image noise, an averaging filter may be used, summing the value of each pixel over approximately 10 slices, for example (or another desirable magnitude), to improve the signal-to-noise ratio (SNR). After the tracer concentration curves are fitted using a least-squares method, the TAC of each pixel may be generated for the whole scan duration. Each TAC reflects the rate of FDG uptake of each selected pixel. After filtering and smoothing each curve, the feature vector P of each pixel may be formed from the normalized uptake value at every frame time of the TAC; the length of P equals the frame number, which determines the number of inputs of the ANN.

There may be two datasets of PET images obtained from different clinical cases. After drawing regions of interest (ROI) and preprocessing the data with filtering, smoothing and normalization, pixel TACs from tumor and non-tumor regions for the two datasets may be extracted. Typical TACs may be included in dataset 1. Images may be reconstructed showing the Tumor region, Non-tumor region 1, and Non-Tumor 2 region from experimental Dataset 1. The pixel TACs from the tumor region may have higher values of FDG uptake over time, and the curves may have relatively steep slopes. But in non-tumor regions, the TACs of ROIs may have relatively gentle slopes and some of them may actually have a downward trend. Accordingly, in practice it may be demonstrated that the FDG uptake rates of tumor and non-tumor regions are clearly distinguishable. So the "feature vector", which signifies the relationship between time and FDG uptake, may be used as the input of the ANN to segment (separate) the tumor region.

One dataset may be chosen to "train" (calibrate) the ANN in order to obtain a series of parameters from which the segmentation of the tumor region could be achieved. A few dozen typical pixels may be used as the ANN training sample, and the rest of the pixels in all image frames and slices may be segmented according to the training sample.

Dataset 1 may be used as a training sample to segment Dataset 2. By training the back-propagation (BP) network with the training database, the weights and the biases of the network may be determined. Dataset 2, a clinical PET image for a breast tumor patient, may then be input into the designed BP network to test its segmentation performance. The segmentation results may be displayed such that any suspected tumor regions that were segmented are indicated by white contours in different 2D slices. Indeed, the performance of the proposed method has shown promising results and can successfully segment patient lesions.

Example and/or Experimental Results Set No. 3

An aspect of an embodiment may be applicable to positron emission tomography (PET) systems and methods and implemented with a variety of imaging modalities, including: ultrasound (US), magnetic resonance imaging (MRI), nuclear medicine, X-ray computed tomography (CT), single photon-emission computed tomography (SPECT), near-infrared tomography (NIRT), and optical imaging techniques including optical computed tomography (OCT), for example.

FIG. 10 is a schematic diagram of hardware on which an aspect of an embodiment of the present invention can be implemented. For example, the hardware 1000 includes a scanner 1002, a processor 1004 for performing the operations disclosed herein, and a display or other output 1006, and may be utilized as an aspect of an embodiment of the imaging system, method and computer readable medium.

Example and/or Experimental Results Set No. 4

FIG. 11 diagrammatically illustrates an imaging system 10 for imaging a subject 12.

However, the depiction of a subject (e.g., a patient) is merely an example. The subject 12 may be any target object, animal, human, and so forth, in other embodiments. The imaging system 10 includes an imaging device 14, an imaging control system 16, a health care facility system 18, and a temperature control system 20. During operation, the imaging device 14 may be configured to image the subject 12 (e.g., patient) via one or more imaging modalities under control of the control system 16. The control system 16 may receive one or more inputs from the health care facility system 18 regarding operation of the device and output data related to imaging of the patient 12. The temperature control system 20 may coordinate with the control system 16 to facilitate cooling of one or more components of the imaging device 14 during operation.

In the illustrated embodiment, the imaging device 14 includes a frame 22 having an inner transaxial wall 24 and an outer transaxial wall 26. The inner transaxial wall 24 may define a transaxially extending bore 28 configured to receive a patient support 30. The patient support 30 may be configured to position the patient 12 within the transaxially extending bore 28, for example, via movement of the patient support 30 within, or into and out of, the frame 22, as indicated by arrow 32.

In the illustrated embodiment, the imaging system 10 may be a combined PET and MRI imaging system. However, it should be noted that in other embodiments, the imaging system 10 may combine PET with any other suitable imaging modality, such as X-ray CT or ultrasound. Indeed, the depicted MRI/PET system is merely an example.

In the depicted embodiment, the imaging device 14 includes components suitable for performing both MRI and PET. Specifically, the MRI portion of the device 14 may include a magnet 34 configured to generate a primary magnetic field. In some embodiments, the magnet 34 may be driven by a power source (not shown) provided, for example, by control system 16. One or more gradient coils 36 may be configured to generate magnetic gradient fields during imaging. A radio frequency (RF) coil 38 may generate RF pulses for exciting the nuclear spins and/or function as a receiving coil, depending on the given implementation. The arrangement of the magnet 34, the one or more gradient coils 36, and the RF coil 38 is subject to a variety of implementation-specific variations. However, in the illustrated example, the RF coil 38 is nested within the one or more gradient coils 36, which are nested within the magnet 34.

Centralized control circuitry 40 may control both the MRI and PET subsystems of the imaging system 10. With respect to the MRI sub-system, the control circuitry 40 may control the MRI components to generate a desired magnetic field and RF pulses and to process the generated signals. To that end, the control circuitry 40 may include one or more processors communicatively coupled to memory 42. The one or more processors (e.g., microprocessor(s), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), etc.) may be configured to execute a control algorithm. By way of example, the control algorithm may be provided as machine-readable encoded instructions stored on a machine-readable medium, such as the memory 42, and may provide control signals for controlling operation of the imaging system 10. The control signals may control the imaging device 14 to selectively acquire MRI and/or PET data.

The memory 42 may be a tangible, non-transitory, machine readable medium. For example, the memory 42 may be volatile or non-volatile memory, such as read only memory (ROM), random access memory (RAM), magnetic storage memory, optical storage memory, or a combination thereof. Furthermore, a variety of control parameters may be stored in the memory 42 along with code configured to provide a specific output (e.g., enable MRI image acquisition, enable PET image acquisition, etc.) to the imaging device 14 during operation. The memory 42 may also store acquired image data, pulse sequences for different modes of operation, or any other parameters defining examination sequences performed by the MRI portion of the device 14. Further, in some embodiments, the processor(s) of the control circuitry 40 may also receive one or more inputs from one or more input devices 44, through which the user may choose a process and/or input desired parameters (e.g., which part of the body should be imaged, whether multiple or single modality operation is desired, etc.).

In some embodiments, a gradient coil controller 46 and a transmit/receive interface 48 may provide interfaces through which the control circuitry 40 may control the one or more gradient coils 36 and RF coil 38. For example, the gradient coil controller 46 may include amplification circuitry configured to drive current for the one or more gradient coils 36 under control of circuitry 40. For further example, the transmit/receive interface 48 may include amplification circuitry to drive the RF coil 38 during operation. In some embodiments, the RF coil 38 may be configured to both emit RF excitation pulses and receive responsive signals, and the transmit/receive interface 48 may include a switch configured to toggle the RF coil 38 between transmit and receive modes of operation.

In addition to the MRI subsystem, the imaging device 14 also includes one or more components that enable PET imaging. For example, a PET assembly 50 is disposed between the inner transaxial wall 24 and the outer transaxial wall 26 of the frame 22 of the imaging device 14. In the illustrated embodiment, the PET assembly 50 is disposed annularly between the one or more gradient coils 36 and the RF coil 38. However, in other embodiments, the PET assembly 50 may be disposed in any other desired location within the frame 22.

Further, although the PET assembly 50 is illustrated as an integrative part of the frame 22 in the illustrated example, the PET assembly 50 may be configured as a removable insert in other embodiments. For example, the PET assembly 50 may be provided as part of a retrofit kit configured to retrofit an existing imaging system (e.g., MRI system or CT system) to endow the existing system with PET imaging capabilities. Such a retrofit kit may include the PET assembly 50 configured to be inserted into an existing frame 22 and/or software configured to be executed by the control circuitry 40 to enable control of the PET assembly and/or processing of the acquired data.

The PET assembly 50 may include an annular scintillation crystal coupled to one or more photodetectors, as discussed in more detail below, to enable the PET assembly 50 to function as a PET detector. To that end, the patient 12 may be administered through injection or inhalation a positron-emitting imaging agent 52 which will result in the production of a pair of annihilation photons. In some embodiments, the PET subsystem may generate images displaying the distributions of positron-emitting nuclides in the patient 12. To that end, the PET assembly 50 may operate on the principle of annihilation coincidence detection (ACD). In such embodiments, a positron is emitted by a nuclear transformation of a radiopharmaceutical (e.g., radiation labeled imaging agent 52), and the positron annihilates with an electron to result in two 511 keV photons 54 emitted in opposite directions and detected by the PET assembly 50. The single crystal scintillator in the PET assembly 50 may detect the photons 54 and produce visible scintillation photons detectable by photodetectors.
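
A simplified software model of annihilation coincidence detection is sketched below: two single events are paired when they arrive on different detectors within a short timing window. The window width, time stamps, and detector identifiers are hypothetical values chosen for illustration.

```python
# Simplified software model of annihilation coincidence detection (ACD): two
# single events are paired when they occur on different detectors within a short
# timing window. Window width and event data are hypothetical.
import numpy as np

def find_coincidences(times_ns, detector_ids, window_ns=4.0):
    """times_ns: sorted single-event time stamps (ns); detector_ids: which head fired.
    Returns index pairs of events on different detectors within the window."""
    pairs = []
    for i in range(len(times_ns) - 1):
        close_in_time = times_ns[i + 1] - times_ns[i] <= window_ns
        different_heads = detector_ids[i + 1] != detector_ids[i]
        if close_in_time and different_heads:
            pairs.append((i, i + 1))
    return pairs

times = np.array([10.0, 12.5, 500.0, 502.0, 900.0])   # hypothetical singles (ns)
dets = np.array([3, 9, 1, 1, 7])                      # detector head of each single
print(find_coincidences(times, dets))                 # -> [(0, 1)]; events 2-3 share a head
```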

Detector acquisition circuitry 56 may be configured to control acquisition of the signals acquired by the PET assembly 50 in coordination with central control circuitry 40. Image processing circuitry 58 may process the acquired data from the detector acquisition circuitry 56. For example, the image processing circuitry 58 may include one or more processors configured to receive the PET image data and the MRI image data, and to overlay the PET data over the MRI data to generate a composite image.
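
As a hedged sketch of such PET-over-MRI compositing, the code below alpha-blends a normalized PET slice onto a grayscale MRI slice; it assumes the two arrays are already co-registered and of equal shape, which is an illustration-only simplification.

```python
# Illustration-only PET/MRI fusion: alpha-blend normalized PET uptake onto a
# grayscale MRI base image. Assumes the two 2-D arrays are already co-registered.
import numpy as np

def fuse_pet_mri(mri, pet, alpha=0.4):
    """mri, pet: 2-D arrays of the same shape. Returns an (H, W, 3) RGB composite."""
    def to_unit(img):
        img = img.astype(float)
        return (img - img.min()) / max(img.max() - img.min(), 1e-12)

    base = to_unit(mri)
    rgb = np.stack([base, base, base], axis=-1)                     # grayscale anatomy
    rgb[..., 0] = (1 - alpha) * rgb[..., 0] + alpha * to_unit(pet)  # uptake in red channel
    return np.clip(rgb, 0.0, 1.0)

mri = np.random.default_rng(3).random((64, 64))
pet = np.zeros((64, 64))
pet[20:30, 20:30] = 5.0                           # hypothetical hot lesion
composite = fuse_pet_mri(mri, pet)
print(composite.shape)                            # (64, 64, 3)
```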

In the illustrated embodiment, the centralized control circuitry 40 controls the PET and MRI subsystems of the imaging system 10. To that end, the control of the subsystems is coordinated to enable acquisition of PET and MRI data while the patient 12 is in the same position. The foregoing feature may offer the advantage of enabling reduction in imaging acquisition time (e.g., because the patient does not have to be moved between imaging modalities) and/or reduction in image artifacts due to patient movement.

In some embodiments, the temperature control system 20 may also be under control of the control circuitry 40 to enable cooling of one or more components of the imaging device 14. For example, the temperature control system 20 may include control circuitry 60 (e.g., one or more processors) communicatively coupled to control circuitry 40. The control circuitry 60 may access memory 62 (which may include components similar to memory 42 described above) to facilitate control of a fluid source. For example, the control circuitry 60 may control a pump to pump fluid from the fluid source 64 to one or more tubes in the PET assembly 50 to cool one or more components in the PET assembly 50, as described in more detail below.

Further, the imaging control system 16 may include one or more devices that facilitate interaction between a user and the imaging device 14. For example, an interface 66 may communicatively couple the control circuitry 40 to an operator workstation 68. The operator workstation 68 may be a general purpose or special computer including, for example, memory for storing pulse sequences, examination protocols, patient data, raw and/or processed image data, and so forth. The operator workstation 68 may receive one or more operator inputs via the user input devices 44. The user input devices 44 may include, but are not limited to, mobile devices (e.g., smartphones, tablets, laptops, etc.), keyboards, computer mice, etc. The operator workstation 68 may also be coupled to one or more local or remote image access and control systems 70, such as picture archiving and communication systems (PACS), teleradiography systems (TELERAD), etc.

Example and/or Experimental Results Set No. 5

FIG. 12 illustrates an embodiment of a technique and method for using a standalone PET imaging system 72 in accordance with a disclosed embodiment. That is, in this embodiment, the PET assembly 50 (see also FIG. 11) may form an integral or removably insertable part of a single modality system 72. As shown, in this embodiment, a radioactive isotope (e.g., radiation source 52 in FIG. 11) is disposed in the patient 12 (block 74). In some embodiments, the radioactive isotope may be targeted to a desired location(s) within the patient 12 by chemically binding it to a targeting ligand, such as glucose, a peptide, a small molecule, etc. Photons are emitted from the radioactive isotope 52 (block 76) and detected by one or more detection elements of the PET assembly 50.

In the illustrated embodiment, the PET assembly 50 includes a scintillator crystal 78. The crystal 78 may be formed in a variety of suitable shapes and may be formed from any suitable material. For example, the crystal 78 may include, but is not limited to, LYSO (Cerium-doped Lutetium Yttrium Orthosilicate), LaBr3 (Lanthanum Bromide), NaI(Tl) (Sodium Iodide), BGO (Bismuth Germanate), a combination thereof, or any other suitable scintillator material. The crystal 78 may be a dense material capable of converting a high-energy gamma ray (e.g., 511 keV in the case of positron emitters, and lower energies in the case of single gamma emitters such as, but not limited to, 99mTc and 111In) into visible light. The visible light may be detected by one or more photodetectors 80. The one or more photodetectors 80 may include, but are not limited to, avalanche photodiodes, silicon photomultipliers (SiPMs), or any other suitable photodetector.

In the PET system 72, the PET assembly 50 is under control of the detector acquisition circuitry 56, as described above. Further, the image processing circuitry 58 is communicatively coupled to the detector acquisition circuitry 56 to receive and process the PET image data, as described in detail above. Likewise, the operator workstation 68 may be included in the PET system 72 to enable operator input. Additionally, an output device 82, such as a display or printer, may be configured to output the PET images generated during operation of the PET imaging system 72.

Example and/or Experimental Results Set No. 6

FIG. 13 illustrates a block diagram of an example machine 400 upon which one or more embodiments (e.g., discussed methodologies) can be implemented (e.g., run).

Examples of machine 400 can include logic, one or more components, circuits (e.g., modules), or mechanisms. Circuits are tangible entities configured to perform certain operations. In an example, circuits can be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner. In an example, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors (processors) can be configured by software (e.g., instructions, an application portion, or an application) as a circuit that operates to perform certain operations as described herein. In an example, the software can reside (1) on a non-transitory machine readable medium or (2) in a transmission signal. In an example, the software, when executed by the underlying hardware of the circuit, causes the circuit to perform the certain operations.

In an example, a circuit can be implemented mechanically or electronically. For example, a circuit can comprise dedicated circuitry or logic that is specifically configured to perform one or more techniques such as discussed above, such as including a special-purpose processor, a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In an example, a circuit can comprise programmable logic (e.g., circuitry, as encompassed within a general-purpose processor or other programmable processor) that can be temporarily configured (e.g., by software) to perform the certain operations. It will be appreciated that the decision to implement a circuit mechanically (e.g., in dedicated and permanently configured circuitry), or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations.

Accordingly, the term "circuit" is understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform specified operations. In an example, given a plurality of temporarily configured circuits, each of the circuits need not be configured or instantiated at any one instance in time. For example, where the circuits comprise a general-purpose processor configured via software, the general-purpose processor can be configured as respective different circuits at different times. Software can accordingly configure a processor, for example, to constitute a particular circuit at one instance of time and to constitute a different circuit at a different instance of time.

In an example, circuits can provide information to, and receive information from, other circuits. In this example, the circuits can be regarded as being communicatively coupled to one or more other circuits. Where multiple of such circuits exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the circuits. In embodiments in which multiple circuits are configured or instantiated at different times, communications between such circuits can be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple circuits have access. For example, one circuit can perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further circuit can then, at a later time, access the memory device to retrieve and process the stored output. In an example, circuits can be configured to initiate or receive communications with input or output devices and can operate on a resource (e.g., a collection of information).

The various operations of method examples described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors can constitute processor-implemented circuits that operate to perform one or more operations or functions. In an example, the circuits referred to herein can comprise processor-implemented circuits.

Similarly, the methods described herein can be at least partially processor-implemented. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented circuits. The performance of certain of the operations can be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In an example, the processor or processors can be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other examples the processors can be distributed across a number of locations. The one or more processors can also operate to support performance of the relevant operations in a "cloud computing" environment or as "software as a service" (SaaS). For example, at least some of the operations can be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., Application Program Interfaces (APIs)).

Example embodiments (e.g., apparatus, systems, or methods) can be implemented in digital electronic circuitry, in computer hardware, in firmware, in software, or in any combination thereof. Example embodiments can be implemented using a computer program product (e.g., a computer program, tangibly embodied in an information carrier or in a machine readable medium, for execution by, or to control the operation of, data processing apparatus such as a programmable processor, a computer, or multiple computers).

A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a software module, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site, or distributed across multiple sites and interconnected by a communication network.

In an example, operations can be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Examples of method operations can also be performed by, and example apparatus can be implemented as, special purpose logic circuitry (e.g., a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)).

The computing system can include clients and servers. A client and server are generally remote from each other and generally interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures require consideration. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or a combination of permanently and temporarily configured hardware can be a design choice. Below are set out hardware (e.g., machine 400) and software architectures that can be deployed in example embodiments.

In an example, the machine 400 can operate as a standalone device or the machine 400 can be connected (e.g., networked) to other machines.

In a networked deployment, the machine 400 can operate in the capacity of either a server or a client machine in server-client network environments. In an example, machine 400 can act as a peer machine in peer-to-peer (or other distributed) network environments. The machine 400 can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) specifying actions to be taken (e.g., performed) by the machine 400. Further, while only a single machine 400 is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

Example machine (e.g., computer system) 400 can include a processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 404 and a static memory 406, some or all of which can communicate with each other via a bus 408. The machine 400 can further include a display unit 410, an alphanumeric input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 414 (e.g., a mouse). In an example, the display unit 410, input device 412 and UI navigation device 414 can be a touch screen display. The machine 400 can additionally include a storage device (e.g., drive unit) 416, a signal generation device 418 (e.g., a speaker), a network interface device 420, and one or more sensors 421, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.

The storage device 416 can include a machine readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 424 can also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the processor 402 during execution thereof by the machine 400. In an example, one or any combination of the processor 402, the main memory 404, the static memory 406, or the storage device 416 can constitute machine readable media.

While the machine readable medium 422 is illustrated as a single medium, the term "machine readable medium" can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are configured to store the one or more instructions 424. The term "machine readable medium" can also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term "machine readable medium" can accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine readable media can include non-volatile memory, including, by way of example, semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

The instructions 424 can further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, IP, TCP, UDP, HTTP, etc.).

Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., IEEE 802.11 standards family known as Wi-Fi®, IEEE 802.16 standards family known as WiMax®), peer-to-peer (P2P) networks, among others. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.

Additional Examples

Example 1. A continuous dynamic positron emission tomography (PET) assembly for imaging a target region of a subject. The assembly may comprise: a radioactive tracer isotope injector configured to administer a radioactive isotope into the subject; a scintillator crystal configured to absorb ionizing radiation from the subject and emit scintillator light, wherein said scintillator crystal undertakes the absorption substantially at the same time of the start of administering the radioactive isotope; a photo detector in communication with said scintillator crystal, wherein the photodetector is configured to detect the emitted scintillation light and provide electrical signals as output; a signal digitizing circuitry converting the output electrical signals into digital data; and a processor configured to receive the digital data and implement a model to convert the digital data into a three dimensional, tomographic image reconstruction.

Example 2. The PET assembly of example 1, wherein said model comprises an artificial Neural Network (ANN).

Example 3. The PET assembly of example 1 (as well as subject matter in whole or in part of example 2), wherein undertaking the absorption, by the scintillator crystal, substantially at the same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering;

about 0 to 30 minutes after the start of administering;

about 25 to 30 minutes after the start of administering;

about 1 minute to about 10 minutes after the start of administering; or

about 60 minutes after the start of administering.

Example 4. The PET assembly of example 1 (as well as subject matter of one or more of any combination of examples 2-3, in whole or in part), wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

Example 5. The PET assembly of example 1 (as well as subject matter of one or more of any combination of examples 2-4, in whole or in part), wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

Example 6. The PET assembly of example 5 (as well as subject matter of one or more of any combination of examples 2-4, in whole or in part), wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

Example 7. An aspect of an embodiment of the present invention provides, among other things, a method for continuous dynamic positron emission tomography (PET) imaging of a target region of a subject. The method may comprise: administering a radioactive tracer isotope into the subject; absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope; detecting the emitted scintillation light and providing electrical signals as output; converting the output electrical signals into digital data; and receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

Example 8. The PET imaging method of example 7, wherein said model comprises an artificial Neural Network (ANN).

Example 9. The PET imaging method of example 7 (as well as subject matter in whole or in part of example 8), wherein undertaking the absorption, by the scintillator crystal, substantially at the same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering; or

about 60 minutes after the start of administering.

Example 10. The PET imaging method of example 7 (as well as subject matter of one or more of any combination of examples 8-9, in whole or in part), wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

Example 11. The PET imaging method of example 7 (as well as subject matter of one or more of any combination of examples 8-10, in whole or in part), wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

Example 12. The PET imaging method of example 11 (as well as subject matter of one or more of any combination of examples 8-10, in whole or in part), wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

Example 13. An aspect of an embodiment of the present invention provides, among other things, a non-transitory computer readable medium having computer program logic that when implemented enables one or more processors in a positron emission tomography (PET) assembly to generate continuous dynamic positron emission tomography (PET) images of a target region of a subject. The computer program logic may comprise: administering a radioactive tracer isotope into the subject; absorbing ionizing radiation from the subject and emitting scintillator light, wherein the absorption is substantially undertaken at the same time of the start of administering the radioactive isotope; detecting the emitted scintillation light and providing electrical signals as output; converting the output electrical signals into digital data; and receiving the digital data and implementing a model to convert the digital data into a three dimensional, tomographic image reconstruction.

Example 14. The non-transitory computer readable medium of example 13, wherein said model comprises an artificial Neural Network (ANN).

Example 15. The non-transitory computer readable medium of example 13 (as well as subject matter in whole or in part of example 14), wherein undertaking the absorption, by the scintillator crystal, substantially at the same time of the start of administering the radioactive isotope includes at least one of the following ranges of timing:

about 1 minute to about 60 minutes after the start of administering;

about 5 minutes to about 30 minutes after the start of administering;

about 1 minute to about 25 minutes after the start of administering;

about 10 to about 30 minutes after the start of administering;

about 15 to about 20 minutes after the start of administering;

about 25 minutes after the start of administering;

about 30 to about 60 minutes after the start of administering; or

about 60 minutes after the start of administering.

Example 16. The non-transitory computer readable medium of example 13 (as well as subject matter of one or more of any combination of examples 14-15, in whole or in part), wherein the target region comprises: breast, brain, head/neck, heart, liver, prostate, or lower extremities.

Example 17. The non-transitory computer readable medium of example 13 (as well as subject matter of one or more of any combination of examples 14-16, in whole or in part), wherein implementing the model comprises using a trained artificial neural network (ANN) model for image segmentation of at least a part of the target region of the subject.

Example 18. The non-transitory computer readable medium of example 17 (as well as subject matter of one or more of any combination of examples 14-16, in whole or in part), wherein the ANN is trained at least in part based on time activity curves associated with pixels, from regions of interest in PET images, in tumor regions and non-tumor regions.

Example 19. The method of manufacturing any of the devices, systems, assemblies, or their components provided in any one or more of examples 1-6.

Example 20. The method of manufacturing any of the devices, materials, or their components provided in any one or more of examples 13-18.
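By way of a non-limiting illustration of examples 5-6, 11-12, and 17-18, the sketch below shows one way an ANN could be trained on per-pixel time activity curves taken from tumor and non-tumor regions of interest in dynamic PET images and then used to segment a target region. It assumes a dynamic series already reconstructed as a 4-D NumPy array, labelled tumor/non-tumor masks from training studies, and scikit-learn's MLPClassifier as the ANN; these are illustrative assumptions, not requirements of the examples.

```python
# Illustrative sketch: train an ANN on per-pixel time activity curves (TACs) and
# apply it to label every pixel of a dynamic PET volume as tumor or non-tumor.
import numpy as np
from sklearn.neural_network import MLPClassifier


def pixel_tacs(dynamic_pet, mask):
    """Return one TAC per pixel inside `mask`.

    dynamic_pet: (n_frames, nx, ny, nz) activity values
    mask:        (nx, ny, nz) boolean region of interest
    returns:     (n_pixels, n_frames) matrix, one row per pixel
    """
    return dynamic_pet[:, mask].T


def train_tac_classifier(dynamic_pet, tumor_mask, non_tumor_mask):
    """Train a small ANN to separate tumor TACs from non-tumor TACs."""
    x = np.vstack([pixel_tacs(dynamic_pet, tumor_mask),
                   pixel_tacs(dynamic_pet, non_tumor_mask)])
    y = np.concatenate([np.ones(int(tumor_mask.sum()), dtype=int),
                        np.zeros(int(non_tumor_mask.sum()), dtype=int)])
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(x, y)
    return model


def segment(dynamic_pet, model):
    """Label every pixel of the field of view as tumor (1) or non-tumor (0)."""
    n_frames = dynamic_pet.shape[0]
    tacs = dynamic_pet.reshape(n_frames, -1).T       # (n_pixels, n_frames)
    labels = model.predict(tacs)
    return labels.reshape(dynamic_pet.shape[1:])     # (nx, ny, nz) segmentation mask
```

A model trained in this way corresponds to the trained ANN for image segmentation recited in examples 5, 11, and 17, with the tumor and non-tumor masks standing in for the regions of interest from which the training time activity curves are drawn.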

REFERENCES

The devices, systems, compositions, computer program products, non-transitory computer readable media, and methods of various embodiments of the invention disclosed herein may utilize aspects disclosed in the following references, applications, publications and patents, which are hereby incorporated by reference herein in their entirety (and which are not admitted to be prior art with respect to the present invention by inclusion in this section).

1. U.S. Patent Application Publication No. US 2008/0103391 A1, Dos Santos Varela, J., "Tomography by Emission of Positrons (PET) System", May 1, 2008.

2. U.S. Patent Application Publication No. US 2003/0128801 A1, Eisenberg, H., et al., "Multi-Modality Apparatus for Dynamic Anatomical, Physiological and Molecular Imaging", July 10, 2003.

3. U.S. Patent No. 6,490,476 B1, Townsend, D., et al., "Combined PET and X-Ray CT Tomograph and Method for Using Same", December 3, 2002.

4. U.S. Patent No. 5,825,031, Wong, W., et al., "Tomographic PET Camera With Adjustable Diameter Detector Ring", October 20, 1998.

5. U.S. Patent Application Publication No. US 2008/0077005 A1, Piron, C., et al., "System and Method for Multimodality Breast Imaging", March 27, 2008.

6. U.S. Patent Application Publication No. US 2004/0260176 A1, Wollenweber, S., et al., "Systems and Methods for Correcting a Positron Emission Tomography Emission Image", December 23, 2004.

7. U.S. Patent No. 7,732,774 B2, Majewski, S., "High Resolution PET Breast Imager with Improved Detection Efficiency", June 8, 2010.

8. U.S. Patent No. 6,271,525 B1, Majewski, et al., "Mini Gamma Camera, Camera System and Method of Use", August 7, 2001.

9. U.S. Patent No. 8,698,087 B2, Surti, et al., "Limited Angle Tomography with Time-of-Flight PET", April 15, 2014.

10. U.S. Patent No. 6,946,658 B2, Tai, Y., "Method and Apparatus for Increasing Spatial Resolution of a PET Scanner", September 20, 2005.

11. International Patent Application No. PCT/US2016/063534, Stuart S. Berr, et al., entitled: "Positron Emission Tomography Systems and Methods", filed November 23, 2016.

12. U.S. Patent Application Publication No. US 2008/0103391 A1, Dos Santos Varela, J., "Tomography by Emission of Positrons (PET) System", May 1, 2008.

13. U.S. Patent Application Publication No. US 2003/0128801 A1, Eisenberg, et al., "Multi-Modality Apparatus for Dynamic Anatomical, Physiological and Molecular Imaging", July 10, 2003.

14. U.S. Patent No. 6,490,476 B1, Townsend, et al., "Combined PET and X-Ray CT Tomograph and Method for Using Same", December 3, 2002.

15. U.S. Patent No. 5,825,031, Wong, et al., "Tomographic PET Camera with Adjustable Diameter Detector Ring", October 20, 1998.

16. U.S. Patent Application Publication No. US 2008/0077005 A1, Piron, et al., "System and Method for Multimodality Breast Imaging", March 27, 2008.

17. U.S. Patent Application Publication No. US 2004/0260176 A1, Wollenweber, et al., "Systems and Methods for Correcting a Positron Emission Tomography Emission Image", December 23, 2004.

18. U.S. Patent Application Publication No. US 2011/0192982 A1, Henseler et al., "System and Method for Providing Depth of Interaction Detection Using Positron Emission Tomography", August 11, 2011.

19. U.S. Patent No. 6,921,901, Chai, et al., "Lutetium Yttrium Orthosilicate Single Crystal Scintillator Detector", July 26, 2005.

In summary, while the present invention has been described with respect to specific embodiments, many modifications, variations, alterations, substitutions, and equivalents will be apparent to those skilled in the art. The present invention is not to be limited in scope by the specific embodiments described herein. Indeed, various modifications of the present invention, in addition to those described herein, will be apparent to those of skill in the art from the foregoing description and accompanying drawings. Accordingly, the invention is to be considered as limited only by the spirit and scope of the claims, including all modifications and equivalents.

Still other embodiments will become readily apparent to those skilled in this art from reading the above-recited detailed description and drawings of certain exemplary embodiments. It should be understood that numerous variations, modifications, and additional embodiments are possible, and accordingly, all such variations, modifications, and embodiments are to be regarded as being within the spirit and scope of this application. For example, regardless of the content of any portion (e.g., title, field, background, summary, abstract, drawing figure, etc.) of this application, unless clearly specified to the contrary, there is no requirement for the inclusion in any claim herein, or of any application claiming priority hereto, of any particular described or illustrated activity or element, any particular sequence of such activities, or any particular interrelationship of such elements. Moreover, any activity can be repeated, any activity can be performed by multiple entities, and/or any element can be duplicated. Further, any activity or element can be excluded, the sequence of activities can vary, and/or the interrelationship of elements can vary. Unless clearly specified to the contrary, there is no requirement for any particular described or illustrated activity or element, any particular sequence of such activities, any particular size, speed, material, dimension or frequency, or any particular interrelationship of such elements. Accordingly, the descriptions and drawings are to be regarded as illustrative in nature, and not as restrictive. Moreover, when any number or range is described herein, unless clearly stated otherwise, that number or range is approximate. When any range is described herein, unless clearly stated otherwise, that range includes all values therein and all sub-ranges therein. Any information in any material (e.g., a United States/foreign patent, United States/foreign patent application, book, article, etc.) that has been incorporated by reference herein is only incorporated by reference to the extent that no conflict exists between such information and the other statements and drawings set forth herein. In the event of such conflict, including a conflict that would render invalid any claim herein or seeking priority hereto, then any such conflicting information in such incorporated by reference material is specifically not incorporated by reference herein.