Title:
PROSTATE CANCER LOCAL STAGING
Document Type and Number:
WIPO Patent Application WO/2024/037922
Kind Code:
A1
Abstract:
Systems, methods, and computer programs disclosed herein relate to prostate cancer local staging based on multi-parametric magnetic resonance imaging images using a trained machine learning model.

Inventors:
LORIO SARA (GB)
URBAN KATHARINA (DE)
Application Number:
PCT/EP2023/071881
Publication Date:
February 22, 2024
Filing Date:
August 08, 2023
Assignee:
BAYER AG (DE)
International Classes:
G06T7/11; G06T7/00
Foreign References:
EP3367331A1, 2018-08-29
Other References:
OSCAR J PELLICER-VALERO ET AL: "Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 2 February 2022 (2022-02-02), XP091144098
VENTE COEN DE ET AL: "Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE, USA, vol. 68, no. 2, 8 May 2020 (2020-05-08), pages 374 - 383, XP011832049, ISSN: 0018-9294, [retrieved on 20210120], DOI: 10.1109/TBME.2020.2993528
BELUE MASON J. ET AL: "Tasks for artificial intelligence in prostate MRI", vol. 6, no. 1, 6 February 2022 (2022-02-06), XP093020642, Retrieved from the Internet DOI: 10.1186/s41747-022-00287-9
A. STABILE ET AL.: "Multiparametric MRI for prostate cancer diagnosis: current status and future directions", NAT REV UROL, vol. 17, 2020, pages 41 - 61, XP036980594, DOI: 10.1038/s41585-019-0212-4
I. CAGLIC ET AL.: "Multiparametric MRI - local staging of prostate cancer and beyond", RADIOL ONCOL, vol. 53, no. 2, 2019, pages 159 - 170
O. J. PELLICER-VALERO ET AL.: "Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images", SCI REP, vol. 12, 2022, pages 2975
C. DE VENTE ET AL.: "Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI", IEEE TRANS BIOMED ENG., vol. 68, no. 2, February 2021 (2021-02-01), pages 374 - 383, XP011832049, DOI: 10.1109/TBME.2020.2993528
M. J. BELUE, B. TURKBEY: "Tasks for artificial intelligence in prostate MRI", EUR RADIOL EXP, vol. 6, 2022, pages 33, XP093020642, DOI: 10.1186/s41747-022-00287-9
S. L. MOROIANU ET AL.: "Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning", CANCERS (BASEL), vol. 14, no. 12, 7 June 2022 (2022-06-07), pages 2821, XP093020646, DOI: 10.3390/cancers14122821
S. JADON: "A survey of loss functions for semantic segmentation", ARXIV:2006.14822V4, 3 September 2020 (2020-09-03)
K. JANOCHA ET AL.: "On Loss Functions for Deep Neural Networks in Classification", ARXIV:1702.05659V1, 2017
O. RONNEBERGER ET AL.: "International Conference on Medical image computing and computer-assisted intervention", 2015, SPRINGER, article "U-net: Convolutional networks for biomedical image segmentation", pages: 234 - 241
M.-Y. LIU ET AL.: "Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications", ARXIV:2008.02793
J. HENRY ET AL.: "Pix2Pix GAN for Image-to-Image Translation", DOI: 10.13140/RG.2.2.32286.66887
J. HAUBOLD ET AL.: "Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network", EUR RADIOL, vol. 31, no. 8, 2021, pages 6087 - 6095, XP037503087, DOI: 10.1007/s00330-021-07714-2
K. HARA ET AL.: "Learning Spatio-temporal Features with 3D Residual Networks for Action Recognition", ARXIV:1708.07632, 2017
Attorney, Agent or Firm:
BIP PATENTS (DE)
Claims:
CLAIMS

1. A computer-implemented method comprising:
providing a trained machine learning model (MLM'),
receiving patient data, the patient data comprising a multi-parametric MRI image set (IS*) of an examination region comprising a prostate region of a male human patient,
inputting the patient data into the trained machine learning model (MLM'), wherein the trained machine learning model (MLM') comprises a first segmentation unit (SU1), a second segmentation unit (SU2), and a classification unit (CU),
o wherein the first segmentation unit (SU1) is configured to receive the multi-parametric MRI image set (IS*) of the examination region comprising the prostate region of the male human patient and to generate one or more first segmented images (SI1*) based on the one or more received images and model parameters,
o wherein the second segmentation unit (SU2) is configured to receive the one or more first segmented images (SI1*) and the multi-parametric MRI image set (IS*) of the examination region and to generate one or more second segmented images (SI2*) based on the one or more first segmented images (SI1*), the multi-parametric MRI image set (IS*) and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images (SI2*),
o wherein the classification unit (CU) is configured to assign the one or more second segmented images (SI2*) to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model (MLM') a predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*),
outputting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*), and/or storing the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) on a data storage, and/or transmitting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) to a remote computer system.

2. The method of claim 1, wherein the multi-parametric MRI image set (IS*) comprises one or more T2-weighted images and/or one or more apparent diffusion coefficient maps.

3. The method of claim 1, wherein the multi-parametric MRI image set (IS*) consists of one or more T2-weighted images and/or one or more apparent diffusion coefficient maps.

4. The method of any one of claims 1 to 3, wherein each class corresponds to a prostate cancer local stage according to the tumor, nodes, and metastases staging system developed by the American Joint Committee on Cancer.

5. The method of any one of claims 1 to 4, wherein each of the first segmentation unit (SU1), second segmentation unit (SU2), and classification unit (CU) is trained separately.

6. The method of any one of claims 1 to 5, wherein each of the first segmentation unit (SU1), second segmentation unit (SU2), and classification unit (CU) is or comprises an artificial neural network.

7. The method of any one of claims 1 to 6, wherein the trained machine learning model (MLM') was trained on training data (TD), the training data (TD) comprising, for each reference patient of a multitude of reference patients, input data and target data, the input data comprising a multi-parametric MRI image set (IS) of an examination region comprising a prostate region of the reference patient, and the target data comprising one or more target images (SI) in which, if present, prostate gland, prostatic cancer lesions and extra-prostatic cancer lesions are segmented, and a prostate cancer local stage (PCS) for the reference patient.

8. The method of any one of claims 1 to 7, wherein the trained machine learning model (MLM') was trained in a training method, the training method comprising:
receiving and/or providing a machine learning model (MLM), wherein the machine learning model (MLM) comprises the first segmentation unit (SU1), the second segmentation unit (SU2), and the classification unit (CU),
o wherein the first segmentation unit (SU1) is configured to receive a multi-parametric MRI image set (IS) of an examination region comprising a prostate region of a male human and to generate one or more first segmented images (SI1) based on the one or more received images and model parameters,
o wherein the second segmentation unit (SU2) is configured to receive the one or more first segmented images (SI1) and the multi-parametric MRI image set (IS) of the examination region and to generate one or more second segmented images (SI2) based on the first segmented images (SI1), the multi-parametric MRI image set (IS) and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images (SI2),
o wherein the classification unit (CU) is configured to assign the one or more second segmented images (SI2) to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving and/or providing training data (TD), the training data (TD) comprising, for each reference patient of a multitude of reference patients, input data and target data, the input data comprising a multi-parametric MRI image set (IS) of an examination region comprising a prostate region of the reference patient, and the target data comprising one or more target images (SI) in which, if present, prostate gland, prostatic cancer lesions and extra-prostatic cancer lesions are segmented, and a prostate cancer local stage (PCS) for the reference patient,
training the machine learning model (MLM), wherein the training comprises for each reference patient of the multitude of reference patients:
o inputting the multi-parametric MRI image set (IS) of the reference patient into the first segmentation unit (SU1),
o receiving one or more first segmented images (SI1) from the first segmentation unit (SU1) in which the prostate gland, if present, is segmented,
o computing a first segmentation loss (L1), the first segmentation loss (L1) quantifying deviations between the one or more first segmented images (SI1) and the one or more target images (SI) of the segmented prostate gland,
o inputting the one or more first segmented images (SI1) and the multi-parametric MRI image set (IS) of the reference patient into the second segmentation unit (SU2),
o receiving one or more second segmented images (SI2) from the second segmentation unit (SU2) in which the prostatic and extra-prostatic cancer lesions, if present, are segmented,
o computing a second segmentation loss (L2), the second segmentation loss (L2) quantifying deviations between the one or more second segmented images (SI2) and the one or more target images (SI) of the segmented prostatic and extra-prostatic cancer lesions,
o inputting the one or more second segmented images (SI2) into the classification unit (CU),
o receiving the predicted prostate cancer local stage (PCSp) from the classification unit,
o computing a classification loss (L3), the classification loss (L3) quantifying deviations between the prostate cancer local stage (PCS) and the predicted prostate cancer local stage (PCSp),
o modifying model parameters to reduce the first segmentation loss (L1), the second segmentation loss (L2), and the classification loss (L3),
storing and/or outputting the model parameters and/or the trained machine learning model (MLM') and/or transmitting the model parameters and/or the trained machine learning model (MLM') to a remote computer system.

9. A computer system comprising: a processing unit; and a memory storing software instructions configured to perform, when executed by the processing unit, an operation, the operation comprising:
receiving patient data, the patient data comprising a multi-parametric MRI image set (IS*) of an examination region comprising a prostate region of a male human patient,
inputting the patient data into a trained machine learning model (MLM'), wherein the trained machine learning model (MLM') comprises a first segmentation unit (SU1), a second segmentation unit (SU2), and a classification unit (CU),
o wherein the first segmentation unit (SU1) is configured to receive the multi-parametric MRI image set (IS*) of the examination region comprising the prostate region of the male human patient and to generate one or more first segmented images (SI1*) based on the one or more received images and model parameters,
o wherein the second segmentation unit (SU2) is configured to receive the one or more first segmented images (SI1*) and the multi-parametric MRI image set (IS*) of the examination region and to generate one or more second segmented images (SI2*) based on the one or more first segmented images (SI1*), the multi-parametric MRI image set (IS*) and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images (SI2*),
o wherein the classification unit (CU) is configured to assign the one or more second segmented images (SI2*) to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model (MLM') a predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*),
outputting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*), and/or storing the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) on a data storage, and/or transmitting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) to a remote computer system.

10. A non-transitory computer readable medium having stored thereon software instructions that, when executed by a processing unit of a computer system, cause the computer system to execute the following steps:
receiving patient data, the patient data comprising a multi-parametric MRI image set (IS*) of an examination region comprising a prostate region of a male human patient,
inputting the patient data into a trained machine learning model (MLM'), wherein the trained machine learning model (MLM') comprises a first segmentation unit (SU1), a second segmentation unit (SU2), and a classification unit (CU),
o wherein the first segmentation unit (SU1) is configured to receive the multi-parametric MRI image set (IS*) of the examination region comprising the prostate region of the male human patient and to generate one or more first segmented images (SI1*) based on the one or more received images and model parameters,
o wherein the second segmentation unit (SU2) is configured to receive the one or more first segmented images (SI1*) and the multi-parametric MRI image set (IS*) of the examination region and to generate one or more second segmented images (SI2*) based on the one or more first segmented images (SI1*), the multi-parametric MRI image set (IS*) and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images (SI2*),
o wherein the classification unit (CU) is configured to assign the one or more second segmented images (SI2*) to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model (MLM') a predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*),
outputting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*), and/or storing the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) on a data storage, and/or transmitting the predicted prostate cancer local stage (PCSp*) and optionally the one or more first and/or second segmented images (SI1*, SI2*) to a remote computer system.

Description:
Prostate Cancer Local Staging

FIELD

Systems, methods, and computer programs disclosed herein relate to prostate cancer local staging based on multi-parametric magnetic resonance imaging images using a trained machine learning model.

BACKGROUND

Prostate cancer staging is the process by which physicians categorize the risk of cancer having spread beyond the prostate. The presence of locally advanced prostate cancer, in the form of extracapsular extension, seminal vesicle invasion, or regional lymph node metastasis, can affect the choice of treatment. Accurate local staging of prostate cancer is critical for cancer treatment and patient management decisions.

Once patients are placed in prognostic categories, this information can contribute to the selection of an optimal approach to treatment. Prostate cancer stage can be assessed by imaging techniques or pathological specimens. Clinical staging usually occurs before the first treatment, with tumor presence determined through imaging and needle biopsy, while pathological grading is done after a biopsy is performed or after the prostate is removed, by examining the cell types within the sample.

Multi-parametric magnetic resonance imaging (mpMRI) of the prostate is a novel, promising tool for the diagnosis of prostate cancer (A. Stabile et al.: Multiparametric MRI for prostate cancer diagnosis: current status and future directions, Nat Rev Urol 2020, 17: 41-61). mpMRI is recommended for local staging of prostate cancer (I. Caglic et al.: Multiparametric MRI - local staging of prostate cancer and beyond, Radiol Oncol. 2019, 53(2): 159-170).

Machine learning methods are used to detect and classify prostate cancer based on mpMRI images (EP3367331A1).

O. J. Pellicer-Valero et al. disclose a fully automatic system based on Deep Learning that performs localization, segmentation and Gleason grade group (GGG) estimation of prostate cancer lesions from prostate mpMRIs (O. J. Pellicer-Valero et al.: Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images, Sci Rep 12, 2975 (2022)).

C. de Vente et al. disclose a neural network that simultaneously detects and grades prostate cancer tissue in an end-to-end fashion (C. de Vente et al.: Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI, IEEE Trans Biomed Eng. 2021 Feb;68(2):374-383).

M. J. Belue and B. Turkbey provide an overview of artificial intelligence (AI) models for prostate segmentation, anatomically segmenting cancer suspicious foci, detecting and differentiating clinically insignificant cancers from clinically significant cancers on a voxel level, and classifying entire lesions into Prostate Imaging Reporting and Data System categories/Gleason scores (M. J. Belue and B. Turkbey: Tasks for artificial intelligence in prostate MRI, Eur Radiol Exp 6, 33 (2022)).

S. L. Moroianu et al. disclose a method for computational detection of extraprostatic extension on multiparametric MRI using deep learning (S. L. Moroianu et al.: Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning, Cancers (Basel), 2022 Jun 7;14(12):2821).

Based on the described state of the art, the technical task was to support physicians in the local staging of prostate cancer and to burden the patient as little as possible with surgical interventions, without increasing the risk of misdiagnosis.

SUMMARY

This task is solved by the subject matter of the independent patent claims. Preferred embodiments can be found in the dependent patent claims, the present description, and the drawings.

Therefore, in a first aspect, the present disclosure provides a method for training a machine learning model. The training method comprises:
receiving and/or providing a machine learning model, wherein the machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving and/or providing training data, the training data comprising, for each reference patient of a multitude of reference patients, input data and target data, the input data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the reference patient, and the target data comprising one or more target images in which, if present, prostate gland, prostatic cancer lesions and extra-prostatic cancer lesions are segmented, and a prostate cancer local stage for the reference patient,
training the machine learning model, wherein the training comprises for each reference patient of the multitude of reference patients:
o inputting the multi-parametric MRI image set of the reference patient into the first segmentation unit,
o receiving one or more first segmented images from the first segmentation unit in which the prostate gland, if present, is segmented,
o computing a first segmentation loss, the first segmentation loss quantifying deviations between the one or more first segmented images and the one or more target images of the segmented prostate gland,
o inputting the one or more first segmented images and the multi-parametric MRI image set of the reference patient into the second segmentation unit,
o receiving one or more second segmented images from the second segmentation unit in which the prostatic and extra-prostatic cancer lesions, if present, are segmented,
o computing a second segmentation loss, the second segmentation loss quantifying deviations between the one or more second segmented images and the one or more target images of the segmented prostatic and extra-prostatic cancer lesions,
o inputting the one or more second segmented images into the classification unit,
o receiving the predicted prostate cancer local stage from the classification unit,
o computing the classification loss, the classification loss quantifying deviations between the prostate cancer local stage and the predicted prostate cancer local stage,
o modifying model parameters to reduce the first segmentation loss, the second segmentation loss, and the classification loss,
storing and/or outputting the model parameters and/or the trained machine learning model and/or transmitting the model parameters and/or the trained machine learning model to a remote computer system and/or using the trained machine learning model for predicting a prostate cancer local stage for a new patient.

In a second aspect, the present disclosure provides a computer-implemented method for predicting a prostate cancer local stage for a new patient using the trained machine learning model. The prediction method comprises:
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the new patient,
inputting the patient data into the trained machine learning model, wherein the trained machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the one or more first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more first and second segmented images for the new patient,
outputting the predicted prostate cancer local stage and optionally the one or more first and second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and second segmented images to a remote computer system.

In another aspect, the present disclosure provides a computer system comprising: a processing unit; and a memory storing software instructions configured to perform, when executed by the processing unit, an operation, the operation comprising:
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the new patient,
inputting the patient data into the trained machine learning model, wherein the trained machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the one or more first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more first and second segmented images for the new patient,
outputting the predicted prostate cancer local stage and optionally the one or more first and second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and second segmented images to a remote computer system.

In another aspect, the present disclosure provides a non-transitory computer readable medium having stored thereon software instructions that, when executed by a processing unit of a computer system, cause the computer system to execute the following steps:
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the new patient,
inputting the patient data into the trained machine learning model, wherein the trained machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the one or more first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more first and second segmented images for the new patient,
outputting the predicted prostate cancer local stage and optionally the one or more first and second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and second segmented images to a remote computer system.

DETAILED DESCRIPTION

The invention will be more particularly elucidated below without distinguishing between the aspects of the disclosure (training method, prediction method, computer system, computer-readable storage medium). On the contrary, the following elucidations are intended to apply analogously to all the aspects of the disclosure, irrespective of in which context (training method, prediction method, computer system, computer-readable storage medium) they occur.

If steps are stated in an order in the present description or in the claims, this does not necessarily mean that the disclosure is restricted to the stated order. On the contrary, it is conceivable that the steps can also be executed in a different order or in parallel to one another, unless one step builds upon another step, which strictly requires that the dependent step be executed subsequently (this, however, is clear in the individual case). The stated orders are thus preferred embodiments of the invention.

As used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more” and “at least one.” As used in the specification and the claims, the singular form of “a”, “an”, and “the” include plural referents, unless the context clearly dictates otherwise. Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has”, “have”, “having”, or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based at least partially on” unless explicitly stated otherwise.

Some implementations of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all implementations of the disclosure are shown. Indeed, various implementations of the disclosure may be embodied in many different forms and should not be construed as limited to the implementations set forth herein; rather, these example implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

The present disclosure provides means for predicting a prostate cancer local stage for a patient. The prediction is made using a trained machine learning model.

Such a “machine learning model”, as used herein, may be understood as a computer implemented data processing architecture. The machine learning model can receive input data and provide output data based on that input data and on parameters of the machine learning model. The machine learning model can learn a relation between input data and output data through training. In training, parameters of the machine learning model may be adjusted in order to provide a desired output for a given input.

The process of training a machine learning model involves providing a machine learning algorithm (that is the learning algorithm) with training data to learn from. The term “trained machine learning model” refers to the model artifact that is created by the training process. The training data must contain the correct answer, which is referred to as the target. The learning algorithm finds patterns in the training data that map input data to the target, and it outputs a trained machine learning model that captures these patterns. In the training process, training data are inputted into the machine learning model and the machine learning model generates an output. The output is compared with the (known) target. Parameters of the machine learning model are modified in order to reduce the deviations between the output and the (known) target to a (defined) minimum.

In general, a loss function can be used for training, where the loss function can quantify the deviations between the output and the target. The loss function may be chosen in such a way that it rewards a wanted relation between output and target and/or penalizes an unwanted relation between an output and a target. Such a relation can be, e.g., a similarity, or a dissimilarity, or another relation.

If, for example, the output and the target are numbers, the loss function could be the difference between these numbers. In this case, a high absolute value of the loss function can mean that a parameter of the model needs to undergo a strong change.

In the case of vector-valued outputs, for example, difference metrics between vectors such as the root mean square error, a cosine distance, a norm of the difference vector such as a Euclidean distance, a Chebyshev distance, an Lp-norm of a difference vector, a weighted norm or any other type of difference metric of two vectors can be chosen. These two vectors may for example be the desired output (target) and the actual output.

In the case of higher-dimensional outputs, such as two-dimensional, three-dimensional or higher-dimensional outputs, for example an element-wise difference metric can be used. Alternatively or additionally, the output data may be transformed, for example to a one-dimensional vector, before computing a loss.

The modification of model parameters and the reduction of the loss can be done in an optimization procedure, for example in a gradient descent procedure.
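
For illustration only, the following is a minimal sketch in Python of such a gradient descent procedure (PyTorch is assumed here; the disclosure does not prescribe a framework, and the linear stand-in model and all data are hypothetical):

import torch

# Hypothetical example: fit model parameters so that the output approaches the target.
model = torch.nn.Linear(10, 1)            # stand-in for any parametric model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()              # element-wise difference metric

inputs = torch.randn(32, 10)              # stand-in input data
targets = torch.randn(32, 1)              # known targets

for step in range(100):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)      # quantify deviation between output and target
    loss.backward()                       # gradients of the loss w.r.t. model parameters
    optimizer.step()                      # modify parameters to reduce the loss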

The machine learning model of the present disclosure comprises at least one segmentation unit. The at least one segmentation unit is configured and trained to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more segmented images based on the one or more received images and model parameters. In the one or more segmented images prostatic and extra-prostatic cancer lesions, if present, are segmented.

The term "segmentation" refers to the process of dividing an image into several segments, also known as image segments, image regions or image objects. Segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. From a segmented image, the localized objects can be separated from the background, visually highlighted (e.g.: colored), measured, counted, or otherwise quantified.

Segmentation involves assigning a label to each pixel/voxel of an image such that pixels/voxels with the same label have certain features in common.

For example, in a segmented image, where the prostate gland is segmented, all pixels/voxels representing the prostate gland have the same label. In a segmented image, where prostatic cancer lesions are segmented all pixels/voxels representing prostatic cancer lesions have the same label. In a segmented image, where extra-prostatic cancer lesions are segmented all pixels/voxels representing extra-prostatic cancer lesions have the same label.

For example, in the present case, a 4-class segmentation mask may be generated in which different labels are assigned to the pixels/voxels representing the prostate gland, prostatic cancer lesion(s), extra-prostatic cancer lesion(s), and any other tissue (background) (e.g., class 0 = background, class 1 = prostate gland, class 2 = prostatic cancer lesion, class 3 = extra-prostatic cancer lesion).

For example, in the present case, a 3-class segmentation mask may be generated in which different labels are assigned to the pixels/voxels representing prostatic cancer lesion(s), extra-prostatic cancer lesion(s), and any other tissue (background) (e.g., class 0 = background, class 1 = prostatic cancer lesion, class 2 = extra-prostatic cancer lesion).
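
For illustration, such masks can be sketched in Python as follows (NumPy assumed; the mask geometry is entirely hypothetical):

import numpy as np

# Minimal sketch: a 4-class segmentation mask for one 2D slice.
# class 0 = background, 1 = prostate gland, 2 = prostatic lesion, 3 = extra-prostatic lesion
mask = np.zeros((200, 200), dtype=np.uint8)
mask[60:140, 60:140] = 1                  # pixels representing the prostate gland
mask[90:110, 90:110] = 2                  # a prostatic cancer lesion inside the gland
mask[55:60, 95:105] = 3                   # an extra-prostatic lesion outside the gland

# A 3-class mask can be derived by merging the gland label into the background:
mask3 = mask.copy()
mask3[mask3 == 1] = 0                     # gland becomes background
mask3[mask3 == 2] = 1                     # prostatic lesion
mask3[mask3 == 3] = 2                     # extra-prostatic lesion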

Segmentation can be done in stages. Segmentation can be done, e.g., in two steps: in a first step, the prostate gland can be segmented; in a second step, prostatic and extra-prostatic cancer lesions can be segmented. Segmentation can be done, e.g., in three steps: in a first step, the prostate gland can be segmented; in a second step, prostatic cancer lesions can be segmented; in a third step, extra-prostatic cancer lesions can be segmented. However, the segmentation can also be done in one step.

In a preferred embodiment, the machine learning model comprises two segmentation units, a first segmentation unit and a second segmentation unit.

The first segmentation unit is configured and trained to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more segmented images, based on the one or more received images, in which the prostate gland of the male human, if present, is segmented.

The one or more segmented images generated by the first segmentation unit can be a mask and/or a mask can be created therefrom. A "mask" is an image in which some pixel/voxel intensity values are zero and the others are non-zero. A prostate gland mask may be an image in which the intensity values of pixels/voxels representing extra-prostatic tissue are set to zero. An extra-prostatic region mask may be an image in which the intensity values of pixels/voxels representing the prostate gland are set to zero. Such masks can be used for masking other images: wherever the pixel/voxel intensity value is zero in the mask, the pixel intensity of the resulting masked image will be set to zero. The prostate gland mask and/or the extra-prostatic region mask can be used to mask the multi-parametric MRI image set. They can be used to create an image set that is reduced to the prostate gland and/or an image set that is reduced to extra-prostatic regions. In this way, in a further step in which lesions are identified and segmented, it can be determined whether the lesions are located inside or outside the prostate.
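
The masking operation described above can be sketched as follows (Python/NumPy assumed; the arrays are hypothetical stand-ins for one image channel and a binary gland mask):

import numpy as np

# Minimal sketch: masking an image with a binary prostate-gland mask.
image = np.random.rand(200, 200)              # one channel of the mpMRI image set
gland_mask = np.zeros((200, 200))
gland_mask[60:140, 60:140] = 1.0              # non-zero inside the prostate gland

gland_only = image * gland_mask               # image reduced to the prostate gland
extra_prostatic = image * (1.0 - gland_mask)  # image reduced to extra-prostatic regions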

The second segmentation unit can be configured and trained to receive the multi-parametric MRI image set of the male human and the one or more segmented images in which the prostate gland of the male human is segmented (and/or one or more masks created therefrom), and to generate one or more segmented images in which prostatic and extra-prostatic cancer lesions are segmented.

The one or more segmented images generated by the second segmentation unit can then be used for classification (prediction of the prostate cancer local stage).

For this purpose, the machine learning model of the present disclosure comprises a classification unit. The classification unit is configured to assign the one or more segmented images to one of at least two classes, each class corresponding to a prostate cancer local stage.

The machine learning model of the present disclosure is trained on training data.

The training data comprise, for each reference patient of a multitude of reference patients, input data and target data. The term “multitude of reference patients” means at least 10, preferably at least 100 patients.

The term “reference” is used in this disclosure to distinguish the data used to train and/or validate the machine learning model from the data used to make predictions using the trained machine learning model. Thus, the data used to train and/or validate the machine learning model represent “reference patients” whereas data used for prediction purposes represent a new patient (i.e., a patient which is not a reference patient). Unless explicitly stated otherwise, the term “patient” includes both reference patients and new patients. However, the term “reference” is not to be understood in any other restrictive sense; this distinction serves only to prevent a clarity objection in patent grant proceedings.

Reference patients usually include patients with prostate cancer, patients with suspected prostate cancer, patients who have had their prostates removed due to prostate cancer, and/or healthy patients. Reference patients usually include those with various stages of prostate cancer. Usually, reference patients cover all stages that the machine learning model is trained to predict.

The input data comprise a multi-parametric MRI image set of an examination region comprising the prostate of the patient.

The multi-parametric MRI image set may include different types of MRI images and/or maps.

Each image and/or map is a representation of an examination region of a patient. The examination region usually comprises the prostate of the patient, or, in case of patients who have had the prostate removed, the region where the prostate was located prior to prostatectomy, and surrounding tissue, hereinafter collectively referred to as the prostate region.

The multi-parametric MRI (mpMRI) image set refers to a plurality of MRI images/maps acquired using various different MRI acquisition/image generation techniques. The different image channels acquired using different imaging techniques provide different information at locations in the prostate region of the patient. Corresponding pixel/voxel locations in the different image channels refer to the same location in the prostate region, and each pixel/voxel location has a vector of image values including a respective image value for each image channel.

In an advantageous embodiment, the mpMRI image set of the prostate region of a patient is a set of 2D, 3D or 4D (e.g., 3D plus time) mpMRI images and/or maps.

Any or all of the images and/or maps in the mpMRI image set of the patient's prostate region can be received directly from an MRI scanner used to acquire mpMRI images and/or maps. Alternatively, any or all of the images and/or maps in the mpMRI image set of the patient's prostate region can be provided by loading/retrieving previously acquired images and/or maps from a storage or memory of a computer system or receiving previously acquired images and/or maps via an electronic transmission from a remote computer system.

In an advantageous embodiment, the mpMRI image set includes a T2-weighted pulse sequence (T2-weighted MRI image) that provides an overview of the prostate region.

The mpMRI image set can also include functional imaging, such as one or more diffusion weighted imaging (DWI) images depicting water molecule diffusion variations due to the microscopic tissue structures. DWI generates a set of images using different gradients (or b-values), which result in different reconstructed signal intensities.

The mpMRI image set can also include an apparent diffusion coefficient (ADC) map which can be generated from the DWI image set. The ADC map can be derived using the signal intensity changes of at least two b-values and provides a quantitative map demonstrating the degree of water molecule diffusion.
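
For illustration, a common two-point ADC estimate is ADC = ln(S1/S2) / (b2 - b1) for signals S1, S2 acquired at b-values b1 < b2. A minimal sketch in Python (NumPy assumed; the disclosure does not prescribe a particular derivation, and the data below are synthetic):

import numpy as np

# Minimal sketch: two-point ADC estimate from two DWI images.
b1, b2 = 0.0, 800.0                       # b-values in s/mm^2
s1 = np.random.rand(200, 200) + 1.0       # signal at b1 (kept positive)
s2 = s1 * np.exp(-b2 * 0.0015)            # signal at b2 for a uniform ADC of 0.0015 mm^2/s

adc = np.log(s1 / s2) / (b2 - b1)         # quantitative ADC map in mm^2/s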

Additionally, dynamic contrast enhanced (DCE) MRI can be included in the overall acquisition of the mpMRI image set. In DCE MRI, a series of temporally fast T1 -weighted MRI images are acquired during rapid intravenous injection of a gadolinium -based contrast agent.

Prostate cancer tissues often induce some level of angiogenesis, which is followed by an increased vascular permeability as compared to normal prostatic tissue. A Ktrans map can be generated from the DCE MRI image set and included in the mpMRI image set. Ktrans is a measure that provides an indicator of tissue permeability.

mpMRI image sets can be obtained using a 1.5T or preferably a 3T scanner, with or without an endorectal coil. Examples of MR imaging protocols are given, e.g., in I. Caglic et al.: Multiparametric MRI - local staging of prostate cancer and beyond, Radiol Oncol. 2019, 53(2): 159-170.

Before the images and/or maps of an mpMRI image set are inputted into the machine learning model, they can be pre-processed.

Pre-processing usually comprises motion compensation and/or region-of-interest extraction. Prior to inputting the mpMRI image set into the machine learning model, motion compensation can be performed on the mpMRI image set to compensate for any motion (e.g., patient movement) between the various MRI acquisitions (e.g., T2-weighted, DWI, DCE). In an advantageous implementation, 3D elastic registration is used in a cascade fashion to perform the motion compensation. In order to increase robustness, a pairwise registration can be performed between the T2-weighted MRI image and a corresponding low b-value image in the DWI image set, resulting in a computed deformation field. The computed deformation field can then be applied to compensate motion in the ADC parameter map. The computed deformation field can also be applied to compensate motion in other images of the DWI image set, such as a high b-value DWI image. Similarly, a pairwise registration can be performed between the T2-weighted MR image and a late contrast-enhanced image representative of the DCE MRI image set, and the resulting computed deformation field can be applied to perform motion compensation in the Ktrans map.

In case the MRI scans used to acquire the mpMRI image set cover an area larger than the prostate region, a region-of-interest (ROI) surrounding the prostate can be extracted from the mpMRI image set. A predetermined-size ROI mask can be applied to each slice of the images in the mpMRI image set to ensure that only the prostate and surrounding area in each image is considered for the prostate cancer stage prediction. For example, an 80 mm x 80 mm ROI mask can be applied to each slice.

After the motion compensation and/or ROI extraction, the images in the mpMRI image set may be reformatted into a T2-weighted image grid with a predetermined size that corresponds to the size of the input channels of the machine learning model. For example, each image and/or map can be reformatted into a T2-weighted image grid with a size of 100 mm x 100 mm x 60 mm, which corresponds to roughly 200 x 200 pixels in each 2D slice.
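
The region-of-interest extraction step can be sketched as follows (Python/NumPy assumed; the function name extract_roi and all sizes are hypothetical illustrations of a fixed-size crop, not the disclosed implementation):

import numpy as np

# Minimal sketch: crop a fixed-size box around the prostate from a (z, y, x) volume.
def extract_roi(volume: np.ndarray, center: tuple, size: tuple) -> np.ndarray:
    """Crop `volume` to a box of `size` voxels centered on `center` (low edge clamped)."""
    slices = tuple(
        slice(max(c - s // 2, 0), c + s - s // 2) for c, s in zip(center, size)
    )
    return volume[slices]

volume = np.random.rand(60, 400, 400)     # stand-in mpMRI channel (z, y, x)
roi = extract_roi(volume, center=(30, 200, 200), size=(60, 200, 200))
print(roi.shape)                          # (60, 200, 200): ~200 x 200 pixels per slice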

Besides the mpMRI image set of a patient, the input data can comprise further patient data such as the patient's age, body size, body weight, body mass index, prostate-specific antigen (PSA) level, biopsy Gleason score, resting heart rate, heart rate variability, body temperature, lifestyle information (such as consumption of alcohol, smoking, exercise, and/or the patient's diet), information about a previous prostatectomy, medical intervention parameters (such as regular medication, occasional medication, or other previous or current medical interventions), and/or other information about the patient's previous and current treatments and reported health conditions, and/or combinations thereof.

The training data further comprise, for each reference patient of the plurality of reference patients, one or more MRI images in which, if present, prostate gland, prostatic and extra-prostatic cancer lesions are segmented. The segmentation can be done manually by a radiologist, for example. The one or more (manually) segmented images serve as target data for training the at least one segmentation unit.

The training data further comprise, for each reference patient of the plurality of reference patients, a prostate cancer local stage as target data for training the classification unit.

There are different systems for staging (prostate) cancer. The machine learning model can in principle be trained to predict the staging according to each system. The machine learning model can also be trained to predict staging according to different (more than one) systems.

The most widely used system for staging of prostate cancer is the tumor, nodes, and metastases (TNM) staging system developed by the American Joint Committee on Cancer (AJCC). In a preferred embodiment, the machine learning model is trained to make a prediction in accordance with the TNM staging system. Accordingly, the training data contains a TNM stage for each reference patient.

It is also possible to have only two stages, for example, one stage indicating that no tumor is present or that an existing tumor has not spread beyond the prostate, and a second stage indicating that the tumor has spread beyond the prostate.

The at least one segmentation unit and the classification unit can be trained jointly or separately from each other. Preferably, the at least one segmentation unit and the classification unit are trained separately from each other.

If the at least one segmentation unit comprises two or more segmentation units, each segmentation unit can be trained separately, or two or more segmentation units can be trained jointly. Preferably, each segmentation unit is trained separately.

During training of a segmentation unit, one or more input images to be segmented are inputted into the segmentation unit and the segmentation unit generates one or more segmented images based on the input image(s) and model parameters. The one or more segmented images are compared with the respective target image(s), and deviations between the segmented image(s) and the target image(s) can be quantified using a loss function. Model parameters can then be modified to reduce the segmentation loss.

During training of the classification unit, one or more segmented images are inputted into the classification unit and the classification unit assigns the one or more segmented images to one of at least two classes based on the inputted image(s) and model parameters, each class corresponding to a prostate cancer local stage. The class to which the one or more segmented images is/are assigned is compared with the target prostate cancer local stage, and deviations between the class and the target prostate cancer local stage can be quantified using a loss function. Model parameters can then be modified to reduce the classification loss.
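
For illustration only, the following is a minimal sketch in Python of one such training step for a stand-in segmentation unit and classification unit, trained separately as described above (PyTorch is assumed; the modules, shapes, and data are hypothetical and not the disclosed architecture):

import torch

# Stand-in units: a single conv layer and a flatten+linear classifier (hypothetical).
seg_unit = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)
cls_unit = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 200 * 200, 4))
seg_opt = torch.optim.Adam(seg_unit.parameters(), lr=1e-4)
cls_opt = torch.optim.Adam(cls_unit.parameters(), lr=1e-4)

images = torch.randn(1, 3, 200, 200)               # mpMRI channels of one reference patient
target_mask = torch.randint(0, 2, (1, 200, 200))   # target segmentation
target_stage = torch.tensor([2])                   # target local stage (class index)

# Segmentation step: compare predicted and target segmentation, reduce the loss.
seg_opt.zero_grad()
logits = seg_unit(images)
seg_loss = torch.nn.functional.cross_entropy(logits, target_mask)
seg_loss.backward()
seg_opt.step()

# Classification step: assign the segmented output to a stage, reduce the loss.
cls_opt.zero_grad()
stage_logits = cls_unit(logits.detach())           # detach: units trained separately
cls_loss = torch.nn.functional.cross_entropy(stage_logits, target_stage)
cls_loss.backward()
cls_opt.step()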

The training procedure is schematically depicted in Fig. 1 for the example of a machine learning model comprising two segmentation units, SU1 and SU2, and a classification unit CU.

The machine learning model MLM is trained with training data. The training data comprise data from a multitude of reference patients. In Fig. 1, only one set of training data TD of one reference patient is shown, the training data TD comprising an mpMRI image set IS of the prostate region of the reference patient, one or more segmented images SI in which the prostate gland as well as prostatic and extra-prostatic lesions, if present, are segmented, and a prostate cancer stage PCS.

In the example depicted in Fig. 1, the mpMRI image set IS is inputted into a pre-processing unit PPU. The pre-processing unit PPU is configured to perform motion compensation and/or region-of-interest extraction.

The preprocessed images are then inputted into the first segmentation unit SU1. The first segmentation unit SU1 is configured to generate one or more first segmented images SI1 in which the prostate gland, if present, is segmented. Deviations between the one or more first segmented images SI1 and the respective one or more segmented images SI of the training data (target data) are quantified by means of a first loss function L1. The calculated loss can be used to modify model parameters of the first segmentation unit SU1 to reduce the deviations, e.g., to a pre-defined minimum. The Dice loss function can be used as a loss function (see, e.g., S. Jadon: A survey of loss functions for semantic segmentation, arXiv:2006.14822v4 [eess.IV], 3 Sep 2020).
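
For illustration, a soft Dice loss can be sketched as follows (one common formulation among the variants surveyed by S. Jadon; Python/PyTorch assumed, data hypothetical):

import torch

# Minimal sketch: soft Dice loss for binary segmentation.
def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred: probabilities in [0, 1]; target: binary mask of the same shape."""
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

pred = torch.sigmoid(torch.randn(1, 200, 200))       # predicted gland probabilities
target = torch.randint(0, 2, (1, 200, 200)).float()  # target gland mask
print(dice_loss(pred, target))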

The one or more first segmented images SI1 together with the preprocessed images are then inputted into the second segmentation unit SU2. The second segmentation unit SU2 is configured to generate one or more second segmented images SI2 in which prostatic and extra-prostatic lesions, if present, are segmented. Deviations between the one or more second segmented images SI2 and the respective one or more segmented images SI of the training data (target data) are quantified by means of a second loss function L2. The calculated loss can be used to modify model parameters of the second segmentation unit SU2 to reduce the deviations, e.g., to a pre-defined minimum. The Dice loss function can be used as a loss function (see, e.g., S. Jadon: A survey of loss functions for semantic segmentation, arXiv:2006.14822v4 [eess.IV], 3 Sep 2020).

The one or more second segmented images SI2 are then inputted into the classification unit CU. The classification unit CU is configured to assign the one or more second segmented images SI2 to one of at least two classes, each class corresponding to a prostate cancer local stage. Deviations between the class the one or more second segmented images SI2 are assigned to (the predicted prostate cancer local stage PCSp) and the prostate cancer local stage PCS of the training data (target data) are quantified by means of a third loss function L3. The calculated loss can be used to modify model parameters of the classification unit CU to reduce the deviations, e.g., to a pre-defined minimum. Examples of loss functions include L1 loss, L2 loss, the structural similarity index measure (SSIM), or combinations of the above, to name a few. More details about loss functions may be found in the scientific literature (see, e.g., K. Janocha et al.: On Loss Functions for Deep Neural Networks in Classification, 2017, arXiv:1702.05659v1 [cs.LG]).
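
For illustration, a cross-entropy classification loss, one common choice among those discussed by K. Janocha et al., can be sketched as follows (Python/PyTorch assumed; the four stages and all values are hypothetical):

import torch

# Minimal sketch: quantify the deviation between predicted stage scores and the target stage.
stage_logits = torch.tensor([[0.2, 1.5, -0.3, 0.1]])   # scores for 4 hypothetical stages
target_stage = torch.tensor([1])                        # target class index (PCS)
loss = torch.nn.functional.cross_entropy(stage_logits, target_stage)
print(loss)                                             # classification loss L3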

Fig. 2 shows schematically, by way of example, the process of predicting a prostate cancer stage for a new patient using the trained machine learning model MLM'. Patient data comprising an mpMRI image set IS* of the new patient are received. The mpMRI image set IS* is inputted into the pre-processing unit PPU. The pre-processing unit PPU is configured to perform motion compensation and/or region-of-interest extraction. The pre-processed images are then inputted into the first segmentation unit SU1, which generates one or more first segmented images SI1* in which the prostate gland, if present, is segmented. The one or more first segmented images SI1* together with the pre-processed images are then inputted into the second segmentation unit SU2, which generates one or more second segmented images SI2* in which prostatic and extra-prostatic lesions, if present, are segmented. The one or more second segmented images SI2* are then inputted into the classification unit CU. The classification unit CU outputs a predicted prostate cancer local stage PCSp* for the new patient. The predicted prostate cancer local stage PCSp* for the new patient as well as the one or more first and/or second segmented images SI1* and/or SI2* can be displayed on a display, printed on a printer, stored on a data memory and/or transmitted to a remote computer system. The predicted prostate cancer local stage PCSp* as well as the one or more first and/or second segmented images SI1* and/or SI2* can be used by a radiologist to determine a therapy for the patient.

The machine learning model can be or comprise one or more artificial neural networks. An artificial neural network is a biologically inspired computational model. Such an artificial neural network usually comprises at least three layers of processing elements: a first layer with input neurons, an nth layer with at least one output neuron, and n-2 inner layers, where n is a natural number greater than 2. In such a network, the input neurons serve to receive the input data. If the input data constitutes or comprises an image, there is usually one input neuron for each pixel/voxel of the input image; there can be additional input neurons for additional input data such as data about the object represented by the input image, the type of image, the way the image was acquired, additional patient data and/or the like. The output neurons serve to output one or more values, e.g., a predicted prostate cancer local stage, a segmented image, and/or others.

The processing elements of the layers are interconnected in a predetermined pattern with predetermined connection weights therebetween. Each network node usually represents a calculation of the weighted sum of inputs from prior nodes and a non-linear output function. The combined calculation of the network nodes relates the inputs to the outputs.
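
Purely as an illustration, the computation of a single such node can be written out in a few lines of Python; the sigmoid non-linearity and the example numbers are arbitrary choices:

import math

def node_output(inputs, weights, bias):
    # Weighted sum of inputs from prior nodes, plus a bias term.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Non-linear output function (here: sigmoid).
    return 1.0 / (1.0 + math.exp(-weighted_sum))

print(node_output([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))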

When trained, the connection weights between the processing elements contain information regarding the relationship between the input data and the output data.

The network weights can be initialized with small random values or with the weights of a prior partially trained network. The training data inputs are applied to the network and the output values are calculated for each training sample. The network output values can be compared to the target output values. A backpropagation algorithm can be applied to correct the weight values in directions that reduce the error between calculated outputs and targets. The process is iterated until no further reduction in error can be made or until a predefined prediction accuracy has been reached.
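
This training procedure can be sketched as follows, assuming PyTorch; the model architecture, optimizer, learning rate and data are placeholders chosen for illustration only:

import torch
import torch.nn as nn

# Placeholder network; PyTorch initializes weights with small random values.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 16)            # training data inputs
targets = torch.randint(0, 2, (32,))    # target output values

best = float("inf")
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # compare outputs to targets
    loss.backward()                         # backpropagation of the error
    optimizer.step()                        # correct the weight values
    if loss.item() >= best:                 # stop when error no longer drops
        break
    best = loss.item()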

A cross-validation method can be employed to split the data into training and validation data sets. The training data set is used in the error backpropagation adjustment of the network weights. The validation data set is used to verify that the trained network generalizes to make good predictions. The best network weight set can be taken as the one that presumably best predicts the outputs of the test data set. Similarly, the number of hidden nodes can be optimized by varying it and determining which network performs best on the data sets.
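
A minimal sketch of such a data split, assuming scikit-learn; the feature matrix, fold count and random seed are illustrative:

import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(100, 16)  # placeholder feature matrix (100 samples)
for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                random_state=0).split(X):
    # Per fold: one training set and one validation set.
    X_train, X_val = X[train_idx], X[val_idx]
    # ... fit the network on X_train, check generalization on X_val ...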

In a preferred embodiment, the one or more artificial neural networks is/are or comprise(s) one or more convolutional neural networks (CNN). A CNN is a class of artificial neural networks, most commonly applied to analyzing visual imagery. A CNN comprises an input layer with input neurons, an output layer with at least one output neuron, as well as multiple hidden layers between the input layer and the output layer.

The hidden layers of a CNN typically comprise convolutional layers, ReLU (rectified linear unit) layers (i.e., activation functions), pooling layers, fully connected layers and normalization layers. The nodes in the CNN input layer can be organized into a set of "filters" (feature detectors), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the mathematical convolution operation with each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed with two functions to produce a third function. In convolutional network terminology, the first function of the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input of a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
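
A minimal sketch of the convolution operation, assuming PyTorch; the channel counts and image size are illustrative:

import torch
import torch.nn as nn

# Convolution kernel: a learnable multidimensional array of parameters.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

image = torch.rand(1, 3, 64, 64)  # input: (batch, color components, H, W)
feature_map = conv(image)         # output feature map: (1, 8, 64, 64)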

The objective of the convolution operation is to extract features (such as, e.g., edges) from an input image. Conventionally, the first convolutional layer is responsible for capturing low-level features such as edges, color, gradient orientation, etc. With added layers, the architecture adapts to high-level features as well, giving the network a comprehensive understanding of the images in the dataset. Similar to the convolutional layer, the pooling layer is responsible for reducing the spatial size of the feature maps. It is useful for extracting dominant features with some degree of rotational and positional invariance, thus helping to train the model effectively. Adding a fully-connected layer is a way of learning non-linear combinations of the high-level features as represented by the output of the convolutional part.
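
Continuing the sketch above, pooling and a fully-connected layer might look as follows (PyTorch assumed; all dimensions are illustrative):

import torch
import torch.nn as nn

feature_map = torch.rand(1, 8, 64, 64)
# Pooling halves the spatial size of the feature maps.
pooled = nn.MaxPool2d(kernel_size=2)(feature_map)  # (1, 8, 32, 32)
# Fully-connected layer: non-linear combinations of high-level features.
flat = torch.flatten(pooled, start_dim=1)          # (1, 8*32*32)
scores = nn.Linear(8 * 32 * 32, 4)(flat)           # e.g., 4 stage classes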

The machine learning model can be configured as a classification model. It can be configured to receive input data comprising a multi-parametric MRI image set of a patient, and to assign the input data to one of several classes, where each class usually corresponds to a prostate cancer stage.

However, the model can also be configured to generate and output one or more segmented images based on the input data.

For example, in the present case, the segmented image(s) may show the prostate, the area where the prostate was located before prostatectomy, the seminal vesicle, invasions into the seminal vesicle, lymph nodes, lymph node metastases, extracapsular extensions, extra-prostatic extensions, invasions of the bladder neck, the external sphincter, the rectum, the levator muscles and/or the pelvic wall, and/or other segmented objects.

In the segmented image, features (regions) that have led to the predicted prostate cancer stage can be marked, e.g., by a defined color. For example, if extra-prostatic extensions occur in the multi-parametric MRI image set of a patient leading to a TNM-stage T3 classification, these extra-prostatic extensions can be marked in the at least one segmented image with a defined color representing the TNM-stage T3. In this way, the trained machine learning model performs simultaneous identification of prostate cancer signs and metastases and prostate cancer local staging.
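
By way of illustration only, such marking can be implemented as a color overlay; in the following sketch (NumPy assumed) the image slice, the lesion mask and the chosen color are hypothetical:

import numpy as np

slice_gray = np.random.rand(256, 256)           # one image slice in [0, 1]
lesion_mask = np.zeros((256, 256), dtype=bool)  # e.g., extra-prostatic extension
lesion_mask[100:120, 80:140] = True

rgb = np.stack([slice_gray] * 3, axis=-1)       # grayscale -> RGB
rgb[lesion_mask] = [1.0, 0.0, 0.0]              # defined color, e.g., for TNM stage T3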

The one or more segmentation units can be one or more generative neural networks, such as one or more image-to-image convolutional encoder-decoders.

For example, each segmentation unit can be based on a specific kind of convolutional architecture called U-Net (see, e.g., O. Ronneberger et al.: U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer, 2015, https://doi.org/10.1007/978-3-319-24574-4_28). The U-Net architecture consists of two main blocks, an encoding path and a decoding path. The encoding path uses convolutions, activation functions and pooling layers to extract image features, while the decoding path replaces the pooling layers with upsampling layers to project the extracted features back to pixel/voxel space, and finally recovers the image dimension at the end of the architecture. These are used in combination with activation functions and convolutions. Finally, the feature maps from the encoding path can be concatenated to the feature maps in the decoding path in order to preserve fine details from the input data. For example, each segmentation unit can be or comprise a generative adversarial network (GAN), preferably a Pix2Pix GAN or a cycleGAN (see, e.g., M.-Y. Liu et al.: Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications, arXiv:2008.02793; J. Henry et al.: Pix2Pix GAN for Image-to-Image Translation, DOI: 10.13140/RG.2.2.32286.66887; J. Haubold et al.: Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network, Eur Radiol. 2021, 31(8): 6087-6095, doi: 10.1007/s00330-021-07714-2).
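
A strongly reduced U-Net-style sketch, assuming PyTorch, with a single resolution level per path; a practical segmentation unit would use several levels and task-appropriate channel counts:

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, c_in=1, c_out=1):
        super().__init__()
        self.enc = conv_block(c_in, 16)                    # encoding path
        self.pool = nn.MaxPool2d(2)                        # downsampling
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # upsampling
        self.dec = conv_block(32, 16)                      # decoding path
        self.head = nn.Conv2d(16, c_out, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        # Skip connection: concatenate encoder features into the decoder
        # path to preserve fine details from the input data.
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return torch.sigmoid(self.head(d))

mask = TinyUNet()(torch.rand(1, 1, 64, 64))  # output shape: (1, 1, 64, 64)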

The classification unit can be an image classification model. For example, the classification unit can be based on a 3D ResNet18 architecture (see, e.g.: K. Hara et al.: Learning Spatio-temporal Features with 3D Residual Networks for Action Recognition, 2017, arXiv:1708.07632 [cs.CV]).
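
A sketch of such a classification unit, assuming torchvision's r3d_18 video model as the 3D ResNet18 backbone; adapting the stem to single-channel volumetric input and the head to four stage classes are illustrative choices:

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(num_classes=4)  # e.g., four local-stage classes (illustrative)
# Replace the stem convolution to accept one channel (e.g., a mask volume)
# instead of the default three-channel video input.
model.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                          stride=(1, 2, 2), padding=(1, 3, 3), bias=False)

volume = torch.rand(1, 1, 16, 128, 128)  # (batch, channel, depth, H, W)
stage_logits = model(volume)             # shape: (1, 4)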

Once the machine learning model is trained, the trained machine learning model can be stored on a memory or storage of a computer system and used to predict a prostate cancer stage and/or perform simultaneous identification of prostate cancer signs and metastases and prostate cancer staging in newly received/inputted patient data including an mpMRI image set of a new patient. The term "new" means that the corresponding data have not normally been used in the training and/or validation process. However, this is not a requirement; it is possible that a patient's data has been used for training the machine learning model and (the same and/or other/different) data from the same patient is used for a prediction.

The patient data of the new patient are inputted into the trained machine learning model and the trained machine learning model generates and outputs a predicted prostate cancer stage and optionally one or more segmented images.

The predicted prostate cancer stage and/or the one or more segmented images can be outputted by displaying the prostate cancer stage and/or the one or more segmented images on a display device of a computer system and/or by printing the prostate cancer stage and/or the one or more segmented images via a printing device.

It is possible that the predicted prostate cancer stage is compared with one or more reference values. For example, if the predicted prostate cancer stage is below a reference value, this may mean that prostatectomy is recommended. If the predicted prostate cancer stage is greater than or equal to the reference value, it may mean that chemotherapy is recommended, possibly in combination with prostatectomy.
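
Purely illustratively, such a comparison could be coded as follows; the numeric stage encoding and the reference value are hypothetical:

def recommend(predicted_stage: int, reference: int = 3) -> str:
    # Below the reference value: prostatectomy may be recommended.
    if predicted_stage < reference:
        return "prostatectomy recommended"
    # At or above the reference value: chemotherapy may be recommended,
    # possibly in combination with prostatectomy.
    return "chemotherapy recommended, possibly combined with prostatectomy"

print(recommend(2))  # "prostatectomy recommended"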

Fig. 3 shows schematically an embodiment of the method of training a machine learning model of the present disclosure in the form of a flow chart.

The training method (100) comprises:

(110) receiving and/or providing a machine learning model, wherein the machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,

(120) receiving and/or providing training data, the training data comprising, for each reference patient of a multitude of reference patients, input data and target data, the input data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the reference patient, and the target data comprising one or more target images in which, if present, prostate gland, prostatic cancer lesions and extra-prostatic cancer lesions are segmented, and a prostate cancer local stage for the reference patient,

(130) training the machine learning model, wherein the training comprises for each reference patient of the multitude of reference patients:

(131) inputting the multi-parametric MRI image set of the reference patient into the first segmentation unit,

(132) receiving one or more first segmented images from the first segmentation unit in which the prostate gland, if present, is segmented,

(133) computing a first segmentation loss, the first segmentation loss quantifying deviations between the one or more first segmented images and the one or more target images of the segmented prostate gland,

(134) inputting the one or more first segmented images and the multi-parametric MRI image set of the reference patient into the second segmentation unit,

(135) receiving one or more second segmented images from the second segmentation unit in which the prostatic and extra-prostatic cancer lesions, if present, are segmented,

(136) computing a second segmentation loss, the second segmentation loss quantifying deviations between the one or more second segmented images and the one or more target images of the segmented prostatic and extra-prostatic cancer lesions,

(137) inputting the one or more second segmented images into the classification unit,

(138) receiving the predicted prostate cancer local stage from the classification unit,

(139) computing the classification loss, the classification loss quantifying deviations between the prostate cancer local stage and the predicted prostate cancer local stage,

(140) modifying model parameters to reduce the first segmentation loss, the second segmentation loss, and the classification loss (one such combined update is sketched after step (150) below),

(150) storing and/or outputting the model parameters and/or the trained machine learning model and/or transmitting the model parameters and/or the trained machine learning model to a remote computer system and/or using the trained machine learning model for predicting a prostate cancer local stage for a new patient.
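
One such training iteration, covering steps (131) to (140), can be sketched as follows; PyTorch is assumed, and the units, loss functions, loss weights and optimizer are placeholders. Combining the three losses into a single update is one possible implementation; as noted elsewhere in this disclosure, the units can also be trained separately:

import torch

def training_step(mpmri, target_gland, target_lesions, target_stage,
                  su1, su2, cu, optimizer,
                  seg_loss_fn, cls_loss_fn, w1=1.0, w2=1.0, w3=1.0):
    si1 = su1(mpmri)                                 # (131)-(132)
    loss1 = seg_loss_fn(si1, target_gland)           # (133) first segmentation loss
    si2 = su2(torch.cat([mpmri, si1], dim=1))        # (134)-(135)
    loss2 = seg_loss_fn(si2, target_lesions)         # (136) second segmentation loss
    stage_logits = cu(si2)                           # (137)-(138)
    loss3 = cls_loss_fn(stage_logits, target_stage)  # (139) classification loss
    total = w1 * loss1 + w2 * loss2 + w3 * loss3     # (140) combined objective
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()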

Fig. 4 shows schematically an embodiment of the method of predicting a prostate cancer local stage for a new patient using the trained machine learning model in the form of a flow chart.

The prediction method (200) comprises:

(210) receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the new patient,

(220) inputting the patient data into the trained machine learning model, wherein the trained machine learning model comprises a first segmentation unit, a second segmentation unit, and a classification unit,
o wherein the first segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more first segmented images based on the one or more received images and model parameters,
o wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the one or more first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
o wherein the classification unit is configured to assign the one or more second segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,

(230) receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more first and second segmented images for the new patient,

(240) outputting the predicted prostate cancer local stage and optionally the one or more first and second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and second segmented images to a remote computer system.

Further embodiments of the present disclosure are:

1. A method for training a machine learning model, the training method comprising:
receiving and/or providing a machine learning model, wherein the machine learning model comprises at least one segmentation unit and a classification unit, wherein the at least one segmentation unit is configured to receive a multi-parametric MRI image set of an examination region comprising a prostate region of a male human and to generate one or more segmented images based on the one or more received images and model parameters, and wherein the classification unit is configured to assign the one or more segmented images to one of at least two classes based on model parameters, each class corresponding to a prostate cancer local stage,
receiving and/or providing training data, the training data comprising, for each patient of a multitude of patients, input data and target data, the input data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the patient, and the target data comprising one or more target images in which, if present, prostate gland, prostatic cancer lesions and extra-prostatic cancer lesions are segmented, and a prostate cancer local stage for the patient,
training the machine learning model, wherein the training comprises for each patient of the multitude of patients:
o inputting the multi-parametric MRI image set of the patient into the at least one segmentation unit,
o receiving one or more segmented images from the at least one segmentation unit in which, if present, prostatic and extra-prostatic cancer lesions are segmented,
o computing a segmentation loss, the segmentation loss quantifying deviations between the one or more segmented images and the one or more target images,
o inputting the one or more segmented images into the classification unit,
o receiving a predicted prostate cancer local stage from the classification unit,
o computing a classification loss, the classification loss quantifying deviations between the prostate cancer local stage and the predicted prostate cancer local stage,
o modifying model parameters to reduce the segmentation loss and the classification loss,
storing and/or outputting the model parameters and/or the trained machine learning model and/or transmitting the model parameters and/or the trained machine learning model to a remote computer system and/or using the trained machine learning model for predicting a prostate cancer local stage for a new patient.
2. The method according to the embodiment 1, wherein the machine learning model comprises a first segmentation unit and a second segmentation unit,
wherein the first segmentation unit is configured to receive the multi-parametric MRI image set of the examination region and to generate one or more first segmented images based on the multi-parametric MRI image set and model parameters, wherein the prostate gland, if present, is segmented in the one or more first segmented images,
wherein the second segmentation unit is configured to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images, the multi-parametric MRI image set and model parameters, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
wherein the training comprises for each patient of the multitude of patients:
o inputting the multi-parametric MRI image set of the patient into the first segmentation unit,
o receiving one or more first segmented images from the first segmentation unit in which the prostate gland, if present, is segmented,
o computing a first segmentation loss, the first segmentation loss quantifying deviations between the one or more first segmented images and the one or more target images of the segmented prostate gland,
o inputting the one or more first segmented images and the multi-parametric MRI image set of the patient into the second segmentation unit,
o receiving one or more second segmented images from the second segmentation unit in which the prostatic and extra-prostatic cancer lesions, if present, are segmented,
o computing a second segmentation loss, the second segmentation loss quantifying deviations between the one or more second segmented images and the one or more target images of the segmented prostatic and extra-prostatic cancer lesions,
o inputting the one or more second segmented images into the classification unit,
o receiving the predicted prostate cancer local stage from the classification unit,
o computing the classification loss, the classification loss quantifying deviations between the prostate cancer local stage and the predicted prostate cancer local stage,
o modifying model parameters to reduce the first segmentation loss, the second segmentation loss, and the classification loss.

3. The method according to the embodiment 1 or 2, wherein the multi-parametric MRI image set comprises one or more T2-weighted images and/or one or more apparent diffusion coefficient maps.

4. The method according to the embodiment 1 or 2, wherein the multi-parametric MRI image set consists of one or more T2-weighted images and/or one or more apparent diffusion coefficient maps.

5. The method according to any one of the embodiments 1 to 4, wherein each class corresponds to a prostate cancer local stage according to the tumor, nodes, and metastases staging system developed by the American Joint Committee on Cancer.

6. The method according to any one of the embodiments 1 to 5, wherein each of the at least one segmentation unit and classification unit is trained separately.

7. The method according to any one of the embodiments 1 to 6, wherein each of the at least one segmentation unit and classification unit is or comprises an artificial neural network.

8. A computer-implemented method for predicting a prostate cancer local stage for a patient, the prediction method comprising:
providing a trained machine learning model,
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the patient,
inputting the patient data into the trained machine learning model, wherein the trained machine learning model is configured and trained on training data to
o segment prostatic and extra-prostatic cancer lesions and generate one or more segmented images based on the patient data,
o assign the one or more segmented images to one of at least two classes, wherein each class corresponds to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more segmented images for the patient,
outputting the predicted prostate cancer local stage and optionally the one or more segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more segmented images to a remote computer system.

9. The method according to the embodiment 8, wherein the machine learning model comprises a first segmentation unit and a second segmentation unit,
wherein the first segmentation unit is configured and trained to receive a multi-parametric MRI image set of the examination region and to generate one or more first segmented images based on the multi-parametric MRI image set, wherein the prostate gland, if present, is segmented in the one or more first segmented images,
wherein the second segmentation unit is configured and trained to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images and the multi-parametric MRI image set, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
the method comprising:
inputting the multi-parametric MRI image set into the first segmentation unit,
receiving one or more first segmented images from the first segmentation unit,
inputting the multi-parametric MRI image set and the first segmented images into the second segmentation unit,
receiving one or more second segmented images from the second segmentation unit,
inputting the one or more second segmented images into the classification unit,
receiving from the classification unit a predicted prostate cancer local stage,
outputting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images to a remote computer system.

10. The method according to the embodiment 8 or 9, wherein the trained machine learning model was trained in a training method according to any one of the embodiments 1 to 7.

11. A computer system comprising: a processor; and a memory storing an application program configured to perform, when executed by the processor, an operation, the operation comprising:
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the patient,
inputting the patient data into a trained machine learning model, wherein the trained machine learning model is configured and trained on training data to
o segment prostatic and extra-prostatic cancer lesions and generate one or more segmented images based on the patient data,
o assign the one or more segmented images to one of at least two classes, wherein each class corresponds to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more segmented images for the patient,
outputting the predicted prostate cancer local stage and optionally the one or more segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more segmented images to a remote computer system.

12. The computer system according to the embodiment 11, wherein the machine learning model comprises a first segmentation unit and a second segmentation unit,
wherein the first segmentation unit is configured and trained to receive a multi-parametric MRI image set of the examination region and to generate one or more first segmented images based on the multi-parametric MRI image set, wherein the prostate gland, if present, is segmented in the one or more first segmented images,
wherein the second segmentation unit is configured and trained to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images and the multi-parametric MRI image set, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
the operation comprising:
inputting the multi-parametric MRI image set into the first segmentation unit,
receiving one or more first segmented images from the first segmentation unit,
inputting the multi-parametric MRI image set and the first segmented images into the second segmentation unit,
receiving one or more second segmented images from the second segmentation unit,
inputting the one or more second segmented images into the classification unit,
receiving from the classification unit a predicted prostate cancer local stage,
outputting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images to a remote computer system.

13. The computer system according to the embodiment 11 or 12, wherein the trained machine learning model was trained in a training method according to any one of the embodiments 1 to 7.

14. A non-transitory computer readable medium having stored thereon software instructions that, when executed by a processor of a computer system, cause the computer system to execute the following steps:
receiving patient data, the patient data comprising a multi-parametric MRI image set of an examination region comprising a prostate region of the new patient,
inputting the patient data into the trained machine learning model, wherein the trained machine learning model is configured and trained on training data to
o segment prostatic and extra-prostatic cancer lesions and generate one or more segmented images,
o assign the one or more segmented images to one of at least two classes, wherein each class corresponds to a prostate cancer local stage,
receiving from the trained machine learning model a predicted prostate cancer local stage and optionally the one or more segmented images for the new patient,
outputting the predicted prostate cancer local stage and optionally the one or more segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more segmented images to a remote computer system.

15. The non-transitory computer readable medium according to the embodiment 14, wherein the machine learning model comprises a first segmentation unit and a second segmentation unit,
wherein the first segmentation unit is configured and trained to receive a multi-parametric MRI image set of the examination region and to generate one or more first segmented images based on the multi-parametric MRI image set, wherein the prostate gland, if present, is segmented in the one or more first segmented images,
wherein the second segmentation unit is configured and trained to receive the one or more first segmented images and the multi-parametric MRI image set of the examination region and to generate one or more second segmented images based on the first segmented images and the multi-parametric MRI image set, wherein the prostatic cancer lesions and extra-prostatic cancer lesions, if present, are segmented in the one or more second segmented images,
wherein the software instructions cause the computer system to execute the following steps:
inputting the multi-parametric MRI image set into the first segmentation unit,
receiving one or more first segmented images from the first segmentation unit,
inputting the multi-parametric MRI image set and the first segmented images into the second segmentation unit,
receiving one or more second segmented images from the second segmentation unit,
inputting the one or more second segmented images into the classification unit,
receiving from the classification unit a predicted prostate cancer local stage,
outputting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images, and/or storing the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images on a data storage, and/or transmitting the predicted prostate cancer local stage and optionally the one or more first and/or second segmented images to a remote computer system.

16. The non-transitory computer readable medium according to the embodiment 14 or 15, wherein the trained machine learning model was trained in a training method according to any one of the embodiments 1 to 7.

The operations in accordance with the teachings herein may be performed by at least one computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by at least one computer program stored in a typically non-transitory computer-readable storage medium.

The term “non-transitory” is used herein to exclude transitory, propagating signals or waves, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application.

The term “computer” should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, embedded cores, computing systems, communication devices, processors (e.g., digital signal processors (DSP)), microcontrollers, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), etc., and other electronic computing devices.

The term “process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g., electronic, phenomena which may occur or reside, e.g., within registers and/or memories of at least one computer or processor. The term processor includes a single processing unit or a plurality of distributed or remote such units.

Fig. 5 illustrates a computer system (1) according to some example implementations of the present disclosure in more detail. The computer may include one or more of each of a number of components such as, for example, a processing unit (20) connected to a memory (50) (e.g., storage device).

The processing unit (20) may be composed of one or more processors alone or in combination with one or more memories. The processing unit is generally any piece of computer hardware that is capable of processing information such as, for example, data, computer programs and/or other suitable electronic information. The processing unit is composed of a collection of electronic circuits, some of which may be packaged as an integrated circuit or multiple interconnected integrated circuits (an integrated circuit at times more commonly referred to as a “chip”). The processing unit may be configured to execute computer programs, which may be stored onboard the processing unit or otherwise stored in the memory (50) of the same or another computer.

The processing unit (20) may be a number of processors, a multi-core processor or some other type of processor, depending on the particular implementation. Further, the processing unit may be implemented using a number of heterogeneous processor systems in which a main processor is present with one or more secondary processors on a single chip. As another illustrative example, the processing unit may be a symmetric multi-processor system containing multiple processors of the same type. In yet another example, the processing unit may be embodied as or otherwise include one or more ASICs, FPGAs or the like. Thus, although the processing unit may be capable of executing a computer program to perform one or more functions, the processing unit of various examples may be capable of performing one or more functions without the aid of a computer program. In either instance, the processing unit may be appropriately programmed to perform functions or operations according to example implementations of the present disclosure.

The memory (50) is generally any piece of computer hardware that is capable of storing information such as, for example, data, computer programs (e.g., computer-readable program code (60)) and/or other suitable information either on a temporary basis and/or a permanent basis. The memory may include volatile and/or non-volatile memory, and may be fixed or removable. Examples of suitable memory include random access memory (RAM), read-only memory (ROM), a hard drive, a flash memory, a thumb drive, a removable computer diskette, an optical disk, a magnetic tape or some combination of the above. Optical disks may include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W), DVD, Blu-ray disk or the like. In various instances, the memory may be referred to as a computer-readable storage medium. The computer-readable storage medium is a non-transitory device capable of storing information, and is distinguishable from computer-readable transmission media such as electronic transitory signals capable of carrying information from one location to another. Computer-readable medium as described herein may generally refer to a computer-readable storage medium or computer-readable transmission medium.

In addition to the memory (50), the processing unit (20) may also be connected to one or more interfaces for displaying, transmitting and/or receiving information. The interfaces may include one or more communications interfaces and/or one or more user interfaces. The communications interface(s) may be configured to transmit and/or receive information, such as to and/or from other computer(s), network(s), database(s) or the like. The communications interface may be configured to transmit and/or receive information by physical (wired) and/or wireless communications links. The communications interface(s) may include interface(s) (41) to connect to a network, such as using technologies such as cellular telephone, Wi-Fi, satellite, cable, digital subscriber line (DSL), fiber optics and the like. In some examples, the communications interface(s) may include one or more short-range communications interfaces (42) configured to connect devices using short-range communications technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g., IrDA) or the like.

The user interfaces may include a display (30). The display may be configured to present or otherwise display information to a user, suitable examples of which include a liquid crystal display (LCD), light-emitting diode display (LED), plasma display panel (PDP) or the like. The user input interface(s) (11) may be wired or wireless, and may be configured to receive information from a user into the computer system (1), such as for processing, storage and/or display. Suitable examples of user input interfaces include a microphone, image or video capture device, keyboard or keypad, joystick, touch-sensitive surface (separate from or integrated into a touchscreen) or the like. In some examples, the user interfaces may include automatic identification and data capture (AIDC) technology (12) for machine-readable information. This may include barcode, radio frequency identification (RFID), magnetic stripes, optical character recognition (OCR), integrated circuit card (ICC), and the like. The user interfaces may further include one or more interfaces for communicating with peripherals such as printers and the like.

As indicated above, program code instructions may be stored in memory, and executed by the processing unit that is thereby programmed, to implement functions of the systems, subsystems, tools and their respective elements described herein. As will be appreciated, any suitable program code instructions may be loaded onto a computer or other programmable apparatus from a computer-readable storage medium to produce a particular machine, such that the particular machine becomes a means for implementing the functions specified herein. These program code instructions may also be stored in a computer-readable storage medium that can direct a computer, processing unit or other programmable apparatus to function in a particular manner to thereby generate a particular machine or particular article of manufacture. The instructions stored in the computer-readable storage medium may produce an article of manufacture, where the article of manufacture becomes a means for implementing functions described herein. The program code instructions may be retrieved from a computer-readable storage medium and loaded into a computer, processing unit or other programmable apparatus to configure the computer, processing unit or other programmable apparatus to execute operations to be performed on or by the computer, processing unit or other programmable apparatus.

Retrieval, loading and execution of the program code instructions may be performed sequentially such that one instruction is retrieved, loaded and executed at a time. In some example implementations, retrieval, loading and/or execution may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Execution of the program code instructions may produce a computer-implemented process such that the instructions executed by the computer, processing circuitry or other programmable apparatus provide operations for implementing functions described herein.

Execution of instructions by the processing unit, or storage of instructions in a computer-readable storage medium, supports combinations of operations for performing the specified functions. In this manner, a computer system (1) may include a processing unit (20) and a computer-readable storage medium or memory (50) coupled to the processing circuitry, where the processing circuitry is configured to execute computer-readable software instructions (60) stored in the memory. It will also be understood that one or more functions, and combinations of functions, may be implemented by special purpose hardware-based computer systems and/or processing circuitry which perform the specified functions, or combinations of special purpose hardware and program code instructions.