Title:
VISUALIZATION OF IMAGE TRANSFORMATION
Document Type and Number:
WIPO Patent Application WO/2012/160012
Kind Code:
A1
Abstract:
There is provided a method comprising: obtaining first and second datasets representative of first and second images of an object at different times, respectively; obtaining a deformation field, in particular a three-dimensional vector field, representative of changes between the first and second datasets, wherein the deformation field is generated by performing a rigid or non-rigid registration; generating one or more masks and/or segmentations for selecting elements of the first image; selecting elements of the first image, in particular contours, points, and/or dosimetric image portions; transforming the first dataset using the deformation field to project the selected elements onto the second image; visualizing the deformation field or previously specified portions thereof; processing the deformation field or previously specified portions thereof to obtain data representative of different predetermined types of deformation, in particular coarse and fine deformations; and visualizing the deformation field or one or more selected portions thereof, thereby to visualize the predetermined types of deformations separately and to enable a differentiation between changes in the patient's body and changes, in particular errors, in the patient's position.

Inventors:
LODRON GERALD (AT)
URAY MARTINA (AT)
MAYER HEINZ (AT)
WINKLER PETER (AT)
Application Number:
PCT/EP2012/059331
Publication Date:
November 29, 2012
Filing Date:
May 21, 2012
Assignee:
JOANNEUM RES FORSCHUNGSGMBH (AT)
LODRON GERALD (AT)
URAY MARTINA (AT)
MAYER HEINZ (AT)
WINKLER PETER (AT)
International Classes:
G06T7/00; A61B6/00; A61N5/10
Domestic Patent References:
WO2009042952A12009-04-02
Foreign References:
US20090087124A12009-04-02
US20060002630A12006-01-05
Other References:
ALAN J LIPTON: "Local Application of Optic Flow to Analyse Rigid versus Non-Rigid Motion", CMU-RI-TR-99-13, TECHNICAL REPORTS, 1 January 1999 (1999-01-01), Pittsburgh, PA, pages 1 - 13, XP055006355, Retrieved from the Internet [retrieved on 20110906]
MARC TITTGEMEYER ET AL: "Visualising deformation fields computed by non-linear image registration", COMPUTING AND VISUALIZATION IN SCIENCE, vol. 5, no. 1, 1 July 2002 (2002-07-01), pages 45 - 51, XP055006473, ISSN: 1432-9360, DOI: 10.1007/s00791-002-0086-4
FERRANT M ET AL: "REAL-TIME STIMULATION AND VISUALIZATION OF VOLUMETRIC BRAIN DEFORMATION FOR IMAGE GUIDED NEUROSURGERY", PROCEEDINGS OF SPIE, THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING SPIE, USA, vol. 4319, 18 February 2001 (2001-02-18), pages 366 - 373, XP008011115, ISSN: 0277-786X, DOI: 10.1117/12.428076
STEF BUSKING ET AL: "Direct Visualization of Deformation in Volumes", COMPUTER GRAPHICS FORUM, vol. 28, no. 3, 1 June 2009 (2009-06-01), pages 799 - 806, XP055030814, ISSN: 0167-7055, DOI: 10.1111/j.1467-8659.2009.01471.x
Attorney, Agent or Firm:
RUMMLER, Felix et al. (Martiusstr. 5, Munich, DE)
Claims:
Claims

1. A method comprising:

obtaining first and second datasets representative of first and second images, in particular CT images, of an object at different times, respectively;

obtaining a deformation field, in particular a three dimensional vector field, representative of changes between the first and second data sets, wherein the deformation field is generated by performing a rigid or non-rigid registration;

transforming the first dataset using the deformation field to project the first image or selected elements thereof onto the second image;

processing the deformation field or previously specified portions thereof to obtain data representative of different types of deformation, in particular deformations above and below predetermined thresholds corresponding to coarse and fine deformations, respectively; and

visualizing the deformation field or one or more selected portions thereof, thereby to visualize the types of deformations and to enable a differentiation between changes in the object itself and changes, in particular errors, in the position of the object.

2. The method of claim 1, further comprising:

filtering entries of the deformation field to obtain data representative of deformations having a predetermined strength and/or deformations affecting a region of a predetermined size.

3. The method of claim 1 or 2, further comprising:

transforming the first image on the basis of the filtered entries of the deformation field to obtain a succession of images representing different magnitudes of said deformations.

4. The method of any preceding claim, comprising:

generating one or more masks and/or segmentations for selecting elements of the first image, wherein the selected elements may represent contours, points, and/or dosimetric image portions.

5. The method of any preceding claim, further comprising visualizing said types of deformation separately.

6. The method of any preceding claim, wherein the object is a patient.

7. The method of any preceding claim comprising:

generating a mask of the patient's body for use in the image registration;

using algorithms to simultaneously generate forward and backward deformation fields;

obtaining a rigid transformation attributable to rotation and/or translation of the object and a non-rigid transformation attributable to actual deformations inside the object;

providing the deformation fields separately and as a combined deformation field; and

transforming the first dataset using the rigid transformation field and the non-rigid transformation field, or the combined deformation field to project the selected elements onto the second image.

8. The method of any preceding claim, comprising:

transforming segmentation images with a rigid, non-rigid or combined transformation, for example a deformation field, obtained by a registration from a first image to a second image, wherein the segmentation image is represented by a first set of planar 2D polygons in 3D space;

generating a 3D volume (3D image matrix) from the set of 2D polygons;

transforming the volume using predetermined matrix transformation algorithms;

intersecting the volume with planar planes; and

extracting the contours from the intersected planes to obtain a second set of planar 2D polygons.

9. The method of any preceding claim, comprising:

generating one or more statistical evaluations, in particular histograms, boxplots, means, and variances, of the deformation field or parts of it.

10. The method of any preceding claim, comprising:

visualizing the transformation results as single images or as an image sequence; and superimposing resulting images, contours or points on the datasets.

11. The method of any preceding claim, comprising:

generating a forward and a backward/inverse transformation matrix describing a transformation of the first image into the second image or vice versa;

determining said deformation field from the transformation matrices; and additionally using the transformation matrices to determine the transformation of first image portions associated with the first image into the second image portions associated with the second image or vice versa.

12. The method of any preceding claim, wherein the visualisation step comprises generating a colour-coded representation of the absolute value of each of the entries of the deformation field.

13. The method of any preceding claim, wherein the deformation field contains 3D vectors, and wherein the step to visualise the deformation field comprises:

projecting each of the 3D vectors onto a 2D sectional plane;

generating a colour-coded representation of the projected 2D vectors based on the absolute value of the corresponding 3D vectors;

generating isocontours of the magnitude; and

displaying 2D vectors with different glyph-types (lines, arrows, etc.).

14. The method of claim 13, wherein the step to visualise the deformation field comprises displaying a fraction of the 2D vectors selected in accordance with a predetermined selection criterion.

15. The method of any preceding claim, comprising:

filtering entries of the deformation field depending on the degree of deformation, in particular entries representative of selected types of deformation such as movements or changes of the object's anatomy.

16. The method of any preceding claim, comprising: filtering entries of the deformation field depending on the size of the region affected by the deformation, in particular entries representative of a deformation that affects a region whose size is not within a predetermined size range.

17. The method of claim 15 or 16, comprising:

iteratively transforming the first image on the basis of filtered entries of the deformation field to obtain a succession of images representing different magnitudes of deformation.

18. The method of any preceding claim, comprising:

generating and displaying a histogram of the entries of the deformation field, in particular in respect of a selected region of the object or first or second images.

19. A computer system arranged to perform the method of any preceding claim.

20. A storage medium comprising computer-executable code that, when executed by a computer system, causes the computer system to perform the method of any of claims 1 to 18.

Description:
VISUALIZATION OF IMAGE TRANSFORMATION

Background

Methods are known, for example irradiation therapy methods, which expose areas of a patient to doses of radiation. Such therapy is usually preceded by an irradiation planning session in which the region and dosage of irradiation are determined. This may involve obtaining CT image data of the patient. These steps may be repeated to examine the progress of the therapy. To do this, successive CT images of the patient are generated, compared and analysed. Problems that may arise in this process include movements of the patient, patient respiration, and size changes of relevant portions of the patient, in particular the tissue to be treated or tissue that is not to be exposed.

WO 2009/042952 A1 describes an irradiation method using deformable image registration. The method includes obtaining a first image, obtaining a second image, determining a deformation field using the first and second images, and determining a transformation matrix from the deformation field. The transformation matrix may be used to position the patient relative to the radiation source to compensate for rotation and translation movements of the patient between the first and second images.

Summary of the invention

The present invention is defined in claim 1. Features of preferred embodiments are recited by the dependent claims.

Brief description of the drawings

Fig. 1 illustrates a CT data set from the planning stage, (a) and (c) with 3 superimposed structures (salivary glands and irradiation or tumour region) visualized as coloured contours and surfaces; (b) and (d) illustrate the superimposed dosage resulting from the planning, shown as a colour-encoded image (b) and with colour-encoded isocontours (d).

Fig. 2 illustrates different visualisation possibilities of a deformation field: (a) a colour-encoded image representing the absolute value (magnitude or amplitude) of the 3D vectors on the current CT plane, overlaid with global transparency on the original CT data set; (b) the colour-encoded isocontour representation of (a) without transparency; (c) and (d) 2D projections of some 3D deformation vectors of the current CT plane onto its plane, wherein the resulting 2D vectors are displayed as (c) lines and (d) as amplitude-dependent magnified arrows. The colour in (c) and (d) also refers to the local 3D deformation magnitude. Subfigure (e) is like (c) but with only local transparency, and (f) displays (e) as a colour-encoded image.

Fig. 3 illustrates a sagittal visualization of a deformation field overlaid on the CT data set, (a) with global opacity and (b) with transparency of small magnitudes; (c) and (d) represent a filtered deformation field, wherein (c) only shows deformations of smaller-scale structures without "distracting" coarse ones, and (d) displays the coarse filtered deformation caused by errors in patient positioning (neck curvature change). Globally, these visualizations can be interpreted as a change of the oesophagus and a patient positioning error. (e) and (f) illustrate a 3D visualization of the deformation on the planes, whereby (f) is zoomed.

Fig. 4 illustrates a statistical evaluation of the deformation field with common statistical parameters such as mean, standard deviation, root mean square, etc. Additionally, the colour-coded histogram of the deformation magnitude is plotted, wherein (a) refers to the entire patient and (b) to the irradiation (tumour) region. Globally, the number of small deformations is larger than the number of large deformations; in the tumour region the changes have a mean value of 5.69 mm. (c) shows the volume change of different regions.

Fig. 5 illustrates a workflow in accordance with an embodiment of the invention.

Detailed description of the drawings

Introduction

Morphologic changes during radio-oncological therapy cause changes in the dosage distribution as compared to the dosage distribution at the time of the initial radiotherapy planning. Such changes are quantified by applying the original irradiation plan onto data sets (e.g. CT or MRI images) that are obtained at a later stage. For this purpose the contours are applied by means of non-rigid image registration.

The morphology of the irradiated volume is exposed to various influences during the course of a series of treatments: shrinking or swelling of tissue, curvatures and torsions due to incorrect positioning or support of the patient. However, the choice of adequate corrective measures in adaptive radiotherapy (ART) requires knowledge of the type and magnitude of these sources of error.

Accordingly, in an embodiment the invention takes into account changes in the dosage distribution in the target volume and organs at risk, and statistical parameters of the transformation matrices, in order to better judge the type of transformation and to make better predictions of dosimetric changes.

In particular, in an embodiment of the invention there is provided a software-implemented method for non-rigid image registration of CT data in selected regions of a patient, e.g. the ENT region. The method comprises calculation of transformation vectors representing translation and deformation of tissue between different data sets (CT images).

The transformation matrices obtained by non-rigid registration of CT-based data sets are used to adapt the planning contours, and are parameterized and statistically evaluated. After applying the original irradiation plan onto the subsequent CT, the correlation of morphologic changes and dosage changes can be analyzed. Besides diverse statistical parameters, the basis for an appropriate assessment of the data by an expert is the visualization of the deformation itself. Several representations of the deformation field and of the transformed data support the interpretation of the modifications of the body.

In order to evaluate morphologic changes, the voxel data of the transformation vector field preferably undergoes filtering (e.g. band-pass, smoothing, etc.) and statistical processing, e.g. on the basis of mean values, standard deviations and vector lengths. In particular, morphologic changes, represented by mean vector lengths in filtered transformation vector fields, may be correlated with dosimetric changes. The transformation matrices enable a visualisation of deformations of the entire scan volume (position inaccuracies) by using global filters, or a visualisation of local size-reduction and deformation processes by using local filters.
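As an illustration of this kind of filtering and statistical processing, a minimal numpy/scipy sketch might look as follows; the array layout (a (Z, Y, X, 3) displacement field in millimetres) and all names are assumptions for illustration, not part of the claimed method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed layout: 'field' is a (Z, Y, X, 3) array of displacement vectors in mm,
# 'mask' is a boolean (Z, Y, X) array selecting a region of interest
# (e.g. the tumour contour); both names are placeholders.
def smooth_field(field, sigma):
    """Low-pass filter each vector component with a Gaussian kernel."""
    return np.stack([gaussian_filter(field[..., c], sigma) for c in range(3)], axis=-1)

def band_pass_field(field, sigma_fine, sigma_coarse):
    """Keep deformations between two spatial scales (difference of Gaussians)."""
    return smooth_field(field, sigma_fine) - smooth_field(field, sigma_coarse)

def field_statistics(field, mask=None):
    """Mean, standard deviation and RMS of the local deformation magnitude."""
    magnitude = np.linalg.norm(field, axis=-1)
    if mask is not None:
        magnitude = magnitude[mask]
    return {
        "mean_mm": float(magnitude.mean()),
        "std_mm": float(magnitude.std()),
        "rms_mm": float(np.sqrt((magnitude ** 2).mean())),
    }
```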

Thus, it is possible to analyze morphologic changes during a series of irradiation treatments and to non-rigidly register contours onto a current CT dataset. Thereby ART can be assisted, i.e. the adaptation of the irradiation and the optimisation of the dosage plan in response to changes in the shape and position of the irradiated volume. The analysis of the deformation as a result of the non-rigid registration thus enables a specific correction depending on the nature of the morphologic changes.

Description of a preferred embodiment

In an embodiment of the present invention there is provided a software-implementable method for improving the workflow and therapy in the field of irradiation-based tumour treatment. As in conventional systems, the method comprises generating CT images of patients to be treated, and planning the treatment on the basis of such images. The planning may be manual, semi-automatic or automatic and involves the marking of regions of interest in the CT images (e.g. the tumour region, salivary glands, etc.; see Fig. 1). Thereby the irradiation system can be aligned in order to directly irradiate the tumour without undue exposure of important organs such as the salivary glands. The irradiation is then performed over a period of weeks or even months in successive sessions, during which the anatomy of the patient may change (patients may lose weight; organs change their size and position; patients are positioned differently; etc.). Since a new planning (drawing of contours) can be difficult and time-consuming, it is to be avoided unless there are significant changes in the patient's anatomy. This, however, may result in inaccurate irradiation.

The present invention aims to provide a tool to distinguish between a change in position and an actual deformation. Thereby, unnecessary re-planning steps can be avoided, because a change in the patient's positioning can be addressed by re-positioning the patient, whereas only actual deformations require a re-planning.

According to an embodiment of the present invention, a new dataset (usually a CT image) is generated at regular intervals. By comparing the chronologically staggered CT image(s) with the original CT image on which the planning was based, the planning contours can be automatically adapted to the current data set. By detecting the necessity of a re-planning, the accuracy of the irradiation can be optimised, resulting in higher chances of successfully treating the patient.

In addition, other data such as the dosage distribution (e.g. to what dosage the salivary gland has been exposed, see also Fig. 1b) can be transformed onto more recent CT data. Thereby, the sum of the dosage data of the individual irradiation sessions becomes more accurate. When comparing the planning CT with the current CT, a so-called non-rigid registration is performed. Thereby, a deformation (vector) field is generated that deforms the planning CT so as to substantially obtain the current CT. Using the deformation field, the contours and/or dosage are transformed and applied to the new (current) data set.
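As one possible, non-prescriptive realization of this registration and projection step, the following sketch uses SimpleITK's demons registration; the file names, the choice of the demons algorithm and the parameter values are assumptions for illustration only and are not the specific algorithm claimed:

```python
import SimpleITK as sitk

# Hypothetical file names; any 3D CT pair and planning dose volume would do.
planning_ct   = sitk.ReadImage("planning_ct.nii.gz",   sitk.sitkFloat32)
current_ct    = sitk.ReadImage("current_ct.nii.gz",    sitk.sitkFloat32)
planning_dose = sitk.ReadImage("planning_dose.nii.gz", sitk.sitkFloat32)

# Non-rigid (demons) registration: the resulting displacement field maps points
# in the current CT to corresponding points in the planning CT.
demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetSmoothDisplacementField(True)
demons.SetStandardDeviations(2.0)
displacement = demons.Execute(current_ct, planning_ct)

# Wrap the field in a transform and project the planning dose onto the current CT grid.
transform = sitk.DisplacementFieldTransform(sitk.Cast(displacement, sitk.sitkVectorFloat64))
dose_on_current = sitk.Resample(planning_dose, current_ct, transform,
                                sitk.sitkLinear, 0.0, sitk.sitkFloat32)
```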

Subsequently, instead of discarding the deformation field as is usually done, it is visualised in order to display the position, magnitude and quality of the changes. For this purpose the rigid portion of the deformation field (i.e. the portion associated with rotations and translations) is not displayed, as it does not contain any information that is sought at this stage. (Such information may reflect an inaccurate positioning of the patient, which may be useful at some other stage. However, in terms of evaluating the therapy, only the non-rigid portion is relevant.)

Different methods may be used to visualize the vectors of the deformation field. In the present case, the deformation fields represent 3D vectors. As it is preferred to display 2D sections of CT datasets, the 3D vector datasets are intersected with a 2D plane (axial, coronal, sagittal, oblique), thereby generating a 2D matrix/image containing 3D vectors (see Fig. 3 e and f). However, it is preferred not to display these 3D vectors. Instead, in embodiments of the invention three alternative visualisation methods are implemented:

1. Absolute value (magnitude) of the 3D deformation vectors (see Fig.2 a)

2. Colour-coded isocontours of 1.

3. Projections of the 3D vectors onto the 2D intersection plane (resulting in 2D vectors) and simultaneous colour-coding by means of the new 2D magnitude or the original 3D vector magnitude. Also the thickness of the arrowhead of the drawn vectors (not only the length) can be varied according to their magnitude. Preferably, only a fraction of the vectors is displayed (e.g. every Xth vector), see Fig. 2 b). A minimal plotting sketch of these three variants is given below.
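The following numpy/matplotlib sketch illustrates the plane intersection and the three renderings (magnitude image, isocontours, sparse projected arrows). The array layout (field of shape (Z, Y, X, 3) with the last axis holding the (dx, dy, dz) components in mm), the colour maps, the alpha value and the subsampling step are all assumptions for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def axial_slice(field, ct, z_index):
    """Intersect the 3D vector field and the CT volume with one axial plane."""
    vectors_3d = field[z_index]                        # (Y, X, 3): 3D vectors on the plane
    vectors_2d = vectors_3d[..., :2]                   # in-plane (x, y) projection
    magnitude_3d = np.linalg.norm(vectors_3d, axis=-1) # full 3D magnitude for colour coding
    return ct[z_index], vectors_2d, magnitude_3d

def show_deformation(ct_slice, vectors_2d, magnitude_3d, step=8):
    """Render the three visualisation variants side by side."""
    fig, axes = plt.subplots(1, 3, figsize=(15, 5))

    # 1. Colour-coded magnitude, overlaid with global transparency on the CT plane.
    axes[0].imshow(ct_slice, cmap="gray")
    axes[0].imshow(magnitude_3d, cmap="jet", alpha=0.5)
    axes[0].set_title("magnitude")

    # 2. Colour-coded isocontours of the magnitude.
    axes[1].imshow(ct_slice, cmap="gray")
    axes[1].contour(magnitude_3d, levels=8, cmap="jet")
    axes[1].set_title("isocontours")

    # 3. Every step-th projected 2D vector, coloured by the full 3D magnitude.
    ys, xs = np.mgrid[0:magnitude_3d.shape[0]:step, 0:magnitude_3d.shape[1]:step]
    u = vectors_2d[::step, ::step, 0]
    v = vectors_2d[::step, ::step, 1]
    axes[2].imshow(ct_slice, cmap="gray")
    axes[2].quiver(xs, ys, u, v, magnitude_3d[::step, ::step], cmap="jet")
    axes[2].set_title("projected vectors")
    plt.show()
```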

For physicians it is important to be able to distinguish between different kinds of deformations, e.g. whether the patient has been badly positioned by the physician (e.g. a different neck curvature in different CT images), whether an organ has moved, or whether it has changed its size. This is hardly visible from raw deformation fields. Accordingly, in an embodiment of the present invention, filtering in accordance with the strength (deformation magnitude, see Fig. 3b) and/or the scale (the spatial size of the deformed object or structure, see Fig. 3c and d) of the deformations is performed. Thereby, it is possible to selectively display different types of changes, e.g. "strong" movements (e.g. shoulder movements) or minor movements (e.g. organ changes).

The deformation strength (small and large deformations) and scale (coarse and fine structures) can be adjusted separately, e.g. across continuous intervals (minimum and maximum strength and scale), i.e. so as to display only deformations of objects at the size of a specific organ. In this case it is possible to filter in accordance with the organ of interest and with the amplitudes of interest (e.g. position errors and small vector lengths of < 1 mm are currently uninteresting and undesired in the visualization).
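One possible realization of such strength and scale filters in numpy/scipy is sketched below; the threshold values and the Gaussian scale separation are illustrative assumptions, not the specific filters of the embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_by_strength(field, min_mm=1.0, max_mm=np.inf):
    """Zero out vectors whose magnitude lies outside [min_mm, max_mm]."""
    magnitude = np.linalg.norm(field, axis=-1)
    keep = (magnitude >= min_mm) & (magnitude <= max_mm)
    return field * keep[..., None]

def split_by_scale(field, sigma_voxels=10.0):
    """Separate coarse (e.g. posture/neck curvature) from fine (organ-scale)
    deformations by Gaussian low-pass filtering each vector component."""
    coarse = np.stack([gaussian_filter(field[..., c], sigma_voxels) for c in range(3)],
                      axis=-1)
    fine = field - coarse
    return coarse, fine
```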

According to an embodiment of the invention, the planning CT is transformed by means of the filtered deformation field. This is done iteratively on the basis of different deformation scales. Thereby a "3D video" may be generated in which initially coarse spatial structure changes are displayed (neck movements, see Fig. 3b), followed by fine spatial structure movements (organ movements, see Fig. 3a). Thus, the video successively displays different kinds of changes, wherein the time axis represents the scale or size of the moving/deforming organs.
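One way such an image sequence could be produced is to warp the planning CT with progressively less smoothed versions of the deformation field, so that coarse changes appear first and fine ones last. The pull-back convention, the voxel-unit field layout and the sigma schedule below are assumptions for illustration, not necessarily the exact iterative scheme of the embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(image, field_vox):
    """Pull-back warp: field_vox is (Z, Y, X, 3) in voxel units, its last axis
    holding the (z, y, x) offsets of the source position in the planning image."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = np.stack([zz + field_vox[..., 0],
                       yy + field_vox[..., 1],
                       xx + field_vox[..., 2]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def deformation_sequence(planning_ct, field_vox, sigmas=(16, 8, 4, 2, 0)):
    """Succession of images: heavily smoothed field first (coarse structure
    changes only), unsmoothed field last (full deformation)."""
    frames = []
    for sigma in sigmas:
        if sigma > 0:
            f = np.stack([gaussian_filter(field_vox[..., c], sigma) for c in range(3)],
                         axis=-1)
        else:
            f = field_vox
        frames.append(warp(planning_ct, f))
    return frames
```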

Also, in an embodiment of the invention, a histogram of the deformation vectors is displayed (see Fig. 4a); this may be restricted to a selected region (e.g. a tumour region, see Fig. 4b).

According to an embodiment of the invention, inverse mapping is employed in order to transform volume data (such as contours). That is, vectors are calculated that point from the current data set to the original data set. This may be done through image registration. However, for the purposes of visualisation it may be more intuitive to display vectors that point from the original dataset to the target (current) dataset. This may be done via a second registration, by vector field inversion, or by registration algorithms which generate the forward and the inverse deformation field simultaneously. This forward deformation field can also be used to transform continuous point data instead of volume data. To eliminate rigid content from the non-rigid transformation field (for visualization) it is also possible to perform a rigid registration step before the non-rigid registration itself. This way, point data can be transformed while both deformation fields can be visualized.
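For the histogram evaluation mentioned above (cf. Fig. 4), a minimal numpy/matplotlib sketch might be as follows; the field array and the region mask are placeholder names:

```python
import numpy as np
import matplotlib.pyplot as plt

def magnitude_histogram(field, mask=None, bins=50, label="whole volume"):
    """Histogram of deformation magnitudes (mm), optionally restricted to a region."""
    magnitude = np.linalg.norm(field, axis=-1)
    if mask is not None:
        magnitude = magnitude[mask]
    plt.hist(magnitude.ravel(), bins=bins, alpha=0.6, label=label)
    plt.xlabel("deformation magnitude [mm]")
    plt.ylabel("voxel count")
    plt.legend()

# e.g. magnitude_histogram(field)
#      magnitude_histogram(field, tumour_mask, label="tumour region")
#      plt.show()
```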

It may be required that the volume data represent arrays of 2D polygons that reside in the same spatial planes as the CT data. For this reason a suitable transformation may be undesirably complicated (e.g. the transformed polygons no longer lie in the original image planes, i.e. new polygons must be generated). To address this, in an embodiment of the invention, a new volume is generated from the 2D curves, and the new volume is transformed and intersected with the resulting new planes. In addition, image processing steps to smooth frayed regions may be applied.
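A sketch of this polygon-to-volume round trip using scikit-image is given below; the warping of the binary volume itself (e.g. with the warp helper sketched earlier, or any resampler) is omitted, and the dictionary layout of the polygon slices is an assumed representation:

```python
import numpy as np
from skimage.draw import polygon
from skimage.measure import find_contours

def polygons_to_volume(polygon_slices, shape):
    """polygon_slices: {z_index: list of (N, 2) arrays of (row, col) vertices}.
    Rasterise the planar 2D polygons into a binary 3D volume of the given shape."""
    volume = np.zeros(shape, dtype=np.uint8)
    for z, polys in polygon_slices.items():
        for verts in polys:
            rr, cc = polygon(verts[:, 0], verts[:, 1], shape=shape[1:])
            volume[z, rr, cc] = 1
    return volume

def volume_to_polygons(volume, level=0.5):
    """Intersect the (already warped) volume with the axial planes and extract
    the contours as a new set of planar 2D polygons."""
    return {z: find_contours(volume[z].astype(float), level)
            for z in range(volume.shape[0]) if volume[z].any()}
```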

In accordance with an embodiment of the invention, the method comprises determining a forward deformation field, a transformation matrix, and an inverse deformation field. The forward and inverse deformation fields are processed to eliminate a rigid portion thereof attributable to rotation and/or translation of the object (patient).
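One common way to separate such a rigid portion, offered here only as an illustrative sketch and not as the method actually claimed, is a least-squares (Kabsch) fit of a rotation and translation to the mapping x -> x + u(x), followed by subtraction of the fitted rigid motion. The field is assumed to be a (Z, Y, X, 3) array whose last axis holds the (z, y, x) components in the same units as the voxel spacing:

```python
import numpy as np

def remove_rigid_part(field, spacing=(1.0, 1.0, 1.0), subsample=4):
    """Fit the best rigid motion (rotation R, translation t) to x -> x + u(x)
    on a subsampled voxel grid, then subtract it to keep only the non-rigid part."""
    shape = field.shape[:3]
    zz, yy, xx = np.meshgrid(*[np.arange(s) * sp for s, sp in zip(shape, spacing)],
                             indexing="ij")
    src_full = np.stack([zz, yy, xx], axis=-1)
    dst_full = src_full + field

    # Fit on every subsample-th voxel to keep the least-squares problem small.
    src = src_full[::subsample, ::subsample, ::subsample].reshape(-1, 3)
    dst = dst_full[::subsample, ::subsample, ::subsample].reshape(-1, 3)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u_svd, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u_svd.T))
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u_svd.T
    translation = dst.mean(0) - rotation @ src.mean(0)

    # Residual displacement after removing the rigid motion, for every voxel.
    rigid_dst = src_full.reshape(-1, 3) @ rotation.T + translation
    return (dst_full.reshape(-1, 3) - rigid_dst).reshape(field.shape)
```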

In particular, first a segmentation of the 1st and 2nd CT images is made to obtain two input images and two binary segmentation images. From these four images the transformation matrix (a 4x4 matrix containing translation and rotation) is determined. With the transformation matrix, the two input images and the two segmentation images, the forward deformation field and the inverse deformation field are determined.

More particularly, first the transformation matrix is determined, and subsequently the deformation field and the inverse deformation field are determined. The transformation matrix is also used for image transformation (together with the inverse deformation field), as only the transformation matrix contains information representative of movements caused by translation or rotation of the patient. Furthermore, the transformation matrix, the forward deformation field (for continuous point data) and the inverse deformation field (for volume data) are used for RT structure transformation.

In this embodiment, the deformation field or the inverse deformation field on its own is insufficient for the purpose of object transformation. In other words, at least the transformation matrix and one of the deformation fields are required.

Fig. 5 illustrates a workflow in accordance with an embodiment of the invention. In a first step, a 1st CT image is generated. In a second step, an irradiation planning is performed on the basis of the 1st CT image. In a third step, the irradiation is performed. In a fourth step, a second CT image is generated. In a fifth step, the 1st and 2nd images are processed, as described above.

Further embodiments

According to an embodiment there is provided a method comprising: obtaining at least first and second CT images of an object, in particular a patient or a portion thereof; performing a non-rigid registration of the first and second images to determine a deformation field describing a deformation of the object from the first CT image to the second CT image; using the deformation field to determine a transformation of first image portions associated with the first CT image into second image portions associated with the second CT image, wherein the first and second image portions may represent selected contours and/or dosages, in particular contours of organs or irradiation regions and irradiation dosages, respectively; and generating one or more images to visualise the deformation field, wherein the one or more images may be superimposed on the first and/or second CT images. The method may further comprise processing the deformation field to eliminate a rigid portion thereof attributable to rotation and/or translation of the object.

The method may further comprise generating a transformation matrix describing a transformation of the first CT image into the second CT image; determining said deformation field from the transformation matrix; and additionally using the transformation matrix to determine the transformation of the first image portions associated with the first CT image into the second image portions associated with the second CT image. The method may further comprise determining an inverse deformation field describing a deformation of the object from the second CT image to the first CT image; and additionally using the inverse deformation field to determine the transformation of the first image portions associated with the first CT image into the second image portions associated with the second CT image. The method may further comprise making a segmentation of the first and second CT images, thereby generating first and second segmentation images; determining said transformation matrix from the first and second CT images and the first and second segmentation images; and determining said deformation field and said inverse deformation field from the first and second CT images, the first and second segmentation images, and the transformation matrix.

Preferably, the visualisation step comprises generating a colour-coded representation of the absolute value of each of the entries of the deformation field.

Preferably, the deformation field contains 3D vectors, and the step to visualise the deformation field comprises: projecting each of the 3D vectors onto a 2D sectional plane; and generating a colour-coded representation of the projected 2D vectors based on the absolute value of the corresponding 3D vectors.

Preferably, the step to visualise the deformation field comprises displaying a fraction of the 2D vectors selected in accordance with a predetermined selection criterion. The method may further comprise filtering entries of the deformation field depending on the degree of deformation, in particular entries representative of selected types of deformation such as movements or changes of the object's anatomy.

The method may further comprise filtering entries of the deformation field depending on the size of the region affected by the deformation, in particular entries representative of a deformation that affects a region whose size is not within a predetermined size range.

The method may further comprise iteratively transforming the first image on the basis of filtered entries of the deformation field to obtain a succession of images representing different magnitudes of deformation.

The method may further comprise generating and displaying a histogram of the entries of the deformation field, in particular in respect of a selected region of the object or first or second images. Preferably, the first image represents an array of 2D polygons, wherein the method comprises: generating a volume from the 2D polygons; transforming the volume; and intersecting the volume with one or more predetermined planes.

It will be appreciated that the above described embodiments are described as examples only, and that modifications to these embodiments are included within the scope of the appended claims.