
Title:
IMAGE DATA PROCESSING DEVICE
Document Type and Number:
WIPO Patent Application WO/2017/198518
Kind Code:
A1
Abstract:
An image data processing device (100) comprising a transformation unit (102) configured to receive two distinct sets of image data (104, 106) containing respective image features and pertaining to an identical external object, and to perform a partial-volume correction algorithm using both sets of image data to obtain a partial-volume corrected set of image data containing a partial-volume corrected image feature. It further comprises a processing unit (108) configured to determine mapped image data based on the partial-volume corrected set of image data, to receive sample mapped image data (110), to identify significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding mapped internal contrast features of the statistical population, and to provide output information that depends on the identified significant mapped internal contrast features.

Inventors:
WENZEL FABIAN (NL)
ZAGORCHEV LYUBOMIR GEORGIEV (NL)
STEHLE THOMAS HEIKO (NL)
MEYER CARSTEN (NL)
BERGTHOLDT MARTIN (NL)
Application Number:
PCT/EP2017/061246
Publication Date:
November 23, 2017
Filing Date:
May 11, 2017
Assignee:
KONINKLIJKE PHILIPS NV (NL)
International Classes:
G06T7/00
Domestic Patent References:
WO2014139024A1, 2014-09-18
WO2009134820A2, 2009-11-05
WO2009065079A2, 2009-05-22
Foreign References:
US20090164132A1, 2009-06-25
US20110199084A1, 2011-08-18
Attorney, Agent or Firm:
FAIRLEY, Peter, Douglas et al. (NL)
Claims:
CLAIMS:

1. An image data processing device (100) comprising:

a transformation unit (102) which is configured to:

- receive first (104) and second (106) mutually registered sets of image data representing measurement data acquired by different measurement techniques from an identical external object of an object type, the respective sets of image data containing at least one respective image feature (202.a, 202.b) with respective internal contrast features (204.b, 206.b, 208.b), wherein the first set of image data represents a first image having a higher spatial resolution than a second image represented by the second set of image data;

- assign, based on at least one predetermined segmentation criterion, segmentation information to the image data of the first set of image data, the segmentation information representing a segmentation of the first image into at least two image segments; and

- perform a partial-volume correction algorithm using the first and the second sets of image data including the segmentation information, and provide a partial-volume corrected set of image data (200.c) representing an image with a partial-volume corrected image feature (202.c) and a corrected spatial resolution higher than that of the second image; and

a processing unit (108), which is configured to:

- receive the partial-volume corrected set of image data (200.c);

- determine mapped image data (300.a) pertaining to the external object according to a predetermined mapping rule transforming the partial-volume corrected image feature (202.c) into a mapped image feature (302.a) having a predetermined reference shape (304);

- receive sample image data (110) pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type, the sample mapped image features (302.b) having the reference shape (304);

- perform, using the received mapped image data (300.a) and the received sample image data (110), a statistical comparison of the internal contrast features (312) of the mapped image feature with respect to the corresponding sample of mapped internal contrast features of the statistical population; and

- provide output data depending on a result of the statistical comparison.

2. The image data processing device of claim 1, wherein the processing unit is configured to

- identify, using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population; and

- provide output data depending on the identified significant mapped internal contrast features.

3. The image data processing device of claim 1, wherein the transformation unit is further configured, before determining and providing the mapped image data, to

- identify and select image points of the partial-volume corrected image feature using a predetermined image data selection criterion, and

- determine the mapped image data using only the selected image data.

4. The image data processing device of claim 1, wherein the first set of image data is magnetic resonance image data acquired by a magnetic resonance imaging technique or computed tomography image data acquired by a computed tomography scan technique, and wherein the second set of image data is positron emission tomography image data acquired by a positron emission tomography imaging technique.

5. The image data processing device of claim 1, wherein the first and second sets of image data, the partial-volume corrected set of image data, the mapped image data, and the sample image data comprise a respective plurality of voxels in a three-dimensional coordinate grid, and wherein the processing unit is configured to identify the significant mapped internal contrast features voxel-wise.

6. The image data processing device of claim 2, wherein

- the processing unit is further configured to provide inverse-mapped image data of the identified significant mapped internal contrast feature by applying to the mapped image data of the identified significant mapped internal contrast features a predetermined second mapping rule, which is inverse with respect to the mapping rule applied to the partial-volume corrected image data;

and wherein the image data processing device further comprises:

- an output user interface, which receives the inverse-mapped data and is configured to provide a graphical representation of the inverse-mapped image data for graphical output together with the partial-volume corrected image feature.

7. A method (600) for operating an image data processing device, the method comprising:

- receiving (602) first and second mutually registered sets of image data representing measurement data acquired by different measurement techniques from an identical external object of an object type, the respective sets of image data containing at least one respective image feature with respective internal contrast features, wherein the first set of image data represents a first image having a higher spatial resolution than a second image represented by the second set of image data;

- assigning (604), based on at least one predetermined segmentation criterion, segmentation information to the image data of the first set of image data, the segmentation information representing a segmentation of the first image into at least two image segments;

- performing (606) a partial-volume correction algorithm using the first and the second sets of image data including the segmentation information;

- providing (608) a partial-volume corrected set of image data representing an image with a partial-volume corrected image feature and a corrected spatial resolution higher than that of the second image; and

- determining (610) mapped image data according to a predetermined mapping rule transforming the partial-volume corrected image feature into a mapped image feature having a predetermined reference shape;

- receiving (612) sample image data pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type, the sample mapped image features having the reference shape;

- performing, using the received mapped image data (300.a) and the received sample image data (110), a statistical comparison of the internal contrast features (312) of the mapped image feature with respect to the corresponding sample of mapped internal contrast features of the statistical population; and

- providing output data depending on a result of the statistical comparison.

8. The method of claim 7, wherein performing a statistical comparison further comprises:

- identifying (614), using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population.

9. The method of claim 7, further comprising, before determining the mapped image data:

- identifying and selecting image points of the partial-volume corrected image feature (502) using a predetermined image data selection criterion; and

- determining the mapped image data (504) using only the selected image data.

10. The method of claim 8, further comprising:

- providing inverse-mapped image data of the identified significant mapped internal contrast feature by applying to the mapped image data of the identified significant mapped internal contrast features a predetermined second mapping rule, which is inverse with respect to the mapping rule applied to the partial-volume corrected image data; and

- providing a graphical representation of the inverse-mapped image data for graphical output together with the partial-volume corrected image feature.

11. A computer program for controlling an image data processing device, the computer program comprising executable code for executing the method of claim 7 when executed by a processor of a computer.

Description:
Image data processing device

FIELD OF THE INVENTION

The present invention relates to an image data processing device, a method for operating an image data processing device and to a computer program.

BACKGROUND OF THE INVENTION

Statistical testing and analysis performed by comparing contrast features of image data with sample image data that forms a statistical population is a very powerful tool in various fields such as, but not limited to, medical imaging. The Society of Nuclear Medicine Procedure Guideline for FDG PET Brain Imaging (Version 1.0, approved February 8, 2009) by Alan D. Waxman et al. states in Section I that, owing to advancements in image processing technology developed in brain mapping research, automated or semi-automated brain mapping techniques can be applied to routine clinical studies, and that, in conjunction with pixel-wise comparison to normal data or a database, individual statistical maps may be used to provide additional information to conventional image interpretation. The diagnostic accuracy may nevertheless also be affected by differences in image characteristics between individual cases and cases included in the normal database.

WO 2014/139024 Al describes planning, navigation and simulation systems and methods for minimally invasive therapy in which the planning method and system uses patient specific pre-operative images. The planning system allows for multiple paths to be developed from the pre-operative images, and scores the paths depending on desired surgical outcome of the surgery and the navigation systems allow for minimally invasive port based surgical procedures, as well as craniotomies in the particular case of brain surgery.

WO 2009/134820 A2 describes a method and apparatus for magnetic source magnetic resonance imaging. The method includes collecting energy signals from an object, providing additional information of characteristics of the object, and generating the image of the object from the energy signals and from the additional information such that the image includes a representation of a quantitative estimation of the characteristics. The additional information may comprise predetermined characteristics of the object, a magnitude image generated from the object, or magnetic signals collected from different relative orientations between the object and the imaging system. The image is generated by an inversion operation based on the collected signals and the additional information. The inversion operation minimizes a cost function obtained by combining the data extracted from the collected signals and the additional information of the object. Additionally, the image is used to detect a number of diagnostic features including microbleeds, contrast agents and the like.

WO 2009/065079 A2 describes devices, systems and techniques that relate to detecting volumetric changes in imaged structures over time, including devices, systems and techniques that enable precise registration of structures (e.g., brain areas) after large or subtle deformations, allowing precise registration at small spatial scales with a low boundary contrast.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide an improved device and method capable of applying statistical testing and analysis techniques to image data showing different image features compared to those of images included in a reference database.

According to a first aspect of the present invention, an image data processing device is presented. The image data processing device comprises a transformation unit and a processing unit. The transformation unit is configured to:

receive first and second mutually registered sets of image data representing measurement data acquired by different measurement techniques from an identical external object of an object type, the respective sets of image data containing at least one respective image feature with respective internal contrast features, wherein the first set of image data represents a first image having a higher spatial resolution than a second image represented by the second set of image data;

assign, based on at least one predetermined segmentation criterion, segmentation information to the image data of the first set of image data, the segmentation information representing a segmentation of the first image into at least two image segments; and

perform a partial-volume correction algorithm using the first and the second sets of image data including the segmentation information, and provide a partial-volume corrected set of image data representing an image with a partial-volume corrected image feature and a corrected spatial resolution higher than that of the second image.

The processing unit is configured to:

receive the partial-volume corrected set of image data;

determine mapped image data pertaining to the external object according to a predetermined mapping rule transforming the partial-volume corrected image feature into a mapped image feature having a predetermined reference shape;

receive sample image data pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type, the sample mapped image features having the reference shape;

perform, using the received mapped image data and the received sample image data, a statistical comparison of the internal contrast features of the mapped image feature with respect to the corresponding sample of mapped internal contrast features of the statistical population; and

provide output data depending on a result of the statistical comparison.

The image data processing device of the first aspect of the present invention is therefore suitably configured to use the complementary information obtained from two mutually registered sets of image data that represent an identical external object, to perform a partial-volume correction algorithm based on both sets of image data, and to statistically compare mapped internal contrast features by comparing the mapped image data to sample image data. The sets of image data are acquired by different measurement techniques and therefore provide information pertaining to different characteristics of the object. The first set of image data represents a first image that has a higher spatial resolution than a second image represented by the second set of image data, meaning that the shape of the object represented by the image feature is rendered more faithfully by the first set of image data than by the second set of image data. This is caused by the different resolutions of the measurement techniques applied to acquire the sets of image data. In the case of the second image set, the lower spatial resolution of the image represented by the image data can be due to partial-volume effects (PVE) in the form of, for example, cross-contamination between different image regions caused by the point spread function of a measurement system performing the measurement technique used to acquire the second set of image data.

Another source of partial-volume effects is a low pixel or voxel resolution of the second set of image data, especially in cases where a pixel or voxel covers image regions pertaining to different image features. The image feature represented by the set of image data is distinguishable based on, e.g., a given RGB component intensity range or a brightness value range. Besides, the image feature further contains internal contrast features, which differentiate regions within the image feature showing, for example, different RGB component intensity sub-ranges or different brightness value sub-ranges. Other reasons for a lower resolution are not due to technical limitations of a given imaging modality, but may be an effect of practical limitations. Take as an example a reduced scanning time used with an imaging modality that as such is capable of providing high-resolution image data, given a scanning time long enough for obtaining such high spatial resolution. The present invention allows turning this into an advantage in that it can be used to actually limit scan times when obtaining image data, since low-resolution images obtained with such limited scan times can be enhanced in this regard by post-processing in accordance with the present invention.

In order to assign segmentation information to the image data of the first set of image data, the first image is segmented or parcellated into at least two image segments. This may be done by identifying certain known reference features which are common to most objects of the object type and which allow for a significant spatial correspondence between the image features of both sets of image data.
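Purely as an illustration of how such segmentation information might be assigned, the following Python sketch labels every voxel of the first image with one of two segments based on a simple intensity threshold. The function name and the threshold criterion are assumptions of this illustration and not part of the application; atlas- or model-based segmentation would be used in practice.

    import numpy as np

    def segment_first_image(mr_volume, threshold=None):
        # Illustrative predetermined segmentation criterion: voxels whose
        # intensity exceeds a threshold form one segment, all others the other.
        if threshold is None:
            threshold = mr_volume.mean()  # crude default split point
        return (mr_volume > threshold).astype(np.uint8)  # labels in {0, 1}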

Further, the image data processing device is configured to perform a partial-volume correction algorithm to combine the higher spatial resolution of the first set of image data with the measurement data represented by the second set of image data. Several algorithms suitable for performing a partial-volume correction are known to the person skilled in the art. These can be divided into three major subgroups: image enhancement techniques, which primarily rely on recovering resolution directly from the second set of image data; image-domain correction techniques, which rely on information from the first set of image data to determine the appropriate correction; and projection-based correction techniques.
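As a minimal sketch of an image-domain correction of the kind mentioned above, the following code implements a simple region-based (geometric-transfer-matrix style) correction under illustrative assumptions: an isotropic Gaussian point spread function, a segmentation already resampled onto the grid of the second set of image data, and hypothetical function names. It is one known technique from the image-domain subgroup, not the specific algorithm of the application.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gtm_partial_volume_correction(pet, labels, psf_sigma_vox):
        # pet: functional (second) image resampled onto the segmentation grid
        # labels: integer segmentation derived from the first (high-res) image
        # psf_sigma_vox: assumed Gaussian PSF width of the scanner, in voxels
        regions = np.unique(labels)
        masks = [(labels == r).astype(float) for r in regions]
        rsf = [gaussian_filter(m, psf_sigma_vox) for m in masks]  # regional spread
        n = len(regions)
        gtm = np.empty((n, n))
        observed = np.empty(n)
        for i, mi in enumerate(masks):
            w = mi.sum()
            observed[i] = (pet * mi).sum() / w          # measured mean in region i
            for j in range(n):
                gtm[i, j] = (rsf[j] * mi).sum() / w     # spill of region j into i
        true_means = np.linalg.solve(gtm, observed)     # corrected regional means
        corrected = np.zeros(pet.shape)
        for j, mj in enumerate(masks):
            corrected += true_means[j] * mj             # piecewise-constant PVC image
        return corrected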

The result of applying the partial-volume correction algorithm to the first and the second sets of image data is a partial-volume corrected set of image data that represents an image with a partial-volume corrected image feature. The partial-volume corrected set of image data has a corrected spatial resolution that is higher than that of the second image.

The processing unit receives the partial-volume corrected set of image data representing an image with a partial-volume corrected image feature from the transformation unit and determines mapped image data according to a predetermined mapping rule. This mapping rule transforms the partial-volume corrected image feature into a mapped image feature having a predetermined reference shape.

The processing unit is thus configured to transform the shape of the partial-volume corrected image feature into a predetermined reference shape that includes mapped image features. The predetermined mapping rule is suitably designed to map the internal contrast features (i.e., the regions within the image feature that have different color or brightness value ranges) into the mapped image feature. This is preferably performed in such a way that the position of a given mapped internal contrast feature within the mapped image feature can be traced back to the position of the corresponding internal contrast feature of the partial-volume corrected image feature. Since the mapping rule transforms the shape of the image feature into the reference shape, it also changes the shape of the internal contrast features. The result of this mapping is mapped image data representing a mapped image feature whose mapped internal contrast features have positions within the mapped image feature that are related, via the mapping rule, to the positions of the corresponding internal contrast features of the partial-volume corrected image feature.
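A very simple illustration of such a traceable mapping rule is given below: the bounding box of the image feature is resampled by nearest-neighbour lookup onto a fixed reference grid, and the source index of every reference voxel is recorded so that mapped positions can be traced back as described. The resampling scheme, grid size and function name are assumptions of this illustration; an actual embodiment (e.g. cortical surface inflation) would use a far more elaborate rule.

    import numpy as np

    def map_to_reference_shape(volume, feature_mask, ref_shape=(64, 64, 64)):
        # Resample the bounding box of the image feature onto a fixed
        # reference grid; record which source voxel fed each reference voxel.
        coords = np.argwhere(feature_mask)
        lo, hi = coords.min(axis=0), coords.max(axis=0)
        axes = [np.linspace(l, h, n) for l, h, n in zip(lo, hi, ref_shape)]
        zz, yy, xx = np.meshgrid(*axes, indexing="ij")
        src_idx = np.stack([zz, yy, xx]).round().astype(int)
        mapped = volume[src_idx[0], src_idx[1], src_idx[2]]
        inside = feature_mask[src_idx[0], src_idx[1], src_idx[2]]
        mapped = np.where(inside, mapped, np.nan)   # keep only the image feature
        return mapped, src_idx                      # src_idx allows inverse mapping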

The processing unit also receives sample image data pertaining to a plurality of sample mapped image features that belong to a statistical population of further external objects of the object type. Since the sample mapped image features also have the same reference shape, the processing unit is advantageously configured to identify significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population. The sample image data thus forms a statistical population to which the transformed image data can be compared in search of predetermined statistical deviations. This identification of significant mapped internal contrast features can be performed due to the fact that both the image data and the sample image data have been equivalently transformed and both sets of image data share the same reference shape, regardless of the original shape of the image feature, which is likely to be different and thus not matching.

Therefore, the image data processing device is suitably configured to provide a partial-volume corrected set of image data representing an image with a higher spatial resolution than the image represented by the second set of image data, to map the partial-volume corrected image feature onto a mapped image feature that has a reference shape, and to compare the mapped image feature with sample mapped image features of objects of the same object type even in cases where the image features of the individual objects of the same object type do not have the same shape.

The processing unit is also configured to provide output data depending on the comparison result.

In the following, embodiments of the first aspect of the present invention will be presented. In one application case, the processing unit is configured to identify, using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population; and to provide output data depending on the identified significant mapped internal contrast features.

In some embodiments, the transformation unit and the processing unit share a common housing, whereas in other embodiments they can be separate units connected to each other. In these embodiments, the transfer of the partial-volume corrected set of image data from the transformation unit to the processing unit is performed via a patch cable or a wireless communication system. In other embodiments, the transformation unit and the processing unit are not connected to each other, and the transfer of the partial-volume corrected set of image data from the transformation unit to the processing unit is performed via a data storage device that is configured to be connected to the transformation unit to receive and store the partial-volume corrected set of image data, and to be connected to the processing unit to provide the stored partial-volume corrected set of image data to it.

In some embodiments, the first and second mutually registered sets of image data are acquired simultaneously or sequentially within a short time window. This allows reducing a potential mismatch between the two sets of image data caused by, e.g., morphological variations of the object under measurement such as, but not limited to, a change of size or of orientation of the object.

In some embodiments the processing unit is also configured to identify at least one landmark feature of the partial-volume corrected image feature using at least a section of that image feature and a set of pre-stored landmark template features. The landmark features included in the landmark template features represent certain features that are expected to be present in the partial-volume corrected image feature regardless of its actual shape. The processing unit provides the mapped image data using the partial-volume corrected set of image data and the identified landmark features according to a predetermined mapping rule. In some embodiments, the mapping rule transforms the partial-volume corrected image feature into an inflated image feature having the reference shape and further maps the internal contrast features onto the inflated image feature.

In some embodiments of the image data processing device, the transformation unit is further configured to identify and select image points of the partial-volume corrected image feature using a predetermined image data selection criterion and to determine the mapped image data using only the selected image data. The predetermined image data selection criterion is, in some embodiments, based on a contrast value difference within a set of image data. The contrast is in some embodiments a brightness contrast on a grayscale and in other embodiments an intensity contrast of a given color component, such as, for example, the red, green or blue color components in an RGB color code. Based on this predetermined image data selection criterion, the transformation unit identifies and selects image points of the partial-volume corrected image feature and then determines the mapped image data using only the selected image data. These embodiments are therefore configured to provide mapped image data of only those parts of the image with a partial-volume corrected image feature that are selected based on a predetermined image data selection criterion. In some embodiments, the selected image data just comprises the partial-volume corrected image feature. In other embodiments, the selected image data comprises only parts of the partial-volume corrected image feature that are determined based, for example, on a contrast difference. Only the selected image data is then mapped according to the predetermined mapping rule. These embodiments are thus advantageously configured to reduce the computational complexity associated with the provision of mapped image data by mapping only relevant regions of the image represented by the image data.
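A sketch of such a predetermined image data selection criterion, assuming grayscale brightness values (the function name and the simple range test are illustrative only), could look as follows; only the voxels selected by the returned mask would then be passed to the mapping rule.

    import numpy as np

    def select_feature_points(pvc_volume, min_brightness, max_brightness=np.inf):
        # Keep only image points whose brightness lies inside a given range.
        return (pvc_volume >= min_brightness) & (pvc_volume <= max_brightness)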

Among the imaging modalities that can profit from the image data processing of the present invention is positron emission tomography (PET), which is a nuclear medicine imaging technique used to observe metabolic processes in an object by detecting pairs of gamma rays emitted indirectly by a tracer. PET scans tend to appear blurred for a number of reasons, such as the limited resolution of the PET image acquisition equipment, or in cases where the shape of the measured object is convoluted, thus having a folded shape which cannot be fully resolved in the second set of image data due to the limited resolution of the measurement equipment. PET further provides information regarding tracer uptake and metabolic activity in an image feature in the form of internal contrast features. For example, those areas of the object where metabolic activity is taking place will take up a larger quantity of the tracer, and thus more gamma rays will be emitted from those areas. The information regarding this exemplary metabolic activity is only available from the second set of image data, i.e. the PET image data. In order to increase the spatial resolution of a PET set of image data, a partial-volume correction is performed using an alternative set of image data obtained by magnetic resonance imaging techniques or by computed tomography scan techniques. These alternative sets of image data represent an image having a higher spatial resolution than the image represented by the PET set of image data. By performing the partial-volume correction, a partial-volume corrected set of image data is obtained, which has a corrected spatial resolution higher than that of the PET set of image data. On the other hand, in cases where the shape of the object is highly variable when compared to other objects of the same object type (e.g. the cerebral cortex), a statistical analysis of a partial-volume corrected set of image data by direct comparison with sample image data belonging to a statistical population of further objects of the same object type is not possible. This is due to the fact that the differences in shape between objects, which were not resolved in PET images due to their low resolution, are now distinguishable in the partial-volume corrected set of image data. Therefore, it would be advantageous to be able to perform statistical analysis on PET sets of image data that have been partial-volume corrected to enhance their spatial resolution.

It should be noted that the use of PET image data is an advantageous, but non-restrictive example and that the image data processing device of the present invention is applicable in a similar manner to image data obtained using other imaging modalities. For instance, another application case involves a statistical comparison of fractional anisotropy values obtained by a low-resolution diffusion magnetic resonance (MR) imaging modality for white matter areas in the brain with reference data as imaged using a high-resolution anatomical MR modality, in order to identify areas with white matter lesions by detection of abnormal fractional anisotropy. Thus, in such embodiments, the image data processing device is advantageously configured to provide a statistical comparison of fractional anisotropy values for white matter areas in the brain. In these embodiments, high-resolution anatomical MR data is compared to reference data to identify areas with white matter lesions based on abnormal fractional anisotropy.

Application of the present invention is not restricted to brain imaging. Another exemplary application case from the field of cardiology is a statistical comparison of blood flow values from Doppler ultrasound data, forming a low-resolution imaging modality, in blood vessels to normative data which have been previously segmented in high resolution, for instance by MR imaging, in order to identify areas of abnormal blood flow. The present application case forms an example of an embodiment of the present invention that makes use of image data obtained by at least one non-tomographic imaging modality.

In some embodiments the image data processing device is advantageously configured to process the first and the second mutually registered sets of image data, wherein the first set of image data is magnetic resonance image data acquired by a magnetic resonance imaging technique or computed tomography image data acquired by a computed tomography scan technique, and wherein the second set of image data is positron emission tomography image data acquired by a positron emission tomography imaging technique. These embodiments are suitably configured to perform a partial-volume correction algorithm using magnetic resonance image data or computed tomography image data as a first set of image data and positron emission tomography image data as a second set of image data, to map the partial-volume corrected image feature onto a mapped image feature having a reference shape, and to statistically compare the mapped image feature to a plurality of sample mapped image features belonging to a statistical population of external objects of the object type.

In some embodiments, the first and second sets of image data, the partial-volume corrected set of image data, the mapped image data, the sample image data, and the sample transformed image data comprise a respective plurality of voxels in a three-dimensional coordinate grid, and the processing unit is configured to identify the significant mapped internal contrast features voxel-wise. These sets of image data are divided into voxels that have a respective position in a three-dimensional coordinate grid. Each voxel has a respective voxel value indicative of, for example, an amount of brightness or an amount of color component intensity. The voxel values of the voxels forming the mapped image data are then compared to those of the respective voxels of the sample image data to identify the significant mapped internal contrast features. In some embodiments only the voxels forming the mapped image feature are used for the identification of the significant mapped internal contrast features.

In some embodiments according to the first aspect of the present invention, the processing unit is further configured to provide inverse-mapped image data of the identified significant mapped internal contrast feature by applying to the mapped image data of the identified significant mapped internal contrast features a predetermined second mapping rule. This second mapping rule is inverse with respect to the mapping rule applied to the partial-volume corrected image data, so that the partial-volume corrected image feature can be re-established from the mapped image data. The image data processing device of these embodiments further comprises an output user interface, which receives the inverse-mapped image data and is configured to provide a graphical representation of the inverse-mapped image data for graphical output together with the partial-volume corrected image feature. These embodiments are thus advantageously configured to output a graphical representation of the significant mapped internal contrast features based on the partial-volume corrected image data received, i.e. indicating the significant mapped internal contrast features on this image data. In other embodiments, the output user interface receives the mapped image data and is configured to provide a graphical representation of the mapped image data for graphical output together with the reference shape.
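For illustration only: if the forward mapping recorded the source index of every reference-grid voxel (as in the sketch given earlier), such a second, inverse mapping rule could be approximated by scattering the significance flags back onto the original grid. The function below is a hypothetical sketch under that assumption, not the claimed second mapping rule.

    import numpy as np

    def inverse_map_significance(significant_mask, src_idx, volume_shape):
        # significant_mask: boolean map on the reference shape
        # src_idx: source indices recorded by the forward mapping, shape (3, *ref_shape)
        overlay = np.zeros(volume_shape, dtype=bool)
        sel = src_idx[:, significant_mask]   # source voxels of significant features
        overlay[sel[0], sel[1], sel[2]] = True
        return overlay                       # overlay on the PVC image feature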

According to a second aspect of the present invention a method for operating an image data processing device is presented. The method comprises:

- receiving first and second mutually registered sets of image data (402) representing measurement data acquired by different measurement techniques from an identical external object of an object type, the respective sets of image data containing at least one respective image feature with respective internal contrast features, wherein the first set of image data represents a first image having a higher spatial resolution than a second image represented by the second set of image data, the object having a folded shape and forming a folded image feature in view of adjacent surroundings external to the object, the folded image feature further containing internal contrast features;

- assigning, based on at least one predetermined segmentation criterion, segmentation information to the image data of the first set of image data, the segmentation information representing a segmentation of the first image into at least two image segments;

- identifying at least one landmark feature of the folded image feature using at least a section of the folded image feature and a set of pre-stored landmark template features;

- performing a partial-volume correction algorithm using the first and the second sets of image data including the segmentation information;

- providing a partial-volume corrected set of image data representing an image with a partial-volume corrected image feature and a corrected spatial resolution higher than that of the second image;

- determining mapped image data according to a predetermined mapping rule transforming the partial-volume corrected image feature into a mapped image feature having a predetermined reference shape;

receiving sample image data pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type, the sample mapped image features having the reference shape;

identifying, using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population; and

- providing output data depending on the identified significant mapped internal contrast features;

- providing transformed image data using the received image data and the identified landmark features, the transformed image data being determined according to a predetermined transformation rule transforming the folded image feature into an inflated transformed image feature having an unfolded reference shape and mapping the internal contrast features to form mapped internal contrast features contained by the transformed image feature, the transformed image data thus representing a transformed object having the unfolded reference shape;

receiving sample image data representing sample transformed objects that form a statistical population of sample transformed objects having the unfolded reference shape, each sample transformed object having respective sample mapped internal contrast features;

performing, using the received mapped image data and the received sample image data, a statistical comparison of the internal contrast features of the mapped image feature with respect to the corresponding sample of mapped internal contrast features of the statistical population; and

- providing output data depending on a result of the statistical comparison.

The method of the second aspect of the present invention shares the advantages of the image data processing device of the first aspect.

In the following, embodiments of the second aspect of the present invention will be described.

In one embodiment, performing a statistical comparison further comprises identifying significant mapped internal contrast features that differ from corresponding sample mapped internal contrast features of the statistical population in a statistically significant manner. Preferably, in this embodiment, output data is subsequently provided which depends on the identified significant mapped internal contrast features.

In some embodiments of the method according to the second aspect of the invention, the method further comprises, before determining the mapped image data, identifying and selecting image points of the partial-volume corrected image feature using a predetermined image data selection criterion, and determining the mapped image data using only the selected image data.

Other embodiments of the method of the second aspect further comprise providing inverse-mapped image data of the identified significant mapped internal contrast feature by applying to the mapped image data of the identified significant mapped internal contrast features a predetermined second mapping rule, which is inverse with respect to the mapping rule applied to the partial-volume corrected image data, and providing a graphical representation of the inverse-mapped image data for graphical output together with the partial-volume corrected image feature.

The different embodiments of the methods of the second aspect of the present invention share the advantages of the embodiments of the image data processing device of the first aspect of the present invention.

According to a third aspect of the present invention, a computer program is presented. The computer program is configured to control an image data processing device, the computer program comprising executable code for executing the method of the second aspect of the present invention or one of its embodiments when executed by a processor of a computer.

The computer program of the third aspect shares the advantages of the method of the second aspect of the invention.

It shall be understood that the image data processing device of claim 1, the method for operating an image data processing device of claim 7 and the computer program of claim 11 have similar and/or identical preferred embodiments, in particular, as defined in the dependent claims.

It shall be understood that a preferred embodiment of the present invention can also be any combination of the dependent claims or above embodiments with the respective independent claim.

These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following drawings:

Fig. 1 shows an exemplary block diagram of an embodiment of an image data processing device.

Fig. 2a shows an exemplary image represented by a first set of image data.

Fig. 2b shows an exemplary image represented by a second set of image data.

Fig. 2c shows an exemplary partial-volume corrected image represented by partial-volume corrected image data.

Fig. 3a shows an exemplary reference shape with mapped internal contrast features.

Fig. 3b shows the exemplary reference shape with other mapped internal contrast features.

Fig. 4 shows another exemplary partial-volume corrected image represented by partial-volume corrected image data.

Fig. 5 shows another exemplary reference shape with mapped internal contrast features.

Fig. 6 shows a flow diagram of an exemplary embodiment of a method for operating an image data processing device.

DETAILED DESCRIPTION OF EMBODIMENTS

Fig. 1 shows a schematic block diagram of an exemplary image data processing device 100. The description of this embodiment will be given with reference to Figs. 2a-2c and Figs. 3a-3b, which show respective images represented by different sets of image data, as will be explained in the following. The image data processing device comprises a transformation unit 102 which is configured to receive first 104 and second 106 sets of image data representing measurements acquired by different measurement techniques from an identical external object of an object type. Fig. 2a shows an image 200.a of an object represented by a first set of image data, whereas Fig. 2b shows another image 200.b of the same object represented by a second set of image data. Both images contain a respective image feature 202.a and 202.b. Image 200.b also presents internal contrast features, symbolized in Fig. 2b by different filling patterns. Three exemplary internal contrast features are labeled 204.b, 206.b and 208.b in Fig. 2b. The first and the second sets of image data are acquired by different measurement techniques. For example, the first and the second sets of image data are, in a particular embodiment, acquired by magnetic resonance imaging and by positron emission tomography imaging techniques, respectively. The image represented in Fig. 2a has a higher spatial resolution than the image shown in Fig. 2b. Thus, a convoluted shape (e.g. a shape having folds and grooves) of the object is represented in a more accurate manner by the image in Fig. 2a than by the image in Fig. 2b.

In some embodiments, the internal contrast features 204.b, 206.b and 208.b are distinguishable from each other and from adjacent surroundings based on an intensity range of a given color component (e.g. in an RGB color code), or on a brightness value range in a grayscale code. In some particularly advantageous embodiments, the image data comprises a plurality of pixels or voxels in a coordinate pixel or voxel grid, where each pixel or voxel has at least a respective pixel value or voxel value that is indicative of the intensity of a given color component or indicative of a brightness component in a grayscale.

The transformation unit 102 performs a partial-volume correction algorithm using both sets of image data and provides a partial-volume corrected set of image data representing an image 200.c having a partial-volume corrected image feature 202.c, as shown in Fig. 2c. Image 200.c has a higher spatial resolution than image 200.b and shows internal contrast features (e.g. 204.c, 206.c and 208.c) that correspond to the internal contrast features 204.b, 206.b and 208.b of the image feature 202.b.

The image data processing device 100 further comprises a processing unit 108 that receives the partial-volume corrected set of image data 200.c and determines mapped image data 300.a as shown in Fig. 3a. The mapped image data pertains to the external object according to a predetermined mapping rule that transforms the partial-volume corrected image feature 202.c into a mapped image feature 302.a having a predetermined reference shape 304. The mapped image feature 302.a further contains mapped internal contrast features (e.g. 306, 308 and 310) that correspond to the internal contrast features 204.c, 206.c and 208.c of the partial-volume corrected image feature 202.c.

The processing unit 108 further receives sample image data 110 pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type. Fig. 3b shows exemplary mapped image data 300.b which represents a sample mapped image feature 302.b. The sample mapped image feature also has the reference shape 304. The fact that the mapped image feature 302.a and the sample mapped image features 302.b share the same shape allows for a comparison of the internal contrast features in a statistical manner. The processing unit thus identifies, using the received mapped image data and the received sample image data, significant mapped internal contrast features 312 of the mapped image feature 302.a that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population. In some embodiments this statistical analysis is performed voxel-wise, by comparing each voxel value of the mapped image feature with the voxel values of the sample mapped image features of the sample image data 110.

The sample image data represents a plurality of sample mapped image features that have the predetermined reference shape. They represent respective objects from a statistical population of objects of the object type. As an example, the mapped image features and the sample mapped image features represent partial-volume corrected images of the brain cortex of a plurality of individuals, obtained by performing a partial-volume correction algorithm on a set of PET image data and a set of magnetic resonance image data. Both the mapped image data and the sample image data comprise image features in accordance with a predetermined reference shape, although the mapped internal contrast features of the mapped image feature and the sample mapped internal contrast features obtained from the sample image data may differ. The processing unit 108 is also configured to identify significant mapped internal contrast features that differ from corresponding sample mapped internal contrast features of the statistical population in a statistically significant manner. The processing unit thus compares the mapped image feature with the sample mapped image features in search of statistically relevant deviations from the sample image data. Some embodiments use the mean value and standard deviation of the sample image data and identify mapped internal contrast features of the mapped image data that lie outside a predetermined contrast range centered at the mean value and having a range width given by twice the standard deviation. Other embodiments use other statistical approaches to identify the significant mapped internal contrast features that differ from the corresponding sample mapped internal contrast features in a statistically relevant manner. Finally, the processing unit provides output data depending on the identified significant mapped internal contrast features. The output data comprises, in some embodiments, relative position data pertaining to the identified mapped internal contrast feature within the mapped image feature having the reference shape. In other embodiments, the output data comprises approximate relative position data pertaining to the identified mapped internal contrast feature within the partial-volume corrected image feature. In some embodiments the output data further comprises intensity data pertaining to the statistical analysis, such as, for example, intensity data pertaining to the intensity difference between the mapped internal contrast feature and the sample image data.
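The mean plus/minus twice-the-standard-deviation criterion mentioned above can be sketched voxel-wise as follows. The stacking of the sample image data into a single array and the function name are assumptions of this illustration.

    import numpy as np

    def identify_significant_features(mapped, samples, n_sigma=2.0):
        # mapped:  mapped image data on the reference shape, shape (Z, Y, X)
        # samples: sample mapped image features of the population, shape (N, Z, Y, X)
        mean = samples.mean(axis=0)
        std = samples.std(axis=0, ddof=1)
        # Voxels lying outside the band mean +/- n_sigma * std are flagged.
        return np.abs(mapped - mean) > n_sigma * std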

In some embodiments, the determination of mapped image data by the processing unit is performed by identifying landmark features of the partial-volume corrected image feature using a set of pre-stored landmark template features. This is explained with reference to Figs. 4 and 5. Fig. 4 shows an example of a partial-volume corrected set of image data 400. This set of image data represents an image feature 402 in view of adjacent surroundings 404. The image feature is distinguishable from the surroundings based on a contrast difference (e.g. different values in a gray or RGB color scale). The image feature 402 further contains internal contrast features. These are labeled in Fig. 4 as 406.a-c. These internal contrast features are also distinguishable based on a contrast difference (e.g. in some embodiments this contrast difference is based on the brightness values in a grayscale). The landmark features 408.a and 408.b included in the set of landmark template features represent certain features that are considered to be present in the partial-volume corrected image feature regardless of the actual shape (e.g. determined by folds and grooves) that the represented object presents. In this particular case, the transformation unit is configured to identify these landmark features among the plurality of folds and grooves of the partial-volume corrected image feature based on the set of landmark template features and to determine mapped image data according to a predetermined mapping rule that is based on the identified landmark features 408.a and 408.b. These identified landmark features serve as a position reference for the internal contrast features 406.a-c. In some embodiments, the image feature is divided into regions defined by the landmarks 408.a and 408.b, as shown by the dotted lines in Fig. 4. Fig. 5 shows mapped image data 500 that is the result of a transformation based on a predetermined mapping rule applied to the partial-volume corrected set of image data 400, wherein the mapped image feature 502 has a predetermined reference shape. Mapped positions 506.a and 506.b of the identified landmark features 408.a and 408.b on the mapped image feature also define reference regions of the mapped image feature. The internal contrast features 406.a-c are mapped onto the mapped image feature that has the predetermined reference shape. The mapped internal contrast features are labeled as 504.a-c and correspond to the internal contrast features 406.a-c, respectively. The fact that the mapped image feature 502 is in accordance with a predetermined reference shape means that, regardless of the respective folded structure of the image data, i.e., the individual folds and grooves of the partial-volume corrected image feature, the mapped image feature will have the predetermined reference shape, and a statistical analysis of the internal contrast features performed by comparing these to a sample set of image features, all having the same shape, is therefore possible.
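Purely as an illustration of how the identified landmarks can anchor such a mapping, the sketch below rescales a one-dimensional position coordinate along the image feature (e.g. arc length along its outline) piecewise-linearly so that each identified landmark coincides with its fixed counterpart on the reference shape. The coordinate parameterization and the function name are assumptions of this illustration, not the mapping rule of the application.

    import numpy as np

    def piecewise_landmark_map(positions, landmarks_src, landmarks_ref):
        # positions:     coordinates of image points along the image feature
        # landmarks_src: coordinates of the identified landmarks (e.g. 408.a, 408.b),
        #                assumed sorted in increasing order
        # landmarks_ref: their fixed target positions on the reference shape
        return np.interp(positions, landmarks_src, landmarks_ref)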

An exemplary embodiment of the image data processing device is advantageously configured to compare image data representing partial-volume corrected (PVC) positron emission tomography (PET) scans of an object that has a folded shape, such as a brain or a brain cortex, with sample image data also representing PVC-PET scans of other objects of the same object type. PET brain scans that are not partial-volume corrected appear blurred due to the limited resolution of the PET detector cells. This results in spill-in of gray matter PET activity into areas of white matter and vice versa. Applying partial-volume correction (PVC) based on magnetic resonance imaging (MRI) to PET scans enhances the resolution of the brain scans so that individual folds or ridges (gyri) and grooves or fissures (sulci) that were not distinguishable in the PET scan are now identifiable in the PVC-PET scan. In this case, and since the shape of the human cortex is highly variable, the partial-volume corrected image feature does not match those of the sample image data, due to the different shapes they present. Nevertheless, the presence of landmark features common to a majority of objects of an object type can be advantageously used to define mapped image features of a predetermined reference shape onto which internal contrast features can be mapped. The landmark features are, in this particular case, features of the cerebral cortex which can be found in almost every brain independently of the individual gyri and sulci.

The processing unit of this embodiment receives partial-volume corrected PET brain scans as a partial-volume corrected set of image data. This set of image data shows the cerebral cortex region, i.e. the region of interest, as a region of pixels having distinguishable pixel values in comparison with the surrounding regions (white matter, cerebrospinal fluid, etc.). The processing unit identifies landmark positions in the cerebral cortex according to predetermined landmark template features. The landmarks are thus features in the cerebral cortex which can be found in almost every brain independently of the individual gyri and sulci. According to some models, the landmarks define structural regions of the brain. For example, the Mindboggle-101 dataset serves as a normative dataset to establish morphometric variation in a healthy population for comparison against clinical populations and uses the Desikan-Killiany-Tourville (DKT) protocol to improve the consistency and accuracy of labeling human cortical areas. Some of these structural regions are labeled according to some models as superior frontal, rostral middle frontal, pars triangularis, lateral orbitofrontal, transverse temporal, etc. These structural regions are defined by brain features or landmarks (mostly sulci) which are present in most brains. After having identified the landmark features in the received partial-volume corrected set of image data, the processing unit determines mapped image data representing a mapped image feature onto which the internal contrast features are mapped. The mapped image data can then be directly compared in a statistical manner with sample image data. In some embodiments the sample image data comprises mapped representations of sample brains which are considered to share a predetermined characteristic such as, for example, being regarded as representations of healthy brains not showing any medical condition. Nevertheless, the sample image data shows certain differences in the form of different sample mapped internal contrast features. These differences are due to the fact that the sample image data represents a plurality of sample transformed objects that are not identical to each other. The sample image data can thus be regarded as a registered set of reference scans. The processing unit is configured to identify mapped internal contrast features that differ from corresponding sample mapped internal contrast features of the statistical population in a statistically significant manner. This means the processing unit is configured to identify regions of the cerebral cortex that show contrast features that significantly differ from those of the registered set of reference scans, and that point to locations in the cerebral cortex where anomalous activity with regard to the sample image data has occurred during the PET scan. Information pertaining to these regions is output to an external user, who may evaluate the differences between the image data and the sample image data.

Fig. 6 shows a flow diagram of an exemplary method 600 for operating an image data processing device. The method 600 comprises receiving, in a step 602, first and second mutually registered sets of image data representing measurement data from an identical external object of an object type. These two sets of image data are acquired by different measurement techniques. The respective sets of image data contain at least one respective image feature with respective internal contrast features. Furthermore, the first set of image data represents a first image having a higher spatial resolution than a second image represented by the second set of image data.

In a step 604, the method assigns, based on at least one predetermined segmentation criterion, segmentation information to the image data of the first set of image data. The segmentation information represents a segmentation of the first image into at least two image segments. Later, in a step 606, the method 600 performs a partial-volume correction algorithm using the first and the second sets of image data including the segmentation information, and provides, in a step 608, a partial-volume corrected set of image data representing an image with a partial-volume corrected image feature and a corrected spatial resolution higher than that of the second image. In a step 610, the method determines mapped image data according to a predetermined mapping rule transforming the partial-volume corrected image feature into a mapped image feature having a predetermined reference shape. The method also receives, in a step 612, sample image data pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the object type. The sample mapped image features also have the reference shape. In a step 614, the method identifies, using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population, and finally, in a step 616, the method provides output data depending on the identified significant mapped internal contrast features.

A simple embodiment of a data processing device thus comprises a transformation unit configured to receive two distinct sets of image data containing respective image features and pertaining to an identical external object, and to perform a partial-volume correction algorithm using both sets of image data to obtain a partial-volume corrected set of image data containing a partial-volume corrected image feature. It further comprises a processing unit configured to determine mapped image data based on the partial-volume corrected set of image data, to receive sample mapped image data, to identify significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding mapped internal contrast features of the statistical population, and to provide output information that depends on the identified significant mapped internal contrast features.
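Tying the illustrative sketches given earlier in this description together, a hypothetical end-to-end realization of method 600 might read as follows. All helper names were introduced above purely for illustration and do not appear in the application.

    import numpy as np

    def process(mr_volume, pet_volume, sample_stack, psf_sigma_vox):
        labels = segment_first_image(mr_volume)                         # step 604
        pvc = gtm_partial_volume_correction(pet_volume, labels,
                                            psf_sigma_vox)              # steps 606-608
        feature_mask = select_feature_points(pvc, pvc.mean())
        mapped, src_idx = map_to_reference_shape(pvc, feature_mask)     # step 610
        significant = identify_significant_features(np.nan_to_num(mapped),
                                                    sample_stack)       # steps 612-614
        overlay = inverse_map_significance(significant, src_idx, pvc.shape)
        return significant, overlay                                     # step 616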

While the present invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.

For instance, the processing unit can be provided as a stand-alone device. Thus, another aspect of the invention is a processing unit, which is configured to:

- receive a partial-volume corrected set of image data;

determine mapped image data pertaining to an external object according to a predetermined mapping rule transforming a partial-volume corrected image feature comprised by the partial-volume corrected set of image data into a mapped image feature having a predetermined reference shape;

- receive sample image data pertaining to a plurality of sample mapped image features from a statistical population of further external objects of the same object type as the external object, the sample mapped image features having the reference shape;

identify, using the received mapped image data and the received sample image data, significant mapped internal contrast features of the mapped image feature that differ in a statistically significant manner from corresponding sample mapped internal contrast features of the statistical population; and

provide output data depending on the identified significant mapped internal contrast features.

In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.

A single step or other units may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Any reference signs in the claims should not be construed as limiting the scope.