Title:
IMAGE NOISE REDUCTION AND/OR IMAGE RESOLUTION IMPROVEMENT
Document Type and Number:
WIPO Patent Application WO/2014/024076
Kind Code:
A1
Abstract:
A method for improving image quality of image data includes analyzing, for each of a plurality of voxels of image data, a set of entries of a dictionary, wherein an entry represents a mapping between a lower resolution patch of voxels and a corresponding higher resolution patch of voxels, or a local neighborhood around a voxel; deriving, for each of the plurality of voxels, a subspace based on the analysis, wherein the subspace is for one of the mapping or the local neighborhood; and restoring target image data based on the subspaces, wherein the target image data is image data with higher image resolution or reduced image noise.

Inventors:
GOSHEN LIRAN (NL)
GRINGAUZ ASHER (NL)
Application Number:
PCT/IB2013/056104
Publication Date:
February 13, 2014
Filing Date:
July 25, 2013
Assignee:
KONINKL PHILIPS NV (NL)
International Classes:
G06T5/00; G06T3/40
Foreign References:
US20040218834A1, 2004-11-04
Other References:
NOURA AZZABOU ET AL: "Image Denoising Based on Adapted Dictionary Computation", IMAGE PROCESSING, 2007. ICIP 2007. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 September 2007 (2007-09-01), pages 109 - 112, XP031158016, ISBN: 978-1-4244-1436-9
CHARLES-ALBAN DELEDALLE ET AL: "Image denoising with patch based PCA: local versus global", PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2011, 1 September 2011 (2011-09-01), pages 1 - 10, XP055092400, ISBN: 978-1-90-172543-8, DOI: 10.5244/C.25.25
HANS-PETER KRIEGEL ET AL: "A General Framework for Increasing the Robustness of PCA-Based Correlation Clustering Algorithms", 9 July 2008, SCIENTIFIC AND STATISTICAL DATABASE MANAGEMENT; [LECTURE NOTES IN COMPUTER SCIENCE], SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 418 - 435, ISBN: 978-3-540-69476-2, XP019090803
DORE V ET AL: "Robust NL-Means Filter With Optimal Pixel-Wise Smoothing Parameter for Statistical Image Denoising", IEEE TRANSACTIONS ON SIGNAL PROCESSING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 57, no. 5, 1 May 2009 (2009-05-01), pages 1703 - 1716, XP011249631, ISSN: 1053-587X
Attorney, Agent or Firm:
STEFFEN, Thomas et al. (Building 5, AE Eindhoven, NL)
Claims:
CLAIMS:

1. A method for improving image quality of image data, comprising:

analyzing, for each of a plurality of voxels of image data, a set of entries of a dictionary, wherein an entry represents a mapping between a lower resolution patch of voxels and a corresponding higher resolution patch of voxels, or a local neighborhood around a voxel;

deriving, for each of the plurality of voxels, a subspace based on the analysis, wherein the subspace is for one of the mapping or the local neighborhood; and

restoring target image data based on the subspaces, wherein the target image data is image data with higher image resolution or reduced image noise.

2. The method of claim 1, wherein deriving the subspace comprises:

identifying, for each voxel in the image data, patches that are similar to a currently processed patch around the voxel;

creating a data matrix with the identified patches;

performing a PCA on the matrix; and

estimating a number of principal components with the largest corresponding eigenvalues to model the local patch.

3. The method of claim 2, wherein the PCA is a weighted PCA, wherein the weights are a function of at least one of a similarity function between the patches or a local noise level within the patches.

4. The method of any of claims 2 to 3, wherein the local patch is modeled using one or more of: Akaike information criterion, Bayesian information criterion, deviance information criterion, stepwise regression, cross-validation, Mallows' Cp, focused information criterion, or thresholding of a ratio or difference between consecutive eigenvalues.

5. The method of any of claims 1 to 4, wherein a number of similar patches with a weighted average value larger than a weighted average value of a currently processed patch is equal to a number of patches with a smaller weighted average.

6. The method of any of claims 1 to 5, wherein the restoring utilizes a local optimization to restore the image data.

7. The method of any of claims 1 to 5, wherein the restoring utilizes a global optimization to restore the image data.

8. The method of any of claims 1 to 7, wherein the dictionary is a combination of one or more of a prior dictionary, a self-similarity dictionary or a derived dictionary.

9. The method of claim 8, wherein the self-similarity dictionary includes a collection of matches between patches of lower resolution image data and their corresponding down-scaled patches.

10. The method of claim 9, wherein the matches include a sub-set of patches, wherein the sub-set of patches include only patches that are in a neighborhood of the lower resolution image data.

11. The method of claim 9, wherein the matches include all of the patches of the lower resolution image data.

12. The method of any of claims 8 to 11, wherein the derived dictionary is derived from higher resolution image data.

13. The method of any of claims 1 to 12, further comprising:

high pass filtering entries in the dictionary, thereby removing entries having a frequency lower than a predetermined frequency and generating a pre-processed dictionary, wherein the pre-processed dictionary includes local features that correspond to high-frequency content.

14. The method of claim 13, wherein the pre-processed dictionary characterizes a relation between lower-resolution patches and edges and texture content within the corresponding higher-resolution patches.

15. An image data processor (116), comprising:

an analyzer (216) that analyzes, for each of a plurality of voxels of image data, a set of entries of a dictionary, wherein an entry represents a mapping between a lower resolution patch of voxels and a corresponding higher resolution patch of voxels, or a local neighborhood around a voxel, and derives, for each of the plurality of voxels, a subspace based on the analysis, wherein the subspace is for one of the mapping or the local neighborhood; and

an image restorer (220) that restores target image data based on the subspaces, wherein the target image data is image data with higher image resolution or reduced image noise.

16. The image data processor of claim 15, wherein the analyzer derives the subspaces by identifying patches for each voxel in the image data that are similar to a currently processed patch around the voxel, creating a data matrix with the identified patches, performing one of a PCA or a weighted PCA on the matrix, and estimating a number of principal components with the largest corresponding eigenvalues to model the local patch.

17. The image data processor of any of claims 15 to 16, wherein a number of similar patches with a weighted average value larger than a weighted average value of a currently processed patch is equal to a number of patches with a smaller weighted average.

18. The image data processor of any of claims 15 to 17, wherein the restorer utilizes one of a local optimization or a global optimization to restore the image data.

19. The image data processor of any of claims 15 to 18, wherein the dictionary is a combination of one or more of a prior dictionary, a self-similarity dictionary or a derived dictionary.

20. The image data processor of claim 19, wherein the self-similarity dictionary includes a collection of matches between patches of lower resolution image data and their corresponding down-scaled patches, wherein the collection includes a sub-set of patches that are in a neighborhood of the lower resolution image data.

21. The image data processor of claim 19, wherein the self-similarity dictionary includes a collection of matches between patches of lower resolution image data and their corresponding down-scaled patches, wherein the collection includes all of the patches of the lower resolution image data.

22. The image data processor of any of claims 19 to 21, wherein the derived dictionary is derived from higher resolution image data.

23. The image data processor of any of claims 15 to 22, further comprising:

a filter (214) that high pass filters entries in the dictionary, thereby removing entries having a frequency lower than a predetermined frequency and generating a pre-processed dictionary, wherein the pre-processed dictionary includes local features that correspond to high-frequency content.

24. The image data processor of claim 23, wherein the pre-processed dictionary characterizes a relation between lower-resolution patches and edges and texture content within the corresponding higher-resolution patches.

25. A computer readable medium encoded with computer executable instructions which, when executed by a processor, cause the processor to:

generate image data with higher image resolution or reduced noise based on initial image data and a non-local principal component analysis (PCA).

Description:
IMAGE NOISE REDUCTION AND/OR IMAGE RESOLUTION IMPROVEMENT

The following generally relates to reducing image noise and/or improving image resolution of acquired image data and is described with particular application to computed tomography (CT).

A CT scanner generally includes an x-ray tube mounted on a rotatable gantry that rotates around an examination region about a longitudinal or z-axis. The x-ray tube emits radiation that traverses the examination region and a subject or object therein. A detector array subtends an angular arc opposite the examination region from the x-ray tube. The detector array includes one or more rows of detectors that are aligned with respect to each other and that extend along the z-axis. The detectors detect radiation that traverses the examination region and the subject or object therein and generate projection data indicative thereof. A reconstructor processes the projection data and generates 3D image data.

However, CT scanners emit ionizing radiation, which may increase a risk of cancer. This concern has been exacerbated as the number of CT scans has increased and as use of CT scanning in asymptomatic patients has become more widespread. Dose deposited to the patient can be reduced by decreasing tube current and/or voltage and/or the number of scans, and/or by increasing the pitch, slice thickness and/or slice spacing. However, image noise increases as radiation dose decreases; thus, reducing radiation dose not only reduces the dose deposited to the patient but also increases image noise in the acquired data, which is propagated to the image data during reconstruction, reducing image quality (i.e., noisier, less sharp images) and potentially degrading the diagnostic value of the imaging data.

A goal of image de-noising is to recover the original image from a noisy measurement through averaging. This averaging may be performed locally (e.g., Gaussian smoothing, anisotropic filtering and neighborhood filtering), by the calculus of variations (e.g., Total Variation minimization), or in the frequency domain (e.g., empirical Wiener filters and wavelet thresholding methods). Non-local means (NL) is an image de-noising process based on non-local averaging of all the pixels in an image. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered around that pixel and the small patch centered around the pixel being de-noised. Image resolution has been improved through super-resolution algorithms. Some super-resolution algorithms exceed the diffraction limit of the imaging system, while other super-resolution algorithms provide an improvement over the resolution of the detector. Multiple-frame super-resolution algorithms generally use sub-pixel shifts between multiple low resolution images of the same scene and improve image resolution by fusing or combining the multiple low resolution images into a single higher resolution image. Learning-based super-resolution algorithms additionally incorporate application dependent priors to infer the unknown high resolution images.
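For illustration only (and not as part of the claimed subject matter), the following is a minimal sketch of the non-local means weighting idea described above, written in Python with NumPy; the patch radius, the Gaussian similarity kernel and the bandwidth h are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def nlm_weights(image, center, candidates, patch_radius=2, h=0.05):
    """Weight each candidate pixel by the similarity between the small patch around
    it and the small patch around the pixel being de-noised (`center`). All pixel
    coordinates are assumed to lie at least `patch_radius` away from the borders."""
    def patch(p):
        r = patch_radius
        return image[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1].ravel()

    ref = patch(center)
    dists = np.array([np.sum((patch(c) - ref) ** 2) for c in candidates])
    w = np.exp(-dists / (h ** 2))       # more similar patches receive larger weights
    return w / w.sum()                  # normalized weights used for the averaging
```

The de-noised value of the center pixel would then be the weighted average of the candidate pixel values using these weights.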

In view of the above, there is an unresolved need for other approaches for reducing patient dose while preserving image quality and/or for improving image resolution.

Aspects described herein address the above-referenced problems and others.

In one aspect, a method for improving image quality of image data includes analyzing, for each of a plurality of voxels of image data, a set of entries of a dictionary, wherein an entry represents a mapping between a lower resolution patch of voxels and a corresponding higher resolution patch of voxels, or a local neighborhood around a voxel, deriving, for each of the plurality of voxels, a subspace based on the analysis, wherein the subspace is for one of the mapping or the local neighborhood, and restoring target image data based on the subspaces, wherein the target image data is image data with higher image resolution or reduced image noise.

In another aspect, an image data processor includes an analyzer that analyzes, for each of a plurality of voxels of image data, a set of entries of a dictionary, wherein an entry represents a mapping between a lower resolution patch of voxels and a corresponding higher resolution patch of voxels, or a local neighborhood around a voxel, and derives, for each of the plurality of voxels, a subspace based on the analysis, wherein the subspace is for one of the mapping or the local neighborhood; and an image restorer that restores target image data based on the subspaces, wherein the target image data is image data with higher image resolution or reduced image noise.

In another aspect, a computer readable medium encoded with computer executable instructions which, when executed by a processor, cause the processor to:

generate image data with higher image resolution or reduced noise based on initial image data and a non-local principal component analysis (PCA).

The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.

FIGURE 1 schematically illustrates an example imaging system in connection with an image data processor, which is configured to improve image quality of image data.

FIGURE 2 schematically illustrates an example of the image data processor.

FIGURE 3 illustrates a method for improving an image quality.

FIGURE 4 illustrates a method for improving an image resolution.

FIGURE 5 illustrates a method for reducing image noise.

Initially referring to FIGURE 1, an imaging system 100 such as a computed tomography (CT) scanner is schematically illustrated.

The imaging system 100 includes a generally stationary gantry 102 and a rotating gantry 104. The rotating gantry 104 is rotatably supported by the stationary gantry 102 and rotates around an examination region 106 about a longitudinal or z-axis.

A radiation source 110, such as an x-ray tube, is rotatably supported by the rotating gantry 104. The radiation source 110 rotates with the rotating gantry 104 and emits radiation that traverses the examination region 106. A source collimator includes collimation members that collimate the radiation to form a generally cone, wedge, fan or other shaped radiation beam.

A sensitive detector array 112 subtends an angular arc opposite the radiation source 110 across the examination region 106. The detector array 112 includes a plurality of rows of detectors that extend along the z-axis direction. The detector array 112 detects radiation traversing the examination region 106 and generates projection data indicative thereof.

A reconstructor 114 reconstructs the projection data and generates three-dimensional (3D) volumetric image data indicative thereof. The reconstructor 114 may employ a conventional 3D filtered-backprojection reconstruction, a cone beam algorithm, an iterative algorithm and/or other algorithm.

A subject support 118, such as a couch, supports an object or subject such as a human or animal patient in the examination region 106. The subject support 118 is configured to move vertically and/or horizontally before, during, and/or after a scan to position the subject or object in connection with the system 100.

A general-purpose computing system or computer serves as an operator console 120. The console 120 includes a human readable output device such as a monitor or display and an input device such as a keyboard, mouse, etc. Software resident on the console 120 allows the operator to interact with the scanner 100 via a graphical user interface (GUI) or otherwise, e.g., selecting a dose reducing and/or image quality improving algorithm, etc.

An image data processor 116 processes image data, reducing noise and/or improving image resolution of the image data. As described in greater detail below, in one instance, the image data processor 116 reduces noise and/or improves image resolution of the image data utilizing non-local principal component analysis (PCA) and a learning-based super resolution algorithm. Reducing image noise as such allows for reducing the radiation used for a scan (and thus the dose deposited to a patient) while maintaining image quality, and the super resolution processing allows for enhancing image resolution. A combination of noise reduction and image resolution improvement is also contemplated herein.

The image data processor 116 can be implemented via a processor executing one or more computer readable instructions encoded or embedded on a computer readable storage medium such as physical memory or other non-transitory medium. Such a processor can be part of the console 120 and/or another computing device such as a dedicated visualization computer. Additionally or alternatively, the processor can execute at least one computer readable instruction carried by a carrier wave, a signal, or other non-computer readable storage medium such as a transitory medium.

A data repository 122 can be used to store the image data generated by the system 100 and/or the image data processor 116, image data used by the image data processor 116, and/or other data. The data repository 122 may include one or more of a picture archiving and communication system (PACS), a radiology information system (RIS), a hospital information system (HIS), an electronic medical record (EMR) database, a server, a computer, and/or other data repository. The data repository 122 can be local to the system 100 or remote from the system 100.

FIGURE 2 schematically illustrates an example of the image data processor 116.

The illustrated image data processor 116 receives image data to be processed to reduce noise and/or to increase resolution. This image data may be lower dose image data being processed to reduce noise and/or improve resolution, for example, to a level of that of conventional dose image data (or lower or higher).

Alternatively, the image data may be conventional dose image data being processed solely to increase the resolution. The image data can come from the reconstructor 114 (FIGURE 1), the data repository 122 (FIGURE 1) and/or other device.

A dictionary bank 204 stores various dictionaries. The illustrated dictionary bank 204 includes at least one of a prior generated dictionary 206, a self-similarity dictionary 208 and/or a derived dictionary 210. Each dictionary includes a dictionary for each voxel to be processed in the image data.

The prior generated dictionary 206 includes an already generated dictionary provided to the image data processor 116.

A dictionary determiner 212 determines the self-similarity dictionary 208 and/or derived dictionary 210. The dictionary determiner 212 may have determined the prior generated dictionary 206, for example, during earlier processing of first image data and/or other image data corresponding to the same patient and/or another patient.

For the self-similarity dictionary 208, the dictionary determiner 212 down-scales the image data and generates a collection of matches between voxel neighborhoods of the down-scaled image data and the image data. In another embodiment, other voxels may additionally or alternatively be collected. In the context of noise removal, this dictionary is created as a collection of all the patches in the input study.

For the derived dictionary 210, the dictionary determiner 212 identifies a voxel neighborhood in a higher resolution image that corresponds to a voxel in the image data using a registration, matching and/or other algorithm. The derived dictionary 210 is then derived as a collection of matches between the voxel neighborhoods of the higher resolution image data and down-scaled higher resolution image data. The down-scaling can be achieved by smoothing and/or other processing with an appropriate filter and, optionally, sub-sampling the filtered higher resolution image data.
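As an illustrative sketch only, the following shows one way such a dictionary of patch pairs could be assembled from higher resolution image data, assuming 2D NumPy arrays and Gaussian smoothing followed by sub-sampling as the down-scaling; the patch size, stride and filter are assumptions, not values from this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_patch_pair_dictionary(high_res, scale=2, patch=5, stride=2, sigma=1.0):
    """Collect (lower resolution patch, higher resolution patch) pairs: the higher
    resolution image is smoothed and sub-sampled, and corresponding neighborhoods
    of the two images are paired up as dictionary entries."""
    low_res = gaussian_filter(high_res, sigma)[::scale, ::scale]   # down-scaling
    entries = []
    for i in range(0, low_res.shape[0] - patch + 1, stride):
        for j in range(0, low_res.shape[1] - patch + 1, stride):
            lr = low_res[i:i + patch, j:j + patch]
            hr = high_res[i * scale:(i + patch) * scale,
                          j * scale:(j + patch) * scale]
            entries.append((lr.ravel(), hr.ravel()))               # one dictionary entry
    return entries
```

The self-similarity dictionary for noise removal could be assembled analogously by collecting the patches of the input study itself.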

An optional (high pass) filter 214 filters lower-frequency components in the dictionary so that a dictionary entry does not have to be stored for all possible lowest frequency component values; i.e., rather than deriving the dictionary directly on study patches, a pre-processing high-pass filter is employed in order to extract local features that correspond to the high-frequency content. The filtering allows for focusing the training on characterizing the relation between the low-resolution patches and the edges and texture content within the corresponding high-resolution patches.
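A minimal sketch of one possible pre-processing high-pass filter for dictionary patches follows, assuming that subtracting a Gaussian-smoothed copy is an acceptable stand-in for whatever filter a given implementation actually uses.

```python
from scipy.ndimage import gaussian_filter

def high_pass_patch(patch_2d, sigma=1.0):
    """Remove the low-frequency component of a patch so that only the local edge and
    texture (high-frequency) content is kept for the dictionary entry."""
    return patch_2d - gaussian_filter(patch_2d, sigma)
```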

A non-local analyzer 216 obtains, for each voxel, a set of dictionary entries. The non-local analyzer 216 analyzes this set of entries and derives a subspace for the estimated patch. In the context of noise removal, the subspace is for the local neighborhood around the voxel, while in the context of super resolution the subspace is for the mapping between the low resolution and high resolution patches. More specifically, for noise removal, for each voxel in the image data, patches that are similar to the currently processed patch around the voxel are identified. A data matrix is then created by putting the patches in the matrix, where each patch corresponds to a row in the matrix. A non-weighted or weighted PCA is then applied to the matrix, where the samples are the rows of the matrix and, in the case of a weighted PCA, the weights are a function of a similarity function between the patches and/or the local noise level within the patches.
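The following sketch illustrates the patch-matrix and weighted-PCA step for the noise removal case, assuming flattened NumPy patches and a Gaussian similarity weight with a hypothetical bandwidth h; it is a simplified illustration of the analysis, not the exact implementation.

```python
import numpy as np

def local_subspace(patches, reference, h=0.05):
    """Weighted PCA over a stack of similar patches (the rows of the data matrix).
    Returns the weighted mean patch plus the principal components and eigenvalues,
    sorted by decreasing eigenvalue."""
    A = np.asarray(patches, dtype=float)                 # each row is one patch
    w = np.exp(-np.sum((A - reference) ** 2, axis=1) / (h ** 2))
    w = w / w.sum()                                      # similarity-based weights
    mean = w @ A                                         # weighted mean of the rows
    centered = A - mean
    C = centered.T @ (np.diag(w) @ centered)             # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return mean, eigvecs[:, order], eigvals[order]
```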

The non-local analyzer 216 then estimates the number of principal components with the largest corresponding eigenvalues, which are used to model the local signal/patch. Suitable methods for modeling the local signal/patch include, but are not limited to, the Akaike information criterion, Bayesian information criterion, deviance information criterion, stepwise regression, cross-validation, Mallows' Cp, focused information criterion, thresholding of the ratio or difference between consecutive eigenvalues, and/or other estimation techniques.
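Of the listed estimation techniques, thresholding the ratio between consecutive eigenvalues is the simplest to illustrate; the sketch below assumes eigenvalues sorted in descending order and an arbitrary threshold value, both of which are assumptions made for illustration.

```python
import numpy as np

def num_components_by_ratio(eigvals, ratio_threshold=10.0):
    """Retain principal components until the ratio between consecutive eigenvalues
    exceeds the threshold, i.e., until the eigenvalue spectrum drops off sharply."""
    eigvals = np.asarray(eigvals, dtype=float)
    for m in range(1, len(eigvals)):
        if eigvals[m - 1] / max(eigvals[m], 1e-12) > ratio_threshold:
            return m
    return len(eigvals)
```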

For super resolution, the steps are similar, with the following exceptions. For each voxel in the input image, dictionary entries with a lower resolution patch similar to the currently processed patch around the voxel are identified. Each row in the matrix consists of the whole dictionary entry, i.e., a concatenation of two vectors that consist of the low resolution and high resolution patches of the dictionary entry. The similarity function is between the currently processed patch and the lower resolution patch entry of the dictionary. The estimation of the number of principal components is applied as above, except that corresponding measurements exist only for the low resolution patch entry of the dictionary.
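A minimal sketch of how such a data matrix could be assembled for super resolution follows, assuming the dictionary entries of the earlier sketch (pairs of flattened lower and higher resolution patches) and a Gaussian similarity computed on the lower resolution section only; keeping a fixed number of nearest entries is an assumption made for illustration.

```python
import numpy as np

def sr_data_matrix(dictionary_entries, lr_query, h=0.05, top_k=50):
    """Each row of the returned matrix is the concatenation of a lower resolution and a
    higher resolution dictionary patch; similarity (and hence selection and weighting)
    is computed against the lower resolution section only."""
    lr = np.asarray([e[0] for e in dictionary_entries], dtype=float)
    hr = np.asarray([e[1] for e in dictionary_entries], dtype=float)
    dist = np.sum((lr - lr_query) ** 2, axis=1)
    keep = np.argsort(dist)[:top_k]                  # entries most similar to the query
    weights = np.exp(-dist[keep] / (h ** 2))
    rows = np.hstack([lr[keep], hr[keep]])           # concatenated lr|hr rows
    return rows, weights
```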

An optional constraint enforcer 218 enforces one or more predetermined constraints, for example, a constraint on identifying similar patches. Such a constraint may be that the number of similar patches with a weighted average value larger than the weighted average value of the currently processed patch is equal to the number of patches with a smaller weighted average. This optional constraint can facilitate in some scenarios the preservation of low contrast regions.
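One possible way to enforce such a balance constraint when selecting similar patches is sketched below; the selection strategy (candidates assumed pre-sorted by similarity, then equal counts kept on either side of the reference weighted average) is an assumption made for illustration.

```python
import numpy as np

def balanced_selection(candidates, reference, kernel, n_per_side=25):
    """Keep equal numbers of candidate patches whose kernel-weighted average is larger,
    respectively smaller, than the kernel-weighted average of the currently processed
    (reference) patch."""
    ref_mean = np.average(reference, weights=kernel)
    above = [p for p in candidates if np.average(p, weights=kernel) > ref_mean]
    below = [p for p in candidates if np.average(p, weights=kernel) <= ref_mean]
    n = min(n_per_side, len(above), len(below))
    return above[:n] + below[:n]
```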

An image restorer 220 restores target image data. In the context of noise removal, the target image data is a reduced noise image data, and in the context of super resolution the target image data is a higher resolution image data. Suitable restorations include, but are not limited to, a local approach, a global approach, and/or other approach.

The following describes an example local approach, which utilizes a greedy or other optimization approach. For noise removal, for each voxel in the noisy study, a least-squares approach is used to solve the optimization shown in EQUATION 1:

EQUATION 1:
$$\hat{a} = \underset{a}{\operatorname{arg\,min}} \left(P - V_0 - \sum_{i=1}^{m} V_i a_i\right)^2 K,$$

where $P$ is a vector corresponding to the patch around the currently processed voxel, $V_0$ is a vector of the mean values of each column of the matrix $A$, which includes the relevant patches (each patch corresponding to a row in the matrix), $V_i$ is the principal component with the $i$-th largest corresponding eigenvalue of the samples in the matrix $A$, $K$ is a weight kernel that penalizes distance away from the currently processed voxel, and $m$ is the number of principal components that models the current local patch.

The recovered noiseless patch is shown in EQUATION 2:

EQUATION 2:
$$\hat{P} = V_0 + \sum_{i=1}^{m} V_i \hat{a}_i.$$

The recovered noiseless patches are merged by weighted averaging in the overlap area to create the final restored image.
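A minimal sketch of the patch restoration of EQUATIONS 1 and 2 follows, assuming the mean vector, principal components and kernel come from a weighted PCA such as the one sketched above; solving the kernel-weighted least-squares problem with NumPy is an implementation choice, not the only one.

```python
import numpy as np

def restore_patch(P, V0, components, K):
    """Solve the kernel-weighted least-squares problem of EQUATION 1 for the subspace
    coefficients a and rebuild the patch per EQUATION 2."""
    V = components                       # columns are the retained principal components
    sqrtK = np.sqrt(K)                   # kernel penalizing distance from the center voxel
    a = np.linalg.lstsq(sqrtK[:, None] * V, sqrtK * (P - V0), rcond=None)[0]
    return V0 + V @ a                    # recovered (noiseless) patch
```

The recovered patches for all voxels would then be merged by weighted averaging in the overlap areas, as stated above.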

For super resolution, for each voxel in the input study, a least-squares approach is used to solve the optimization shown in EQUATION 3:

EQUATION 3:
$$\hat{a} = \underset{a}{\operatorname{arg\,min}} \left(P - V_0^{lr} - \sum_{i=1}^{m} V_i^{lr} a_i\right)^2 K,$$

where the $lr$ superscript refers to the section of the vector that corresponds to the low resolution patch of the dictionary entry. The recovered high resolution patch is shown in EQUATION 4:

EQUATION 4:
$$\hat{P} = V_0^{hr} + \sum_{i=1}^{m} V_i^{hr} \hat{a}_i,$$

where the $hr$ superscript refers to the section of the vector that corresponds to the high resolution patch of the dictionary entry.

The recovered high resolution patches are merged by weighted averaging in the overlap area to create the restored image. In case of using filtered patches, the restored image is added to an interpolated study of the input study.
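A minimal sketch of merging recovered patches by weighted averaging in the overlap areas, assuming 2D patches, known top-left positions in the output grid and a 2D merge kernel; all names are illustrative.

```python
import numpy as np

def merge_patches(shape, patches, positions, kernel2d):
    """Accumulate each recovered patch into the output image with the kernel as its
    weight and normalize by the accumulated weights in the overlap areas."""
    acc = np.zeros(shape)
    norm = np.zeros(shape)
    ph, pw = kernel2d.shape
    for p, (i, j) in zip(patches, positions):
        acc[i:i + ph, j:j + pw] += kernel2d * p.reshape(ph, pw)
        norm[i:i + ph, j:j + pw] += kernel2d
    return acc / np.maximum(norm, 1e-12)   # weighted average where patches overlap
```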

An additional optional step is to enforce a global restoration constraint between the low resolution input study and the algorithm's output high resolution study. This is done efficiently using a back-projection method as shown in EQUATION 5:

EQUATION 5:
$$I^{high} = I_0^{high} + US\left(I^{low} - DS\left(I_0^{high}\right)\right),$$

where $I^{low}$ is the input study, $I_0^{high}$ is the output study of the previous step, $US$ is an up-scaling operator and $DS$ is a down-scaling operator.

An additional optional step is to include a gain parameter with EQUATION 5, as shown in EQUATION 6:

EQUATION 6:
$$I_f = US\left(I^{low}\right) + \left(I^{high} - US\left(I^{low}\right)\right) \cdot g,$$

where $I_f$ is the final output study and $g$ is a gain parameter that controls the intensity of the high frequencies that are added to the image. In one instance, a default value of $g$ is one (1), and the value of $g$ is set by a preset definition. An alternative option is that the value of $g$ can be controlled by the user in real time using a scroller or the like, with the updated result presented in real time on the display.
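A minimal sketch of the back-projection consistency step of EQUATION 5 together with the gain control of EQUATION 6 follows, assuming scipy.ndimage.zoom as the up- and down-scaling operators and image dimensions that are exact multiples of the scale factor; these operator choices are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def enforce_global_consistency(I_low, I_high, scale=2, gain=1.0):
    """Up-scale the residual between the low resolution input study and the down-scaled
    high resolution output and add it back (EQUATION 5); then blend in the added high
    frequencies with a gain parameter (EQUATION 6)."""
    DS = lambda img: zoom(img, 1.0 / scale, order=1)   # down-scaling operator
    US = lambda img: zoom(img, scale, order=1)         # up-scaling operator
    corrected = I_high + US(I_low - DS(I_high))        # EQUATION 5
    base = US(I_low)
    return base + (corrected - base) * gain            # EQUATION 6 style gain control
```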

The following describes an example global approach, which utilizes the optimization of a global cost function or other function. For noise removal, all the voxels in the noisy study are simultaneously optimized, for example, using a least-squares approach to solve the optimization shown in EQUATION 7:

EQUATION 7:
$$\hat{a} = \underset{a}{\operatorname{arg\,min}} \sum_{j}\left(P^{j} - \hat{P}^{j}\right)^{2} K + \lambda \sum_{j_1,k_1,j_2,k_2} \mathbb{1}_{k_1,k_2}^{j_1,j_2}\, K_{k_1} K_{k_2} \left(\hat{P}_{k_1}^{j_1} - \hat{P}_{k_2}^{j_2}\right)^{2},$$

where $P^{j}$ is a vector corresponding to the patch around voxel $j$, $\hat{P}_{k}^{j}$ is voxel $k$ in the recovered patch around voxel $j$, where the recovered patch around voxel $j$ is $\hat{P}^{j} = V_0^{j} + \sum_{i=1}^{m_j} V_i^{j} \hat{a}_i^{j}$, $V_0^{j}$ is a vector of the mean values of each column of the matrix $A$ for voxel $j$, $V_i^{j}$ is the principal component with the $i$-th largest corresponding eigenvalue of the samples in the matrix $A$ for voxel $j$, $K$ is a weight kernel that penalizes distance away from its center, $m_j$ is the number of principal components that models the patch around voxel $j$, $K_{k}$ is the $k$-th element in the kernel, $\mathbb{1}_{k_1,k_2}^{j_1,j_2}$ is an indicator that equals one only if the $k_1$ element of the patch around voxel $j_1$ overlaps in the study with the $k_2$ element of the patch around voxel $j_2$, and $\lambda$ is a scalar input parameter.

The recovered noiseless patch is shown in EQUATION 8:

EQUATION 8:
$$\hat{P}^{j} = V_0^{j} + \sum_{i=1}^{m_j} V_i^{j} \hat{a}_i^{j}.$$

The recovered noiseless patches are merged by weighted averaging in the overlap area to create the final restored image.

For super resolution, all the voxels in the input study are simultaneously optimized, for example, using a least-squares approach to solve the optimization shown in EQUATION 9:

EQUATION 9:
$$\hat{a} = \underset{a}{\operatorname{arg\,min}} \sum_{j}\left(P^{j} - \hat{P}^{j,lr}\right)^{2} K + \lambda \sum_{j_1,k_1,j_2,k_2} \mathbb{1}_{k_1,k_2}^{j_1,j_2}\, K_{k_1} K_{k_2} \left(\hat{P}_{k_1}^{j_1,hr} - \hat{P}_{k_2}^{j_2,hr}\right)^{2},$$

where the $lr$ superscript refers to the section of the vector that corresponds to the low resolution patch of the dictionary entry. The recovered high resolution patch is shown in EQUATION 10:

EQUATION 10:
$$\hat{P}^{j,hr} = V_0^{j,hr} + \sum_{i=1}^{m_j} V_i^{j,hr} \hat{a}_i^{j},$$

where the $hr$ superscript refers to the section of the vector that corresponds to the high resolution patch of the dictionary entry.

The recovered high resolution patches are merged by weighted averaging in the overlap area to create the restored image. In case of using filtered patches, the restored image is added to an interpolated study of the input study.

An additional optional step is to enforce a global restoration constraint between the low resolution input study and the algorithm's output high resolution study. This is done efficiently using a back-projection method as shown in EQUATION 5 above.

FIGURES 3, 4 and 5 respectively illustrate methods for improving image quality, improving image resolution and reducing noise.

It is to be appreciated that the ordering of the acts in the methods described herein is not limiting. As such, other orderings are contemplated herein. In addition, one or more acts may be omitted and/or one or more additional acts may be included.

FIGURE 3 illustrates a method for improving image quality.

At 302, a tailored dictionary for each voxel to be processed is obtained. In the context of super resolution, each dictionary entry represents a mapping between low resolution and high resolution voxels. In the context of noise removal, a dictionary entry represents a building block. As described herein, the tailored dictionary is a combination of prior, self-similarity and/or derived dictionaries.

At 304, for each voxel, a subspace is determined based on an analysis of a set of corresponding dictionary entries. In the context of super resolution, the subspace is a mapping between lower resolution and higher resolution voxels. In the context of noise removal, the subspace is for the local neighborhood around the voxel.

At 306, image data is restored by incorporating local and global fidelity and compatibility constraints. In the context of noise removal, the target image data is a reduced noise image data, and in the context of super resolution the target image data is a higher resolution image data. Suitable constraints may include compatibility of the estimated local signal with its derived signal subspace, compatibility between neighboring signal subspaces, and compatibility between the low resolution image and its estimated high resolution image.

FIGURE 4 illustrates a method for improving image resolution.

At 402, a tailored dictionary for each voxel in image data to process is obtained. As discussed herein, each dictionary entry represents a mapping between lower resolution voxels and corresponding higher resolution voxels.

At 404, dictionary entries from the tailored dictionary are obtained for a lower resolution patch corresponding to a patch around a voxel to process.

At 406, a matrix is created based on the obtained patches. As discussed herein, each row in the matrix consists of the whole dictionary entry, i.e., a concatenation of two vectors consisting of the lower resolution and the higher resolution patches of the dictionary entry.

At 408, a PCA or weighted PCA is performed on the matrix. For this, the samples are the rows of the matrix and, in the case of a weighted PCA, the weights are a function of a similarity function between the currently processed patch and the lower resolution patch entry of the dictionary.

At 410, the number of principal components is estimated. In this case, corresponding measurements are only for the lower resolution entry patch of the dictionary.

At 412, a higher resolution image is restored based on the lower resolution patches of the dictionary, the matrix, the principal components, the number of principal components, and a local or global optimization, as described herein.
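Purely as an illustrative sketch tying acts 402-412 together, the following uses the hypothetical helpers sketched earlier (sr_data_matrix, num_components_by_ratio and merge_patches); the patch size, scale, uniform merge kernel and the low-resolution-only least-squares fit are assumptions made for illustration, not the claimed implementation.

```python
import numpy as np

def super_resolve(low_res, dictionary_entries, patch=5, scale=2, h=0.05):
    """Per-voxel flow of acts 402-412: gather similar dictionary entries, derive a
    subspace by weighted PCA over the concatenated lr|hr rows, fit the coefficients on
    the lower resolution section only, and predict the higher resolution patch."""
    lr_len = patch * patch
    kernel2d = np.ones((patch * scale, patch * scale))      # simple uniform merge kernel
    recovered, positions = [], []
    for i in range(low_res.shape[0] - patch + 1):
        for j in range(low_res.shape[1] - patch + 1):
            P = low_res[i:i + patch, j:j + patch].ravel()
            rows, w = sr_data_matrix(dictionary_entries, P, h=h)   # act 404
            wn = w / max(w.sum(), 1e-12)
            mean = wn @ rows                                       # acts 406/408: weighted PCA
            centered = rows - mean
            C = centered.T @ (np.diag(wn) @ centered)
            eigvals, eigvecs = np.linalg.eigh(C)
            order = np.argsort(eigvals)[::-1]
            m = num_components_by_ratio(eigvals[order])            # act 410
            V = eigvecs[:, order[:m]]
            # act 412: fit on the lower resolution section (cf. EQUATION 3) ...
            a = np.linalg.lstsq(V[:lr_len], P - mean[:lr_len], rcond=None)[0]
            hr_patch = mean[lr_len:] + V[lr_len:] @ a              # ... predict (cf. EQUATION 4)
            recovered.append(hr_patch)
            positions.append((i * scale, j * scale))
    out_shape = (low_res.shape[0] * scale, low_res.shape[1] * scale)
    return merge_patches(out_shape, recovered, positions, kernel2d)
```

The patch size and scale passed here are assumed to match those used when the dictionary of patch pairs was built.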

FIGURE 5 illustrates a method for reducing image noise.

At 502, a tailored dictionary for each voxel in image data to process is obtained. As discussed herein, each dictionary entry represents a building block.

At 504, patches that are similar to the currently processed patch around a voxel are identified.

At 506, a matrix is created based on the obtained patches. As discussed herein, each patch corresponds to a row in the matrix.

At 508, a PCA or weighted PCA is performed on the matrix. For this, the samples are the rows of the matrix and, in the case of a weighted PCA, the weights are a function of a similarity function between the patches and/or the local noise level within the patches.

At 510, the number of principal components with the largest corresponding eigenvalues is estimated, as described herein.

At 512, a reduced noise image is restored based on the patches around the voxels, the matrix, the principal components, the number of principal components, and a local or global optimization, as described herein.

The methods described herein may be implemented via one or more processors executing one or more computer readable instructions encoded or embodied on a computer readable storage medium, such as physical memory, which cause the one or more processors to carry out the various acts and/or other functions described herein. Additionally or alternatively, the one or more processors can execute instructions carried by a transitory medium such as a signal or carrier wave.

The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.