

Title:
DEVICE AND METHOD FOR SUPER RESOLUTION KERNEL ESTIMATION
Document Type and Number:
WIPO Patent Application WO/2022/174930
Kind Code:
A1
Abstract:
The present disclosure relates to super-resolution (SR) imaging, in particular SR imaging using neural networks. The disclosure presents in this respect a device for SR imaging, a method for SR imaging, and a corresponding computer program. The device comprises a processor configured to estimate an effective blurring kernel based on an input image having a lower resolution. The processor is further configured to estimate a coarse SR kernel by convolving the effective blurring kernel with itself a specified number of times, wherein the specified number is based on a target higher resolution. Further, the processor is configured to estimate a final SR kernel by refining the coarse SR kernel.

Inventors:
YAMAC MEHMET (SE)
ATAMAN BARAN (SE)
NAWAZ AAKIF (SE)
Application Number:
PCT/EP2021/054293
Publication Date:
August 25, 2022
Filing Date:
February 22, 2021
Assignee:
HUAWEI TECH CO LTD (CN)
YAMAC MEHMET (SE)
International Classes:
G06T1/20; G06T3/40
Other References:
ZHOU RUOFAN ET AL: "Kernel Modeling Super-Resolution on Real Low-Resolution Images", IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, 27 October 2019 (2019-10-27), pages 2433 - 2443, XP033724079
GU JINJIN ET AL: "Blind Super-Resolution With Iterative Kernel Correction", IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 15 June 2019 (2019-06-15), pages 1604 - 1613, XP033686631
JI XIAOZHONG ET AL: "Real-World Super-Resolution via Kernel Estimation and Noise Injection", IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, 14 June 2020 (2020-06-14), pages 1914 - 1923, XP033798841
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims

1. A device (100) for super-resolution, SR, imaging, the device (100) comprising a processor (112, 123, 134, 115) configured to: estimate an effective blurring kernel (102) based on an input image (101) having a lower resolution; estimate a coarse SR kernel (103) by convolving the effective blurring kernel (102) with itself a specified number of times, wherein the specified number is based on a target higher resolution; and estimate a final SR kernel (104) by refining the coarse SR kernel (103).

2. The device (100) according to claim 1, wherein the processor (112, 123, 134, 115) is configured to: estimate the coarse SR kernel (103) by convolving the effective blurring kernel (102) with itself 2sf times, wherein sf is an upscaling factor (200) between the lower resolution of the input image (101) and the target higher resolution.

3. The device (100) according to claim 1 or 2, wherein the processor (112, 123, 134, 115) is configured to: estimate the effective blurring kernel (102) using a blind deblurring algorithm.

4. The device (100) according to one of the claims 1 to 3, wherein the estimating of the effective blurring kernel (102) comprises: determining a gradient (301) of the input image (101), applying a fast Fourier transformation, FFT, to the gradient (301) of the input image (101), and processing the result of applying the FFT in an FFT domain.

5. The device (100) according to one of the claims 1 to 4, wherein the processor (112, 123, 134, 115) is configured to: estimate the effective blurring kernel (102), the coarse SR kernel (103), and the final SR kernel (104), using a convolutional neural network, CNN (300).

6. The device (100) according to claim 5, wherein: the CNN (300) comprises multiple modules (310, 311, 312, 313), each module (310, 311, 312, 313) being usable either for estimating the effective blurring kernel (102), for estimating the coarse SR kernel (103), or for estimating the final SR kernel (104).

7. The device (100) according to claim 6, wherein: each module (310, 311, 312, 313) comprises either matrix multiplication layers in a FFT domain or one or more hidden layers of the CNN (300).

8. The device (100) according to one of the claims 1 to 7, wherein the processor (112, 123, 134, 115) is further configured to: apply the final SR kernel (104) to the input image (101) having the lower resolution, to obtain an output image (105) having the target higher resolution.

9. A method (500) for super-resolution, SR, imaging, the method (500) comprising: estimating (501) an effective blurring kernel (102) based on an input image (101); estimating (502) a coarse SR kernel (103) by convolving the effective blurring kernel (102) with itself a specified number of times, wherein the specified number is based on a target higher resolution; and estimating (503) a final SR kernel (104) by refining the coarse SR kernel (103).

10. The method (500) according to claim 9, further comprising: estimating (502) the coarse SR kernel (103) by convolving the effective blurring kernel (102) with itself 2sf times, wherein sf is an upscaling factor (200) between the lower resolution of the input image (101) and the target higher resolution.

11. The method (500) according to claim 9 or 10, further comprising: estimating (501) the effective blurring kernel (102) using a blind deblurring algorithm.

12. The method (500) according to one of the claims 9 to 11, wherein the estimating (501) of the effective blurring kernel (102) comprises: determining a gradient (301) of the input image (101), applying a fast Fourier transformation, FFT, to the gradient (301) of the input image (101), and processing the result of applying the FFT in an FFT domain.

13. The method (500) according to one of the claims 9 to 12, comprising: estimating (501, 502, 503) the effective blurring kernel (102), the coarse SR kernel (103), and the final SR kernel (104), using a convolutional neural network, CNN (300).

14. A computer program comprising a program code for, when running on a processor (112, 123, 134, 115), causing the method (500) of one of the claims 9 to 13 to be performed.

15. A computer-readable medium having stored thereon a convolutional neural network (300), CNN, and instructions to cause the CNN (300) to perform the method (500) according to one of the claims 9 to 13, when the instructions are executed by a processor (112, 123, 134, 115).

Description:
DEVICE AND METHOD FOR SUPER RESOLUTION KERNEL ESTIMATION

TECHNICAL FIELD

The present disclosure relates to super resolution (SR) imaging, in particular, to SR imaging using a convolutional neural network (CNN). The disclosure is specifically concerned with estimating a SR kernel for a low resolution (LR) image. To this end, the disclosure proposes a device for SR imaging, a method for SR imaging, and a corresponding computer program.

BACKGROUND

Conventional methods using CNNs for SR kernel estimation have achieved remarkable performance when being applied to a LR image, which has been obtained from a higher resolution (HR) image with an ideal and predefined downsampling process. In particular, most CNN-based methods have been trained with a set of LR images, which were degraded from HR images by convolution with a fixed blurring kernel (e.g., bicubic or Gaussian) followed by subsampling. The methods have accordingly been tested with similarly produced synthetic image data. However, the performance of these CNN-based methods falls short when they are applied to a real-world image, which has been obtained by a more complex degradation operation.

In particular, when a conventional CNN-based method is applied to a real-world image, which has an unknown downsampling pattern (unlike the synthetically generated LR-HR image pair mentioned above), the performance of the method drops drastically. Applying the method to such an image is referred to as the blind SR problem. For example, it is observed that a conventional CNN-based SR approach, namely the enhanced deep super resolution network (EDSR), does not show any significant improvement compared to a simple bicubic interpolation for a real-world image.

Therefore, estimating SR kernels accurately to solve the blind SR problem is still an open research question, and there are only a few exemplary approaches that focus on this problem. A first exemplary approach is model-based and applies a bicubic interpolation to a LR image, in order to obtain a coarse estimation of a HR image. Then, an approximate SR kernel is estimated using an existing blind deblurring algorithm.

A second exemplary approach uses an internal generative adversarial network (Internal-GAN), and is thus referred to as “KernelGAN”. In particular, KernelGAN is a type of network that produces downscaled versions of test images in a training phase, in order to learn image-specific SR kernels. KernelGAN is composed of one generator network and one discriminator network. Given a test image to be upscaled, the generator network generates a lower scale image by degrading and downscaling the test image. The discriminator network then tries to distinguish whether the generated LR image has the same patch distribution as the original one. The discriminator and generator networks are trained by using crops from the test image in an alternating manner. The generator network is trained to fool the discriminator network in each iteration. After convergence, the generator network can be used as the SR degradation model. The generator network may have five fully convolutional layers and one downsampling layer with a certain scale factor. Therefore, the impulse response of the convolutional layers yields the SR kernel estimation.

Although the approach using KernelGAN is able to provide remarkable results in the SR kernel estimation, its SR kernel recovery performance is still limited. Further, there is still room for significant improvement in terms of SR kernel reconstruction accuracy. Moreover, since this approach uses an image-specific training for performing the SR kernel estimation, it is not feasible to use it in real-time applications, for instance, in mobile devices.

Consequently, while the approach with KernelGAN shows promising results, its limited recovery performance and high computational complexity make it unsuitable for real-time application usage.

SUMMARY

The present disclosure and its embodiments are based further on the following considerations of the inventors.

An experiment was conducted to demonstrate the importance of an accurate SR kernel estimation. In particular, an arbitrarily selected ground truth (GT) HR image was degraded with a previously known SR kernel to obtain a LR image. Then, the SR kernel was estimated from the LR image by using both KernelGAN and a conventional model-based approach as described above. Together with these estimated SR kernels, the LR image was given to a SR solution that can work with any SR kernel. FIG. 6 reveals how sensitive the SR solution turned out to be to inaccuracies in the SR kernel estimation. In particular, an inaccurate SR kernel estimation yields visible artifacts. Taking a closer look at FIG. 6, the SR kernel estimated by the model-based approach causes over-smoothing, because its width is smaller than that of the GT kernel. On the other hand, the SR kernel estimated with KernelGAN has a width that is larger than that of the GT kernel, which results in over-sharpening and ringing artifacts. Using an inaccurate SR kernel in any SR solution, including for training CNN-based methods, may cause a similar complication, which is known as the kernel mismatch problem.

Moreover, KernelGAN and the aforementioned conventional model-based approach are both based on iterative algorithms. Therefore, these SR kernel estimation algorithms are relatively slow, and cannot be used in real-time applications based on a SR solution that requires SR kernel estimation.

In view of the above, embodiments of this disclosure aim to improve SR imaging. An objective is, in particular, to provide a device and method that are able to estimate a SR kernel more accurately, in particular when being applied to real-life images. That is, the device and method should provide better results regarding the blind SR problem. The device and method should also operate fast, specifically faster than KernelGAN and the conventional model-based approach described above, in order to be usable in real-time applications.

The objective is achieved by the embodiments of this disclosure as described in the enclosed independent claims. Advantageous implementations of the embodiments are further defined in the dependent claims.

In particular, embodiments of this disclosure are based on a new SR kernel estimation technique, which is able to reach a state-of-the-art SR kernel estimation performance in terms of reconstruction accuracy, while it can also be run in real-time. To this end, the embodiments of this disclosure may employ, for example, a modular CNN structure that implements the new SR kernel estimation technique.

A first aspect of this disclosure provides a device for SR imaging, the device comprising a processor configured to: estimate an effective blurring kernel based on an input image having a lower resolution; estimate a coarse SR kernel by convolving the effective blurring kernel with itself a specified number of times, wherein the specified number is based on a target higher resolution; and estimate a final SR kernel by refining the coarse SR kernel.

Accordingly, the device of the first aspect is configured to implement a three-step processing procedure based on the input image, in order to determine the (final) SR kernel as an output. Thereby, the device is particularly able to find an accurate solution to the blind SR problem. In particular, the device of the first aspect outperforms conventional SR kernel estimation approaches, and is able to estimate the SR kernel more accurately, especially when being applied to real-life images. For example, the device of the first aspect outperforms KernelGAN by a significant margin in SR kernel reconstruction accuracy. In addition, the SR kernel determination carried out by the device of the first aspect is fast enough to be usable in real-time applications.
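The three-step procedure can be sketched in a minimal, hypothetical 1-D NumPy illustration. The function names, the Gaussian stand-in for the blind estimation step, and the clip-and-renormalize refinement are illustrative assumptions only; the disclosure realizes these steps with a CNN, as described in the detailed description.

```python
import numpy as np

def estimate_blurring_kernel(lr_signal, size=5, sigma=1.0):
    # Stand-in for functional block 112: a real device would estimate
    # this blindly from the input image; here we simply return a
    # normalized Gaussian for illustration.
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def self_convolve(kernel, times):
    # Functional block 123: convolve the kernel with itself the
    # specified number of times; each pass enlarges its support.
    result = kernel.copy()
    for _ in range(times):
        result = np.convolve(result, kernel, mode="full")
    return result / result.sum()

def refine_kernel(coarse):
    # Stand-in for functional block 134: the disclosure refines with a
    # learned module; here we only enforce non-negativity and unit mass.
    refined = np.clip(coarse, 0.0, None)
    return refined / refined.sum()

def estimate_sr_kernel(lr_signal, sf):
    k_lr = estimate_blurring_kernel(lr_signal)   # step 1
    k_coarse = self_convolve(k_lr, 2 * sf)       # step 2: 2·sf times
    return refine_kernel(k_coarse)               # step 3

k_final = estimate_sr_kernel(np.zeros(32), sf=2)
```

With a size-5 kernel and sf = 2, the four self-convolutions grow the support from 5 to 21 taps, while the kernel stays normalized.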

In an implementation form of the first aspect, the processor is configured to estimate the coarse SR kernel by convolving the effective blurring kernel with itself 2sf times, wherein sf is an upscaling factor between the lower resolution of the input image and the target higher resolution.

In this way, the device of the first aspect is configured to upscale the effective blurring kernel, depending on the target resolution of the HR image. The upscaling, by performing the convolution, is particularly fast and efficient.

In an implementation form of the first aspect, the processor is configured to estimate the effective blurring kernel using a blind deblurring algorithm.

This technique yields fast and reliable results for obtaining the effective blurring kernel.

In an implementation form of the first aspect, the estimating of the effective blurring kernel comprises determining a gradient of the input image, applying a fast Fourier transformation (FFT) to the gradient of the input image, and processing the result of applying the FFT in an FFT domain.

This leads to a particularly accurate solution for obtaining the effective blurring kernel.
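To illustrate why working on image gradients in the FFT domain is convenient, the following toy 1-D example recovers a blur kernel by dividing the FFTs of the gradients. It is not the claimed method: the circular blur, the random signal, and in particular the assumption that the sharp signal is known are purely illustrative; a blind algorithm replaces the known sharp signal with a statistical prior on natural-image gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)                  # stand-in "sharp" signal
k = np.zeros(n)
k[:3] = [0.25, 0.5, 0.25]                   # ground-truth blur (circular, unit mass)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))  # blurred observation

grad = lambda s: np.roll(s, -1) - s         # finite-difference gradient
Gy = np.fft.fft(grad(y))                    # gradient commutes with the blur,
Gx = np.fft.fft(grad(x))                    # so Gy = D·X·K and Gx = D·X
K_hat = Gy / np.where(np.abs(Gx) > 1e-8, Gx, 1.0)  # spectral division
K_hat[0] = 1.0                              # DC bin: assume a unit-mass kernel
k_hat = np.real(np.fft.ifft(K_hat))
```

Because the gradient operator cancels in the spectral ratio, the kernel spectrum is isolated wherever the gradient spectrum is non-zero; only the DC bin needs the unit-mass assumption.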

In an implementation form of the first aspect, the processor is configured to estimate the effective blurring kernel, the coarse SR kernel, and the final SR kernel, using a CNN.

The device of the first aspect is accordingly configured to perform a CNN-based method for the SR kernel estimation.

In an implementation form of the first aspect, the CNN comprises multiple modules, each module being usable either for estimating the effective blurring kernel, for estimating the coarse SR kernel, or for estimating the final SR kernel.

Thus, the CNN used by the device of the first aspect can be a modular NN, which may provide improved speed and accuracy.

In an implementation form of the first aspect, each module comprises either matrix multiplication layers in a FFT domain or one or more hidden layers of the CNN.

In an implementation form of the first aspect, the processor is further configured to apply the final SR kernel to the input image having the lower resolution, to obtain an output image having the target higher resolution.

Thus, the SR kernel determination of the device of the first aspect can be directly used for SR imaging, e.g., in a real-time application. The device of the first aspect may accordingly be configured as a SR solution with SR kernel estimation.

A second aspect of this disclosure provides a method for SR imaging, the method comprising: estimating an effective blurring kernel based on an input image; estimating a coarse SR kernel by convolving the effective blurring kernel with itself a specified number of times, wherein the specified number is based on a target higher resolution; and estimating a final SR kernel by refining the coarse SR kernel.

In an implementation form of the second aspect, the method further comprises estimating the coarse SR kernel by convolving the effective blurring kernel with itself 2sf times, wherein sf is an upscaling factor between the lower resolution of the input image and the target higher resolution.

In an implementation form of the second aspect, the method further comprises estimating the effective blurring kernel using a blind deblurring algorithm.

In an implementation form of the second aspect, the method further comprises determining a gradient of the input image, applying a FFT to the gradient of the input image, and processing the result of applying the FFT in an FFT domain.

In an implementation form of the second aspect, the method comprises estimating the effective blurring kernel, the coarse SR kernel, and the final SR kernel, using a CNN.

In an implementation form of the second aspect, the CNN comprises multiple modules, each module being usable either for estimating the effective blurring kernel, for estimating the coarse SR kernel, or for estimating the final SR kernel.

In an implementation form of the second aspect, each module comprises either matrix multiplication layers in a FFT domain or one or more hidden layers of the CNN.

In an implementation form of the second aspect, the method further comprises applying the final SR kernel to the input image having the lower resolution, to obtain an output image having the target higher resolution.

The method of the second aspect achieves the same advantages as are described above for the device of the first aspect.

A third aspect of this disclosure provides a computer program comprising a program code for, when running on a processor, causing the method of the second aspect or any of its implementation forms to be performed.

A fourth aspect of this disclosure provides a computer-readable medium having stored thereon a CNN, and instructions to cause the CNN to perform the method according to one of the second aspect or any of its implementation forms, when the instructions are executed by a processor.

It has to be noted that all devices, elements, units and means described in the present application could be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.

BRIEF DESCRIPTION OF DRAWINGS

The above described aspects and implementation forms will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which

FIG. 1 shows a device according to an embodiment of this disclosure.

FIG. 2 shows a device according to an embodiment of this disclosure. In particular, a SR kernel estimation pipeline of the device is illustrated.

FIG. 3 shows a device according to an embodiment of this disclosure. In particular, a SR kernel estimator network of the device (called “KernelNet”) is illustrated.

FIG. 4a shows a visual comparison of results obtained with kernels estimated with different SR kernel estimators, in particular, for six different examples.

FIG. 4b shows a visual comparison of results achieved with different SR algorithms for real-world images.

FIG. 4c shows a visual comparison of results achieved with different SR algorithms for real-world images.

FIG. 5 shows a method according to an embodiment of this disclosure.

FIG. 6 shows shortcomings of conventional SR kernel estimation techniques, including a model-based technique and KernelGAN, compared to the ground truth.

DETAILED DESCRIPTION OF EMBODIMENTS

FIG. 1 shows a device 100 according to an embodiment of this disclosure. The device 100 can be used for SR imaging. In particular, the device 100 is configured to determine a SR kernel. The device 100 may also be configured as a full SR solution with SR kernel estimation.

The device 100 comprises a processing circuitry (processor), which may comprise multiple functional blocks 112, 123, 134 and 115 (processing blocks) as shown in FIG. 1. The processing circuitry is configured to perform, conduct or initiate the various operations of the device 100 described herein. The processing circuitry may comprise hardware and/or may be controlled by software. The hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry. The digital circuitry may comprise components such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or multi-purpose processors. The device 100 may further comprise memory circuitry (not shown), which is configured to store one or more instruction(s) that can be executed by the processing circuitry, in particular under control of the software. For instance, the memory circuitry may comprise a non-transitory storage medium storing executable software code which, when executed by the processing circuitry, causes the various operations of the device 100 to be performed. In one embodiment, the processing circuitry comprises one or more processing units and a non-transitory memory connected to the one or more processing units. The non-transitory memory may carry executable program code which, when executed by the one or more processing units, causes the device 100 to perform, conduct or initiate the operations or methods described herein.

The device 100 is configured to receive an input image 101 of a LR, and the processor (processing circuitry) is configured to process the input image 101, namely as follows. First, the processor is configured to estimate an effective blurring kernel 102 based on the LR input image 101. This may be done by the functional block 112 of the processor. Then, the processor is configured to estimate a coarse SR kernel 103, namely by convolving the effective blurring kernel 102 with itself a specified number of times, wherein the specified number is based on a target HR, e.g., on the HR of an output image that is to be obtained. For instance, the effective blurring kernel may be convolved 2sf times with itself, wherein 2sf is the specified number of times, and wherein sf is an upscaling factor between the LR of the input image 101 and the target HR. This may be done by the functional block 123 of the processor. Further, the processor is configured to estimate a final SR kernel 104 by refining the coarse SR kernel 103. This may be done by the functional block 134 of the processor.

Optionally, the device 100 may be further configured to provide an output image 105 of the HR. The processor of the device 100 may, to this end, be configured to apply the estimated final SR kernel 104 to the LR input image 101, in order to obtain the target HR output image 105.

FIG. 2 shows a device 100 according to an embodiment of this disclosure, which builds on the embodiment shown in FIG. 1. Same reference signs in FIG. 1 and FIG. 2 label the same elements, wherein implementations of these elements may be identical. In particular, FIG. 2 illustrates a SR kernel estimation pipeline, which the device 100 may comprise. The SR kernel estimation pipeline can be configured to perform the following steps.

In the first step (at the optional functional block 112), the effective blurring kernel 102 is estimated in low scale (LR), and is denoted by k_LR. It is thereby assumed that k is the GT SR kernel to be estimated from a LR image, particularly, from the input image 101. After having estimated the effective blurring kernel k_LR, the coarse SR kernel 103 is estimated in the second step (at the optional functional block 123). In particular, the coarse SR kernel 103, which is denoted k_HR, can be estimated by convolving the estimated effective blurring kernel k_LR with itself 2sf times, wherein sf is a scale factor 200 (for sf× upscaling). That is, the estimated LR effective blurring kernel 102 may be upscaled to obtain the coarse HR SR kernel. Finally, the final SR kernel 104, denoted k, can be obtained in the third step (at the optional functional block 134) by refining the estimated coarse SR kernel k_HR, i.e., by obtaining a finer estimation.
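The second step widens the kernel in a predictable way: since variances add under convolution of normalized kernels, 2sf self-convolutions multiply the kernel variance by 2sf + 1. The following 1-D sketch illustrates this; the Gaussian form, size, and sigma of k_LR are assumptions for illustration only.

```python
import numpy as np

def gaussian(size, sigma):
    # Assumed low-scale kernel k_LR (illustrative only).
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def variance(kernel):
    # Second central moment of a normalized 1-D kernel.
    x = np.arange(len(kernel)) - len(kernel) // 2
    mean = np.sum(x * kernel)
    return np.sum(((x - mean) ** 2) * kernel)

sf = 2                            # example upscaling factor 200
k_lr = gaussian(41, sigma=1.5)    # assumed effective blurring kernel k_LR
k_hr = k_lr.copy()
for _ in range(2 * sf):           # 2·sf self-convolutions (block 123)
    k_hr = np.convolve(k_hr, k_lr, mode="full")
k_hr /= k_hr.sum()
# Variances add under convolution: var(k_hr) = (2·sf + 1) · var(k_lr).
```

This gives one quantitative view of how the LR kernel is "grown" toward the HR scale before the refinement step.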

FIG. 3 shows a device 100 according to an embodiment of this disclosure, which builds on the embodiments of FIG. 1 and FIG. 2. Same reference signs in FIG. 1, FIG. 2 and FIG. 3 label the same elements, wherein implementations of these elements may be identical. In particular, FIG. 3 shows a SR kernel estimator network 300, which the device 100 may comprise (wherein the network 300 is referred to as “KernelNet”). The network 300 may be based on or be a CNN, i.e., the device 100 may be able to estimate the effective blurring kernel 102, the coarse SR kernel 103, and the final SR kernel 104, respectively, using a CNN 300. The CNN 300 may have a plurality of modules, wherein each module may be configured to either estimate the effective blurring kernel 102, or to estimate the coarse SR kernel 103, or to estimate the final SR kernel 104. Each module may comprise one or more hidden layers of the CNN 300.

In FIG. 3, the CNN 300 includes, as an example, four modules, namely a gradient estimator module 310, a FFT-1 module 311, a FFT-2 module 312, and a refinement module 313. The respective modules 310, 311, 312, 313 of the CNN 300 may be associated to, or may be implemented by, the functional blocks 112, 123, and 134 of the processor shown in FIG. 1 and FIG. 2. That is, the processor may be configured to implement the CNN 300. A data structure of the CNN 300, and instructions regarding an operation of the CNN 300, may be stored in a memory coupled to and working together with the processor. For instance, the memory may have stored thereon the CNN 300 and the instructions to cause the CNN 300 to perform a method according to steps performed by the device 100 as described above (or according to the method 500 shown in FIG. 5), when the instructions are executed by the processor.

The gradient estimator module 310 and the FFT-1 module 311 may be responsible for estimating the effective blurring kernel 102 in low scale, which is again denoted k_LR, i.e., these two modules 310, 311 may be implemented in this example by the functional block 112 of the processor (see FIG. 1 and FIG. 2). In particular, the estimating of the effective blurring kernel 102 may comprise determining a gradient 301 of the input image 101 (by the gradient estimator module 310), then applying a FFT to the gradient 301 of the input image 101 (by the FFT-1 module 311), and then optionally further processing the result of applying the FFT to the gradient 301 in the FFT domain.

The FFT-2 module 312 (which may also be referred to as a self-convolution module) is then configured to upsample the previously obtained effective blurring kernel 102 in the FFT domain, in order to obtain the coarse SR kernel 103, which is again denoted k_HR. In particular, the FFT-2 module 312 may be configured to estimate the coarse SR kernel 103 by convolving the effective blurring kernel 102 with itself the specified number of times mentioned above (e.g., 2sf times, with sf being the upscaling factor 200), i.e., it may be implemented by the functional block 123 of the processor (see FIG. 1 and FIG. 2).
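One possible reading of this FFT-domain self-convolution (an assumption for illustration, not the claimed implementation) is that m = 2sf successive self-convolutions correspond to raising the zero-padded kernel spectrum to the (m + 1)-th power. The sketch below checks this reading against direct spatial convolution; the small example kernel is arbitrary.

```python
import numpy as np

k = np.array([0.25, 0.5, 0.25])   # assumed low-scale kernel
m = 4                              # e.g. 2·sf with sf = 2

# Spatial-domain reference: m self-convolutions.
ref = k.copy()
for _ in range(m):
    ref = np.convolve(ref, k, mode="full")

# FFT-domain equivalent: zero-pad to the full output length, raise
# the spectrum to the (m + 1)-th power, and transform back.
out_len = len(k) + m * (len(k) - 1)
K = np.fft.fft(k, n=out_len)
k_fft = np.real(np.fft.ifft(K ** (m + 1)))
```

Zero-padding to the full output length makes the circular FFT product coincide with linear convolution, which is why a single spectral power can replace the m-fold convolution loop.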

The refinement module 313 is then configured to obtain a finer estimation of k as an output of the CNN 300. That is, the refinement module 313 is configured to estimate the final SR kernel 104 by refining the coarse SR kernel 103, i.e., it may be implemented by the functional block 134 of the processor (see FIG. 1 and FIG. 2).

An experimental evaluation of the results obtained by embodiments of this disclosure is now discussed with respect to FIG. 4. In particular, FIG. 4 shows in (a) a visual comparison of results obtained with kernels estimated by different SR kernel estimators for six different examples. Further, FIG. 4 shows in (b) a visual comparison of results obtained by SR algorithms on real-world images, and shows in (c) a visual comparison of results obtained by SR algorithms applied to real-world images.

FIG. 4(a) demonstrates SR kernel estimation on synthetic image data. For obtaining a synthetic image data test set, one hundred 1024 × 1024 patches were randomly extracted from images included in the DIV2K dataset (i.e., a set of diverse 2K resolution high quality images). Then, these patches were degraded by SR kernels, which were randomly chosen from a realistic SR kernel pool (test pool). This realistic SR kernel test pool was generated by using a conventional method. The visual comparison of the results of the different kernel estimators is shown in FIG. 4(a), wherein “KernelNet” relates to results obtained by a device 100 according to an embodiment of this disclosure.

FIG. 4(b) and (c) demonstrate results of SR kernel estimation on real-world images. For a visual comparison of the results, a subset of images included in a mobile phone image dataset was up-scaled by using all the competing SR methods. FIG. 4(b) and (c) are for different scales, and again “KernelNet” relates to a device 100 according to an embodiment of this disclosure.

In summary of FIG. 4, the device 100 according to an embodiment of this disclosure achieves a better performance. In particular, the device 100 outperforms the conventional SR kernel estimators (by a significant margin) in SR kernel reconstruction accuracy. The device 100 also yielded faster results than the conventional competitor approaches.

FIG. 5 shows a method 500 according to an embodiment of this disclosure. The method 500 is usable for SR imaging, in particular for determining a SR kernel of a LR input image 101. The method 500 may be performed by the device 100 as shown in FIG. 1, FIG. 2 or FIG. 3, particularly, by its processor and/or CNN 300. The method 500 comprises a step 501 of estimating an effective blurring kernel 102 based on the input image 101. Further, the method 500 comprises a step 502 of estimating a coarse SR kernel 103 by convolving the effective blurring kernel 102 with itself a specified number of times. The specified number is based on a target HR, e.g., of a HR output image. Further, the method 500 comprises a step 503 of estimating a final SR kernel 104 by refining the coarse SR kernel 103.

The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those persons skilled in the art and practicing the claimed subject matter, from studies of the drawings, this disclosure and the independent claims. In the claims as well as in the description, the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.