


Title:
SYSTEMS AND METHODS FOR BLIND MULTI-SPECTRAL IMAGE FUSION
Document Type and Number:
WIPO Patent Application WO/2021/205735
Kind Code:
A1
Abstract:
Systems, methods and apparatus for image processing for reconstructing a super resolution image from multispectral (MS) images. Receive image data, initialize a fused image using a panchromatic (PAN) image, and estimate a blur kernel between the PAN image and the MS images as an initialization function. Iteratively, fuse a MS image with an associated PAN image of a scene using a fusing algorithm. Each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior function to regularize the high-resolution information to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, and compare it to a predetermined threshold to stop the iterations and obtain a PAN-sharpened image.

Inventors:
LIU DEHONG (US)
YU LANTAO (US)
MA YANTING (US)
MANSOUR HASSAN (US)
BOUFOUNOS PETROS (US)
Application Number:
PCT/JP2021/004190
Publication Date:
October 14, 2021
Filing Date:
January 22, 2021
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
International Classes:
G06T3/40
Foreign References:
US9225889B12015-12-29
Other References:
JIAO JIAO ET AL: "Image Restoration for the MRA-Based Pansharpening Method", IEEE ACCESS, IEEE, USA, vol. 8, 10 January 2020 (2020-01-10), pages 13694 - 13709, XP011768379, DOI: 10.1109/ACCESS.2020.2965921
HE CHUAN ET AL: "An Adaptive Total Generalized Variation Model with Augmented Lagrangian Method for Image Denoising", MATHEMATICAL PROBLEMS IN ENGINEERING, vol. 2014, 1 January 2014 (2014-01-01), CH, pages 1 - 11, XP055799867, ISSN: 1024-123X, Retrieved from the Internet DOI: 10.1155/2014/157893
DENIS LOÏC ET AL: "Fast Approximations of Shift-Variant Blur", INTERNATIONAL JOURNAL OF COMPUTER VISION, KLUWER ACADEMIC PUBLISHERS, NORWELL, US, vol. 115, no. 3, 8 April 2015 (2015-04-08), pages 253 - 278, XP035934553, ISSN: 0920-5691, [retrieved on 20150408], DOI: 10.1007/S11263-015-0817-X
YU LANTAO ET AL: "Blind Multi-Spectral Image Pan-Sharpening", ICASSP 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 4 May 2020 (2020-05-04), pages 1429 - 1433, XP033793217, DOI: 10.1109/ICASSP40776.2020.9053554
Attorney, Agent or Firm:
FUKAMI PATENT OFFICE, P.C. (JP)
Claims:
[CLAIMS]

[Claim 1]

A system for reconstructing a super resolution image from multispectral (MS) images, having a transceiver to accept data, a memory to store the data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image, as well as a processing device operatively connected to the transceiver and the memory, the system comprising that the processing device is configured to: initialize a fused image using the PAN image, and estimate a blur kernel between the PAN image and the MS images as an initialization function; iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor, each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device.

[Claim 2]

The system of claim 1, wherein the PAN image used to initialize the fused image is a rigid PAN image.

[Claim 3]

The system of claim 1, wherein the blur kernel is a rigid transformation blur kernel, and the initialization function is an initial blur kernel function.

[Claim 4]

The system of claim 1, wherein the LLP regularizes a relationship between high-frequency components of the MS images and PAN image, yielding a level of a fusion performance that is greater than a level of a fusion performance using local gradient constraints.

[Claim 5]

The system of claim 4, wherein the LLP is a second-order gradient, such that the LLP is generalized to a second-order gradient or a higher-order gradient.

[Claim 6]

The system of claim 1, wherein the TGV2 function is operable when an assumption that an image is piecewise constant is not valid in reconstructing images, such that the piecewise constant images are captured using the TGV2 function during the image reconstruction.

[Claim 7]

The system of claim 6, wherein the TGV2 is a regularizer on the blur kernel, which is assumed to be smooth and centralized according to the TGV2.

[Claim 8]

The system of claim 1, wherein the MS images are obtained from a MS image sensor having a color filter array and positioned at a first optical axis and the PAN images are obtained from a PAN image sensor positioned at a second optical axis that converges at an angle with the first optical axis.

[Claim 9]

An apparatus having computer storage including a computer-readable storage medium, and a hardware processor device operatively coupled to the computer storage and to reconstruct spatial resolution of an image of a scene captured within multi-spectral (MS) images and panchromatic (PAN) images, the MS images obtained from a MS image sensor having a color filter array and positioned at a first optical axis, and the PAN images obtained from a PAN image sensor positioned at a second optical axis that is substantially parallel to the first optical axis, wherein, to reconstruct the spatial resolution of the image, the apparatus comprising that the hardware processor device is to: initialize a fused image using a PAN image, and estimate a blur kernel between the PAN image and the MS images using an initialization function; iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor, each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN and the fused MS image to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device.

[Claim 10]

The apparatus of claim 9, wherein the MS images are low resolution images and are obtained from the MS image sensor optically coupled to a first imaging lens, and the PAN images are high resolution images and are obtained from the PAN image sensor, the MS image sensor and the PAN image sensor have substantially identical focal plane arrays of substantially identical photosensitive elements, and wherein the MS image sensor and the PAN image sensor are set in substantially a single geometric plane such that the focal plane arrays receive optical projections of substantially an identical version of the scene.

[Claim 11]

The apparatus of claim 9, wherein the MS images are captured at a first frame rate and the PAN images are captured at a second frame rate different than or the same as the first frame rate.

[Claim 12]

The apparatus of claim 9, wherein the blur kernel combines a Point Spread Function (PSF) and a shift, such as a rigid transformation, together.

[Claim 13]

The apparatus of claim 9, wherein the MS images are obtained from a MS image sensor having a color filter array and positioned at a first optical axis and the PAN images are obtained from a PAN image sensor positioned at a second optical axis that converges at an angle with the first optical axis.

[Claim 14]

A system for reconstructing a super resolution image from multispectral (MS) images, having an input interface to accept data, along with a memory to store the data, the data including MS images and panchromatic (PAN) images of a scene, each MS image is associated with a PAN image, such that a hardware processing device is operatively connected to the input interface and the memory, the system comprising that the hardware processing device is configured to: initialize a fused image using a rigid PAN image; estimate a rigid transformation blur kernel between the PAN image and the MS images as an initial blur kernel function; iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor, each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device.

[Claim 15]

The system of claim 14, wherein the MS images are obtained from a MS image sensor having a color filter array and positioned at a first optical axis and the PAN images are obtained from a PAN image sensor positioned at a second optical axis that converges at an angle with the first optical axis.

[Claim 16]

The system of claim 14, wherein the data accepted by the input interface includes some data obtained from sensors including at least one MS image sensor device and at least one PAN image sensor device.

[Claim 17]

The system of claim 14, wherein the PAN image used to initialize the fused image is a rigid PAN image.

[Claim 18]

The system of claim 14, wherein the blur kernel is a rigid transformation blur kernel, and the initialization function is an initial blur kernel function.

[Claim 19]

A non-transitory machine-readable medium including instructions stored thereon which, when executed by processing circuitry, configure the processing circuitry to perform operations to sharpen a multi-spectral (MS) image using data from a panchromatic (PAN) image, the operations for: receiving data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image; initializing a fused image using the PAN image, and estimating a blur kernel between the PAN image and the MS images using an initialization function; iteratively fusing a MS image with an associated PAN image of the scene using a fusing algorithm by a processor, each iteration includes: updating the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fusing the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; computing a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and outputting the PAN-sharpened image to a communication network or to a display device via an output interface in communication with the processor.

[Claim 20]

The non-transitory machine-readable medium of claim 19, further including instructions stored thereon which, when executed by a machine, are configured for the machine to perform operations to create a PAN image with about a same resolution as a resolution of a MS image by down-sampling PAN image data stored in the memory, or determining PAN image data from the MS image data, such that the received data, received via a transceiver device in communication with the non-transitory machine-readable medium and processing circuitry, includes some data obtained from sensors including at least one MS image sensor device and at least one PAN image sensor device.

Description:
[DESCRIPTION]

[Title of Invention]

SYSTEMS AND METHODS FOR BLIND MULTI-SPECTRAL IMAGE

FUSION

[Technical Field]

[0001]

The present disclosure relates generally to multi-spectral imaging, and more particularly to fusing low spatial resolution multi-spectral images with their associated but not well aligned high spatial resolution panchromatic image.

[Background Art]

[0002]

Conventional multi-spectral (MS) imaging is widely used in remote sensing and related areas. The bands of interest in MS imaging cover RGB, near infra-red (NIR), and shortwave IR (SWIR), etc. MS imaging provides for discrimination of objects with different material properties which may otherwise be very similar in the RGB bands, and information can be gathered in the presence of harsh atmospheric conditions such as haze and fog, as infra-red waves can travel more easily through these media, as compared to visible light.

[0003]

Conventional MS sensing presents many interesting challenges. For example, many applications require both high spatial and spectral resolutions. However, there is a fundamental trade-off between the bandwidth of the sensor and the spatial resolution of the image. Conventionally, high spatial resolution is achieved by a panchromatic (PAN) image covering the visible RGB bands but without spectral information, while MS images have rich spectral information but low spatial resolution, which leads to the problem of MS image fusion.

[0004]

Conventional methods use various techniques to mitigate this hardware limitation and achieve images with both high spatial and high spectral resolution. Further, there are many problems with conventional MS image fusion methods. For example, given a set of low resolution MS images obtained at different wavelengths as well as a high resolution panchromatic image which does not have spectral information, the conventional model-based MS image fusion methods may not perform well in achieving both high spectral and high spatial resolutions, while the recent data-driven methods, especially deep-learning based methods, may achieve good performance, but require a lot of training MS and PAN images, are less interpretable, and lack a theoretical convergence guarantee.

[0005]

For example, some conventional methods use original MS and PAN images captured by different sensors, from different view angles, or at different times, resulting in images that are not well-aligned with each other, while assuming they share the same blur kernel. Further, the parametric relationship between MS and PAN images is often unclear since the spectrum of the PAN image only covers a fraction of the entire spectra of the MS image. One of the many problems with these conventional methods is the lack of spatial resolution of the MS images. These conventional methods fail to increase the spatial resolution of MS images, in part because the MS images are degraded by the misalignment between the MS and PAN images.

[0006]

The present disclosure addresses the technological needs of today’s image processing industries and other related technology industries, by solving the conventional problems of MS image fusion, by producing a set of images that have both high spectral and high spatial resolutions.

[Summary of Invention]

[0007]

The present disclosure relates to fusing low spatial resolution multi-spectral (MS) images with their associated high spatial resolution but not well aligned panchromatic (PAN) image.

[0008]

Some embodiments of the present disclosure assume the low resolution MS images can be obtained by blurring and downsampling the fused high-resolution MS image, wherein the blurring operation is realized by an unknown smooth blur kernel, wherein the kernel has a minimum second-order total generalized variation, and the high-resolution information of the fused MS image can be acquired from the PAN image via a local Laplacian prior function. Some embodiments of the present disclosure initialize an estimated fused image using a PAN image and obtain an estimated blur kernel via an initialization function. Then, a fusing algorithm is used iteratively, such that each iteration includes the steps of updating the estimated blur kernel using a Second-Order Total Generalized Variation (TGV2) function, a next step of fusing the PAN and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to obtain an estimated fused image, followed by a step of computing a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image. This loop is sketched below.
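
As an illustration of this loop (not the full method of the disclosure), the following minimal Python sketch shows the iterative structure and the relative-error stopping rule. The `update_kernel` and `fuse` callables and the Gaussian initial kernel are hypothetical placeholders for the TGV2 kernel update, the LLP fusion step, and the initialization function.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    # Hypothetical stand-in for the blur-kernel initialization function.
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blind_fusion(ms_up, pan, update_kernel, fuse, threshold=1e-4, max_iters=50):
    """Alternate the kernel update and the fusion step until the relative
    change of the fused image drops below `threshold` (see [0008])."""
    # Initialize the fused image by stacking the PAN image across all bands.
    fused = np.repeat(pan[..., None], ms_up.shape[-1], axis=-1)
    kernel = gaussian_kernel()
    for _ in range(max_iters):
        kernel = update_kernel(kernel, fused, ms_up)   # TGV2-regularized update
        fused_new = fuse(pan, ms_up, kernel)           # LLP-regularized fusion
        rel_err = np.linalg.norm(fused_new - fused) / np.linalg.norm(fused)
        fused = fused_new
        if rel_err < threshold:    # iterations stop below the threshold
            break
    return fused                   # the PAN-sharpened image
```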

[0009]

However, in order to construct the embodiments of the present disclosure, experimentation included many experimental approaches, including an approach to blind MS image pan-sharpening. For example, blind MS image pan-sharpening aims to enhance the spatial resolution of a set of spatially low-resolution MS channels, covering a wide spectral range, using their corresponding misaligned spatially high-resolution PAN image. Here, the original MS and PAN images were captured using different sensors, from different view angles, or at different times, which resulted in images not well-aligned with each other. Also, the parametric relationship between MS and PAN images was unclear since the spectrum of the PAN image only covered a fraction of the entire spectra of the MS image. In order to address these problems, some embodiments of the present disclosure realized that some methods need to fuse MS and PAN images without knowledge of the misalignment, the blur kernel, or any parametric models of the cross-channel relationship. Based on this realization, results were found to yield significantly better images, with the spatial resolution of the PAN image and the spectral resolution of the MS images, when compared to the above conventional methods that use optimization-based and deep-learning-based algorithms. However, those methods still exhibited limited success in MS image pan-sharpening tasks.

[0010]

So, further experimentation led to some experiments in pan-sharpening using local gradient constraints (LGC) to regularize the cross-channel relationship; however, improvements were found only when the blur kernel was known. This was followed by experiments relating to cross-channel priors for blind image pan-sharpening. Other experiments included using a total variation (TV)-based regularizer applied to the blur kernel, which appeared to force small gradients to 0, resulting in non-trivial errors when the ground-truth blur kernel is smooth. Still other experiments included using a Second-Order Total Generalized Variation (TGV2), which later proved to have more flexible features than the total variation. However, because of the non-convexity of the problems to be solved according to the present disclosure, some experimental methods resulted in a bad local minimum when misaligned displacements were large, thus causing poor fusion performance. Thus, based on the different approaches of experimentation, some novel aspects were realized, and later used in constructing some of the embodiments of the present disclosure.

[0011]

For example, at least one realization gained from experimentation included using a novel local Laplacian prior (LLP) to regularize a relationship between MS and PAN images, which was found to deliver better performance than using local gradient constraints (LGC). Another realization gained from experimentation is in using a Second-Order Total Generalized Variation (TGV2) to regularize the blur kernel, which resulted in offering more robust and accurate estimation of the blur kernel than existing TV-based priors, as well as resulted in providing more flexible features than the total variation. Still another realization gained from experimentation is in adopting an initialization strategy for the blur kernel which was later discovered to help avoid undesirable local minima in the optimization, among other novel aspects.

[0012]

Some of the embodiments of the present disclosure address the conventional problems of sharpening MS images with their associated misaligned PAN image, based on using priors on a spatial blur kernel and on a cross-channel relationship. In other words, the blind pan-sharpening problem is formulated within a multi-convex optimization framework using a total generalized variation prior for the blur kernel and a local Laplacian prior for the cross-channel relationship. The problem can be solved by the alternating direction method of multipliers (ADMM), which alternately updates the blur kernel and sharpens intermediate MS images. After experimentation with these methods of the present disclosure, numerical experiments demonstrated that this approach is more robust to large misalignment errors and yields significantly better super-resolved MS images when compared to conventional methods that use optimization-based and deep-learning-based algorithms. However, these embodiments were also constructed based on the existing realizations, as well as other realizations gained from more experimentation.

[0013]

For example, some of these other realizations learned from experimentation include different approaches using model-based methods and data-driven methods. Because MS image fusion is essentially an under-determined, ill-posed problem, one aspect learned is that the model-based methods generally have theoretical convergence guarantees, but have relatively poor performance when compared to data-driven methods. This was witnessed in some experiments using deep learning-based methods. On the other hand, purely data-driven methods that operate as a black box are less interpretable. Model-based deep learning approaches eventually led to experimentation using a combination of a model-based and data-driven solution based on deep learning in order to solve the multi-spectral image fusion problem, for example, unrolling iterations of the projected gradient descent (PGD) algorithm and then replacing the projection step of PGD with a convolutional neural network (CNN). However, these experimental approaches were found to have many constraints and problems, not suitable for the proposed embodiments and methods of the present disclosure. For example, some of these many constraints and problems included:

(1) requiring the PAN and MS sensors to be at the same angle and elevation (unacceptable problems), where it was later realized that these physical sensor localization limitations were unacceptable and significantly limited the use and application for the user, among other complications;

(2) requiring prior knowledge of the misalignment, knowledge of the blur kernel, or parametric models of the cross-channel relationship (unacceptable problems), which upon reflection were found to be unacceptable and very limiting in terms of computational resources, setup requirements, and hardware requirements, among other difficulties;

(3) requiring pre-training of the fusion algorithm (unacceptable problems), which, as learned from experimentation, resulted in extensive computation costs and computation time, among other drawbacks;

(4) requiring large amounts of data for the methods to operate (unacceptable problems), which, as learned from experimentation, proved to be inconvenient, time consuming, and commanded an extensive effort, among other challenges;

(5) requiring an extensive amount of computational time (unacceptable problems), where experimentation showed that the time complexity of these algorithms quantified large amounts of time taken to run as a function of the length of the input (i.e., the computation time or run time to perform a computational process was excessive), as well as having a space complexity that required large amounts of space or memory taken by these algorithms to run as a function of the length of the input; in view of the time limitations imposed on some of the methods of the present disclosure, the above constraints and problems were compared to proposed performance and operation thresholds believed for some of the methods of the present disclosure.

Practical Applications

[0014]

Some practical applications of the embodiments of the present disclosure can include fusing low-resolution remote sensing MS images using high resolution PAN images captured by a different platform or at a different time, for land surveying, forest coverage analysis, crop growth monitoring, mineral exploration, etc.

[0015]

According to an embodiment of the present disclosure, a system for reconstructing a super resolution image from multispectral (MS) images. The system having a transceiver to accept data. A memory to store the data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image. A processing device operatively connected to the transceiver and the memory. The system comprising that the processing device is configured to initialize a fused image using the PAN image, and estimate a blur kernel between the PAN image and the MS images as an initialization function. Iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor. Each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image. Output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device.

[0016]

According to another embodiment of the present disclosure, an apparatus having computer storage including a computer-readable storage medium. A hardware processor device operatively coupled to the computer storage and to reconstruct spatial resolution of an image of a scene captured within multi-spectral (MS) images and panchromatic (PAN) images. The MS images obtained from a MS image sensor having a color filter array and positioned at a first optical axis. The PAN images obtained from a PAN image sensor positioned at a second optical axis that is substantially parallel to the first optical axis. Wherein, to reconstruct the spatial resolution of the image, the apparatus comprising that the hardware processor device is to initialize a fused image using a PAN image, and estimate a blur kernel between the PAN image and the MS images using an initialization function. Iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor. Each iteration includes: (a) update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; (b) fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN and the fused MS image to obtain an estimated fused image; and (c) compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image. An output interface in communication with the processor, to output the PAN-sharpened image to a communication network or to a display device.

[0017]

According to another embodiment of the present disclosure, a system for reconstructing a super resolution image from multispectral (MS) images. The system having an input interface to accept data. The system having a memory to store the data, the data including MS images and panchromatic (PAN) images of a scene, each MS image is associated with a PAN image, and a hardware processing device operatively connected to the input interface and the memory. The system comprising that the hardware processing device is configured to initialize a fused image using a rigid PAN image. Estimate a rigid transformation blur kernel between the PAN image and the MS images as an initial blur kernel function. Iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor. Each iteration includes: (a) update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; (b) fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; (c) compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and (d) output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device.

[0018]

According to another embodiment of the present disclosure, a non-transitory machine-readable medium including instructions stored thereon which, when executed by processing circuitry, configure the processing circuitry to perform operations to sharpen a multi-spectral (MS) image using data from a panchromatic (PAN) image, the operations for receiving data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image. Initializing a fused image using the PAN image, and estimate a blur kernel between the PAN image and the MS images to obtain a blur kernel using an initialization function. Iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor. Each iteration includes: (a) updating the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; (b) fusing the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; (c) computing a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image; and (d) outputting the PAN-sharpened image to a communication network or to a display device via an output interface in communication with the processor.

[0019]

The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.

[Brief Description of Drawings]

[0020]

[Fig. 1A]

FIG. 1A is a block diagram illustrating a method for image processing for increasing resolution of a multi-spectral image, according to some embodiments of the present disclosure;

[Fig. 1B]

FIG. 1B is a schematic illustrating a method that includes some components that may be used for implementing the method, according to some embodiments of the present disclosure;

[Fig. 2]

FIG. 2 is a schematic illustrating how the method can collect data for the method, according to some embodiments of the present disclosure;

[Fig. 3A]

FIG. 3A is a schematic illustrating MS image fusion using a fusion system, according to some embodiments of the present disclosure;

[Fig. 3B]

FIG. 3B is a schematic illustrating some steps for implementing the fusion algorithm, according to some embodiments of the present disclosure;

[Fig. 4A]

FIG. 4A is a picture illustrating a comparison of estimated kernels in experiment II, such that FIG. 4A shows results using a prior art method, according to some embodiments of the present disclosure;

[Fig. 4B]

FIG. 4B is a picture illustrating a comparison of estimated kernels in experiment II, such that FIG. 4B shows results using the method of the present disclosure, according to some embodiments of the present disclosure;

[Fig. 4C]

FIG. 4C is a picture illustrating a comparison of estimated kernels in experiment II, such that FIG. 4C shows a ground truth image according to some embodiments of the present disclosure;

[Fig. 4D]

FIG. 4D is a schematic of a table illustrating some experimental results of a quantitative analysis of blind pan-sharpening results, according to some embodiments of the present disclosure;

[Fig. 5A]

FIG. 5A is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 5A is an input low-resolution MS image (only showing the RGB channels) according to some embodiments of the present disclosure;

[Fig. 5B]

FIG. 5B is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 5B is a high-resolution PAN image according to some embodiments of the present disclosure;

[Fig. 5C]

FIG. 5C is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 5C is a fused image, according to some embodiments of the present disclosure;

[Fig. 5D]

FIG. 5D is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 5D is a simulated true high-resolution MS image according to some embodiments of the present disclosure;

[Fig. 6A]

FIG. 6A is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 6A shows a conventional prior art method using BHMIFGLR according to some embodiments of the present disclosure;

[Fig. 6B]

FIG. 6B is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 6B shows a conventional prior art method using HySure according to some embodiments of the present disclosure;

[Fig. 6C]

FIG. 6C is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 6C is a method of the present disclosure according to some embodiments of the present disclosure;

[Fig. 6D]

FIG. 6D is a picture illustrating examples of images of pan-sharpening results from experiment I using different methods, and FIG. 6D shows a ground truth image according to some embodiments of the present disclosure; and

[Fig. 7]

FIG. 7 is a block diagram illustrating the method of FIG. 1A, that can be implemented using an alternate computer or processor, according to some embodiments of the present disclosure.

[Description of Embodiments]

[0021]

While the above-identified drawings set forth presently disclosed embodiments, other embodiments are also contemplated, as noted in the discussion. This disclosure presents illustrative embodiments by way of representation and not limitation. Numerous other modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of the presently disclosed embodiments.

[0022]

FIG. 1A is a block diagram illustrating a method for image processing for increasing resolution of a multi-spectral image, according to some embodiments of the present disclosure.

[0023]

Step 110 of method 100A can include receiving data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image.

[0024]

Further, each MS image includes multiple channels, each channel is associated with a frequency band, such that an image of a channel represents the frequency response within the associated frequency band. It is possible the data can be stored in a memory. For example, the data can be stored in one or more databases of a computer readable memory, such that the processor or hardware processor is in communication with the computer readable memory and the input interface or a transceiver.
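
For concreteness, the multi-channel structure described above can be pictured as plain arrays; the dimensions below are hypothetical and chosen only for illustration.

```python
import numpy as np

h, w, N = 128, 128, 8    # hypothetical low-resolution MS height, width, band count
H, W = 512, 512          # hypothetical high-resolution PAN dimensions

ms = np.zeros((h, w, N), dtype=np.float32)   # one channel per frequency band
pan = np.zeros((H, W), dtype=np.float32)     # single wide-band PAN channel
band = ms[..., 4]    # the image of one channel: the response in one band
```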

[0025]

Step 115 of FIG. 1A can include initializing a fused image using the PAN image, and estimating a blur kernel between the PAN image and the MS images using an initialization function.

[0026]

Step 120 of FIG. 1A can include iteratively fusing a MS image with an associated PAN image of the scene using a fusing algorithm by a processor, where each iteration includes the following steps.

[0027]

Step 125 of FIG. 1A can include updating the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape.

[0028]

Step 130 of FIG. 1A can include fusing the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; and

[0029]

Step 135 of FIG. 1A can include computing a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration. Wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image.

[0030]

Step 140 of FIG. 1A can include outputting the PAN-sharpened image to a communication network or to a display device via an output interface in communication with the processor.

[0031]

Some methods of the present disclosure use a blind multi-spectral (MS) image fusion method using a local Laplacian prior (LLP) and second-order total generalized variation (TGV2). The LLP regularizes the relationship between high-frequency components of MS and PAN images, yielding better fusion performance than local gradient constraints, while the TGV2 regularizes the blur kernel with more robustness to noise and more accurate estimation of the blur kernel than other existing sparsity-driven priors. In experimentation, results exhibited consistently better performance in fusing mis-registered MS and panchromatic images than the conventional state-of-the-art methods, in terms of visual quality and multiple quantitative metrics. Further, as exhibited in experimentation, the methods of the present disclosure achieved faster convergence, in a shorter computational time with a warm start, than the conventional state-of-the-art methods. The blind fusion algorithm outperformed conventional deep-learning based methods in regions with abundant edges and textures, such as the Cuprite, Moffett, and Los Angeles images, and is comparable in regions without many edges, such as the Cambria Fire image.

[0032]

Also, some aspects of the embodiments of the present disclosure include novel methods for misaligned MS image pan-sharpening based on the local Laplacian prior (LLP) and the Second-Order Total Generalized Variation (TGV2). Numerical experiments show that some method approaches significantly outperform conventional optimization-based and deep learning-based baseline approaches. Moreover, some embodiments of the present disclosure have a better generalization ability than conventional deep learning-based methods, due in part to not requiring external training data, and thus provide substantial flexibility and adaptability to deal with multi-spectral imagery from a large variety of imaging platforms.

[0033]

FIG. IB is a schematic illustrating a method that includes some components that may be used for implementing the method 100B, according to some embodiments of the present disclosure. For example, some components can include an input interface 13, a user interface 17, a memory 10 (storage device), an external memory device 15, and a processor 12 (hardware processor) that can implement the steps of the method.

[0034]

The signal data can include multi-spectral (MS) image data gathered by at least one external sensor 14 and acquired by the input interface 13 or from an external memory device 15, or some other means of communication, either wired or wireless. For example, the signal data can be acquired by the processor 12 either directly or indirectly, e.g., by a memory transfer device or a wireless communication device. It is possible that a user interface 17 having a keyboard (not shown) can be in communication with the processor 12 and a computer readable memory, and can acquire and store the MS and PAN images and other data in the computer readable memory 10, upon receiving an input from a surface of the keyboard of the user interface 17 by a user.

[0035]

Still referring to FIG. 1B, the processor 12 can be in communication with a network-enabled server 39, which may be connected to a client device 18. Further, the processor 12 can be connected to a transmitter 16 and an external storage device 19.

[0036]

FIG. 2 is a schematic illustrating how a method 200 can collect data, i.e. a multi-spectral (MS) image 201 of a scene 209 and a panchromatic (PAN) image 202 of the scene 209, according to some embodiments of the present disclosure. The sensors 203 are capable of multiple sensing features, including capturing or collecting data over a wide frequency range beyond the optical bands, including MS images 201 and PAN images 202 of the scene 209. Since infra-red and short-wave infra-red bands can penetrate clouds, the sensors 203 can capture the scene 209 in the infra-red and short-wave infra-red bands. Due to the size and weight of the MS camera in the sensors 203, the resolution of the MS image 201 is lower than that of the PAN image 202.

[0037]

FIG. 3A is a schematic illustrating use of the fusing algorithm, having a process 300A that includes using low-resolution multi-spectral (LRMS) images 201 and high-resolution panchromatic (HRPAN) images 202 to reconstruct fused high-spatial and high-spectral resolution MS images 301, i.e. super-resolved (SR) multi-channel images, using a fusion algorithm 304, according to some embodiments of the present disclosure.

[0038]

FIG. 3B is a schematic illustrating some steps for implementing the fusion algorithm, according to some embodiments of the present disclosure.

[0039]

Step 1, 301 of FIG. 3B, includes inputting the original PAN image 202 and the original MS image 201 into the fusion algorithm 304.

[0040]

Step 2, 303 of FIG. 3B, explains the notation: we use $X \in \mathbb{R}^{h \times w \times N}$ to denote the measured low-resolution MS image with $N$ spectral bands, where $h$ and $w$ are the height and width of each band, respectively. We denote the measured high-resolution PAN image as $Y \in \mathbb{R}^{H \times W}$, where $H$ and $W$ are its height and width, respectively. The target well-aligned, high-resolution MS image of consistent sharpness with $Y$ is denoted as $Z \in \mathbb{R}^{H \times W \times N}$.

[0041]

Step 3, 305 to step 11, 321 of FIG. 3B, include that in order to reconstruct the target image $Z$, we solve the following regularized inverse problem:

$$\min_{Z,\,u}\ \frac{1}{2}\big\|X - D\,B(u)\,Z\big\|_F^2 + R_L(Z, Y) + R_u(u),$$

in which the first component is the data fidelity term; $u$ is the blur kernel, incorporating the misalignment and relative blur between $Z$ and $X$ prior to down-sampling; $B(u)$ is the Toeplitz matrix implementing the convolution due to the blur kernel $u$; and $D$ is the down-sampling operator. The second term $R_L$ characterizes the similarity relationship between $Z$ and $Y$, for which we use the local Laplacian prior (LLP) as a penalty term:

$$R_L(Z, Y) = \lambda \sum_{i=1}^{N} \sum_{j} \sum_{k \in \omega_j} \Big(\mathcal{L}(Z_i)_k - a_{ij}\,\mathcal{L}(Y)_k - b_{ij}\Big)^2,$$

where the parameters are defined as follows: $\lambda$ is a scalar factor; $\omega_j$ is the square window of size $(2r+1) \times (2r+1)$ in an $H \times W$ image, with $r$ an integer; $k$ refers to the $k$th element within the window, $k = 1, 2, \ldots, (2r+1)^2$; $a_{ij}$ and $b_{ij}$ are both constant coefficients of the linear affine transform in window $\omega_j$, corresponding to the $i$th band; $Z_i$ is the $i$th band of $Z$; and $\mathcal{L}(\cdot)$ is a function that computes the Laplacian of the input image, i.e., $\mathcal{L}(Z_i) = Z_i * l$ with the discrete Laplacian kernel $l$.
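
To make the prior concrete, the sketch below evaluates the LLP residual for one band with per-window affine coefficients, accumulating window sums with a box filter. It assumes one window per pixel center and the standard 4-neighbor discrete Laplacian; it is an illustration, not the disclosure's optimized solver.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)

def llp_penalty(Zi, Y, a, b, r=2):
    """Local Laplacian prior for one band Zi against the PAN image Y.
    a, b hold the affine coefficients of each (2r+1)x(2r+1) window,
    indexed by the window center."""
    lz = convolve(Zi, LAP, mode='nearest')    # Laplacian of the MS band
    ly = convolve(Y, LAP, mode='nearest')     # Laplacian of the PAN image
    n = (2 * r + 1) ** 2
    box = lambda img: n * uniform_filter(img, size=2 * r + 1)  # window sums
    # Expanded form of sum_k (lz_k - a_j*ly_k - b_j)^2 over each window j.
    res = (box(lz ** 2) - 2 * a * box(ly * lz) - 2 * b * box(lz)
           + a ** 2 * box(ly ** 2) + 2 * a * b * box(ly) + n * b ** 2)
    return res.sum()
```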

[0042]

Step 3, 305 of FIG. 3B and Step 4, 307 of FIG. 3B: due to the non-convexity of our problem, the initialization of $u$ plays a crucial role in avoiding bad local minima, especially when the misalignment is large. To overcome this problem, we propose to treat the stacked PAN as the ground-truth MS in the data fidelity term, constrained by the $N_O$ low-resolution MS bands whose spectra overlap with the PAN, and initialize $u$ by solving the optimization problem:

$$u^{(0)} = \arg\min_{u}\ \frac{1}{2} \sum_{i=1}^{N_O} \big\|X_i - D\,B(u)\,Y\big\|_2^2 + R_u(u).$$
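
A rough sketch of this initialization idea follows: the PAN image is treated as the high-resolution ground truth for a spectrally overlapping band, and a kernel is estimated in the frequency domain. The Wiener-style division, the bilinear upsampling, and the parameter values here are illustrative simplifications, not the regularized problem above.

```python
import numpy as np
from scipy.ndimage import zoom

def init_kernel(pan, ms_band, size=15, eps=1e-3):
    """Estimate an initial blur kernel that maps the PAN image to an
    upsampled, spectrally overlapping low-resolution MS band."""
    H, W = pan.shape
    ms_up = zoom(ms_band, (H / ms_band.shape[0], W / ms_band.shape[1]), order=1)
    Fp, Fm = np.fft.fft2(pan), np.fft.fft2(ms_up)
    u = np.real(np.fft.ifft2(np.conj(Fp) * Fm / (np.abs(Fp) ** 2 + eps)))
    u = np.fft.fftshift(u)                      # center the kernel
    cy, cx = u.shape[0] // 2, u.shape[1] // 2
    u = u[cy - size // 2: cy + size // 2 + 1, cx - size // 2: cx + size // 2 + 1]
    u = np.clip(u, 0.0, None)                   # enforce non-negativity
    return u / u.sum()                          # kernel sums to one
```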

[0043]

Step 5, 309 and step 6, 311 of FIG. 3B: the third term $R_u$ regularizes the blur kernel. The estimation quality of the blur kernel significantly affects the quality of the reconstructed high-resolution MS. We assume this kernel, constraining the target high-resolution MS and the measured low-resolution MS, is smooth, band-limited, and has compact spatial support with a non-vanishing tail. We regularize the blur kernel using the Second-Order Total Generalized Variation (TGV2), given by

$$R_u(u) = \min_{p}\ \alpha_1 \|\nabla u - p\|_1 + \alpha_0 \|\mathcal{E}(p)\|_1 + \iota_S(u),$$

where $\nabla u = [\nabla_x u, \nabla_y u]$ are the gradients of $u$; $p = [p_1, p_2]$ is an auxiliary variable of the same size as $\nabla u$; $\alpha_1$ controls the regularization strength of $p$'s approximation to $\nabla u$, and $\alpha_0$ that of the partial derivatives of $p$, collected in the symmetrized gradient $\mathcal{E}(p)$; $S$ is the simplex, and $\iota_S$ is its indicator function, which ensures that the computed blur kernel is non-negative and preserves the energy of the image, i.e., has sum equal to 1. We name our approach Blind pan-sharpening with local Laplacian prior and Total generalized variation prior, or BLT in short.

[0044]

Step 5, 309 of FIG. 3B: in order to solve for the blur kernel, we introduce auxiliary variables related to the second-order total generalized variation as $x = \nabla u - p$, $y = \mathcal{E}(p)$, and $z = u$, and apply the classical augmented Lagrangian method by minimizing

$$\mathcal{L}(u, p, x, y, z) = \alpha_1\|x\|_1 + \alpha_0\|y\|_1 + \iota_S(z) + \frac{\mu_1}{2}\Big\|\nabla u - p - x + \frac{\lambda_1}{\mu_1}\Big\|_2^2 + \frac{\mu_2}{2}\Big\|\mathcal{E}(p) - y + \frac{\lambda_2}{\mu_2}\Big\|_2^2 + \frac{\mu_3}{2}\Big\|u - z + \frac{\lambda_3}{\mu_3}\Big\|_2^2,$$

with $\mu_1, \mu_2, \mu_3 > 0$ and scaled dual variables $\lambda_1, \lambda_2, \lambda_3$. We solve the problem using the alternating direction method of multipliers (ADMM) by alternating between a succession of minimization steps and dual update steps.
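
Structurally, each ADMM pass can be organized as below. This is a schematic of the alternation only: the linear operators and quadratic solvers are passed in as callables, and the `shrink` and `project_simplex` helpers (sketched after the next two paragraphs) are assumptions of this sketch rather than the disclosure's exact solver.

```python
def admm_kernel_pass(u, p, duals, mu, alpha, ops, n_iter=30):
    """One run of the ADMM alternation for the kernel subproblem in [0044].
    `ops` supplies grad(u), eps_op(p), solve_up(...), project_simplex(v),
    and shrink(v, tau); `duals` holds the scaled multipliers (l1, l2, l3)."""
    l1, l2, l3 = duals
    for _ in range(n_iter):
        x = ops.shrink(ops.grad(u) - p + l1 / mu[0], alpha[0] / mu[0])  # x-step
        y = ops.shrink(ops.eps_op(p) + l2 / mu[1], alpha[1] / mu[1])    # y-step
        z = ops.project_simplex(u + l3 / mu[2])                         # z-step
        u, p = ops.solve_up(x, y, z, l1, l2, l3, mu)  # quadratic solves (FFT/CG)
        l1 = l1 + mu[0] * (ops.grad(u) - p - x)       # dual ascent
        l2 = l2 + mu[1] * (ops.eps_op(p) - y)
        l3 = l3 + mu[2] * (u - z)
    return u, p, (l1, l2, l3)
```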

[0045]

The minimization subproblems of x and y are similar to each other and the solutions are given by component-wise soft-thresholding.

[0046]

Step 5, 309 of FIG. 3B, the $l$th rows of $x^{t+1}$ and $y^{t+1}$ are updated using

$$x_l^{t+1} = \mathrm{shrink}\Big(\big(\nabla u^{t} - p^{t} + \lambda_1^{t}/\mu_1\big)_l,\ \alpha_1/\mu_1\Big), \qquad y_l^{t+1} = \mathrm{shrink}\Big(\big(\mathcal{E}(p^{t}) + \lambda_2^{t}/\mu_2\big)_l,\ \alpha_0/\mu_2\Big),$$

where $\mathrm{shrink}(v, \tau) = \mathrm{sign}(v)\max(|v| - \tau, 0)$ is the component-wise soft-thresholding operator.
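
A direct implementation of this soft-thresholding operator, under the convention shrink_tau(v) = sign(v) * max(|v| - tau, 0):

```python
import numpy as np

def shrink(v, tau):
    """Component-wise soft-thresholding: the closed-form minimizer of
    tau*|x| + 0.5*(x - v)**2, used for the x- and y-subproblems."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Example with hypothetical variable names for the x-update:
# x_new = shrink(grad_u - p + l1 / mu1, alpha1 / mu1)
```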

[0047]

Step 5, 309 of FIG. 3B, we use $B(u)Z$ to denote the operator implementing the convolution between $u$ and $Z$. Since convolution is commutative, we can rewrite $B(u)Z$ as $C(Z)u$, where $C(Z)$ is a Toeplitz matrix corresponding to the convolution operation. Using this notation, the $z$-subproblem first solves the resulting quadratic minimization using conjugate gradient descent and then projects the solution onto the simplex $S$.
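
The simplex projection itself has a well-known sorting-based closed form (Duchi et al., 2008); a sketch for a flattened kernel:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {u : u >= 0, sum(u) = 1}, enforcing
    the non-negativity and unit-sum constraints on the blur kernel."""
    shape = v.shape
    x = v.ravel()
    s = np.sort(x)[::-1]                         # sort in decreasing order
    css = np.cumsum(s)
    j = np.arange(1, x.size + 1)
    rho = np.nonzero(s + (1.0 - css) / j > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(x - theta, 0.0).reshape(shape)
```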

[0048]

Step 6, 311 of FIG. 3B and Step 7, 313 of FIG. 3B, the blur kernel $u$ is estimated finally by solving a $u$-subproblem, which minimizes

$$\frac{\mu_1}{2}\Big\|\nabla u - p^{t+1} - x^{t+1} + \frac{\lambda_1^{t}}{\mu_1}\Big\|_2^2 + \frac{\mu_3}{2}\Big\|u - z^{t+1} + \frac{\lambda_3^{t}}{\mu_3}\Big\|_2^2.$$

[0049]

The problem can be solved efficiently by making use of the fast Fourier transform.
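
As an illustration of this FFT acceleration, the quadratic u-subproblem above has a closed form under periodic boundary conditions; `qx`, `qy`, and `s` below collect the shifted variables (p + x - lambda1/mu1 and z - lambda3/mu3), and the forward-difference kernels are assumptions of this sketch.

```python
import numpy as np

def solve_u_fft(qx, qy, s, mu1, mu3):
    """Minimize mu1/2*||grad(u) - q||^2 + mu3/2*||u - s||^2 by solving the
    normal equations (mu1*grad^T grad + mu3*I) u = mu1*grad^T q + mu3*s
    diagonally in the Fourier domain (periodic boundaries assumed)."""
    n1, n2 = s.shape
    dx = np.zeros((n1, n2)); dx[0, 0], dx[0, -1] = -1.0, 1.0  # horizontal diff
    dy = np.zeros((n1, n2)); dy[0, 0], dy[-1, 0] = -1.0, 1.0  # vertical diff
    Dx, Dy = np.fft.fft2(dx), np.fft.fft2(dy)
    num = (mu1 * (np.conj(Dx) * np.fft.fft2(qx) + np.conj(Dy) * np.fft.fft2(qy))
           + mu3 * np.fft.fft2(s))
    den = mu1 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2) + mu3  # > 0 since mu3 > 0
    return np.real(np.fft.ifft2(num / den))
```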

[0050]

Step 8, 315 of FIG. 3B, the parameters of the local Laplacian prior are determined by solving the $(a, b)$-subproblem, which can be approximated, similar to guided image filtering, and stably computed using $\mathcal{L}(Z_i)$'s local window as the input image and $\mathcal{L}(Y)$'s local window as the guide image:

$$a_{ij} = \frac{\mathrm{cov}_{\omega_j}\big(\mathcal{L}(Y), \mathcal{L}(Z_i)\big)}{\mathrm{var}_{\omega_j}\big(\mathcal{L}(Y)\big) + \epsilon}, \qquad b_{ij} = \overline{\mathcal{L}(Z_i)}_{\omega_j} - a_{ij}\,\overline{\mathcal{L}(Y)}_{\omega_j},$$

where $\epsilon$ is a small stabilizing constant and the overbar denotes the mean over window $\omega_j$.

[0051]

Step 9, 317 of FIG. 3B, once the local Laplacian prior parameters are determined, the reference Laplacian $\hat{L}_i$ for each band is the output of guided image filtering with input image $\mathcal{L}(Z_i)$ and guide image $\mathcal{L}(Y)$, and $L$ is the Toeplitz matrix of the Laplacian.
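
A compact sketch of this guided-filtering step, computing the per-window affine coefficients with box filters and averaging them over overlapping windows to form the reference Laplacian (the stabilizing constant `eps` is an assumption of the sketch):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_reference(lz, ly, r=2, eps=1e-4):
    """Guided image filtering with input image lz = L(Z_i) and guide
    ly = L(Y): estimate per-window coefficients (a, b), then average
    them over overlapping windows to form the filtered output."""
    mean = lambda img: uniform_filter(img, size=2 * r + 1)
    mu_y, mu_z = mean(ly), mean(lz)
    a = (mean(ly * lz) - mu_y * mu_z) / (mean(ly * ly) - mu_y ** 2 + eps)
    b = mu_z - a * mu_y
    return mean(a) * ly + mean(b)   # the reference Laplacian for this band
```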

[0052]

The $Z$-subproblem in each individual channel is reformulated as

$$\min_{Z_i}\ \frac{1}{2}\big\|X_i - D\,B(u)\,Z_i\big\|_2^2 + \lambda\big\|L Z_i - \hat{L}_i\big\|_2^2. \quad (12)$$

[0053]

Equation (12) has a closed-form solution:

$$Z_i = \big(B^\top D^\top D B + 2\lambda L^\top L\big)^{-1}\big(B^\top D^\top X_i + 2\lambda L^\top \hat{L}_i\big),$$

where $B = B(u)$.

[0054]

Similarly, we use the Fast Fourier Transform to accelerate the computation since B is a Toeplitz matrix.

[0055]

Step 10, 319 of FIG. 3B, for the next iteration, we repeat the update steps until the relative error between the estimated fused image of the current iteration and that of the previous iteration, $\|Z^{t+1} - Z^{t}\|_F / \|Z^{t}\|_F$, falls below $\epsilon$, wherein $\epsilon$ is a pre-defined threshold.

[0056]

Step 11, 321 of FIG. 3B, finally, upon comparing the relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration to the predetermined threshold $\epsilon$, the iterations are stopped, resulting in obtaining a PAN-sharpened image 323.

[0057]

FIG. 4A, 4B and 4C are pictures illustrating a comparison of images using a prior art method and the method of the present disclosure as compared to a ground truth image, such that FIG. 4A shows results using a prior art method, FIG. 4B shows results using the method of the present disclosure, and FIG. 4C shows a ground truth image, respectively, according to some embodiments of the present disclosure.

[0058]

In comparison, BHMIFGLR failed to generate fused MS images with consistent performance; the estimated blur kernel, shown in FIG. 4A, was trapped in a local minimum or saddle point far away from the ground-truth, shown in FIG. 4C, due to the large misalignment and the poor initial estimate of the blur kernel. Since the target MS image is aligned to the PAN, our approach treats the stacked PAN as the target MS, thereby generating a reasonable initialization of the blur kernel that aligns well to the ground-truth and resulting in a good estimate of the blur kernel, as shown in FIG. 4B.

[0059]

FIG. 4D is a schematic of a table illustrating some experimental results of a quantitative analysis of blind pan-sharpening results, according to some embodiments of the present disclosure. Our results are consistent at small/large misalignments and outperform the baseline algorithms by nearly 6 dB.

[0060]

FIG. 5A, 5B, 5C, 5D are pictures illustrating examples of images of pan-sharpening results from experiment I using different methods; FIG. 5A is an input low-resolution MS image (only showing the RGB channels), FIG. 5B is a high-resolution PAN image, but misaligned with the low-resolution MS image, FIG. 5C is a fused image, and FIG. 5D is a simulated true high-resolution MS image, respectively, according to some embodiments of the present disclosure. Our fused image result shown in FIG. 5C is visually as sharp as the PAN image of FIG. 5B, and much sharper than the low-resolution MS image in FIG. 5A, while keeping the color information shown in FIG. 5A.

[0061]

FIG. 6A, 6B, 6C, 6D are pictures illustrating examples of images of pan-sharpening results from experiment I using different methods; FIG. 6A shows a conventional prior art method using BHMIFGLR, FIG. 6B shows a conventional prior art method using HySure, FIG. 6C shows a method of the present disclosure, and FIG. 6D shows a ground truth image, respectively, according to some embodiments of the present disclosure. We observe that BHMIFGLR is prone to generating spurious textures and ignoring details. For example, in FIG. 6A the reconstructed texture on the right of the parallel white lines, along the diagonal direction, is not present in the ground truth image. Also, the left of the three parallel white lines along the diagonal direction was not reconstructed. In comparison, HySure managed to fuse images with large misalignment, but failed to preserve the details of edges and textures. In FIG. 6B we observe that the three parallel white lines in the ground-truth image are blurred, and the details on the yellow roof are not identifiable. Instead, our approach, shown in FIG. 6C, is visually much sharper and preserves more detail compared to the baseline methods.

Features

[0062]

A system for reconstructing a super resolution image from multispectral (MS) images. The system having a transceiver to accept data. A memory to store the data, the data including MS images and a panchromatic (PAN) image of a scene, each MS image is associated with the PAN image. A processing device operatively connected to the transceiver and the memory. The system comprising that the processing device is configured to initialize a fused image using the PAN image, and estimate a blur kernel between the PAN image and the MS images as an initialization function. Iteratively, fuse a MS image with an associated PAN image of the scene using a fusing algorithm by a processor. Each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation (TGV2) function to regularize a kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior (LLP) function to regularize the high-resolution similarity between the PAN image and the fused MS image to obtain an estimated fused image; compute a relative error between the estimated fused image of the current iteration and a previous estimated fused image from a previous iteration, wherein, when the relative error is less than a predetermined threshold, the iterations stop, resulting in obtaining a PAN-sharpened image. Output, via an output interface in communication with the processor, the PAN-sharpened image to a communication network or to a display device. It is contemplated that the system can include any combination of the different aspects listed below, regarding the above system. In particular, the following aspects are intended, either individually or in combination, to create one or more embodiments based on the one or more combinations of aspects listed below, for the above recited system.

[0063]

An aspect is that the PAN image used to initialize the fused image is a rigid PAN image. Another aspect is that the blur kernel is a rigid transformation blur kernel, and the initialization function is an initial blur kernel function.

[0064]

Wherein an aspect can include that the LLP regularizes a relationship between high-frequency components of the MS images and PAN image, yielding a level of a fusion performance that is greater than a level of a fusion performance using local gradient constraints. Wherein an aspect is that the LLP is a second-order gradient, such that the LLP is generalized to a second-order gradient or a higher-order gradient.

[0065]

Another aspect includes that the TGV2 function is operable when an assumption that an image is piecewise constant is not valid in reconstructing images, such that piecewise constant images are captured using the TGV2 function during the image reconstruction. Wherein an aspect includes that the TGV2 is a regularizer on the blur kernel, which is assumed to be smooth and centralized according to the TGV2.

[0066]

Another aspect is that the MS images are obtained from a MS image sensor having a color filter array and positioned at a first optical axis and the PAN images are obtained from a PAN image sensor positioned at a second optical axis that converges at an angle with the first optical axis.

[0067]

An aspect is that the MS images are low resolution images and are obtained from the MS image sensor optically coupled to a first imaging lens, and the PAN images are high resolution images and are obtained from the PAN image sensor, the MS image sensor and the PAN image sensor have substantially identical focal plane arrays of substantially identical photosensitive elements, and wherein the MS image sensor and the PAN image sensor are set in substantially a single geometric plane such that the focal plane arrays receive optical projections of substantially an identical version of the scene.

[0068]

Another aspect is the MS images are captured at a first frame rate and the PAN images are captured at a second frame rate different than or the same as the first frame rate. Wherein an aspect can include that the blur kernel combines a Point Spread Function (PSF) and a shift, such as a rigid transformation, together. It is possible another aspect is that the MS images are obtained from a MS image sensor having a color filter array and positioned at a first optical axis and the PAN images are obtained from a PAN image sensor positioned at a second optical axis that converges at an angle with the first optical axis.

[0069]

Further contemplated is that an aspect is that the data accepted by the input interface includes some data obtained from sensors, including at least one MS image sensor device and at least one PAN image sensor device.

[0070]

Wherein, an aspect can further include instructions stored thereon which, when executed by a machine, cause the machine to perform operations to create a PAN image with approximately the same resolution as that of a MS image, by down-sampling PAN image data stored in the memory, or by determining PAN image data from the MS image data, such that the received data, received via a transceiver device in communication with the non-transitory machine-readable medium and processing circuitry, includes some data obtained from sensors including at least one MS image sensor device and at least one PAN image sensor device. An aspect may be that the blur kernel jointly combines a Point Spread Function and a rigid transformation blur kernel.
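As a minimal sketch of the down-sampling operation mentioned above, the following code reduces a PAN image to approximately the MS resolution by low-pass filtering and decimation; the Gaussian anti-aliasing filter and the decimation factor are assumptions chosen for illustration only, not parameters of this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_pan(pan, factor=4, sigma=1.0):
    """Blur then decimate so the PAN resolution matches the MS band.

    The Gaussian filter width (sigma) and the integer decimation
    factor are illustrative assumptions.
    """
    blurred = gaussian_filter(np.asarray(pan, dtype=np.float64), sigma=sigma)
    return blurred[::factor, ::factor]  # keep every factor-th pixel per axis
```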

Definitions

[0071] According to aspects of the present disclosure, and based on experimentation, the following definitions have been established; they are certainly not complete definitions of each phrase or term. The provided definitions are merely offered as examples, based upon learnings from experimentation, and other interpretations, definitions, and aspects may pertain. They are provided to give at least a basic preview of each phrase or term presented. Further, the definitions below cannot be viewed as prior art, since the knowledge gained is from experimentation only.

[0072]

Blind Deconvolution: Blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single image or a set of "blurred" images in the presence of a poorly determined or unknown point spread function (PSF). (Note: in this patent, the unknown blur kernel is essentially a rigidly transformed PSF.) Regular linear and non-linear deconvolution techniques utilize a known PSF. For blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but is not necessary.
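For concreteness, below is a minimal Python sketch of one classical iterative blind deconvolution scheme, blind Richardson-Lucy, which alternates multiplicative updates of the PSF and the scene estimate. It is a generic textbook illustration (after Fish et al., 1995) under assumed iteration counts and flat initializations, and is not the kernel-estimation method of this disclosure.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(observed, n_outer=10, n_inner=5, eps=1e-12):
    """Alternate Richardson-Lucy updates of the scene and the PSF.

    Generic blind-deconvolution sketch; the flat initializations and
    iteration counts are assumptions made for illustration only.
    """
    observed = np.asarray(observed, dtype=np.float64)
    scene = np.full_like(observed, observed.mean())    # flat scene estimate
    psf = np.full_like(observed, 1.0 / observed.size)  # flat PSF estimate
    for _ in range(n_outer):
        for _ in range(n_inner):  # refine the PSF with the scene held fixed
            ratio = observed / (fftconvolve(scene, psf, mode="same") + eps)
            psf *= fftconvolve(ratio, scene[::-1, ::-1], mode="same")
            psf = np.clip(psf, 0.0, None)
            psf /= psf.sum() + eps                     # keep the PSF normalized
        for _ in range(n_inner):  # refine the scene with the PSF held fixed
            ratio = observed / (fftconvolve(scene, psf, mode="same") + eps)
            scene *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return scene, psf
```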

[0073]

Some challenges of blind deconvolution can be that both the input image and the blur kernel must live in fixed subspaces. That means the input image, represented by w, has to be written as w = Bh, where B is a random matrix of size L × K (K < L) and h is of size K × 1, whereas the blur kernel, represented by x, has to be written as x = Cm, where C is a random matrix of size L × N (N < L) and m is of size N × 1. The observed image, represented by y and given by y = w * x, can then only be reconstructed if L ≥ K + N.

[0074]

Point Spread Function (PSF): The PSF describes the response of an imaging system to a point source or point object. A more general term for the PSF is a system's impulse response, the PSF being the impulse response of a focused optical system. The PSF in many contexts can be thought of as the extended blob in an image that represents a single point object. In functional terms, it is the spatial-domain version of the optical transfer function of the imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy (as in confocal laser scanning microscopy) and fluorescence microscopy. The degree of spreading (blurring) of the point object is a measure of the quality of an imaging system. In non-coherent imaging systems such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in the image intensity and is described by linear system theory. This means that when two objects A and B are imaged simultaneously, the resulting image is equal to the sum of the independently imaged objects. In other words, the imaging of A is unaffected by the imaging of B and vice versa, owing to the non-interacting property of photons. In a space-invariant system, i.e., where the PSF is the same everywhere in the imaging space, the image of a complex object is the convolution of the true object and the PSF. However, when the detected light is coherent, image formation is linear in the complex field, and the recorded intensity image can then show cancellations or other non-linear effects.
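To make the space-invariant model concrete, the sketch below images a single point source: under a linear, space-invariant system the recorded image of a point is the PSF itself, and the image of any scene is the convolution of the scene with that PSF. The Gaussian PSF is an assumption chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A single point object; its image under a space-invariant system is the PSF.
scene = np.zeros((65, 65))
scene[32, 32] = 1.0
psf_image = gaussian_filter(scene, sigma=2.0)  # assumed Gaussian PSF

# Linearity in intensity: imaging A and B together equals the sum of
# imaging A and B separately.
a = np.zeros_like(scene); a[16, 16] = 1.0
b = np.zeros_like(scene); b[48, 48] = 1.0
assert np.allclose(gaussian_filter(a + b, 2.0),
                   gaussian_filter(a, 2.0) + gaussian_filter(b, 2.0))
```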

[0075] Deep Image Prior: Deep image prior is a type of convolutional neural network used to enhance a given image with no prior training data other than the image itself. A neural network is randomly initialized and used as a prior to solve inverse problems such as noise reduction, super-resolution, and inpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.
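A minimal sketch of the deep-image-prior loop follows, assuming PyTorch and a user-supplied convolutional generator net whose output matches the shape of the degraded image (itself assumed to be a (1, C, H, W) tensor); the fixed random code, optimizer, learning rate, and step count are illustrative assumptions:

```python
import torch

def deep_image_prior(net, degraded, n_steps=2000, lr=0.01, code_channels=32):
    """Fit a randomly initialized generator to a single degraded image.

    `net` is an assumed convolutional generator mapping a fixed random
    code tensor to an image; stopping this fit early acts as the prior.
    """
    z = torch.randn(1, code_channels, degraded.shape[-2], degraded.shape[-1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(z), degraded)  # fit the image itself
        loss.backward()
        opt.step()
    return net(z).detach()  # the enhanced estimate
```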

[0076]

Resolution tradeoffs using some Sensors: Some aspects learned from experimentation include that all sensors can have a fixed signal-to-noise ratio that is a function of the hardware design. The energy reflected by the target needs to have a signal level large enough for the target to be detected by the sensor. The signal level of the reflected energy increases if the signal is collected over a larger instantaneous field of view (IFOV) or if it is collected over a broader spectral bandwidth. However, collecting energy over a larger IFOV reduces the spatial resolution, while collecting it over a larger bandwidth reduces the spectral resolution. Thus, there is a tradeoff between the spatial and spectral resolutions of the sensor. As noted above, a high spatial resolution can accurately discern small or narrow features such as roads, automobiles, etc. A high spectral resolution allows the detection of minor spectral changes, such as those due to vegetation stress or molecular absorption.

From experimentation, it seemed that most optical remote sensing satellites carry two types of sensors: the panchromatic and the multispectral sensors. The multispectral sensor records signals in narrow bands over a wide IFOV, while the panchromatic sensor records signals over a narrower IFOV and over a broad range of the spectrum. Thus, the multispectral (MS) bands have a higher spectral resolution but a lower spatial resolution compared to the associated panchromatic (PAN) band, which has a higher spatial resolution and a lower spectral resolution.

[0077]

Alternating Direction Method of Multipliers (ADMM): ADMM is a variant of the augmented Lagrangian scheme that uses partial updates for the dual variables. This method is often applied to solve problems such as min_x f(x) + g(x), rewritten as the equivalent constrained problem min_{x,y} f(x) + g(y) subject to x = y. Though this change may seem trivial, the problem can now be attacked using methods of constrained optimization (in particular, the augmented Lagrangian method), and the objective function is separable in x and y. The dual update would require solving a proximity function in x and y at the same time; the ADMM technique allows this problem to be solved approximately by first solving for x with y fixed, and then solving for y with x fixed. Rather than iterating until convergence (like the Jacobi method), the algorithm proceeds directly to updating the dual variable and then repeating the process. This is not equivalent to the exact minimization, but surprisingly, it can still be shown that this method converges to the right answer under some assumptions. Because of this approximation, the algorithm is distinct from the pure augmented Lagrangian method.
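To illustrate the alternating x-, y-, and dual-variable updates described above, here is a textbook ADMM instance for the lasso problem (after Boyd et al.); it is a generic example with assumed parameters, not the optimization used in this disclosure.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.

    A standard textbook instance shown only to illustrate the
    alternating x / z / dual updates; parameters are assumptions.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u is the scaled dual
    AtA = A.T @ A; Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse
    for _ in range(n_iters):
        # x-update: minimize the augmented Lagrangian over x with z fixed.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding (proximal operator of the l1 norm).
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # Dual update: one partial ascent step, then repeat the process.
        u = u + x - z
    return z

# Example: admm_lasso(np.random.randn(50, 100), np.random.randn(50))
# returns a sparse coefficient estimate.
```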

[0078]

Total Variation (TV) and Total Generalized Variation (TGV): TV-based strategies can include regularization for parallel imaging, such as in iterative reconstruction of under-sampled image data sets. TV models have the benefit that they are well suited to removing random noise while preserving edges in the image. However, an assumption of TV is that the images consist of regions which are piecewise constant. What was learned is that the use of TV can often lead to staircasing artifacts and result in patchy, sketch-type images which appear unnatural. TGV, which may be equivalent to TV in terms of edge preservation and noise removal, can also be applied in imaging situations where the assumption that the image is piecewise constant is not valid. As a result, an application of TGV in imaging can be less restrictive. For example, TGV can be applied for image denoising and, during iterative image reconstruction of under-sampled image data sets, was found to possibly yield results superior to conventional TV. TGV may be capable of measuring, in some sense, image characteristics up to a certain order of differentiation, whereas TV takes only the first derivative into account. TGV is a semi-norm on a Banach space, and the associated variational problems fit well into a well-developed mathematical theory of convex optimization, especially with respect to analysis and computational realization. Moreover, each function of bounded variation admits a finite TGV value, making the notion suitable for images; this means that piecewise constant images can be captured with the TGV model, which thus extends the TV model. Finally, TGV is translation invariant as well as rotationally invariant, in conformance with the requirement that images are measured independently of the actual viewpoint. It was also learned that using TGV2 as a regularizer can lead to an absence of the staircasing effect that is often observed with TV regularization.
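For reference, the standard definitions of TV and second-order TGV from the literature (Bredies et al.) can be written as follows; the notation is an assumption for illustration and is not taken from this disclosure:

```latex
\[
\mathrm{TV}(u) = \int_\Omega \lvert \nabla u \rvert \,\mathrm{d}x ,
\qquad
\mathrm{TGV}_{\alpha}^{2}(u) = \min_{v}\;
  \alpha_1 \int_\Omega \lvert \nabla u - v \rvert \,\mathrm{d}x
  + \alpha_0 \int_\Omega \lvert \mathcal{E}(v) \rvert \,\mathrm{d}x ,
\]
```

where E(v) = (∇v + ∇vᵀ)/2 is the symmetrized gradient of the auxiliary vector field v. TV penalizes only first derivatives, while TGV2 balances first- and second-order variation, which accounts for the absence of staircasing noted above.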

[0079]

Piecewise Constant Function: A function is said to be piecewise constant if it is locally constant in connected regions separated by a possibly infinite number of lower-dimensional boundaries. The Heaviside step function, rectangle function, and square wave are examples of one-dimensional piecewise constant functions. In mathematics, a piecewise-defined function (also called a piecewise function, a hybrid function, or definition by cases) is a function defined by multiple sub-functions, each sub-function applying to a certain interval of the main function's domain, a sub-domain. Piecewise is actually a way of expressing the function, rather than a characteristic of the function itself, but with additional qualification it can describe the nature of the function. For example, a piecewise polynomial function is a function that is a polynomial on each of its sub-domains, but possibly a different one on each. The word piecewise is also used to describe any property of a piecewise-defined function that holds for each piece but does not necessarily hold for the whole domain of the function. A function is piecewise differentiable or piecewise continuously differentiable if each piece is differentiable throughout its sub-domain, even though the whole function may not be differentiable at the points between the pieces. In convex analysis, the notion of a derivative may be replaced by that of the subderivative for piecewise functions. Although the "pieces" in a piecewise definition need not be intervals, a function is not called "piecewise linear," "piecewise continuous," or "piecewise differentiable" unless the pieces are intervals.

[0080]

Actual viewpoint: A viewpoint refers to the position from which a photograph is taken; it is also the position in which the viewer is placed when looking at the finished shot. The viewpoint can dramatically change the feel of the photograph. A transformation of particular interest is viewpoint (i.e., camera panning, zooming, and translation). Cast as an image transformation, a change in camera viewpoint can be modeled as a mapping, or warp, between pixels in one or more basis views and pixels in a new image representing a synthetic view of the same scene. Learned from experimentation is that there are some factors to consider in addressing an actual viewpoint, such as: measurability, in that sufficient information to compute the transformation must be automatically or semi-automatically extracted from the basis images; correctness, in that each synthesized image should be physically correct, i.e., it should correspond to what the real scene would look like as a result of the specified scene transformation; and synthesis, in that new algorithms must be developed for image-based scene transformations, and the techniques should be robust, easy to use, and general enough to handle complex real-world objects and scenes.

[0081]

Image moment: An image moment is a particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation.

[0082]

Moment invariants: Moments are well known for their application in image analysis, since they can be used to derive invariants with respect to specific transformation classes. The term invariant moments is often abused in this context; while moment invariants are invariants that are formed from moments, the only moments that are invariants themselves are the central moments. Note that such invariants are exactly invariant only in the continuous domain. In a discrete domain, neither scaling nor rotation is well defined: a discrete image transformed in such a way is generally an approximation, and the transformation is not reversible. These invariants are therefore only approximately invariant when describing a shape in a discrete image. Translation invariants: the central moments μ_pq of any order are, by construction, invariant with respect to translations, i.e., in Euclidean geometry, a translation is a geometric transformation that moves every point of a figure or a space by the same distance in a given direction.
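The following sketch computes a central moment from the textbook formula and checks its translation invariance on a synthetic image; it is a generic illustration, not a method of this disclosure.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image, invariant to
    translation by construction (standard textbook formula).
    """
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00   # centroid coordinates
    ybar = (y * img).sum() / m00
    return ((x - xbar)**p * (y - ybar)**q * img).sum()

# Translating the image leaves the central moments unchanged:
img = np.zeros((64, 64)); img[20:30, 15:25] = 1.0
shifted = np.roll(np.roll(img, 5, axis=0), 7, axis=1)
assert np.isclose(central_moment(img, 2, 0), central_moment(shifted, 2, 0))
```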

[0083]

FIG. 7 is a block diagram illustrating the method of FIG. 1A, which can be implemented using an alternate computer or processor, according to embodiments of the present disclosure. The computer 711 includes a processor 740, computer readable memory 712, storage 758 and a user interface 749 with display 752 and keyboard 751, which are connected through bus 756. For example, the user interface 749, in communication with the processor 740 and the computer readable memory 712, acquires and stores the image data in the computer readable memory 712 upon receiving an input from a surface, keyboard 753, of the user interface 757 by a user.

[0084]

The computer 711 can include a power source 754; depending upon the application, the power source 754 may optionally be located outside of the computer 711. Linked through bus 756 can be a user input interface 757 adapted to connect to a display device 748, wherein the display device 748 can include a computer monitor, camera, television, projector, or mobile device, among others. A printer interface 759 can also be connected through bus 756 and adapted to connect to a printing device 732, wherein the printing device 732 can include a liquid inkjet printer, solid ink printer, large-scale commercial printer, thermal printer, UV printer, or dye-sublimation printer, among others. A network interface controller (NIC) 734 is adapted to connect through the bus 756 to a network 736, wherein image data or other data, among other things, can be rendered on a third-party display device, third-party imaging device, and/or third-party printing device outside of the computer 711. The computer/processor 711 can further include a GPS 701 connected to bus 756.

[0085]

Still referring to FIG. 7, the image data or other data, among other things, can be transmitted over a communication channel of the network 736, and/or stored within the storage system 758 for storage and/or further processing. Further, the time-series data or other data may be received wirelessly or hard-wired from a receiver 746 (or external receiver 738), or transmitted via a transmitter 747 (or external transmitter 739) wirelessly or hard-wired; the receiver 746 and transmitter 747 are both connected through the bus 756. The computer 711 may be connected via an input interface 708 to external sensing devices 744 and external input/output devices 741. The input interface 708 can be connected to one or more input/output devices 741, external memory 706, and external sensors 704, which may be connected to a machine-like device 702. For example, the external sensing devices 744 may include sensors gathering data before, during, and after collection of the time-series data of the machine. The computer 711 may be connected to other external computers 742. An output interface 709 may be used to output the processed data from the processor 740. It is noted that a user interface 749, in communication with the processor 740 and the non-transitory computer readable storage medium 712, acquires and stores the region data in the non-transitory computer readable storage medium 712 upon receiving an input from a surface 752 of the user interface 749 by a user.

[0086]

The description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.

[0087]

Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.

[0088]

Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function’s termination can correspond to a return of the function to the calling function or the main function.

[0089]

Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. One or more processors may perform the necessary tasks.

[0090]

The above-described embodiments of the present disclosure can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software, or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.

[0091]

Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.

[0092]

Also, the embodiments of the present disclosure may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though they are shown as sequential acts in illustrative embodiments. Further, the use of ordinal terms such as "first" and "second" in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for the use of the ordinal term).

[0093]

Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the intent of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure.