

Title:
Method for Compressing Textured Images
Document Type and Number:
WIPO Patent Application WO/2012/133469
Kind Code:
A1
Abstract:
A method compresses an image partitioned into blocks of pixels. For each block, the method converts the block to a 2D matrix. The matrix is decomposed into a column matrix and a row matrix, wherein a width of the column matrix is substantially smaller than a height of the column matrix and the height of the row matrix is substantially smaller than the width of the row matrix. The column matrix and the row matrix are compressed, and the compressed matrices are then combined to form a compressed image.

Inventors:
PORIKLI FATIH (US)
Application Number:
PCT/JP2012/058029
Publication Date:
October 04, 2012
Filing Date:
March 21, 2012
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
MITSUBISHI ELECTRIC RES LAB (US)
PORIKLI FATIH (US)
International Classes:
H03M7/30
Domestic Patent References:
WO1999007157A11999-02-11
Foreign References:
US7400772B12008-07-15
US5983251A1999-11-09
US20100008424A12010-01-14
US7574019B22009-08-11
Other References:
CHRISTOPHER JASON OGDEN ET AL: "The singular value decomposition and it's applications in image processing", 18 December 1997 (1997-12-18), XP055031940, Retrieved from the Internet [retrieved on 20120705]
Attorney, Agent or Firm:
SOGA, Michiharu et al. (8th Floor Kokusai Building, 1-1, Marunouchi 3-chome, Chiyoda-ku, Tokyo 05, JP)
Claims:
[CLAIMS]

[Claim 1]

A method for compressing an image partitioned into blocks of pixels, for each block comprising the steps of:

converting the block to a 2D matrix;

decomposing the matrix into a column matrix and a row matrix, wherein a width of the column matrix is substantially smaller than a height of the column matrix and the height of the row matrix is substantially smaller than the width of the row matrix;

compressing the column matrix and the row matrix to produce a corresponding compressed column matrix and compressed row matrix; and combining the compressed column matrix and the compressed row matrix to form a compressed image, wherein the steps are performed in a processor.

[Claim 2]

The method of claim 1, further comprising:

partitioning the 2D matrix into a low-rank term and a sparse error term by applying principal component analysis.

[Claim 3]

The method of claim 1, further comprising:

removing variability among similar regions in the matrix.

[Claim 4]

The method of claim 1, wherein the decomposing is a singular value decomposition (SVD).

[Claim 5]

The method of claim 1, wherein the decomposing is a k-means singular value decomposition (K-SVD).

[Claim 6]

The method of claim 4, wherein the image is Y, the column matrix is D, and the row matrix is X, and wherein the decomposing minimizes a residual error ||Y - DX||^2_F, where F is a Frobenius norm.

[Claim 7]

The method of claim 5, wherein the image is Y, the column matrix is D, and the row matrix is X, and wherein the decomposing minimizes a residual error min ||Y - DX||^2_F, such that ||x_i||_0 ≤ S for all i, where x_i indicates an i-th column of X, and S is a sparsity constraint, such that S < K, and where F is a Frobenius norm.

[Claim 8]

The method of claim 7, wherein the K-SVD uses a matching pursuit process.

[Claim 9]

The method of claim 7, wherein the K-SVD alternates between sparse encoding and codebook update until convergence or a termination condition is reached.

[Claim 10]

The method of claim 9, wherein the sparse encoding stage determines vectors x_i for each pixel y_i by minimizing the residual min ||y_i - Dx_i||^2, subject to ||x_i||_0 ≤ S.

[Claim 11]

The method of claim 10, wherein the codebook update updates each column k in D by defining a group w_k of pixels that use an element d_k, and further comprising:

determining an overall representation error matrix E_k by E_k = Y - Σ_{j≠k} d_j x^j; and

restricting E_k by only selecting the columns corresponding to w_k to obtain E^R_k, and applying the SVD decomposition E^R_k = UΣV* to select the updated column matrix column to be a first column of U, and a coefficient vector x^k_R to be the first column of V multiplied by Σ_{1,1}.

[Claim 12]

The method of claim 2, further comprising:

separating the low-rank term and the sparse error terms using a Lagrangian multiplier.

[Claim 13]

The method of claim 1, wherein the image is of a façade of a building with multiple floors, and further comprising:

reducing the column matrix to a single-floor column matrix by identifying a periodicity of floors in the building.

[Claim 14]

The method of claim 1, further comprising:

decompressing the compressed image by decoding the compressed column matrix and the compressed row matrix, and multiplying the decoded column matrix by the decoded row matrix.

[Claim 15]

The method of claim 1, wherein the block of pixels corresponds to the entire image.

[Claim 16]

The method of claim 1, further comprising:

clustering the rows of the column matrix using a target bit-rate and a target quality parameter.

[Claim 17]

The method of claim 1, further comprising:

finding the smallest dominant frequency for the column matrix by Fourier transform;

selecting a submatrix of the column matrix according to the smallest dominant frequency for compressing the column matrix; and

repeating the submatrix to reconstruct the column matrix in the decompression.

[Claim 18]

The method of claim 1, further comprising:

partitioning the 2D matrix into 1D vectors;

applying a 1D transform to the 1D vectors to obtain transform coefficients and ordering the transform coefficients with respect to their values; and selecting the transform coefficients with the highest values for further compression.

[Claim 19]

The method of claim 18, wherein the 1D transform is a wavelet transform.

[Claim 20]

The method of claim 18, wherein the 1D transform is a discrete cosine transform.

[Claim 21]

The method of claim 1, further comprising:

compressing the column matrix and row matrix by entropy coding.

[Claim 22]

The method of claim 1, further comprising:

applying a spatial transform to the input image in a preprocessing stage to obtain an image with vertical and horizontal patterns.

[Claim 23]

The method of claim 22, wherein the spatial transform maps pixels in polar coordinates to Cartesian coordinates for an input image with circular patterns.

[Claim 24]

The method of claim 1, wherein the image is a multi-spectral image and the 2D matrix conversion maps each channel into a separate 2D matrix.

[Claim 25]

The method of claim 1, wherein the image is a multi-spectral image and the 2D matrix conversion maps all channels into a single 2D matrix.

[Claim 26]

The method of claim 1, further comprising:

determining a diffusion matrix between the column matrix and the row matrix, wherein a width of the column matrix is smaller than the width of the column matrix without a diffusion matrix and a height of the row matrix is smaller than the height of the row matrix without a diffusion matrix; and compressing the diffusion matrix.

[Claim 27]

The method of claim 1, wherein the decomposing is a rank factorization.

[Claim 28]

The method of claim 1, wherein the decomposing is a non-negative matrix factorization.

Description:
[DESCRIPTION]

[Title of Invention]

Method for Compressing Textured Images

[Technical Field]

[0001]

This invention relates generally to textured images, and more particularly to compressing textured images by a decomposition of textured images into a multiplication of a row dominant matrix and a column dominant matrix.

[Background Art]

[0002]

Conventional image (and video) compression methods, such as JPEG, MPEG, and H.264, involve partitioning an image into square blocks, and processing each block independently using some intra-frame dependencies. Videos also use inter-frame dependencies.

[0003]

JPEG

JPEG is an image compression standard. It applies a discrete cosine transform (DCT) encoder that includes quantization and entropy encoding within image (macro) blocks, often 8x8 or 16x16 pixels in size.

[0004]

Due to a high correlation among the three color components, the first step is usually to change from the RGB color space to the YCbCr color space.

Generally, human visual perception is more sensitive to illumination, and less sensitive to saturation. Therefore, such a color transform helps to reduce the bit rate by keeping more illumination and less saturation data. After the transformation into the YCbCr color space, the spatial resolution of the Cb and Cr components is reduced by down-sampling, i.e., chroma subsampling.

[0005]

The ratios at which the down-sampling is usually done for JPEG images are 4:4:4 (no down-sampling), 4:2:2 (reduction by a factor of two in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). During the compression process, the Y, Cb and Cr channels are processed separately, and in a very similar manner.

[0006]

After the color transformation, the image is partitioned into non-overlapping blocks. The color values of pixels are shifted from unsigned integers to signed integers. Then, a 2D DCT is applied.

[0007]

For an 8-bit image, the intensity of each pixel is in the range [0, 255]. The mid-point of the range is subtracted from each entry to produce a data range that is centered around zero, so that the modified range is [-128, +127]. This step reduces the dynamic range requirements in the DCT processing stage that follows. This step is equivalent to subtracting 1024 from the DC coefficient after performing the transform, which is faster on some architectures because it involves performing only one subtraction rather than 64.

[0008]

Each 8x8 block of the image is effectively a 64-point discrete signal, which is a function of the two spatial dimensions x and y. The DCT takes such a signal and decomposes it into 64 unique, two-dimensional spatial frequencies, which comprise the input signal spectrum. The output of the DCT is the set of 64 basis-signal amplitudes, i.e., the DCT coefficients, whose values are the relative amount of the 2D spatial frequencies contained in the 64-point discrete signal.
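The 8x8 DCT described above can be sketched in a few lines of NumPy. This is an illustrative orthonormal DCT-II, not the optimized integer transforms real JPEG codecs use:

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2D DCT-II of an 8x8 block: one DC and 63 AC coefficients."""
    N = 8
    k = np.arange(N)
    # Basis matrix C[k, x] = a_k * cos((2x + 1) * k * pi / (2N)).
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)          # DC row gets the extra 1/sqrt(2) scaling
    return C @ block @ C.T        # separable: transform rows, then columns
```

Because C is orthonormal, `C.T @ coeffs @ C` recovers the block exactly; lossiness enters only at the quantization stage that follows.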

[0009]

The DCT coefficients are partitioned into a DC coefficient and AC coefficients. The DC coefficient corresponds to the coefficient with zero frequency in both spatial dimensions, and the AC coefficients are the remaining coefficients with non-zero frequencies. For most blocks, the DCT coefficients usually concentrate in the lower spatial frequencies. In other words, most of the spatial frequencies have near-zero amplitude, which do not need to be encoded.

[0010]

To achieve compression, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, which is specified by the application. The purpose of quantization is to discard information which is not visually significant. Because quantization is a many-to-one mapping, it is fundamentally a lossy transform. Moreover, it is the principal source of compression in a DCT-based encoder. Quantization is defined as division of each DCT coefficient by its corresponding quantization step size, followed by rounding to the nearest integer. Each step size of quantization is ideally selected as the perceptual threshold to compress the image as much as possible without generating any visible artifacts. It is also a function of the image and display characteristics.
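The quantization rule just described (divide by the step size, round to nearest integer) is a one-liner. The uniform table below is a hypothetical placeholder, not one of the example tables from the standard:

```python
import numpy as np

# Hypothetical uniform quantization table; real JPEG tables vary per frequency.
Q = np.full((8, 8), 16.0)

def quantize(coeffs, Q):
    # Many-to-one mapping: this is where the loss happens.
    return np.round(coeffs / Q).astype(int)

def dequantize(q, Q):
    # The decoder can only recover multiples of the step size.
    return q * Q
```

For example, a coefficient of 804 with step size 16 becomes round(50.25) = 50, which reconstructs to 800; the residual 4 is the quantization loss.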

[0011]

There are some processing steps applied to the quantized coefficients. The DC coefficient is treated separately from the 63 AC coefficients.

Because there is usually a strong correlation between the DC coefficients of adjacent blocks, the quantized DC coefficient is encoded as the difference from the DC term of the previous block in the encoding order, called differential pulse code modulation (DPCM). DPCM can usually achieve further compression due to the smaller range of the coefficient values. The remaining AC coefficients are ordered into a zigzag sequence, which helps to facilitate entropy coding by placing low-frequency coefficients before high-frequency coefficients. Then, the outputs of DPCM and zigzag scanning are encoded by entropy coding methods, such as Huffman coding, and arithmetic coding.
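The zigzag ordering and DC prediction can be sketched as follows (a minimal illustration; entropy coding of the resulting symbols is omitted):

```python
def zigzag_indices(n=8):
    """JPEG zigzag order: traverse anti-diagonals, alternating direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  -rc[1] if (rc[0] + rc[1]) % 2 else rc[1]))

def dpcm(dc_values):
    """Encode each DC coefficient as a difference from the previous block's DC."""
    dc = list(dc_values)
    return [dc[0]] + [b - a for a, b in zip(dc, dc[1:])]
```

For DC values 50, 52, 51 across three blocks, DPCM emits 50, 2, -1; a running sum at the decoder recovers the original sequence.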

[0012]

Entropy coding can be considered as a two-step process. The first step converts the zigzag sequence of quantized coefficients into an intermediate sequence of symbols. The second step converts the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols is dependent on both the DCT-based mode of operation and the entropy coding method.

[0013]

In general, JPEG is not suitable for graphs, charts and illustrations, especially at low resolutions. A very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (for the human eye) than the precision of contours (based on luminance).

[0014]

Conventional image and video compression schemes mainly aim at optimizing pixel-wise fidelity such as peak signal-to-noise ratio (PSNR) for a given bit-rate. It has been noticed that PSNR is not always a good metric for the visual quality of reconstructed images, while the latter is regarded as the ultimate objective of compression schemes. There are several attempts to design compression methods towards visual quality, in which some image analysis tools such as segmentation and texture modeling are utilized to remove the perceptual redundancy. The basic idea is to remove some image regions by the encoder, and to restore them by the decoder by inpainting, or synthesis methods.

[0015]

The limitation of block size drastically hinders the performance of existing compression algorithms, especially when the underlying texture exhibits other spatial structures.

[Summary of the Invention]

[0016]

The embodiments of the invention provide methods for compressing and decompressing images. For images that exhibit vertical and horizontal texture patterns, e.g., building façade images, textile designs, etc., the methods produce a representation that includes a column matrix D, and a row matrix X. This representation achieves significantly high and scalable compression ratios.

[0017]

Other types of images that can be converted into such matrices can also be efficiently compressed using our method, e.g., a circular shaped pattern of a tire image can be unwrapped onto a rectangular area; flower images and iris images are further examples.

[0018]

We convert a block of m×n pixels of a gray-level image into a 2D m×n matrix. We determine an m×k column matrix and a k×n row matrix decomposition of the original 2D matrix, where the width of the column matrix is substantially smaller than its height (k « m), and the height of the row matrix is substantially smaller than its width (k « n). For example, the column matrix can be from 640x2 to 640x16, and the corresponding row matrix from 2x480 to 16x480 pixels.
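For the example sizes above, the raw storage saving is easy to check. This is a back-of-envelope count of matrix entries; actual bit-rates depend on the subsequent entropy coding:

```python
m, n = 640, 480                      # image block size from the example
for k in (2, 16):
    original = m * n                 # entries in the original matrix
    factored = m * k + k * n         # entries in D (m-by-k) plus X (k-by-n)
    print(f"k={k}: {original / factored:.1f}x fewer entries")
```

With k = 2 the factored form stores roughly 137 times fewer entries; even at k = 16 the reduction is still about 17x.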

[0019]

In one embodiment, the column matrix is considered as a dictionary, and the row matrix as a sparse reconstruction matrix.

[0020]

We describe alternative methods to represent the 2D matrix, in addition to the multiplication of a column matrix and a row matrix. One method determines a diffusion matrix (DM) to further decrease the size of the column matrix. The diffusion matrix can be selected such that it changes the sizes of the column and row matrices to emphasize the quality of either or both of the vertical and horizontal textures in the image.

[0021]

Another method further compresses the column and row matrices using the periodicity information and repetition, and by applying a Fourier transform, and clustering.

[0022]

As an optional preprocessing step, we separate the matrix into a low-rank term and a sparse error term by applying principal component analysis (PCA), before decomposing the matrix. We solve an optimization problem that removes variability among similar regions in the matrix.

[0023]

To obtain the decomposition and learn the column matrix, we apply a singular value decomposition (SVD), or its k-means version (K-SVD). The SVD is optimal for non-sparse representations, and K-SVD is sub-optimal but better suited to sparse coefficients. In other embodiments, we apply other matrix decompositions, for instance, rank factorization and non-negative matrix factorization, to determine the column matrix and row matrix.
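A rank-k decomposition into D and X via the truncated SVD can be sketched as follows; folding the singular values into the row matrix is one common convention, not the only one:

```python
import numpy as np

def svd_factorize(Y, k):
    """Decompose Y (m-by-n) into a tall column matrix D (m-by-k) and a wide
    row matrix X (k-by-n) with Y ~= D @ X, keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    D = U[:, :k]                 # left singular vectors: the "dictionary"
    X = s[:k, None] * Vt[:k]     # singular values folded into the row matrix
    return D, X
```

When k equals the rank of Y, the product D @ X reproduces Y exactly; smaller k trades fidelity for compression.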

[0024]

We use a quantized version of a matching pursuit process to determine the coefficients of the row matrix. Matching pursuit involves finding optimal projections of multi-dimensional data onto an over-complete codebook. We quantize and carry forward the error during the projection step. This method combines error due to quantization and approximation in a single step.
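The quantize-and-carry-forward idea can be sketched as a small variant of plain matching pursuit. The step size and the unit-norm codebook are illustrative assumptions, not values from the specification:

```python
import numpy as np

def quantized_mp(y, D, S, step=0.5):
    """Matching pursuit that quantizes each coefficient before updating the
    residual, so the quantization error is carried forward into later picks.
    Assumes the columns of D are unit norm; `step` is a hypothetical step size."""
    r = y.astype(float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(S):
        proj = D.T @ r                       # correlations with codebook columns
        j = int(np.argmax(np.abs(proj)))     # best-matching element
        q = step * np.round(proj[j] / step)  # quantize the coefficient
        if q == 0:
            break
        x[j] += q
        r = r - q * D[:, j]                  # residual keeps the quantization error
    return x, r
```

Because the residual is updated with the quantized coefficient, subsequent iterations can partially compensate for the rounding, which is the single-step combination of approximation and quantization error described above.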

[0025]

In other embodiments, we use an orthogonal matching pursuit, or a compressive sensing matching pursuit. Any matching pursuit, including orthogonal matching pursuit and compressive sampling matching pursuit, can be used for this purpose.

[0026]

For further compression to obtain extreme gains in the bit-rate, we apply a second decomposition to the column matrix. We estimate the periodicity of the column matrix by applying the Fourier transform, select only one period of the column matrix, and disregard the remaining coefficients. In the decoder, we construct the entire matrix by repeating the selected period.
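Estimating the period from the Fourier transform can be sketched as finding the peak of the magnitude spectrum (a minimal 1D illustration; the DC bin is ignored so the mean does not dominate):

```python
import numpy as np

def dominant_period(signal):
    """Estimate the smallest dominant period of a 1D signal from the peak of
    its FFT magnitude, ignoring the DC bin."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    f = int(np.argmax(spec[1:])) + 1   # dominant nonzero frequency bin
    return len(signal) // f            # period in samples
```

The decoder then repeats one period of the column matrix to reconstruct it, as in claim 17.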

[0027]

In another embodiment, we cluster the column matrix for additional compression. The rows of the column matrix are clustered using k-means, spectral k-means, or agglomerative clustering. Cluster centers and labels are transmitted. The choice of the number of clusters is guided by the target bit-rate and the minimum required PSNR. More clusters require a higher bit-rate. Fewer clusters lower the PSNR.
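The clustering step can be sketched with plain k-means over the rows of the column matrix (an illustrative implementation; any of the clustering methods named above could be substituted):

```python
import numpy as np

def cluster_rows(D, n_clusters, iters=20, seed=0):
    """Plain k-means over the rows of the column matrix D. The transmitted
    representation is the pair (centers, labels) instead of D itself."""
    rng = np.random.default_rng(seed)
    centers = D[rng.choice(len(D), n_clusters, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(D[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)          # assign each row to nearest center
        for c in range(n_clusters):
            if np.any(labels == c):           # recompute each non-empty center
                centers[c] = D[labels == c].mean(axis=0)
    return centers, labels
```

Transmitting c centers of k entries plus one label per row costs c*k + m values, versus m*k for the full column matrix.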

[Brief Description of the Drawings]

[0028]

Figs. 1, 2 and 4 are schematics of embodiments of methods for compressing an image;

Fig. 3 is a schematic of a method for decompressing a compressed image according to embodiments of the invention; and

Fig. 5 is an illustration of the spatial transform of an image depicting a circular pattern.

[Description of Embodiments]

[0029]

As shown in Figs. 1-5, the embodiments of the invention provide a method for compressing and decompressing images. As shown, the steps of the methods can be performed in a processor connected to a memory and input/output interfaces as known in the art.

[0030]

An image 101 of a scene, e.g., a building façade 103, is acquired by a camera 102.

[0031]

We convert 110 each block of m×n pixels of the image Y 101 into a 2D m×n matrix A 111.

[0032]

In an optional preprocessing step 120, we partition the matrix into a low-rank term and a sparse error term by applying principal component analysis. We can also solve an optimization problem to remove variability among similar regions in the matrix.

[0033]

We decompose 130 the matrix to obtain a column matrix D 131, and a coefficients matrix X 132, such that Y = DX, where D is m×k, X is k×n, and k « min(m, n). In other words, the final representations D and X are significantly smaller than the image Y. Processes that can be used to perform the factorization are a singular value decomposition (SVD), and a k-means singular value decomposition (K-SVD).

[0034]

We apply column compression 141 to the column matrix D 131 and coefficient compression 142 to the row matrix X 132 to produce corresponding compressed matrices 143-144, which when combined 150 form the compressed image 109.

[0035]

SVD Based Factorization

In the case of the SVD 210, as shown in Fig. 2, the optimization problem is

min ||Y - DX||^2_F,

[0036]

where F is a Frobenius norm of a matrix, which is defined as a square-root of a sum of squared values of all coefficients, or equivalently the square-root of a trace of the matrix left multiplied with a conjugate transpose.

[0037]

In other words, we want to approximate the image Y as accurately as possible in terms of a linear transformation of X and D by minimizing 230 a residual error.

[0038]

The SVD represents an expansion of the original data in a coordinate system where a covariance matrix is diagonal. The SVD of the m×n matrix A is a factorization of the form

A = UΣV*,

where U is an m×m unitary matrix, Σ is an m×n diagonal matrix with nonnegative real numbers on the diagonal, and V*, the conjugate transpose of V, is an n×n unitary matrix. The diagonal entries Σ_{i,i} of Σ are the singular values of A. The m columns of U and the n columns of V are the left singular vectors and the right singular vectors of A, respectively. A unitary matrix U satisfies the condition

U*U = UU* = I,

where I is an m×m identity matrix. The SVD determines the eigenvalues and eigenvectors of AA^T and A^T A.

[0039]

The SVD and the eigendecomposition are closely related. The left singular vectors of A are eigenvectors of AA*. The right singular vectors of A are eigenvectors of A*A. The nonzero singular values of Σ are the square roots of the nonzero eigenvalues of AA* or A*A. The SVD is used to determine the pseudoinverse, for least squares fitting of data, for matrix approximation, and for determining the rank, range, and null space of a matrix.

[0040]

The singular values are the square roots of the eigenvalues of AA*. The values of Σ are usually listed in decreasing order. The singular values are always real numbers. If the matrix A is a real matrix, then U and V are also real.

[0041]

The variance of the i-th principal component is the i-th eigenvalue.

Therefore, the total variation exhibited by the data matrix A is equal to the sum of all eigenvalues. Eigenvalues are often normalized, such that the sum of all eigenvalues is 1. A normalized eigenvalue indicates the percentage of the total variance explained by its corresponding structure.

[0042]

The largest eigenvectors point in directions where the data jointly exhibits large variation. The remaining eigenvectors point in directions where the data jointly exhibits less variation. For this reason, it is often possible to capture most of the variation by considering only the first few eigenvectors. The remaining eigenvectors, along with their corresponding principal components, are truncated. The ability of the SVD to eliminate a large proportion of the data is a primary reason for its use in compression.

[0043]

K-SVD Based Factorization

In the case of the K-SVD, the optimization problem is of the form

min ||Y - DX||^2_F

such that ||x_i||_0 ≤ S for all i, where x_i indicates the i-th column of X, and S is a sparsity constraint, S < K, and where F is a Frobenius norm of a matrix, which is defined as a square-root of a sum of squared values of all coefficients, or equivalently the square-root of a trace of the matrix left multiplied with a conjugate transpose.

[0044]

K-SVD constructs 220 a sparse representation of the image in the form of D and X. Using an over-complete codebook that contains prototype elements, image regions are described by sparse linear combinations of these elements. Designing codebooks to better fit the above model can be done by either selecting 240 one from a predetermined set of linear transforms, or adapting the codebook to a set of training signals.

[0045]

Given a set of training signals, the K-SVD determines the codebook that leads to the best representation for each member in the set, under strict sparsity constraints. The K-SVD generalizes the k-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the pixels based on the current codebook and a process of updating 225 the codebook elements to better fit the data. The update of the codebook vectors is combined with an update of the sparse representations to accelerate convergence. The K-SVD can work with any pursuit method. In one embodiment, orthogonal matching pursuit (OMP) is used for the sparse encoding step.

[0046]

Let us first consider the sparse encoding stage, where D is fixed, and consider the above optimization problem as a search for sparse representations with coefficients summarized in the matrix X. The residual penalty term can be rewritten as

||Y - DX||^2_F = Σ_i ||y_i - Dx_i||^2.

[0047]

Therefore, the optimization problem described above decouples into distinct problems of the form

min ||y_i - Dx_i||^2, subject to ||x_i||_0 ≤ S, for all i.

[0048]

If S is small enough, then the solution is a good approximation to the ideal one that is numerically infeasible to determine.

[0049]

Updating the codebook together with the nonzero coefficients in X is done iteratively, as shown. Assume that both D and X are fixed, and we put in question only one column d_k in the codebook D. The coefficients that correspond to the codebook element d_k, the k-th row in X, are denoted as x^k. Note that this is not the vector x_k, which is the k-th column in X. The residual can be rewritten as

||Y - DX||^2_F = ||(Y - Σ_{j≠k} d_j x^j) - d_k x^k||^2_F = ||E_k - d_k x^k||^2_F.

[0050]

Above, we have decomposed the multiplication DX into a sum of K rank-1 matrices. Among those, K-1 terms are assumed fixed, and the one that is the k-th remains in question. The matrix E_k is the error for all the pixels when the k-th element is removed.

[0051]

Use of the SVD to determine an alternative d_k and x^k directly would be incorrect, because the new vector x^k is very likely to be filled, since such an update of x^k does not enforce the sparsity constraint. We define w_k as the group of indices pointing to pixels {y_i} that use the element d_k, i.e., those where x^k(i) is nonzero:

w_k = {i | x^k(i) ≠ 0}.

[0052]

We define Ω_k as a matrix with ones on the entries indexed by w_k, and zeros elsewhere. The multiplication x^k_R = x^k Ω_k compresses the row vector by discarding the zero entries, resulting in the row vector x^k_R. Similarly, the multiplication Y^R_k = Y Ω_k constructs a matrix that includes the subset of the pixels that are currently using the d_k element, and E^R_k = E_k Ω_k is the selection 240 of the error columns that correspond to pixels that use the element d_k.

[0053]

We can minimize 230 the residual with respect to d_k and x^k_R to force the solution to have the same support as the original x^k. This is equivalent to the minimization of

||E^R_k - d_k x^k_R||^2_F,

which can be done directly via the SVD. The SVD decomposes E^R_k to UΣV*. We define the solution for d_k as the first column of U, and the coefficient vector x^k_R as the first column of V multiplied by Σ_{1,1}. In this solution, the columns of D remain normalized, and the support of all representations either stays the same, or gets smaller by possible nulling of terms.

[0054]

To summarize, K-SVD alternates between sparse encoding and codebook update. After initializing the codebook matrix D with normalized columns, we repeat the encoding, and update phases until convergence or a termination condition is reached.

[0055]

In the sparse encoding stage, we use the matching pursuit to determine the representation vectors x_i for each pixel y_i by minimizing a residual error subject to ||x_i||_0 ≤ S.

[0056]

In the codebook update stage, we update each column k in D by defining the group w_k of pixels that use the element d_k, determining the overall representation error matrix E_k by

E_k = Y - Σ_{j≠k} d_j x^j,

restricting E_k by only selecting the columns corresponding to w_k to obtain E^R_k, and finally applying the SVD decomposition E^R_k = UΣV* to select the updated codebook column to be the first column of U, and the coefficient vector x^k_R to be the first column of V multiplied by Σ_{1,1}.
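The alternation of paragraphs [0054]-[0056] can be sketched end-to-end. This is a compact illustration assuming random unit-norm initialization and a fixed iteration count; a production implementation would add convergence tests and handling of unused atoms:

```python
import numpy as np

def ksvd(Y, K, S, n_iter=10, seed=0):
    """Minimal K-SVD sketch: alternate sparse coding (orthogonal matching
    pursuit, at most S atoms per signal) with per-atom rank-1 SVD updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], K))
    D /= np.linalg.norm(D, axis=0)                 # normalized columns
    for _ in range(n_iter):
        # Sparse encoding stage.
        X = np.zeros((K, Y.shape[1]))
        for i in range(Y.shape[1]):
            r, idx = Y[:, i].copy(), []
            for _ in range(S):
                idx.append(int(np.argmax(np.abs(D.T @ r))))
                coef, *_ = np.linalg.lstsq(D[:, idx], Y[:, i], rcond=None)
                r = Y[:, i] - D[:, idx] @ coef
            X[idx, i] = coef
        # Codebook update stage: revise each atom d_k on its support w_k.
        for k in range(K):
            w = np.nonzero(X[k])[0]
            if len(w) == 0:
                continue                           # atom unused this round
            X[k, w] = 0
            E = Y[:, w] - D @ X[:, w]              # restricted error without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                      # updated, unit-norm atom
            X[k, w] = s[0] * Vt[0]                 # updated coefficients on support
    return D, X
```

Restricting the error matrix to the support w_k before the SVD is what keeps the sparsity pattern of X intact, as argued in [0051]-[0053].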

[0057]

For very small values of K, e.g., K = 2 or 4, we allow S = K. However, we still use OMP due to the fact that some columns in the image may be represented well with just one of the codebook elements. In such a case, the OMP gives a sparser solution compared to a straightforward pseudo-inverse. For larger values of K, e.g., 16 or 21, we vary 1 < S < K.

[0058]

Rank Factorization (RF)

A rank factorization of the m×n matrix A of rank k is a product A = UV, where U is an m×k matrix and V is a k×n matrix.

[0059]

To construct the rank factorization, we can compute Z, the row canonical form of A, wherein all nonzero rows of Z (rows with at least one nonzero element) are above any rows of all zeroes, and the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it. Then U is obtained by removing from A all non-pivot columns, and V by eliminating all zero rows of Z.

[0060]

In one embodiment we apply a low-rank factorization instead of a full-rank factorization.
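The construction from the row canonical form can be sketched directly (Gauss-Jordan elimination; a numerical tolerance stands in for exact zero tests):

```python
import numpy as np

def rank_factorization(A, tol=1e-10):
    """Rank factorization A = U @ V: U keeps the pivot columns of A,
    V keeps the nonzero rows of the row canonical form Z of A."""
    Z = A.astype(float).copy()
    pivots, row = [], 0
    for col in range(Z.shape[1]):
        if row == Z.shape[0]:
            break
        p = row + int(np.argmax(np.abs(Z[row:, col])))
        if abs(Z[p, col]) < tol:
            continue                           # no pivot in this column
        Z[[row, p]] = Z[[p, row]]              # move pivot row up
        Z[row] /= Z[row, col]                  # scale pivot to 1
        others = [r for r in range(Z.shape[0]) if r != row]
        Z[others] -= np.outer(Z[others, col], Z[row])   # clear the column
        pivots.append(col)
        row += 1
    U = A[:, pivots]                           # pivot columns of A
    V = Z[:len(pivots)]                        # nonzero rows of Z
    return U, V
```

For a rank-2 matrix this yields a 3x2 by 2x3 product that reproduces the original exactly.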

[0061]

Non-Negative Matrix Factorization (NMF)

A non-negative matrix factorization of A is a product A = UV, where U is an m×k non-negative matrix and V is a k×n non-negative matrix, given an m×n matrix A. A non-negative matrix is a matrix in which all elements are equal to or greater than zero.

[0062]

In the case of the NMF, the optimization problem is

min ||Y - DX||^2_F,

where D and X are non-negative matrices.
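One standard way to compute such a factorization is multiplicative updates. This is a sketch; the description does not prescribe a particular NMF algorithm, and `eps` is a small constant assumed here to avoid division by zero:

```python
import numpy as np

def nmf(Y, k, n_iter=200, seed=0):
    """Non-negative factorization Y ~= D @ X via Lee-Seung multiplicative
    updates; non-negativity is preserved because every update multiplies
    by a ratio of non-negative quantities."""
    rng = np.random.default_rng(seed)
    eps = 1e-9
    D = rng.random((Y.shape[0], k)) + eps
    X = rng.random((k, Y.shape[1])) + eps
    for _ in range(n_iter):
        X *= (D.T @ Y) / (D.T @ D @ X + eps)   # update coefficients
        D *= (Y @ X.T) / (D @ X @ X.T + eps)   # update basis
    return D, X
```

Unlike the SVD, the factors are not unique, but both stay element-wise non-negative throughout the iterations.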

[0063]

Pre-Processing

In the optional pre-processing step 120, we separate the image into a low-rank term L and a sparse error term R following a robust principal component analysis (RPCA), see U.S. Patent 7,574,019, and references described therein.

[0064]

The sparse error term is then either compressed separately or discarded depending on whether it has essential or relevant information for the image at hand. If the original and low-rank images appear sufficiently similar to recognize the image in question, then the sparse error term can be discarded.

[0065]

For building images, consider a building that has curved windows and some artistic designs, which are not well suited for block-based or row/column-based compression. After separating the image of this building into a low-rank term and a sparse error term, we can compress the low-rank part using our column matrix approach. In the case of this image, the sparse error term holds important information regarding the shape of the windows, as well as the artistic designs, and hence should be encoded separately.

[0066]

The variations such as open windows, curtains, or blinds contribute to minor intensity variations across the structure of the building. Open windows and half-open blinds are removed in the low-rank term so that all the windows appear uniform. These variations are encompassed by the sparse error term, and are not important for the recognition of the building. Thus, in this case, the sparse error term can be discarded altogether.

[0067]

We use the inexact augmented Lagrangian multiplier (ALM) method for separating the low-rank and sparse error terms.

[0068]

RPCA requires recovering a low-rank matrix with unknown fractions of its entries being arbitrarily corrupted. It is possible to solve RPCA via convex optimization that minimizes a combination of the nuclear norm and the l1 norm. For the convex optimization, we apply a modified inexact version of the augmented Lagrange multipliers method, which has a Q-linear convergence speed and requires significantly fewer partial SVDs than the exact augmented Lagrange multipliers method.

[0069]

For other types of images, we first apply a transform to the input image to obtain a texture that is dominant with vertical and horizontal patterns.

[0070]

For images that depict circular patterns, we apply a spatial transform that maps the pixels in polar coordinates onto Cartesian coordinates. Images that contain straight lines oriented diagonally are rotated such that the lines become either vertical or horizontal to the image axes. For instance, we convert the disc-shaped iris image into a rectangular image by assigning pixels located on concentric circular bands around the center of the pupil onto consecutive columns in a rectangular image, as illustrated in Fig. 5.
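A nearest-neighbor sketch of the unwrapping follows; the helper name and sampling choices are assumptions, and a real implementation would interpolate and pick the radial/angular resolution from the image size:

```python
import numpy as np

def unwrap_polar(img, center, radii, n_angles):
    """Map pixels on concentric circles around `center` onto the columns of a
    rectangular image (nearest-neighbor sampling), as in Fig. 5."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    out = np.empty((n_angles, len(radii)), dtype=img.dtype)
    for j, r in enumerate(radii):          # each radius -> one output column
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        out[:, j] = img[ys, xs]
    return out
```

Circular bands in the input become vertical columns in the output, so the row/column decomposition above applies directly.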

[0071]

Row Compression

For a certain class of images, for instance building facade images, the multiple floors of a building are generally similar in appearance, and there also exists some similarity among the pixel rows within a floor. We can exploit this similarity by considering the m rows of the column matrix as data points and compressing the rows further. This can be done in one of two ways:

a. clustering the rows of the column matrix; or

b. reducing the building column matrix to a "single-floor" column matrix by identifying the periodicity of floors in the building.

[0072]

In both cases, the column matrix is further compressed.

[0073]

Clustering

This method exploits the similarities across different floors, as well as within a single floor. The rows of the m×k column matrix are considered as points in R^k and clustered using any suitable clustering method. The column matrix can then be represented by the cluster centers along with the cluster indices. As a pre-processing step, the k-dimensional points are scaled to have unit variance, but this does not affect the reconstructed column matrix due to the re-normalization of the columns.
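A minimal sketch of this clustering-based row compression follows, using a plain Lloyd's k-means with farthest-point initialization. The initialization, iteration count, and function names are illustrative choices; the disclosure allows any suitable clustering method.

```python
import numpy as np

def compress_rows_by_clustering(C, n_clusters, n_iter=50):
    """Represent the m x k column matrix C by cluster centers plus
    a per-row cluster index (a simple Lloyd's k-means sketch)."""
    C = np.asarray(C, dtype=float)
    # Farthest-point initialization keeps the starting centers distinct.
    centers = [C[0]]
    for _ in range(n_clusters - 1):
        d = np.min([np.linalg.norm(C - c, axis=1) for c in centers], axis=0)
        centers.append(C[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each row to its nearest center, then recompute centers.
        d = np.linalg.norm(C[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = C[labels == j].mean(axis=0)
    return centers, labels

def reconstruct_rows(centers, labels):
    # Each row is replaced by the center of its cluster.
    return centers[labels]
```

Storing n_clusters centers of length k plus m small indices is far cheaper than storing all m rows when the floors repeat.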

[0074]

Periodicity

Another approach to capture the similarity among the multiple floors of a building is to identify the periodicity of each column matrix element taken as a signal by itself. Consider the column matrix elements

d_i ∈ R^m

for i = 1, ..., k; each of these signals is periodic with the fundamental period S corresponding to the number of pixel rows per floor of the building. Then, we can represent the column matrix elements by S-dimensional vectors, forming a single-floor column matrix D_floor, which is a submatrix of the column matrix D. If the image of the building covers exactly eight floors, that is

S = m/8,

then it is possible to represent the column matrix elements by m/8-dimensional vectors, reducing storage by 87.5%.

[0075]

To reconstruct the image, we stack this smaller column matrix (the submatrix D_floor) ⌈m/S⌉ times, and truncate at m rows. Thus, even if the image does not cover a whole number of floors, this method can still be applied. Note that the period estimation process requires the presence of at least two completely visible floors in the image. The period estimation is applied to all the column matrix elements simultaneously to return a single value for S.
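The periodicity-based compression and the floor-stacking reconstruction can be sketched as follows, assuming the period S has already been estimated (the helper names are hypothetical):

```python
import numpy as np

def compress_by_period(D, S):
    """Keep only the first S rows (one building floor) of the column matrix."""
    return D[:S]

def reconstruct_by_period(D_floor, m):
    """Stack the single-floor submatrix until m rows are covered,
    then truncate at m rows."""
    reps = -(-m // len(D_floor))  # ceil(m / S)
    return np.tile(D_floor, (reps, 1))[:m]
```

Because the stack is truncated at m rows, the reconstruction also works when the image covers a fractional number of floors.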

[0076]

While the reconstructed image from this floor-stacking procedure can appear artificially generated, it is nonetheless sufficient to represent the true structure of the building, ignoring variations across floors, such as open windows, blinds, and curtains, which may be insignificant for a particular application domain.

[0077]

In another embodiment, we extend the DCT implementation along the columns of the image. A one-dimensional DCT is applied to the columns of the image, and quantization and entropy coding are then performed on the resulting coefficients. We refer to this method as a column-DCT.
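The column-DCT idea can be sketched as below, using an explicitly constructed orthonormal DCT-II matrix and uniform scalar quantization. The quantization step and function names are illustrative assumptions; entropy coding of the quantized coefficients is a separate stage.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1-D DCT-II basis (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * x + 1) / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def column_dct_encode(img, q_step=8.0):
    # 1-D DCT applied down each column, then uniform quantization.
    M = dct_matrix(img.shape[0])
    return np.round((M @ img.astype(float)) / q_step).astype(np.int32)

def column_dct_decode(q, q_step=8.0):
    # Dequantize and invert the orthonormal transform.
    M = dct_matrix(q.shape[0])
    return M.T @ (q * q_step)
```

Since the transform is orthonormal, the per-pixel RMS reconstruction error is bounded by half the quantization step.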

[0078]

The method yields much higher gains in terms of PSNR, as well as bit-rate. Even at low bit-rates, the PSNR gain achieved by our method is notable. For PSNR values corresponding to good visual quality, the bit-rates obtained by our method are at least 3-4 times smaller than those of JPEG for most images. Further, the images obtained from the row-column matrix approach are much crisper along the edges, without smoothing or blocking effects.

[0079]

As shown in Fig. 4, we can also construct 410 a vector space, and apply 420 a 1D wavelet (or DCT) transform to the vectors that correspond to matrix rows or columns. We partition the matrix into 1D vectors to obtain a vector space, and then cluster the vectors in this space with a given cluster number. We apply the 1D wavelet (or DCT) transform to obtain the compressed image.

[0080]

As the last step of the row and column matrix compression, we quantize and entropy encode both matrices by Huffman coding for transmission.
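A minimal Huffman table construction for the quantized coefficients might look as follows. This is a generic textbook Huffman coder over symbol frequencies, not the specific entropy coder of the embodiment; ties are broken by insertion order.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman code table mapping each symbol in
    `symbols` to a bit string."""
    freq = Counter(symbols)
    if len(freq) == 1:
        # Degenerate case: a single symbol still needs one bit.
        return {next(iter(freq)): '0'}
    # Heap entries: (frequency, tie-breaker, partial code table).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Prepend '0' to one subtree's codes and '1' to the other's.
        merged = {s: '0' + c for s, c in t1.items()}
        merged.update({s: '1' + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]
```

Frequent quantized values (typically zeros after quantization) receive the shortest codes, which is what makes this stage effective after the DCT.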

[0081]

For multi-spectral and color images, we either:

construct a larger 2D matrix when we convert the image into matrix form; or

treat each color channel separately.

[0082]

In the case of images of building facades, due to the highly aligned nature of the images, as well as the strong horizontal and vertical structures, one approach treats the image columns and rows as the building blocks. Due to the repetitive nature of facade images, in terms of the multiple floors of the building and the adjacent pillars/windows along the same floor, this method is well-suited for the compression of this type of image.

[0083]

Instead of representing each square block in a transform domain, e.g., the DCT domain, where it would be sparse, it is much more appropriate to represent each column of the image in the way we describe herein.

[0084]

As an advantage, this method maintains the crispness of the horizontal and vertical edge structures, which are predominant in building facades. In contrast, other block-based methods suffer from an intrinsic blurring of edges due to approximation and quantization.

[0085]

The row matrix can be treated as a column matrix, so that the method above can also be applied to the row matrix.

[0086]

Decompressing

During image reconstruction, as shown in Fig. 3, the column matrix is first reconstructed and normalized by decoding 311 the compressed image 105, and then multiplied 320 with the decoded 312 row matrix to obtain the decompressed image 109.
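The reconstruction step can be sketched as a normalization of the decoded column matrix followed by a matrix product. Unit-norm column normalization is an illustrative assumption here; the disclosure only states that the column matrix is normalized before the multiplication.

```python
import numpy as np

def decompress_block(C, R):
    """Reconstruct one block: normalize the decoded column matrix,
    then multiply by the decoded row matrix."""
    norms = np.linalg.norm(C, axis=0)
    norms[norms == 0] = 1.0  # guard against all-zero columns
    return (C / norms) @ R
```

The per-block results are then tiled back into their original positions to form the decompressed image.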

[Industrial Applicability]

[0087]

The method of this invention is applicable to the compression of textured images in many kinds of fields.