

Title:
SYSTEM AND METHOD FOR GENERATING IMAGE USING MULTIPLE LENSES AND MULTIPLE IMAGERS
Document Type and Number:
WIPO Patent Application WO/2014/084730
Kind Code:
A1
Abstract:
The present invention relates to a system and method of combining low resolution (LR) images captured by a digital camera consisting of a plurality of lens assemblies with color filters and a plurality of image sensors. Some of the imagers operate with Red, Green and Blue (R, G, B) color channels, and one of the imagers uses transparent filter elements, further referenced as a White color filter (W). The method uses correlation between the R, G, B and W channels together with edge detection to compose the resulting high-resolution (HR) image from the LR images.

Inventors:
VAN DEN BRANDHOF EVERT ALEXANDER (FR)
STARODUBOV KOSTYANTYN (UA)
LOAIZA MAURICIO (CH)
Application Number:
PCT/NL2013/050854
Publication Date:
June 05, 2014
Filing Date:
November 27, 2013
Assignee:
MULTIMAGNETIC SOLUTIONS LTD (GB)
VAN DEN BRANDHOF EVERT ALEXANDER (FR)
International Classes:
G06T3/00; G06T3/40
Domestic Patent References:
WO2012057619A1, 2012-05-03
Foreign References:
US6611289B1, 2003-08-26
US20050063610A1, 2005-03-24
Attorney, Agent or Firm:
ALGEMEEN OCTROOI- EN MERKENBUREAU B.V. (AP Eindhoven, NL)
Claims:
CLAIMS

1. A system for generating a high resolution RGB image using a multi aperture camera comprising at least three imagers, each including lens stacks, color filters and sensors, and at least one imager including lens stacks, a white filter and a sensor, each imager generating low resolution images with the same field of view.

2. A system according to claim 1, wherein the sensors are RAW image sensors implemented as CMOS image sensors.

3. A method of processing digitized image signals from R, G, B color filter sensors and W color filter sensors using the system according to claims 1-2, comprising the following steps: i) transmitting the image data from each of the sensors to a Bayer synthesis unit, ii) processing the data obtained from step i) in the Bayer synthesis unit, iii) processing the data from step ii) in a RGB processing unit, iv) processing the data obtained from step iii) in a post-processing unit and v) processing the data obtained from step iv) in a scaling unit.

4. A method according to claim 3, wherein transmitting the image data from each of the sensors to the Bayer synthesis unit takes place via MIPI CSI.

5. A method according to any one or more of the claims 3-4, wherein the synthesis of high resolution RGB image with dimensions 2Mx2N from low resolution MxN images takes place before the demosaicking step.

6. A method according to any one or more of claims 2-5, further comprising a step wherein edge matrices ER=E(R,W), EG=E(G,W), EB=E(B,W) are used to align R,G and B images relative to W image.

7. The method according to any one or more of claims 2-6, further comprising a step of calculation of correlation between low-resolution images from color filter images and White filter image, where White filter image is used as a reference image.

8. The method according to claim 3, consisting of the following steps: i) an edge detection step for each of the R, G, B and W images, ii) a computation step of cross-correlation between the color filter images and the White filter image, iii) a scene-dependent spatial alignment step, and iv) an edge slopes detection and image sharpening step.

9. The method according to claim 5, further comprising the calculation of missing Green values in the resulting Bayer CFA, based on edge information and its orientation.

10. The method according to claim 5, wherein edge slope information is used for sharpening of pixel values across the edges, in order to compensate for the averaging of pixel values that happens during the demosaicking step.

11. The method as claimed in claim 7, characterized in that the cross-correlation computation is performed for the edges of the images, using a Boolean AND operation.

12. The method according to any one or more of claims 2-11, further comprising synthesizing a RGB Bayer image using a process based on extracting image information from the image edges and their slopes.

Description:
SYSTEM AND METHOD FOR GENERATING IMAGE USING MULTIPLE LENSES AND

MULTIPLE IMAGERS

ABSTRACT

The present invention relates to a system and method of combining low resolution (LR) images captured by a digital camera consisting of a plurality of lens assemblies with color filters and a plurality of image sensors. Some of the imagers operate with Red, Green and Blue (R, G, B) color channels, and one of the imagers uses transparent filter elements, further referenced as a White color filter (W). The method uses correlation between the R, G, B and W channels together with edge detection to compose the resulting high-resolution (HR) image from the LR images.

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to U.S. Provisional Patent Application No. 61/730,160, filed Nov. 27, 2012, the content of which is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

The present invention relates to a technology of image processing for imaging devices with a plurality of color filter sensors and for devices with multiple lenses with color filters sharing a single sensor. More particularly, exemplary embodiments of the present invention relate to the composition of a high-resolution Bayer RAW image from low resolution images in primary colors.

BACKGROUND OF THE INVENTION

Image sensors with a Bayer Color Filter Array (CFA) are very common in modern smartphones, tablet PCs and digital cameras. The Bayer CFA is described in U.S. Pat. No. 3,971,065, herewith incorporated by reference. Usually the Bayer filter is composed of regular patterns, each including two Green pixel masks together with one Blue and one Red pixel mask. The resulting RGB image is obtained from the Bayer arrangement using various demosaicking algorithms. Usage of a Bayer CFA implies the use of a single lens assembly and a single image sensor. The alternative to single lens, single sensor imaging systems are multi aperture cameras and camera arrays, consisting of more than one lens and one or more sensors.

International application WO 2012/057619 relates to a multi aperture camera that consists of multiple single-color image channels and at least one image sensor. The images from the channels are combined into a single multi-color image using several methods of spatial resolution improvement. According to one of the embodiments, the areas with higher modulation are selected and the selected areas are then combined into a final image. In another embodiment the method comprises the steps of selecting the area of interest in more than one image, obtaining the luminance (G channel) from a composition of the above areas, obtaining the chrominance (R or B channels) from images of different colors, and combining the luminance and chrominance into the resulting image. Yet another embodiment of this patent application provides methods for improvement of low light performance, where the luminance and chrominance created from one or more images are combined into a final color image. It is expected that the resulting image will have a higher signal-to-noise ratio. In order to select the luminance, the method determines the amount of light in the scene and selects the source of luminance accordingly.

The demosaicking process of full color image reconstruction from a Bayer CFA arrangement is based on interpolation of known pixel values to determine the values of missing pixels. In order to avoid interpolation across the edges, the direction of interpolation is detected. U.S. Pat. No. US5382976 relates to the analysis of neighboring G pixels to determine a preferred interpolation direction (gradient). The vertical and horizontal gradients are compared to a predetermined value. Depending on the comparison results, there are four categories of interpolation. They reveal the local image structures and determine how pixels along the vertical and horizontal directions are used to obtain the missing pixel.

U.S. Pat. No. US5373322 relates to a single-sensor digital camera with a Bayer CFA. This patent describes a method for recovering missing pixel values from source image data. The method is based on the calculation of gradient values in the vertical and horizontal directions of the R and B channels of the CFA arrangement. The interpolation is performed according to the preferred orientation (horizontally, vertically or two-dimensionally). The gradients are calculated using a 5 x 5 area with the missing pixel in the center. The approximation of the second-order derivatives of the R and B values is used for the gradient calculations.

The interpolation methods of demosaicking depend significantly on information about the image structure, first of all the edges. When an edge is detected, the interpolation is performed along the edge, not across it, in order to avoid averaging too different pixel values. In demosaicking methods the edge detection is limited to the local neighborhood of the pixel. In contrast, in image processing the edge detection algorithms are applied to the whole image. Edge detection reduces the amount of data to be processed but keeps the structural information of the image untouched. The Canny edge detection algorithm is the de facto standard for edge detection. It is described in: Canny, J. A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.

U.S. Pat. No. US2010/0141812 relates to a solid state imaging device with a CFA where a quincunx pattern of White array elements is incorporated along with Red, Green and Blue elements. Usage of W filters increases the sensitivity of the imaging device with a minimal decrease in resolution. The G, R and B elements are generated using direction correlation between White and G, White and R, and White and B. The direction correlation uses the rule that within a given structural "object" the cross-color ratio R/G remains unchanged and the B/G ratio is also constant. The ratios R/G and B/G are not constant across the edges. The W pixels are replaced with G pixels using a correlation between W and G. Similarly, R pixels and B pixels are generated based on the correlation between the R and W pixels and between the B and W pixels.

U.S. Pat. No. US2007/0024879, herein incorporated by reference, relates to an alternative to the Bayer CFA, the so-called CFA 2.0 arrangement. Similarly to U.S. Pat. No. US2010/0141812, the transparent array elements are incorporated between the R, G, B elements but are arranged in a different manner. In this way the "panchromatic" sensor detects all wavelengths of the visible incident light. After interpolation the panchromatic pixels compose the full resolution luminance image. It is then subsampled to the luminance image P with low resolution. After that the system obtains the low resolution color differences R-P, G-P and B-P. Using the interpolation technique, the low resolution images are transformed to full resolution chrominance-luminance images. In the last step, the full resolution panchromatic image P is added to the full resolution images and the final R, G, B images are obtained. Such an approach improves the luminance characteristics of the sensor, but the chrominance sensitivity is lower compared with a traditional CFA.

U.S. Pat. No. US5453840 discloses a system for alignment of R, G, and B images using the correlation between a reference image, for example G, and the other two images (R and B). The disclosed method calculates the 2-dimensional cross-correlation between two matrices as a multiplication of pixel values in both matrices. The system seeks the maximum (peak) value of the cross-correlation function. Its location defines the relative translation and rotation between two images. Both images are divided into a number of blocks, each of the same size. The cross-correlation function is computed for each block. Then for each of the blocks the cross-correlation peak is determined. In this way the system obtains the set of translation and rotation parameters needed to align a color image (R or B) relative to the reference image G. The method provides sub-pixel accuracy and thus requires floating point operations.

U.S. Pat. No. US2012/0147205 discloses systems and methods where a super-resolution (SR) process uses the information from multiple LR images from an array camera to generate the resulting HR image. Each imager in the imaging array generates the signal in one of the R, G, B spectra. The SR process performs signal restoration and cross-channel fusion and aliasing. The cross-channel processing includes estimation of HR images for the B and R colors using the estimation of the HR image for the G channel. Some of the methods apply scene independent geometric correction of images based on geometric calibration. In the embodiments a portion of the HR image is compared with at least a portion of at least one input image. This comparison may include the calculation of weight coefficients of the pixel neighborhood. The LR images are placed on the SR grid using scene independent calibration data and scene dependent parallax information.

SUMMARY

An object of this invention is to increase the quality of image composition from LR images in multi-camera systems.

A further object of the invention is to obtain the HR image in the form of a Bayer arrangement. The above stated objects are realized by a system and method for processing digitized image signals from R, G, B and W color filter sensors. Such a system and method include:

a) a Bayer synthesis unit directly operating with image signals coming from a plurality of sensors;

b) means for obtaining correlation between each of the R and W, G and W, B and W image pairs;

c) LR image alignment using the W image as a reference for the R, G, B images;

d) means to obtain a synthesized HR Bayer arrangement with better characteristics than the initial R, G, B, W images.

The distinguishing feature of the invention is the synthesis of the resulting image at an early step, before the demosaicking. The demosaicking process typically performs interpolation to obtain missing pixel values from neighboring pixels, which results in a loss of information. That is why the composition of the HR image from LR images at an early step has the following advantages:

a) the RAW image format is used, thus providing full information directly from the sensors;

b) it is possible to directly operate with pixels of the monochrome R, G and B images, changing their values to improve the quality of the resulting HR image;

c) the optimal results of HR image restoration are obtained when multi-channel LR images (in our case R, G, B and W) are used.

Another distinguishing feature is the usage of the W image as a reference for the R, G, B images. Compared to other methods, the proposed method uses the whole W image, not separate W pixels embedded into a CFA arrangement.

A further distinguishing feature is the usage of the correlation between R and W edges, G and W edges and B and W edges for spatial alignment of the R, G, B images relative to the W image. This is different from existing approaches that use the cross-color correlation between pixels within a local neighborhood, because edge correlation provides more information and calculation of the edge correlation requires fewer computational resources. This approach is also different from super-resolution methods since it does not use sub-pixel projection and interpolation of HR images onto a SR grid.

The distinguishing feature of the invention is the usage of edge information for interpolation of missing green pixels during transformation of the LR Green image into the Bayer CFA.

Another distinguishing feature is the usage of edge slopes information for image sharpening to improve the quality of the resulting HR image.

According to the embodiments of the present invention, the resulting HR image is represented as a common RGB Bayer arrangement; that is why an existing Image Signal Processing (ISP) pipeline can be used, and the costly development of a new ISP is not required.

The present invention relates to a system for generating a high resolution RGB image using a multi aperture camera comprising at least three imagers, each including lens stacks, color filters and sensors, and at least one imager including lens stacks, a white filter and a sensor, each imager generating low resolution images with the same field of view.

In a preferred embodiment the sensors are RAW image sensors implemented as CMOS image sensors.

The present invention also relates to a method of processing digitized image signals from R, G, B color filter sensors and W color filter sensors using the system as mentioned before, comprising the following steps: i) transmitting the image data from each of the sensors to a Bayer synthesis unit, ii) processing the data obtained from step i) in the Bayer synthesis unit, iii) processing the data from step ii) in a RGB processing unit, iv) processing the data obtained from step iii) in a post-processing unit and v) processing the data obtained from step iv) in a scaling unit.

In a preferred embodiment transmitting the image data from each of the sensors to the Bayer synthesis unit takes place via MIPI CSI.

In another preferred embodiment the synthesis of high resolution RGB image with dimensions 2Mx2N from low resolution MxN images takes place before the demosaicking step.

The present method preferably further comprises a step wherein edge matrices E_R = E(R,W), E_G = E(G,W), E_B = E(B,W) are used to align the R, G and B images relative to the W image.

The present method preferably further comprises a step of calculating the correlation between the low-resolution color filter images and the White filter image, where the White filter image is used as a reference image.

In a preferred embodiment the present method consists of the following steps: i) an edge detection step for each of the R, G, B and W images, ii) a computation step of cross-correlation between the color filter images and the White filter image, iii) a scene-dependent spatial alignment step, and iv) an edge slopes detection and image sharpening step.

The present method preferably further comprises the calculation of missing Green values in the resulting Bayer CFA, based on edge information and its orientation.

In the method according to the present invention, edge slope information is preferably used for sharpening of pixel values across the edges, in order to compensate for the averaging of pixel values that happens during the demosaicking step. The cross-correlation computation is preferably performed for the edges of the images, using a Boolean AND operation.

The present method further comprises preferably synthesizing a RGB Bayer image using a process based on extracting image information from the image edges and their slopes.

These and other features, aspects, and advantages of the present invention will become better understood with reference to the following description and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described in relation to the drawings, wherein:

Figure 1 is a block diagram of a digital image camera used in connection with the invention.

Figure 2 is a block diagram of a synthesis unit implementing the method of generating the image according to the invention.

Figure 3 is a chart illustrating the edge slope calculation within a single row of pixels of the sensor.

Figure 4 is a diagram illustrating the method of invention, including the portions of Bayer geometry with pixels layout for R, G, B, W pixel arrays, and the resulting Bayer arrangement.

Figure 5 demonstrates sample image together with edges detected for its Red and White channels.

Figure 6 shows an example of the monochrome Red edges image, divided into rectangular blocks, with offset vectors for each block.

Figure 7 contains tables with the horizontal and vertical offset values X_offset and Y_offset for the Red edges image.

Figure 8 contains the charts illustrating how the vertical edge is sharpened within the single row of the pixel values.

Figure 9 contains the interpolation formulas for each of cases of edge orientation for missing green pixels interpolation.

DETAILED DESCRIPTION

Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed descriptions of the preferred embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be considered as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.

The present invention relates to a system and method which may be used in imaging systems of mobile phones, smartphones and digital cameras. The system and method provide high resolution images with higher quality for multi-lens, multi-sensor cameras.

Figure 1 is a view illustrating a block diagram of a digital image camera including four lens stacks (101) together with four image sensors 102 supplied with Red (103), Green (104), Blue (105) and White (106) filters. It is preferable to have four lenses to focus the incident light onto the sensors 102. All imagers have the same field of view. The imager 106 has a White color filter. This imager receives both the entire visible spectrum and near-IR, and hence it has an increased signal-to-noise ratio.

The number of lenses and filters is not limited. The plurality of lenses and sensors can be spatially arranged in a matrix or linearly, oriented vertically or horizontally. The plurality of filters needs to include at least three color filters (R, G and B) and at least one white color filter (W).

The image data from the sensors is transmitted to the Bayer Synthesis Unit (108), preferably via MIPI CSI (107). The image sensors 102 may be RAW image sensors implemented as CMOS image sensors. The first processing unit 107 may be a MIPI Camera Serial Interface (CSI), specified in the document:

MIPI Alliance Standard for Camera Serial Interface CSI-2.

http://www.mipi.org/specifications/camera-interface

The second processing unit 108 is the Bayer Synthesis Unit; the next processing unit 109 may be a RGB processing unit that performs the demosaicking; the next processing unit 110 may be a post-processing unit; and the last one may be a scaling unit 111.

The processing of LR images to obtain the resulting HR image in accordance with an embodiment of the invention is illustrated in Figure 2. The imagers 201 create pixel information provided to the image alignment module 203. Let the LR images be represented by matrices R = [r_ij]_{M×N}, G = [g_ij]_{M×N}, B = [b_ij]_{M×N} and W = [w_ij]_{M×N}, generated by the Red, Green, Blue and White filters respectively. The module 203 performs spatial alignment of the R, G, B and W pixel matrices according to scene-independent geometric calibration data 202 together with scene-dependent image alignment.

The homography matrices M_R = M(R,W), M_G = M(G,W), M_B = M(B,W) are used for preliminary alignment of the R, G and B images relative to the W image.

The horizontal rows of pixels of the R, G, B and W sensors are assumed parallel. This is ensured during manufacturing. Further alignment (rectification) is performed during calibration of the camera system and is set by the mentioned homography matrices. Nevertheless, small misalignments are still present due to manufacturing errors, calibration inaccuracy, lens distortions, parallax and differences in the sensor matrices. The misalignments can be treated as translation only, since rotation is already excluded. The translation is expressed as vertical and horizontal offsets of the R, G and B images relative to the reference image W.

The edge detection algorithm is applied to each of the R, G, B, W images. It generates the edge matrices E_R = [e^R_ij]_{M×N}, E_G = [e^G_ij]_{M×N}, E_B = [e^B_ij]_{M×N} and E_W = [e^W_ij]_{M×N}. After the edges are detected, the R, G, B and W images are divided into the same number of blocks (windows) of the same size M_block by N_block. In this way each image contains K = M/M_block rows and L = N/N_block columns of blocks. The Correlation Matrices block calculates the correlation functions CORR^{k,l}, k=1,...,K, l=1,...,L for each block and finds the coordinates of the maximal value per block. These coordinates represent the horizontal and vertical offsets between the color images and the white image. The offsets are used to eliminate the parallax and perform the scene-dependent spatial alignment of the R, G, B images relative to the W image.

The Edge Slopes Detection block 204 scans the source R, G, B matrices, determines the width and height of the edges and transforms them into edge slope matrices S_R = [s^R_ij]_{M×N}, S_G = [s^G_ij]_{M×N} and S_B = [s^B_ij]_{M×N}. Each element s^C_ij of the matrices, where C is one of R, G, B, represents the slope that is used for adjustment of the LR pixels prior to the demosaicking process.

The block 205 implements HR image reconstruction using the spatial domain approach. The synthesized HR RGB Bayer arrangement 206 has dimensions scaled by a factor of 2 relative to the LR images: H = [h_ij]_{2M×2N}. This Bayer CFA 206 is then used as input to the RGB Processing Unit for further demosaicking.

Figure 3 depicts an example of a vertical edge profile. The detected edge pixel has column index j = 12. The local extrema have indexes j1 = 3 and j2 = 20 and values c_{i,j1} = 187 and c_{i,j2} = 48 respectively. Thus, the edge width is 20 - 3 = 17 pixels. The edge slope is defined as

s_ij = (c_{i,j1} - c_{i,j2}) / (j2 - j1) = (187 - 48) / 17 = 8.176,   (1)

according to: Marziliano, P.; Dufaux, F.; Winkler, S.; Ebrahimi, T. Perceptual Blur and Ringing Metrics: Application to JPEG2000. Signal Processing: Image Communication, February 2004, 19(2), 163-172.

The edge slopes are calculated not only for the vertical direction but also for the horizontal and diagonal directions of edges.
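The slope computation of equation (1) can be sketched in a few lines. This is an illustrative reconstruction (the function name and the flat test profile are ours), checked against the Figure 3 numbers.

```python
def edge_slope(row, j1, j2):
    """Edge slope per equation (1): the difference between the two local
    extrema of the profile, divided by the edge width (j2 - j1)."""
    width = j2 - j1                      # edge width in pixels
    return (row[j1] - row[j2]) / width

# Figure 3 example: extrema at j1 = 3 (value 187) and j2 = 20 (value 48)
row = [0] * 32
row[3], row[20] = 187, 48
slope = edge_slope(row, 3, 20)           # (187 - 48) / 17, i.e. about 8.176
```

In a full implementation the same computation would be repeated for horizontal and diagonal edge directions, as the text above notes.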

Figure 4 illustrates the process of composition of the HR image from the LR images R, G, B, represented by matrices R = [r^0_ij]_{M×N}, G = [g^0_ij]_{M×N}, B = [b^0_ij]_{M×N}, where superscript 0 denotes the initial values obtained from the sensors. The edge matrices E_R = [e^R_ij]_{M×N}, E_G = [e^G_ij]_{M×N}, E_B = [e^B_ij]_{M×N} and E_W = [e^W_ij]_{M×N} are built from the matrices of pixels of the monochrome colors R (103), G (104), B (105) and W (106). At this step the matrices R = [r^1_ij]_{M×N}, G = [g^1_ij]_{M×N}, B = [b^1_ij]_{M×N} are generated, where superscript 1 stands for step 1. Their pixels are spatially aligned relative to matrix W and are ready for further processing.
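As an illustrative sketch (not part of the original disclosure), the aligned M×N channels can be tiled onto a 2M×2N Bayer grid. The RGGB tile order below is an assumption for illustration; the actual pixel layout is defined by Figure 4, which this text copy does not reproduce.

```python
def synth_bayer(R, G, B, G3):
    """Place aligned M x N LR channels onto a 2M x 2N Bayer grid.
    Assumed RGGB tile: R at (2i,2j), aligned G at (2i,2j+1),
    interpolated G3 at (2i+1,2j), B at (2i+1,2j+1). The true layout
    in Figure 4 may differ."""
    M, N = len(R), len(R[0])
    H = [[0] * (2 * N) for _ in range(2 * M)]
    for i in range(M):
        for j in range(N):
            H[2 * i][2 * j] = R[i][j]
            H[2 * i][2 * j + 1] = G[i][j]
            H[2 * i + 1][2 * j] = G3[i][j]
            H[2 * i + 1][2 * j + 1] = B[i][j]
    return H

# a single 1x1 tile expands into one 2x2 Bayer cell:
H = synth_bayer([[1]], [[2]], [[3]], [[4]])   # → [[1, 2], [4, 3]]
```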

Figure 5 shows the sample image used to explain the method of the invention. The image is taken from the Kodak color image database (Eastman Kodak and various photographers, 1991, http://r0k.us/graphics/kodak/, image 19). This database is traditionally used as a benchmark for comparison of algorithms and methods. Figure 5a displays the initial image. Figures 5b and 5c show the edges detected in the R and W channels respectively. All images are used for illustration purposes only.

After the edges have been detected for the monochrome color matrices R, G or B and for matrix W, the correlation functions CORR^{k,l}, k=1,...,K, l=1,...,L are computed for each block. For example, Figure 6 shows the image of Red edges divided into 12 x 12 blocks, each of size M_block rows by N_block columns.

The cross-correlation function is calculated for corresponding pairs of blocks of the color matrix and of matrix W. The spatial cross-channel correlation value for each coordinates <i,j> of block <k,l> is calculated as

CORR^{k,l}(i,j) = SUM_{m=1}^{M_block} SUM_{n=1}^{N_block} c_{m,n} & w_{m+i,n+j},   (2)

where & means the Boolean AND operation, c_{m,n} is either 1 or 0 depending on whether the color (R,G,B) pixel value belongs to the color edge or not, and w_{m+i,n+j} is either 1 or 0 depending on whether a white image pixel belongs to the white edge. The maximal value of CORR^{k,l} is used to define the translation of the block in the color image relative to the block of the white image:

X_offset = argmax_j CORR_{i,j}   (3)
Y_offset = argmax_i CORR_{i,j}   (4)
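A minimal pure-Python sketch of equations (2)-(4) for a single block follows. The names and the bounded search window are ours, and wrap-around indexing stands in for proper border handling.

```python
def block_offsets(c_edges, w_edges, max_shift=3):
    """Equations (2)-(4): correlate two binary edge blocks with Boolean AND
    over candidate shifts; the argmax gives (X_offset, Y_offset)."""
    h, w = len(c_edges), len(c_edges[0])
    best, x_off, y_off = -1, 0, 0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            corr = sum(c_edges[i][j] & w_edges[(i + dy) % h][(j + dx) % w]
                       for i in range(h) for j in range(w))   # eq. (2)
            if corr > best:                                   # eqs. (3), (4)
                best, x_off, y_off = corr, dx, dy
    return x_off, y_off

# a short vertical edge in W at column 4, and the same edge one pixel
# to the right (column 5) in the color block:
w_blk = [[0] * 8 for _ in range(8)]
c_blk = [[0] * 8 for _ in range(8)]
for i in range(2, 6):
    w_blk[i][4] = 1
    c_blk[i][5] = 1
# block_offsets(c_blk, w_blk) → (-1, 0): shifting the color block one
# pixel left aligns it with the white block
```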

In Figure 6 the vectors <X_offset, Y_offset> are displayed as arrows oriented in such a way as to compensate the displacement of the R image relative to the W image. In certain cases, when a block of the C (R, G, B) matrix contains no edges, or when a block of the W matrix contains no edges, the system uses a standard formula of cross-channel correlation:

CORR(i,j) = SUM_{m=1}^{M_block} SUM_{n=1}^{N_block} (c_{m,n} - mean(C)) * (w_{m+i,n+j} - mean(W)) / (sigma_C * sigma_W),   (5)

where the mean(C) and mean(W) values are taken over the pixels in the color (R,G,B) block and the White block respectively, and sigma_C and sigma_W are the standard deviations of the C and W blocks. Similarly to equation (2), the translations X_offset, Y_offset are detected as the coordinates of the maximum of CORR_{i,j}, i=1,...,M, j=1,...,N.
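Equation (5) itself is not legible in this copy of the text; given the stated terms (block means and standard deviations), the classical zero-mean normalized cross-correlation is the natural reading. A sketch over flattened blocks, with our own names:

```python
from math import sqrt

def ncc(c_block, w_block):
    """Zero-mean normalized cross-correlation of two equal-size blocks,
    a plausible form of equation (5); the original equation is not
    legible in this text copy."""
    n = len(c_block)
    mc = sum(c_block) / n
    mw = sum(w_block) / n
    sc = sqrt(sum((x - mc) ** 2 for x in c_block) / n)
    sw = sqrt(sum((x - mw) ** 2 for x in w_block) / n)
    cov = sum((x - mc) * (y - mw) for x, y in zip(c_block, w_block)) / n
    return cov / (sc * sw)

# identical blocks correlate perfectly, reversed blocks anti-correlate:
# ncc([1, 2, 3, 4], [1, 2, 3, 4]) → 1.0
# ncc([1, 2, 3, 4], [4, 3, 2, 1]) → -1.0
```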

In general, the algorithm of edge cross-correlation can be implemented as follows:

FOR y = 1, K
    FOR x = 1, L
        Calculate CORR using equation (2)
        X_offset = argmax_j CORR_ij
        Y_offset = argmax_i CORR_ij
        IF horizontal offset and vertical offset are not valid
            Calculate CORR using equation (5)
            X_offset = argmax_j CORR_ij
            Y_offset = argmax_i CORR_ij
        END IF
    END FOR
END FOR

The algorithm makes a decision about the validity of the horizontal and vertical offsets based on their size. If the offsets are larger than the dimensions of the block, they are considered invalid. This means that the C and/or W blocks contain no edges, or that the cross-correlation between the C and W edges cannot be estimated. In this case the cross-correlation is calculated not for the edges but for the pixels of the C and W blocks. Even in this case certain offsets are not valid, as shown in Figure 7 (grayed cells). They are considered as outliers and are replaced by offsets from adjacent blocks.
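The outlier-replacement rule can be sketched as follows. The patent only says invalid offsets "are replaced by offsets from adjacent blocks"; taking the median of the valid neighbors is our assumed policy, not the disclosed one.

```python
from statistics import median

def repair_offsets(grid, block_w, block_h):
    """Replace invalid per-block offsets (|dx| > block_w or |dy| > block_h)
    with the median of the valid neighboring blocks' offsets.
    The median-of-neighbors policy is an assumption for illustration."""
    K, L = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for k in range(K):
        for l in range(L):
            dx, dy = grid[k][l]
            if abs(dx) > block_w or abs(dy) > block_h:
                nbrs = [grid[kk][ll]
                        for kk in range(max(0, k - 1), min(K, k + 2))
                        for ll in range(max(0, l - 1), min(L, l + 2))
                        if (kk, ll) != (k, l)
                        and abs(grid[kk][ll][0]) <= block_w
                        and abs(grid[kk][ll][1]) <= block_h]
                if nbrs:
                    out[k][l] = (median(p[0] for p in nbrs),
                                 median(p[1] for p in nbrs))
    return out

# Example: a 2x2 grid of per-block offsets with one outlier
grid = [[(1, 0), (1, 0)], [(99, 99), (1, 0)]]
fixed = repair_offsets(grid, 8, 8)       # the (99, 99) outlier becomes (1, 0)
```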

The values of X_offset, Y_offset are shown in Figure 7. They correspond to the offset vectors in Figure 6.

The division of the C and W matrices into blocks can be efficiently implemented on a GPU parallel architecture, which significantly increases the performance of the described method.

Once the translation parameters X_offset, Y_offset are obtained for all monochrome images C in {R,G,B}, the image matrices are spatially aligned relative to the W matrix. In the disclosed embodiment the alignment (resampling) is performed using bilinear interpolation. The resulting matrices are R = [r^1_ij]_{M×N}, G = [g^1_ij]_{M×N}, B = [b^1_ij]_{M×N} (Figure 4). Other embodiments of the described method can utilize cubic convolution, spline interpolation and other resampling methods.
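The bilinear resampling used for the scene-dependent alignment can be sketched as below. Border clamping and the sign convention (sampling at j + dx moves content toward negative j) are our assumptions.

```python
import math

def shift_bilinear(img, dx, dy):
    """Resample img (a list of rows) shifted by a fractional (dx, dy)
    using bilinear interpolation; out-of-range samples clamp to the
    border. Illustrative sketch of the alignment resampling step."""
    h, w = len(img), len(img[0])

    def at(y, x):                        # clamped pixel fetch
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            y, x = i + dy, j + dx
            y0, x0 = math.floor(y), math.floor(x)
            fy, fx = y - y0, x - x0
            out[i][j] = ((1 - fy) * (1 - fx) * at(y0, x0)
                         + (1 - fy) * fx * at(y0, x0 + 1)
                         + fy * (1 - fx) * at(y0 + 1, x0)
                         + fy * fx * at(y0 + 1, x0 + 1))
    return out

# half-pixel shift blends horizontal neighbors:
img = [[0, 10], [20, 30]]
half = shift_bilinear(img, 0.5, 0)       # half[0][0] is (0 + 10) / 2 = 5.0
```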

Using the edge matrices and the R, G, B matrices as input, the Edge Slope Detection block 204 generates the edge slope matrices S_R = [s^R_ij]_{M×N}, S_G = [s^G_ij]_{M×N} and S_B = [s^B_ij]_{M×N}. Elements of the edge slope matrices are used as correction coefficients for pixels of the color R, G, B matrices at the HR image synthesis step 205. The system performs sharpening of the edges to eliminate the averaging effect of demosaicking algorithms. Since during demosaicking interpolation across the edges might be performed, the proposed method increases the pixel values near the edges to compensate for the potential averaging. The method calculates new pixel values in the vicinity of an edge using the sharpening formula from Digital Image Quality Testing (Imatest LLC, http://www.imatest.com/docs/sharpening), adjusted here for edge slopes:

(6)

where

K_sharp is the sharpening parameter, ranging from 0.3 to 0.5,
s_ij is the slope of the edge,
V is the width of the edge,
L = V/2.
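Equation (6) itself is not legible in this text copy. The Imatest reference it cites uses an unsharp-style formula, so only that base form is sketched here; the patent's edge-slope adjustment via s_ij is omitted because its exact role is not recoverable from this copy.

```python
def sharpen_row(row, i, k_sharp, v):
    """Unsharp-style sharpening of one pixel in a row: subtract a fraction
    of the average of the two pixels at distance L = v // 2, then rescale.
    Base form only; the slope adjustment of equation (6) is not
    reproduced here."""
    L = max(1, v // 2)
    lo = row[max(i - L, 0)]
    hi = row[min(i + L, len(row) - 1)]
    return (row[i] - k_sharp * (lo + hi) / 2) / (1 - k_sharp)

# flat regions are unchanged; pixels next to an edge overshoot outward:
# sharpen_row([5, 5, 5, 5], 1, 0.5, 2) → 5.0
# sharpen_row([0, 0, 10, 10], 2, 0.5, 2) → 15.0
```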

Figure 8 depicts an example of vertical edge sharpening for K_sharp = 0.5 and V = 2.

The Bayer CFA arrangement contains twice the number of green pixels compared to the number of red or blue pixels. That is why in step 3 the missing green pixels g^3_ij (shown as gray cells in Figure 4, item 206) are interpolated using the information about the edges obtained in the previous steps. The green pixels define the luminance of the image, which is why during demosaicking it is important to provide proper values of the G channel first, and only later to operate with the R and B channels. The interpolation is performed depending on the edge configuration. It takes into account the orientation of the edge, as shown in Figure 9. The green pixels that belong to an edge are depicted as gray rectangles. The central green pixel value g_ij is obtained from the surrounding green pixels depending on where the vertical edge is located. If pixel g_ij is at the left of an edge, its value is taken as the average of green pixels from a homogeneous area belonging to an "object" in the image, as shown in Figure 9a:

g_ij = (g_{i-1,j-1} + g_{i+1,j-1}) / 2   (7)

When the central green pixel is located at the right of a vertical edge, its value is taken as the average of the neighbor pixels that belong to the vertical edge (Figure 9b):

g_ij = (g_{i-1,j-1} + g_{i+1,j-1}) / 2   (8)

For a bottom-up diagonal edge the central value is computed as the average of the pixels belonging to the edge, to preserve its consistency (Figure 9c):

g_ij = (g_{i-1,j+1} + g_{i+1,j-1}) / 2   (9)

Finally, when none of the pixels belongs to an edge inside the 3 x 3 neighborhood of the central pixel g_ij, its value is calculated in the usual demosaicking way (Figure 9d):

g_ij = (g_{i-1,j-1} + g_{i-1,j+1} + g_{i+1,j-1} + g_{i+1,j+1}) / 4   (10)

Similarly, the missing green pixels are interpolated for a horizontal edge and a top-down diagonal edge.
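The four vertical-edge cases of equations (7)-(10) can be collected into one dispatch. Note that as printed, equations (7) and (8) reduce to the same neighbor pair and only the Figure 9 panels distinguish them, so treat the reconstructed indices as approximate; the case names are ours.

```python
def interp_green(g, i, j, edge_case):
    """Missing-green interpolation per equations (7)-(10); g is a 2-D
    list of green values, edge_case names the Figure 9 configuration
    (hypothetical labels)."""
    if edge_case == "left_of_vertical":     # eq. (7)
        return (g[i-1][j-1] + g[i+1][j-1]) / 2
    if edge_case == "right_of_vertical":    # eq. (8), identical as printed
        return (g[i-1][j-1] + g[i+1][j-1]) / 2
    if edge_case == "bottomup_diagonal":    # eq. (9)
        return (g[i-1][j+1] + g[i+1][j-1]) / 2
    # eq. (10): no edge in the 3x3 neighborhood — plain 4-neighbor average
    return (g[i-1][j-1] + g[i-1][j+1] + g[i+1][j-1] + g[i+1][j+1]) / 4

g = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# interp_green(g, 1, 1, "none") → (1 + 3 + 7 + 9) / 4 = 5.0
```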

The resulting RGB Bayer arrangement 206 has dimensions 2M x 2N and consists of pixel values r^2_ij, g^2_ij, b^2_ij and g^3_ij, where superscripts 2 and 3 denote steps 2 and 3.

After all the described steps the synthesized RGB Bayer arrangement 206 is submitted to the standard RGB processing unit 109 for subsequent processing, including the demosaicking process.