


Title:
APPARATUS AND METHOD OF GENERATING PANORAMIC IMAGE AND COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM FOR EXECUTING THE METHOD
Document Type and Number:
WIPO Patent Application WO/2010/101434
Kind Code:
A2
Abstract:
An apparatus and method of generating a panoramic image and a computer-readable recording medium having embodied thereon a program for executing the method. The apparatus includes: an image acquiring unit for sequentially acquiring a plurality of images; a matching region acquiring unit for acquiring a matching region, which is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images, wherein the matching region includes sub-regions obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined; and a panorama generating unit for generating a panoramic image by blending a region of the first image corresponding to the matching region with a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

Inventors:
CHANG SEUNG HO (KR)
Application Number:
PCT/KR2010/001380
Publication Date:
September 10, 2010
Filing Date:
March 05, 2010
Assignee:
CORE LOGIC INC (KR)
CHANG SEUNG HO (KR)
International Classes:
H04N5/262; H04N5/225
Foreign References:
KR100724134B12007-05-25
KR100678208B12007-02-02
Attorney, Agent or Firm:
Y.P.LEE, MOCK & PARTNERS (Seocho-dong Seocho-gu, Seoul 137-875, KR)
Claims:

[Claim 1] An apparatus for generating a panoramic image, the apparatus comprising: an image acquiring unit for sequentially acquiring a plurality of images; a matching region acquiring unit for acquiring a matching region, which is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images, wherein the matching region includes sub-regions obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined; and a panorama generating unit for generating a panoramic image by blending a region of the first image corresponding to the matching region with a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

[Claim 2] The apparatus of claim 1, wherein the panorama generating unit comprises: a matching region blending unit for obtaining a blending matching region by using color information of pixels in the regions of the first image and the second image corresponding to the matching region and weight values of the sub-regions calculated by the weight function; and a panorama combining unit for combining a region of the first image other than the matching region, the blending matching region, and a region of the second image other than the matching region into the panoramic image.

[Claim 3] The apparatus of claim 2, wherein the matching region blending unit obtains the blending matching region by calculating a color information deviation by subtracting color information of pixels in regions of the first image corresponding to the sub-regions from color information of pixels in regions of the second image corresponding to the sub-regions, in units of the sub-regions, calculating a weighted color information deviation by multiplying weight values of the sub-regions by the color information deviation, and adding the weighted color information deviation to the color information of the pixels in the regions of the first image corresponding to the sub-regions.

[Claim 4] The apparatus of claim 2, wherein the matching region blending unit obtains the blending matching region by weight averaging color information of pixels in regions of the first image corresponding to the sub-regions and color information of pixels in regions of the second image corresponding to the sub-regions with the weight values of the sub-regions, in units of the sub-regions.

[Claim 5] The apparatus of claim 1, wherein the panorama generating unit comprises: an image line loading unit for sequentially loading, in the combination direction, lines that are composed of pixels of the first image and the second image and are perpendicular to the combination direction; a matching region determining unit for determining whether a loaded line is included in the matching region; and a panorama sequential generation unit for, if it is determined that the loaded line is not included in the matching region, inserting the loaded line into a position of the panoramic image corresponding to the loaded line, and if the loaded line is included in the matching region, loading a matching line of the second image matched to the loaded line, determining a final line of the loaded line by using a weight value of a sub-region in which the loaded line is included and which is calculated by the weight function, and inserting the final line into the position of the panoramic image corresponding to the loaded line.

[Claim 6] The apparatus of claim 1, wherein the number of the sub-regions is equal to the number of lines of pixels of the matching region in the combination direction.

[Claim 7] The apparatus of claim 1, wherein the number of the sub-regions is variable.

[Claim 8] The apparatus of claim 1, wherein, if the number of the sub-regions is L (L being an integer), the weight function is a monotonic function whose input variable ranges from 0 to L+1, and the weight function has a value of 0 when the input variable is 0 and a value of 1 when the input variable is L+1.

[Claim 9] The apparatus of claim 8, wherein the weight function is a linear function.

[Claim 10] The apparatus of claim 1, further comprising an image correcting unit for selecting a predetermined number of pixels having the same positions in the regions of the first image and the second image corresponding to the matching region, calculating averages of color information of the pixels, and multiplying a ratio of the averages by color information of all pixels included in the first image or the second image.

[Claim 11] A method of generating a panoramic image, the method comprising: sequentially acquiring a plurality of images; acquiring a matching region, which is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images, wherein the matching region includes sub-regions obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined with each other; and generating a panoramic image by blending a region of the first image corresponding to the matching region and a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

[Claim 12] The method of claim 11, wherein the generating of the panoramic image comprises: obtaining a blending matching region by using color information of pixels in the regions of the first image and the second image corresponding to the matching region and weight values of the sub-regions calculated by the weight function; and combining a region of the first image other than the matching region, the blending matching region, and a region of the second image other than the matching region into the panoramic image.

[Claim 13] The method of claim 12, wherein the obtaining of the blending matching region comprises obtaining the blending matching region by calculating a color information deviation by subtracting color information of pixels in regions of the first image corresponding to the sub-regions from color information of pixels in regions of the second image corresponding to the sub-regions, in units of the sub-regions, calculating a weighted color information deviation by multiplying weight values of the sub-regions by the color information deviation, and adding the weighted color information deviation to the color information of the pixels in the regions of the first image corresponding to the sub-regions.

[Claim 14] The method of claim 12, wherein the obtaining of the blending matching region comprises obtaining the blending matching region by weight averaging color information of pixels in regions of the first image corresponding to the sub-regions and color information of pixels in regions of the second image corresponding to the sub-regions with the weight values of the sub-regions, in units of the sub-regions.

[Claim 15] The method of claim 11, wherein the generating of the panoramic image comprises: sequentially loading, in the combination direction, lines that are composed of pixels of the first image and the second image and are perpendicular to the combination direction; determining whether a loaded line is included in the matching region; if it is determined that the loaded line is not included in the matching region, inserting the loaded line into a position of the panoramic image corresponding to the loaded line, if it is determined that the loaded line is included in the matching region, loading a matching line of the second image matched to the loaded line, determining a final line of the loaded line by using a weight value of a sub-region in which the loaded line is included and which is calculated by the weight function, and inserting the final line to the position of the panoramic image corresponding to the loaded line.

[Claim 16] The method of claim 11, wherein the number of the sub-regions is equal to the number of lines of pixels of the matching region in the combination direction.

[Claim 17] The method of claim 11, wherein the number of the sub-regions is variable.

[Claim 18] The method of claim 11, wherein, if the number of the sub-regions is L (L being an integer), the weight function is a monotonic function whose input variable ranges from 0 to L+1, and the weight function has a value of 0 when the input variable is 0 and a value of 1 when the input variable is L+1.

[Claim 19] The method of claim 18, wherein the weight function is a linear function.

[Claim 20] The method of claim 11, further comprising selecting a predetermined number of pixels having the same positions in the regions of the first image and the second image corresponding to the matching region, calculating averages of color information of the pixels, and multiplying a ratio of the averages by color information of all pixels included in the first image or the second image.

Description:

Title of Invention: APPARATUS AND METHOD OF GENERATING PANORAMIC IMAGE AND COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM FOR EXECUTING THE METHOD

Technical Field

[1] The present invention relates to an apparatus and method of generating a panoramic image and a computer-readable recording medium having embodied thereon a program for executing the method, and more particularly, to an apparatus and method of generating a panoramic image having a naturally blended matching region, and a computer-readable recording medium having embodied thereon a program for executing the method.

Background Art

[2] Apparatuses such as single-lens reflex (SLR) cameras were long used to create film-based images. Today, digital optical devices, such as the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) sensor, are used to create digital images.

[3] As people enjoy more cultural activities, digital optical devices are becoming more popular, and rapid progress is being made regarding various auxiliary photographing apparatuses and digital image processing devices.

[4] Image photographing apparatuses, both film-based and digital, create images by developing optical information introduced through optical devices, such as a lens, an iris, and a shutter, onto a film, or by converting the optical information into electrical signals by using an optical sensor. However, such image photographing apparatuses are limited by the angle of view of the lens facing a subject.

[5] Although various lens groups have been developed to address this limitation, it cannot be completely overcome because of the physical characteristics of lenses.

[6] Panoramic images have been suggested as a way to overcome this limitation and to satisfy the various needs of users of digital image photographing apparatuses. The term panoramic image refers to a wide field of view image that cannot be captured with a single lens but can be produced by using a special photographing technique, changing the focal point of the lens, or performing digital image processing.

[7] That is, a panoramic image is a wide field of view image generated by connecting a plurality of separately captured images in rows, columns, or both.

[8] One known method generates a panoramic image by connecting a plurality of images sensed by a plurality of cameras at different angles; another physically adjusts an angle of a lens with respect to a subject in an image photographing apparatus to generate adjusted images and composes a panoramic image from the adjusted images.

[9] However, the methods require additional equipment, are greatly affected by subjective factors, such as a user's operational method, and are not suitable for portable compact mobile terminals that provide an image capturing service.

[10] In recent years, methods and apparatuses for generating a panoramic image even on a mobile terminal have been developed. They generate a panoramic image by detecting a matching region between two or more images and combining the images based on the matching region. However, the resulting panoramic image is often unnatural.

[11] This is because of the distortion that occurs when a three-dimensional (3D) scene is projected onto a two-dimensional (2D) image, that is, the phenomenon whereby the same length appears to decrease as its distance from the image sensor increases. Also, owing to the physical characteristics of a wide-angle lens, straight lines appear curved. In addition, because of auto white balance and auto exposure, the same object may appear with different levels of brightness or different colors.

[12] The distortion and the unnatural combination may be corrected by a program executed on a computing device capable of high-speed operation. However, it is difficult to correct them perfectly on a mobile terminal, which has limited resources and limited computing power.

[13] Accordingly, there is still a demand for a method of generating a natural panoramic image in a simple manner in a portable image photographing apparatus.

Disclosure of Invention

Technical Problem

[14] The present invention provides an apparatus and method of generating a natural panoramic image by naturally blending a matching region between original images.

[15] The present invention also provides a computer-readable recording medium having embodied thereon a program for executing the method.

Solution to Problem

[16] According to an aspect of the present invention, there is provided an apparatus for generating a panoramic image, the apparatus including: an image acquiring unit for sequentially acquiring a plurality of images; a matching region acquiring unit for acquiring a matching region, which is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images, wherein the matching region includes sub-regions obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined; and a panorama generating unit for generating a panoramic image by blending a region of the first image corresponding to the matching region with a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

[17] The panorama generating unit may include: a matching region blending unit for obtaining a blending matching region by using color information of pixels in the regions of the first image and the second image corresponding to the matching region and weight values of the sub-regions calculated by the weight function; and a panorama combining unit for combining a region of the first image other than the matching region, the blending matching region, and a region of the second image other than the matching region into the panoramic image.

[18] The matching region blending unit may obtain the blending matching region by calculating a color information deviation by subtracting color information of pixels in regions of the first image corresponding to the sub-regions from color information of pixels in regions of the second image corresponding to the sub-regions, in units of the sub-regions, calculating a weighted color information deviation by multiplying weight values of the sub-regions by the color information deviation, and adding the weighted color information deviation to the color information of the pixels in the regions of the first image corresponding to the sub-regions.

[19] The matching region blending unit may obtain the blending matching region by weight averaging color information of pixels in regions of the first image corresponding to the sub-regions and color information of pixels in regions of the second image corresponding to the sub-regions with the weight values of the sub-regions, in units of the sub-regions.

[20] The panorama generating unit may include: an image line loading unit for sequentially loading in the combination direction lines that are composed of pixels of the first image and the second image and are perpendicular to the combination direction; a matching region determining unit for determining whether a loaded line is included in the matching region; and a panorama sequential generation unit for, if it is determined that the loaded line is not included in the matching region, inserting the loaded line into a position of the panoramic image corresponding to the loaded line, and if the loaded line is included in the matching region, loading a matching line of the second image matched to the loaded line, determining a final line of the loaded line by using a weight value of a sub-region in which the loaded line is included and which is calculated by the weight function, and inserting the final line into the position of the panoramic image corresponding to the loaded line.

[21] The number of the sub-regions may be equal to the number of lines of pixels of the matching region in the combination direction. The number of the sub-regions may be variable.

[22] If the number of the sub-regions is L (L being an integer), the weight function may be a monotonic function whose input variable ranges from 0 to L+1; the weight function may have a value of 0 when the input variable is 0 and a value of 1 when the input variable is L+1. The weight function may be a linear function.
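As an illustration, the linear case may be sketched as follows; the function name and signature are assumptions made for this sketch, not part of the claims:

```python
def linear_weight(k, L):
    """Hypothetical linear weight function for sub-region index k.

    As described above: the input variable runs from 0 to L + 1, the
    weight is 0 at k = 0, 1 at k = L + 1, and it increases
    monotonically in between.
    """
    if not 0 <= k <= L + 1:
        raise ValueError("sub-region index out of range")
    return k / (L + 1)
```

Any monotonic function with the same endpoint values (for example, a smoothstep curve) would satisfy the description equally well.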

[23] The apparatus may further include an image correcting unit for selecting a predetermined number of pixels having the same positions in the regions of the first image and the second image corresponding to the matching region, calculating averages of color information of the pixels, and multiplying a ratio of the averages by color information of all pixels included in the first image or the second image.

[24] According to another aspect of the present invention, there is provided a method of generating a panoramic image, the method including: sequentially acquiring a plurality of images; acquiring a matching region, which is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images, wherein the matching region includes sub-regions obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined with each other; and generating a panoramic image by blending a region of the first image corresponding to the matching region and a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

[25] According to another aspect of the present invention, there is provided a computer-readable recording medium having embodied thereon a program for executing the method.

Advantageous Effects of Invention

[26] As described above, the apparatus and method according to the present invention may generate a natural panoramic image by blending regions of original images corresponding to a matching region by using a weight value that is defined based on a distance of each of pixels.

[27] Furthermore, the apparatus and method according to the present invention may achieve natural combination between regions other than the matching region by readjusting auto white balance and exposure values in consideration of the characteristics of the image photographing apparatus of a mobile terminal.

[28] Moreover, the apparatus and method according to the present invention may generate a panoramic image well suited to the hardware environment of a mobile terminal, since a natural panoramic image is generated even with simple computation and limited resources.

Brief Description of Drawings

[29] The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:

[30] FIG. 1 is a block diagram of an apparatus for generating a panoramic image, according to an embodiment of the present invention;

[31] FIG. 2 is a block diagram of a panorama generating unit of the apparatus of FIG. 1, according to an embodiment of the present invention;

[32] FIG. 3 is a block diagram of a panorama generating unit of the apparatus of FIG. 1, according to another embodiment of the present invention;

[33] FIGS. 4 through 8 are schematic views for explaining a process of generating a panoramic image, according to an embodiment of the present invention;

[34] FIG. 9 is a flowchart illustrating a method of generating a panoramic image, according to an embodiment of the present invention;

[35] FIG. 10 is a block diagram illustrating a panorama generating operation of the method of FIG. 9, according to an embodiment of the present invention;

[36] FIG. 11 is a block diagram illustrating a panorama generating operation of the method of FIG. 9, according to another embodiment of the present invention;

[37] FIG. 12 is a flowchart illustrating an image correcting operation optionally included in the method of FIG. 9; and

[38] FIGS. 13 and 14 illustrate panoramic images before and after blending.

Best Mode for Carrying out the Invention

[39] The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The terms and words used in the present specification and the appended claims should not be limited to their common or dictionary meanings, because an inventor may define the concepts of terms appropriately to describe his or her invention in the best manner. Therefore, they should be construed as having meanings and concepts that fit the technological concept and scope of the present invention.

[40] Therefore, the embodiments and structure described in the drawings of the present specification are just preferred embodiments of the present invention, and they do not represent the entire technological concept and scope of the present invention. Therefore, it should be understood that there may be many equivalents and modified embodiments that may substitute those described in this specification.

[41] Before the detailed description of the invention, terms and definitions necessary to describe the present invention will be briefly explained.

[42] In general, a color space of an image, which is essential to image processing, may be expressed in various ways, for example, as red, green, and blue (RGB); cyan, magenta, yellow, and key black (CMYK); the HS-family; the Commission Internationale de l'Eclairage (CIE) color spaces; or the Y-family, according to color mixing or similarity to the human visual system. It is obvious to one of ordinary skill in the art that one color space may be converted to another by a simple mathematical conversion formula.

[43] An input image includes a plurality of pixels, and each of the pixels has its own image information, such as brightness, hue, and saturation. Generally, the image information has values from 0 to 255 and is represented as 8-bit information. However, in alternative embodiments, the image information may be represented as 10-bit or 12-bit information depending on application conditions.

[44] Therefore, it should be understood that a color space coordinate system used as an example in the present invention may be applicable to another color space coordinate system equally or similarly, and a bit size of image information of a pixel in the input image is just an example used to describe the present invention.

[45] FIG. 1 is a block diagram of an apparatus 100 for generating a panoramic image, according to an embodiment of the present invention.

[46] Referring to FIG. 1, the apparatus 100 includes an image acquiring unit 110, a matching region acquiring unit 120, and a panorama generating unit 130. Optionally, the apparatus 100 may include an image correcting unit 140.

[47] The image acquiring unit 110 sequentially acquires a plurality of images which are to be combined into a panoramic image.

[48] The plurality of images include at least two original images to be combined. The image acquiring unit 110 may acquire images to be combined into a vertical panoramic image or a horizontal panoramic image. Alternatively, the image acquiring unit 110 may sequentially acquire images to be combined into a two-dimensional panoramic image, such as a 2x2, 2x3, or 3x3 arrangement. In this case, an overlapping region is not limited to sequentially acquired images; that is, a first image may also overlap with a part of a third image or a fourth image.

[49] The plurality of images to be combined into a panoramic image may be received from an external device of the apparatus 100, or may be acquired by being directly captured by a camera unit (not shown) included in the apparatus 100.

[50] This will be explained later in detail with reference to FIG. 4.

[51] The matching region acquiring unit 120 acquires a matching region that is an overlapping region between a first image and a second image to be combined with the first image from among the plurality of images which are sequentially acquired by the image acquiring unit 110.

[52] The matching region may be acquired by extracting feature points of the first and second images, and causing coordinates of the first and second images to correspond to each other through pattern matching of the extracted feature points.

[53] The matching region may be divided into a plurality of sub-regions. As the number of the sub-regions increases, a more natural panoramic image is achieved.

[54] This will be explained later in detail with reference to FIG. 4.

[55] The panorama generating unit 130 generates a panoramic image by blending a region of the first image corresponding to the matching region and a region of the second image corresponding to the matching region in units of the sub-regions by using a weight function that is defined for each of the sub-regions.

[56] The panoramic image generated by the panorama generating unit 130 is natural in terms of color and distortion since color information of the first image and color information of the second image are smoothly integrated with each other.

[57] This will be explained later in detail with reference to FIGS. 2 and 3.

[58] The image correcting unit 140 which is optionally included in the apparatus 100 performs image correction by selecting a predetermined number of pixels having the same positions in the regions of the first image and the second image corresponding to the matching region, calculating a first average and a second average by calculating averages of color information of the selected pixels, and applying a ratio of the first average to the second average to all pixels included in the first image or the second image.
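The correction just described may be sketched as follows, assuming scalar (e.g. grayscale) pixel values; the function and variable names are hypothetical, and the patent does not prescribe a particular implementation:

```python
def correct_brightness(first_overlap, second_overlap, second_image):
    """Match the second image's brightness to the first via the overlap.

    first_overlap and second_overlap hold color values of co-located
    pixels sampled from the matching region of each image; the ratio of
    their averages is applied to every pixel of the second image.
    Results are clamped to the 8-bit range [0, 255].
    """
    avg_first = sum(first_overlap) / len(first_overlap)
    avg_second = sum(second_overlap) / len(second_overlap)
    ratio = avg_first / avg_second
    return [min(255.0, max(0.0, p * ratio)) for p in second_image]
```

Because both samples come from the same scene content, their ratio estimates the difference in exposure and white balance between the two shots.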

[59] Due to the image correction of the image correcting unit 140, white balance and exposure values determined during capturing of the first image and the second image may be similar to each other, thereby making the panoramic image more natural.

[60] Since the image correcting unit 140 is optionally included in the apparatus 100, the image correcting unit 140 may be omitted without departing from the scope of the present invention.

The image correcting unit 140 will be explained later in detail with reference to FIG. 12.

[62] FIG. 2 is a block diagram of a panorama generating unit 130a of the apparatus 100 of FIG. 1, according to an embodiment of the present invention.

[63] Referring to FIG. 2, the panorama generating unit 130a includes a matching region blending unit 131 and a panorama combining unit 132.

[64] The matching region blending unit 131 uses color information of pixels in the regions of the first image and the second image corresponding to the matching region. The matching region blending unit 131 calculates a weight value of each of the sub-regions by using a weight function and uses the weight value of each of the sub-regions to blend the matching region.

[65] In FIG. 2, the matching region blending unit 131 may calculate a color information deviation by subtracting color information of pixels in regions of the first image corresponding to the sub-regions from color information of pixels in regions of the second image corresponding to the sub-regions, in units of the sub-regions. Next, the matching region blending unit 131 may calculate a weighted color information deviation by multiplying a weight value of each of the sub-regions, which is calculated by the weight function, by the color information deviation. Finally, the matching region blending unit 131 may obtain a blending matching region by adding the weighted color information deviation to the color information of the pixels in the regions of the first image corresponding to the sub-regions. The matching region blending unit 131 may obtain the blending matching region by performing the above process on all pixels included in all of the sub-regions.
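For a single pixel, the computation just described reduces to first + weight x (second - first); a sketch under the assumption of scalar color values, with hypothetical names:

```python
def blend_by_deviation(first_px, second_px, weight):
    """Deviation-based blend for one pixel of one sub-region.

    deviation = second - first
    blended   = first + weight * deviation
    weight is the weight value of the sub-region containing the
    pixel, in the range [0, 1].
    """
    deviation = second_px - first_px
    return first_px + weight * deviation
```

At weight 0 the result is the first image's pixel, and at weight 1 it is the second's, so the overlap transitions smoothly from one image to the other across the sub-regions.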

[66] Alternatively, the matching region blending unit 131 may weight average color information of pixels in regions of the first image corresponding to the sub-regions and color information of pixels in regions of the second image corresponding to the sub-regions by using the weight values of the sub-regions, in units of the sub-regions, to obtain a weighted average value. The matching region blending unit 131 may allocate the weighted average value as the color information of the corresponding pixels of the blending matching region. The matching region blending unit 131 may obtain the blending matching region by performing the above process on all pixels included in all of the sub-regions.
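This alternative is the familiar weighted average, which is algebraically equivalent to the deviation-based form described earlier; a scalar sketch with hypothetical names:

```python
def blend_by_weighted_average(first_px, second_px, weight):
    """Weighted-average blend for one pixel of one sub-region:
    (1 - weight) * first + weight * second, with weight in [0, 1]."""
    return (1 - weight) * first_px + weight * second_px
```

Expanding (1 - w) * a + w * b gives a + w * (b - a), confirming that both embodiments produce the same blended value.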

[67] The matching region blending unit 131 will be explained later in detail with reference to FIG. 4.

[68] The panorama combining unit 132 may combine a region of the first image other than the matching region, the blending matching region, and a region of the second image other than the matching region, to generate a panoramic image.

[69] FIG. 3 is a block diagram of a panorama generating unit 130b of the apparatus 100 of FIG. 1, according to another embodiment of the present invention.

[70] Referring to FIG. 3, the panorama generating unit 130b includes an image line loading unit 135, a matching region determining unit 136, and a panorama sequential generation unit 137.

[71] The image line loading unit 135 may sequentially load lines composed of pixels of the first image and the second image. The lines may be perpendicular to a combination direction in which the first image and the second image are combined.

[72] The matching region determining unit 136 may determine whether a line loaded by the image line loading unit 135 is included in the matching region.

[73] If the loaded line is not included in the matching region, the panorama sequential generation unit 137 may insert the loaded line into a corresponding position of the panoramic image. If the loaded line is included in the matching region, the panorama sequential generation unit 137 may load a matching line of an image to be combined which corresponds to the loaded line. Next, the loaded line and the matching line are combined by using the sub-region in which the loaded line is included and the weight value of that sub-region, to form a final line. The final line may be inserted into the corresponding position of the panoramic image.

[74] This will be explained later in detail with reference to FIG. 4.

[75] FIGS. 4 through 8 are schematic views for explaining a process of generating a panoramic image, according to an embodiment of the present invention.

[76] Referring to FIGS. 1 and 4, two images 302 and 304 are exemplarily illustrated from among a plurality of images sequentially acquired by the image acquiring unit 110.

[77] As described above, the image acquiring unit 110 may sequentially acquire a plurality of images to be combined into a panoramic image.

[78] A process of combining the first image 302 and the second image 304 into a horizontal panoramic image will now be explained. The first image 302 is an arbitrary image acquired by the image acquiring unit 110, and the second image 304 is an image to be combined with the first image 302 in order to generate the horizontal panoramic image. It is assumed that each of the first image 302 and the second image 304 has a resolution of n x m. However, the scope of the present invention is not limited by a combination direction in which images are combined and the number of the images to be combined.

[79] The matching region acquiring unit 120 may acquire a matching region that is an overlapping region between the first image 302 and the second image 304.

[80] In order to acquire the matching region, feature points, for example, "312", of the first image 302 and the second image 304, may be extracted. Feature points refer to regions or points that are distinguishable from surrounding regions or points. For example, feature points may be points having the highest brightness compared to surrounding points, contact points between boundaries, or points having a pixel size different from those of surrounding points. Feature points may be determined in various extraction methods without departing from the scope of the present invention.

[81] Warping may be performed by extracting homography between the first image 302 and the second image 304 based on the feature points 312. Homography refers to a process of making a pixel coordinate system of one image the same as a pixel coordinate system of another image in order to combine the images. Distortion of the first image 302 and the second image 304 may be corrected due to the warping.

[82] For example, one of the feature points 312 may be selected. Coordinates of the selected feature point in the first image 302 and the second image 304 may be obtained. For example, assuming that coordinates of the feature point 312 in the first image 302 are (x1, y1) and coordinates of the feature point 312 in the second image 304 are (x2, y2), a region of the first image 302 corresponding to a matching region 308 may be defined with (x1-x2, y1-y2), (x1-x2, m), (n, y1-y2), and (n, m). Here, a coordinate system sets a left upper end of each image to (1, 1) and a right lower end of the image to (n, m). As shown in FIG. 4, it is assumed that the second image 304 is combined with a right portion of the first image 302.
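Under the assumptions of paragraph [82] (a coordinate system with (1, 1) at the upper-left of each n x m image and a rightward combination direction), the corner computation can be sketched as follows; the function name is illustrative:

```python
def matching_region_in_first(x1, y1, x2, y2, n, m):
    """Corners of the region of the first image corresponding to the
    matching region: the upper-left corner (x1-x2, y1-y2) and the
    lower-right corner (n, m) of an n x m first image, per [82]."""
    return (x1 - x2, y1 - y2), (n, m)
```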

[83] Accordingly, the matching region acquiring unit 120 may acquire the matching region 308 between the first image 302 and the second image 304 through the above process. Optionally, the matching region 308 may be transmitted to the apparatus 100 from an external device. For example, a photographing unit (not shown) of the apparatus 100 may provide the matching region 308, which is previously determined, by capturing an image so that the first image 302 and the second image 304 partially overlap with each other.

[84] Referring to FIGS. 1 and 5, the first image 302 and the second image 304 are illustrated with the matching region 308.

[85] The matching region acquiring unit 120 may determine a combination direction in which the first image 302 and the second image 304 are combined with each other by using the matching region 308. In a horizontal panoramic image mode, if the matching region 308 is located on a right portion of the first image 302 as shown in FIG. 5, the second image 304 is to be combined with the right portion of the first image 302, and in this case, the combination direction is a rightward direction. In a vertical panoramic image mode, if the matching region 308 is located on a lower end portion of the first image 302, the second image 304 is to be combined with the lower end portion of the first image 302, and in this case, the combination direction may be a downward direction.

[86] In FIGS. 1 and 5, it is assumed that the matching region 308 is composed of d x m pixels. That is, the matching region 308 includes "d" vertical lines, each composed of m pixels.

[87] The matching region 308 may include a plurality of sub-regions 314 that are obtained by dividing the matching region 308 in a direction perpendicular to the combination direction. If the combination direction is a rightward direction, the sub-regions 314 may be obtained by dividing the matching region 308 in a vertical direction. In FIGS. 1 and 5, it is assumed that the matching region 308 includes L sub-regions 314. Since a pixel is a minimum unit having color information, L is not greater than "d".

[88] As will be described later, the number L of the sub-regions 314 is a constant that allows smooth combination. As the number L of the sub-regions 314 constituting the matching region 308 increases, smoother combination between the first image 302 and the second image 304 may be achieved. For example, if the number L of the sub-regions 314 is equal to the number "d" of lines of pixels in the combination direction, the smoothest combination between the first and second images 302 and 304 may be achieved.

[89] Through the above process, a plurality of images including at least two images to be combined with each other are acquired and a matching region which is an overlapping region between the two images is determined.

[90] The panorama generating unit 130 will now be explained in detail.

[91] As described above, the panorama generating unit 130 may be the panorama generating unit 130a illustrated in FIG. 2 or the panorama generating unit 130b illustrated in FIG. 3. The panorama generating unit 130a generates a panoramic image by performing an arithmetic operation on the matching region and then combining the matching region with non-matching regions 306 and 310. On the other hand, the panorama generating unit 130b may generate a panoramic image by sequentially loading lines composed of pixels of the first image 302 and the second image 304 and performing an arithmetic operation. The operation of the panorama generating unit 130b may be suitable for displaying the panoramic image on a screen.

[92] Before describing the panorama generating units 130a and 130b in detail, a weight function w(x) defined for each of the sub-regions 314 and a weight value of each of the sub-regions 314 calculated by using the weight function w(x) will be explained with reference to FIGS. 6 and 7.

[93] A weight value of each of the sub-regions 314 may be calculated by using the weight function w(x). The weight function w(x), which is a function with input variables of 0 to L+1 (L is an integer), may have a value equal to or greater than 0 and equal to or less than 1. The weight function w(x) may have a value of 0 when the input variable is 0, and may have a value of 1 when the input variable is L+1. The weight function w(x) may be a monotonically non-decreasing function, that is, its value increases or stays equal as the input variable increases.

[94] For example, the weight function w(x) may be a linear function, for example, w(x) = x/(L+1), or a function which varies according to characteristics of the first image 302 and the second image 304, for example, w(x) = 0.5*sin[π{x/(L+1)-0.5}] + 0.5, w(x) = 0.5*tan[π{x/(2(L+1))-0.25}] + 0.5, or w(x) = 4{x/(L+1)-0.5}³ + 0.5. The weight function w(x) may be determined by a user's selection. In order to reduce the amount of computation of the apparatus 100, a linear function may be determined as the weight function w(x).
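These weight functions can be written directly; each maps the input range 0 to L+1 into [0, 1], with w(0) = 0 and w(L+1) = 1. A sketch, assuming the formulas as reconstructed above:

```python
import math

def w_linear(x, L):
    """Linear weight function w(x) = x/(L+1)."""
    return x / (L + 1)

def w_sine(x, L):
    """Sine-based weight function: 0.5*sin(pi*(x/(L+1) - 0.5)) + 0.5."""
    return 0.5 * math.sin(math.pi * (x / (L + 1) - 0.5)) + 0.5

def w_cubic(x, L):
    """Cubic weight function: 4*(x/(L+1) - 0.5)**3 + 0.5."""
    return 4 * (x / (L + 1) - 0.5) ** 3 + 0.5
```

All three are monotonically non-decreasing on the input range; the non-linear variants change how quickly the blend shifts from the first image to the second across the matching region.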

[95] An input of the weight function w(x) may be a number of a sub-region, and a result of the input may be a weight value of the sub-region. For example, a weight value of an a-th sub-region is a result value of the weight function w(x) when "a" is an input to the weight function w(x), that is, w(a). Here, "a" is obviously equal to or greater than 1 and equal to or less than L, and a number of a sub-region is assumed to be determined in the combination direction. In FIG. 6, a leftmost sub-region is a first sub-region, and a rightmost sub-region is an L-th sub-region.

[96] The matching region blending unit 131 of the panorama generating unit 130a of FIG. 2 blends regions of the first image 302 and the second image 304 corresponding to the matching region 308.

[97] The matching region blending unit 131 calculates a color information deviation by subtracting color information of all pixels included in the region of the first image 302 corresponding to the matching region 308 from color information values of all pixels included in the region of the second image 304 corresponding to the matching region 308. For example, if color information of a pixel P1 with coordinates (i, j) of the first image 302 corresponding to the matching region 308 is (r1, g1, b1) and color information of a pixel P2 with coordinates (i, j) of the second image 304 corresponding to the matching region 308 is (r2, g2, b2), a color information deviation may be (r2-r1, g2-g1, b2-b1). The coordinates (i, j) are determined by a coordinate system limited to the matching region 308, and it is assumed that a left upper end of the matching region 308 is (0, 0) and a right lower end of the matching region 308 is (d, m).

[98] If it is assumed that a number of a sub-region in which the pixels P1 and P2 with the coordinates (i, j) are included is "a", a result obtained by inputting "a" to the weight function w(x), that is, a weight value w(a), is multiplied by the color information deviation. As a result, a weighted color information deviation w(a)(r2-r1, g2-g1, b2-b1) is obtained. A final matching region 318 may be obtained by adding the weighted color information deviation w(a)(r2-r1, g2-g1, b2-b1) to the color information (r1, g1, b1) of the pixel P1 with the coordinates (i, j) of the first image 302. Final color information of a pixel with the coordinates (i, j) of the final matching region 318 may be (r1+w(a)(r2-r1), g1+w(a)(g2-g1), b1+w(a)(b2-b1)).

[99] The above process is performed on all pixels included in the matching region 308. In order to reduce the amount of computation, the above process may be performed in units of sub-regions having the same weight value that is a result of the weight function w(x).

[100] In detail, if the number L of the sub-regions 314 is equal to the number "d" of the lines of the pixels in the combination direction, a weight value is allocated to each vertical line composed of pixels which are vertically arranged. It is assumed that the weight function w(x) is a linear function, such as w(x) = x/(d+1).

[101] Pixels included in an i-th line of the final matching region 318 have final color information (r1+i(r2-r1)/(d+1), g1+i(g2-g1)/(d+1), b1+i(b2-b1)/(d+1)) according to the above formula.
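For this linear case, with one sub-region per vertical line, the per-line final color reduces to a one-line computation. An illustrative sketch (the function name is assumed):

```python
def final_color_line(i, p1, p2, d):
    """Final color of a pixel in the i-th line of the final matching
    region when L == d and w(x) = x/(d+1), per paragraph [101].
    p1, p2: the (r, g, b) colors of the corresponding pixels in the
    first and second images; d: the matching region's width in lines."""
    return tuple(c1 + i * (c2 - c1) / (d + 1) for c1, c2 in zip(p1, p2))
```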

[102] Optionally, the matching region blending unit 131 may obtain final color information of the final matching region 318 by weight averaging color information of all pixels included in the region of the first image 302 corresponding to the matching region 308 and color information of all pixels included in the region of the second image 304 corresponding to the matching region 308 with the weight values of sub-regions in which the pixels are included.

[103] For example, color information of a pixel with coordinates (i, j) of the matching region 308 may be calculated by weight averaging color information of a pixel with the coordinates (i, j) of the first image 302 and color information of a pixel with the coordinates (i, j) of the second image 304 with 1-w(a) and w(a), respectively.

[104] The amount of computation may be reduced since weight averaging is used and a difference between color information of pixels included in the first image 302 and the second image 304 does not need to be obtained.

[105] The above process may be performed in units of sub-regions having the same result of the weight function w(x) in order to reduce the amount of computation.

[106] Although an RGB color coordinate system has been used to describe the present invention, the present invention is not limited thereto.

[107] Referring to FIG. 8 illustrating a final panoramic image generated by combining the first image 302 and the second image 304, the panorama combining unit 132 of FIG. 2 may generate a panoramic image 330 by combining the final matching region 318, which is obtained by the matching region blending unit 131, with the non-matching region 306 of the first image 302 other than the matching region 308 and with the non- matching region 310 of the second image 304 other than the matching region 308.

[108] Referring to FIG. 4, as a camera is aimed at the center of a subject, portions near the subject, that is, a right portion of the first image 302 and a left portion of the second image 304, appear to be larger than a left portion of the first image 302 and a right portion of the second image 304. Accordingly, the roof of a house in the first image 302 is inclined at a positive angle, and the roof of a house in the second image 304 is inclined at a negative angle. This is due to distortion that occurs when a three-dimensional (3D) space is displayed as a two-dimensional (2D) space, that is, a phenomenon where the same length appears to decrease as a distance from a lens of an image photographing apparatus increases. Such distortion may result in X-shaped overlapping or make a connected line appear to be a disconnected line, thereby leading to an unnatural panoramic image.

[109] However, referring to FIG. 8, a portion "A" of the final matching region 318 is slightly curved. Since a weight value is used to combine the first image 302 and the second image 304, the portion "A" does not appear to be X-shaped but appears to be curved. Even when a wide-angle lens having a wide viewing angle is used, since a straight line at a peripheral portion of an image appears to be curved, the distortion may be considered to be moderate.

[110] FIGS. 13 and 14 illustrate panoramic images before and after blending.

[111] Referring to the panoramic image before blending in FIG. 13, the first image 302 and the second image 304 are separated by a clear line and the first image 302 and the second image 304 are mismatched. However, after blending, as shown in FIG. 14, the first image 302 and the second image 304 are naturally combined.

[112] The panorama generating unit 130b of FIG. 3 will now be explained with reference to FIGS. 3, 4, 5, and 8.

[113] The image line loading unit 135 sequentially loads all pixels included in the first image 302 and the second image 304 in units of lines.

[114] The lines are perpendicular to the combination direction in which the first image 302 and the second image 304 are combined and may be loaded sequentially in the combination direction. Referring to FIG. 5, a leftmost vertical line composed of leftmost "m" pixels of the first image 302 is first loaded, and then an adjacent right vertical line is loaded. In the same manner, lines are loaded in the order of the non-matching region 306 of the first image 302, the matching region 308 of the first image 302, the matching region 308 of the second image 304, and the non-matching region 310 of the second image 304.

[115] The matching region determining unit 136 determines whether a loaded line loaded by the image line loading unit 135 is included in the matching region 308. Information about the matching region 308 may be acquired by the matching region acquiring unit 120 illustrated in FIG. 1.

[116] If it is determined by the matching region determining unit 136 that the loaded line is included in the matching region 308, the panorama sequential generation unit 137 may load a matching line of the second image 304 matched to the loaded line.

[117] Next, a final matching line may be obtained by using a weight value of a sub-region 314 in which the loaded line is included. An arithmetic operation using a weight value has been described above, and thus will not be explained.

[118] A panoramic image may be generated by inserting the final matching line into a position corresponding to the loaded line.

[119] If it is determined by the matching region determining unit 136 that the loaded line is not included in the matching region 308, a panoramic image may be generated by inserting the loaded line into the position corresponding to the loaded line.

[120] After loading all pixels included in the first image 302, the image line loading unit 135 loads pixels in the non-matching region 310. Hence, the panorama generating unit 130b may generate a panoramic image by performing the above process on all pixels.

[121] The panorama generating unit 130b of FIG. 3 may sequentially generate the panoramic image 330 at the same time as the final matching region 318 is obtained, and may not require a great amount of computation or high-capacity memory resources. The panorama generating unit 130b of FIG. 3 may be particularly suitable for displaying the panoramic image 330 on a display device in real time or transmitting data to another device. This is because the matching region 308 may be blended and displayed on a screen at the same time.
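The line-sequential generation described in paragraphs [113] through [121] can be sketched as below. All names are illustrative, not from the patent; `blend_line` stands in for the weight-based combination of a loaded line with its matching line:

```python
def generate_panorama_lines(first_lines, second_lines, match_start, blend_line):
    """Sequentially build a panorama from lines of two images.

    first_lines, second_lines: lists of pixel lines in combination order.
    match_start: index in first_lines where the matching region begins.
    blend_line: callable(line1, line2, sub_region, d) -> blended line.
    """
    panorama = []
    d = len(first_lines) - match_start           # width of the matching region
    for idx, line in enumerate(first_lines):
        if idx < match_start:
            panorama.append(line)                # non-matching region of image 1
        else:
            a = idx - match_start                # sub-region number of this line
            panorama.append(blend_line(line, second_lines[a], a, d))
    panorama.extend(second_lines[d:])            # non-matching region of image 2
    return panorama
```

Because each output line is final as soon as it is computed, the lines can be sent to a display or another device immediately, which matches the real-time suitability noted above.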

[122] The image correcting unit 140 illustrated in FIG. 1 will now be explained.

[123] Referring to FIG. 1, the image correcting unit 140 may be disposed between the matching region acquiring unit 120 and the panorama generating unit 130 and correct an image before generating a panoramic image.

[124] Referring to FIGS. 4 through 8, the image correcting unit 140 selects a predetermined number of pixels having matching positions in the regions of the first image 302 and the second image 304 corresponding to the matching region 308.

[125] The predetermined number of pixels may be previously determined pixels included in the matching region 308, for example, pixels of a central region. Alternatively, the predetermined number of pixels may be selected randomly. Alternatively, the predetermined number of pixels may be all pixels included in the matching region 308.

[126] Desirably, the pixels may have the same color information in the first image 302 and the second image 304. However, since the first image 302 and the second image 304 are photographed at different times, color information may not be exactly the same. Accordingly, white balance and exposure values may be different between the first image 302 and the second image 304 even though the first image 302 and the second image 304 are obtained by photographing the same subject. The difference in the white balance and exposure values leads to a sense of difference between the first image 302 and the second image 304.

[127] The image correcting unit 140 calculates a first color average and a second color average by averaging color information of the selected predetermined number of pixels in the first image 302 and the second image 304.

[128] For example, if an RGB color coordinate system is used, the first color average of the selected predetermined number of pixels may be (R1, G1, B1) and the second color average may be (R2, G2, B2).

[129] For example, if an exposure value of the second image 304 is greater than an exposure value of the first image 302, the second image 304 may be brighter than the first image 302 and pixels of the second image 304 may have higher color information, for example, RGB. That is, color information R2 + G2 + B2 of the selected predetermined number of pixels in the second image 304 may be greater than color information R1 + G1 + B1 in the first image 302.

[130] If there is much blue in the second image 304 and thus the apparatus 100 moves white balance to blue, pixels of the second image 304 may have lower blue information. That is, blue information B2 of the selected predetermined number of pixels in the second image 304 may be lower than blue information B1 in the first image 302.

[131] The image correcting unit 140 may calculate a color ratio or a color deviation which is a difference between the first color average and the second color average.

[132] In FIGS. 4 through 8, a color deviation may be (R1-R2, G1-G2, B1-B2) and a color ratio may be (R1/R2, G1/G2, B1/B2), or vice versa.

[133] The image correcting unit 140 may apply the color deviation or the color ratio to color information of all pixels included in the first image 302 or the second image 304.

[134] If a color deviation is (R1-R2, G1-G2, B1-B2), white balance and exposure values of the second image 304 may be corrected to be equal to white balance and exposure values of the first image 302 by adding the color deviation to color information of all pixels included in the second image 304. On the contrary, white balance and exposure values of the first image 302 may be corrected to be equal to white balance and exposure values of the second image 304 by subtracting the color deviation from color information of all pixels included in the first image 302.

[135] Likewise, if a color ratio is (R1/R2, G1/G2, B1/B2), white balance and exposure values of the first image 302 and the second image 304 may be equal by multiplying or dividing color information of all pixels included in the first image 302 or the second image 304 by the color ratio.
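The correction in paragraphs [131] through [135] can be sketched as below; the names and the list-of-tuples pixel representation are illustrative, not from the patent:

```python
def correct_second_image(pixels2, avg1, avg2, use_ratio=True):
    """Correct white balance/exposure of the second image toward the first.

    pixels2: list of (r, g, b) tuples of the second image.
    avg1, avg2: per-channel color averages of the matched sample pixels
    in the first and second images, respectively.
    """
    if use_ratio:
        # multiply every pixel by the per-channel color ratio (R1/R2, ...)
        ratio = tuple(a1 / a2 for a1, a2 in zip(avg1, avg2))
        return [tuple(c * r for c, r in zip(p, ratio)) for p in pixels2]
    # otherwise add the per-channel color deviation (R1-R2, ...)
    deviation = tuple(a1 - a2 for a1, a2 in zip(avg1, avg2))
    return [tuple(c + dv for c, dv in zip(p, deviation)) for p in pixels2]
```

Correcting the first image instead would use the reciprocal ratio (or subtract the deviation), mirroring the symmetry described in paragraph [134].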

[136] After the white balance and exposure values are equal between the first image 302 and the second image 304, if a panoramic image is generated from the corrected first image 302 or the corrected second image 304, the panoramic image may be more natural.

[137] Although the RGB color coordinate system has been used to describe the present invention, it is to be understood by one of ordinary skill in the art that other color coordinate systems may be used.

[138] A method of generating a panoramic image will now be explained.

[139] FIG. 9 is a flowchart illustrating a method of generating a panoramic image, according to an embodiment of the present invention.

[140] Referring to FIGS. 1 and 9, in operation S10, the image acquiring unit 110 acquires a plurality of images including a first image and a second image having an overlapping region therebetween, which has been explained above in detail and thus will not be explained again here.

[141] In operation S20, the matching region acquiring unit 120 acquires a matching region that is the overlapping region between the first image and the second image. The matching region includes a plurality of sub-regions that are obtained by dividing the matching region in a direction perpendicular to a combination direction in which the first image and the second image are combined, which has been explained above in detail and thus will not be explained again here.

[142] In operation S40, optionally, the image correcting unit 140 may correct the first image or the second image.

[143] A predetermined number of pixels having the same positions in regions of the first image and the second image corresponding to the matching region may be selected, averages of color information of the pixels may be respectively calculated, and a ratio of the averages may be applied to all pixels included in the first image or the second image. The application may be performed by multiplying or dividing the first image or the second image by the ratio, which has been explained above in detail and thus will not be explained again here.

[144] In operation S30, the panorama generating unit 130 may generate a panoramic image by combining the first image and the second image, which has been explained above in detail and thus will not be explained again here.

[145] FIG. 10 is a flowchart illustrating the operation S30 of the method of FIG. 9, according to an embodiment of the present invention.

[146] Referring to FIGS. 2 and 10, a panorama generating operation S30a may include a matching region blending operation S31 and a panorama combining operation S32.

[147] In operation S31, the matching region blending unit 131 may obtain a blending matching region by blending regions of the first image and the second image corresponding to the matching region, which has been explained above in detail and thus will not be explained again here.

[148] In operation S32, the panorama combining unit 132 may generate a panoramic image by combining the blending matching region with non-matching regions of the first image and the second image, which has been explained above in detail and thus will not be explained again here.

[149] FIG. 11 is a flowchart illustrating the operation S30 of the method of FIG. 9, according to another embodiment of the present invention.

[150] Referring to FIGS. 3 and 11, a panorama generating operation S30b may include an image loading operation, a matching region determining operation, and a panorama sequential generation operation.

[151] In operation S35, the image line loading unit 135 may sequentially load all pixels included in the first image and the second image in units of lines.

[152] In operation S36, the matching region determining unit 136 may determine whether a loaded line is included in the matching region.

[153] If it is determined in operation S36 that the loaded line is included in the matching region, the method proceeds to operation S37. In operation S37, the panorama sequential generation unit 137 may load a matching line of the second image matched to the loaded line. In operation S38, a final matching line may be obtained by using a weight value of the loaded line. In operation S39, a panoramic image may be generated by inserting the final matching line into a position corresponding to the loaded line.

[154] If it is determined in operation S36 that the loaded line is not included in the matching region, the method proceeds to operation S39. In operation S39, the panorama sequential generation unit 137 may generate a panoramic image by inserting the loaded line into the position corresponding to the loaded line.

[155] In the panorama generating operation S30b, after all pixels included in the first image are loaded, all pixels included in a non-matching region of the second image are loaded. The above process may be performed on all pixels to generate a panoramic image.

[156] The panorama generating operation S30b has been explained above in detail with reference to FIG. 3, and a detailed explanation thereof will not be repeated here.

[157] FIG. 12 is a flowchart illustrating operation S40 optionally included in the method of FIG. 9.

[158] Referring to FIG. 12, the operation S40 may include an operation S42 in which a predetermined number of pixels having matching positions are selected in the first image and the second image.

[159] In operation S44, a first average, which is an average of color information of the selected pixels in the first image, and a second average, which is an average of color information of the selected pixels in the second image, are calculated.

[160] In operation S46, a ratio or a deviation between the first and second averages is calculated. In operation S48, the ratio or the deviation between the first and second averages is applied to the first image or the second image.

[161] The image correcting operation has been explained above in detail and a detailed explanation thereof will not be repeated here.

[162] The present invention may be embodied as computer-readable codes on a computer-readable recording medium. The computer-readable recording medium is any data storage device that may store data which may be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memories (ROMs), random-access memories (RAMs), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium may also be distributed over computer systems connected to a network so that the computer-readable code is stored and executed in a distributed fashion. Functional programs, codes, and code segments for embodying the present invention may be easily derived by programmers in the art to which the present invention belongs.

[163] While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Industrial Applicability

[164] The present invention relates to an apparatus and method of generating a panoramic image and a computer-readable recording medium having embodied thereon a program for executing the method, and more particularly, to an apparatus and method of generating a panoramic image having a naturally blended matching region, and a computer-readable recording medium having embodied thereon a program for executing the method.

[165] The apparatus and method according to the present invention may generate a natural panoramic image by blending regions of original images corresponding to a matching region by using a weight value that is defined based on a distance of each of pixels. Also, the apparatus and method according to the present invention may generate a panoramic image further suitable for the hardware environment of a mobile terminal since a natural panoramic image is generated even with simple computation and limited resources.

[166]