Title:
RECONSTRUCTING THREE DIMENSIONAL OIL PAINTINGS
Document Type and Number:
WIPO Patent Application WO/2008/105606
Kind Code:
A1
Abstract:
Techniques for generating three dimensional image data with brushstroke effects from a two dimensional image are disclosed. One or more three dimensional brushstroke patterns are generated from at least one brushstroke. A two dimensional image is partitioned into one or more color regions. For each color region, each three dimensional brushstroke pattern is transformed to obtain a brushstroke effect. Each transformed three dimensional brushstroke pattern is then applied to each color region to generate three dimensional image data having the brushstroke effect.

Inventors:
YUN IL DONG (KR)
Application Number:
PCT/KR2008/001078
Publication Date:
September 04, 2008
Filing Date:
February 25, 2008
Assignee:
RES AND INDUSTRY UNIVERSITY CO (KR)
YUN IL DONG (KR)
International Classes:
G06T17/00; G06T15/02; G06T15/04
Domestic Patent References:
WO2002082378A12002-10-17
Foreign References:
KR970049862A1997-07-29
KR20020056594A2002-07-10
US20060082579A12006-04-20
Attorney, Agent or Firm:
CHANG, Soo Kil (Seyang B/D 223 Naeja-dong, Jongno-gu, Seoul 110-720, KR)
Claims:

CLAIMS

1. A method for generating three dimensional image data with brushstroke effects from a two dimensional image, comprising: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; and applying each transformed three dimensional brushstroke pattern to each color region to generate a three dimensional image data having the brushstroke effect.

2. The method of Claim 1, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

3. The method of Claim 2, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: applying the brushstroke image to the mesh data.

4. The method of Claim 2, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

5. The method of Claim 2, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

6. The method of Claim 5, wherein the luminance mapping comprises applying a brightness of the brushstroke to the mesh data.

7. The method of Claim 1, wherein the operations of transforming each three dimensional brushstroke pattern and applying each transformed three dimensional brushstroke pattern are repeated a predetermined number of times.

8. The method of Claim 1, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

9. The method of Claim 1, wherein the transformation of each three dimensional brushstroke pattern is a perspective transformation.

10. A method for reconstructing three dimensional image data with brushstroke effects from a two dimensional image, comprising: segmenting a two dimensional image into one or more color regions; generating three dimensional brushstroke pattern data of at least one sample brushstroke; for each color region, transforming the three dimensional brushstroke pattern data to generate a deformed 3-dimensional brushstroke pattern data; and applying the transformed three dimensional brushstroke pattern data to each color region to generate a three dimensional image data.

11. The method of Claim 10, wherein the operation of applying each transformed three dimensional brushstroke pattern data for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

12. The method of Claim 11, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: applying the brushstroke image to the mesh data.

13. The method of Claim 11, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

14. The method of Claim 11, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

15. The method of Claim 14, wherein the luminance mapping comprises applying a brightness of the brushstroke to the mesh data.

16. The method of Claim 10, wherein the operations of transforming each three dimensional brushstroke pattern data and applying each transformed three dimensional brushstroke pattern data are repeated a predetermined number of times.

17. The method of Claim 10, wherein said transformation of the three dimensional brushstroke pattern data is a perspective transformation.

18. A method for generating three dimensional image data with brushstroke effects from a two dimensional image, comprising: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; and for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; generating a mesh data for each color region; generating a brushstroke image to be mapped to the mesh data; and applying the brushstroke image to the mesh data to generate a three dimensional image data having the brushstroke effect.

19. The method of Claim 18, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

20. The method of Claim 18, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

21. The method of Claim 20, wherein the luminance mapping is performed by applying a brightness of the brushstroke to the mesh data.

22. The method of Claim 18, wherein the operation of transforming each three dimensional brushstroke pattern is repeated a predetermined number of times.

23. The method of Claim 18, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

24. The method of Claim 18, wherein each three dimensional brushstroke pattern is transformed by a perspective transformation.

25. A computer readable medium storing computer executable code that performs a method comprising the steps of: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; and applying each transformed three dimensional brushstroke pattern to each color region to generate a three dimensional image data having the brushstroke effect.

26. The computer readable medium of Claim 25, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

27. The computer readable medium of Claim 26, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: applying the brushstroke image to the mesh data.

28. The computer readable medium of Claim 26, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

29. The computer readable medium of Claim 26, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

30. The computer readable medium of Claim 29, wherein the luminance mapping comprises applying a brightness of the brushstroke to the mesh data.

31. The computer readable medium of Claim 25, wherein the operations of transforming each three dimensional brushstroke pattern and applying each transformed three dimensional brushstroke pattern are repeated a predetermined number of times.

32. The computer readable medium of Claim 25, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

33. The computer readable medium of Claim 25, wherein the transformation of each three dimensional brushstroke pattern is a perspective transformation.

34. A computer readable medium storing computer executable code that performs a method comprising the steps of: segmenting a two dimensional image into one or more color regions; generating three dimensional brushstroke pattern data of at least one sample brushstroke; for each color region, transforming the three dimensional brushstroke pattern data to generate a deformed 3-dimensional brushstroke pattern data; and applying the transformed three dimensional brushstroke pattern data to each color region to generate a three dimensional image data.

35. The computer readable medium of Claim 34, wherein the operation of applying each transformed three dimensional brushstroke pattern data for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

36. The computer readable medium of Claim 35, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: applying the brushstroke image to the mesh data.

37. The computer readable medium of Claim 35, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

38. The computer readable medium of Claim 35, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

39. The computer readable medium of Claim 38, wherein the luminance mapping comprises applying a brightness of the brushstroke to the mesh data.

40. The computer readable medium of Claim 34, wherein the operations of transforming each three dimensional brushstroke pattern data and applying each transformed three dimensional brushstroke pattern data are repeated a predetermined number of times.

41. The computer readable medium of Claim 34, wherein said transformation of the three dimensional brushstroke pattern data is a perspective transformation.

42. A computer readable medium storing computer executable code that performs a method comprising the steps of: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; and for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; generating a mesh data for each color region; generating a brushstroke image to be mapped to the mesh data; and applying the brushstroke image to the mesh data to generate a three dimensional image data having the brushstroke effect.

43. The computer readable medium of Claim 42, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

44. The computer readable medium of Claim 42, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

45. The computer readable medium of Claim 44, wherein the luminance mapping is performed by applying a brightness of the brushstroke to the mesh data.

46. The computer readable medium of Claim 42, wherein the operation of transforming each three dimensional brushstroke pattern is repeated a predetermined number of times.

47. The computer readable medium of Claim 42, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

48. The computer readable medium of Claim 42, wherein each three dimensional brushstroke pattern is transformed by a perspective transformation.

49. A computer program being comprised of instructions that, when executed by a computer, cause the computer to perform a method for generating three dimensional image data with brushstroke effects from a two dimensional image, the method comprising: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; and applying each transformed three dimensional brushstroke pattern to each color region to generate a three dimensional image data having the brushstroke effect.

50. The computer program of Claim 49, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

51. The computer program of Claim 50, wherein the operation of applying each transformed three dimensional brushstroke pattern for each color region further comprises: applying the brushstroke image to the mesh data.

52. The computer program of Claim 50, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

53. The computer program of Claim 50, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

54. The computer program of Claim 49, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

55. A computer program being comprised of instructions that, when executed by a computer, cause the computer to perform a method for generating three dimensional image data with brushstroke effects from a two dimensional image, the method comprising: segmenting a two dimensional image into one or more color regions; generating three dimensional brushstroke pattern data of at least one sample brushstroke; for each color region, transforming the three dimensional brushstroke pattern data to generate a deformed 3-dimensional brushstroke pattern data; and applying the transformed three dimensional brushstroke pattern data to each color region to generate a three dimensional image data.

56. The computer program of Claim 55, wherein the operation of applying each transformed three dimensional brushstroke pattern data for each color region further comprises: generating a mesh data for each color region; and generating a brushstroke image to be mapped to the mesh data.

57. The computer program of Claim 56, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

58. The computer program of Claim 56, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

59. A computer program being comprised of instructions that, when executed by a computer, cause the computer to perform a method for generating three dimensional image data with brushstroke effects from a two dimensional image, the method comprising: generating one or more three dimensional brushstroke patterns from at least one brushstroke; partitioning a two dimensional image into one or more color regions; and for each color region, transforming each three dimensional brushstroke pattern to obtain a brushstroke effect; generating a mesh data for each color region; generating a brushstroke image to be mapped to the mesh data; and applying the brushstroke image to the mesh data to generate a three dimensional image data having the brushstroke effect.

60. The computer program of Claim 59, wherein the operation of generating the mesh data for each color region further comprises a surface gradient mapping.

61. The computer program of Claim 59, wherein the operation of generating the brushstroke image to be mapped to the mesh data comprises a luminance mapping.

62. The computer program of Claim 59, wherein the operation of generating the one or more three dimensional brushstroke patterns further comprises: obtaining each three dimensional brushstroke pattern from at least one sample brushstroke image; and iteratively performing a projective transformation on each three dimensional brushstroke pattern of said at least one sample brushstroke.

Description:

RECONSTRUCTING THREE DIMENSIONAL OIL PAINTINGS

TECHNICAL FIELD The present disclosure relates to image processing and, more particularly, to reconstructing three-dimensional image data from two-dimensional images.

BACKGROUND

Oil paintings are usually considered to be two dimensional (2D) images. On closer inspection, however, oil paintings typically contain many brushstrokes, each of which differs from the others. For example, each brushstroke is characterized by a unique height and color, and creates a unique texture effect according to the thickness of the oil color in that brushstroke. Therefore, oil paintings can be considered three dimensional (3D) structures having various texture effects.

The brushstrokes differ in height, a difference caused by variations in the thickness of the oil colors. This difference can be very small. Typically, laser scanners are used to obtain high resolution 3D data of a 3D structure having texture effects. However, even high resolution laser scanners may not provide sufficient resolution to adequately represent the 3D structures of oil paintings, whose texture effects can be very minute.

With regard to image processing, 3D oil painting reconstruction is related to artistic filters, in which various painting styles including oil, watercolor, and line art renderings are synthesized based on either digitally filtered or scanned real-world examples. Work has been done in creating artistic styles by computer, often referred to as non-photorealistic rendering. Most of these works have been related to a specific rendering style. In various conventional image analogy techniques, a user presents two source images with the same content which are aligned, but with two different styles. Given a new input image in one of the above styles, the mapping from an input image to an aligned image of the same scene in a different style is estimated. The aligned image pair with the same scene but in a different image style, however, is often unavailable.

In another conventional technique, for a given input image, only one source image of an unrelated scene that contains the appropriate style is required. In this case, the unknown mapping between the images is inferred by a Bayesian technique based on belief propagation and expectation maximization. These conventional techniques, however, have typically been limited to 2-dimensional image construction in which only limited types of texture effects were reconstructed.

BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a schematic diagram of an example system implementing a method for reconstructing 3-dimensional data having texture effects, in accordance with one embodiment.

FIG. 2 shows a flow chart of a photometric stereo method using a hybrid reflection model, in accordance with one embodiment. FIG. 3 illustrates an image construction model used in the photometric stereo method, in accordance with one embodiment.

FIGS. 4-6 show an example of a sample brushstroke pattern data obtained by the photometric stereo method, in accordance with one embodiment.

FIGS. 7-9 show another example of a sample brushstroke pattern data obtained by the photometric stereo method, in accordance with one embodiment.

FIGS. 10 and 11 are flow diagrams illustrating the processing for reconstructing 3-dimensional data having texture effects as well as a 2-dimensional image, in accordance with one embodiment.

FIGS. 12 and 13 illustrate exemplary perspective transformations, in accordance with one embodiment.

FIGS. 14 and 18 show examples of 2-dimensional input images to which embodiments of the described techniques may be applied.

FIGS. 15 and 19 show the results of color segmentation applied to Figures 14 and 18 respectively, in accordance with one embodiment. FIGS. 16 and 20 show the 3-D reconstruction results of Figures 14 and 18 respectively, in accordance with another embodiment.

FIGS. 17 and 21 show rendering results having different light conditions from Figures 16 and 20, respectively, in accordance with one embodiment.

SUMMARY

The present disclosure provides techniques for generating three dimensional image data with brushstroke effects from a two dimensional image. Brushstroke pattern data is obtained from sample brushstrokes and the pattern data is used to form three dimensional mesh data. The brushstroke pattern data is then applied to the three dimensional mesh data. Accordingly, any two dimensional image can be effectively and efficiently transformed into a three dimensional image having brushstroke effects.

In one embodiment, a method for generating three dimensional image data with brushstroke effects from a two dimensional image includes generating one or more three dimensional brushstroke patterns from at least one brushstroke. A two dimensional image is partitioned into one or more color regions. For each color region, each three dimensional brushstroke pattern is transformed to obtain a brushstroke effect. Each transformed three dimensional brushstroke pattern is then applied to each color region to generate a three dimensional image data having the brushstroke effect. In another embodiment, a method for reconstructing three dimensional image data with brushstroke effects from a two dimensional image includes: (i) segmenting a two dimensional image into one or more color regions; (ii) generating three dimensional brushstroke pattern data of at least one sample brushstroke; (iii) for each color region, transforming the three dimensional brushstroke pattern data to generate a deformed 3-dimensional brushstroke pattern data; and (iv) applying the transformed three dimensional brushstroke pattern data to each color region to generate a three dimensional image data.

In still another embodiment, a method for generating three dimensional image data with brushstroke effects from a two dimensional image is provided. In this method, one or more three dimensional brushstroke patterns are generated from at least one brushstroke. A two dimensional image is partitioned into one or more color regions. Then, for each color region, each three dimensional brushstroke pattern is transformed to obtain a brushstroke effect, mesh data is generated, and a brushstroke image to be mapped to the mesh data is generated. The brushstroke image is then applied to the mesh data to generate three dimensional image data having the brushstroke effect.

In yet another embodiment, a computer readable medium is provided that stores instructions causing a computer to execute the method for generating three dimensional image data with brushstroke effects from a two dimensional image.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. It will be apparent, however, that the described embodiments may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present disclosure.

FIG. 1 illustrates a schematic diagram of an example imaging system 100 in which embodiments of the present disclosure may be implemented. Imaging system 100 includes a camera 104, a light source 106 and a computer system 110. Computer system 110 includes a controller 112, an I/O subsystem 114 (e.g., keyboard, mouse, trackball, etc.), a storage device 116 (e.g., mass storage device, hard disk drive, etc.), a CPU 118 and a memory 120 (e.g., random access memory), which are connected to each other via a bus 122. Camera 104 and light source 106 are operatively coupled to controller 112 of computer 110 for communicating control and data signals. In this configuration, controller 112 controls the operation of camera 104 and the position of light source 106. Light source 106 provides light in different directions under the control of controller 112 to form reflected images 102 of real 3D brushstrokes in accordance with a photometric stereo method using a hybrid reflection model. Camera 104 captures images 102 such as 3D brushstrokes and 2D paintings under the control of controller 112. In an alternative embodiment, any apparatus such as a scanner that is capable of obtaining 2D or 3D data from real objects or images may be used instead of camera 104. Storage device 116 is a mass storage device such as an optical disk, a hard disk drive, etc., and stores computer instructions implementing one or more methods for reconstructing 3D data with brushstroke effects. The instructions may be loaded into memory 120 (e.g., RAM) and provided to CPU 118, which may execute the computer instructions for reconstructing 3D data with brushstroke effects.

According to one embodiment, N images for each brushstroke among several sample brushstrokes are obtained by using light source 106 and camera 104 under the control of controller 112. The N images are used to obtain brushstroke pattern data for the sample brushstrokes by using a photometric stereo method based on a hybrid reflection model, as described in FIG. 2 below. Once the sample brushstroke pattern data has been obtained, an image of a 2D painting to be 3-dimensionally reconstructed is captured via camera 104. After the 2D image is obtained, color segmentation is applied. For each color region in the 2D image, a transformation of the 3D brushstroke patterns obtained from the images is performed to obtain various 3D brushstroke patterns. In this process, each transformed 3D brushstroke is iteratively applied to each color region to generate a 3D image with brushstroke effects. It should be appreciated, however, that the techniques disclosed are not limited to any specific 3-D reconstruction method for obtaining 3-D data of brushstroke patterns.

FIGS. 2-9 illustrate methods for obtaining brushstroke pattern data from real brushstrokes by employing a photometric stereo method using a hybrid reflection model to obtain brushstroke pattern data from N images on each of several sample brushstrokes. Brushstrokes are real 3D objects having distinct shape, height and texture effects. Considering that real oil paintings include a large number of different brushstrokes, obtaining as much brushstroke pattern data as possible is helpful to reconstruct 3-D data with texture effects. However, for the sake of efficiency in the image processing, the perspective transformation is iteratively performed to generate various brushstroke pattern data from the pattern data of a few sample brushstrokes. The number of sample brushstrokes may be determined by various factors including, for example, the size of the input image, the sample brushstroke and the segment formed by the color segmentation. For example, even one or two sample brushstrokes may provide sufficient oil painting texture effects through proper perspective transformation. The number of sample brushstrokes may also be selected to represent a painter's brushstroke style. For a more realistic 3-D reconstruction, real brushstrokes of known painters may be selected as sample brushstrokes.

Before explaining the photometric stereo method illustrated in FIG. 2, an image construction model and error estimation process according to one embodiment will be discussed below.

In general, reflected light includes both diffuse reflection components and specular reflection components. In the hybrid reflection model in accordance with one embodiment, the diffuse reflection components may be approximated by using the Lambertian model, and the specular reflection components may be approximated by using the Torrance-Sparrow model. FIG. 3 shows an image construction model used for defining the hybrid reflection model in the photometric stereo method, in accordance with one embodiment. All the vectors illustrated in FIG. 3 are unit vectors, where n is a normal vector of the surface of the brushstroke, v is a directional vector from the surface of the brushstroke to camera 104, and s is a directional vector from the surface of the brushstroke to light source 106. The vector h is the bisecting direction between s and v, which governs the specular reflection, and is defined as follows:

[Equation 1]

h = (s + v) / ||s + v||

Under the image construction model of FIG. 3, the generalized radiance L obtained by camera 104 is composed of a diffuse reflection component, L_D, and a specular reflection component, L_S, as follows:

[Equation 2]

L = L_D + L_S = ρ_D (s · n) + ρ_S exp(−k θ²) / (v · n)

where ρ_D is a diffuse reflection albedo, ρ_S is a specular reflection albedo, k is a texture parameter of the surface, and θ = cos⁻¹(h · n) is the angle (rad) between the vectors n and h. In the hybrid model, the variables n, ρ_D, ρ_S and k are estimated to determine the diffuse reflection surface value and the specular reflection surface value by using error indexes from N different images of one sample brushstroke.
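By way of illustration only, the hybrid reflection model of Equations 1 and 2 may be evaluated for a single surface point as in the following Python sketch; the function name and the clamping of the dot products are assumptions introduced here for numerical safety and are not part of the disclosed method.

import numpy as np

def hybrid_radiance(n, s, v, rho_d, rho_s, k):
    """Evaluate the hybrid (Lambertian + simplified Torrance-Sparrow)
    reflection model of Equation 2 for unit vectors n, s, v."""
    h = (s + v) / np.linalg.norm(s + v)                 # bisecting vector, Equation 1
    theta = np.arccos(np.clip(np.dot(h, n), -1.0, 1.0))
    L_d = rho_d * max(np.dot(s, n), 0.0)                # diffuse component
    L_s = rho_s * np.exp(-k * theta ** 2) / max(np.dot(v, n), 1e-6)  # specular component
    return L_d + L_s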

For error estimation, an error index is defined in terms of the radiance of the N images, the hybrid reflection model, and the mathematical stability of the estimated values, as follows:

[Equation 3]

E = E_D + E_S

where I_k is the k-th input image, I_k^D and I_k^S are the diffuse reflection image and the specular reflection image of the k-th input image, respectively, Î_k, Î_k^D and Î_k^S are the corresponding reconstructed images, and E_D and E_S are the diffuse reflection error and the specular reflection error, respectively. The weighting values w_D and w_S in the error index equation are defined as follows:

[Equation 4]

where W_MD and W_MS are weighting factors reflecting estimation and quantization errors, and are constant values if it is assumed that the quantization effect is uniformly applied to whole regions, and W_SD is a weighting factor defined based on the stability of the estimation of the image construction variables and is obtained from the phase of the estimated image construction variables on a PQ map.

There are two methods for obtaining the PQ map for the Lambertian surface from three input images: one method obtains the PQ map on the assumption that the albedo of the surface is known, and the other method obtains the PQ map and the albedo without knowing the albedo of the surface. In the techniques described herein, the latter method is applied. However, the described techniques may also be implemented using the former method. Generally, the radiance L_i (i = 1, 2, 3) on the assumption of a Lambertian surface is given by:

[Equation 5]

L_i = (E_i / π) ρ (s_i · n),   i = 1, 2, 3,

where E_i is the radiance of the i-th light source, s_i is a unit directional vector of the light source, and n is a unit normal vector of the surface. Equation 5 can be expressed in vector form:

[Equation 6]

L = S n,

where L = (L_1, L_2, L_3)^T and the i-th row of the 3x3 matrix S is (E_i ρ / π) s_i^T. If E_1 = E_2 = E_3 = E, Equation 6 may be expressed as follows:

[Equation 7]

L = (E ρ / π) S n,

where the rows of S are now the unit vectors s_i^T. From Equation 7, the scaled normal vector is given as follows:

[Equation 8]

ñ = S⁻¹ L = (E ρ / π) n = (E ρ / π) (−p, −q, 1)^T / √(p² + q² + 1)

From Equation 8, the surface gradients p and q may be obtained as follows:

[Equation 9]

p = −ñ_x / ñ_z,   q = −ñ_y / ñ_z
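A minimal Python sketch of the per-pixel Lambertian solve of Equations 6-9 is shown below, assuming the three light-source radiances are equal and folded into the recovered albedo; the helper name is hypothetical.

import numpy as np

def lambertian_photometric_stereo(L, S):
    """Recover the unit surface normal, albedo, and surface gradients (p, q)
    for one pixel from three radiance values L = (L1, L2, L3) taken under
    light directions given as the rows of the 3x3 matrix S (Equations 6-9)."""
    n_tilde = np.linalg.solve(S, L)      # scaled normal, Equation 8
    rho = np.linalg.norm(n_tilde)        # albedo, up to the constant E / pi
    n = n_tilde / rho                    # unit surface normal
    p = -n[0] / n[2]                     # surface gradient in x, Equation 9
    q = -n[1] / n[2]                     # surface gradient in y, Equation 9
    return n, rho, p, q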

Assuming that errors ε_1, ε_2, ε_3 are added to the measured radiances, from Equation 7 the following equation may be obtained:

[Equation 10]

L + ε = (E ρ / π) S n,   where ε = (ε_1, ε_2, ε_3)^T.

From Equation 10, the error vector e in the estimated scaled normal is given by:

[Equation 11]

e = S⁻¹ ε

where the magnitude of the error vector e is given as follows:

[Equation 12]

||e|| = ||S⁻¹ ε|| = ||adj(S) ε|| / |det S|

If the condition value (δ_cond) is defined as the determinant of the directional vector matrix S, the condition value is given as follows:

[Equation 13]

δ_cond = |det S| = |s_1 · (s_2 × s_3)|

If the condition value (δ_cond) is small, the positions of the three light sources are nearly linearly dependent and a correct solution cannot be obtained, because the magnitude of the error vector becomes large in Equation 12. Thus, any three images with different light sources whose condition value (δ_cond) is smaller than a predetermined value are referred to as an "ill-conditioned light source pair" and are excluded from the estimation of the image construction variables.
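The condition value of Equation 13 can be used to screen light-source triples before estimation, as in the following sketch; the threshold value is an assumed tuning parameter, not one specified in the disclosure, and the triples returned correspond to what the disclosure calls well-conditioned light source pairs.

import numpy as np
from itertools import combinations

def well_conditioned_triples(light_dirs, threshold=0.1):
    """Return index triples of unit light directions whose condition value
    (Equation 13) exceeds a threshold; ill-conditioned triples are dropped."""
    triples = []
    for i, j, k in combinations(range(len(light_dirs)), 3):
        S = np.stack([light_dirs[i], light_dirs[j], light_dirs[k]])
        delta_cond = abs(np.linalg.det(S))      # |s_i . (s_j x s_k)|
        if delta_cond > threshold:
            triples.append((i, j, k))
    return triples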

If all of the light source pairs that play a role in determining the diffuse reflection components at pixel (x, y) are represented as S_p(x, y), the weighting factor W_SD is given by:

[Equation 14]

The error index E can be obtained from Equations 3, 4 and 14. By estimating the image construction variables that minimize the error index E, the reflection characteristics of the brushstroke and the image construction can be determined. However, because Equation 3 is non-linear and an optimal solution is difficult to obtain directly, the error index E is minimized step by step and the estimated image construction variables are repeatedly updated. In this process, the diffuse reflection image is obtained from the input image, and the specular reflection image is separated by subtracting the diffuse reflection image from the original image. In addition, the normal vector of the surface and the diffuse reflection albedo are estimated. In this manner, the image construction variables related to the diffuse reflection image are estimated so that the diffuse reflection error (E_D) in Equation 3 is minimized. The remaining image construction variables are estimated so that the specular reflection error (E_S) is minimized.

FIG. 2 illustrates a flow chart of a photometric stereo method 200 using the hybrid reflection model. Method 200 comprises two main operations: obtaining image construction variables of diffuse reflection that minimize a diffuse reflection error; and obtaining image construction variables of specular reflection that minimize a specular reflection error. Beginning in a start block 202, N images for a sample brushstroke taken by camera 104 are received in block 204. From the N images, all the image pairs available for estimating the image construction variables except the ill-conditioned light source pairs are selected. In block 206, the image construction variables are estimated from the selected image pairs, and the specular reflection regions are separated from the image of the sample brushstroke. Since the image construction variables in the pixels of the specular reflection regions cannot be estimated, the image construction variables in this region are determined by using interpolation from neighboring variables. In this operation, all the image pairs for the respective pixels available for the estimation of the image construction variables, except the ill-conditioned light source pairs, are selected.

In block 208, the normal vectors (n) for the respective pixels are estimated and the shadowed regions are separated based on the distribution of the normal vectors (n). Given a pixel (x, y), an average vector n_m(x, y) of the normal vectors is obtained from the image pairs for the pixel (x, y), together with a variance n_σ(x, y). If the variance n_σ(x, y) is smaller than a specific threshold, the average vector n_m(x, y) is taken to be the normal vector of the pixel surface. If the variance n_σ(x, y) is larger than the threshold, the average vector is repeatedly recalculated by excluding the vectors that are far from the average, until the variance converges. The threshold may be determined by sensor noise. Using the estimated normal vectors (n), the weighting factor (W_SD) in Equation 14 is obtained. If the weighting factor is too large, the normal vector n is calculated again for the specific pixel by excluding the component generating a large value in the weighting factor. In addition, the diffuse reflection albedo ρ_D and the normal vector n related to the diffuse reflection are estimated in block 208.
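The robust per-pixel averaging of normal estimates described for block 208 might be sketched as follows; the outlier-rejection rule used here (dropping estimates beyond one standard deviation of the mean deviation) is an assumption, since the disclosure only requires iterating until the variance converges.

import numpy as np

def robust_average_normal(normals, threshold):
    """Average per-pixel normal estimates from many image pairs, repeatedly
    discarding outliers far from the mean until the variance converges
    (block 208); 'threshold' would be derived from sensor noise."""
    normals = np.asarray(normals, dtype=float)
    while True:
        n_m = normals.mean(axis=0)
        n_m /= np.linalg.norm(n_m)
        deviations = np.linalg.norm(normals - n_m, axis=1)
        if deviations.var() < threshold or len(normals) <= 3:
            return n_m
        keep = deviations <= deviations.mean() + deviations.std()
        if keep.all():
            return n_m
        normals = normals[keep]   # drop the farthest estimates and iterate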

In decision block 210, if a minimum error in diffuse reflection has not been obtained, method 200 loops to block 206 to obtain a minimum error in diffuse reflection, for example, by using Equation 4. If, in decision block 210, a minimum error in diffuse reflection is obtained, method 200 continues at block 212. In block 212, the diffuse reflection image (I^D) is obtained by using the diffuse reflection albedo ρ_D and the normal vector n related to the diffuse reflection components obtained in block 208. In block 214, the specular reflection image (I^S) is obtained as follows:

[Equation 15]

I^S = I − I^D

As shown above in Equation 2, the radiance of the specular reflection image L_S is given by:

[Equation 16]

L_S = ρ_S exp(−k θ²) / (v · n),   θ = cos⁻¹(h · n).

Applying a logarithm to Equation 16, the following equation is obtained:

[Equation 17]

ln L_S + ln(v · n) = ln ρ_S − k θ²,   i.e.,   A = ρ_S′ − k B,   where A = ln L_S + ln(v · n), ρ_S′ = ln ρ_S, and B = θ².

In Equation 17, A and B are known values. Accordingly, in block 216, ρ_S′ and k can be obtained by using the least squares algorithm for each pixel if more than two pairs of values of A and B are given. In block 218, 3D data on the sample brushstroke is generated by synthesizing the diffuse reflection image and the specular reflection image.
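The per-pixel least-squares fit of block 216 reduces to ordinary linear regression on Equation 17, as in the following sketch; the function name and array layout are assumptions.

import numpy as np

def fit_specular_parameters(L_s, v_dot_n, theta):
    """Fit the specular parameters of Equation 17 by linear least squares.
    Given per-observation specular radiances L_s, view factors v.n, and
    angles theta (from several light directions), solve A = rho_s' - k*B."""
    A = np.log(L_s) + np.log(v_dot_n)    # left-hand side of Equation 17
    B = theta ** 2
    # Design matrix for the unknowns [rho_s', k]:  A = 1*rho_s' + (-B)*k
    M = np.column_stack([np.ones_like(B), -B])
    (rho_s_log, k), *_ = np.linalg.lstsq(M, A, rcond=None)
    return np.exp(rho_s_log), k          # specular albedo and texture parameter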

Through the above-explained operations, the 3D data of the sample brushstroke patterns is obtained. FIGS. 4-6 show an example of sample brushstroke pattern data obtained by photometric stereo method 200 explained above. FIG. 4 shows a 2D image of an actual (i.e., real) sample brushstroke. FIG. 5 shows the albedo of the diffuse reflection component of the sample brushstroke of FIG. 4. FIG. 6 shows a 2D image of the 3D reconstruction result. Similarly, FIGS. 7-9 show another example of sample brushstroke pattern data obtained by the photometric stereo method, in accordance with one embodiment. Specifically, FIG. 7 shows a 2D image of an actual (i.e., real) sample brushstroke. FIG. 8 shows the albedo of the diffuse reflection component of the sample brushstroke of FIG. 7. FIG. 9 shows a 2D image of the 3D reconstruction result.

FIG. 10 illustrates a flow diagram of a method 1000 for reconstructing 3D data having texture effects, in accordance with one embodiment. In block 1002, 3D brushstroke pattern data is generated for several sample brushstrokes. In one embodiment, the 3D brushstroke pattern data may be generated as described in detail above in conjunction with FIGS. 2-9. In general, oil paintings contain numerous brushstrokes having shapes and heights that differ from each other. Thus, in order to construct oil painting texture effects through image processing, numerous brushstroke patterns may be used. However, it is inefficient to acquire all the possible brushstroke patterns. In order to obtain the necessary brushstroke pattern data, some transformation may be performed on several of the sample brushstroke pattern data acquired by, for example, the photometric stereo technique as shown in FIGS. 4-9.

In block 1004, a 2D image to be 3-dimensionally reconstructed is captured and received from camera 104. In block 1006, a color segmentation is performed to partition the 2D image into different color regions. Because a typical brushstroke in an oil painting contains one color, a region covered by a brushstroke can be drawn with a single color. Accordingly, in one embodiment, it is assumed that brushstrokes exist inside the boundaries of color regions and that there are no brushstrokes crossing the boundary between two different color regions. However, it is noted that the boundaries of the color segments may be suitably determined by selecting appropriate color segmentation parameters. Thus, different 3D reconstruction results for the same input image may be obtained by varying the color segmentation parameters. For example, the color segmentation parameters may be selected to represent the characteristic styles of particular artists. The color segmentation (block 1006) is applied to the 2-D input image to extract the homogeneous color regions. In this operation, any conventional color segmentation technique in the image processing field may be used for dividing the input image into a plurality of regions according to the colors of the regions. An example of a suitable and commercially available product is the Edge Detection and Image Segmentation (EDISON) System, which uses mean shift based image segmentation.
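A rough stand-in for the mean shift based color segmentation of block 1006 is sketched below using OpenCV's mean shift filtering rather than the EDISON system itself; the radii and the grouping of pixels by identical filtered color are assumptions of this example.

import cv2
import numpy as np

def color_segment(image_bgr, spatial_radius=15, color_radius=30):
    """Approximate the color segmentation of block 1006: mean shift filtering
    of an 8-bit BGR image, then one label per group of identical filtered colors."""
    filtered = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)
    flat = filtered.reshape(-1, 3)
    _, labels = np.unique(flat, axis=0, return_inverse=True)
    label_map = labels.reshape(filtered.shape[:2])   # one label per color region
    return filtered, label_map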

In one embodiment, for each color region obtained in block 1006, each 3D sample brushstroke obtained in block 1002 is transformed or deformed using, for example, a random linear and/or non-linear perspective transformation. An example transformation is given by the following perspective transformation equation:

[Equation 18]

x′ = H_P x,   H_P = [ A  t ; v^T  v ]

where x is a homogeneous 3-vector indicating the position (i.e., the x, y and z position) of a point to be processed, A is a 2x2 non-singular matrix, t is a translation 2-vector, v = (v_1, v_2)^T is a variable vector adjusting the extent of the transformation, and v is a scaling factor. In order to avoid excessive transformation or deformation of the brushstroke patterns, linear enlargement of the brushstroke patterns may be limited to α or 1/α times, where α may range between, but is not limited to, 1.5 and 2.
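Applying the perspective transformation of Equation 18 to a brushstroke pattern might be sketched as follows; operating on a height map and resampling onto the input grid are both assumptions of this example.

import numpy as np
import cv2

def apply_perspective(brush_height_map, A, t, v, scale=1.0):
    """Apply the perspective transformation H_P = [[A, t], [v^T, scale]] of
    Equation 18 to a brushstroke pattern (e.g., its height map)."""
    h, w = brush_height_map.shape[:2]
    H = np.zeros((3, 3), dtype=np.float64)
    H[:2, :2] = A          # 2x2 non-singular affine part
    H[:2, 2] = t           # translation 2-vector
    H[2, :2] = v           # projective terms adjusting the deformation extent
    H[2, 2] = scale        # scaling factor
    return cv2.warpPerspective(brush_height_map.astype(np.float32), H, (w, h))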

Matrix A is an affine matrix which applies two fundamental transformations, namely rotations and non-isotropic scaling such as non-linear distortion. The affine matrix A can be decomposed as follows:

[Equation 19]

A = R(θ) R(−φ) D R(φ)

where R(θ) and R(φ) are rotations by angles θ and φ, respectively, defined as follows:

[Equation 20]

R(θ) = [ cos θ  −sin θ ; sin θ  cos θ ],   R(φ) = [ cos φ  −sin φ ; sin φ  cos φ ]

and where D is a diagonal matrix defined as follows:

[Equation 21]

D = [ λ_1  0 ; 0  λ_2 ]

where λ_1 and λ_2 are scaling factors in the rotated x and y directions, respectively.
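The decomposition of Equations 19-21 can be assembled directly, as in the following sketch; the function names are hypothetical and the angles and scaling factors would be drawn at random for each brushstroke.

import numpy as np

def rotation(a):
    """2x2 rotation matrix R(a) of Equation 20."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def affine_from_angles(theta, phi, lam1, lam2):
    """Build the affine part A = R(theta) R(-phi) D R(phi) of Equation 19,
    with D = diag(lam1, lam2) as in Equation 21."""
    D = np.diag([lam1, lam2])
    return rotation(theta) @ rotation(-phi) @ D @ rotation(phi)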

FIGS. 12 and 13 show exemplary distortions arising from the transformation by the affine matrix A. Specifically, FIG. 12 illustrates rotation by R(θ), which corresponds to rotating the sample brushstroke pattern obtained in block 1002 of method 1000 by angle θ counterclockwise. FIG. 13 illustrates deformation by R(−φ) D R(φ), which corresponds to rotating the x-axis and y-axis by angle φ and scaling the rotated image by λ_1 in the rotated x direction and by λ_2 in the rotated y direction.

As shown in FIG. 13, the transformation R(−φ) D R(φ) transforms a square into a rotated parallelogram. It should be noted that any linear and/or non-linear transformation may also be used for the perspective transformation performed in block 1008 of method 1000. Referring again to FIG. 10, in block 1010, each transformed 3D brushstroke is applied to each color region to generate a 3D image with brushstroke effects. FIG. 11 illustrates a flow diagram of a process for applying each transformed 3D brushstroke to each color region. In block 1052, a surface gradient map for each color region is generated to form mesh data (gradient mapping). In block 1054, an image to be mapped to the mesh data is generated by applying the brightness of the brushstroke (luminance mapping). In block 1056, a 3D image with the brushstroke effect is generated by applying the luminance map to the mesh data. Although FIG. 11 illustrates performing gradient mapping (block 1052) prior to luminance mapping (block 1054), luminance mapping may be performed simultaneously with or prior to gradient mapping in other embodiments.

In one embodiment, 3-D structures with brushstroke effects are reconstructed (block 1052) by using gradient mapping. The gradient map for each brushstroke pattern is obtained in photometric stereo method 200, as explained above with reference to FIGS. 2 and 3. In constructing the gradient map corresponding to the reconstructed image with the brushstroke effects, the area where the transformed brushstroke pattern is applied is replaced with the gradient map that corresponds to the transformed brushstroke pattern image, since the brushstroke in oil paintings covers the previous brushstroke in that position. A final gradient map is obtained after applying all the transformed brushstroke patterns. In one embodiment, in order to efficiently reconstruct the corresponding 3-D structure from the gradient map, a surface reconstruction method may be used. However, the described techniques are not limited to a specific surface reconstruction method, and any surface reconstruction method may be used.
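One possible surface reconstruction method for recovering a height map from the final gradient map of block 1052 is the frequency-domain Frankot-Chellappa approach sketched below; the disclosure does not mandate this particular method, and the function name is hypothetical.

import numpy as np

def surface_from_gradients(p, q):
    """Reconstruct a height map from surface gradients p = dz/dx and q = dz/dy
    (2-D arrays) using the Frankot-Chellappa frequency-domain integration."""
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi
    wy = np.fft.fftfreq(rows) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                      # avoid division by zero at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                          # absolute height is unknown
    return np.real(np.fft.ifft2(Z))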

In one embodiment, the luminance mapping operation (block 1054) is performed based on the HSI (hue, saturation, intensity) color model. The HSI color model decouples the intensity component from the color-carrying information (hue and saturation) in a color image. Human eyes are typically more sensitive to changes in the luminance channel than to changes in the color difference channels. Thus, luminance remapping is used to apply the brushstroke effect. In luminance mapping, after processing in luminance space, the color of the output image can be recovered by copying the H and S channels of the input image into the output image. In one embodiment, the albedo value of the diffuse reflection component in the brushstroke patterns acquired by photometric stereo is used to transform the intensity value of the area where each brushstroke pattern is applied. For example, if y_i is the intensity value at a pixel in the area where a brushstroke pattern is applied, and y_p is the intensity value at the corresponding pixel in the brushstroke pattern to be applied, then y_i may be remapped as follows:

[Equation 22]

y_i ← y_i + α (y_p − μ_p)

where μ_p is the mean intensity value of the brushstroke pattern image, and α is a scaling factor. When the gradient mapping and luminance mapping operations are completed, a 3D image with the brushstroke effect may be generated by applying the luminance map to the mesh data (block 1056).
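The luminance remapping of Equation 22 might be sketched as follows, using the HSV value channel as a stand-in for HSI intensity and treating the brushstroke's diffuse albedo as the pattern intensity y_p; the default scaling factor and the requirement that the albedo image match the painting's size are assumptions of this example.

import numpy as np
import cv2

def remap_luminance(image_bgr, region_mask, brush_albedo, alpha=0.5):
    """Apply the luminance remapping of Equation 22 inside one color region of
    an 8-bit BGR image; hue and saturation are copied through unchanged."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    y = hsv[..., 2]                               # intensity (value) channel
    y_p = brush_albedo.astype(np.float32)         # pattern intensity, same size as image
    mu_p = y_p[region_mask].mean()                # mean pattern intensity in the region
    y[region_mask] += alpha * (y_p[region_mask] - mu_p)
    hsv[..., 2] = np.clip(y, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)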

Referring again to FIG. 10, one brushstroke pattern is applied to the input image as a result of the operations performed in blocks 1008 and 1010. In decision block 1012, if additional transformation is needed to provide various brushstroke data to the input image, then method 1000 loops to block 1008 to perform an additional transformation; otherwise method 1000 ends processing. As explained above, numerous brushstroke patterns may be required for providing oil painting texture effects, and it may be inefficient to acquire all possible brushstroke patterns, for example, in block 1002. Accordingly, the perspective transformation (block 1008) is iteratively changed and used for several sample brushstrokes. For each iteration, at least one of the variables in Equation 18 used in the perspective transformation (i.e., the affine matrix A, the translation 2-vector t, the coefficient vector v, and the scaling factor v) may be changed randomly. The number of iterations may be determined so that a sufficient number of perspective transformations are performed to provide the oil painting texture effects. For example, the sizes of the input image and the sample brushstroke may be considered in determining the number of iterations. Further, the brushstroke styles of painters of oil paintings may be considered in determining the number of iterations. After sufficient iterations, 3-D reconstructed data with texture effects, as well as a 2-D image of the 3-D structure in one direction, is obtained. FIGS. 14-21 show results of exemplary 3-D reconstructions with brushstroke texture effects. In these examples, three different brushstrokes are used as sample brushstrokes for obtaining the brushstroke pattern data (block 1002 of method 1000), the number of iterations performed is 10,000, and the sample brushstrokes are repeatedly transformed by random perspective transformations (block 1008 of method 1000).

Specifically, FIG. 14 shows the input image, which is a 2-D image without any texture effects. Through the color segmentation operation of block 1006, the input image of FIG. 14 is segmented into 12 regions, each having an identical color therein, as shown in FIG. 15. FIGS. 16 and 17 illustrate 2-D images of the reconstructed 3-D data with oil painting brushstroke effects. FIG. 17 shows the rendering results in a light condition different from that of FIG. 16. FIGS. 14-17 thus show renderings of the 2-D image and of 3-D structure data having brushstroke patterns. As shown, 3-D effects under various light conditions can be obtained efficiently. FIGS. 18-21 show the reconstructed images of a seascape. Specifically, FIG. 18 shows a 2-D input image, and FIG. 19 shows the color segmentation result with 13 homogeneous regions. The 3-D reconstruction and the rendering results in different light conditions are shown in FIGS. 20 and 21.

From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.