

Title:
METHOD AND DEVICE FOR IMAGE SYNTHESIS
Document Type and Number:
WIPO Patent Application WO/2017/021322
Kind Code:
A1
Abstract:
Computer-implemented method for transferring style features from at least one source image to a target image, comprising the steps of generating a result image, based on the source and the target image, wherein one or more spatially-variant features of the result image correspond to one or more spatially variant features of the target image; and wherein a texture of the result image corresponds to a texture of the source image; and outputting the result image, and a corresponding device. According to the invention, the texture corresponds to a summary statistic of spatially variant features of the source image.

Inventors:
BETHGE MATTHIAS (DE)
GATYS LEON (DE)
Application Number:
PCT/EP2016/068206
Publication Date:
February 09, 2017
Filing Date:
July 29, 2016
Assignee:
EBERHARD KARLS UNIVERSITÄT TÜBINGEN (DE)
International Classes:
G06T11/00
Other References:
ZHENG WANG ET AL: "Neural network-based Chinese ink-painting art style learning", INTELLIGENT COMPUTING AND INTELLIGENT SYSTEMS (ICIS), 2010 IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 29 October 2010 (2010-10-29), pages 462 - 466, XP031818371, ISBN: 978-1-4244-6582-8, DOI: 10.1109/ICICISYS.2010.5658312
ATTILA NEUMANN ET AL: "Color Style Transfer Techniques using Hue, Lightness and Saturation Histogram Matching", 1 January 2005 (2005-01-01), XP055270604, Retrieved from the Internet [retrieved on 20160504], DOI: 10.2312/COMPAESTH/COMPAESTH05/111-122
LEON A GATYS ET AL: "Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks", 27 May 2015 (2015-05-27), XP055298121, Retrieved from the Internet [retrieved on 20160829]
LEON A GATYS ET AL: "A Neural Algorithm of Artistic Style", 26 August 2015 (2015-08-26), XP055285175, Retrieved from the Internet [retrieved on 20160831]
LEON A GATYS ET AL: "Image Style Transfer Using Convolutional Neural Networks", 26 June 2016 (2016-06-26), pages 1 - 10, XP055298103, Retrieved from the Internet [retrieved on 20160829]
Attorney, Agent or Firm:
BACH, Alexander (DE)
Claims:
Claims

1. Computer-implemented method for transferring style features from at least one source image to a target image, comprising the steps of: generating a result image, based on the source and the target image, wherein one or more spatially-variant features of the result image correspond to one or more spatially variant features of the target image; and

wherein a texture of the result image corresponds to a texture of the source image; and

outputting the result image; characterized in that the texture corresponds to a summary statistic of spatially variant features of the source image.

2. The method of claim 1, wherein the summary statistic corresponds to a correlation.

3. The method of claim 2, wherein the correlation corresponds to a Gram matrix.

4. The method of claim 1, wherein a spatially variant feature of an image corresponds to a result of a non-linear transformation of that image.

5. The method of claim 4, wherein the non-linear transformation corresponds to one or more convolutions of the image.

6. The method of claim 1, wherein the summary statistic corresponds to an average or a power spectrum.

7. Computer-implemented method for transferring style features from at least one source image to a target image, comprising the steps of: extracting a texture of the source image; generating a result image, based on the target image and the texture of the source image; characterized in that the texture corresponds to a summary statistic of spatially variant features of the source image.

8. The method of claim 7, wherein the summary statistic corresponds to a correlation.

9. The method of claim 8, wherein the correlation corresponds to a Gram matrix.

10. The method of claim 7, wherein a spatially variant feature of an image corresponds to a result of a non-linear transformation of that image.

11. The method of claim 10, wherein the non-linear transformation corresponds to one or more convolutions of the image.

12. The method of claim 7, wherein the summary statistic corresponds to an average or a power spectrum.

13. The method according to claim 7, characterized in that the spatially-variant feature of the source image is extracted using a neural network.

14. The method according to claim 13, characterized in that the neural network is trained to recognize objects in an image.

15. The method according to claim 13 or 14, characterized in that the neural network is a convolutional neural network.

16. The method according to claim 15, characterized in that the neural network is the VGG network.

17. The method according to claim 7, characterized in that the result image is generated by searching for an image in which one or more spatially variant features of the result image correspond to one or more spatially variant features of the target image; and

wherein the texture of the result image corresponds to the texture of the source image.

18. The method according to claim 17, characterized in that the image search is performed with a gradient method.

19. The method according to claim 18, characterized in that the gradient method is initialized with a random image, wherein a distribution of brightness values of the pixels corresponds to white noise.

20. The method according to claim 18, characterized in that the gradient method is initialized with the source image.

21. The method according to claim 18, characterized in that the gradient is calculated based on the feature of the source image.

22. The method according to claim 21, characterized in that the gradient is further calculated based on a feature of the target image.

23. The method according to claim 1 or 7, characterized in that the result image is made available in a social network.

24. The method according to claim 1 or 7, wherein the target image is received from a user or wherein the result image is sent to a user over a telecommunications network, e.g. a wireless communications network or the Internet.

25. A computer program product comprising software comprising instructions for performing a method according to claim 1 or 7 on a computer.

26. Image carrier made of a non-volatile material, which carries an image that has been generated by a method according to claim 1 or 7.

27. A device for transferring style features from at least one source image to a target image, comprising: an extraction section for extracting a texture of the source image; a generating section for generating an output image based on the target image and the texture of the source image; and an output unit for outputting the produced output image; characterized in that the texture corresponds to a summary statistic of spatially variant features of the source image.

28. The apparatus according to claim 27, further comprising a digital camera for capturing one or more source images which are supplied to the extraction section.

Description:
Method and device for image synthesis

The invention relates to a method and a device for the synthesis of an image, in particular for the synthesis of an image in which features of a source image, e.g. a texture, are transferred to a target image.

Methods for the transfer of a texture of a source image to objects of a target image are known in the prior art. Ashikhmin ("Fast Texture Transfer", IEEE Computer Graphics and Applications 23, 2003, 4, 38 to 43) shows a fast method working on the pixel level. The likewise pixel-based method of Lee et al. ("Directional Texture Transfer", NPAR 2010, 43 to 50) uses the gradient of the target image, e.g. for simulating the direction of brush strokes. Xie et al. ("Feature Guided Synthesis for Artistic Style Transfer", DIMEA 2007, 44 to 49) show a method for transferring the texture characteristics of a source image to a target image, based on a feature map of basic statistical features generated from the target image. None of the cited methods takes local as well as global texture features of the source image equally into account. Moreover, the methods depend on fixed assumptions about the kind of texture. The parametric texture model for texture synthesis proposed by Portilla and Simoncelli (J. Portilla and E. P. Simoncelli. A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients. International Journal of Computer Vision, 40(1):49-70, October 2000) is based on a set of carefully handcrafted summary statistics computed on the responses of a linear filter bank. Although the model shows very good performance in synthesising a wide range of textures, it still fails to capture the full scope of natural textures.

It is therefore an object of the present invention to provide a general, flexible and efficient method and a device for image synthesis, in particular for transferring style features of a source image to a target image, which better reproduces the local and global texture features of the source image, without significantly impairing the identity of the objects of the target image.

This object is achieved by the methods and the device according to the independent claims. Advantageous embodiments of the invention are defined in the dependent claims. In particular, the method according to the invention generates a result image, based on the source and the target image, wherein one or more spatially-variant features of the result image, i.e. the content of the image in terms of objects and their arrangement in the image, correspond to one or more spatially variant features, i.e. the content, of the target image, and wherein a texture of the result image corresponds to a texture of the source image. The texture corresponds to a summary statistic of spatially variant features of the source image, which is spatially invariant.

The method according to the invention is essentially based on the use of suitable non-linear transformations of the source image for the extraction of relevant features, and on the use of summary statistics for representing a texture of the source image. The non-linearity in particular allows more complex features of the source image to be taken into account. The extracted features represent the image information such that semantic image information (e.g. objects) is simply (e.g. linearly) decodable, i.e. it can already be sufficiently described by a linear classifier, which in turn ensures its efficient consideration during image synthesis. Thereby, the method according to the invention achieves a high quality of the generated images at a relatively low cost overall. When the non-linear transformations are realized with a neural network, the method according to the invention further achieves a high generality and flexibility, as image features need not be hard-coded or given, but can be learned from a set of training data.

Figure 1A shows an overview of a method for the extraction of content features according to an embodiment of the invention. The features of one or more digital source images are extracted with a convolutional neural network (CNN). CNNs consist of layers of small computing units that process visual information hierarchically in a feed-forward manner. Each layer of units can be understood, according to the invention, as a set of image filters, each of which extracts a particular feature of the input image. Therefore, the output of a given layer consists of so-called "feature maps", that is, differently filtered versions of the input image. Typically, the number of feature maps increases along the processing hierarchy, but their spatial extent can be reduced by down-sampling in order to achieve a reduction in the total number of units per layer. Because each layer defines a non-linear filter operation on the output of the previous layer, layers higher up in the hierarchy extract increasingly complex features.

The CNN used in the present embodiment is trained on object recognition. In this case, the CNN develops a representation of the image that makes object information increasingly explicit along the processing hierarchy [Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. arXiv:1505.07376 [cs, q-bio], May 2015]. In each layer of the network, the input image is represented by a set of feature maps. More specifically, when convolutional neural networks are trained on object recognition, they develop a representation of the image that makes object information, or so-called spatially variant features of the image, increasingly explicit along the processing hierarchy. Therefore, along the processing hierarchy of the network, the input image is transformed into representations that are increasingly sensitive to the actual content of the image, but become relatively invariant to its precise appearance. Thus, higher layers in the network capture the high-level content in terms of objects and their arrangement in the input image but do not constrain the exact pixel values of the reconstruction very much. In contrast, reconstructions from the lower layers simply reproduce the exact pixel values of the original image. Therefore, the feature responses in higher layers of the network may be referred to as the content representation.

The information about the image contained in each layer can be visualized directly by reconstructing the image exclusively from these feature maps [Aravindh Mahendran and Andrea Vedaldi. Understanding Deep Image Representations by Inverting Them. arXiv:1412.0035 [cs], November 2014]. Reconstructions from the lower layers are almost perfect, while reconstructions from higher layers reproduce the exact pixel values of the original image less accurately but still capture its content. A certain loss of information is to be expected, since the total number of units representing the image decreases in higher layers. Because the network is trained on the recognition of objects, its filters are also optimized to reshape the input image into a representation in which object information is made explicit. Therefore, the input image is transformed along the processing hierarchy of the network into representations that increasingly represent the semantic content of the image explicitly, as opposed to its detailed pixel values.

The results according to the embodiment of the invention were obtained based on the freely available VGG network [Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556; Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675-678. ACM, 2014], which the inventors have suitably modified. In particular, the feature space provided by the 16 convolutional and 5 pooling layers of the 19-layer VGG network is used. None of the fully connected layers is used. For image synthesis, the max pooling operation in the known network is, according to the invention, replaced by an average pooling operation, which improves the gradient flow and yields better image results.
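For illustration, such a modified feature extractor can be sketched with a publicly available deep-learning framework. The sketch below uses the pre-trained 19-layer VGG model from torchvision as a stand-in for the network described above; the library, weights and layer handling are assumptions for illustration and not the inventors' original implementation.

```python
# Illustrative sketch (not the original implementation): build a VGG-19
# feature extractor and replace max pooling with average pooling, which the
# description above credits with improved gradient flow.
import torch.nn as nn
from torchvision import models

def build_feature_extractor():
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    layers = []
    for layer in vgg.children():
        if isinstance(layer, nn.MaxPool2d):
            layers.append(nn.AvgPool2d(kernel_size=2, stride=2))
        else:
            layers.append(layer)
    model = nn.Sequential(*layers)
    for p in model.parameters():
        p.requires_grad_(False)  # only the synthesised image is optimised later
    return model
```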

In general, each layer in the network defines a non-linear filter bank whose complexity increases with the position of the layer in the network. Therefore, a given input image $\vec{x}$ will be encoded in each layer of the CNN by the filter responses to this image. A layer with $N_l$ different filters has $N_l$ feature maps, each of size $M_l$, where $M_l$ is the height times the width of the feature map. The responses in a layer $l$ can thus be stored in a matrix $F^l \in \mathbb{R}^{N_l \times M_l}$, where $F^l_{ij}$ is the activation of the $i$-th filter at position $j$ in layer $l$.
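A minimal sketch of how the matrix $F^l$ may be obtained in practice is given below. It assumes a sequential feature extractor such as the one sketched above; the function name and layer indexing are illustrative assumptions.

```python
# Illustrative sketch: run an image through a sequential CNN and store the
# responses of selected layers as matrices F^l of shape (N_l, M_l), where
# N_l is the number of filters and M_l = height * width of the feature maps.
def feature_matrices(model, image, layer_indices):
    """image: tensor of shape (1, 3, H, W); returns {layer index: F^l}."""
    features = {}
    x = image
    for i, layer in enumerate(model):
        x = layer(x)
        if i in layer_indices:
            _, n_l, h, w = x.shape
            features[i] = x.reshape(n_l, h * w)  # F^l in R^{N_l x M_l}
    return features
```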

Figure 1B shows schematically how a style representation is constructed according to the invention from the responses of the CNN in every layer of the network, by calculating a correlation between different filter responses, wherein the expectation is taken over the spatial extent of the input image. This feature correlation is given in the present case by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorised feature maps $i$ and $j$ in layer $l$:

$$G^l_{ij} = \sum_k F^l_{ik}\, F^l_{jk}$$

By including the feature correlations of multiple layers, a stationary, multi-scale representation of the source image is obtained which captures the texture information of the image, but not the global arrangement. In summary, two feature spaces are formed from the layers of the network, which hold information about the content and the style of a given source image. First, the activations of units in the higher layers of the neural network capture mainly the content of the source image, without capturing detailed pixel information. Second, the correlations between different filter responses in a number of layers of the network capture the style information of a given source image. This style or texture representation ignores the global configuration of the source image, but preserves the overall appearance in terms of color and local image structures.
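The Gram-matrix computation follows directly from the equation above. The sketch below continues the hypothetical helpers from the previous sketches and is illustrative only.

```python
# Illustrative sketch: G^l_ij = sum_k F^l_ik F^l_jk, i.e. the Gram matrix of
# the vectorised feature maps. Collecting it over several layers yields the
# stationary, multi-scale style representation described above.
def gram_matrix(F_l):
    """F_l: matrix of shape (N_l, M_l); returns G^l of shape (N_l, N_l)."""
    return F_l @ F_l.t()

def style_representation(features):
    """features: {layer: F^l}, e.g. as produced by feature_matrices()."""
    return {layer: gram_matrix(F_l) for layer, F_l in features.items()}
```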

The invention thus allows the content and the style of an image to be represented separately from each other. Thereby, content and style can also be manipulated independently. This allows in particular the generation of new images which combine the content of any photograph with the appearance of various works of art. Figure 2 shows an overview of a method for generating an image according to an embodiment of the invention. To produce an image that mixes the content of a target image, such as a photograph, with the style of a source image, such as a painting, an image search can be performed which is initialized with an appropriate start image, for example a random image whose brightness values are distributed according to white noise, or with the source image or the target image as initial image. During the search, the distance of the content representation of the initial image from the content representation of the target image in one layer of the network, and the distance of its style representation from the style representation of the source image in a number of layers of the network, are minimized jointly.

The respective distances between the content and style characteristics of the image being generated and those of the target or source image can be expressed by means of appropriate loss functions $\mathcal{L}_{content}$ and $\mathcal{L}_{style}$. If the photograph is $\vec{p}$, the artwork is $\vec{a}$ and the generated image is $\vec{x}$, the total loss function that is to be minimized is then

$$\mathcal{L}_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha\, \mathcal{L}_{content}(\vec{p}, \vec{x}) + \beta\, \mathcal{L}_{style}(\vec{a}, \vec{x}),$$

where $\alpha$ and $\beta$ are the respective weighting factors. The weighting factors are preferably continuously adjustable, for example via a controller that is part of a graphical user interface of a software implementing the inventive method. According to one embodiment of the invention, further loss terms may be included in the loss function to control other features of the generated image.
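Expressed as code, the weighted combination is straightforward; `alpha` and `beta` below correspond to the weighting factors α and β (e.g. bound to a slider in a graphical user interface), and the default values are purely illustrative.

```python
# Illustrative sketch of the total loss: a weighted sum of the content and
# style terms. Further loss terms could be added to control other features
# of the generated image.
def total_loss(content_term, style_term, alpha=1.0, beta=1e3):
    return alpha * content_term + beta * style_term
```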

A stronger emphasis on style results in images that correspond to the appearance of the artwork without showing the essential content of the target image, i.e. of the photograph. With a stronger emphasis on the content, the photograph can be identified more clearly, but the style corresponds less closely to that of the source image.

Figure 3 shows a schematic representation of a method for synthesizing an image based on the extracted features according to an embodiment of the invention. A random image whose brightness values are distributed according to white noise is used as input to the neural network to obtain the feature activations $F$ in the layers $l$, $a$, $b$ and $c$.

Then, summary statistics $G$ are calculated for the layers $a$, $b$ and $c$. In a further step, a loss function $\mathcal{L}$ is calculated over the layers $l$, $a$, $b$ and $c$. The loss for the target image in layer $l$ is of the form

$$\mathcal{L}_{content}\left(F^l, \hat{F}^l\right) = \frac{1}{2}\sum_{i,j}\left(F^l_{ij} - \hat{F}^l_{ij}\right)^2 ,$$

where $\hat{F}^l$ denotes the feature activations of the target image in layer $l$.

The loss of the source image in layers $a$, $b$, $c$ is of the form

$$E_l\left(G^l, \hat{G}^l\right) = \frac{1}{4\, N_l^2 M_l^2}\sum_{i,j}\left(G^l_{ij} - \hat{G}^l_{ij}\right)^2 , \qquad l \in \{a, b, c\},$$

where $\hat{G}^l$ denotes the Gram matrix of the source image in layer $l$.
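The two per-layer losses translate directly into code. The sketch below follows the equations above, with the target features $\hat{F}^l$ and the source Gram matrices $\hat{G}^l$ passed in as precomputed values; the names and the unweighted layer sum are assumptions for illustration.

```python
# Illustrative sketch of the per-layer losses defined above.
import torch

def content_loss(F_l, F_hat_l):
    # L_content(F^l, F^l_target) = 1/2 * sum_ij (F^l_ij - F^l_target_ij)^2
    return 0.5 * torch.sum((F_l - F_hat_l) ** 2)

def style_layer_loss(G_l, G_hat_l, n_l, m_l):
    # E_l = 1/(4 N_l^2 M_l^2) * sum_ij (G^l_ij - G^l_source_ij)^2
    return torch.sum((G_l - G_hat_l) ** 2) / (4.0 * n_l ** 2 * m_l ** 2)

def style_loss(grams_x, grams_src, sizes):
    # Total style loss: here an unweighted sum of E_l over the style layers;
    # per-layer weights could be introduced as described above.
    return sum(style_layer_loss(grams_x[l], grams_src[l], *sizes[l])
               for l in grams_x)
```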

Thereafter, the gradient of the loss is calculated in each layer with respect to the feature activations $F$ in that layer. The gradient for the target image in layer $l$ is of the form

$$\frac{\partial \mathcal{L}_{content}}{\partial F^l_{ij}} = \left(F^l - \hat{F}^l\right)_{ij} ,$$

and the gradient for the source image in layers $a$, $b$, $c$ is of the form

$$\frac{\partial E_l}{\partial F^l_{ij}} = \frac{1}{N_l^2 M_l^2}\left(\left(F^l\right)^{\mathsf{T}}\left(G^l - \hat{G}^l\right)\right)_{ji} .$$

Then, the gradient is propagated back through the network by error back-propagation, and the gradient with respect to the white-noise image is calculated.

Thereafter, the white-noise image is adjusted to minimize the loss in layers $l$, $a$, $b$ and $c$.

This process is continued with the adjusted image until the loss satisfies an appropriate termination criterion, for example until it is sufficiently small. Alternatively, the method may use the source or the target image as an initial image.
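Putting the previous sketches together, one possible form of the overall synthesis loop is shown below. The optimiser (L-BFGS), the layer choices and the step count are assumptions for illustration, not the inventors' specific implementation; the helper functions are those sketched earlier.

```python
# Illustrative sketch of the synthesis loop: start from a noise image,
# compute the joint content/style loss, back-propagate to obtain the gradient
# with respect to the image, and adjust the image until the loss is small.
import torch

def synthesise(model, target_img, source_img, content_layer, style_layers,
               alpha=1.0, beta=1e3, steps=300):
    # Fixed representations: content of the target, style (Gram matrices) of the source.
    with torch.no_grad():
        F_hat = feature_matrices(model, target_img, {content_layer})[content_layer]
        G_hat = style_representation(feature_matrices(model, source_img, style_layers))

    x = torch.rand_like(target_img).requires_grad_(True)  # random (noise) initialisation
    optimiser = torch.optim.LBFGS([x])

    def closure():
        optimiser.zero_grad()
        feats = feature_matrices(model, x, set(style_layers) | {content_layer})
        G = style_representation({l: feats[l] for l in style_layers})
        sizes = {l: feats[l].shape for l in style_layers}
        loss = (alpha * content_loss(feats[content_layer], F_hat)
                + beta * style_loss(G, G_hat, sizes))
        loss.backward()  # error back-propagation through the network
        return loss

    for _ in range(steps):
        optimiser.step(closure)
    return x.detach()
```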

In another embodiment of the invention, the explicit and substantially separate representation of the content and the style of an image may serve as a basis for a method of style classification and for assigning works of art to a particular artist. Here, the transformation of the source image to be identified into a stationary feature space, such as the style representation according to the invention, ensures a higher degree of efficiency than conventional approaches, in which classifiers work directly on the primary network activations.

Figure 4 shows images which combine the content of a photograph with the style of various well-known artworks. The images were generated according to the invention by searching for an image that simultaneously fits the content representation of the photograph and the style representation of the artwork. The original photograph shows the Neckarfront in Tübingen, Germany, and is shown in Figure 4A. The painting that provided the style for each generated image is shown in the lower left corner of the respective panel. In Figure 4B, the painting "The Shipwreck of the Minotaur" by J. M. W. Turner, 1805, was used. In Figure 4C, the "Starry Night" by Vincent van Gogh, 1889, was used. In Figure 4D, "The Scream" by Edvard Munch, 1893, was used. In Figure 4E, the "Seated Nude" by Pablo Picasso was used, and in Figure 4F, "Composition VII" by Wassily Kandinsky, 1913, was used.

In the images shown in Figure 4, a style representation was used that comprised layers of the entire network hierarchy. Alternatively, the style can also be defined more locally by using only a smaller number of lower layers, resulting in different visual impressions. When the style representations are matched up to higher layers in the network, local image structures are adjusted on an increasingly larger scale, resulting in a visually more continuous impression. Therefore, the most visually appealing images are usually achieved by matching the style representation up to the highest layers in the network.

IMPLEMENTATION

The methods according to the invention may be implemented on a computer, especially on a graphics card or a smartphone.

Example embodiments may also include computer program products. The computer program products may be stored on computer-readable media for carrying or having computer-executable instructions or data structures. Such computer-readable media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, such computer-readable media may include RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is an example of a computer-readable medium. Combinations of the above are also to be included within the scope of computer-readable media. Computer-executable instructions include, for example, instructions and data, which cause a general-purpose computer, a special purpose computer, or a special purpose processing device to perform a certain function or group of functions. Furthermore, computer-executable instructions include, for example, instructions that have to be processed by a computer to transform the instructions into a format that is executable by a computer. The computer-executable instructions may be in a source format that is compiled or interpreted to obtain the instructions in the executable format. When the computer-executable instructions are transformed, a first computer may for example transform the computer executable instructions into the executable format and a second computer may execute the transformed instructions.

The computer-executable instructions may be organized in a modular way so that a part of the instructions may belong to one module and a further part of the instructions may belong to a further module. However, the differences between different modules may not be obvious and instructions of different modules may be intertwined.

Example embodiments have been described in the general context of method operations, which may be implemented in one embodiment by a computer program product including computer-executable instructions, such as program code, executed by computers in networked environments. Generally, program modules include for example routines, programs, apps for smartphones, objects, components, or data structures that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such operations.

Some embodiments may be operated in a networked environment using logical connections to one or more remote computers having processors. Logical connections may include for example a local area network (LAN) and a wide area network (WAN). The examples are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets and the Internet. Those skilled in the art will appreciate that such network computing environments will typically encompass many types of computer system configurations, including personal computers, hand-held devices like mobile phones, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

An example system for implementing the overall system or portions might include a general-purpose computing device in the form of a conventional computer, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system memory may include read only memory (ROM) and random access memory (RAM). The computer may also include a magnetic hard disk drive for reading from and writing to a magnetic hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to a removable optical disk such as a CD-ROM or other optical media. The drives and their associated computer readable media provide nonvolatile storage of computer executable instructions, data structures, program modules and other data for the computer. Software and web implementations could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various database searching steps, correlation steps, comparison steps and decision steps. The words "component" and "section" as used herein and in the claims are intended to encompass implementations using one or more lines of software code, hardware implementations, or equipment for receiving manual inputs.