Title:
ELECTRONIC DEVICE AND CONTROLLING METHOD THEREOF
Document Type and Number:
WIPO Patent Application WO/2020/190083
Kind Code:
A1
Abstract:
An electronic device and a controlling method thereof are provided. A controlling method of an electronic device according to the disclosure includes: performing first learning for a neural network model for acquiring a video sequence including a talking head of a random user, based on a plurality of learning video sequences including talking heads of a plurality of users; performing second learning for fine-tuning the neural network model, based on at least one image including a talking head of a first user different from the plurality of users and first landmark information included in the at least one image; and acquiring a first video sequence including the talking head of the first user, based on the at least one image and pre-stored second landmark information, using the neural network model for which the first learning and the second learning were performed.
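
For orientation only, the sketch below illustrates the three operations the abstract describes: first learning across many users' video sequences, second learning that fine-tunes on a few images of a new user together with their landmark information, and synthesis of a video sequence from pre-stored landmark information. It is a minimal PyTorch sketch, not the patent's claimed implementation; the Embedder/Generator modules, tensor shapes, the L1 reconstruction loss, the optimizer settings, and the random tensors standing in for real frames and landmark rasters are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the patent's implementation)
# of the two-stage scheme described in the abstract:
#   (1) "first learning" over many users' talking-head video sequences,
#   (2) "second learning" fine-tuning on a few images of a new user,
#   (3) synthesis of new frames from pre-stored landmark information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Maps (frame, landmark raster) pairs to a per-user style vector (hypothetical module)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, frame, landmarks):
        return self.net(torch.cat([frame, landmarks], dim=1))

class Generator(nn.Module):
    """Renders a frame from a landmark raster, conditioned on the style vector (hypothetical module)."""
    def __init__(self, dim=128):
        super().__init__()
        self.cond = nn.Linear(dim, 16)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, landmarks, style):
        b, _, h, w = landmarks.shape
        cond = self.cond(style).view(b, 16, 1, 1).expand(b, 16, h, w)
        return self.net(torch.cat([landmarks, cond], dim=1))

embedder, generator = Embedder(), Generator()
opt = torch.optim.Adam(list(embedder.parameters()) + list(generator.parameters()), lr=2e-4)

# --- First learning: train across many users' learning video sequences. ---
for step in range(10):                       # placeholder for a long training schedule
    frames = torch.rand(4, 3, 64, 64)        # stand-in for sampled video frames of one user
    lmk = torch.rand(4, 3, 64, 64)           # stand-in for rasterized landmark information
    style = embedder(frames, lmk).mean(0, keepdim=True).expand(4, -1)
    loss = F.l1_loss(generator(lmk, style), frames)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Second learning: fine-tune on a few images of a first user not seen above. ---
few_frames, few_lmk = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
style = embedder(few_frames, few_lmk).mean(0, keepdim=True).detach()
ft_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
for step in range(5):
    loss = F.l1_loss(generator(few_lmk, style.expand(2, -1)), few_frames)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()

# --- Synthesis: drive the fine-tuned model with pre-stored "second landmark information". ---
stored_lmk = torch.rand(8, 3, 64, 64)
with torch.no_grad():
    video = generator(stored_lmk, style.expand(8, -1))
print(video.shape)  # torch.Size([8, 3, 64, 64]): one synthesized frame per stored landmark frame
```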

Inventors:
LEMPITSKY VICTOR SERGEEVICH (RU)
SHYSHEYA ALIAKSANDRA PETROVNA (RU)
ZAKHAROV EGOR OLEGOVICH (RU)
BURKOV EGOR ANDREEVICH (RU)
Application Number:
PCT/KR2020/003852
Publication Date:
September 24, 2020
Filing Date:
March 20, 2020
Assignee:
SAMSUNG ELECTRONICS CO LTD (KR)
International Classes:
G06N3/04; G06N3/08
Foreign References:
US20090296985A1 (2009-12-03)
US20170243387A1 (2017-08-24)
US20160180722A1 (2016-06-23)
US20150169938A1 (2015-06-18)
KR20170136538A (2017-12-11)
Other References:
O. Wiles et al.: "X2Face: A Network for Controlling Face Generation Using Images, Audio, and Pose Codes", 15th European Conference on Computer Vision (ECCV), Munich, Germany, September 8-14, 2018, Proceedings
O. Alexander, M. Rogers, W. Lambeth, J.-Y. Chiang, W.-C. Ma, C.-C. Wang, P. Debevec: "The Digital Emily project: Achieving a photorealistic digital actor", IEEE Computer Graphics and Applications, vol. 30, no. 4, 2010, pages 20-31
A. Antoniou, A. J. Storkey, H. Edwards: "Augmenting image classifiers using data augmentation generative adversarial networks", Artificial Neural Networks and Machine Learning, 2018, pages 594-603
S. Arik, J. Chen, K. Peng, W. Ping, Y. Zhou: "Neural voice cloning with a few samples", Proc. NIPS, 2018, pages 10040-10050
H. Averbuch-Elor, D. Cohen-Or, J. Kopf, M. F. Cohen: "Bringing portraits to life", ACM Transactions on Graphics (TOG), vol. 36, no. 6, 2017, page 196, XP055693839, DOI: 10.1145/3130800.3130818
V. Blanz, T. Vetter, et al.: "A morphable model for the synthesis of 3D faces", Proc. SIGGRAPH, vol. 99, 1999, pages 187-194
A. Bulat, G. Tzimiropoulos: "How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks)", IEEE International Conference on Computer Vision (ICCV), 2017, pages 1021-1030
J. S. Chung, A. Nagrani, A. Zisserman: "VoxCeleb2: Deep speaker recognition", INTERSPEECH, 2018
J. Deng, J. Guo, X. Niannan, S. Zafeiriou: "ArcFace: Additive angular margin loss for deep face recognition", CVPR, 2019
C. Finn, P. Abbeel, S. Levine: "Model-agnostic meta-learning for fast adaptation of deep networks", Proc. ICML, 2017, pages 1126-1135
Y. Ganin, D. Kononenko, D. Sungatullina, V. Lempitsky: "DeepWarp: Photorealistic image resynthesis for gaze manipulation", European Conference on Computer Vision, Springer, 2016, pages 311-326
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio: "Generative adversarial nets", Advances in Neural Information Processing Systems, 2014, pages 2672-2680
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, S. Hochreiter: "GANs trained by a two time-scale update rule converge to a local Nash equilibrium", Advances in Neural Information Processing Systems, vol. 30, Curran Associates, Inc., 2017, pages 6626-6637
X. Huang, S. Belongie: "Arbitrary style transfer in real-time with adaptive instance normalization", Proc. ICCV, 2017
S. Ioffe, C. Szegedy: "Batch normalization: Accelerating deep network training by reducing internal covariate shift", Proceedings of the 32nd International Conference on Machine Learning, vol. 37, 2015, pages 448-456
P. Isola, J. Zhu, T. Zhou, A. A. Efros: "Image-to-image translation with conditional adversarial networks", Proc. CVPR, 2017, pages 5967-5976
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell: "Caffe: Convolutional architecture for fast feature embedding", arXiv, 2014
Y. Jia, Y. Zhang, R. Weiss, Q. Wang, J. Shen, F. Ren, P. Nguyen, R. Pang, I. L. Moreno, Y. Wu, et al.: "Transfer learning from speaker verification to multi-speaker text-to-speech synthesis", Proc. NIPS, 2018, pages 4485-4495
J. Johnson, A. Alahi, L. Fei-Fei: "Perceptual losses for real-time style transfer and super-resolution", Proc. ECCV, 2016, pages 694-711
H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Niessner, P. Perez, C. Richardt, M. Zollhöfer, C. Theobalt: "Deep video portraits", arXiv, 2018
D. P. Kingma, J. Ba: "Adam: A method for stochastic optimization", CoRR, 2014
S. Lombardi, J. Saragih, T. Simon, Y. Sheikh: "Deep appearance models for face rendering", ACM Transactions on Graphics (TOG), vol. 37, no. 4, 2018, page 68
M. Mirza, S. Osindero: "Conditional generative adversarial nets", arXiv
M. Mori: "The uncanny valley", Energy, vol. 7, no. 4, 1970, pages 33-35
K. Nagano, J. Seo, J. Xing, L. Wei, Z. Li, S. Saito, A. Agarwal, J. Fursund, H. Li, R. Roberts, et al.: "paGAN: real-time avatars using dynamic textures", SIGGRAPH Asia 2018 Technical Papers, 2018, page 258
A. Nagrani, J. S. Chung, A. Zisserman: "VoxCeleb: a large-scale speaker identification dataset", INTERSPEECH, 2017
O. M. Parkhi, A. Vedaldi, A. Zisserman: "Deep face recognition", Proc. BMVC, 2015
S. M. Seitz, C. R. Dyer: "View morphing", Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996, pages 21-30
Z. Shu, M. Sahasrabudhe, R. Alp Guler, D. Samaras, N. Paragios, I. Kokkinos: "Deforming autoencoders: Unsupervised disentangling of shape and appearance", The European Conference on Computer Vision (ECCV), September 2018
T. Karras, S. Laine, T. Aila: "A style-based generator architecture for generative adversarial networks", arXiv
J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, M. Niessner: "Face2Face: Real-time face capture and reenactment of RGB videos", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pages 2387-2395
D. Ulyanov, A. Vedaldi, V. S. Lempitsky: "Instance normalization: The missing ingredient for fast stylization", CoRR, 2016
T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, G. Liu, A. Tao, J. Kautz, B. Catanzaro: "Video-to-video synthesis", arXiv, 2018
T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, B. Catanzaro: "High-resolution image synthesis and semantic manipulation with conditional GANs", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018
Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli: "Image quality assessment: From error visibility to structural similarity", Trans. Img. Proc., vol. 13, no. 4, April 2004, pages 600-612, XP011110418, DOI: 10.1109/TIP.2003.819861
O. Wiles, A. Sophia Koepke, A. Zisserman: "X2Face: A network for controlling face generation using images, audio, and pose codes", The European Conference on Computer Vision (ECCV), September 2018
C. Yin, J. Tang, Z. Xu, Y. Wang: "Adversarial meta-learning", CoRR, 2018
H. Zhang, I. J. Goodfellow, D. N. Metaxas, A. Odena: "Self-attention generative adversarial networks", arXiv, 2018
R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, Y. Song: "MetaGAN: An adversarial approach to few-shot learning", NeurIPS, 2018, pages 2371-2380
Attorney, Agent or Firm:
KIM, Tae-hun et al. (KR)