Title:
PROCESS OF PRODUCING PERSONALIZED VIDEO CARTOONS
Document Type and Number:
WIPO Patent Application WO/1996/002898
Kind Code:
A1
Abstract:
A personalized cartoon is produced by inserting and overlaying a digitized image of a face, with an animated mouth, into an animated cartoon sequence in a digital image file. To synchronize audio narration files with the animated cartoon sequences, the image files are divided into smaller animation clips, the narration files are divided into smaller narration clips, and the timing of the narration clips is adjusted to synchronize each narration clip with a respective one of the animation clips.
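As a rough illustration (not part of the application itself), the clip-division and timing adjustment described in the abstract could be sketched as follows; the "Clip" structure, the 15 fps frame rate and the function names are assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        name: str
        duration: float  # seconds

    def pair_clips(animation_clips, narration_clips, fps=15):
        """Pair each narration clip with its animation clip and work out how
        many frames the animation clip must span to cover the narration."""
        schedule = []
        for anim, narr in zip(animation_clips, narration_clips):
            frames_needed = round(narr.duration * fps)
            schedule.append((anim.name, narr.name, frames_needed))
        return schedule

    if __name__ == "__main__":
        animation = [Clip("scene_01", 2.0), Clip("scene_02", 3.5)]
        narration = [Clip("line_01", 2.4), Clip("line_02", 3.1)]
        for anim_name, narr_name, frames in pair_clips(animation, narration):
            print(f"{anim_name} <- {narr_name}: {frames} frames at 15 fps")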

Inventors:
DAHL BRADLEY K (CA)
Application Number:
PCT/IB1995/000628
Publication Date:
February 01, 1996
Filing Date:
July 18, 1995
Assignee:
477250 B C LTD (CA)
DAHL BRADLEY K (CA)
International Classes:
G06T13/40; G06T13/80; (IPC1-7): G06T15/70
Foreign References:
EP0390701A2, 1990-10-03
GB2250405A, 1992-06-03
Other References:
MORISHIMA et al.: "A Facial Motion Synthesis for Intelligent Man-Machine Interface", Systems & Computers in Japan, vol. 22, no. 5, New York, US, pages 50-59, XP000240754
Claims:
AMENDED CLAIMS
[received by the International Bureau on 23 January 1996 (23.01.96); original claims 1-5 amended; new claims 6-10 added (4 pages)]

1. A method for producing a personalized computer generated image sequence, comprising the steps of: providing a background-free first image, in an electronic format, of a face of a subject to be included in said image sequence; processing said first image to provide a portion of the face in at least two states; providing a background image sequence in an electronic format; overlaying said first image, including said face portion, onto said background image sequence as a sequence of image frame data; providing audio data defining a plurality of audio sequences; dividing said image frame data into smaller image frame data packets; dividing said audio data into smaller audio data packets; synchronizing said audio data packets with respective ones of said image frame data packets; and producing said image frame data packets together with said audio data packets, synchronized therewith, as said personalized computer generated image frame sequence containing audio.
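A minimal sketch of the overlaying step of claim 1, assuming NumPy arrays for the background frames and an RGBA (background-free) face image; the array layout, the two mouth states and the function names are illustrative assumptions, not taken from the application.

    import numpy as np

    def overlay_face(background_frame, face_rgba, x, y):
        """Alpha-composite a background-free RGBA face image onto an RGB
        background frame at position (x, y); returns the modified frame."""
        h, w = face_rgba.shape[:2]
        region = background_frame[y:y + h, x:x + w].astype(float)
        face_rgb = face_rgba[..., :3].astype(float)
        alpha = face_rgba[..., 3:4].astype(float) / 255.0
        blended = alpha * face_rgb + (1.0 - alpha) * region
        background_frame[y:y + h, x:x + w] = blended.astype(np.uint8)
        return background_frame

    def overlay_sequence(frames, face_states, mouth_open_flags, x=100, y=40):
        """Overlay one of the face states (mouth open / mouth closed, i.e. the
        face portion in at least two states) onto each background frame."""
        return [
            overlay_face(frame.copy(),
                         face_states["open" if is_open else "closed"], x, y)
            for frame, is_open in zip(frames, mouth_open_flags)
        ]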
2. The method of claim 1, wherein at least one of said image frame data packets includes a variable length zone; said synchronizing of said audio data packets step including the substep of adjusting a duration of one of said image frame data packets by varying said zone to correspond to a length of a respective audio data packet.
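One way the variable-length zone of claim 2 could be realised in code (a sketch only; the fixed/variable split of the packet, the looping of zone frames and the 15 fps rate are assumptions):

    def fit_packet_to_audio(fixed_intro, variable_zone, fixed_outro,
                            audio_duration, fps=15):
        """Stretch or shrink the variable-length zone of an image frame packet
        so that the packet's total duration matches its audio data packet."""
        target_frames = round(audio_duration * fps)
        fixed_frames = len(fixed_intro) + len(fixed_outro)
        zone_frames = max(0, target_frames - fixed_frames)
        if zone_frames <= len(variable_zone) or not variable_zone:
            zone = list(variable_zone[:zone_frames])   # trim (or drop) the zone
        else:
            # loop the zone frames until the packet is long enough
            zone = [variable_zone[i % len(variable_zone)]
                    for i in range(zone_frames)]
        return fixed_intro + zone + fixed_outro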
3. The method of claim 1, further comprising the steps of providing a sound track as MIDI data, having portions which correspond to said image frame data packets; and processing said MIDI data to adjust a playback speed of said sound track to correspond to a duration of said corresponding image frame data packet.
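The playback-speed adjustment of claim 3 could, for instance, be done by rescaling the MIDI tempo value; this sketch assumes the soundtrack portion's duration and the packet's duration are already known and works directly on the standard microseconds-per-beat tempo unit.

    def rescale_midi_tempo(tempo_us_per_beat, soundtrack_duration, packet_duration):
        """Scale a MIDI set_tempo value so the soundtrack portion plays back
        over the duration of its corresponding image frame data packet.

        tempo_us_per_beat: microseconds per quarter note (standard MIDI tempo).
        A larger value means slower playback, so multiplying by
        packet_duration / soundtrack_duration stretches or squeezes the
        portion to fit the packet.
        """
        factor = packet_duration / soundtrack_duration
        return int(round(tempo_us_per_beat * factor))

    # e.g. a 4.0 s soundtrack portion squeezed into a 3.2 s packet:
    # rescale_midi_tempo(500000, 4.0, 3.2) -> 400000 (150 bpm instead of 120)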
4. The method of claim 1, further comprising the step of providing an alphanumeric image signal including an identifier of said subject, and incorporating said alphanumeric image signal in said personalized computer generated image frame sequence.
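For claim 4, an alphanumeric identifier might be stamped onto the frames with an image library; this sketch assumes Pillow Image objects for the frames and the default font, and the function name is made up for the example.

    from PIL import ImageDraw

    def add_identifier(frame, subject_name, position=(20, 20), color="white"):
        """Draw the subject's name (an alphanumeric identifier) onto a copy
        of one frame of the personalized sequence."""
        captioned = frame.copy()
        ImageDraw.Draw(captioned).text(position, subject_name, fill=color)
        return captioned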
5. The method of claim 1, wherein said dividing said image frame data step, said dividing said audio data step, and said synchronizing step are effected by selecting a number of frames included in an image frame data packet based on a duration of a corresponding one of said audio data packets.
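Claim 5 chooses the packet boundaries from the audio durations; a minimal sketch of that selection, with the frame rate and names assumed:

    def split_frames_by_audio(frames, audio_durations, fps=15):
        """Divide the full image frame sequence into packets whose frame
        counts are selected from the durations of the corresponding audio
        data packets."""
        packets, start = [], 0
        for duration in audio_durations:
            count = round(duration * fps)
            packets.append(frames[start:start + count])
            start += count
        return packets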
6. A method for producing a composite image sequence, having a variable foreground image, a variable audio sequence, and a background image sequence, comprising the steps of: providing the variable foreground image in an electronic format, including facial features; processing said variable foreground image to produce a plurality of images having different facial expressions; providing a stored background image sequence in an electronic format; providing audio data defining a plurality of audio sequences; dividing the background image sequence into a plurality of series of contiguous image frames; defining portions of the variable audio sequence in correspondence with respective ones of said plurality of series of contiguous image frames; forming a composite of one of said series of contiguous frames, said defined portion of the variable audio sequence, and said processed variable foreground image, including the substeps of: adjusting a length of said series of frames to correspond to a duration of the variable audio sequence; and selecting ones of said plurality of images having different facial expressions, synchronized to the variable audio sequence and said series of contiguous frames; and outputting said series of composite image sequences.
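The expression-selection substep of claim 6 could be driven by the audio itself; this sketch picks an open- or closed-mouth image for each frame from the amplitude of the matching slice of audio samples (the threshold, the image keys and the sample format are assumptions for the example).

    def choose_expressions(audio_samples, frame_count, images, threshold=0.1):
        """Pick a facial-expression image for each frame: frames whose audio
        slice is loud get the open-mouth image, quiet ones the closed-mouth
        one, keeping the expressions synchronized to the audio sequence."""
        chunk = max(1, len(audio_samples) // frame_count)
        chosen = []
        for i in range(frame_count):
            window = audio_samples[i * chunk:(i + 1) * chunk]
            level = max((abs(s) for s in window), default=0.0)
            chosen.append(images["mouth_open"] if level > threshold
                          else images["mouth_closed"])
        return chosen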
7. The method according to claim 6, wherein at least one of said series of frames includes a variable length transition zone.
8. The method according to claim 6, further comprising the step of providing a MIDI data sequence and synchronizing a playback of said MIDI data sequence with said series of composite image sequences.
9. The method according to claim 6, further comprising the step of outputting an alphanumeric message with said series of composite image sequences.

10. The method according to claim 6, wherein said series of contiguous image frames are predetermined, having transitional zones for time synchronization with the variable audio sequence.