
Title:
COMPUTER-CONTROLLED TALKING FIGURE TOY WITH ANIMATED FEATURES
Document Type and Number:
WIPO Patent Application WO/1997/041936
Kind Code:
A1
Abstract:
An animated toy figure (1) includes a loudspeaker (44) and mechanical drivers (38, 40, 42, 56) for actuating body parts such as its mouth (12) to simulate animation. A multimedia home computer (30) synchronizes actuation of the toy's moving parts with an audio output provided by the computer through a binary drive-control code array (134). This array may be derived from a text file based on predefined rules, with the speech synthesized by a multimedia sound subsystem (122), so that the movement of the toy's body parts such as its mouth (12) can be synchronized with the speech of the figure toy without predefining the contents of the speech. Users can input or program the desired audio output for the figure toy through computer-controlled devices such as a keyboard (30b) or CD-ROM. Still or motion pictures can be displayed on the computer's monitor (30c) in coordination with the sound and animation effects of the toy. A speech recognition system (130) on the computer enables the toy to respond to word commands from children.

Inventors:
SHALONG MAA (US)
Application Number:
PCT/US1997/005146
Publication Date:
November 13, 1997
Filing Date:
April 04, 1997
Assignee:
SHALONG MAA (US)
International Classes:
A63H3/28; A63H13/00; G09B5/06; (IPC1-7): A63H3/28; G09B5/06
Foreign References:
US 5376038 A (1994-12-27)
US 4799171 A (1989-01-17)
Claims:
CLAIMS
1. An animated toy system comprising: a toy figure having a body portion and a movable portion; a loudspeaker situated within the body; an actuator situated within the toy for moving the movable portion in response to the toy receiving a digital data signal; means for transmitting a sound signal to the loudspeaker and the data signal to the actuator; and a multimedia home computer including, a sound subsystem for generating a sound signal representing spoken words for transmission over the means for transmitting to the loudspeaker, memory for storing an array of digital control codes representing movement of the actuator for movement of the movable portion, a data interface for generating, based on the digital control codes, the digital data signals for transmission over the means for transmitting to the actuator, and means for causing sequential transmission of the digital data signals according to a predetermined synchronization with the transmission of the sound signal by the sound subsystem.
2. The system of Claim 1 wherein the movable portion includes a mouth and the drive control codes represent movement of the mouth in synchronization with the transmission of the sound signal to simulate speaking.
3. The system of Claim 1 or 2 wherein the multimedia home computer further includes means for deriving the array of digital control codes from a text file and the sound card includes a speech synthesizer for synthesizing a sound signal representing the words in the text file from the text file.
4. The system of Claim 3 further including a sound dictionary file stored on the computer and wherein the speech synthesizer looks up a sound signal for the textual words in the sound dictionary.
5. The system of Claim 4 wherein the sound dictionary file includes predetermined digital control codes for each word and the home computer includes means for constructing the array from the digital control codes in the dictionary file.
6. The system of Claim 5 wherein the movable portion includes a mouth and the digital control codes stored in the dictionary represent movement of the mouth to simulate speaking.
7. The system of Claim 6 wherein the figure includes a second movable portion, and the digital control codes stored for each word in the dictionary include a second set of digital control codes for indicating movement of the second movable portion in synchronization with the word.
8. The system of Claims 5, 6 or 7 wherein the means for deriving the array of binary digital codes includes means for identifying whether a letter in a word of text is a vowel and for assigning to each letter in each word in the text file a binary digital code indicating whether the mouth is to be open or closed.
9. The system of Claims 1, 2, 3, 4, 5, 6, 7 or 8 wherein the sound subsystem further includes means for recording spoken words and the computer includes means for recognizing the spoken words.
10. The system of Claim 9 wherein, the means for recognizing the spoken words generates a text file, the multimedia home computer further includes means for deriving the array of binary digital codes from the text file, and the sound card includes a speech synthesizer for synthesizing a sound signal representing the spoken words in the text file from the text file.
11. The system of Claims 1, 2, 3, 4, 5, 8, 9 or 10 wherein the toy includes a second actuator for moving a second, articulating member, and the digital control code array includes a second dimension for storing digital control codes for the second actuator.
12. The system of any of the foregoing claims wherein the computer includes a monitor and means for displaying animation on the monitor in coordination with talking of the toy.
13. The system of any of the foregoing claims wherein the means for transmitting includes a cable having on one end a first plug for connecting with a first electronic circuit forming part of the sound subsystem and a second plug for connecting with a second electronic circuit forming the input/output port, and connecting at the opposite end with the toy.
14. An animated talking toy figure comprising: a small figure with an appearance simulating that of a living animal, being or creature, the figure including a body and a moveable mouth; a loudspeaker situated within the body; an actuator situated inside the figure having only two phases for moving the mouth in a first direction in response to receiving a first binary digital data signal representing a first binary data value and in the other direction in response to receiving a second data signal representing a second binary data value; and an elongated cable extending from the toy for receiving an audio signal for the loudspeaker and a binary digital control signal to be used as a logic input for a switch for connecting power to drive the actuator.
15. The animated talking toy figure of Claim 14 wherein the actuator further comprises a solenoid and the switch switches current to the solenoid for movement of an element.
16. The system of Claim 15 wherein the element of the solenoid is coupled by a string to a pivoting portion of the mouth for applying torque to rotate the pivoting portion in a first direction and wherein the actuator further includes a spring for applying a biasing force to the pivoting portion in an opposite direction to the force applied by the string.
17. The system of Claims 14, 15 or 16 wherein the figure further includes a moving arm and a second actuator having only two phases for moving the arm, the second actuator moving the arm in a first direction in response to receiving a third binary digital data signal representing the first binary value and in an opposite direction in response to receiving a fourth binary digital data signal representing the second binary value.
18. A method comprising: storing a digital sound file representing one or more spoken word sounds; storing in a first array a sequence of binary digital values having a predetermined timed relationship to the digital sound file; generating based on the digital sound file, and transmitting from an external terminal on the sound subsystem to an animated figure toy, a sound signal for play on a loudspeaker in the toy; and sequentially transmitting from a data interface to an actuator for moving a moveable part of the animated figure toy a sequence of digital signals corresponding to the sequence of binary digital values stored in the first array in predetermined synchronization with the transmission of the sound signal.
19. The method of Claim 18 wherein the method further includes deriving the sequence of binary digital values from a text file of words which are spoken in the digital sound file according to a predetermined rule.
20. The method of Claim 19 wherein each binary digital value stored in the array represents one of a plurality of predefined positions of the movable part of the animated toy figure simulating speaking the word sounds in the digital sound file.
21. The method of Claim 20 wherein deriving the sequence of binary digital values is determined based on the position of vowels in the words.
22. The method of Claim 18 wherein the method further comprises synthesizing the digital sound file from a text file by utilizing a dictionary file containing digital sounds for each word in the text.
23. The method of Claim 22 wherein the method further includes building the array from the text file by reading from the dictionary, for each word in the text file, a predefined sequence of binary digital values and storing the sequence in the array.
Description:
COMPUTER-CONTROLLED TALKING FIGURE TOY WITH ANIMATED FEATURES

This Application claims the benefit of U.S. Provisional Patent Application No. 60/014,905, filed April 5, 1996.

FIELD OF THE INVENTION

The present invention relates generally to animated, talking figure toys.

BACKGROUND OF THE INVENTION

Talking figure toys ("talking" refers to any sound-producing characteristic of a human being, such as singing, talking, whispering, crying, etc.) have been known for many years. Originally, in the design of such toys, talking sounds were created using conventional mechanical sound reproduction mechanisms, magnetic tapes and phonographs. More recently, advanced electronic devices such as microprocessors with digitized speech or sound stored on read only memories (ROM) and other types of speech synthesizers have been used. Examples of these types of toys are found in U.S. Pat. No. 5,281,143 (Arad et al.), U.S. Pat. No. 4,802,879 (Rissman et al.), and U.S. Pat. No. 4,318,245 (Stowell et al.). Talking toy figures have also been animated, with mouths and other body parts moved by actuators synchronized with the utterance of speech. There are a number of examples of animated talking toys of this type, such as those disclosed in U.S. Pat. No. 5,074,821 (McKeefery et al.), U.S. Pat. No. 4,923,428 (Curran), U.S. Pat. No. 4,850,930 (Sato et al.), U.S. Pat. No. 4,767,374 (Yang), U.S. Pat. No. 4,775,352 (Curran et al.), U.S. Pat. No. 4,139,968 (Milner), U.S. Pat. No. 3,685,200 (Noll et al.), and U.S. Pat. No. 3,364,618 (Ryan et al.).

One problem of prior art talking figure toys is their limited capacity to store phrases, due to signal storage and memory access methods which require that speech phrases be pre-defined, thus restricting both the entertainment and educational benefits of the toys. Consequently, a child tends to become tired of the toy quickly because of the repetitive sounds, even when animated features are provided. As it is impossible or very difficult to reprogram such talking figure toys after manufacture, such toys do little to stimulate a child's imagination. Furthermore, such toys were not designed to develop a child's intelligence through the use of educational programs, due to their limitations. According to an article in the February 19, 1996 issue of Newsweek, a child's learning window for skills such as math, logic, language, and music opens much earlier than the current education system assumes. "Early experiences are so powerful," says pediatric neurobiologist Harry Chugani of Wayne State University, "that they may completely change the way a person turns out."


Another disadvantage of the prior art is that complex drive systems are used, thus increasing manufacturing cost: DC or servo motors are typically used, which require rather complicated gear systems to actuate the toy to create the animation. Furthermore, servo feedback systems are also often needed in order to synchronize the opening and closing of a toy's mouth and eyes with the speech. Despite the apparent sophistication of these systems, many of the prior art talking figure toys do not move their mouths realistically. While the allure of these sophisticated electronics and their miniaturization has encouraged manufacturers to make more elaborate animated talking toys with complex animation, sound producing, and user interaction features, the costs to make animated toy figures have also risen dramatically.

SUMMARY OF THE INVENTION

The present invention overcomes the foregoing disadvantages to provide a low-cost figure toy which can be used as an early educational tool. Young children's parents or other users are able to define, edit or program the figure toy's speech with any selection of phrases or words, and to synchronize such speech with movement of the figure toy's mouth or other parts such as its eyes, head or arms.

According to one aspect of the present invention, a sound subsystem of a home computer sends the animated talking toy audio signals that have been digitally stored in data storage means by the computer and converted by the computer's sound subsystem to analog signals for driving a loudspeaker, preferably mounted inside the toy. A sound subsystem typically found on multimedia home computers includes hardware and software that allow the computer to communicate with and operate the hardware, and application programs for utilizing the sound processing services of the sound subsystem. The animated talking toy also receives control signals generated by the computer and sent through an input/output (I/O) device or other data interface of the personal computer for causing the toy figure to move its mouth and/or other body parts synchronously with the audio signal. Actuators inside the toy respond to the control signals to move the mouth and/or other body parts of the figure toy. Home computers with audio subsystems capable of playing prerecorded audio files or synthesizing speech from text files are now frequently found in homes. Thus, the toy need not be sold with predefined phrases or complex circuitry. Parents and other users can create or edit speech phrases and body movements of the toy using standard text editors or by selecting prerecorded audio files. By utilizing preexisting personal computers, the toy's manufacturing costs are kept low while its programmability is enhanced.


An additional advantage is that the instructional value of such a toy may be enhanced, if desired, by other features of a personal computer. For example, the computer may be programmed to provide on-screen animation coordinated with the animation and the audio effects of the actual toy, so that the toy can draw a child's attention to an educational program or game shown on the computer's screen. The personal computer can also be programmed to coordinate the sound effects and actions of multiple figure toys. Also, speech recognition capabilities may be utilized either in programming the toy or in interacting with the toy according to a program.

According to another aspect of the invention, an animated figure toy is provided with a relatively simple, two-phase actuator for each of its moving body parts. The actuator does not require a gear system and can be controlled with a simple, binary digital code from a personal computer. The actuator is relatively simple and inexpensive, allowing the toy to be manufactured at a cost substantially less than that of many other prior art animated talking figure toys. Complex communication protocols, control codes and feedback loops are unnecessary, thus eliminating the need for data processing within the toy. This two-phase drive device, controlled with a digital signal, thus enables a home computer to control animation of figure toys and dolls with relatively low manufacturing costs.

The foregoing is intended to be merely a summary and not to limit the scope of the appended claims.

The invention, however, together with the further objects and advantages thereof, may best be appreciated by reference to the following detailed description of a preferred embodiment, taken in conjunction with the appended drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a perspective view of a computer-controlled talking figure toy in accordance with the invention.

FIG. 2 is a sectioned side view of the toy of FIG. 1.

FIG. 3 is a side section of a two-phase driving device used to actuate the moving parts of the toy of FIG. 1, shown in a first position and in a second position.

FIGS. 4(a) and 4(b) schematically illustrate a mechanism for controlling the positions of the arm and mouth of the toy of FIG. 1.

FIG. 5 is a schematic representation of data flow in a sound subsystem and animation control system in the multimedia home computer of FIG. 1.


FIG. 6 is a schematic diagram of the subsystems of the multimedia home computer.

FIG. 7 is a flow chart showing the flow of a program running on a home computer for deriving binary drive-control codes from a text file for control of the animated features of the toy of FIG. 1.

DETAILED DESCRIPTION OF THE INVENTION

Referring to FIGS. 1-7, a computer-controlled animated talking figure toy is shown embodying the concept of the present invention. While the present invention is susceptible to embodiment in various forms, the drawings show and the following description describes a presently preferred embodiment, with the understanding that the present disclosure is to be considered an exemplification of the invention, and does not limit the invention to the specific embodiment illustrated.

To briefly summarize the preferred embodiment, an animated talking figure toy according to the present invention is connected to and controlled by a multimedia home computer. The connection means can be provided either by connection cables or by wireless communication means. The computer directs the movement of certain of the toy's body parts via the drive system based on a series of drive-control codes. These drive-control codes can be derived from the text of the toy's speech based on predefined rules so that the animation of the toy's body parts is synchronized with an audio output derived from the text. A control code translation or derivation software routine running on the computer associates each letter in the text of the toy's speech with a set of binary digital codes which are then used by the computer to control the drive devices. For example, the software routine may generate codes which direct the toy to open its mouth once on every or every other utterance of a vowel in the synthesized speech. The animation of the toy's other body parts can be controlled based on the appearance of certain predefined words or phrases. Files or databases containing these predefined vocabulary groups, such as classes of angry words, happy words, etc., can be established so that the computer can determine whether the current word or phrase belongs to any of the classified groups, and hence determine whether to move certain of the toy's body parts. For example, the computer can cause the toy to move both its hands whenever it says the words "great" or "yes". Moving parts other than the toy's mouth can also be set to move randomly. As a result, only the rules for creating the drive-control code arrays are predefined based on the characters of the toy's speech output, while the actual contents of the toy's speech can be defined by the users and can be composed of any words, phrases, or any type of sound that the standard sound system or card in a multimedia computer is capable of synthesizing. The users can also input the audio signals from sources and/or input devices such as computer keyboards, microphones, CD-ROMs, floppy disks, radio or TV signals, cables, or any other publicly available data lines, networks, or products that can be accessed by a multimedia home computer. The audio output of the figure toy can be regular speech or music such as a song.
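The vocabulary-group idea can be pictured with a short sketch. The following Python fragment is illustrative only: the group contents, the random-movement probability and the parts_to_move helper are assumptions for the sketch, not part of the application.

import random

# Hypothetical vocabulary groups; in practice these could be stored in
# files or databases on the home computer, as described above.
WORD_GROUPS = {
    "hands": {"great", "yes"},            # move both hands on these words
    "eyelids": {"good", "nice", "wow"},   # blink on these words
}

def parts_to_move(word, random_parts=("head",), random_chance=0.1):
    """Decide which of the toy's moving parts to actuate for a given word:
    parts whose vocabulary group contains the word, plus parts that are
    set to move at random."""
    word = word.strip(",.!?").lower()
    parts = [part for part, vocabulary in WORD_GROUPS.items() if word in vocabulary]
    parts += [p for p in random_parts if random.random() < random_chance]
    return parts

# parts_to_move("yes,") -> ["hands"], plus, occasionally, "head"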

Many currently available multimedia computers have pre-installed sound subsystems, or expansion slots in which the hardware for such a system can be installed. Those sound subsystems can be used to convert audio input signals into audio output for the figure toy's loudspeaker in a manner that enables the computer to derive drive-control codes and synchronize transmission of the control codes with the audio.

In a preferred embodiment of a drive mechanism for an animated toy figure, the neutral positions of the toy's moving parts, such as its eye/eyelid, mouth, arm, leg and/or head, are controlled with return springs and movement-limiting fringes or strings attached to them. Each of these moving parts is pivotally connected to the nearby stationary parts of the toy and is manipulated by a two-phase mechanical actuator via cables which cause the part to pivot and thereby articulate. The animation of any of the toy's parts can thus be represented by two states, one being a default state and the other being a temporary or actuated state. For example, the mouth and eyes of the figure toy need only be in "open" (actuated) and "closed" (default) conditions, and the arms need only be in "up" (actuated) and "down" (default) conditions. The drive device has only two stable positions, controlled by data or TTL signals from the computer via an electronic switch connecting the power supply to the drive device.
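In software terms, each moving part can be modeled as a two-state device driven by one binary drive-control code. The sketch below is a minimal Python illustration under that assumption; the class and method names are hypothetical, and the actual switching is performed in hardware by the electronic switch described later.

from dataclasses import dataclass

@dataclass
class TwoPhaseActuator:
    """Minimal model of a two-phase drive: code 0 selects the default
    (spring-biased) state, code 1 selects the actuated state."""
    name: str                 # e.g. "mouth", "eyelids", "arm"
    default_state: str        # e.g. "closed", "down"
    actuated_state: str       # e.g. "open", "up"
    state: str = ""

    def __post_init__(self):
        self.state = self.default_state

    def apply_code(self, code: int) -> None:
        """Drive the part from a single binary drive-control code."""
        self.state = self.actuated_state if code else self.default_state

mouth = TwoPhaseActuator("mouth", default_state="closed", actuated_state="open")
mouth.apply_code(1)   # mouth opens
mouth.apply_code(0)   # the return spring closes the mouth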

Referring now to FIG. 1, the figure toy 1 has a body portion 3, to which a right arm 4, a left arm 4', a right leg 5, a left leg 5', and a head member 2 are attached. The two arms, the two legs, and the head member may each have the optional feature of moving relative to the body portion of the toy. The head member 2 of the figure toy includes a pair of ears 6 and 8, a nose 10, a mouth 12 that can open and close, an optional removable head cover 14 which could be the toy's hair, a pair of eyelids 16 and 18, and a pair of eyes 20 and 22. The eyes and eyelids may also be adapted to move. Each of the arms 4 and 4' includes an upper arm 24 and a forearm 26 so that the arm can bend at the juncture of the upper arm and forearm, or elbow. The body 3 may be covered with interchangeable cloth.

Control of the audio and animation features of the toy 1 is provided by multimedia home computer 30 having, in addition to a processing portion 30a, a keyboard 30b, a monitor 30c, a mouse 30d, and an I/O card 32. The computer 30 is connected to the toy 1 through cable 34 to a sound card and an I/O port (such as a serial or parallel port). Cable 34 includes plugs 35a and 35b for connecting the cable to the computer and a plug 37 for connecting to the toy 1. In order to protect the computer 30 from falling off the table when the child is playing with the figure toy 1, cable 34 can be provided with a breakable connection so that it breaks apart when excessive tension is applied to the cable 34. The connection cable 34 can also be replaced by wireless or remote control devices installed in both the computer 30 and the toy 1 so that the toy can more easily be carried around by the child.

Referring to FIG. 2, the computer 30 controls the animation effects of the figure toy 1 with drive control circuit 36. The drive control circuit 36 selectively controls power to each of a plurality of actuators or drive devices in response to drive-control signals sent from the computer 30. Drive device 38 moves eyelids 16 and 18, drive device 40 moves the mouth 12, and drive device 42 moves the arm 4. In order to simplify the driving mechanism, the two eyelids 16 and 18 may be connected to one drive device 38 so that they open and close together. More drive devices can be provided for moving other moveable parts of the toy. Speaker 44 and microphone 46 are connected to audio output and input connectors on the sound card of the multimedia computer 30 shown in FIG. 1 via the drive control circuit 36 and connection cable 34. The speaker 44 is preferably mounted inside the body portion 3 of the toy, and the microphone 46 is preferably mounted inside the head portion 2. The toy can be mounted on and removed from a base 54. Base 54 contains a power supply unit 50. A rechargeable battery 52 is mounted at the bottom portion of the toy 1 in such a way that when the toy 1 is removed from the base 54, power for the drive system is provided by the rechargeable battery 52, and when the toy 1 is mounted onto the base 54, the power supply unit 50 powers the drive system and charges the rechargeable battery 52. Alternatively, the entire power supply system can be mounted inside the body 3 of the figure toy with the power connection cable combined into cable 34, or regular batteries can be mounted inside the body 3 to replace the entire power supply system described above.

Referring now to FIG. 3, drive devices 38, 40 and 42 in FIG. 2 preferably take the form of two-phase drive device 56. While the drive device 56 is drawn vertically in the figure, it can be positioned horizontally or in any other orientation in a particular application. The drive force is provided by magnetic interaction between the soft core magnet 58 and permanent magnet 60, separated by return springs 62 and 63. The magnet 58 is made of soft magnetic material such as soft iron, and its function is to enhance the magnetic field generated by solenoid 66. Thus, driving force is also provided by the magnetic interaction between the magnet 60 and the magnetic field from the solenoid 66. Both magnets 58 and 60 have the shape of a rod. The magnet 60 is the moving element of the device. It is made of permanent magnetic material, formed in a rod shape with a T-shaped end 64. Magnet 58 is firmly attached to the lower portion 68 of the cylindrical interior of the frame 70 so that it will not move during operation. The solenoid 66 is wound against the exterior of the lower frame portion 68. The length of the solenoid 66 could be longer or shorter than the magnet 58, depending on the detailed requirements for the drive device. The diameter of magnet 58 is substantially equal to that of the T-shaped end portion 64 of magnet 60 and just fits within the lower portion 68 of the cylindrical interior of the frame 70. Bore 74 of the upper portion of frame 70 has an open end at the top and an inner diameter slightly larger than the elongated shaft portion of magnet 60. Washer 76 prevents the return spring 63 and the T-shaped end portion 64 of magnet 60 from sliding into bore 74. The frame 70 can be made of metal if large heat dissipation is needed; otherwise, durable plastic materials can be used.

The solenoid 66 is connected to the power supply 50 (FIG. 2) through an electronic switch 82. This switch is a simple logic circuit which receives a control signal 83 generated by computer 30 as a logic input. When the switch 82 is in a first, default position or state, the current flows in a first direction through the solenoid 66 to generate a magnetic field which repels the magnet 60 toward an "open" condition as shown in position (a), releasing tension on a string or cable 84. When the switch 82 is in a second position or state, the current flows through the solenoid in the reverse direction, causing the magnetic field generated by the solenoid to attract the permanent magnet 60 and causing the drive device to move to a "closed" condition as shown in position (b), pulling the string 84. Drive control circuit 36 includes at least one electronic switch 82 for each drive device in the toy. However, an extra electronic circuit can be added in the drive-control system if the load of two-phase drive device 56 requires better control of its strength, its speed, or its response times.

The drive connection string 84 attaches the moving permanent magnet 60 of the drive device 56 to its load, such as any of the toy's moving parts.

Reference is now made to FIGS. 4(a) and 4(b) in conjunction with FIG. 2 and FIG. 3. The principle for designing any of the moving parts of toy 1 is to pivotally connect each moving part to an adjacent part, and to couple the two parts with return springs and limit strings for controlling the neutral positions and the movement range of the moving parts. Each moving part is driven by a drive device in the form of drive device 56 via a connection string or cable such as string 84. Because the manufacturing cost of the proposed two-phase drive device is low, each independent moving part of the toy can be actuated with its own two-phase drive device 56.

First referring to FIG. 4(b), the mouth 12 is articulated by drive device 40 (FIG. 2) pulling the lower jaw 13 of the mouth 12 via connection string 86. The two ends of return spring 11 and the two ends of the limit string 17 are attached to the upper jaw 15 and lower jaw 13 of the mouth 12. The edge of the upper jaw 15 and the length of the limit string 17 define the movement range of the lower jaw 13, which is pivotally connected to the upper jaw 15 and the rest of the head 2.

Referring to FIG. 4(a), unlike the mouth, the movement of the arm 4 involves two degrees of freedom: the movement of the forearm 26 relative to the upper arm 24, and of the upper arm 24 relative to the body 3. Thus, two sets of return springs and limit strings are applied. The two ends of return spring 27 and limit string 29' are attached to the forearm 26 and the upper arm 24 on the lower side of the pivot 31 that connects the two portions of the arm 4. The limit string 29 connects between the forearm 26 and the upper arm 24 on the opposite side of the pivot 31 from spring 27 and string 29'. The lengths of strings 29 and 29' control the maximum and minimum angles between the forearm 26 and the upper arm 24. In a neutral position, the angle formed by the upper arm and forearm is about 150 degrees, with limit string 29 being pulled taut by the return spring 27. Return spring 35 and limit string 39' attach the upper arm 24 to a back portion of the body 3. Limit string 39 connects between the upper arm 24 and a front portion of the body 3. Return spring 35, limit string 39 and limit string 39' are positioned in such a way that string 39 is opposite pivot 41 and connects the upper arm 24 to the body 3. The lengths of spring 35 and string 39' are chosen in such a way that, in the arm's neutral condition, it is positioned vertically with string 39 being pulled taut by the return spring 35. Drive connection string 88 is attached to the forearm via the pivot 41. The strengths of the springs 35 and 27 are preferably close to each other so that when the drive device 42 pulls the connection string 88, the arm 4 moves with two degrees of freedom as described above.

Referring now to FIG. 6, synchronous control of the moving parts with playback of speech is provided by home computer 30. The home computer 30 is, preferably, a multimedia home computer. It can be an IBM-compatible home PC, a Macintosh, a network computer (NC) or another type of computer installed with a multimedia system. The home computer 30 has the basic functions of a conventional computer, such as data processing, data storage and data movement, and operating means to control these functions. Home computer 30 includes the following basic components: at least one central processing unit (CPU) 100 to provide the data processing and operating control functions; memory 102 for storing digital signals that are to be directly accessed by the computer CPU; at least one hard disk drive 104 to permanently store digital files; an I/O port 106 such as a parallel or serial port for moving data between the computer and its external environment; system interconnection such as a bus 110 for communication among the CPU, memory and all of the I/O devices; expansion bays or slots for physical insertion of additional peripheral devices; and a motherboard, the main circuit board of the computer, on which the computer's main electronic components are installed. The computer 30 also includes such basic peripheral devices as CD-ROM drive 112 for receiving a CD-ROM 113 on which data, text, sound, program and other types of files may be prerecorded, monitor 30c, keyboard 30b, mouse 30d and a network connection device 115 for connection with public data networks such as the Internet or private data networks. The home computer also includes all necessary software for operating its various components and peripheral devices, and for executing application programs.

To provide multimedia capabilities, the computer 30 includes sound and graphic/video systems which comprise hardware installed as peripheral devices and corresponding software. For example, video graphics adaptor or card 114 includes all necessary memory and circuitry for rendering, with the appropriate software drivers, graphics on monitor 30c. Similarly, audio or sound card 116 includes memory and other circuitry for generating audio signals which, once amplified, drive a loudspeaker. Software drivers allow the CPU to communicate with and operate the sound card, and additional software allows the CPU to provide processing which cannot be handled directly on the card. Together, the sound card hardware and firmware, and the software running on the CPU, are referred to as the sound subsystem.

Referring now also to FIG. 5, the multimedia sound subsystem 120 of home computer 30 includes several services which are accomplished through a combination of hardware on the sound card and software programs. Sound manager 122 plays back digitally recorded sounds; that is, it generates from a digital file or signal an analog signal for speaker 44. It may also have circuitry to alter characteristics of the sound such as its loudness, pitch, timbre and duration. Speech manager 124 has the ability to convert text files into digital sound files representing spoken words for playback by the sound manager 122. The speech manager includes a speech synthesizer which creates digital sound files based on sound dictionaries 125. Sound input manager 126 records digital sound files through microphone 128 or other sound input device. The digital sound files can be provided to sound manager 122 or to a speech recognition or text converting system 130 for creating an ASCII text file of spoken words.
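The data flow of FIG. 5 can be summarized with a few placeholder functions. None of these are real APIs of any particular sound subsystem; they only mark where each service sits in the pipeline.

def synthesize_speech(text):
    """Speech manager 124: text -> digital sound file (via sound dictionaries 125)."""
    raise NotImplementedError

def play_sound(sound):
    """Sound manager 122: digital sound file -> analog signal for speaker 44."""
    raise NotImplementedError

def record_sound():
    """Sound input manager 126: microphone 128 -> digital sound file."""
    raise NotImplementedError

def recognize_speech(sound):
    """Speech recognition system 130: digital sound file -> ASCII text of spoken words."""
    raise NotImplementedError

# Text source:       play_sound(synthesize_speech("this is an apple"))
# Microphone source: text = recognize_speech(record_sound())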

To synchronize movement of the animated body parts of toy 1 to spoken words, home computer 30 utilizes a timed sequence of binary digital drive control signals represented by binary drive control codes arranged in an N-dimensional array 134, where N is the number of drive devices in toy 1. The array is stored in memory. The transmission of the sequence of binary digital drive signals is synchronized with spoken words to be played back through the toy by sound subsystem 120. During operation, the control and processing system of computer 30 causes the sound manager 122 to process a digital sound file, or portion thereof, representing the speech to be played back through speaker 44, and sequentially sends each drive control signal, or set of signals, from array 134 at timed intervals to the drive control circuit 36 of toy 1 through the I/O device 106, thus providing the synchronization. The digital sound file, or portion thereof, may be one word, a phrase or several sentences. For example, the CPU may direct the digital sound file to be played back one word or syllable at a time for proper synchronization with transmission of the drive control codes.
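A minimal sketch of this synchronization loop is given below. The helpers play_sound_chunk and send_codes_to_io_port are hypothetical stand-ins for the sound subsystem 120 and the I/O device 106; the loop simply interleaves playback of each word with timed transmission of that word's columns of drive-control codes.

import time

def send_codes_to_io_port(codes):
    """Hypothetical stand-in for writing one set of binary drive-control
    signals (one per drive device) to the I/O device 106."""
    print("codes ->", codes)

def play_sound_chunk(sound_chunk):
    """Hypothetical stand-in for asking the sound subsystem 120 to play
    one word or syllable of the digital sound file."""
    pass

def play_synchronized(word_sounds, code_columns, seconds_per_code):
    """word_sounds[i]      : digital sound for word i
       code_columns[i]     : list of code sets for word i, one per time step
       seconds_per_code[i] : time interval between code sets for word i"""
    for sound, columns, dt in zip(word_sounds, code_columns, seconds_per_code):
        play_sound_chunk(sound)            # start playback of this word
        for codes in columns:              # then clock out its drive codes
            send_codes_to_io_port(codes)
            time.sleep(dt)                 # keep the codes in step with the audio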

There are three types of source files for the toy's speech: a prerecorded digital sound file with drive-control codes, such as a prerecorded program; a text file, entered for example via the computer's keyboard or stored on a disk drive, and which contains no drive-control codes; and a digital file of spoken words without drive-control codes, input for example from the computer microphone 128. In the case of the prerecorded sound file with drive codes, no additional processing is required.

In the case of a text file source, such as the text file 132, the home computer 30 derives the data array 134 from the ASCII text file 132 of words to be spoken according to a method to be subsequently described in connection with FIG. 7, based on the arrangement of the vowel letters in the speech text. After the array of drive control codes 134 is created and stored in memory, a digital sound file is synthesized by speech manager 124, which can then be played back by the sound manager 122. The speech can be synthesized using any sound dictionaries stored in memory. Both the digital sound codes and the drive control code array are saved in memory for synchronized playback in the manner described above. Alternatively, to avoid deriving the drive control codes, drive-control codes for each word could be included in the digital sound dictionary, so that the synchronization process need only sequence the digital sound codes and the drive-control codes from the dictionary according to the source text before sending these signals to the output devices.
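Under the dictionary alternative just mentioned, no derivation step is needed: the per-word drive codes are simply sequenced along with the per-word sounds. A minimal sketch, assuming a hypothetical dictionary that maps each word to its waveform and its predefined mouth codes:

# Hypothetical dictionary entries: word -> (digital sound, predefined mouth codes).
# The "waveform" strings are placeholders for real audio data.
SOUND_DICTIONARY = {
    "very": ("<waveform:very>", [1, 0, 0, 0]),
    "good": ("<waveform:good>", [1, 0, 0, 1]),
}

def sequence_from_text(text):
    """Build the playback sequence and the drive-control codes directly
    from per-word dictionary entries (no separate derivation step)."""
    sounds, codes = [], []
    for word in text.lower().split():
        sound, word_codes = SOUND_DICTIONARY[word]
        sounds.append(sound)
        codes.extend(word_codes)
    return sounds, codes

sounds, codes = sequence_from_text("very good")
# sounds -> ['<waveform:very>', '<waveform:good>']
# codes  -> [1, 0, 0, 0, 1, 0, 0, 1]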

If the source is a sampled or digitized voice from microphone 128 or any other live or recorded source, the speech recognition software system 130 on the home computer translates the sampled or digitized voice into the text file 132, which is then processed in the manner described immediately above.

FIG. 7 illustrates the process executed by computer 30 for deriving from a text file, such as an ASCII character text file, a drive control code array having, for purposes of illustration only, a single dimension for controlling the mouth 12 of toy 1, as well as a digital sound file, and then playing them back. Beginning at step 140, an ASCII text file is created or read. From the text file, at step 142, the total number "N" of letters, symbols and null spaces in the text, as well as the total number of words "n", is determined. A drive code array X(N), a time code array C(N), an array t(n) for the time period of each word and an array m(n) for the number of letters in each word are created. Digital waveforms for each word are looked up in a dictionary and stored in an array or class S(n) at step 146. In step 148, for each word i from i = 1 to i = n, the time period of the digital waveform for the word stored in the array or class S(n) is determined and stored in the array t(i), and the number of letters in the word is stored in array m(i). Also, the total time T for playback of the entire array is determined. The drive control array X(N) is then built at step 150 by assigning to each element a binary digit according to rules based on the arrangements of vowels in the text file. These rules will be discussed subsequently in connection with several examples. The time code array C(N) is then built at step 152. Each element in array C(N) corresponding to a letter of a particular word i is assigned the value t(i)/m(i). At step 156, those elements of C(N) which represent null spaces or typographic symbols are assigned a value of T/N. Then, at step 158, the computer initiates playback of the digital sound array or class S by the sound manager 122 and transmission of drive control signals corresponding to the drive control codes stored in array X(N) from the I/O port of the computer, according to the timing provided by array C(N).

The dimension, or the number of rows, of the drive-control code array 134 is made equal to the number of drive devices in toy 1. For example, if only the mouth 12 needs to be moved, the drive-control code array will be one dimensional. If movement of the mouth 12 and the two eyelids 16 and 18 is required, and 16 and 18 can move together, two drive devices are needed and the dimension of the drive-control code array would be 2. If, in the prior case, 16 and 18 need to move independently, three drive devices are applied and the dimension of the array would be three.
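The flow of FIG. 7 can be sketched as follows. This is an illustrative Python rendering only: it assumes the text contains just letters and single spaces, dictionary[word] is assumed to return a (waveform, duration) pair, and build_drive_codes, play_word and send_code stand in for step 150, the sound manager 122 and the I/O port 106.

import time

def derive_and_play(text, dictionary, build_drive_codes, play_word, send_code):
    """Sketch of steps 140-158 of FIG. 7 for a one-dimensional (mouth-only)
    drive-control code array."""
    words = text.split()                       # step 142
    N, n = len(text), len(words)               # letters + null spaces, word count

    S = [dictionary[w][0] for w in words]      # step 146: per-word waveforms
    t = [dictionary[w][1] for w in words]      # step 148: per-word durations
    m = [len(w) for w in words]                # letters per word
    T = sum(t)                                 # total playback time

    X = build_drive_codes(text)                # step 150: N binary drive codes

    # Steps 152-156: letters of word i are clocked at t[i]/m[i] seconds each;
    # null spaces get T/N seconds.
    C = []
    for i in range(n):
        C.extend([t[i] / m[i]] * m[i])
        if i < n - 1:
            C.append(T / N)                    # the space separating the words

    # Step 158: play back each word while clocking out its drive codes.
    pos = 0
    for i in range(n):
        play_word(S[i])
        steps = m[i] + (1 if i < n - 1 else 0) # the word plus its trailing space
        for _ in range(steps):
            send_code(X[pos])
            time.sleep(C[pos])
            pos += 1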


The rules for creating the drive-control code arrays should depend, in part, on how fast the speech is to be spoken, the response time of the drive devices, or, if the speech is of a musical type, the speed of processing the speech phrases by the computer 30. The movement of certain of the toy's body parts, such as its two arms 4 and 4', its two legs 5 and 5', its eyes 20 and 22, its eyelids 16 and 18, or its head 2, may be set to occur randomly or based on whether the current word or phrase in the audio output belongs to any of the predefined vocabulary groups. The principle of creating the drive-control code array can be exemplified by, but is not limited to, the following three cases, with the sample audio output speech "yes, Mary, what you see on the screen is an apple, .. very good":

In the first case, it is assumed that only the mouth 12 of the figure toy can move, and the speed of the output speech is fast. The drive-control code array is one dimensional. In order to simulate the voice of a human, no character or null space in the audio text may be assigned the code "1" unless it is a vowel letter. Since the speech is fast and the response time of the drive device 56 is finite, a constraint is placed on the derivation: any two "1"s in the array shall be separated by at least three "0"s unless the two "1"s are next to each other; otherwise, the second "1" shall be changed to "0". When two "1"s are next to each other, the corresponding drive device will remain in the "1" or open condition, since it has a finite response time.

Below is an example of such a drive control code array:

In a second example, the mouth 12 and the two eyelids 16 and 18 are assumed to be movable, and the speed of the output speech is also fast. In this case, the drive-control code array has two rows. The first row of the two-dimensional array is for the mouth control, and the second row is for the control of both eyelids. The rules defined in the first example apply to the first row of the drive-control codes. Concerning the second row, it is defined that all the letters in the text file shall be assigned "0" unless the corresponding word belongs to a special group of words which contains the words "yes", "good", "nice", etc. This special word vocabulary can be stored in a file or in a database in the computer memory. In addition, it is defined that a "1" in the second row of the array shall be changed to "0" if the code in front of it is also "1". So the drive-control codes for the example phrase would be:
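The second-row rule can be sketched as follows (not the array shown in the application; the vocabulary set is illustrative, and the rule that a "1" preceded by a "1" becomes "0" is read here as keeping only the first "1" of each run):

SPECIAL_WORDS = {"yes", "good", "nice"}   # illustrative vocabulary group

def eyelid_row_fast(text):
    """Second case, second row (eyelids): assign "1" to every letter of a
    word in the special vocabulary and "0" elsewhere, then change a "1"
    to "0" when the code in front of it is also "1"."""
    raw = []
    words = text.split()
    for i, word in enumerate(words):
        letter_code = 1 if word.strip(",.!?").lower() in SPECIAL_WORDS else 0
        raw.extend([letter_code] * len(word))
        if i < len(words) - 1:
            raw.append(0)                 # the null space after the word
    row = []
    for i, code in enumerate(raw):
        prev = raw[i - 1] if i > 0 else 0
        row.append(1 if code == 1 and prev == 0 else 0)
    return row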

In a third case, the mouth 12 and the two eyelids 16 and 18 can open and close, and 16 and 18 move together, but the speech is slow. Certain of the rules are changed. For example, if two "1"s are next to each other in the mouth control row (the first row), then the one on the right is changed to "0", and any two "1"s shall be separated by at least two "0"s. In the second row of the array, the appearance of "1" shall be random, but the total number of "1"s is kept to less than 40% of the total number of codes. This is illustrated as follows:
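These third-case rules can likewise be sketched (illustrative only; the vowel set and the random-placement scheme are assumptions):

import random

VOWELS = set("aeiouAEIOU")

def mouth_row_slow(text):
    """Third case, mouth row for slow speech: a vowel gets a "1" unless
    fewer than two "0"s would separate it from the previous "1" (so
    adjacent "1"s collapse to the left one)."""
    row, last_one = [], None
    for i, ch in enumerate(text):
        code = 1 if ch in VOWELS else 0
        if code == 1 and last_one is not None and (i - last_one - 1) < 2:
            code = 0
        if code == 1:
            last_one = i
        row.append(code)
    return row

def eyelid_row_slow(length, max_fraction=0.40):
    """Third case, eyelid row: "1"s appear at random positions, with the
    total number of "1"s kept below 40% of the total number of codes."""
    n_ones = random.randint(0, max(int(length * max_fraction) - 1, 0))
    ones = set(random.sample(range(length), k=n_ones))
    return [1 if i in ones else 0 for i in range(length)]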


In the above examples, the phrase "Mary" is the name of the child, which can be input from the computer keyboard or spoken into microphone 46. The computer can be programmed to direct its sound system to use a special tone for the phrase "Mary". The same feature can be applied to other input phrases that the users want to be pronounced specially, such as the age of the child, the names of his or her parents, or the name of a living person shown on the computer monitor. Various methods can be used to input the words that need to be pronounced specially: the user can use computer-controlled devices such as the keyboard, the mouse or the microphone to fill out tables on the monitor, to highlight phrases in the text files, or to answer questions asked by the computer. Software routines can be provided to detect whether any of the phrases in the text file need to be pronounced with special tones. For example, the sound system 122 of a multimedia computer can be programmed to have three types of sound: the first type is a regular voice for the toy; the second one may have a funny tone; and the third one may sound scary. In the text file of the above sample phrases, the user can highlight the phrase "Mary" and then choose "funny" from a special-tone menu or click on a predefined icon.

The toy's speech can be in any human language, and it can be provided with a musical background. For example, the audio effect of the figure toy can be a song. In order to increase the entertainment and educational value of a figure toy according to the present invention, software running on the computer can also create motion pictures and sound effects on the monitor 30c in coordination with the animation and audio effects of the toy.

Also, a child's photograph can be shown on the monitor. The photograph can be input into the computer through an image scanner or a digital camera. The advantage of providing this motion picture background is that the body parts of the figure toy do not need to move all the time in order to hold the child's attention. Thus, the entire system can run for a much longer time without overheating the drive devices.

With the help of the animated talking toy, children's attention can easily be drawn to the computer monitor in order to participate in computer programs which develop a child's language, math, music, memory, or reasoning abilities at a younger age without much help from teachers or parents. The system can be programmed to let the talking figure toy explain figures shown on the computer monitor. These teaching programs can be contained in entertainment programs designed to capture a child's attention. For example, to teach a child the meaning and pronunciation of the word "apple", the figure toy can say "this is an apple, Mary" with a very funny tone, and in the meantime, a simulated figure of an apple is shown on the computer monitor. Other figures such as a background or talking animated beings can also be simulated on the computer monitor to interact with the child or the figure toy.

Additional optional features may also be provided to the animated talking figure toy of the present invention. In order to provide more interaction between children and the toy, speech recognition system 130 can be used to enable the computer to understand words such as "yes", "not", "stop", etc. from a child. Devices such as sound-activating sensors or touch detectors can be added to the embodiment so that the animation and the talking of the figure toy are activated when the child starts talking to the figure toy or when the child touches certain parts of the toy. A joystick-type device or a special keyboard can be connected to the computer to enhance the interactive features of the system. Lights and heating devices may also be enclosed in the embodiment to simulate emotion from the figure toy. For example, the lights can illuminate the toy's eyes when there is an exciting word from the toy. The heating device can warm up the chest portion of the toy to express love. A remote control system can be added to the computer so that teachers or parents can remotely control the progress of the program based on the reaction of the children to the toy, or they can remotely change their presentation notes displayed on the computer monitor. The cloth 28 and the hair 14 of the toy 1 may be interchangeable to prevent the children from becoming tired of the same figure. The I/O card 32 can be made to have many channels to control more than one toy, thus allowing a story or a "movie" to be presented to the children with "live" dolls or animals.

The invention is not limited to the above-described examples. Rather, it will be understood that various substitutions, rearrangements and other changes to the forms and details of the examples can be made by those skilled in the art without departing from the scope of the invention as set forth in the appended claims.