Title:
SYSTEM AND METHOD FOR GENERATING INTERACTIVE ANIMATED INFORMATION AND ADVERTISEMENTS
Document Type and Number:
WIPO Patent Application WO/2000/070477
Kind Code:
A1
Abstract:
A digital animation system relies on digital media data objects (12) called WordChips (12). Each WordChip (12) contains basic digital media Data (42) that may be either a binary data file (16), HTML/Javascript code (18), executable code (20), or plain text (22), as well as MetaData high level information (44). Each WordChip (12) also contains identifying information (46) as well as elements (34, 38, 40) for interacting with other WordChips. A script (36) controlling WordChip behavior may also be added. The digital system allows the user to create WordChips (12) from basic data as well as to form Metaphors (56) from other WordChips. The WordChips (12) may be combined to form Sentences (58) that include instructions (60) for specifying interaction between WordChips. Finally, a Story (62) may be authored from a raw animation file that is modified by adding Slots (70) for receiving WordChips (12). Subsequent users of the Story (62) insert their own WordChips to complete the Story (62). An animation engine (30) then produces an animated presentation (32) based on the completed Story (62).

Inventors:
LAVINE ADAM (US)
CHEN YU-JEN DENNIS (US)
RODGERS DWIGHT (US)
Application Number:
PCT/US2000/013055
Publication Date:
November 23, 2000
Filing Date:
May 12, 2000
Assignee:
FUNYELLOW INC (US)
LAVINE ADAM (US)
CHEN YU JEN DENNIS (US)
RODGERS DWIGHT (US)
International Classes:
G06F17/24; G06Q30/00; (IPC1-7): G06F15/00
Foreign References:
US 5903892 A (1999-05-11)
US 5818512 A (1998-10-06)
Other References:
ABOWD ET AL.: "Teaching and learning as multimedia authoring: the classroom 2000 project", ACM MULTIMEDIA 96, pages 187 - 198, XP002930455
JU ET AL.: "Analysis of gesture and action in technical talks for video indexing", IEEE, 1997, pages 595 - 601, XP002930456
Attorney, Agent or Firm:
Woodbridge, Richard C. (P.C., P.O. Box 592, Princeton, NJ, US)
Claims:
We claim:
1. A media data assembly system (10) comprising the following: creating means (14) for creating digital media data objects (12), where said media data objects comprise digital media information, metadata information, and data for interacting between said media data objects; storage means (24,26) for storing said data objects (12); retrieval means (24,26) for retrieving said data objects (12); display means (28) for displaying said data objects (12) for user preview; selection means (14) for selecting data objects (12) for a given presentation; and, rendering means (30) for rendering a digital media presentation (32) based on said objects, wherein the resulting digital media presentation may be easily assembled and edited.
2. The system of claim 1, wherein said selection means includes querying means for querying said user; and, means for selecting said data objects based on user responses to said queries.
3. The system of claim 2, wherein said display, selection, and querying means are displayed using an HTML web page.
4. The system of claim 1, wherein said storage means (24,26) include a data dictionary (26) for public storage.
5. The system of claim 1, wherein said storage means (24,26) include a user database (24) for personal storage.
6. The system of claim 1, wherein said creation means (14) permit the user to create said data object from binary data sources.
7. The system of claim 1, wherein said creation means (14) include means (56) to form said data objects (12) from other already-created data objects (12).
8. The system of claim 1, wherein said rendering means (30) reside on a different computer than said creation (14), storage (24,26), retrieval (24,26), display (28), and selection (14) means and communicate via a network with said creation, storage, retrieval, display and selection means.
9. A data object (12) for storing and manipulating binary media data comprising the following: raw digital media data (42), wherein said data corresponds to basic content; high-level information (44) that interacts with similar information found in other similar data objects; basic identification data (46) associated with said data object (12); communication means (34) for communicating between data objects (12); a field for storing a script (36) of instructions for manipulating the digital media data objects (12); generic user interface information (38); and, a field for storing temporary parameter conditions and actions (40), wherein said data objects (12) may be used to describe and store different types of digital media (16,18,20,22).
10. The data object (12) of claim 9 wherein the raw digital media data comprises a static binary image (16).
11. The data object (12) of claim 9 wherein the raw digital media data comprises a series of binary images in animation (16).
12. The data object of claim 9 wherein the raw digital media data comprises an HTML script (18).
13. The data object of claim 9 wherein the raw digital media data comprises a sound file (16).
14. The data object of claim 9 wherein the raw digital media data comprises a plug-in (20).
15. The data object of claim 9 wherein the raw digital media data comprises a text file (22).
16. The data object of claim 9 wherein the raw digital media data comprises a Flash file (16).
17. The data object of claim 9 wherein the raw digital media data comprises a plurality of multimedia data types.
18. The data object of claim 9 wherein the data object (56) is formed as a combination of a plurality of other similar data objects (12).
19. A data sentence (58) for grouping multiple data objects (12) comprising the following: a first data object (12) as recited in claim 9; and, at least one other data object (12) as recited in claim 9, such that said data objects (12) interact with each other in a user-supplied sequence.
20. The data sentence of claim 19, wherein said data objects (12) are sequentially controlled by instructions (60) provided by a user.
21. The data sentence of claim 19, wherein at least one of said data objects (12) represents a multimedia effect.
22. A digital story (62) comprising the following: a plurality of data object receiving means (70) capable of receiving digital media data objects (12); and, a plurality of additional digital media elements, such that an end user may insert digital media data objects (12) into said data object receiving means to provide specifications for producing a complete animated presentation.
23. The digital story (62) of claim 22 wherein said data object receiving means (70) can accept any digital media data objects (12).
24. The digital story (62) of claim 22 wherein said data object receiving means (70) can accept any digital media data objects (12) that match specified keywords.
25. The digital story (62) of claim 22 wherein said data object receiving means (76) can accept only digital media data objects (12) on an enumerated list (77) of particular data objects (12).
26. The digital story (62) of claim 22 wherein said data object receiving means (72) are closed to end user editing, such that only a creator of the Story may insert or change the data objects found in said data object receiving means.
27. The digital story (62) of claim 22 wherein several of the data objects (12) are grouped into a data sentence (58).
28. A method for producing a rendered animated presentation (32) from a plurality of digital media data objects (12) and a raw animated data file (66) comprising the steps of: (a) creating a raw animated data file (66) with portions (68) designated as blank; (b) adding receiving means (70) to said data file (66) for receiving a plurality of digital media data objects so that said receiving means (70) replace said blank portions (68), thereby creating a story file (62); (c) selecting a plurality of digital media data objects (12) from a repository of said digital media data objects (12), where each said digital media data object (12) comprises basic digital media data (42) as well as (1) linking means (34) for communicating with other data objects (12), (2) code (36) that controls the behavior of that data object (12), (3) identification information (46) for said data object (12), and (4) parameter settings (40) for said data object (12); (d) inserting said selected digital media data objects (12) into said receiving means (70) found within said story file (62); and, (e) rendering an animated presentation (32) from the story file (62) of selected digital media data objects (12).
29. The method of claim 28, further comprising the steps of: (f) editing the rendered presentation (32) by allowing for substitution of other digital media data objects for those in the original story (62); and, (g) presenting a preview (15) of the rendered presentation (32).
30. The method of claim 28 wherein said creation step (a) is accomplished via commercially available animation software (80).
31. The method of claim 28 wherein said creation step (a) is accomplished using a web-based application.
32. The method of claim 28 wherein said adding receiving means (70) includes specifying limits on which digital media data objects (12) may be inserted into said receiving means (70).
33. The method of claim 28 wherein said specifying step comprises limiting digital media data objects (12) based on keyword information found in said data object.
34. The method of claim 28 wherein said specifying step comprises limiting digital media data objects (12) by specifying an enumerated list (77) of particular data objects that can be inserted into said receiving means (70).
35. The method of claim 28 wherein said specifying step comprises inserting particular digital media data objects (12) into particular receiving means (70) and preventing further editing of said receiving means (70).
36. The method of claim 28 wherein said selecting step (c) includes selecting from a public dictionary (26) of previously created data objects (12).
37. The method of claim 28 wherein said selecting step (c) includes selecting from a private database (24) of previously created data objects (12).
38. The method of claim 28 wherein said rendering step (e) comprises the following steps: (a) creating a sequence of rendering frames based on said raw animation file (66); (b) spatially transforming all of the story's digital media data objects (12) for each frame in said sequence; and (c) color transforming all of the story's digital media data objects (12) for each frame in said sequence, such that the resulting sequence of frames contains the digital media data object animated within the frame.
39. The method of claim 28 wherein said rendering step (e) is achieved on a remotely located server accessible via a network.
40. The method of claim 28 wherein said steps (c)-(d) are displayed and executed via an Internet web page (64).
41. The method of claim 28 wherein steps (a)-(b) are completed and the story file (62) is stored in a database at a time prior to completing steps (c)-(g).
42. A method for creating and storing a digital media data object (12) that contains basic content data along with identification (46) and metadata (44) information comprising the following steps: a) providing the basic content data (42) for the data object (12); b) compiling said basic content data (42) with high level information (44) for use in communicating and interacting with other data objects; c) previewing said compilation before final editing; and, d) storing said compilation as a data object (12) in a database (24,26) for further retrieval, in order to allow for use of the data object (12) at a future time.
43. The method of claim 42 wherein basic content data comprises binary multimedia data (16) and providing said basic content data comprises creating the binary multimedia data from binary multimedia software.
44. The method of claim 42 wherein basic content data comprises browser-readable code (18) that produces a desired multimedia effect and said providing step (a) comprises generating said code.
45. The method of claim 42 wherein basic content data comprises executable code (20) for producing a desired multimedia effect and said providing step comprises generating said code using a software development tool.
46. The method of claim 42 wherein basic content data comprises executable code (20) for producing a desired multimedia effect and said providing step comprises generating said code using a compiler.
47. The method of claim 42 wherein said storing step (d) comprises storing said data object in a public dictionary (26) for all users.
48. The method of claim 42 wherein said storing step (d) comprises storing said data object in a user's own personal database (24).
49. The method of claim 42 wherein said compiling step (b) includes adding keyword, name, type and author information for said digital media data object (12).
Description:
TITLE: SYSTEM AND METHOD FOR GENERATING INTERACTIVE ANIMATED INFORMATION AND ADVERTISEMENTS

BACKGROUND OF THE INVENTION

1. Field of the Invention.

The present invention lies in the area of modular creation of digital media.

2. Description of the Related Art.

Digital media is pervasive; anyone who surfs the web, turns on the television, or plays with a multimedia CD-ROM has experienced digital media. In its most effective form, digital media is entertaining, enlightening, and educational. Currently, there exist many competing standards for creating and storing digital media, which fall into one of several major categories. The first category is the pixel-based image, where each pixel comprises a dot on a computer screen. Each image is stored as hundreds of thousands of such dots, and this type of format, also known as a bitmap or pixmap (for pixel map), is generally used for scanned photographs or digitally generated pictures. Typical examples of this format are JPEG, GIF, and PICT.

A second category of digital media is the vector-based image, where the digital media is stored as a series of lines and curves, also known as splines or Beziers. These lines and curves can form or define regions that may be filled with colors and gradients. A vector-based image is generally better than a pixel-based image for representing a drawing and is thus a popular format for clip art. Vector-based images are also more compact than pixel-based images, as vector-based images are based on mathematical descriptions. Typical examples of vector-based format are PostScript and Macromedia Flash, while Adobe Illustrator and CorelDRAW are popular vector drawing programs.

A third category of digital media is the digital video format, where multiple pixel-based image frames are put together to represent video. Digital video, which is really a variation on the pixel-based image format, is often used for CD-ROM titles, including games and multimedia. In addition, streaming video formats such as RealVideo, QuickTime and AVI essentially belong to the digital video format category. Furthermore, most computer animations are stored in digital video format.

There are many ways to create digital media, but they generally follow a similar pattern: (a) there is a canvas, source, or scene file (a binary image or vector file) in which digital artwork is created or imported; (b) this canvas or scene file is modified and then exported for printing or for a Web page. Should animation be involved, a keyframer or timeline is used to modify the scene or canvas to account for changes in the image over time. Animation software then interpolates over these changes to produce the final animated result.

Even so, digital media tends to be difficult to create. Authoring tools often use a "bottom-up" approach, where the scene file must be created from scratch. Once created, a scene or source file is often difficult to modify and, in addition, takes up large amounts of hard drive space. Media authoring tools are usually complex and require considerable investments of time and money. Moreover, authoring tools typically are disconnected from each other and may not communicate among themselves. Furthermore, clip art, which could save time in the creation process, is usually difficult to customize. Thus, creating digital media is often an expensive and time-consuming task.

SUMMARY OF THE INVENTION

Briefly described, the invention comprises a system and method for creating, storing, and retrieving digital media for the purpose of generating animations. The invention comprises a digital media data object system as well as the data objects themselves, called WordChips.

Each WordChip contains fields for basic Data and high-level MetaData, as well as pipes for communicating with other WordChips (Frequency Pipes), user interface information (PMAP), identifying information (Standard Info), object parameters (States and Verbs), and a script (ActionScript) for instructing the WordChip on performing basic techniques. The Data, or basic digital media data, can be formed from a variety of sources such as binary multimedia files, HTML/Javascript code, executable code or plug-ins, and plain text files. An editor (ALICE) is provided for putting these elements together.

Once formed, each WordChip is stored in both a public dictionary and a private database.

The user can then create Metaphors, which are singular WordChips that are defined or derived from other WordChips. Sentences may be formed from basic WordChips or Metaphors, and may be used to specify a sequence of images as well as background or other effects. A Story, which is a combination of WordChips and Sentences with background animation elements, may be created and saved for future customization. A story author may use a commercially available animation tool to rapidly create the Story's background animation elements, then use ALICE to specify which types or genres of WordChips fit into the Story. A subsequent user of the Story can then retrieve the Story and fill in particular WordChips to customize the Story for his or her particular use. A Story that has been filled in is then sent to the Media Engine for a final preview and, if satisfactory, production of the final animation in a number of different formats.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 illustrates an overview of WordChip creation and storage, as well as how the WordChip is used in conjunction with the Media Engine to produce a finished animation product.

Fig. 2 illustrates all the elements of a typical WordChip data structure, including data, metadata and elements used for interacting with other WordChips and with the system.

Fig. 3 illustrates the WordChip creation process.

Fig. 4 illustrates the story editing process.

Fig. 5 illustrates the WordChip in relation to the Metaphor, the Sentence and the Story, all of which build on the basic WordChip.

Fig. 6 illustrates the Metaphor, a WordChip that is defined based on other WordChips.

Fig. 7 illustrates the Sentence, a structure for combining WordChips in a sequence or with background effects.

Fig. 8 illustrates ALICE, the Animated Language Interactive Commercial Editor.

Fig. 9 illustrates a Web-based wizard for creating a raw animation file.

Fig. 10 is a flow diagram that provides an overview of the Story creation and WordChip creation processes.

Fig. 11 illustrates a sample story file with slots for future insertion of WordChips.

Fig. 12 conceptually illustrates a Story with Open, Semi-Open and Closed Slots.

Fig. 13 illustrates the relationship between a raw animation file and an unfilled Story.

DETAILED DESCRIPTION OF THE INVENTION

During the course of this description, like numbers will be used to identify like elements according to different diagrams illustrating the invention.

The present invention 10 proposes to solve the problems of digital media creation by providing the user with a process and system by which complex arbitrary digital images, animations, and Web pages can be described quickly and put together to form complex animations. Referring to Fig. 1, the key part of the invention is The Animation Language ("ANIMAL"), which provides a way to describe animations via digital media data objects known as WordChips 12.

WordChips 12 are data objects for naming, sorting, and referencing different types of media elements. The basic WordChip 12 is a digital media data object that contains not only raw digital media content but also additional information for interacting with other WordChips 12 to produce the desired media effects. Referring to Fig. 2, a prototypical WordChip 12 contains (a) Frequency Pipes 34, communication pipes that enable WordChips to communicate information to each other; (b) an ActionScript 36, a way of encapsulating technique into a WordChip via a script; (c) UI tags 38, generic user interface information on how to display the WordChip 12; (d) States & Verbs 40, for storing parameters, alternately called "conditions and actions"; (e) MetaData 44, high-level information about a WordChip; (f) Standard Info 46, which constitutes basic information about the name of the WordChip, keywords, author name and contact info, as well as a preview picture of the digital media content; and (g) Data 42, the raw digital media content itself.
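
Purely by way of illustration, the WordChip structure of Fig. 2 might be sketched in Java (ALICE itself is described below as a Java application) along the following lines; every field and type name here is hypothetical, since the specification does not prescribe an implementation:

import java.util.List;
import java.util.Map;

// Illustrative sketch only: the elements of a WordChip 12 per Fig. 2.
public class WordChip {
    public byte[] data;                         // Data 42: the raw digital media content
    public Map<String, String> metaData;        // MetaData 44: high-level content information
    public StandardInfo standardInfo;           // Standard Info 46
    public List<String> frequencyPipes;         // Frequency Pipes 34: channels to other WordChips
    public String actionScript;                 // ActionScript 36: script encapsulating technique
    public Map<String, String> uiTags;          // UI tags 38: how to display the WordChip
    public Map<String, String> statesAndVerbs;  // States & Verbs 40: conditions and actions
}

class StandardInfo {
    public String name;
    public List<String> keywords;   // basis for compatibility with other WordChips
    public String author;
    public String contactInfo;
    public byte[] previewPicture;   // preview of the digital media content
}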

Creation of the basic WordChip starts with creation of the raw digital media content. The content may be graphics, multimedia, or an animation created from software such as Macromedia Flash, but a WordChip can also be used for other types of digital media content such as sounds, music, 3D models, and vectors (e.g., clip art). Digital media could even include HTML/Javascript code 18 that will produce the desired effects, and may also include text 22. Code-based effects or plug-ins 20 can also supply the digital media content, in which case the resulting WordChip would be termed a CodeChip as opposed to a data-only WordChip. These code-based effects may be generated from a compiler or an object development tool such as Microsoft Visual Studio. The Data 42 found in a WordChip 12 could include any combination of these types and can include multiple data elements of each type as well.

Once the digital media content has been created, it is "minted," or compiled, into a WordChip 12 using a Java application named the Animation Language Interactive Commercial Editor (ALICE) 14. The digital media content, or Data 42, is combined with MetaData information, which identifies what kind of data it is as well as key information pertaining to the digital media content itself. For example, if the Data is a bitmap image of a target, the MetaData could contain the location of the center of that bitmap, so that the image of an arrow could properly hit the target. In addition, basic information such as the name, keywords, type, and author of the WordChip may be entered. Of these, the keywords are the most important because they provide information to the WordChip system regarding the WordChip's compatibility with other WordChips. Referring to Fig. 2, the resulting WordChip may be previewed and further edited, and is displayed as a standard 35mm photographic slide preview 15. Once minted, WordChips 12 may be stored in a WordChip Dictionary 26 as well as a user's own database 24 for faster retrieval or for proprietary graphics. In either storage area the user is able to browse and select WordChips, all of which have a preview image and a title, by using ALICE 14 or a Web page interface 28.
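
Continuing the hypothetical sketch above, the minting step might combine Data, MetaData, and Standard Info roughly as follows; the target-center entry mirrors the bitmap example in the text, and all names remain illustrative:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative minting step: raw content plus MetaData and Standard Info
// become a WordChip, which is then stored for later retrieval.
public class Minting {
    // Stands in for the public WordChip Dictionary 26; a per-user map
    // would play the role of the private database 24.
    static Map<String, WordChip> dictionary = new HashMap<>();

    public static WordChip mint(byte[] content, String name,
                                List<String> keywords, String author) {
        WordChip chip = new WordChip();
        chip.data = content;
        chip.metaData = new HashMap<>();
        chip.metaData.put("center", "160,120"); // e.g. the target's center, so an arrow can hit it
        chip.standardInfo = new StandardInfo();
        chip.standardInfo.name = name;
        chip.standardInfo.keywords = keywords;  // most important: drives compatibility matching
        chip.standardInfo.author = author;
        dictionary.put(name, chip);
        return chip;
    }
}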

In addition to minting WordChips 12 from basic digital media, WordChips 12 may be created and defined in terms of other WordChips 12; these defined WordChips are termed Metaphors 56 (Figs. 5 and 6). The user defines each Metaphor 56 in terms of slots, which are essentially parameters that match other WordChips 12. WordChips can then be inserted into the specified slots to form the Metaphor. The user specifies the level of generalization for any given slot. For example, as shown in Fig. 6, in order to create a Metaphor 56 that describes a birthday cake, one could specify either "candles" or "an incendiary object" as a slot. Finally, because each Metaphor 56 is itself a singular WordChip 12, Metaphors 56 are recursive: one may create Metaphors 56 based on other Metaphors 56.
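
A Metaphor could then be sketched as a WordChip whose content is supplied by named slots (again hypothetical, building on the WordChip sketch above):

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative Metaphor 56: a WordChip defined in terms of other WordChips.
// Slot names express the author's chosen level of generalization,
// e.g. "candles" versus "an incendiary object".
public class Metaphor extends WordChip {
    public Map<String, WordChip> slots = new LinkedHashMap<>();
}

Because Metaphor extends WordChip in this sketch, a Metaphor can itself fill a slot of another Metaphor, which is the recursion described above.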

After singular WordChips 12 have been created, either from basic digital media data or as Metaphors 56, they may be put together to form a combination of WordChips known as a Sentence 58. Typically, the Sentence 58 describes animation, motion, or interactivity of some sort, and the user can specify instructions 60 for the interaction between the different WordChips, which may describe images, effects, or backgrounds. These instructions 60 may include conditional branches, such as if-then constructs or an event loop, and allow for flexibility in the final presentation. For example, an explosion effect might need to wait until a mouse-click or rollover event. Thus, as shown in Fig. 7, a somewhat abstract Sentence 58 of WordChips 12 may be used to produce an animated sequence 61 of images and effects without requiring the user to specify particular images or frames.
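
A Sentence might be modeled as an ordered sequence of WordChips plus instructions, where an instruction may be gated on an event such as the mouse-click example above (a hypothetical sketch, continuing the earlier ones):

import java.util.ArrayList;
import java.util.List;

// Illustrative Sentence 58: WordChips in a user-supplied sequence,
// governed by Instructions 60.
public class Sentence {
    public List<WordChip> chips = new ArrayList<>();
    public List<Instruction> instructions = new ArrayList<>();
}

class Instruction {
    public String trigger; // e.g. "mouseClick" or "rollover"; null means run immediately
    public String action;  // e.g. "play the explosion effect WordChip"
}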

All of the basic data elements such as Sentences 58, Metaphors 56, and basic WordChips 12, however, find their major use in creating Stories 62, which are combinations of singular or multiple WordChips 12 with background scene animations. As the name suggests, a Story 62 itself is a full combination of digital media elements that is used to create and represent the final complete animation. However, the Story is not a single animation but rather a template with animation variables or Slots 70, parameters into which different WordChips 12 may be inserted.
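
In that template view, a Story reduces to background animation elements plus a list of Slots awaiting WordChips; a minimal hypothetical sketch:

import java.util.ArrayList;
import java.util.List;

// Illustrative Story 62: a template, not a finished animation.
public class Story {
    public byte[] backgroundAnimation;            // elements from the raw animation file 66
    public List<Slot> slots = new ArrayList<>();  // Slots 70: the animation variables
}

class Slot {
    public String name;
    public WordChip contents; // filled in when the Story is customized
}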

Consequently, creating an animated presentation involves a creation phase and a use phase. As shown in Fig. 10, a Story 62 is first created by an author who uses an animation tool and ALICE 14 to transform a basic idea into a Story 62 with Slots 70 capable of receiving WordChips 12. A subsequent end user of the Story 62 then inserts his or her own WordChips before sending the Story 62 to a Media Engine 30 for rendering of a final animated media presentation.

Authoring, or creating, a Story 62 requires several different steps. Referring to Fig. 10, a raw animation file 66 is created or prototyped with the use of either web-based software Wizards 64 (see Fig. 9) or a commercially available animation tool 80, such as the Macromedia Flash Authoring Kit. The author uses either tool to design a basic animated scene file 66 in which certain portions or elements are left blank so that different WordChips 12 may be filled in later.

Referring to Fig. 13, these WordChip blanks 68 are marked off from other elements 84 in the animation file by drawing or placing a gray square to act as a placeholder where a WordChip is to be placed. A square was selected as the placeholder shape because one can more easily tell whether a square is undergoing stretching or rotation during animation. In addition, the gray color itself is less likely to conflict with colors in the raw animation file 66. As an example, if one wanted to author a Story 62 in which an object bounces off a floor, one would use an off-the-shelf animation software tool to draw a raw animation file 66 in which a gray square bounces off a floor. For simple animations, a Web-based Wizard 64 may be employed to similarly create the raw animation file based on user prompts, as shown in Fig. 9.

Once the raw animation file 66 has been created, the file is read into ALICE 14, where the raw file 66 is effectively turned into a Story 62 with Slots 70 capable of accepting WordChips 12.

The story author uses ALICE 14 to first convert the raw file 66 into a Story file 62 in which the gray blanks 68 are converted into Slots 70. As part of this conversion process, the author may decide that only certain types of WordChips may fit into a given Slot 70. For example, the author could prevent sound WordChips 12 from being inserted into a slot for the rectangular object. In addition, the author may deem certain slots Closed Slots 72 by filling these slots with WordChips and locking them against end user editing (see Fig. 12). On the other hand, the author will likely leave certain slots as Open Slots 74, where the author permits a later Story user to fill in Slots 70 with any WordChip 12 that matches the slot type. Finally, as illustrated in Fig. 12, the author may restrict WordChip selection by making a Slot 70 a Semi-Open Slot 76, so that only WordChips 12 from an author-specified list 77 may be inserted in that Slot 70. ALICE 14 has access to the WordChip Dictionary 26 in case the user wishes to confirm that certain WordChips exist. Each Story contains many of these slots, and the author may choose to open, close, or semi-open any or all of them to his or her preference. In addition, the author may preview the animation in ALICE, which sends the Story to the Media Engine 30 to render a preview. Once again, gray squares serve as placeholders for the various Slots, and the author may make adjustments before either saving the Story in a database or further customizing the Story 62 prior to final animation.
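
The three slot behaviors could be captured by extending the hypothetical Slot above with a mode and an acceptance check; this is a sketch of the idea, not the patent's implementation:

import java.util.ArrayList;
import java.util.List;

// Illustrative slot gating per Fig. 12: Open, Semi-Open, and Closed Slots.
enum SlotMode { OPEN, SEMI_OPEN, CLOSED }

public class GatedSlot extends Slot {
    public SlotMode mode = SlotMode.OPEN;
    public List<String> requiredKeywords = new ArrayList<>(); // the slot's type restriction
    public List<String> allowedNames = new ArrayList<>();     // the author-specified list 77

    public boolean accepts(WordChip chip) {
        if (mode == SlotMode.CLOSED)
            return false; // filled and locked by the author
        if (!chip.standardInfo.keywords.containsAll(requiredKeywords))
            return false; // e.g. rejects a sound WordChip in a graphical slot
        if (mode == SlotMode.SEMI_OPEN)
            return allowedNames.contains(chip.standardInfo.name);
        return true; // OPEN: any WordChip matching the slot type
    }
}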

An end user of a Story 62 subsequently customizes a previously authored Story by using an end user editor 78. The end user editor 78 reads the structure 48 of a Story 62 and, using the Story Reader 50, displays the open slots 74 and semi-open slots 76, which are the parameters into which WordChips may be inserted. The end user may then customize the Story by selecting WordChips 12 for insertion into the Story 62. To reduce the complexity of the editing, compatibility matching is done to limit the selection to only WordChips 12 compatible with the open slots 74 and semi-open slots 76 found in the Story 62. Whether or not a given WordChip 12 is compatible with a given slot depends on the keywords found in that WordChip 12. As noted above, a user may be prevented from inserting a sound WordChip into a graphical slot. Moreover, the user can select WordChips 12 from either the public WordChip Dictionary 26 or the user's own private WordChip database 24. The end user editor 78 is usually implemented as a Web-based Java application to ensure that any Internet-capable user can customize the animation without requiring extensive software installation.
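
Compatibility matching in the end user editor could then be as simple as filtering the available WordChips through each slot's acceptance check (continuing the hypothetical sketches above):

import java.util.ArrayList;
import java.util.List;

// Illustrative compatibility matching for the end user editor 78:
// only WordChips the slot accepts are offered to the user.
public class CompatibilityFilter {
    public static List<WordChip> compatible(List<WordChip> available, GatedSlot slot) {
        List<WordChip> matches = new ArrayList<>();
        for (WordChip chip : available) {
            if (slot.accepts(chip)) {
                matches.add(chip);
            }
        }
        return matches;
    }
}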

After the Story 62 has been customized, or filled in, by the user, it is sent to a Media Engine 30 for preview and for final production. The Media Engine 30 is a rendering engine that resides on a high-speed dedicated system for optimized rendering performance, and contains software for interpreting the stories and rendering them in a number of user-specified formats. The rendering process takes place as follows: the Story 62 is read, creating a set of frames 1 to n (a user-specified number of frames) for animation. For each frame, the Engine 30 creates a spatial transformation and a color transformation for each Slot 70, and thus each WordChip 12, for that particular story.
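
That per-frame pass might be sketched as follows; the interpolation and the particular transforms are placeholders, since the specification does not detail the Media Engine's internals:

import java.awt.geom.AffineTransform;

// Illustrative per-frame pass of the Media Engine 30: each of the n frames
// receives a spatial and a color transformation for every Slot (and WordChip).
public class MediaEngine {
    public static void render(Story story, int n) {
        for (int frame = 0; frame < n; frame++) {
            double t = n > 1 ? (double) frame / (n - 1) : 0.0; // progress through the animation
            for (Slot slot : story.slots) {
                // Spatial transformation for this frame, e.g. a rotation
                // interpolated from the raw animation file's keyframes.
                AffineTransform spatial = AffineTransform.getRotateInstance(2 * Math.PI * t);
                float brightness = (float) (0.5 + 0.5 * t); // a placeholder color transformation
                drawInto(frame, slot, spatial, brightness);
            }
        }
    }

    static void drawInto(int frame, Slot slot, AffineTransform spatial, float brightness) {
        // A real engine would composite slot.contents.data into the frame
        // buffer under these transforms; omitted in this sketch.
    }
}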

In the same way, each component (e.g., raw digital media element) of the WordChip 12 undergoes these transformations during the rendering process. In addition, other animated elements 84 of the Story 62 are transformed within each frame to create, along with the WordChips and open slots, a set of frames capable of being assembled together into an animation.

The Media Engine 30 may be implemented locally but users may also choose to use a central media engine remotely located on a network, such as the Internet. In either case, the Engine 30 requires only a few seconds to render most animations (typically commercials) for preview, allowing the user to go back and make changes if necessary. If satisfied with the results, the user then instructs the Media Engine 30 to produce the animation output 32 in any number of formats, depending on whether or not extensive animation is required: GIF, Flash, HTML, or QuickTime. As noted previously, the Media Engine 30 also serves to render animated previews for Stories while they are being authored in ALICE 14 or customized in the end user editor 78.

The WordChip System thus provides an easy method for creating and editing animations from any number of different sources of digital media. The user may create WordChips from the bitmap image and sound files traditionally associated with multimedia, but may also use digital media in the form of HTML code or a plug-in. The user may further use these WordChips to rapidly produce an animated presentation by selecting and combining particular WordChips in Sentences and Stories. The user does not need to specify a complete sequence of defined images or frames but need only specify the more conceptual aspects of the final animated presentation. The WordChip system and the Media Engine take these conceptual specifications and produce the complete animation, providing the user with a modular way to create new multimedia presentations.

While the invention has been described with reference to the preferred embodiment thereof, it will be appreciated by those of ordinary skill in the art that modifications can be made to the structure and elements of the invention without departing from the spirit and scope of the invention as a whole.