

Title:
INSTANT MESSAGING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2007/134402
Kind Code:
A1
Abstract:
An instant messaging system, for allowing a first user to communicate via a communications system with one or more other users by sending and/or receiving messages. The system includes: a message receiver, a message sender, and a display device for displaying messages, wherein a natural language processor at least partially determines the meaning of said messages by analysing the natural language used, and an animation controller changes at least one aspect of the appearance of said display device, or an analysis agent generates a message, in accordance with said meaning.

Inventors:
FONG ROBERT CHIN MENG (AU)
CHONG BILLY NAN CHOONG (AU)
Application Number:
PCT/AU2007/000714
Publication Date:
November 29, 2007
Filing Date:
May 24, 2007
Assignee:
MOR F DYNAMICS PTY LTD (AU)
FONG ROBERT CHIN MENG (AU)
CHONG BILLY NAN CHOONG (AU)
International Classes:
G06F15/16; G06F40/20
Foreign References:
US7039676B1 (2006-05-02)
US6876728B2 (2005-04-05)
US7013427B2 (2006-03-14)
Other References:
PANG B. AND LEE L. (Cornell University, USA) AND VAITHYANATHAN S. (IBM Almaden Research Centre, USA): "Thumbs Up? Sentiment Classification Using Machine Learning Techniques", Proceedings of EMNLP, 2002, XP008092527, Retrieved from the Internet
Attorney, Agent or Firm:
PHILLIPS ORMONDE & FITZPATRICK (367 Collins Street, Melbourne, Victoria 3000, AU)

Claims:

1. An instant messaging system, for allowing a first user to communicate via a communications system with one or more other users by sending and/or receiving messages, said system including: a message receiver, a message sender, and a display device for displaying messages, wherein a natural language processor at least partially determines the meaning of said messages by analysing the natural language used, and an animation controller changes at least one aspect of the appearance of said display device, or an analysis agent generates a message, in accordance with said meaning.

2. The instant messaging system according to claim 1, wherein said system also stores at least some of the messages sent and/or received, for use in said analysis.

3. The instant messaging system according to claim 1 or 2, wherein said aspect of appearance that changes is selected from any one or more of: the background appearance of the display device that displays messages, a character that represents a user, or a character that represents and emulates a virtual user.

4. The instant messaging system according to claim 3, wherein said appearance changes over time according to the sending and/or receipt of additional messages which have their additional meanings analysed.

5. The instant messaging system according to claim 4, wherein said appearance changes by adapting the previous appearance in an incremental manner to a new appearance according to a further meaning obtained from a further message, and/or replacing the previous appearance with a new appearance according to a further meaning obtained from a further message.

6. The instant messaging system according to any one of claims 1 to 5, wherein said aspect of appearance that changes is the background appearance of the display device that displays messages, which displays a background that includes an image that relates to said meaning.

7. The instant messaging system according to any one of claims 1 to 5, wherein said aspect of appearance that changes is a character (or avatar) that represents a user, and at least one feature of the appearance of said character corresponding with said meaning.

8. The instant messaging system according to claim 7, wherein said character is displayed as an apparent 3-dimensional character.

9. The instant messaging system according to any one of claims 1 to 5, wherein said aspect of appearance that changes is a character (or avatar) that represents and emulates a virtual user, and at least one feature of the appearance of said character corresponding with said meaning.

10. The instant messaging system according to claim 9, wherein said character is displayed as an apparent 3-dimensional character.

11. The instant messaging system according to claim 9 or 10, wherein said character represents and emulates a virtual pet that belongs to the user associated with said pet.

12. The instant messaging system according to claim 11, wherein said pet functions as a virtual user on said instant messaging system, able to receive messages from and/or send messages to the user to whom the pet belongs.

13. The instant messaging system according to claim 11, wherein said pet functions as a virtual user on said instant messaging system, able to receive messages from and/or send messages to another user apart from the user to whom the pet belongs.

14. The instant messaging system according to claims 9 to 13, wherein said character emulating a virtual user can interact with a real user in a similar manner to another real user.

15. The instant messaging system according to claims 9 to 13, wherein said character emulating a virtual user can interact with a real user by sending educational messages to and/or receiving educational messages from said real user.

16. The instant messaging system according to claim 15, wherein said educational messages include drills for improving the real user's language, spelling, numeracy or similar skills, stories for improving the user's knowledge, or problems to solve.

17. The instant messaging system according to claims 9 to 13, wherein said character emulating a virtual user can interact with a real user, by accessing an external source of information, including the Internet, to conduct a search and/or to provide any results of said search.

18. The instant messaging system according to claim 17, wherein said accessing of the external source of information is carried out by using the meaning of messages already determined, to construct a query of the external information source, then using said query to conduct said search, and then optionally providing any results of said search as output from said character.

19. The instant messaging system according to any one of claims 1 to 18, wherein the messages are stored in said system, and analysed to determine a user's profile.

20. The instant messaging system according to claim 19, wherein said profile is used also to change said aspect of appearance or to generate a message.

21. The instant messaging system according to any one of claims 1 to 13, wherein when said system determines the meaning of one or more of said messages, if said meaning is of a proscribed nature, said system then implements a security function.

22. The instant messaging system according to claim 21, wherein said security function provides a warning, stores identification information about, and/or restricts the access of or to, the user associated with the proscribed messages.

23. The instant messaging system according to claim 21, wherein said meaning of a proscribed nature is of a morally undesirable nature, including of a sexual nature.

24. The instant messaging system according to any one of claims 1 to 23, wherein said system also obtains information from at least one other external source, and changes at least one aspect of the appearance of said display device in accordance with said information, and/or sends a message to the user.

25. The instant messaging system according to claim 24, wherein said information is related to the meaning of said messages.

26. The instant messaging system according to claim 25, wherein said information is advertising information related to the meaning of said messages, and said aspect of the appearance that changes is to visually display advertising material to a user, and/or to send a message to the user that contains advertising material.

27. The instant messaging system according to claim 24, wherein said information is related to an aspect of the appearance of said display device, being information that allows the creation of an image that represents any one or more of: the background appearance of the display device that displays messages, a character that represents a user, or a character that represents and emulates a virtual user.

28. The instant messaging system according to any one of claims 24 to 27, wherein said external source is the Internet.

29. The instant messaging system according to any one of the preceding claims, wherein said user may create at least one image that represents any one or more of: the background appearance of the display device that displays messages, a character that represents a user, or a character that represents and emulates a virtual user, by inputting natural language into said system to at least partially describe said image, together with at least one instruction to indicate to said system to create said image, and wherein said system determines the meaning of said language by analysing it, and determines the appearance of said image in accordance with said meaning, one or more times.

30. The instant messaging system according to any one of the preceding claims, wherein said message sender is a text input device, including a computer keyboard and/or a mouse, and/or a voice input device, including a microphone, and/or an image input device, including a camera or a whiteboard.

31. The instant messaging system according to any one of the preceding claims, wherein said message receiver is a text output device that includes a section on said display device that displays text, and/or a voice output device, including a speaker, and/or an image output device, including an image display device or a whiteboard.

32. The instant messaging system according to any one of the preceding claims, which also includes a section on said display device that shows any characters created by any one or all of said users, and/or any virtual characters created by any one or all of said users.

33. The instant messaging system according to any one of the preceding claims, which operates on an electronic device, including on a computer or a mobile telephone, which may be connected to others of said electronic devices so as to exchange messages.

34. The instant messaging system according to claim 33, which is a software application.

35. An instant messaging system, for allowing a first user to communicate via a communications system with at least one other user by sending and/or receiving messages, said system including: a message receiver, a message sender, and a display device to display messages, characterised in that said system displays at least one character that represents at least one of said users on said display device, wherein said character is displayed as an apparent 3-dimensional character.

36. An instant messaging system, for allowing a first user to communicate via a communications system with at least one other user by sending and/or receiving messages, said system including: a message receiver, a message sender, and a display device to display messages, characterised in that said system displays at least one character that represents and emulates a virtual user.

37. The instant messaging system according to claim 36, wherein said character is displayed as an apparent 3-dimensional character.

38. The instant messaging system according to claim 36 or 37, wherein said character represents and emulates a virtual pet that belongs to the user associated with said pet.

39. The instant messaging system according to claim 38, wherein said pet functions as a virtual user on said instant messaging system, able to receive messages from and/or send messages to the user to whom the pet belongs.

40. The instant messaging system according to claim 39, wherein said pet functions as a virtual user on said instant messaging system, able to receive messages from and/or send messages to another user apart from the user to whom the pet belongs.

41. The instant messaging system according to any one of the preceding claims, that is configured for use especially by children.

42. A method of generating an object in an instant messaging system, the method including:

(a) sending and/or receiving one or more messages from one or more users of the instant messaging system;

(b) determining the meaning of said one or more messages by analysing the natural language used in the one or more messages; and

(c) changing at least one aspect of the object in accordance with said meaning determined in step (b).

43. The method of claim 42, wherein the object is a display device.

44. The method of claim 43, wherein said aspect is the appearance on the display device.

45. The instant messaging system according to claim 44, wherein said aspect of the appearance of the display device that is changed is selected from any one or more of: the background appearance of the display device, a character that represents a user, or a character that represents and emulates a virtual user.

46. The method of claim 42, wherein the object is a message.

47. The method of claim 42, wherein the method further includes the step of storing at least some of the messages sent and/or received, for use in said analysis.

48. The method of claim 47, wherein at least one aspect of the object changes over time according to the sending and/or receipt of one or more additional messages which have their meanings determined by analysing the natural language used in the one or more additional messages.

49. A method of profiling a user, via a neural network, in an instant messaging system, the method including the steps of:

(a) receiving one or more messages from a user of said instant messaging system;

(b) analysing the natural language used in said one or more messages to determine if said messages have a positive or negative meaning;

(c) generating an input layer on the neural network, said input layer including a node associated with each of said one or more messages, each node categorised as having a positive message within it or a negative message within it;

(d) generating a descriptor layer on the neural network having descriptor nodes;

(e) analysing the positive and negative meanings of said messages in said input layer nodes and determining synonyms and hyponyms of said messages;

(f) storing the synonyms and hyponyms in said descriptor nodes and linking said descriptor nodes with one or more corresponding input layer nodes;

(g) generating a personality trait layer on the neural network having personality trait nodes, each node of said personality trait layer having a predetermined personality trait;

(h) linking said descriptor nodes with one or more personality trait nodes which most closely correspond to said descriptor nodes;

(i) generating an output layer on the neural network, said output layer including one or more nodes representing a predetermined personality type;

(j) linking said personality trait nodes with one or more personality type nodes which best correspond to the personality trait nodes, in order to determine the personality type of the user.

50. The method of claim 49, wherein at step (a), said one or more messages are received from a user of the instant messaging system via a questionnaire.

51. The method of claim 49, wherein at step (k), said personality type is determined once the number of messages received from a user of the instant messaging system exceeds a predetermined threshold.

52. The method of claim 49, further including the step of:

(a) changing at least one aspect of an object associated with the instant messaging system, based on the determined personality type.

53. The method of claim 52, wherein the object is a display device.

54. The method of claim 53, wherein said aspect is the appearance of the display device.

55. The instant messaging system according to claim 54, wherein said aspect of the appearance of the display device that is changed is selected from any one or more of: the background appearance of the display device, a character that represents a user, or a character that represents and emulates a virtual user.

56. The method of claim 49, further including the step of matching information with the determined personality type of the user and displaying the information to the user.

Description:

INSTANT MESSAGING SYSTEM

TECHNICAL FIELD

The present invention relates to an interactive instant messaging system, particularly one that operates on computers or other electronic devices such as mobile telephones, that can be used by people to communicate over the Internet, for example.

BACKGROUND

Traditionally, instant messaging allows a user to send electronic messages to one or more persons with minimal delay between the sending of one message and the receipt of a message in response. Just like conversation, instant messaging is a simultaneous give-and-take, but it currently occurs mostly in a text (normally keyboard entered) form, and of late, may also be capable of handling voice communication. In contrast to email, which remains unread in a recipient's inbox until opened, instant messaging notifies users when other users are online and able to accept messages, normally in the form of a "buddy list" which lists potential and active messaging participants.

Instant messaging systems may use any one among several methods to deliver messages. (1) A centralized network, which connects users to a series of servers. The servers route a message through the network until it is sent to a recipient. These instant messaging systems centrally store the user information, including the user names, passwords and "buddy lists". (2) A peer-to-peer network, which uses a single server that tracks who is online. After the system identifies who is logged on, messages are then sent directly from the user's computer to the recipient's with no further server involvement. This enables speedy exchanges of messages with graphics and large files. (3) A hybrid of the above-mentioned two methods. In each case, the instant messaging system saves details about the online user's connection information and list of contacts. The system then seeks any other users who have logged on to the system and then informs the user if they are online.
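The two delivery models described above can be contrasted in a short sketch. This is purely illustrative and not part of the application: the class and attribute names are assumptions, and a real system would add transport, authentication and presence notification.

```python
class CentralizedNetwork:
    """Messages pass through the server, which also stores user data."""
    def __init__(self):
        self.online = {}          # user name -> inbox held on the server

    def login(self, user):
        self.online[user] = []

    def send(self, sender, recipient, text):
        # The server routes every message; nothing goes peer to peer.
        if recipient in self.online:
            self.online[recipient].append((sender, text))
            return True
        return False


class PeerToPeerNetwork:
    """The server only tracks who is online; peers then talk directly."""
    def __init__(self):
        self.addresses = {}       # user name -> (host, port) of that peer

    def login(self, user, address):
        self.addresses[user] = address

    def lookup(self, user):
        # After this lookup, messages flow directly between the clients.
        return self.addresses.get(user)
```

The hybrid method mentioned above would combine the two: presence and account data on the server, with large transfers negotiated peer to peer.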

With instant messaging, there are normally two basic ways to communicate: language and images. A user either types, draws or speaks to another user on his or her "buddy list" to communicate using language. Additionally, the user may manually select some images such as emoticons, avatars, backgrounds, wallpapers, animations, etc., to act as a virtual "character" representing a particular user, to communicate using images. Sometimes the user may use images or emoticons to help convey emotion, by selecting, for example, either a "smiling" or a "frowning" face emoticon to indicate the emotion the user currently wishes to convey. Currently, text and/or voice communication is used just for human-to-human communication, and is not used to drive or affect the image-based communication aspect of the application. The two aspects remain separate in the application. In other words, the language and linguistic components, either in text or voice form, are wasted by merely being used for human dialogue or conversation, and do not influence the image components in the application.

It is also known to use keywords and/or punctuation marks in order to create emoticons, avatars, etc., so as to convey and/or dynamically represent the emotional state of a particular user. However, these emoticons, avatars or images do not react to the human language discourse that is dynamically happening at the same time, and merely utilise low-level coding to trigger a previously configured or rendered image or action sequence. As a result, most of these emoticons, avatars and images are often static, or have a limited animation capability, and are most of the time without much customisation or creativity. Thus it would be useful to enhance human-to-human communication through an understanding of natural language discourse and dialogue.

Currently the emoticons represent the users on the system in this static manner. It would be of use to allow these emoticons to simulate an independent "character" in the system. This would give the user something to communicate with, albeit in a simple manner, when no other user is available, or to provide another contact point for other users in the network. Such an independent character may be represented as a "pet", to get the maximum benefit from the relatively simple simulation possible using current computing capabilities. The independent character could also be tasked to perform simple tasks in place of the real user, such as managing part of the communication as an "alter ego" of the real user, or taking messages or conducting searches while the communication is occurring with other users, or when the user is unavailable or busy with other tasks, for instance.

SUMMARY

The present invention generally relates to a method for an improved or alternative instant messaging application that can operate in the traditional manner using a computer network, or with other human-to-computer communication devices such as mobile phones and Personal Digital Assistants (PDAs), for example.

One embodiment of the invention concerns the analysis and determination of the semantics, meaning and emotional content of the language used in the instant messaging application, in order firstly to create 2- and/or 3-dimensional visuals, graphics and virtual characters that are subsequently responsive and interactive to the users' conversations and dialogues in a seemingly intelligent manner.

In a simple form, this may involve replacing the static emoticon in traditional instant messaging applications with a dynamic character, perhaps representing the user's actual appearance, or else as an avatar or the like which may be an idealised or fanciful character. These characters react to the meaning of the messages flowing through the instant messenger application among the users. Alternatively, the background visuals may dynamically alter according to the meaning conveyed in the messages, such as showing a calm beach scene when the conversation is pleasant, or a fierce storm raging when the communication is unpleasant, for example.

In another embodiment, a user is able to create and design their own virtual character or "pet" which seemingly acts independently of the user. This may be done simply by describing it, either by speaking or by typing/writing, in order to create it (or alternatively by downloading it), and then interacting with the "pet" by conversing naturally with it. The pet may not only be able to interact and converse with its "owner" but may have its own simulated independent personality and intelligence, thus being able to convey emotion, learn from its "owner", from other users or from its environment, or to react to conversational situations and events, among a whole host of other capabilities. A user may interact with his or her own pet, or with those belonging to other users.

Another embodiment concerns an interactive instant messaging application that reacts and responds intelligently and emotionally to a user's dialogue or conversation as they chat to a friend or another person. Based on the meaning of the language utilised by a user in regard to various types of information, such as his or her personality, profile, activities, hobbies, etc., a user can create and personalise his or her own customised interactive, responsive and emotional characters, emoticons, pets, avatars, spaces, dynamic display backgrounds, wallpapers, etc., and then watch these characters interact and chat with them, their friends and their friends' characters. In addition, dynamic display backgrounds or spaces change and fluctuate according to how a user is feeling, and his or her emotional status or moods, as determined from the meaning of the language being used in the instant messaging communications. A natural language processor can determine this meaning, and it is backed by a 3-D graphics engine that can therefore provide beautifully rendered, responsive and seemingly intelligent real-time 3-D characters and images.

A further embodiment concerns an interactive instant messaging application that determines the user's personality type via a neural network which analyses the user's communications with the instant messaging system.

The invention in one broad form relates to an instant messaging system, which has the usual features of such systems, namely that the system has a message sender and message receiver to send and receive instant messages to and from one or more other users, and a display device to display or communicate these messages. If necessary the system may also display some identification for the users, so that conversations with multiple users can be easily managed on the display device. Generally, the system may be implemented as a software package that runs on a personal computer, connected to the personal computers of other users over the Internet, for example. But users may communicate by other means, such as by mobile telephones or PDAs, for instance. The software package, in a simple form, may display a window divided into sections, with a portion set aside to display the "buddies" available to communicate and a status against each as to whether the buddy is online, busy, etc. There may usually be a portion in the window that shows the messages sent and received between the various users, with the user's name or identification linked with each message.

The instant messaging system of the present invention departs from the previously known packages by having one or more other features. There may be a feature whereby the system determines the meaning of the messages by analysing the natural language used, using a natural language processor. The level of analysis of the meaning may vary; at least some of the meaning is determined. As a result, an animation controller changes at least one aspect of the appearance of the display device in accordance with the meaning that has been determined. Alternatively, an analysis agent may generate a message in accordance with said meaning. Preferably, the system also stores the messages sent and received, and the meanings that have been determined. This allows the system to build up a history of the communication, and to develop a profile from the analysis.
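The broad form just described can be sketched as a small message-handling pipeline. This is an illustrative sketch only, not the applicants' implementation: `analyse`, the callables passed for the animation controller and analysis agent, and the toy keyword-based emotion check are all assumptions standing in for a genuine natural language processor.

```python
HISTORY = []  # the system preferably stores messages and their meanings


def analyse(text):
    """Toy stand-in for the natural language processor: determines
    at least some of the meaning of a message."""
    lowered = text.lower()
    emotion = ("unhappy" if "sad" in lowered
               else "happy" if "glad" in lowered
               else None)
    return {"emotion": emotion, "is_question": text.strip().endswith("?")}


def handle_message(text, animate, reply):
    """One message pass: determine meaning, store it, then either
    change the display (animation controller) or generate a reply
    (analysis agent), in accordance with that meaning."""
    meaning = analyse(text)
    HISTORY.append((text, meaning))       # build up a history / profile
    if meaning["emotion"]:
        animate(meaning["emotion"])       # animation controller path
    if meaning["is_question"]:
        return reply(meaning)             # analysis agent path
    return None
```

In this sketch the stored `HISTORY` is what a profiling step could later analyse, as the description suggests.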

Any suitable method of determining the meaning may be utilised. Preferably, a method of analysing the meaning of the natural language used in the messaging that is described in International Patent Application PCT/AU2006/000449 by the current applicants may be employed. General approaches to carrying out this analysis to determine meaning are discussed in more detail below.

The meaning so determined is then used to change the appearance of the display device. This may be done in a variety of ways, but preferably includes altering the background imagery on the screen, or more preferably, changing the appearance of a character appearing in the display.

By "character" is generally meant an image that may represent a user, either an actual user or a virtual one (which is discussed in more detail below). The character may be an image of a human user, and may be a realistic, fanciful or imaginary resemblance, or may be of a cartoon character, or some other representation of a thing that can identify a user to others. Unique images are popular so that a user can be easily identified. The character is commonly known as an avatar or the like. Common characters that may be used in this way include realistic portrayals of the user, derived from photographs, showing the face, or head, or entire body of the user. Otherwise, the user may adopt a well-known celebrity to represent them, perhaps with some editing. Cartoon characters are popular also. Characters may also include images of everyday items, such as tools, trains, cars, trees, aliens, angels, etc., or of abstract shapes and artwork. The characters may be animated or static images.

The appearance on the display, such as that of a character, will change in response to the meaning that is determined from the message language. The change can be simple, just by changing the colour of the character's clothing for instance, but may preferably be more complex, so that the character's face may change by smiling, frowning, or crying, for example. The background image may change, so that a landscape may show a cloudy sky, bright sunshine, rain, storms, for instance.

The change in the appearance should be in accordance with the meaning, in at least one aspect. The link between the change in appearance and the meaning determined from the messages may be direct or indirect. For instance, if the message refers to a car or a model of car, then an image of a car may appear in the background, or the user's avatar may change to show a figure sitting in a car. As another example, if a user sends a message with language that indicates he is unhappy, a character's face that is the user's avatar may change to look unhappy, or the background may show a cloudy sky, or the character's shirt may change to a blue colour. The linkage between the meaning and the change in appearance may be designed into the system, or may be created by the user. Preferably, there is an editing mechanism that allows the appearance changes to be refined over time. A change may occur once, or each time a message is received or transmitted, or several times during the course of a message. The change in appearance may replace one image with another, including adding or subtracting image elements, or slowly evolve from one image to another, or involve combinations of these.
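One way to picture the designed-in linkage between meanings and appearance changes is a lookup table, as in the car and unhappy examples above. The sketch below is illustrative only: the rule names and appearance keys are assumptions, and because the table is plain data, an editing mechanism could let a user refine the links over time.

```python
# Hypothetical meaning -> appearance rules, echoing the examples above.
APPEARANCE_RULES = {
    "car":     {"background": "car_scene", "avatar_pose": "seated_in_car"},
    "unhappy": {"avatar_face": "sad", "background": "cloudy_sky",
                "avatar_shirt": "blue"},
}


def changes_for(meanings):
    """Merge the appearance changes triggered by each determined meaning;
    later meanings override earlier ones for the same aspect."""
    changes = {}
    for meaning in meanings:
        changes.update(APPEARANCE_RULES.get(meaning, {}))
    return changes
```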

Preferably the other connected users may see some or all of these appearance changes, especially if the appearance of that user's avatar character is changing. But this may require both sides of the messaging to be using the application. If the other users have another instant messaging application lacking this capability, then only the current user will see the changes.

The step of determining the meaning of the language used in the messages is different from the known method of checking for a keyword, or allowing a user to select from a list of avatars or "smileys". The determination is more than just pattern matching of a flagged word to alter the appearance. Meaning determination will often involve the handling of synonyms, disambiguation, and the like.
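The difference from keyword matching can be illustrated with synonym handling: several surface words collapse to one underlying concept before any appearance rule fires. The synonym sets below are assumed toy data, not the applicants' method (which, per the description, is the subject of a separate application).

```python
# Hypothetical synonym sets mapping surface words to one concept.
SYNONYM_SETS = {
    "unhappy": {"unhappy", "sad", "miserable", "down"},
    "happy":   {"happy", "glad", "cheerful", "delighted"},
}


def concept_of(word):
    """Return the concept a word belongs to, or None if unknown."""
    word = word.lower()
    for concept, synonyms in SYNONYM_SETS.items():
        if word in synonyms:
            return concept
    return None


def concepts_in(message):
    """A keyword matcher would miss 'miserable'; concept lookup does not."""
    found = {concept_of(word) for word in message.split()}
    found.discard(None)
    return found
```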

It is a further aspect of the invention that a character appearing in the instant messaging system is shown in an apparent 3 -dimensional appearance. Previously only 2-dimensional images were available as characters in instant messaging systems. The 3-dimensional appearance can be used with the preferred step of meaning analysis in the current invention, as discussed above, or with standard instant messaging systems. When the character or avatar appears 3- dimensional, the character normally appears more life-like, and if it represents a real person for example, the character can then shake his head, nod, turn sideways, wave a hand, or the like, as a result. The character is held in the system as fully 3-dimensional, even if only one view is being shown. Rotating or moving the character will then make other views of the character visible. It is another preferred feature of the invention that the character may represent a virtual user. That is to say, the character may emulate the character or avatar seemingly belonging to another user, but no such user is actually connected to the instant messaging network. The instant messaging system will create and maintain such a virtual user. Generally, the original and real user of the instant messaging system can then interact with this virtual user as if it were another real user, by sending messages to that virtual character, and optionally receiving message responses back. The appearance of this virtual user may preferably also change in accordance with the messages sent and returned, or alternatively, the virtual user may not react in such a manner, but just act to send and receive messages. It may preferably also appear as a 3- dimensional character. As another preferred option, the virtual character may emulate a "pet" that belongs to the original user. This option makes an advantage of the difficulty and complexity of the instant messaging system emulating another user. 
Pets are normally insects or animals that have a limited set of responses and simple communication skills. As mentioned previously, there is no need for the pet to appear only as a traditional type of pet, such as a dog or cat, but any type of appearance may be utilised. The meaning of "pet" should be understood as representing a character that can communicate in a simpler or more limited manner than real people, although it may also communicate in a more sophisticated manner in some circumstances, as when accessing external data, for instance, which is discussed in more detail below.

The original user may use the instant messaging system to communicate with his or her "pet" character, when there are no other users available, or when there are. The user may communicate with their own pet or with the pets belonging to other users. The pet may be able to receive messages from and/or send messages to another user apart from the user to whom the pet belongs. The pet characters may be designated in the instant messaging system along with real users, preferably linked to the users they belong to, or else may be kept separate from real users. Ideally, some method of identifying the pets and keeping them separate can be employed. A user may also have more than one pet, if they wish. A user may use a different pet at different times, or several pets at the same time. When the user sends a message to his pet, the pet can respond by changing its appearance, by sending a message back in return, or by both of these approaches. Preferably, the pet responds in accordance with the meaning that has been determined from the message sent. A number of pre-designed responses may be utilised. For example, the messages sent to the pet may have their meaning analysed according to their emotional content. This emotional content can be mapped to various actions the pet can perform, such as those that lead to a change in appearance or a message in response.

As another alternative, the pet may serve as an observer of messages sent by the original user to other real users, and respond to outgoing messages sent by the user, for instance, or it may react to all the messages, according to their emotional or subject content. As a simple example, the meanings of messages can be divided into a number of emotions, such as happy, sad, angry and bored. With happy-type meanings, the pet can smile, or wag its tail (assuming it has one); with sad ones, it may cry or look sad; with angry ones, it can shake a fist or snarl; and with bored ones, it may sit down, or look bored.
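The mapping from determined emotional meaning to pre-designed pet actions might be sketched as follows. This is a minimal illustration only; the emotion categories, action names and the `respond_to_emotion` helper are assumptions for the sake of the example, not part of any particular implementation.

```python
# Hypothetical sketch: once the natural language processor has classified a
# message's emotional content, the pet's pre-designed response is looked up
# from a table. All names here are illustrative assumptions.

PET_ACTIONS = {
    "happy": {"animation": "wag_tail", "expression": "smile"},
    "sad":   {"animation": "droop",    "expression": "cry"},
    "angry": {"animation": "snarl",    "expression": "shake_fist"},
    "bored": {"animation": "sit_down", "expression": "look_bored"},
}

def respond_to_emotion(emotion: str) -> dict:
    """Return the pet's pre-designed response for a determined emotion."""
    # Fall back to an idle response if the emotion is not one of the
    # pre-designed categories.
    return PET_ACTIONS.get(emotion, {"animation": "idle", "expression": "neutral"})
```

A real system would feed this from the meaning-analysis stage rather than from a literal emotion label.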

The pet character may be set up to respond to some meanings by sending a message back to a user who has sent a message to them. There can be pre-programmed responses, so that, for example, if the meaning is analysed as a greeting, then the pet can send a greeting message back in response. Again, the pet does not react to a key or trigger word, such as "hello", but instead analyses the meaning of a message for a concept, and if the concept is that of a greeting, it then responds in kind, so this reply is sent even if the word "hello" is not used, but "hi" is instead.

The pet character may also be configured to perform an educational role, especially with children. The pet can send messages to the user and receive replies that have an educational function. For example, the pet can perform drills that allow a user to practise and improve their skills in language, spelling, numeracy, and the like. The pet can tell or message stories to the user, or provide the user with problems to solve. The user then responds by sending a message back to the pet containing the answer, and the pet can then generate a response according to whether the answer is right or wrong. The response can take a variety of forms, such as a change in appearance, where the pet cheers a right answer for instance, or another text message back with a comment, and perhaps the next problem to solve. This educational function also has applications with adults. It can help teach foreign languages, for instance. This mode can also function to review the user's spelling abilities. As text messages are typed, the system can utilise a spell checker function, and notify the user of spelling mistakes. A pet character can also rewrite messages in correct spelling, and offer to send that message instead of the user's first attempt.

As a further embodiment, the pet can interact with one or more external sources of information. The external data source can be included with the messaging system, such as by providing an encyclopaedia on a disc to run on a computer, or more preferably the system can interact with sources of information over the Internet. For example, the system may utilise Internet search engines such as "Google" or "Ask Jeeves", or may access particular Internet sites which may provide information arranged optimally for the instant messaging system of the invention, or containing specific subject material. The system may utilise existing search functionality, such as that provided by "Google", or carry out its own search. The results of the search can then be provided by the pet to its owner in the form of a message, and optionally reformatted or mined for specific data. The appearance of the pet may also change, such as according to the success of the search, or the time taken to get a result.
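The concept-based greeting response, where the pet reacts to the idea of a greeting rather than to one trigger word, could be sketched as below. The synonym set and function names are illustrative assumptions; a real implementation would draw synonyms from a lexical resource and perform genuine disambiguation rather than word-set lookup.

```python
# Hypothetical sketch: the pet checks whether a message expresses the
# *concept* of a greeting, approximated here by a synonym set, so "hi"
# triggers the same reply as "hello". The set and names are assumptions.

GREETING_SYNONYMS = {"hello", "hi", "hey", "g'day", "howdy", "greetings"}

def expresses_greeting(message: str) -> bool:
    # Normalise words by stripping common punctuation and lowering case.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(GREETING_SYNONYMS)

def pet_reply(message: str) -> str:
    # Respond in kind if the greeting concept is detected; otherwise
    # produce no pre-programmed reply.
    if expresses_greeting(message):
        return "Hi there!"
    return ""
```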

For example, if the user sends its pet the message "Do you know anything about tropical fish?", then the system can analyse the meaning of this message. The meaning would be determined to be that the message concerns "fish", and that the user wants information about this topic. The system can then formulate an enquiry on "Google", or on a dictionary Internet web site, for example, and then send a message back to the user, with a passage from the most relevant material found. More advanced data mining techniques can be used to provide information in a variety of ways. For example, the message "Do you know the meaning of tropical fish?" can trigger a dictionary or encyclopaedia type of search.
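The routing step above, from the determined meaning of an information-seeking message to either a general search or a dictionary-type enquiry, might be sketched as follows. The prefix patterns and the `formulate_enquiry` helper are greatly simplified assumptions; real meaning analysis would not rely on literal sentence prefixes.

```python
# Hypothetical sketch: classify an information-seeking message and extract
# its topic, producing a tagged enquiry string that a later stage would
# dispatch to a search engine or dictionary site. All names are assumptions.

def formulate_enquiry(message: str):
    """Return 'kind:topic' for a recognised request, or None."""
    lowered = message.lower().rstrip("?").strip()
    # Two simplified "meaning" patterns: a request for a definition
    # (dictionary search) and a request for general information (web search).
    patterns = (
        ("do you know the meaning of ", "dictionary"),
        ("do you know anything about ", "search"),
    )
    for prefix, kind in patterns:
        if lowered.startswith(prefix):
            topic = lowered[len(prefix):].strip()
            return f"{kind}:{topic}"
    return None
```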

As a further embodiment of the invention, the analysis of the meaning of the messages is stored in the system, and analysed to determine a user's profile. The profile may also be used to change said aspect of appearance or to generate a message.

As a further embodiment of the invention, the analysis of the meaning of the text messages can implement a security type of function within the system, if the meaning is determined to fall within a proscribed nature. For example, if the meaning is interpreted by the system as being sexual in nature, then a security function can be triggered, which can act to help protect children using the system from communicating with undesirable strangers or straying into unhealthy discussions with their friends. The language analysis feature of the present invention can function more effectively than the prior approach of flagging individual key words of a sexual nature, which users can avoid by using euphemisms or slang, for instance. This security function can be triggered when the general tone of the conversation moves into undesirable areas.
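One way the tone-based trigger described above might operate is to accumulate a per-message score from the meaning analysis and fire only when the running average of the recent conversation drifts past a threshold, rather than reacting to any single key word. This is a hedged sketch only; the `ToneMonitor` class, the window size and the threshold are all assumptions.

```python
from collections import deque

# Hypothetical sketch: each message is assumed to arrive with a score in
# [0.0, 1.0] from the meaning analysis, indicating how strongly it falls
# within a proscribed topic. The security function fires only when the
# average over a recent window exceeds a threshold.

class ToneMonitor:
    def __init__(self, window: int = 10, threshold: float = 0.5):
        self.scores = deque(maxlen=window)  # sliding window of recent scores
        self.threshold = threshold

    def observe(self, proscribed_score: float) -> bool:
        """Record a message's score; return True if the function fires."""
        self.scores.append(proscribed_score)
        average = sum(self.scores) / len(self.scores)
        return average > self.threshold
```

A single euphemistic message would barely move the average, but a conversation that drifts into a proscribed area would accumulate enough weight to trigger the function.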

The security function may operate in any useful manner. The system may issue a warning to the users, store information about the users and the content of the messages for review by adults or supervisors, or restrict or break communication with the users who are discussing the proscribed topics.

Other security issues may also be flagged for attention, apart from issues concerning children. The messaging system may be implemented within an office or work environment, and the proscribed issues may involve non-work related matters for example. For instance, discussions of sporting results within an office environment may trigger a warning message, or an entry into a logging system.

Another aspect of the invention involves obtaining information from external sources, which causes the appearance of the instant messaging system to change, or messages to be generated, preferably in response to the meaning of the messages determined. One preferred application of this is for the instant messaging system to provide or display advertising material to a user.

The advertising material may be randomly provided to the user. For example, a version of the instant messaging system may be provided to users for free, in return for the system displaying advertising. The source of the advertising material may be included in the system, or the system may obtain the material from external sites, such as over the Internet. The advertisers would pay for the privilege of displaying the advertising material.

However, it is preferred that the meaning determined from the messages will relate directly or indirectly to the advertising material displayed. For example, the subject content of the messages can trigger advertising for products directly related to that content. The determination of the meaning can also affect the advertising material indirectly: for example, if the analysis determines the user's favourite colour, then advertising material for clothing can display items in that colour, or the user's emotional state, either happy or sad, can alter whether advertising for party venues or medical practices appears.
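The selection of advertising material from a determined topic and emotional state might be sketched as below. The inventory, category names and `select_advert` helper are purely illustrative assumptions.

```python
# Hypothetical sketch: the determined meaning (a topic plus an emotional
# state) selects the advertising material to display. The inventory keys
# and banner names are illustrative assumptions only.

ADVERT_INVENTORY = {
    ("sport", "happy"): "sports_tickets_banner",
    ("sport", "sad"):   "sports_merchandise_banner",
    ("music", "happy"): "party_venue_banner",
    ("music", "sad"):   "streaming_service_banner",
}

def select_advert(topic: str, emotion: str) -> str:
    # Prefer material directly related to both topic and emotional state;
    # otherwise fall back to a generic advertisement.
    return ADVERT_INVENTORY.get((topic, emotion), "generic_banner")
```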

The advertising material can be provided in any of a variety of ways. One approach is to alter the background images to display advertising images. For example, billboards may appear in a landscape-type background image. If an avatar character is being used, an advertising logo may appear on that character's t-shirt. If a pet is used, the appearance of the pet may alter to reflect the sponsor's interests. Popular characters from movies or books may be sold or given away for use as avatars or pets in this manner. The pet may send messages to its owner, with advertising offers in them, especially when a related topic is being discussed among real users. Many other approaches may be utilised in accordance with this general embodiment and the features of the present invention.

The invention may also utilise knowledge mining and learning capabilities. The implementation of the language analysis and its seemingly artificial intelligence type of interaction within the system allows this to be used in the natural language search, advertising and user customisability for learning features in the instant messaging application. Knowledge mining and learning generates an information resource bank from two sources. The first is the Internet, through resources such as the "British National Corpus", "Wikipedia" or "Google", for instance. The second resource is the user themselves. This means that the system is able to do the following. Should a user have a particular interest in certain areas, the system, through the pet, is not only able to provide useful information that it automatically retrieves from the net, but is also able to store such data, to remember user preferences, habits, traits, etc., and to update this knowledge progressively. Having memory means that a user's profile, likes and dislikes, etc., can be remembered by the system, and based on this data and these preferences, selective advertising messages, again through the pet, can be delivered in a timely fashion according to the situation. The system is also able to progressively learn from the user in connection with specialised information that can be stored, retrieved, updated (either by the user or with new information added from the web), and can also be conveyed across to other users connected to the network.

The characters utilised in the invention may be generated in a number of ways. Some characters may be provided with the system. The system may allow the user to create their own characters, utilising an interface for this purpose.
One preferred such interface is to use the message system to follow instructions on creating the character, which are sent as messages by the user to a special pet, who may be presented as a "wizard", for example. The user can instruct the system to create a character, either to be used as an avatar or as a pet, by specifying the size, shape, colour, number of legs and arms, etc., for the character.

Another approach is to allow the user to create or download characters over the Internet. The user may be charged for each such character. Users can establish a trading system, for creating, updating and exchanging characters among themselves.

When the character is a pet, then users may be allowed to treat the character something like an actual pet, if desired. Such a character can resemble a real, cartoon or fantasy type of pet in appearance. As well as behaving like the instant messaging pet as discussed above, this pet can have added functions, such as requiring petting, feeding, grooming, being allowed to sleep, and so forth. Such a pet will be especially useful with children. These functions can be set up by parents to help control the child's access to the system, such as by insisting the pet be allowed to sleep when the child has his or her bedtime as well. Some of these functions may involve some payment by the user, which may help provide income to the system's providers, or to the copyright owner of the pet's character, for instance. In one preferred embodiment, the instant messaging system is configured specifically for use by children. Children will be attracted to the pet feature, which can incorporate educational features, and they can play with this character. The pets can be cartoons designed to appeal to children. The security feature can be set to protect the children from undesirable communication.

An alternative approach is to configure a version of the instant messaging system specifically for office or work environments. This instant messaging system can operate within a workplace in place of interoffice email, or as an adjunct to it. Work environments often require an instant response. Emails can sit in a user's inbox unread, whereas instant messaging alerts the user when a workmate is away from their desk, busy with other jobs and cannot be interrupted, or is at lunch. The present system may allow each worker to create an alter ego or avatar that can resemble their actual appearance, which can facilitate personal contact among workers. Collaboration among several workers is easy to organise using instant messaging. The present system's analysis of message text meaning can be utilised to access an organisation's databases, accounts system, stored documents and the like, to present information to the worker as they are messaging. The pet feature can be configured as an office assistant, and can hold and relay information while the user is busy, make data searches, and the like. The appearance of the instant messaging system can be customised to match the workplace or organisation. The knowledge mining and learning function mentioned above may also be used in this enterprise arrangement.

In the instant messaging system, the message sender may be any text input device, including a computer keyboard and/or a mouse, and/or a voice input device, including a microphone, and/or an image input device, including a camera or a whiteboard. The message receiver may be any text output device that includes a section on the display device that displays text, and/or a voice output device, including a speaker, and/or an image output device, including an image display device or a whiteboard. The instant messaging system may also include a section on the display device that shows any characters created by any one or all of the users, and/or any virtual characters created by any one or all of the users.

The instant messaging system may operate on any electronic device, including on a computer or a mobile telephone, which may be connected to other electronic devices so as to exchange messages.

The instant messaging system may be a software application.

The invention in another broad form allows a first user to communicate via a communications system with at least one other user by sending and/or receiving messages, said system including: a message receiver, a message sender, and a display device to display messages, characterised in that said system displays at least one character that represents at least one of said users on said display device, wherein said character is displayed as an apparent 3-dimensional character.

The invention in a further broad form allows a first user to communicate via a communications system with at least one other user by sending and/or receiving messages, said system including: a message receiver, a message sender, and a display device to display messages, characterised in that said system displays at least one character that represents and emulates a virtual user. The character may be displayed as an apparent 3-dimensional character.

The character may represent and emulate a virtual pet that belongs to the user associated with said pet. The pet may further function as a virtual user on said instant messaging system, able to receive messages from and/or send messages to the user to whom the pet belongs.

Alternatively, the pet may function as a virtual user on said instant messaging system, able to receive messages from and/or send messages to another user apart from the user to whom the pet belongs.

The invention in another broad form provides a method of generating an object in an instant messaging system, the method including: (a) sending and/or receiving one or more messages from one or more users of the instant messaging system; (b) determining the meaning of said one or more messages by analysing the natural language used in the one or more messages; and (c) changing at least one aspect of the object in accordance with said meaning determined in step (b). Preferably, the object is a display device. The aspect is preferably the appearance on the display device.

Preferably, the said aspect of the appearance of the display device that is changed is selected from any one or more of: the background appearance of the display device, a character that represents a user, or a character that represents and emulates a virtual user.

Preferably, the object is a message.

Preferably, the method further includes the step of storing at least some of the messages sent and/or received, for use in said analysis.

Preferably, at least one aspect of the object changes over time according to the sending and/or receipt of one or more additional messages which have their meanings determined by analysing the natural language used in the one or more additional messages.

The invention in yet another broad form includes a method of profiling a user, via a neural network, in an instant messaging system, the method including the steps of: (a) receiving one or more messages from a user of said instant messaging system; (b) analysing the natural language used in said one or more messages to determine if said messages have a positive or negative meaning; (c) generating an input layer on the neural network, said input layer including a node associated with each of said one or more messages, each node categorised as having a positive message within it or a negative message within it; (d) generating a descriptor layer on the neural network having descriptor nodes; (e) analysing the positive and negative meanings of said messages in said input layer nodes and determining synonyms and hyponyms of said messages; (f) storing the synonyms and hyponyms in said descriptor nodes and linking said descriptor nodes with one or more corresponding input layer nodes; (g) generating a personality trait layer on the neural network having personality trait nodes, each node of said personality trait layer having a predetermined personality trait; (h) linking said descriptor nodes with one or more personality trait nodes which most closely correspond to said descriptor nodes; (i) generating an output layer on the neural network, said output layer including one or more nodes representing a predetermined personality type; (j) linking said personality trait nodes with one or more personality type nodes which best correspond to the personality trait nodes; and (k) determining the personality type of the user. Preferably, at step (a), said one or more messages received from a user of the instant messaging system are received via a questionnaire.
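The layered structure described in these steps might be represented, in greatly simplified form, by a small linked data structure. This is a sketch only; the `ProfileNetwork` class and its node contents are assumptions, and no actual neural-network training is shown, merely the link-following from message nodes through descriptors and traits to personality types.

```python
# Hypothetical sketch of the layered profiling structure: input nodes
# (messages tagged positive/negative) link to descriptor nodes (synonyms
# and hyponyms), which link to personality-trait nodes, which in turn link
# to personality-type nodes. All contents are illustrative assumptions.

class ProfileNetwork:
    def __init__(self):
        self.input_nodes = []   # list of (message, "positive" | "negative")
        self.descriptors = {}   # descriptor -> set of input node indexes
        self.trait_links = {}   # descriptor -> set of personality traits
        self.type_links = {}    # trait -> set of personality types

    def add_message(self, message: str, polarity: str, descriptors: list):
        """Add an input node and link it to its descriptor nodes."""
        index = len(self.input_nodes)
        self.input_nodes.append((message, polarity))
        for d in descriptors:
            self.descriptors.setdefault(d, set()).add(index)

    def link_trait(self, descriptor: str, trait: str):
        self.trait_links.setdefault(descriptor, set()).add(trait)

    def link_type(self, trait: str, personality_type: str):
        self.type_links.setdefault(trait, set()).add(personality_type)

    def personality_types(self) -> set:
        """Follow links from descriptors through traits to output types."""
        types = set()
        for descriptor in self.descriptors:
            for trait in self.trait_links.get(descriptor, ()):
                types |= self.type_links.get(trait, set())
        return types
```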

Preferably, at step (k), said personality type is determined once the number of messages received from a user of the instant messaging system exceeds a predetermined threshold.

Preferably, the method further includes the step of: (a) changing at least one aspect of an object associated with the instant messaging system, based on the result of the personality type.

Preferably, the object is a display device.

Preferably, said aspect is the appearance of the display device.

Preferably, said aspect of the appearance of the display device that is changed is selected from any one or more of: the background appearance of the display device, a character that represents a user, or a character that represents and emulates a virtual user.

Preferably, the method further includes the step of matching information with the determined personality type of the user and displaying the information to the user.

Operating System

The instant messaging system of the present invention may be developed as an application that operates on a computer or other communications device, using any suitable programming language, or using any suitable operating system. In a preferred embodiment, it may be developed as a pure "Java Platform" application (from "Sun Microsystems"), which has an advantage that any operating system running a recent version of the Java Runtime Environment (JRE) would be able to run the application. Most (if not all) personal computers running a web browser have the JRE installed. However, other programming systems and operating systems may also be utilised. Although the present invention is preferably an application, it can alternatively be run as an applet embedded within a web-site on the Internet. This applet version will not require a download onto the user's communications device, and it will therefore allow a user to quickly access and login to the chat application when on-the-go, for example.

Protocol Support

There are a number of protocols currently in use among today's instant messengers. The present invention may be created to utilise any suitable protocol. For instance, "AOL Instant Messenger" (AIM, from "AOL Time Warner") and "MSN Messenger" (from "Microsoft") have the largest user bases, although they both use a closed protocol for communication. Some instant messaging applications, like "Trillian" (maintained by "Cerulean Studios") and "Qnext" (from "Qnext Corp"), try to support all the major protocols. This approach of complete protocol support is currently a popular choice among users with inter-messaging communication needs. The drawback to this approach is the difficulty of ensuring the legality of any reverse engineering that may be required in order to connect to a closed protocol. The major instant messaging companies have to date not taken any legal action to shut down third-party instant messengers, but they have previously changed protocols and included hashing algorithms to detect client applications that connect to their servers without their approval and support.

However, "Google Talk" (from "Google") and "Trillian" both support the "Jabber" protocol. The Jabber protocol is an open source, decentralized server protocol, allowing anyone to run a Jabber server. Besides being an open protocol, Jabber has several gateway servers to the most common protocols, like "MSN" and "AIM". A number of Jabber instant messenger (IM) applications can allow a user to locate a gateway server with the right IM protocol and seamlessly log on to their account. The fact that "Google Talk" is using the Jabber protocol is a confirmation of its future, and it is therefore a good and preferred choice for a suitable protocol.

Implementation

The present invention, in one preferred form, may be written in "Java" using the "Swing" GUI toolkit with a suitable look-and-feel. The look-and-feel may be changeable, and provided by third parties, for example. Asymmetric and symmetric encryption can be used for instant messaging (IM) and peer-to-peer content exploration sessions. Encryption may be an optional setting for the user, depending on their needs and on the application. Encrypted file transferring can be supported in a decentralized server or pure peer-to-peer network connection model.

The instant messaging system of the invention may allow the use of both the traditional 2-dimensional type of graphical "smileys", as well as an interactive pet, if desired. There may also be additional features such as built-in games, but preferably these are not included in the IM application, although a third-party plug-in development kit can be made available to other software developers interested in developing extended functionality for the chat application, for example. The ability to log all IM conversations is preferably included in the application. The user may also have a choice in terms of skins/themes for the look and feel of the application. The skins/themes can be supported by a choice of different Java Application look-and-feels.

Conferencing Features

Audio and video chat is present in many of the instant messaging applications on the market today. Therefore, it would be useful to include a similar multi-way audio and video chat feature, as is currently found in some available messenger applications. Another useful feature that may be included in the application of the present invention is a conferencing feature such as white-board sharing. If white-board sharing is available in the present invention, it may additionally include a form of sketch recognition in its common drawing area, so as to allow rapid sketching of symbols and diagrams.

Features

The present invention may preferably have a number of features not currently found in other instant messaging systems, such as: (1) the chat pet, (2) 3-D emoticons, (3) dynamic backgrounds, and (4) natural language analysis.

The instant messaging system of the invention "learns". In some ways it models the way humans learn language, facts, context and rules. By learning from humans, the system can then utilise any 2- or 3-dimensional image or graphic through verbal or textual descriptions. By learning from human dialogue and conversations, and enhanced by the ability to extract meaning and emotion from the language used in messages, the invention can remember, and graft itself onto a user's personality, or develop its own personality. It can assist a user as a Personal Assistant in information retrieval, reminders or even education.

Additionally, it has preferred features that function as a type of virtual pet. People are familiar with emoticons that convey human emotion in a simplistic, usually boring, non-personal and uncreative graphical form in previous instant messaging applications, or in emails or mobile messages. People are also familiar with the "Tamagotchi™" or "Pokemon™" sensations. These virtual pets are cute, adorable, seemingly intelligent and interactive, and through the sheer fact that they demand attention, can be personalised and "owned" by a user. The invention allows a user to create his/her own pet, avatar or image and to assign or describe a personality for that pet that could be a mirror of the user's own personality or a personality of its own. The pet, once created, can then interact intelligently with a user on a variety of levels, just by chatting or speaking with it. Creating a personal pet that generates a sense of ownership and emotional attachment in a user is a unique concept for an instant messaging system, and one that drastically affects our business and revenue models.

The system may store everything everyone has ever messaged, and can then find an appropriate thing to say or action to generate using contextual pattern matching techniques. In messaging a user it can use prior learnt material. Without hard-coded rules, it relies entirely on the principles of feedback. This is very different to the majority of other chat bots, which are rule-bound and finite. If you use a foreign language, it can learn it, and respond appropriately if it has enough to go on. It can be taught slang English, word games, jokes and any other form of identifiable language traits. Additionally, the pet may learn by responding to input from the user that is presently unknown to the pet (this may occur in cases when no meaning can be extracted from the input). Learning from unknown input allows the pet to evolve and increase its linguistic capabilities. For example, if the user advises the pet that its previous response was nonsensical (in light of the previous input from the user), then through further interaction with the pet using natural language, the user may suggest a more appropriate response. The more appropriate response may be appended to a database for later use by the pet. For instance, if the user input is "G'day mate" and the pet has no awareness of Australian slang, the pet may respond with "What?" (or similar), to which the user would reply "That means hello in Australian", at which point the pet would determine that the concept behind "G'day mate" requires a response similar to that for a greeting, and retain this learned knowledge in a database. Alternatively, if the user does not respond to the pet's question ("What?"), a detection system may be in place to determine if a nonsensical reply has been generated, and the pet may remember to ask the user at a later time what an appropriate response would have been. For example, using the same input "G'day mate", the pet may divert attention from the fact that it is incapable of generating a valid reply by replying with a general statement such as "How are you?", while at the same time remembering that "G'day mate" was an unrecognised input. At a later point in time the pet may initiate conversation with the user by enquiring "So what does G'day mate mean?", at which point the user may provide the same explanation, "It means hello in Australian". The pet may then append this information to a database for later use by the pet.
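The learning-from-unknown-input behaviour described above might be sketched as follows. This is a minimal illustration under stated assumptions: the `LearningPet` class, its deflecting reply and its phrase table are all invented for the example, and the "database" is just an in-memory dictionary.

```python
# Hypothetical sketch: when no meaning can be extracted from an input, the
# pet deflects with a general statement, remembers the unrecognised phrase,
# and can later ask the user what it meant, storing the explanation for
# future use. All names and replies are illustrative assumptions.

class LearningPet:
    def __init__(self):
        self.known = {"hello": "greeting"}   # phrase -> learned concept
        self.unrecognised = []               # phrases to ask about later

    def reply(self, message: str) -> str:
        phrase = message.lower().strip("!?. ")
        if phrase in self.known:
            # Respond according to the learned concept behind the phrase.
            return "Hi there!" if self.known[phrase] == "greeting" else "OK!"
        # No meaning extracted: deflect, but remember the input.
        self.unrecognised.append(phrase)
        return "How are you?"

    def pending_question(self):
        """A question the pet may later ask about an unrecognised input."""
        if self.unrecognised:
            return f"So what does {self.unrecognised[0]} mean?"
        return None

    def learn(self, phrase: str, concept: str):
        """Store the user's explanation for later use."""
        key = phrase.lower()
        self.known[key] = concept
        if key in self.unrecognised:
            self.unrecognised.remove(key)
```

After the user explains that "G'day mate" is a greeting, the pet would thereafter respond to it in kind.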

Chat Pet

The Chat Pet will be a living, breathing 3-D character constructed by the user within a character context. For instance, a user may begin from any well-known character, like the "Care Bears", for example. (The "Care Bears" trademark and the copyrights in the character designs are owned by "Those Characters from Cleveland", a subsidiary of the "American Greetings" company. See http://www.carebears.com/CareBears/html/index.html.) The user can be allowed to create his or her own "Care Bear". This "Care Bear" will have a unique look-and-feel specified by the user. The character context will specify the constraints (or freedom) applied to the character creation. After a user has created his/her unique and personalized character, the character has a vast amount of "actions" it can perform. These "actions" will apply to almost any character, and can be triggered by either user input or character personality (AI). As well as having a personality, the character will live in a constrained world where it may interact with its surroundings. If a user tires of his/her pet, a new character can be created in either the same character context or any new context the user may acquire (purchase). The introduction of a life span for the character may be an option, depending on how attached a user gets to his/her pet.

The owners of such characters would license their use in this application, and can generate an income stream from users. As well, the characters may be part of a publicity campaign to promote movies, character franchises and the like. For example, users may have to register with the character's owners to access their characters, or else purchase the requisite software on a disc or download it from a web site over the Internet, for instance.

3-D Emoticon

Although static emoticons can be found in most commercial instant messaging applications, none of them supports real 3-D emoticons. In other words, existing emoticons provided in previously available instant messenger platforms are either static 2-D images or animated 2-D flash-based images. The present invention aims to provide a realistic 3-dimensional emoticon. Such a 3-D emoticon may be either a user-created character or a default or pre-created character provided by third parties. There may also be the opportunity of the character performing an action on the remote machine's personalized character (such as a kiss, hug or perhaps a punch, for example).

Another possible role of the user's personalized character is to act as the receiver for the display of simple emotions parsed from the dialogue between two or more users. The dialogue is analysed, and emotionally loaded text can affect the character's physical appearance or emotional state, or perhaps even the character's personality.

Dynamic Background

The basis of the personalized character can also be applied to a dynamically changing background. This may initially be based on the user's profile, and then driven by the emotions extracted from the conversation. As an example, users may use a photograph of themselves, or their room, or their home, as the base for a background. This photograph will be affected by various filters (algorithms) to create a unique but personalized background image. Alternatively, the user may have a general theme selected for them from an analysis of the user's profile, which is dynamically altered according to the ebb and flow of the dialogue taking place in the instant messaging application. For example, the city where the user lives may become the theme, and the images changed or filtered according to the meaning of the language. A scene of a famous landmark, or a stream of different landmarks, in that city may be displayed as the background, and further images of sunshine, clouds, rain, etc, may be superimposed on that image according to the dialogue occurring. Ideally, the images shown would match the tenor of the conversation, so that an angry discussion may show a storm brewing, for instance.
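The tone-to-background mapping just described can be sketched as a simple lookup. The keyword lists, overlay names and base image file are illustrative assumptions standing in for the natural language processor and image filters, not part of the actual system.

```python
# Sketch: map the detected tone of a message to a weather overlay composited
# onto the user's base background image.

TONE_OVERLAYS = {
    "angry":   "storm_brewing",
    "sad":     "overcast",
    "happy":   "sunny",
    "neutral": "light_clouds",
}

def classify_tone(message):
    """Crude keyword-based stand-in for the natural language processor."""
    text = message.lower()
    if any(w in text for w in ("hate", "angry", "late")):
        return "angry"
    if any(w in text for w in ("sad", "sorry", "cry")):
        return "sad"
    if any(w in text for w in ("good", "great", "happy")):
        return "happy"
    return "neutral"

def background_for(message, base_image="melbourne_skyline.png"):
    """Return (base, overlay) describing the composited background."""
    return base_image, TONE_OVERLAYS[classify_tone(message)]
```

For example, an angry line such as "I hate people who are late" would select the `storm_brewing` overlay, matching the storm example above.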

Natural Language

By using a natural language parser, a user can give specific textual instructions to create and control a character. The user can specify character properties and behaviour, as well as telling the character what to do. User IM sessions can then be parsed continually using the natural language parser, and as a result the character can react to what is being said. For instance, it may react to an insult from another user by appearing to cry, or to the user's own happy mood by appearing to smile, where the natural language parser identifies the insult or happy mood from an analysis of the instant messaging dialogue.
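The continuous-parsing idea above can be sketched as a cue table that turns each incoming IM line into animation events for the appropriate character. The trigger keywords and animation names are invented for illustration; a real parser would do far more than keyword matching.

```python
# Sketch: each parsed IM line may trigger an animation on the sender's or the
# recipient's character (insult -> recipient cries, happy mood -> sender smiles).

ANIMATION_CUES = [
    (("stupid", "idiot", "ugly"), "recipient", "cry"),
    (("feeling good", "great day", ":)"), "sender", "smile"),
]

def animation_events(line, sender, recipient):
    """Return a list of (character, animation) pairs triggered by one line."""
    events = []
    text = line.lower()
    for keywords, target, animation in ANIMATION_CUES:
        if any(k in text for k in keywords):
            who = sender if target == "sender" else recipient
            events.append((who, animation))
    return events
```

So "You are stupid" sent by alice to bob would queue a "cry" animation on bob's character, as in the insult example above.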

The natural language dialogue may also operate in a conversation between a user and their character "pet", or another user's pet. It is best to make the pet character seem as natural as possible when conversing with a user.

Details of Character Generation

A 3-D rendered and physically animated character provides one of the main components of the present invention. This can be further broken up into its visual and its non-visual components. The visual component is the rendering of a 3-D model of a character. The non-visual components consist of a natural language and artificial intelligence sub-system that will help to make the character or pet appear intelligent and to work in a helpful manner. The visual part of the character may be a skeleton-deformed skin that can also be animated, if desired. Such a character creation process is generally known in the computer imaging and animation art.
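The skeleton-deformed skin can be illustrated with a deliberately minimal 2-D sketch: a skin vertex bound to a bone follows the bone when an animation rotates it. The single-bone binding and all names are simplifying assumptions; real engines use weighted multi-bone skinning with full transform matrices.

```python
# Sketch: forward kinematics in 2-D. An animation drives the bone's angle, and
# the bound skin vertex is "dragged about" with the bone.
import math

def rotate(point, angle):
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

class Bone:
    def __init__(self, origin, angle=0.0):
        self.origin = origin      # bone pivot in world space
        self.angle = angle        # current rotation (the animation's input)

    def transform(self, local_vertex):
        """Map a vertex given in the bone's local space into world space."""
        rx, ry = rotate(local_vertex, self.angle)
        return (self.origin[0] + rx, self.origin[1] + ry)

upper_arm = Bone(origin=(0.0, 0.0))
elbow_vertex = (1.0, 0.0)         # skin vertex one unit along the bone
upper_arm.angle = math.pi / 2     # animation pose: raise the arm 90 degrees
print(upper_arm.transform(elbow_vertex))  # vertex dragged to roughly (0, 1)
```

An inverse-kinematics system, as discussed below, would work the other way around: given a target position for the vertex, solve for the bone angles.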

This animation may consist of frame-by-frame modification of the character's skeleton. As the bones of the skeleton move about, the bones will "drag about" any vertices that make up the skin mesh. This method is a low user of computer memory, but can be computationally expensive; however, the realistic look of the character and its interaction with other characters and the world about it may make this computational cost worth bearing. The character can work using a "rag-doll" methodology. This method differs from the methodology used in many games, in that unplanned-for forces can cause the character's limbs to move in a realistic manner. For example, a character that was designed before the introduction of a chair will stumble as expected on accidentally colliding with this object. To create a character that moves in a physically realistic manner will preferably require a physics engine. The first part of the physics engine is a "Forward Kinematics" system, where animations are used to drive the character into taking up a variety of different poses. A more complex part of the physics engine may be the "Inverse Kinematics" system that will allow forces external to the character to modify whatever animation-driven motion that it is trying to

carry out. This can allow the character to provide a very large number of responses to interactions with its surrounding world, instead of working with only a pre-generated series of poses. Characters can preferably be generated in two ways. One uses 3-D editing software like "3DS Max" or "Alias Maya" to create a character with a full skeleton. This data is saved in a suitable character data format. Using various animation tools, one can edit a series of animations for this hand-made character. This is a quick and relatively easy method.

A second way of generating a character will be to use the Natural Language system and talk to it, describing the sort of character that is desired. Due to the simple nature of characters, this will consist of a simple series of instructions ("make left leg longer", "add another arm to the right") made up of simple language and instructions. If necessary, a user can be provided with examples of permitted language that will operate to generate a suitable character. This conversation can be used to generate a skeleton that can then be "clothed" in a mesh.

Some experiments have created an algorithm for generating a rather rounded-looking character, but further simple development can improve the character's fine detail, like the face area. Animations are generally calculated bone by bone, and no bone is affected unless explicitly referenced. This means that several separate animations can be run at the same time. For example, a character can be given a "walking" animation at the same time as a "wave his hand" animation. A character preferably has a default or preferred pose, and if no specific animations are provided then it can slowly drift to this pose in order to provide a consistent starting point for any further animations. If several animations are used at the same time on the same bone then the average of the animations may be used.
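The per-bone blending rule above can be sketched as follows. Pose values are reduced to plain per-bone angles for simplicity, and the animation names are illustrative: animations touch only the bones they reference, unreferenced bones keep the default pose, and bones driven by several animations take the average.

```python
# Sketch: blend several simultaneous animations into one pose, bone by bone.

def blend(animations, default_pose):
    """animations: list of {bone_name: angle}; unreferenced bones keep default."""
    pose = dict(default_pose)
    contributions = {}
    for anim in animations:
        for bone, angle in anim.items():
            contributions.setdefault(bone, []).append(angle)
    for bone, angles in contributions.items():
        pose[bone] = sum(angles) / len(angles)   # average on shared bones
    return pose

default = {"legs": 0.0, "right_arm": 0.0, "head": 0.0}
walking = {"legs": 30.0}
waving  = {"right_arm": 90.0, "head": 10.0}
print(blend([walking, waving], default))
# -> {'legs': 30.0, 'right_arm': 90.0, 'head': 10.0}
```

Running a "walking" and a "wave his hand" animation together thus produces one combined pose, and two animations driving the same bone average out, as described above.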

Preferably, there are several levels through which interaction with the character can occur. The first is having the character passively listen into the conversations that the human user generates. This lets the character derive important user information about the human user, and provides cues for generating actions in the character. This simply takes the form of loading and running animations according to their suitability.

Another method of interaction is through direct conversation with the character. This takes the form of a chat session where the user may type text into a specific text area on the Graphical User Interface (GUI) and the character can respond either or both through its own words and changes to its body pose as mentioned above.

BRIEF DESCRIPTION OF DRAWINGS

The invention is now discussed with reference to drawings, where:

Figure 1 illustrates an outline of the overall architecture of one embodiment of the instant messaging system of the present invention;

Figure 2 illustrates an overview of an example of architecture of an instant messaging system that incorporates natural language processor, conversation manager and 3-D engine modules, in order to generate customised, intelligent-seeming and emotive 2- and 3-dimensional images;

Figure 3 illustrates a flow chart of one example of a natural language processor to process natural human discourse and dialogue;

Figure 4 illustrates an overview of an example of architecture for a 3-D domain knowledge system that functions as a communications interchange protocol between a natural language parser and a 3-D engine, which may be used to create customised 2- or 3-dimensional user creations as well as to send instructions for real-time actions, animations and emotional expressions;

Figure 5 shows a data flow diagram for the domain knowledge system that may be used with the invention;

Figure 6 shows the architecture of one example of a method for creating real-time 2- and 3-dimensional objects, images and entities, as well as real-time actions, animations and responses in 2- and 3-dimensional objects, images and entities;

Figures 7A and 7B illustrate an instant messaging system window according to the invention of a messaging client that understands a human dialogue and displays a 3-dimensional character, for each of two users, that expresses a particular emotion in real-time. Figure 7A displays a realistic (or avatar) character type, while 7B displays a "pet" type of character;

Figures 8A and 8B illustrate the same window of a messaging client, where a 3-dimensional character (located on the left of the window) responds to some user dialogue as well as to the first character;

Figures 9A and 9B illustrate the same window of a messaging client, where the 3-dimensional characters respond to more natural human discourse and dialogue between two users, by adapting their appearance according to the meaning of the language being used;

Figure 10 shows a flow chart of one possible method for creating a user-customised 2- or 3-dimensional character by speaking naturally;

Figure 11 illustrates a flow chart of one possible method for creating responses, reactions, intelligence and machine learning capabilities within a 2- or 3-dimensional object in real time, as well as real-time updates and notifications to other entities and servers that belong to a user network; and

Figure 12 illustrates a flow chart of one possible method for profiling a user, via a neural network, in an instant messaging system.

To facilitate understanding, identical reference numerals have been used, wherever possible, to designate identical elements that are common to the figures.

DETAILED DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates the overall architecture of a system (100) for the creation and implementation of an interactive, seemingly intelligent and emotive instant messaging client application in accordance with the present invention. This is just one example of an instant messaging application according to the invention, and other applications with different, fewer or additional features may also be utilised, without departing from the essence of the present invention.

In the example shown in Figure 1, the system (100) comprises a payment system (110), a core engine system (120), a graphic system (130), a natural language dialogue controller (140), instant messaging clients (150, 160 & 170), a voice recognition device (180), a user database (190), a neural network (195), and a chat application system (200). Although the present system is described as comprising a number of disparate and separate systems, other configurations of the same parts can also be deployed to achieve the same result in the present invention. For example, the chat application system (200) may not be implemented as disparate to other chat-based instant messaging systems and could be integrated as a whole.

A user starts out by using the instant messaging system or other pertinent human-to-human communication clients (150, 160, 170) and begins chatting with an artificially intelligent chat bot (AI Chat Bot) via a natural language dialogue controller (140). The chat bot can conduct a natural conversation or dialogue with a user in order to ascertain things like the user's preferences, likes and dislikes, age, personality, intelligence and so on, to assist the user in creating a 2- or 3-dimensional character, image, etc, according to the user's own choice. The natural dialogue is processed and understood by a natural language processor (NLP) (300) (which is shown in Figure 3 in more detail) and an artificial intelligence module (122), and compared and enhanced by an ontological knowledge resource (123). Meanings generated semantically by the NLP are interpreted by the 3-D Domain

Knowledge Communications Interchange Protocol (400) (which is described in more detail in Figure 4), which operates as an application switcher to process the selected output data as instruction strings to a 3-D physics engine (125), a 3-D render engine (126) and/or a 2-D sprite engine (127). The physics engine (125), render engine (126) and sprite engine (127)

(respectively) process the information in order to create and generate the corresponding 2- or 3-dimensional object described by the user through the natural language dialogue controller (140). Some of the 3-D data may be derived, assembled and constructed from a graphic server (130) that contains a 3-D repository (131), a 2-D repository (133) and/or a file storage system (132). The resulting image or 3-D model is then displayed to the user, who can then keep the desired creation or continue making modifications by chatting naturally to the AI Chat Bot natural language dialogue controller (140).

Once a user is satisfied with his/her creation, a payment system (110) may optionally be employed, either through some sort of billing server (111) or through the transaction of virtual currency. The user then proceeds to download the creation to his/her instant messaging client or other pertinent clients (150, 160, 170) as well as a chat application system (200) (which is described in more detail in Figure 2). The chat application system (200) operates not only as an instant messaging client but also allows the user to continue to adapt, interact with, alter and modify his/her creations in real time, with continual updates occurring through an event router that sends updated notifications to the instant messaging client as well as a user database (UDB) (190). This system therefore enables complete control over emoticons, characters, images, wallpapers, etc, including intelligent responses, reactions, actions, animations and emotion through the constant mining of information through the NLP. The chat application system (200) can also determine the profile or personality type of the user. The user begins chatting with an artificially intelligent chat bot (AI Chat Bot) via a natural language dialogue controller (140). The AI Chat Bot can conduct a natural conversation or dialogue with a user in order to ascertain things like the user's preferences, likes and dislikes, age, personality, intelligence and so on, and uses a neural network (195) to ascertain what the personality type of the user is, based on their likes and dislikes, through analysing the conversation between the AI Chat Bot and the user. The analysis may be stored on a user database (UDB) (190).

In some cases, a voice recognition device (180) can be employed as an alternative to chatting or conversing through text only.

Figure 2 illustrates a flow chart of a chat application system (200) that incorporates a Natural Language Processor (214), Conversation Manager (212) and a graphics generator or 3-D Engine (shown in Figure 6) in order to generate unique, customisable, intelligent and emotive 2- and 3-dimensional images, characters, backgrounds, avatars, emoticons and other related artwork, as well as to operate normally as an instant messaging client. The chat application system (200) consists of a graphic user interface (GUI) (201), a chat system (202), a settings system (203), a files system (204), a pet category (205), a human category (206), a friends category (207) and

user profile database (208), a transferring system (209), a search mechanism (210), a friend management control (211), a conversation manager (212), a networking protocol system (213), a natural language processor (214), a personal assistant module (215), a conversational agent (216), an artificial intelligence module (217), a user profile database (218) and a web/data mining system (219).

This figure provides just one example of a system in which the chat application might work, and other alternative approaches are also possible. In this example, the main interface operates primarily through a typical Graphical User Interface (201) that allows the user to start a chat session (202) either with his/her friend (206) or pet (205). All natural conversations, discourse and dialogue are handled by a conversation manager (212) that sends data across to a natural language processor (214) in order to mine and extract meaning, content, context, etc. contained within the dialogue, in order to send instructions to a graphics generator or 3-D engine (shown in Figure 6) that controls actions or animations triggered within 2- and 3-D content such as the pet, images, wallpapers, backgrounds, etc. The Natural Language Processor (214) communicates directly with a personal assistant manager (215), a conversation agent (216) for natural dialogue and feedback capabilities, an artificial intelligence module (217), a user profile manager (218) and a web/data mining portal (219) for knowledge extraction from the web. The Natural Language Processor (214) generates an information resource bank from two sources, which preferably complement each other. The first is the Internet, via the web/data mining portal, through resources like the "British National Corpus", "Wikipedia" or "Google", for instance. The second resource is communication from the user itself.

A friend management module (211) is also utilised for a user to organise his/her buddy or friend list (207). User Profiles (208) and Friend Management (211) are controlled by a user through the settings manager (203). The instant messenger also allows for file transfers (204/209) as well as file searches (204/210), which, once again, handle queries via natural language, through the natural language processor (214) and the web/data mining portal (219).

Figure 3 illustrates a flow chart of a preferred natural language processor or semantic natural language parser which consists of a text or voice input acquisition module (301), a semantic parser (304), a rhetorical structure module (305), a conversation analysis agent (306), an ontological knowledge engine (302), a context update engine (303) and a text output module (307).

Essentially, a user starts out by inputting some text, or starts a natural conversation or dialogue through the text input module (301). The meaning of the dialogue is analysed and parsed by the semantic parser (304) and rhetorical structure module (305). Once meaning is

derived and understood, the parser communicates with the conversation analysis agent (306) in order to provide a response to a text output module (307) that communicates with the user once again. Alternatively, the semantic parser (304) and rhetorical structure module (305) send data streams to a 3-D Domain Knowledge system, which in turn sends instructions to the graphics generator or 3-D Engine (shown in Figure 6) in order to effect a change within the virtual scene or environment. In addition, knowledge mining and extraction is supported by the ontological knowledge engine (302), which extracts information from its own corpus as well as the Internet. Any information or knowledge update, including new forms of knowledge that can potentially be trained by a user, is supported by the context update engine (303).

Figure 4 illustrates a flow chart of an example architecture for a 3-D Domain Knowledge system that incorporates a Graphical User Interface (410), a domain knowledge engine (420), a natural language processor (430) and a graphics engine (440). The GUI (410) allows the user to create, modify, expand and alter artificial entities (e.g. rooms), natural entities (e.g. scenery) and characters (e.g. humans, animals, pets) contained within a domain knowledge interface (411). The user communicates with his/her creations within the domain knowledge interface (411) through a natural language dialogue controller (412/430) that handles dialogue with the user and sends feedback to the user as well. The domain knowledge interface (411) and the dialogue controller (412) then process the user dialogue or conversation and send instructions to the domain knowledge engine (420) that contains information and data about 2- and 3-dimensional objects. The domain knowledge engine (420) constructs, creates, modifies, etc.
a particular user-defined object by sending specific requests and commands to the graphics engine (440) that contains its own domain knowledge libraries and information (441) as well as a settings controller (442) for handling further modifications and updates.

Figure 5 illustrates a flow chart of a detailed domain knowledge system that may be used with the invention. It incorporates a natural language processor (501), a graphical user interface (502), a character domain knowledge (503/510), an artificial domain knowledge (504/508), a natural domain knowledge (505/509), a domain knowledge controller (506), a primary domain knowledge engine (507), a graphics engine (511), a domain knowledge model controller (512), an artificial inventory model look-up (513), a natural inventory model look-up (514), a character model look-up (515), a generic 3D model look-up (516) and a 3D model repository (517). Based on a user's request in a natural conversation through the GUI (502) and NLP (501), the NLP (501) determines the meaning, objective, design, context, etc. by chatting naturally to a user. The user may choose to create an artificial object, a natural object or a particular custom-designed character, or a combination of multiple objects, by describing his/her thoughts and ideas

through the NLP (501). The NLP (501) then sends a command string to the DK Controller (506), which processes the request through a primary domain knowledge engine (507) and fills templates contained within an artificial DK (504/508), natural DK (505/509) and/or character DK (503/510). Once the templates have been filled and the right 3D object has been ascertained, the DK Controller (506) works with a graphics engine (511) and sends model data to a DK Model controller (512) that selects model parts from a 3D model repository (517) through a 3D model look-up (516). The final model is then assembled through the graphics engine (511) and displayed to the user on the GUI (502). The user may choose to modify the object at this stage, and the process is repeated.

Figure 6 illustrates a flowchart of an exemplary method for a graphics generator in the form of a real-time 3D graphic engine (600) that incorporates a 3D model repository (610), a 3D factory (620), a 2D model repository (630), a domain knowledge engine (640) and an animation controller (650). This engine is used to create either 2- or 3-dimensional objects in real time, based on natural language input from a user, and to then animate or modify the user's creation. Initially, a user's request is processed through the domain knowledge engine (640), and instructions and commands are sent to the 3D model repository (610) and/or 2D model repository (630), the model finally being created and constructed by the 3D factory (620). The final model is then animated, controlled, modified, etc. by the animation controller (650).
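The template-fill-and-lookup flow of Figures 5 and 6 can be sketched as follows. The repository contents, template fields and function names are invented for the example; they stand in for the domain knowledge engine, model look-up and 3D factory.

```python
# Sketch: NLP output fills a character-domain-knowledge template, the
# controller looks the parts up in a model repository, and the parts are
# assembled into the final model.

MODEL_REPOSITORY = {
    ("character", "bear", "body"): "bear_body.mesh",
    ("character", "bear", "head"): "bear_head.mesh",
}

def fill_template(parsed_request):
    """Map (already parsed) NLP output onto a character DK template."""
    return {
        "category": "character",
        "species": parsed_request.get("species", "bear"),
        "parts": ["body", "head"],
    }

def assemble(template):
    """Select the model parts for the filled template from the repository."""
    return [MODEL_REPOSITORY[(template["category"], template["species"], part)]
            for part in template["parts"]]

print(assemble(fill_template({"species": "bear"})))
# -> ['bear_body.mesh', 'bear_head.mesh']
```

A user request such as "make me a bear" would thus flow from parsed meaning, through a filled template, to a list of repository parts for the graphics engine to assemble.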

Figures 7A and 7B illustrate an instant messaging system window (700) according to the invention of a messaging client that understands a human dialogue and displays a 3-dimensional character, for each of two users, that expresses a particular emotion in real-time. Figure 7A illustrates a dialogue window (701) together with realistic (or avatar) 3-dimensional character types (702, 703). A message is entered by a user into the dialogue window (701). In this case, the user associated with 3-dimensional character (702) asks the user represented by the 3-dimensional character (703) how he feels today, as well as advising the user represented by the 3-dimensional character (703) that he is feeling good. The messaging client analyses the message and understands that the user associated with 3-dimensional character (702) is feeling good. Therefore, the 3-dimensional character (702) is updated to show the character smiling. The user associated with 3-dimensional character (703) responds by entering a message in the dialogue window (701), and either or both of 3-dimensional character (702) and 3-dimensional character (703) can change their appearance on the basis of the meaning of the messages entered into the dialogue window (701). The 3-dimensional characters (702, 703) understand the mood conveyed by the user and progressively display an emotion that represents how that particular user is feeling.

Figure 7B illustrates a dialogue window (701) together with "pet" type 3-dimensional characters (702, 703). The "pet" type 3-dimensional characters (702, 703) may be associated with a user, or one of the "pet" type 3-dimensional characters (702, 703) may act independently of the user and interact with the user as a "pet". The operation is the same as that described in Figure 7A, in that the appearance of the 3-dimensional characters changes in response to messages sent from the dialogue window (701).

Figure 8A illustrates an instant messaging system window (800) of a messaging client, including realistic (or avatar) 3-dimensional characters (702, 703), wherein the user associated with 3-dimensional character (703) has asked a question of the user associated with 3-dimensional character (702) ("how are you today"), together with advising the user associated with 3-dimensional character (702) that he is feeling really good today. The messaging client analyses the message and understands that the user associated with 3-dimensional character (703) is feeling good, and therefore the 3-dimensional character (703) is updated to show the character smiling. As the user associated with 3-dimensional character (702) has yet to respond, there is no change to the 3-dimensional character (702) until the user associated with 3-dimensional character (702) responds by sending a message back to 3-dimensional character (703) via the dialogue window (701).

Figure 8B illustrates an instant messaging system window (800) of a messaging client, and messages as described in Figure 8A, but including "pet" type 3-dimensional characters (702, 703). The "pet" type 3-dimensional characters (702, 703) may be associated with a user, or one of the "pet" type 3-dimensional characters (702, 703) may act independently of the user and interact with the user as a "pet". The operation is the same as that described in Figure 8A, in that the appearance of the 3-dimensional characters changes in response to messages sent from the dialogue window (701).

Figure 9A illustrates an instant messaging system window (800) of a messaging client including realistic (or avatar) 3-dimensional character types (702, 703). A message is entered by a user into the dialogue window (701). In this case, the user associated with 3-dimensional character (702) communicates to the user represented by the 3-dimensional character (703) with "where are you, you are late, I hate people who are late". The messaging client analyses the message and understands that the user associated with 3-dimensional character (702) is angry. Therefore, the 3-dimensional character (702) is updated to show the character angry. The user associated with 3-dimensional character (703) responds by entering a message in the dialogue window (701). In this case, the user associated with 3-dimensional character (703) responds by saying "I am sorry. Please don't be angry". The messaging client analyses the message and

understands that the user associated with 3-dimensional character (703) is sorry or worried.

Therefore, the 3-dimensional character (703) is updated to show the character as sorry or worried. The 3-dimensional characters (702, 703) respond to more natural human discourse and dialogue between two users by adapting their appearance according to the meaning of the language being used. The 3-dimensional characters (702, 703) understand the mood conveyed by the user and progressively display an emotion that represents how that particular user is feeling.

Figure 9B illustrates an instant messaging system window (800) of a messaging client together with the messages as described in Figure 9A, but including "pet" type 3-dimensional characters (702, 703). The "pet" type 3-dimensional characters (702, 703) may be associated with a user, or one of the "pet" type 3-dimensional characters (702, 703) may act independently of the user and interact with the user as a "pet". The operation is the same as that described in Figure 9A, in that the appearance of the 3-dimensional characters changes in response to messages sent from the dialogue window (701).

Figure 10 illustrates a flow chart of one possible method for creating a customised 2- or 3-dimensional object by speaking naturally. A user initially starts (1010) by sending one or more messages, chatting to an AI chat bot system in order to specify the type of object the user intends to create. Once an objective is ascertained (1020), the elements in the dialogue are analysed, parsed and processed (1030), and a result is assembled, produced and displayed to the user (1040). This is done by determining the meaning of the one or more messages by analysing the natural language used in the one or more messages. The user can continue to send one or more messages, chatting naturally to modify the object (1050) or to add elements to the object such as personalities, intelligence and other attributes (1060). Once the user is satisfied with his/her creation, the user downloads the finished product to his/her computer (1070).

Figure 11 illustrates a flow chart of one possible method for creating real-time responses, reactions and artificial intelligence within a 2- or 3-dimensional object. Once a user has created a particular object with default personality, characterisation, intelligence, etc., the user can chat to a friend, a group of friends, or his/her creation. The dialogue session and all natural language will be processed and analysed (1120), and the created object can learn new knowledge, information, slang, lingo, etc. through a machine learning module (1130), respond in natural dialogue (1140), learn new actions or respond to certain natural language inputs through triggered animations in real time (1150), or receive instructions from a user to connect to the Internet to search for information or knowledge (1160). At the end of each session, all new information, actions, animations, etc. will contribute to enhanced or new intelligence within the created object, and updates and notifications are sent to all pertinent servers and buddy notification systems (1170).

Figure 12 illustrates a flow chart of one possible method (1200) for profiling a user, via a neural network, in an instant messaging system. At step (1205), one or more messages are received from a user. The messages may be received from the user during use of the instant messaging system, or may be via a questionnaire associated with the system (such as when signing up to use the system or to create an account on the messaging system). At step (1210), the messages are processed and understood by a natural language processor (NLP), which then determines whether the one or more messages are positive or negative in nature. For example, use of the word "dislike" or "hate" may (in certain contexts) be determined to be negative in nature, whereas use of the word "like", or messages that convey hopes or desires, may be determined to be positive in nature. Once the positive or negative meaning of the one or more messages has been determined, at step (1215) an input layer on the neural network is generated. The input layer includes one or more nodes which store the content of the messages, and the nodes are categorised as having a positive or negative message within them. For example, if a user says "I like soccer", this message is stored in a node on the input layer of the neural network and the node will be categorised as positive. At step (1220), a descriptor layer is generated on the neural network. The descriptor layer includes one or more descriptor nodes. The descriptor layer is a breakdown of the input layer. At step (1225), the positive and negative messages in the input layer nodes are put through a lexical database to determine one or more synonyms and hyponyms associated with the message. For example, the positively characterised node that contains the message "I like soccer" is analysed and one or more synonyms or hyponyms are produced (e.g. sport, ball).
At step (1230), the synonyms and hyponyms associated with the message on a node of the input layer are stored in descriptor nodes associated with the descriptor layer, and the descriptor nodes are linked to the corresponding messages stored in the input layer nodes. For example, the hyponyms "sport" and "ball" are stored in descriptor nodes which are linked to the input layer node that contains the message "I like soccer". At step (1235), a personality trait layer is generated on the neural network. The personality trait layer includes one or more personality trait nodes, each having a predetermined personality trait. For example, the predetermined personality trait nodes could include personality traits such as "quiet", "dependable", "practical", "sociable", "friendly", "sensitive", "energetic", "good at reading people" and "loves outdoor physical activities". At step (1240), the descriptor nodes are linked with one or more of the personality trait nodes which most closely correspond to them (e.g. the descriptor node containing "sport, ball" may be linked to the personality trait node "loves outdoor physical activities"). At step (1245), an output layer is generated on the neural network. The output layer includes one or more output nodes, each representing a predetermined personality type. The personality types may be based on a classification such as the Myers-Briggs personality types; for example, the output nodes could include the sixteen Myers-Briggs personality types. At step (1250), the personality trait nodes are linked with the one or more personality type nodes which best correspond to them, in order to determine the personality type of the user. This result may be returned to the user or may be stored by the instant messaging system. The method (1200) is preferably repeated a predetermined number of times in order for the neural network to more accurately determine the personality type of the user.
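Steps (1220) to (1250) can be illustrated with a minimal sketch in which hard-coded tables stand in for the lexical database, the descriptor-to-trait links and the trait-to-type links. All table entries, including the trait-to-type mapping, are illustrative assumptions rather than the actual mappings of the described system:

```python
from collections import Counter

# Assumed stand-in for a lexical database of synonyms/hyponyms.
LEXICAL_DB = {"soccer": ["sport", "ball"], "reading": ["book", "quiet activity"]}
# Assumed descriptor -> personality-trait links.
TRAIT_FOR_DESCRIPTOR = {"sport": "loves outdoor physical activities",
                        "ball": "loves outdoor physical activities",
                        "book": "quiet"}
# Assumed personality-trait -> Myers-Briggs-style type links.
TYPE_FOR_TRAIT = {"loves outdoor physical activities": "ESTP",
                  "quiet": "ISTJ"}

def profile(input_layer):
    """Follow input -> descriptor -> trait -> type links and return the
    personality type supported by the most links, or None if no link fires."""
    votes = Counter()
    for node in input_layer:
        for word in node["message"].lower().split():
            for descriptor in LEXICAL_DB.get(word, []):
                trait = TRAIT_FOR_DESCRIPTOR.get(descriptor)
                ptype = TYPE_FOR_TRAIT.get(trait)
                if ptype:
                    votes[ptype] += 1
    return votes.most_common(1)[0][0] if votes else None
```

In the described system these links are weighted and learnt, rather than fixed lookup tables; the sketch only shows the layer-to-layer flow.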

In a preferred form, the method of profiling a user is used in combination with the method of generating an object. The method further includes the step of changing at least one aspect of an object associated with the instant messaging system, based on the determined personality type. For example, a pet created by the user may change to better complement or reflect the personality type of the user. In the case of a user who has been determined to have an ISTJ (Introverted Sensing Thinking Judging) Myers-Briggs personality type, the system may alter the pet to behave or appear in a fashion that is similar to the user, so that the user can better empathise with the pet. Preferably, the instant messaging system analyses the personality type of the user, matches information with the determined personality type, and displays the information to the user. The information may be in the form of targeted advertising, or a suggestion of another user with a compatible personality type whom the user may wish to contact. Preferably, the input nodes may provide suitable product recommendations and shopping product searches to the user, based on the likes and dislikes associated with the input nodes. The neural network may use a back propagation learning algorithm in order to train the neural network to achieve accurate outcomes and map the input nodes, descriptor nodes and personality trait nodes to the personality types. When using a back propagation algorithm to train the neural network, it has been found that an acceptable error value in the neural network is 0.4: even at 40% error the neural network will still return a correct result. With an error value above 0.4, the personality types returned (given input in the form of messages) may not be correctly mapped. The error value can be lowered below 0.4 to produce a more accurate neural network, but this results in increased learning time.
It has been found that an error rate of 0.4 provides a good balance between learning time and acceptable error.
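As an illustration of the 0.4 stopping criterion, the following sketch trains a single sigmoid unit (a toy stand-in for the full network, not the patent's architecture) by gradient descent and stops once the accumulated error falls to the acceptable value. The training data, learning rate and epoch limit are assumptions:

```python
import math
import random

def train(samples, target_error=0.4, lr=0.5, max_epochs=10000):
    """Train one sigmoid unit by online gradient descent, stopping when
    the summed squared error drops to the acceptable value (0.4 here)."""
    random.seed(0)
    w, b = random.random(), random.random()
    error = float("inf")
    for _ in range(max_epochs):
        error = 0.0
        for x, t in samples:
            y = 1.0 / (1.0 + math.exp(-(w * x + b)))   # forward pass
            error += 0.5 * (t - y) ** 2
            grad = (y - t) * y * (1.0 - y)             # backpropagated delta
            w -= lr * grad * x
            b -= lr * grad
        if error <= target_error:                      # acceptable error reached
            break
    return w, b, error
```

Lowering `target_error` makes the returned weights more accurate but, as noted above, increases learning time.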

A neural network momentum associated with the back propagation learning algorithm is also added to the neural network, to increase the speed of learning and to help prevent the network getting stuck in local minima while learning. The momentum starts at a value of 0.9999, but if an error occurs in the learning process caused by the momentum (i.e. the error is increasing or is not decreasing), the momentum value is slightly reduced and the learning process is restarted.
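The momentum-with-restart scheme can be sketched as follows, taking 0.9999 as the starting momentum; the reduction factor applied on each restart, the iteration limits and the toy objective in the usage example are assumptions:

```python
def momentum_descent(grad_fn, w0, lr=0.1, momentum=0.9999, max_restarts=20):
    """Gradient descent with momentum on a scalar weight. grad_fn(w) returns
    (error, gradient). If the error ever increases, the momentum is slightly
    reduced and the learning process restarts from w0."""
    w, prev_err = w0, float("inf")
    for _ in range(max_restarts):
        w, velocity, prev_err = w0, 0.0, float("inf")
        ok = True
        for _ in range(100):
            err, grad = grad_fn(w)
            if err > prev_err:          # error increasing: momentum too high
                ok = False
                break
            velocity = momentum * velocity - lr * grad
            w += velocity               # update carries part of the last step
            prev_err = err
        if ok:
            return w, prev_err
        momentum *= 0.9                 # slightly reduce momentum, restart
    return w, prev_err
```

With momentum near 1 the weight overshoots the minimum, the error rises, and the loop restarts with progressively less momentum until learning proceeds smoothly.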

The learning process may be one typically carried out in neural networks and may, for example, be carried out in four steps. In step one, the neural network is configured: new nodes at the descriptor layer are created, input values are set, and links are created. In step two, a learning algorithm is applied, the neural network is run and a result is returned; back propagation is then used to determine new link weights. In step three, the neural network is stripped for learning and only the nodes linked with the highest weights are extracted. In step four, the learnt network is added to the current network to reinforce learnt links and dependencies.

The above learning process is used to accept learning patterns and structure the network to the specific pattern given. The back propagation learning algorithm described above is used to fine tune the weights between the nodes and create a network that gives the correct result. The resulting network is then pruned and stripped of all non-essential nodes: only nodes and links which lead to the result are retained, and of these, only the nodes and links with the highest contributing weights are kept. The stripped network will then have only a handful of nodes and links which give a direct path to the result. These paths are added to the current network, reinforcing already saved paths, creating new paths or modifying existing paths to become more accurate. As in a real biological neural network, the weight between two neurons is increased or reinforced when patterns are found that reiterate the relationship between them; this reinforcing makes it more likely for that relationship to be chosen in the result.
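The strip-and-merge step (steps three and four above) can be sketched as follows, representing each network simply as a mapping from node pairs to link weights; this data shape and the `keep` cut-off are assumptions made for illustration:

```python
def strip_and_merge(trained_links, current_links, keep=2):
    """Keep only the `keep` highest-weight links from a freshly trained
    network and fold them into the persistent network, reinforcing links
    that already exist and creating those that do not.

    Both arguments map (source_node, dest_node) -> weight."""
    strongest = dict(sorted(trained_links.items(),
                            key=lambda kv: kv[1], reverse=True)[:keep])
    for link, weight in strongest.items():
        if link in current_links:
            current_links[link] += weight     # reinforce an existing path
        else:
            current_links[link] = weight      # create a new path
    return current_links
```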

The neural network may train from a variety of sources in the instant messaging system. These sources include a questionnaire completed upon signing up to use the instant messaging system, or messages received by a chat bot.

In order to train the neural network, all messages will be fed back into the neural network. If a relationship is found in a message, a new link will be formed between the relevant nodes (e.g. input nodes and descriptor nodes, descriptor nodes and personality trait nodes, personality trait nodes and output nodes), or an already existing link will be reinforced (and given greater weighting when determining the personality type). This helps to ensure that the neural network is always kept up-to-date with new information. In particular, a user's personality type will also be refined over time as they use the instant messaging system.
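The feedback step can be sketched as follows; the link representation and the fixed reinforcement increment are assumptions made for illustration:

```python
def feed_back(links, relationships, boost=0.1):
    """Feed relationships found in a message back into the network.
    links: dict mapping (source_node, dest_node) -> weight.
    relationships: iterable of (source_node, dest_node) pairs found in
    the message. Existing links are reinforced; new links are created."""
    for rel in relationships:
        links[rel] = links.get(rel, 0.0) + boost
    return links
```

Reinforced links receive greater weighting the next time the personality type is determined, so the profile is refined as messages accumulate.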

Typically, upon registering to use such a system, the user will not tell the registration process (e.g. via a questionnaire) a lot about themselves, which limits the neural network results. However, as the user communicates with a chat bot (for example), the instant messaging system feeds the information back to the neural network, which more accurately refines the user's personality type.

It will be apparent that obvious variations or modifications may be made in accordance with the spirit of the invention that are intended to be part of the invention, and any such obvious variations or modifications are therefore within the scope of the invention.




 