


Title:
ADAPTABLE SMART TECHNIQUES FOR USER INTERACTION
Document Type and Number:
WIPO Patent Application WO/2001/069573
Kind Code:
A1
Abstract:
A stochastic learning system is described which uses a stochastic learning technique to learn from a user's responses to various circumstances. The method includes learning the actions in response to other action probabilities (200), storing information in the database (205), stochastically assessing probabilities from the database (210), and, for any set of actions, using the highest-probability response (215).

Inventors:
PAPAVASSILOPOULOS GEORGE (US)
ANSARI ARIF MOHAMED (US)
Application Number:
PCT/US2001/008593
Publication Date:
September 20, 2001
Filing Date:
March 16, 2001
Assignee:
UNIV SOUTHERN CALIFORNIA (US)
PAPAVASSILOPOULOS GEORGE (US)
ANSARI ARIF MOHAMED (US)
International Classes:
G09B7/02; (IPC1-7): G09B7/04
Foreign References:
US 5103498 A (1992-04-07)
US 5498003 A (1996-03-12)
US 6098065 A (2000-08-01)
US 6199067 B1 (2001-03-06)
US 5938531 A (1999-08-17)
US 5361201 A (1994-11-01)
Attorney, Agent or Firm:
Harris, Scott C. (CA, US)
Claims:
What is claimed is:
1. A learning system, comprising: a first monitoring element which monitors a user's actions relative to specified displayed criteria on a computer; a storage element which stores information indicative of previously learned actions, which include user responses to specified criteria; and a stochastic learning technique, which learns from said information in said storage element, and provides a response based on a highest probability input at a specified time.
2. A system as in claim 1, wherein said monitoring element monitors a user's actions during playing an electronic game.
3. A system as in claim 1, wherein said monitoring element monitors a user's actions on the Internet.
4. A system as in claim 3, wherein said user's actions on the Internet comprise the user's actions when accessing an Internet search engine.
5. A system as in claim 1, wherein said monitoring element monitors the user's actions in moving a mouse within an operating system.
6. A method, comprising: on a computer, monitoring a user's actions and computer state during said user's actions; storing learning information indicative of both said user's actions and said computer state; determining a current computer state; and, using a stochastic learning technique, determining a highest-probability likely user's response to said current computer state, and carrying out said highest-probability response.
7. A method as in claim 6, wherein said monitoring element monitors a user's actions during playing an electronic game.
8. A method as in claim 6, wherein said monitoring element monitors a user's actions on the Internet.
9. A method as in claim 8, wherein said user's actions on the Internet comprise the user's actions when accessing an Internet search engine.
10. A method as in claim 6, wherein said monitoring element monitors the user's actions in moving a mouse within an operating system.
Description:
ADAPTABLE SMART TECHNIQUES FOR USER INTERACTION CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of U.S. Provisional application no. 60/190,339, filed March 16, 2000.

BACKGROUND [0002] Commercial computer games provide a user with different options for playing, such as different skill levels and the like. After a while, however, the user can start to predict the game's actions. The user typically eventually becomes bored with the game. After that time, the user will often stop playing the game.

SUMMARY [0003] The present application teaches a learning module which learns responses as a function of a user's actions and determines responses, based on probability, from the previously learned actions and the current actions. One embodiment is as part of a game that learns a game player's strategies and tactics by watching the user's actions. The game correspondingly adjusts its own strategies and tactics to compensate, thereby providing more challenges for the game player. The game therefore learns as the player learns, and therefore may automatically compensate for the player's adjusting skill level.

BRIEF DESCRIPTION OF THE DRAWINGS [0004] These and other aspects will now be described in detail with reference to the accompanying drawings, wherein: [0005] Figure 1 shows hardware used according to an embodiment; and [0006] Figure 2 shows a flowchart of operation.

DETAILED DESCRIPTION [0007] An embodiment is shown in Figure 1. In this first embodiment, a learning module 100 is included as part of the game 110 running as a software level on computer 99. The learning module may use any kind of machine intelligence, to learn user strategies and actions. The learning module has the capability of being activated and deactivated, to either store and use, or discard, any previously learned machine strategies. The learned strategies may include actions, and responses to those actions. By matching the player's skill level, the player's interest in the game may be sustained.

[0008] The user interacts with the game using user interface 120. The game module 110 carries out the flowchart of Figure 2.

[0009] Many games provide the player with a finite set of alternatives. For each action taken by the player, the computer may select a predetermined action. That action may depend on the status of the game. These actions in response to a finite set are learned.

[0010] As a user makes various moves within the game, the computer learns these moves at 200. These moves are learned as probabilities of how the user will act, when confronted with a certain set of game conditions. The learned moves at 200 are used to adjust the contents of the database at 205. The database includes information about previously learned moves. The computer may carry out its actions based on this database.
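For illustration, the learned-move database at 205 can be sketched as a table of per-state move counts that yields the probabilities described above. The class, method names, and the example states and moves below are assumptions for illustration, not part of the disclosure:

```python
from collections import defaultdict

class MoveDatabase:
    """Stores, per game state, how often the player chose each move (step 205)."""

    def __init__(self):
        # state -> move -> count of times the player chose that move
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, state, move):
        # Step 200: record the player's move for this game state.
        self.counts[state][move] += 1

    def probabilities(self, state):
        # Probabilities of how the user will act when confronted with this state.
        total = sum(self.counts[state].values())
        return {m: c / total for m, c in self.counts[state].items()}

# Hypothetical game states and moves:
db = MoveDatabase()
db.learn("opponent_attacks", "block")
db.learn("opponent_attacks", "block")
db.learn("opponent_attacks", "dodge")
```

The computer can then carry out its actions based on the probabilities this table reports for the current game conditions.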

[0011] Each player movement results in success, failure or draw. The player observes the result of their action, and may select a next action based on what the player has learned so far. This is the way in which an average player learns strategies of the game, as the game progresses.

[0012] By monitoring the player's moves at 200, and storing them at 205, the computer is able to learn and make use of the player's strategies, based on assessing probabilities of how the user will act when confronted with a finite set of alternatives.

[0013] At 210, a stochastic learning technique is used to find the best action from a finite set of actions, by selecting the action and updating the probability of selecting a next action in a specific manner. This is done to update the database of possible actions, and the user's likely response to those possible actions.

[0014] There is one optimal action for any situation. The stochastic learning technique at 210 attempts to find this. As the player improves, the user's responses to situations change, and the game also improves. Conversely, as the player's actions may get less skillful, e.g., due to fatigue or other factors, the technique may also learn from that, and consequently begin getting easier. By operating in this way, the deterministic game is converted into a probability-type game. Instead of discrete levels of difficulty, there is an essentially infinite number of continuous levels of difficulty.
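The exact probability update at 210 is not spelled out in the disclosure; one common stochastic learning automaton scheme, a linear reward-penalty update, can serve as a sketch. The parameter names, learning rates, and example actions are assumptions:

```python
def update_probabilities(probs, chosen, success, a=0.1, b=0.1):
    """Linear reward-penalty update over a finite action set (step 210).

    On success, probability shifts toward the chosen action; on failure,
    it shifts away from it, spread over the remaining alternatives.
    The rates a (reward) and b (penalty) are assumed parameters.
    """
    r = len(probs)
    new = {}
    for action, p in probs.items():
        if success:
            new[action] = p + a * (1 - p) if action == chosen else (1 - a) * p
        else:
            new[action] = (1 - b) * p if action == chosen else b / (r - 1) + (1 - b) * p
    return new

# Example: reward the hypothetical action "feint" after it succeeds.
p = update_probabilities({"feint": 0.5, "charge": 0.5}, "feint", success=True)
```

Both branches preserve a valid probability distribution, so repeated updates continuously reshape the action probabilities as the player's skill drifts.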

[0015] As part of the stochastic learning process, the module learns from any input and the success/failure result of the input. Instead of finding a single best alternative, multiple best alternatives are selected. The best alternative with the highest probability is used at 215.
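Selecting among the alternatives at 215 then reduces to taking the one with the highest learned probability; a minimal sketch, with an assumed dictionary of action probabilities:

```python
def best_action(probs):
    # Step 215: use the alternative with the highest probability.
    return max(probs, key=probs.get)

# Hypothetical learned probabilities over three alternatives:
choice = best_action({"block": 0.2, "dodge": 0.5, "counter": 0.3})
```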

[0016] The above has described using this application for games. However, the application may also be used for other applications.

[0017] Another embodiment is for an Internet search engine. The search lists for such engines are typically provided based on the number of matches between the search criteria and the document. However, this does not take into account the sites or the documents that the user actually selects. This embodiment uses the flowchart of Figure 2 to follow user selection, and determine which kinds of documents the user selects. The probability of the user selecting any document can be assessed. The search engine can essentially learn from the user's moves and provide search results (documents) based on the user's moves.
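As a sketch of this embodiment, result ranking could blend each document's match score with the learned probability that the user selects that kind of document. The blending rule and all names below are assumptions, not taken from the disclosure:

```python
from collections import Counter

class AdaptiveRanker:
    """Re-ranks search results by the assessed probability of user selection."""

    def __init__(self):
        self.clicks = Counter()  # selections per kind of document
        self.shown = Counter()   # impressions per kind of document

    def record(self, kind, clicked):
        # Follow user selection (flowchart of Figure 2).
        self.shown[kind] += 1
        if clicked:
            self.clicks[kind] += 1

    def select_probability(self, kind):
        # Probability the user selects this kind of document.
        return self.clicks[kind] / self.shown[kind] if self.shown[kind] else 0.0

    def rerank(self, results):
        # results: list of (doc, match_score, kind).
        # Assumed blend: match score weighted up by learned selection probability.
        return sorted(results,
                      key=lambda r: r[1] * (0.5 + self.select_probability(r[2])),
                      reverse=True)
```

A document kind the user consistently selects is promoted above a higher-matching kind the user ignores.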

[0018] Similar operations can be carried out for shopping manipulation.

[0019] Another embodiment relates to icon manipulation. In Windows-type operating systems, the mouse and icons remain static at the location which was last accessed by the user. The present system may be used to learn from past movements of the mouse relative to the icons. The cursor is caused to move to a new location as appropriate. This may reduce the time that is otherwise used in moving to the new locations.
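A sketch of this embodiment might learn icon-to-icon transitions from past mouse movements and propose the most probable next target for the cursor. The class name and the example icon names are assumed for illustration:

```python
from collections import Counter, defaultdict

class CursorPredictor:
    """Learns which icon the user tends to visit next, to pre-position the cursor."""

    def __init__(self):
        # icon -> Counter of icons the user moved to next
        self.transitions = defaultdict(Counter)

    def observe(self, from_icon, to_icon):
        # Record one past movement of the mouse relative to the icons.
        self.transitions[from_icon][to_icon] += 1

    def predict(self, current_icon):
        # Propose the highest-probability next icon, or None if nothing is learned.
        nxt = self.transitions[current_icon]
        return nxt.most_common(1)[0][0] if nxt else None
```

The cursor would then be moved toward the predicted icon, saving the travel time described above.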

[0020] Although only a few embodiments have been described in detail above, other modifications are possible.