

Title:
INPUT APPARATUS AND METHOD
Document Type and Number:
WIPO Patent Application WO/2010/031123
Kind Code:
A1
Abstract:
An input apparatus, including a first input device adapted for operation by a user's first hand; and a second input device adapted for operation by said user's second hand; each said input device having a plurality of input sensors, said sensors in each said input device being spatially arranged for independent operation by a different respective one of said user's thumb or fingers for providing input, such that when said user operates one or more of said sensors to provide input based on a first set of input actions associated with said sensors in a first input mode, at least one said input device generates input data representing said input for controlling a processor of a computing device; wherein, in response to said user operating a predefined combination of one or more of said sensors, said sensors are reconfigurable to provide different said input based on a set of input actions associated with said sensors corresponding to a different said input mode.

Inventors:
WALSH TIMOTHY MICHAEL (AU)
Application Number:
PCT/AU2009/001228
Publication Date:
March 25, 2010
Filing Date:
September 16, 2009
Assignee:
WALSH TIMOTHY MICHAEL (AU)
International Classes:
G06F3/02; G06F3/03
Other References:
"The Keypaw: ECE 476 Spring 2004 Final Project by Maudie Hampden and Sumul Shah", Retrieved from the Internet
"Power Glove - Wikipedia", Retrieved from the Internet
Attorney, Agent or Firm:
MALLESONS, Stephen, Jaques (600 Bourke StreetMelbourne, VIC 3000, AU)
Claims:
CLAIMS

1. An input apparatus, including: a first input device adapted for operation by a user's first hand; and a second input device adapted for operation by said user's second hand; each said input device having a plurality of input sensors, said sensors in each said input device being spatially arranged for independent operation by a different respective one of said user's thumb or fingers for providing input, such that when said user operates one or more of said sensors to provide input based on a first set of input actions associated with said sensors in a first input mode, at least one said input device generates input data representing said input for controlling a processor of a computing device; wherein, in response to said user operating a predefined combination of one or more of said sensors, said sensors are reconfigurable to provide different said input based on a set of input actions associated with said sensors corresponding to a different said input mode.

2. An apparatus as claimed in claim 1, wherein at least one of said input devices includes a motion sensor for detecting changes in position of said input device relative to a reference position, and is adapted to generate said input data including motion data representing said change in position of said input device for controlling said processor.

3. An apparatus as claimed in claim 2, wherein each said motion sensor generates said motion data for adjusting a position of a different corresponding pointer on a display controlled by said processor.

4. An apparatus as claimed in claim 2, wherein each said motion sensor includes one or more of the following: an optical sensor for detecting a said change in position relative to a reference position on a control surface; and one or more acceleration-sensitive sensors positioned for detecting a said change in position along a directional axis of said input device relative to a reference position in a three dimensional space.

5. An apparatus as claimed in claim 1, wherein at least some of said input sensors are normally configured to an inactive state, and are operable by said user from said inactive state to an active state for providing input.

6. An apparatus as claimed in claim 1, wherein said input sensors include one or more of the following: i) a contact switch; ii) a touch-sensitive sensor; iii) a light-sensitive sensor; iv) a capacitive sensor; and v) a directional control sensor that is adjustable by said user in two or more directions relative to a reference point for providing directional input.

7. An apparatus as claimed in claim 6, wherein each said input device includes: four separate directional control sensors, each said sensor being arranged for operation by a different respective finger of said user, each said sensor being activatable in three different directions relative to a corresponding reference point for providing input; and a single directional control sensor positioned for operation by a thumb of said user, and being adjustable in five different directions relative to a corresponding reference point for providing input.

8. An apparatus as claimed in claim 1, wherein said input data includes data identifying one or more different said input sensors operated by said user for providing input.

9. An apparatus as claimed in claim 1, wherein said processor generates, based on said input data, data representing one or more of the following: i) a character; ii) a numeral; iii) a symbol; and iv) a command, signal, parameter or instruction for controlling said processor to perform a specific function.

10. An apparatus as claimed in claim 1, wherein said input data is generated in response to at least one of the following actions by said user: i) simultaneous operation of a predefined plurality of said sensors; ii) repeated operation of a predefined one of said sensors; and iii) sequential operation of a predefined plurality of said sensors.

11. An apparatus as claimed in claim 1, wherein said apparatus includes a transmitter for sending an electromagnetic signal, representing said input data, to a receiver coupled to said processor.

12. An input method using an apparatus as claimed in claim 1, including the steps of: associating each said sensor with a different predefined set of symbols; receiving input from said sensors; generating, for a specific one of said sensors, a count value representing a number of sequential input operations by said user using the specific said sensor within a predetermined period of time; selecting, based on said count value, a symbol from said set of symbols associated with said sensor; and generating said input data including data representing said selected symbol.

13. An input method as claimed in claim 12, using an apparatus as claimed in claim 1, said method including the steps of: associating different combinations of said sensors with a different symbol; receiving input from said sensors; and generating, in response to determining that said input corresponds to a particular one of said combinations, said input data including data representing said symbol associated with said particular one of said combinations.

14. An input method as claimed in claim 12, using an apparatus as claimed in claim 2, said method including the steps of: receiving, from said first motion sensor, motion input representing a change in position of said first control portion relative to said reference position; generating, based on said motion input, said input data including data for controlling said processor to modify the position of a pointer on a display.

15. An input method as claimed in claim 13, including the further steps of: receiving key activation input from one or more of said input sensors; generating, based on said motion input and said key activation input, said input data including data for controlling said processor to perform a specific function.

Description:
INPUT APPARATUS AND METHOD FIELD

The present invention relates to a user input apparatus and method for providing user input to a processing device or system.

BACKGROUND There are many forms of user input devices for controlling a processing device. Such devices can be classified into key-based data entry devices (e.g. keyboards) and pointing devices (e.g. a mouse or trackball). Keyboards are typically used for providing input representing text, numbers, punctuation or a special function (such as "enter" or "delete"). Pointing devices typically enable a user to physically control (e.g. by way of movement) the position of a corresponding pointer (or cursor) on a graphical display, and even to perform various functions using the pointer such as selecting and moving files and launching applications.

Most English language computer keyboards use the QWERTY key layout. The QWERTY layout positions certain keys (for commonly occurring sequences of letters in the English language) far apart, a design originally intended to minimise jamming of mechanical typewriter arms when keys are pressed in rapid succession. The QWERTY layout remains widely used, but not because of any efficacy or ease of typing that it affords (on the contrary, the QWERTY layout was initially chosen because it slows down typing). There have been several attempts to rearrange the keys in a more logical order, most notably by Dvorak. However, these alternatives did not find wide acceptance, possibly because the Dvorak layout simply rearranged the positions of the individual letter keys within the same key arrangement, without addressing more fundamental underlying problems of keyboard design.

With the advent of mobile telephones, and the widespread use of SMS messaging, it is apparent that people are willing and able to learn to enter data using a different style of keyboard with a totally different key arrangement, and a much reduced number of keys.

For example, a mobile phone "keyboard" can be emulated on a numeric keypad where different keys are assigned different letters of the alphabet. A user typically uses one or both thumbs to actuate the keys to enter data, which is highly inefficient, as the remaining eight fingers are merely used to hold the device. Several attempts have been made to improve upon existing user input devices. For example, US 4,917,516 describes an input device consisting of two hand-pieces, each of which can provide keyboard and mouse control functionality. The input device requires a user to operate multiple keys per finger, with each key corresponding to a single key on a standard keyboard. The different keys for each finger are arranged around each fingertip as the finger sits in a well. The operator activates the individual keys by moving his or her fingertips forward, backward, left, right or downwards. The input device operates in a similar manner to a standard keyboard, and has the same problems as existing keyboards in that a user may inadvertently activate multiple adjacent keys instead of one. The device does not attempt to address or simplify a user's experience in providing data entry using a key-based device. Such an arrangement is unlikely to help reduce the user's risk of suffering from repetitive strain injury (RSI), due to the lateral movement of the fingertips required to activate some of the keys. Further, the arrangement of multiple keys around each fingertip is no easier to learn than a traditional keyboard because it requires the user to memorise the location of each letter key.

Another example is described in US 4,584,443, which relates to an input device having a set of cups for engaging the tips of a user's fingers. Each cup is activated by a different finger or thumb. Each cup can move in an orthogonal direction in response to movement of the user's finger or thumb to provide input. GB 2076743 describes a similar apparatus that is controllable by one hand, in which a different sensor is activated by each different finger or thumb of a user.

However, none of the above examples provide any flexibility for a user to selectively choose between different forms of key-based or motion-based input. The functions corresponding to each key or sensor are predefined, and cannot be changed to a different configuration based on a user's preference. Accordingly, it is desired to address at least some of the above issues, or to at least provide a useful alternative for users to input data.

SUMMARY

The present specification describes an input apparatus that attempts to address the above problems using an entirely different approach. In one embodiment, the input apparatus has different keys that are positioned so that a different key corresponds to each different finger and thumb position of a user's hands. The position of the keys allows users to operate each key (e.g. for data entry or data manipulation) with less finger movement and in a manner more consistent with a user's natural finger motion. This helps reduce the risk of the user developing RSI from regular use of the input apparatus.

Having only one key for each of the user's ten digits (i.e. fingers and thumbs) means that the user's digits never have to leave their respective keys. This allows for true "eyes on the copy" touch typing, with minimal possibility of accidentally striking the wrong key. Compare this to touch typing on a QWERTY keyboard, where the user has to place his or her hands in an awkward and uncomfortable position just to have the fingertips on the "home row", and then must make even more awkward and uncomfortable finger movements to reach some of the most commonly used keys. Meanwhile, the user's thumbs, which have a greater natural range of motion than the fingers, are doing nothing but pressing the spacebar. The approach described herein can therefore have a significant impact on productivity, as well as on the health and comfort of the user. It will also be far more intuitive to users who have never used a QWERTY keyboard before (e.g. children).

When the user operates one or more keys, the input apparatus generates a control signal that is processed by software to generate instructions or commands that control a processor (or software application) to perform a specific task. Each key may be associated with one or more characters to be provided as input. This makes the input apparatus easier to learn because users only need to remember which letters are activated by each finger rather than which key the finger must activate in order to enter the desired letter.

According to a described embodiment, there is provided an input apparatus, including: a first input device adapted for operation by a user's first hand; and a second input device adapted for operation by said user's second hand; each said input device having a plurality of input sensors, said sensors in each said input device being spatially arranged for independent operation by a different respective one of said user's thumb or fingers for providing input, such that when said user operates one or more of said sensors to provide input based on a first set of input actions associated with said sensors in a first input mode, at least one said input device generates input data representing said input for controlling a processor of a computing device; wherein, in response to said user operating a predefined combination of one or more of said sensors, said sensors are reconfigurable to provide different said input based on a set of input actions associated with said sensors corresponding to a different said input mode.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the present invention are herein described, by way of example only, with reference to the accompanying drawings, wherein:

Figures 1A and 1B are block diagrams of the components of a data processing system;

Figure 2 is a top view of the input apparatus;

Figure 3 is a top view of the input apparatus when controlled by a user;

Figure 4 is a front view of the input apparatus;

Figure 5 is a side view of the input apparatus;

Figure 6 is a flowchart of a mode setting process performed by the system;

Figure 7 is a flowchart of a sequential key input process performed by the system;

Figure 8 is a flowchart of a dictionary input process performed by the system;

Figure 9 is a flowchart of a chording input process performed by the system; and

Figure 10 is a flowchart of a motion control process performed by the system.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

A data processing system 100, as shown in Figures 1A and 1B, includes an input apparatus 102 that receives input from a user and then communicates that input to a processing unit 104 via one or more communications channels 101, 103 and 105. The processing unit 104 includes an input control module 110 and a processor 112 (e.g. a microprocessor).

The input apparatus 102 includes one or more input devices 106 and 108, each of which has one or more control portions. Each control portion has one or more input sensors that are independently operable by a user for providing input. The input apparatus 102 monitors (at predetermined time intervals) whether any of the input sensors have been operated by the user, and generates input data identifying the one or more input sensors operated, thereby representing the input provided by the user. For example, the input data may be generated in response to the user operating one or more input sensors simultaneously, repeatedly or in accordance with a particular sequence. The input apparatus 102 then transmits the input data to the input control module 110 for analysis. The input control module 110 then generates, based on the analysis of the input data, control signals (which may include one or more commands and instructions) for controlling the functions performed by the processor 112. The functions performed by the input control module 110 can also be performed (in whole or in part) by the input apparatus 102.
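The periodic monitoring described above can be sketched as follows. This is an illustrative sketch only: the sensor identifiers (e.g. "L1" for a left-hand finger sensor) and the shape of the input-data record are assumptions, not details from the specification.

```python
def snapshot_input_data(sensor_states, timestamp):
    """Build an input-data record identifying the sensors currently operated.

    sensor_states maps an assumed sensor identifier (e.g. "L1") to "active"
    or "inactive"; returns None when no sensor is operated at this instant.
    """
    active = sorted(sid for sid, state in sensor_states.items() if state == "active")
    if not active:
        return None
    return {"sensors": active, "timestamp": timestamp}


# At each predetermined interval the apparatus would sample the sensors and
# forward any non-empty record to the input control module for analysis.
record = snapshot_input_data({"L1": "active", "L2": "inactive", "LT": "active"}, 0.0)
```

In this sketch the record carries only sensor identifiers and a timestamp; the analysis that turns such records into commands belongs to the input control module 110.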

The input devices 106 and 108 communicate with the input control module 110 either via a single common communications channel 101 (as shown in Figure 1A), or alternatively, via separate communications channels 103 and 105 for each respective input device 106 and 108 (as shown in Figure 1B). A communications channel 101 refers to any means of transferring data between an input device 106 and 108 and the input control module 110 (e.g. an electrical signal transmitted along a wire, or a wireless signal such as a radio frequency signal (e.g. Bluetooth) or an infrared signal). At least one of the input devices 106 and 108 includes a transmitter that transmits a data input signal representing the input provided by the user to a receiver coupled to the input control module 110. The receiver may also transmit a feedback signal to the transmitter to confirm that the data input signal was received correctly.
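One way to realise a data input signal whose correct receipt can be confirmed is a simple checksummed frame: the receiver sends the feedback signal only when the frame verifies. The frame layout below (length byte, payload, one-byte checksum) is an illustrative assumption; the specification does not prescribe any particular framing.

```python
def encode_frame(payload: bytes) -> bytes:
    """Prefix the payload with its length and append a single-byte checksum."""
    checksum = sum(payload) % 256
    return bytes([len(payload)]) + payload + bytes([checksum])


def decode_frame(frame: bytes):
    """Return the payload if the frame verifies, else None.

    A None result models the receiver withholding its confirmation feedback,
    which would prompt the transmitter to resend the data input signal.
    """
    if len(frame) < 2 or frame[0] != len(frame) - 2:
        return None
    payload, checksum = frame[1:-1], frame[-1]
    return bytes(payload) if sum(payload) % 256 == checksum else None
```

A corrupted frame fails the checksum test and is discarded rather than acknowledged.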

The input control module 110 is provided by computer program code in languages such as C and C# (e.g. as a software interface) that is executed in the processing unit 104, which is a standard personal computer (such as that provided by IBM Corporation <http://www.ibm.com>) running a standard operating system, such as Windows. Those skilled in the art will also appreciate that the processes performed by the input control module 110 can also be executed at least in part by dedicated hardware circuits, e.g. Application Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). In one representative embodiment of the invention, the input apparatus 102 includes a plurality of input sensors. Each input sensor is independently operable by a user's thumb or finger for providing input. The input sensors are spatially arranged in one or more control portions of the input apparatus 102 so that a different one of the input sensors corresponds to each respective one of the thumb and finger positions of a user's hands when using the input apparatus 102. The one or more control portions are portions of a single input device, or portions of several different input devices 106 and 108. When the user operates one or more of the input sensors to provide input, the input apparatus 102 generates input data including data representing the input provided by the user. The input data is provided to the input control module 110 of the processing unit 104, which controls the operation of a processor 112 based on the input data.

Figure 2 shows another representative embodiment of the invention having an input apparatus 102 consisting of two separate input devices 106 and 108. A user may operate both input devices 106 and 108 simultaneously using both hands. Both input devices 106 and 108 are independently moveable relative to a reference point or position within either a two-dimensional control area (e.g. a control surface on which the devices 106 and 108 are placed for use) or within a three-dimensional space. The representative embodiment shown in Figures 2 to 5 is operable on a control surface. However, it will be apparent that each input device 106 and 108 can be much smaller in size and adapted to be attached to a different respective hand of a user (e.g. by straps or other attachment means) so that each input device 106 and 108 can follow the movement of the user's hands within a three-dimensional space. Each input device 106 and 108 has a body (preferably shaped as a palm rest), a single input sensor 202 and 204 corresponding to each finger position of a user's hand (each of which may have one or more directions of input), and a directional control input sensor 206 and 208 corresponding to the thumb position of a user's hand. In a representative embodiment, the input sensors 202 and 204 are capable of receiving multidirectional input (e.g. similar to a rocker switch) such that forward, backward and downward actuation of a sensor 202 and 204 represents a different input. The input sensors 202 and 206 belong to a first control portion of a first input device 106. The input sensors 204 and 208 belong to a second control portion of a second input device 108.

In Figure 2, the finger-operated input sensors 202 and 204 are located at the front of each input device 106 and 108 at positions and angles corresponding to the locations of the user's fingers (or fingertips). The thumb-controlled input sensors 206 and 208 are located at the side of each input device 106 and 108 to correspond to the locations of the user's thumbs. This arrangement of the input sensors 202, 204, 206 and 208 is more ergonomic, and is particularly advantageous as it minimises the degree of movement of the user's thumb and fingers in order to provide input, which helps reduce the risk of the user developing RSI. Each of the input sensors 202, 204, 206 and 208 is independently operable by a user to provide input. For example, an input sensor 202, 204, 206 and 208 may include one or more of the following: i) a contact switch; ii) a touch-sensitive sensor; iii) a light-sensitive sensor; iv) a capacitance sensor; and v) a multidirectional control sensor that is adjustable by a user in two or more (e.g. orthogonal) directions relative to a reference point for providing directional input. When an input sensor 202, 204, 206 and 208 does not detect any user operation of the input sensor, the input sensor 202, 204, 206 and 208 is in a default inactive state (e.g. an "off" position). When a user operates an input sensor 202, 204, 206 and 208 (e.g. by bringing a part of the user's body in close proximity to, touching or moving the input sensor), the input sensor 202, 204, 206 and 208 is configured to an active state (e.g. an "on" position). The input apparatus 102 may detect the state of each input sensor 202, 204, 206 and 208 at predetermined time intervals, and generate input data representing the identifiers for the one or more input sensors 202, 204, 206 and 208 that are detected to be in the active state to represent user input at that particular point in time.
Alternatively, the input apparatus 102 may receive a response signal from those one or more input sensors 202, 204, 206 and 208 configured to the active state in response to operation by a user. The input apparatus 102 may generate input data representing the identifiers for the one or more input sensors 202, 204, 206 and 208 sending a response signal (e.g. in real time).

In a representative embodiment, the input apparatus 102 includes one or more directional control input sensors 206 and 208 that are operable by a user's thumbs for providing directional input. For example, as shown in Figure 2, each directional control sensor 206 and 208 is similar to a joystick, and includes a lever that is operable by a user to move in several directions (e.g. including up, down, forwards, backwards) relative to a reference point or axis. For example, the reference axis may be substantially coaxial with an axis along the body of the lever. Each directional control input sensor 206 and 208 may be normally configured to be in an inactive state, and is configured to be in an active state when the user operates the lever in one direction along an axis that runs along the body of the lever (e.g. by pushing the lever in towards the body of the input device 106 and 108). In response to detecting user operation of a directional control input sensor 206 and 208 (in any direction), the input apparatus 102 generates input data including direction data representing the direction of operation of the lever, and the active or inactive state of the directional control input sensor 206 and 208.

In a representative embodiment, the input apparatus includes one or more directional input sensors that are operable by the user's fingers. These finger sensors may be moved in one of up to three directions to provide input. For example, the finger sensors may be moved forwards away from the user, backwards towards the user, or downwards by applying a pushing force towards the input device 106 and 108.

Figure 3 shows the input devices 106 and 108 shown in Figure 2 during use. Figures 4 and 5 are respectively the front and side views of the input devices 106 and 108 shown in Figure 2.

In one representative embodiment, one or more of the input devices 106 and 108 has a motion sensor. Preferably, each input device 106 and 108 has a separate motion sensor. The motion sensor detects changes in position of a particular input device 106 and 108 relative to the control surface (on which the input devices 106 and 108 are placed for use). The input data generated by the input apparatus 102 may include motion data representing a relative change in position of a particular input device 106 and 108. For example, the motion data represents one or more parameters representing the magnitude and direction of a change in the position of a particular input device 106 and 108 (in real time or over a predefined period of time) relative to an original (or reference) position of the input device 106 and 108 on the control surface.

In another representative embodiment, the input devices 106 and 108 are adapted to be held by (or are otherwise attachable to - such as by way of a strap or similar device) the user's hands, rather than rested on a flat surface such as a table top. In this case, each input device 106 and 108 includes motion sensors for detecting movement of the respective input device 106 and 108 in a three-dimensional space. For example, each input device 106 and 108 may include a set of three different motion sensors (e.g. accelerometers) for monitoring changes in the acceleration (or deceleration) of the input device 106 and 108 along any one of the three coordinate axes defined along a width, height and length of the input device 106 and 108. The acceleration and deceleration characteristics of each motion sensor may be sampled many times within a predefined time frame (e.g. 1 second), which can then be used to calculate an estimated magnitude of movement based on a magnitude of, or a change in, acceleration (or deceleration) along any of the three axes within the predefined timeframe. Each input device 106 and 108 may also include suitable sensors adapted for detecting a magnitude or a change in the pitch, roll and/or yaw of the input device 106 and 108 in a three-dimensional space.
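Estimating a magnitude of movement from sampled accelerations amounts to integrating the samples twice over the predefined timeframe. A minimal one-axis sketch, assuming evenly spaced samples and a device starting at rest (assumptions not stated in the specification):

```python
def estimate_displacement(accel_samples, dt):
    """Integrate acceleration samples (m/s^2) taken at spacing dt (s) into an
    estimated displacement along one axis, assuming the device starts at rest."""
    velocity = 0.0
    displacement = 0.0
    for a in accel_samples:
        velocity += a * dt             # first integration: acceleration -> velocity
        displacement += velocity * dt  # second integration: velocity -> position
    return displacement
```

For example, a constant 1 m/s² over ten samples spaced 0.1 s apart yields an estimate of 0.55 m. In practice, accelerometer bias makes such dead reckoning drift quickly, which is one reason to restrict the estimate to a short predefined timeframe as the embodiment does.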

The processor 112 performs processes under the control of one or more application modules (not shown in Figures 1A and 1B). For example, an application module for a word processing application may control the processor 112 to perform a data entry process. Other forms of application modules can be provided to perform different processes.

An application module instructs the processor 112 to perform a different function depending on the input provided by a user. The input control module 110 analyses the input data received from the input apparatus 102, and generates (based on that input data) the appropriate commands or instructions for controlling the functions performed by the processor 112, in the context of the other processes to be performed by the processor 112 at that time. For example, the processor 112 performs a data entry process (under the control of an application module) up to the point where user input can be provided. The input control module 110 determines that the input data received from the input apparatus 102 represents a character, number or symbol to be provided as input for the data entry process. Alternatively, the input control module 110 may determine that the input data represents a trigger for performing a particular function (e.g. to cut a portion of text from one part of a document and move it to another part of the same document), and accordingly, the input control module 110 generates commands or instructions for controlling the processor to perform that function. It will be understood that the functions of the input control module 110 described above can be performed (at least in part) by the input apparatus 102.
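The analysis performed by the input control module can be pictured as a lookup from the combination of operated sensors to either a character or a command. The specific mappings below are illustrative assumptions only; in the described apparatus the assignments depend on the active input mode.

```python
# Hypothetical mappings from sensor combinations to results. The real
# assignments are not fixed by the specification and vary per input mode.
CHARACTER_MAP = {frozenset({"L1"}): "a", frozenset({"L1", "L2"}): "b"}
COMMAND_MAP = {frozenset({"LT", "L1"}): "cut", frozenset({"LT", "L2"}): "paste"}


def interpret(sensors):
    """Translate the sensors named in one input-data record into either a
    character to enter or a command/instruction for the processor."""
    combo = frozenset(sensors)
    if combo in COMMAND_MAP:
        return ("command", COMMAND_MAP[combo])
    if combo in CHARACTER_MAP:
        return ("character", CHARACTER_MAP[combo])
    return ("ignored", None)
```

Checking commands before characters lets a reserved combination (here, one involving the assumed thumb sensor "LT") trigger a function rather than text entry.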

A user operates the input sensors 202, 204, 206 and 208 and/or motion sensors of the input devices 106 and 108 in different ways in order to provide different forms of input to the input control module 110. For example, the following scenarios each relate to a different mode of providing user input, each of which causes the input control module 110 to generate different commands or instructions for controlling the processor 112 in different ways:

i) A user simultaneously operating a predefined plurality (or combination) of the input sensors 202, 204, 206 and 208. For example, different combinations of sensors may represent the input of different characters, or instructions for performing different input actions (e.g. a copy action, a paste action, etc.);

ii) A user repeatedly operating a predefined one or more of the input sensors 202, 204, 206 and 208 within a predefined time frame. For example, each input sensor may be associated with a different predefined set of characters, wherein based on the number of times a particular input sensor is operated, a different character is selected from the set of characters corresponding to that particular input sensor;

iii) A user sequentially operating a predefined one or more of the input sensors 202, 204, 206 and 208. For example, different combinations of sensors may represent the input of different characters, or instructions for performing different input actions;

iv) A user only moving one or more of the input devices 106 and 108. For example, the relative movement of each input device 106 and 108 may be used to adjust the position of one or more pointers (e.g. a mouse cursor) on a graphical display interface generated by the processor 112; and

v) A user moving one or more of the input devices 106 and 108 while at the same time operating one or more of the input sensors 202, 204, 206 and 208. This is similar to the function described in (iv), but includes the operation of one or more input sensors. Such a combination of input sensor operation and movement can be interpreted as a "gesture" or trigger for performing a certain input action (e.g. a copy action, a paste action, a move action, etc.).
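Scenario (ii), repeated operation within a time frame, works like multi-tap text entry on a phone keypad: the count of presses selects a character from the sensor's set. A sketch with assumed character sets per sensor (the actual assignments are not fixed by the specification):

```python
# Hypothetical character sets per sensor; e.g. pressing "L1" twice within the
# predetermined period selects the second character of its set.
CHARACTER_SETS = {"L1": "abc", "L2": "def", "L3": "ghi"}


def select_character(sensor_id, press_count):
    """Pick a character based on how many times the sensor was operated
    within the predetermined period (the count value of claim 12)."""
    chars = CHARACTER_SETS[sensor_id]
    return chars[(press_count - 1) % len(chars)]
```

The modulo wrap-around means a fourth press on a three-character set cycles back to the first character, a common convention in multi-tap schemes.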

User input modes

The input apparatus 102 (when used in conjunction with the input control module 110) can provide many different user input modes. These modes are controlled by the input control module 110 (e.g. by software drivers). The following describes examples of some of the possible user input modes provided by the input control module 110; it should be understood that the present invention is not limited to the examples described herein. The short names contained in Table 1 are used to describe the different finger positions (and, for a directional input sensor 206 and 208, the direction of movement) of the input sensors 202, 204, 206 and 208 of the input apparatus 102. The short names are used in the following description of the user input modes.

Switching user input modes

Figure 6 is a flowchart of a mode setting process 600 performed by the processor 112 under the control of the input control module 110. The mode setting process 600 allows a user to switch between different user input modes by operating a predefined combination of one or more input sensors 202, 204, 206 and 208 of the input apparatus 102.

The mode setting process 600 begins at step 602 by defining a combination of one or more input sensors 202, 204, 206 and 208 that a user operates in order to switch to a different user input mode. For example, the mode setting process 600 switches to a different user input mode by detecting whether the user simultaneously operates a predefined combination of input sensors 202, 204, 206 and 208. Step 602 may define the manner in which the combination of one or more input sensors 202, 204, 206 and 208 are operated in order to switch to a different user input mode. For example, the combination may include one of the directional input sensors 206 and 208, and step 602 may specify the type of directional input required from the directional input sensor 206 and 208 in order to switch the user input mode.

The input control module 110 supports one or more user input modes. The mouse input mode and various character input modes (e.g. the sequential key input mode, dictionary input mode, QWERTY input mode, and chording input mode) as well as music mode are described in greater detail below. The term character refers to a letter of an alphabet (e.g. including an ideographic character), numeral, symbol, punctuation character and stop character (e.g. a space character and a new line character). Preferably, only one text input mode can be used at any time. However, a text input mode can be used in conjunction with (e.g. at the same time as) the mouse input mode. The user input modes are arranged in a predefined sequence. At step 603, the input control module 110 selects one of the user input modes in the sequence, and performs an input process in accordance with the selected user input mode.

At step 604, the input apparatus 102 provides input data to the input control module 110. At step 606, the input control module 110 determines whether the input data represents a combination of activated input sensors 202, 204, 206 and 208 that corresponds to the specific combination of sensors (and manner of operation) as defined at step 602. If there is no match, step 606 returns to 604 to receive further input from the input apparatus 102. Otherwise, step 606 proceeds to step 608. At step 608, input control module 110 selects the next input mode in the sequence, and performs an input process in accordance with the selected user input mode. At step 610, the input control module 110 generates commands or instructions for controlling the processor 112 to generate a graphical display interface (e.g. a pop-up window) indicating the user input mode has changed successfully, and preferably, also indicating the current user input mode.
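The mode setting process 600 may be sketched as follows. The mode names, their sequence, and the switching combination (all four finger sensors of one input device operated simultaneously) are illustrative assumptions, not definitions from this specification:

```python
# Sketch of the mode-cycling behaviour of process 600 (steps 602-610).
MODES = ["mouse", "sequential", "dictionary", "qwerty", "chording"]

# Step 602: the predefined combination that switches modes (assumed here).
SWITCH_COMBINATION = frozenset({"L1", "L2", "L3", "L4"})

class ModeController:
    def __init__(self):
        self.index = 0  # step 603: select the first mode in the sequence

    @property
    def current_mode(self):
        return MODES[self.index]

    def handle_input(self, active_sensors):
        """Steps 604-610: if the input matches the predefined switching
        combination, advance to the next mode in the sequence and return a
        notification message (e.g. for display in a pop-up window)."""
        if frozenset(active_sensors) == SWITCH_COMBINATION:
            self.index = (self.index + 1) % len(MODES)
            return "mode changed to " + self.current_mode
        return None  # step 606: no match; await further input
```

The modulo arithmetic makes the sequence wrap around, so repeatedly operating the switching combination cycles through all of the available modes.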

Mouse input mode

In this mode, at least one of the input devices 106 and 108 (that is equipped with a motion sensor) functions in the same manner as a standard computer mouse (i.e. the movement of the at least one input device 106 and 108 controls the corresponding position of a pointer displayed on a graphical display generated by the processor 112). Preferably, both input devices 106 and 108 operate in this manner. Advantageously, because there are two separate input devices 106 and 108, the user is able to manipulate two pointers on the graphical display. Further, because each input device 106 and 108 has more buttons than traditional mice, it is possible to have more predefined "shortcut" functions assigned to each of the buttons (or combinations of one or more of the buttons), for actions such as cutting, copying, pasting, etc.

In the mouse input mode, the movement of each input device 106 and 108 on a control surface (e.g. the tabletop) is translated by the input control module 110 (e.g. via software) to control the movement of pointers or cursors on a graphical display on a computer screen. Since there are two hand-pieces, the graphical display can have two separate pointers, one corresponding to each input device 106 and 108 controlled by different hands of the user. In the mouse input mode, the first (e.g. index) finger and second (e.g. middle) finger input sensors 202 and 204 on each input device 106 and 108 can be configured so as to correspond to (i.e. perform the same function as) the left and right buttons on a standard computer mouse. In this configuration, a first finger input sensor 202 and 204 can be used for selecting files or objects, or for positioning the cursor at a particular place within text in a word processor or similar application. A second finger input sensor 202 and 204 can be used to bring up a context-sensitive menu of options from which the user may then make a selection. Once a menu has been brought up (e.g. by using the second finger input sensor), the menu items can be scrolled through by moving a directional input sensor 206 and 208 of the corresponding input device 106 and 108 forwards or backwards. Once the desired menu item has been selected (or highlighted) in this way, the selection can be confirmed by depressing the lever of the directional input sensor 206 and 208 of the corresponding input device 106 and 108. This method of selecting menu items using the thumb controls can be more efficient (e.g. faster) than moving the entire input device 106 and 108 and then using a first finger key to select the desired menu item (as is currently done with standard mice), and provides the added advantage of leaving the pointer unmoved after the menu item has been selected.

A third (e.g. ring) finger input sensor 202 and 204 may be used in the mouse input mode for cutting, or copying files or portions of text (e.g. in a document containing text). Further, a fourth (e.g. pinky) finger input sensor 202 and 204 may be used in the mouse input mode for pasting objects which have just been cut or copied with the third finger input sensor 202 and 204.

One advantage of an input apparatus 102 with two separate input devices 106 and 108 is that a user can separately control two different pointers, which makes it more efficient for a user to perform operations like cutting and pasting files (or other objects) from one place to another. For example, a file (or other object) can be cut from one folder (or file storage location) using a first input device 106 controlled by one of the user's hands, and then pasted into another folder (or file storage location) using a second input device 108 controlled by the other of the user's hands. Data manipulation performed in this way reduces the need to make substantial movement of either input device 106 and 108.

In the mouse input mode, the direction control sensors 206 and 208 may be used for providing panning and/or zooming control functionality for the contents displayed on a graphical display. Modern computer mice usually come with a small scroll wheel located between the left and right buttons. Such a scroll wheel is used for vertical panning (or scrolling) through documents which are too long to be displayed entirely on the screen. The direction control sensors 206 and 208 of the input apparatus 102 can also be used for this type of scrolling of documents. Further, since the directional control sensors 206 and 208 can have four lateral directions of movement, such sensors 206 and 208 can be used for horizontal panning as well as vertical panning (scrolling) control purposes.

The direction control sensors 206 and 208 may also be used for zooming control. For example, a directional control sensor 208 (for control by a user's right thumb) can be used for panning control, while a directional control sensor 206 (for control by a user's left thumb) can be used for zooming control.

Figure 10 is a flowchart of a motion control process 1000 performed by input apparatus 102 working in conjunction with the input control module 110. The motion control process 1000 begins at step 1002 with the input apparatus 102 generating (e.g. in real time or at predefined time intervals) motion data based on the movement or displacement of each input device 106 and 108 relative to a control area or space (based on the information provided by the motion sensor for each input device 106 and 108).

At step 1004, the input apparatus 102 receives a response signal from those input sensors 202, 204, 206 and 208 in the active state (i.e. those that have been operated by a user). At step 1006, the input apparatus 102 generates input data including the motion data and data representing one or more unique identifiers of the input sensors 202, 204, 206 and 208 in the active state. The motion data may represent a magnitude or change in position relative to a reference point within a two-dimensional region, or alternatively, within a three-dimensional space. The motion data may also represent a magnitude or change in the pitch, roll and/or yaw of each input device 106 and 108 within a three-dimensional space.
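The input data assembled at steps 1004 and 1006 may be sketched as follows; the field names are illustrative assumptions:

```python
# Sketch of the input data generated at step 1006: motion data for an input
# device combined with the unique identifiers of any sensors in the active
# state. A simple 2-D displacement is assumed here; pitch/roll/yaw fields
# could be added analogously for a three-dimensional motion sensor.
def build_input_data(dx, dy, active_sensor_ids):
    """Combine motion data (a change in position relative to a reference
    point) with the identifiers of the operated sensors."""
    return {
        "motion": {"dx": dx, "dy": dy},
        "active_sensors": sorted(active_sensor_ids),
    }
```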

The input apparatus 102 then sends the input data to the input control module 110. The input control module 110 analyses the input data, and based on the analysis, generates commands or instructions for controlling the operation of a process performed by the processor 112 under the control of an application module. The commands or instructions may provide data (e.g. characters, symbols, numbers, etc.) to a process performed by the processor 112, or may interact with the process (e.g. to perform data manipulation or other functions).

Character input modes

Because of the flexibility of functionality afforded to the input apparatus 102 by its design, there are several methods of inputting characters available to the user as different character input modes. The user is able to choose a character input method best suited to the user's personal preferences. Changing between different character input modes and mouse mode (and other possible modes) is performed by the input control module 110 (as described above) and may be triggered by the user pressing a predefined control sequence of input sensors 202, 204, 206 and 208 (for example, by depressing all keys simultaneously).

General key input mode

A sensible way to arrange the English alphabet on the input sensors 202, 204, 206 and 208 is to assign a letter to each of the 24 possible finger movements. The English alphabet has 26 letters, so the two least commonly used letters, X and Q, may be selected by using a thumb key as a "modifier" key and pressing one of the finger keys at the same time. The arrangement of the other 24 letters under the individual fingers is a matter of preference, which can be defined by the input control module 110 using configuration data.

Dvorak analyzed the letter frequency of the English language when he devised his keyboard layout, and placed the most commonly used letters on the home row. A similar key arrangement can be associated with the input sensors 202, 204, 206 and 208 (e.g. using configuration data to define an association between different user input (e.g. including letters, characters, musical notes, etc.) and the different input actions of each input sensor 202, 204, 206 and 208), where the eight most frequently used letters are placed on the axial finger key inputs. The next eight most frequently used letters may be placed on the forward finger key inputs. The third most frequently used group of eight letters may be placed on the backward finger key inputs. The actual arrangement of the letters may depend on which language is being typed, and the letter frequencies and alphabetical differences of each particular language.

Sequential key input mode

In the sequential key input mode, a different character (e.g. a letter of an alphabet, numeral, punctuation character and stop character) is assigned to a particular input sensor 202, 204, 206 and 208 of the input devices 106 and 108. A character is selected by the user operating a single input sensor 202, 204, 206 and 208 one or more times within a predefined time period (or timeout). This text input mode is therefore easy to learn, although it may not be the fastest mode for inputting text or data.

The letters of the alphabet, numerals and punctuation found on a standard QWERTY keyboard may be assigned to different input sensors 202, 204, 206 and 208 (operated by the fingers and thumbs of a user) as shown in Table 2. Table 2 shows an example of the key layout for the Latin alphabet in sequential key input mode. A different set of unique characters is assigned to each input sensor 202, 204, 206 and 208. For a user to type the letter "a" in the sequential key input mode, the user presses the L4 key once. Similarly, to type the letter "z", the user presses the R4 key four times within a certain time limit.

The key codes L4, L3, L2, L1 and LT from Table 2 may each be associated with an input signal selected from the following respectively corresponding sets of signals (1A, 1F, 1B), (2A, 2F, 2B), (3A, 3F, 3B), (4A, 4F, 4B) and (5A, 5F, 5B, 5U, 5D) as defined in Table 1. Similarly, the key codes RT, R1, R2, R3 and R4 from Table 2 may each be associated with an input signal selected from the following respectively corresponding sets of signals (6A, 6F, 6B, 6U, 6D), (7A, 7F, 7B), (8A, 8F, 8B), (9A, 9F, 9B) and (0A, 0F, 0B) as defined in Table 1.

The sequential key input mode is analogous to the standard text input mode on mobile phones. This mode has the advantage that it is very easy to learn, but requires multiple keypresses of an input sensor 202, 204, 206 and 208 to select some of the characters. Thus, the sequential key input mode can be a slower method for inputting text, because in order to type two or more letters which correspond to an individual key in succession, the user waits for the timeout after selecting the first letter before selecting the next letter from the same input sensor 202, 204, 206 and 208. Users can deactivate the timeout period, for example, by pressing one of the direction input sensors 206 and 208 (e.g. the left thumb key).

Figure 7 is a flowchart of a sequential key input process 700 performed by the processor 112 under the control of the input control module 110. The sequential key input process 700 begins at step 702 with the input control module 110 associating each input sensor 202, 204, 206 and 208 (e.g. a unique identifier for each input sensor) with a different set of one or more characters. At step 704, the input control module 110 initialises an input cache for storing the unique identifier for the input sensor 202, 204, 206 and 208 last operated by the user. At step 705, the input control module 110 initialises a count value (e.g. representing a starting integer value, such as 0). At step 706, the input control module 110 initialises a timer value.

At step 708, the input control module 110 determines whether input data has been received from the input apparatus 102. If so, step 708 proceeds to step 712 to analyse the input data. Otherwise, step 708 proceeds to step 710 to determine whether the timer indicates that a predetermined timeout period (e.g. 2 seconds) has elapsed. If so, step 710 proceeds to step 728 to generate output for display. Otherwise, step 710 proceeds to step 708 to detect whether any input data has been received.

At step 712, the input control module 110 analyses the input data. After the analysis at step 712, step 714 assesses whether the input data represents a predefined control sequence of input sensors 202, 204, 206 and 208 to end the input process 700. If so, process 700 ends and, for example, a new input process is selected using process 600. Otherwise, step 714 proceeds to step 716 to determine whether the count value is equal to the value set at step 705. If so, step 716 proceeds to step 718 to store the key identifier in the input cache, and restart the timer at step 720. Otherwise, step 716 proceeds to step 722. Step 722 determines whether the input data represents a unique identifier for an input sensor 202, 204, 206 and 208 that corresponds to (e.g. is the same as) the identifier for an input sensor stored in the input cache. If so, step 722 proceeds to step 724 to increase the count value by 1, and then step 724 proceeds to step 708 to detect further input within the timeout period. Otherwise, step 722 proceeds to step 726 to store the unique identifier for the new input sensor 202, 204, 206 and 208 (operated by the user) in the input cache. Step 726 then proceeds to step 727 to set the count value to 1. Step 727 then proceeds to step 729 (which performs the same function as step 728) and then to step 731 (which performs the same function as step 730). Step 731 then proceeds to step 706 to reinitialise the timer, and then begin receiving further input from the input apparatus 102.

At step 728, the input control module 110 determines the key position of the input sensor 202, 204, 206 and 208 providing the input (based on the identifier for the input sensor providing the input) and selects a character from the set of characters associated with the identified input sensor 202, 204, 206 and 208 (based on the count value). For example, referring to Table 2, input data representing input from the L4 key together with a count value of "3" directs the input control module 110 to retrieve the character "c".

At step 730, the input control module 110 generates commands or instructions for the processor 112 to generate a graphical display interface based on (or including) the character retrieved at step 728. Step 730 then proceeds to step 704 to reinitialise the input cache, count value and timer, and then begin receiving further input from the input apparatus 102 within the next timeout period.
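The multi-tap behaviour of process 700 may be sketched as follows. Only a fragment of the Table 2 layout is assumed here (L4 with "abc", L3 with "def", R4 with "wxyz"), and the timer is modelled with explicit timestamps rather than a real clock:

```python
# Sketch of the sequential (multi-tap) key input of process 700.
# The key-to-character assignments and the 2-second timeout are assumptions.
KEY_CHARACTERS = {"L4": "abc", "L3": "def", "R4": "wxyz"}
TIMEOUT = 2.0  # assumed timeout period in seconds

class SequentialKeyInput:
    def __init__(self):
        self.cache = None      # step 704: last operated sensor identifier
        self.count = 0         # step 705: press count
        self.last_time = None  # step 706: timer, modelled as a timestamp

    def press(self, key, t):
        """Process one key press at time t. Returns a committed character
        when this press flushes the previous key (different key, or same
        key after the timeout has elapsed); otherwise returns None."""
        committed = None
        if self.cache is not None and (
            key != self.cache or t - self.last_time > TIMEOUT
        ):
            committed = self.flush()
        if key == self.cache:
            self.count += 1                      # step 724
        else:
            self.cache, self.count = key, 1      # steps 726-727
        self.last_time = t
        return committed

    def flush(self):
        """Step 728: select a character from the cached key's character set
        based on the count value, then reset the cache and count."""
        chars = KEY_CHARACTERS[self.cache]
        ch = chars[(self.count - 1) % len(chars)]
        self.cache, self.count = None, 0
        return ch
```

For example, four presses of R4 within the timeout select "z", matching the Table 2 example given above.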

Dictionary input mode

In the dictionary input mode, different characters are each assigned to an input sensor 202, 204, 206 and 208 in the same way as in the sequential key input mode. This mode is analogous to the dictionary mode for entering text on mobile phones.

To select a character, the user presses a corresponding input sensor 202, 204, 206 and 208 once per letter. After one or more input sensors 202, 204, 206 and 208 have been pressed, the input control module 110 searches a dictionary (e.g. a list of words stored in the memory of the processing unit 104) to generate a selected list of words that begin with the combination of one or more letters already entered by the user as part of entering a word. The input control module 110 may generate commands or instructions for the processor 112 to generate a display interface that shows the selected list of words below a cursor on the display interface. The user may then select the desired word from the list using a directional input sensor 206 and 208, or continue entering the required characters (one at a time) to narrow down the list of possible completions (i.e. full words containing or beginning with the one or more characters already entered by the user) until the entire word is complete. For example, assuming that the input sensors 202, 204, 206 and 208 are configured according to Table 2, a user may operate each of the input sensors L3, R1 and L2 once (in that sequence), which causes the input control module 110 to query the dictionary and retrieve potential words beginning with the combinations of characters associated with those input sensors in the sequence entered by the user (e.g. the word "dog").

If the user wants to type a word which is not in the dictionary, then the user can train the input control module 110 software to recognise that word. For example, this may involve entering the word (one letter at a time) using the sequential key input mode, and then preferably, selecting an option for storing the completed word into the dictionary.

Figure 8 is a flowchart of a dictionary input process 800 performed by the processor 112 under the control of the input control module 110. The dictionary input process 800 begins at step 802 with the input control module 110 associating each input sensor 202, 204, 206 and 208 with a different set of one or more characters. At step 804, the input control module 110 initialises an input string for storing one or more characters entered by a user corresponding to a particular word. At step 805, the input control module 110 receives input data from the input apparatus 102. At step 806, the input control module 110 analyses the input data received from the input apparatus 102 and assesses whether the input data represents character input. If so, step 806 proceeds to step 808. Otherwise, step 806 proceeds to step 814. At step 808, the input control module 110 adds the character represented by the input data to the input string. At step 810, the input control module 110 queries the dictionary and retrieves a selected list of words (from the dictionary) beginning with the one or more characters contained in the input string. At step 812, the input control module 110 generates commands or instructions for the processor 112 to generate a graphical display interface (e.g. a pop-up menu) including the selected list of words. The first word of the selected list displayed in the graphical display interface is selected by default. Step 812 then proceeds to step 814. At step 814, the input control module 110 analyses the input data received from the input apparatus 102 and assesses whether the input data represents an action to select the next word in the list. If so, step 814 proceeds to step 816. Otherwise, step 814 proceeds to step 818. At step 816, the input control module 110 generates commands or instructions for the processor 112 to update the graphical display interface (e.g. a pop-up menu) to select the next word displayed in the graphical display interface.
The next word can be either a word that immediately precedes the presently selected word in the list, or alternatively, a word that immediately follows the presently selected word in the list. Step 816 then proceeds to step 818.

At step 818, the input control module 110 analyses the input data received from the input apparatus 102 and assesses whether the input data represents an action to confirm the selection of a word in the graphical display interface. If so, step 818 proceeds to step 820. Otherwise, step 818 proceeds to step 822. At step 820, the input control module 110 generates commands or instructions providing the characters stored in the input string as input for another process performed by the processor (e.g. under the control of an application module). Step 820 then proceeds to step 804. At step 822, the input control module 110 analyses the input data received from the input apparatus 102 and assesses whether the input data represents an action to end the input process 800. If so, process 800 ends. If not, step 822 proceeds to step 805 to process further input.
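The dictionary lookup of steps 808 to 812 may be sketched as follows; the small word list is an illustrative stand-in for the dictionary stored in the memory of the processing unit 104:

```python
# Sketch of the prefix search in process 800. The dictionary contents are
# assumed for illustration.
DICTIONARY = ["dog", "dot", "door", "cat", "car"]

def candidate_words(input_string):
    """Step 810: return the words beginning with the characters entered so
    far. The first word of the returned list would be selected by default
    in the graphical display interface (step 812)."""
    return [w for w in DICTIONARY if w.startswith(input_string)]
```

As the user enters additional characters, the candidate list shrinks until the desired word can be selected with a directional input sensor.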

QWERTY input mode

Another possible text input mode is the QWERTY input mode. This mode is suitable for users who are experienced touch typists on QWERTY keyboards, and do not want to learn a new letter arrangement.

In this mode, a desired character is selected by pressing one (or a combination) of the input sensors 202, 204, 206 and 208 together with slight movement of the corresponding input device 106 and 108 relative to a control surface. The user imagines that there is a virtual QWERTY keyboard underneath his hands, and each key on the virtual keyboard is accessible by operating (or pressing) an appropriate input sensor 202, 204, 206 and 208 (for home row keys), or by a slight movement of the input device 106 and 108 accompanied by operation of an input sensor 202, 204, 206 and 208 (for non home row keys). For example, to type the letters "a", "s", "d", and "f", the user presses the L4, L3, L2, and L1 input sensors 202 without moving the input device 106 controlled by the user's left hand. To type the letter "g", the user presses the L1 input sensor 202 and moves the input device 106 controlled by the user's left hand slightly to the right (relative to the control surface). To type the letter "q", the user presses the L4 input sensor 202 and moves the input device 106 controlled by the user's left hand slightly forward (relative to the control surface), and so on. An alternative letter arrangement may be suitable for users who are experienced QWERTY touch typists. The letters could be arranged under the same fingers as they are on a traditional QWERTY keyboard. Since each index finger has six letters which it has to type when touch typing on a QWERTY keyboard, the "TGB" and "YHN" columns could be accessed by moving one of the thumb joysticks in the down direction while pressing the appropriate index finger input.
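The virtual-keyboard lookup described above may be sketched as follows. Only the few mappings from the example above are included; the remainder of the layout is assumed to be analogous:

```python
# Sketch of the QWERTY input mode: a character is selected by an input
# sensor together with the direction (if any) in which the input device is
# moved. Home-row keys require no movement (motion of None).
QWERTY_MAP = {
    ("L4", None): "a", ("L3", None): "s", ("L2", None): "d", ("L1", None): "f",
    ("L1", "right"): "g",    # L1 plus a slight rightward movement
    ("L4", "forward"): "q",  # L4 plus a slight forward movement
}

def qwerty_lookup(sensor, motion=None):
    """Return the character for a sensor/motion pair, or None if the pair
    is not assigned in this (partial, assumed) layout."""
    return QWERTY_MAP.get((sensor, motion))
```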

In a representative embodiment, each of the finger input sensors 202 and 204 may be adapted so as to have three different operable states (e.g. as defined in Table 1). Each operable state of each finger input sensor 202 and 204 may be associated with a unique letter of an alphabet (e.g. under the control of the input control module 110). For example, the finger input sensors 202 and 204 may collectively be associated with 24 different letters of the English alphabet. The thumb input sensors 206 and 208 may be associated with the remaining two letters of the English alphabet. In this way, different input actions of the input sensors 202, 204, 206 and 208 of the input devices 106 and 108 can be used to provide input representing different letters of the English alphabet (or any other alphabet).

Chording input mode

In the chording input mode, different characters (e.g. letters of an alphabet) are selected by simultaneously pressing multiple input sensors 202, 204, 206 and 208. The chording input method is flexible in that a user can define any combination of input sensors 202, 204, 206 and 208 to be used for entering a particular character. One example of such an arrangement is described below.

Table 3 is an example of the mapping used by the input control module 110 for determining the type of input (e.g. a character in a Latin alphabet, punctuation character or stop character) represented by different combinations of input sensors 202, 204, 206 and 208 operated by a user (as indicated by an asterisk "*" symbol) in Table 3. Different characters (or functions) can be associated with any unique combination of one or more input sensors 202, 204, 206 and 208. As shown in Table 3, different functions of a standard computer keyboard can be performed by pressing one or more of the input sensors 202, 204, 206 and 208 simultaneously.

In Table 3, the mapping of the character keys of a traditional computer keyboard into combinations of input sensors 202, 204, 206 and 208 of the input apparatus 102 may appear to be quite complicated. However, there is a pattern to the key layout which, once it is understood, makes the combination layout easier to learn. The key codes L4, L3, L2, L1 and LT from Table 3 may each be associated with an input signal selected from the following respectively corresponding sets of signals (1A, 1F, 1B), (2A, 2F, 2B), (3A, 3F, 3B), (4A, 4F, 4B) and (5A, 5F, 5B, 5U, 5D) as defined in Table 1. Similarly, the key codes RT, R1, R2, R3 and R4 from Table 3 may each be associated with an input signal selected from the following respectively corresponding sets of signals (6A, 6F, 6B, 6U, 6D), (7A, 7F, 7B), (8A, 8F, 8B), (9A, 9F, 9B) and (0A, 0F, 0B) as defined in Table 1.

The mapping described in Table 3 conforms to some simple rules. Each function or character is accessed by operating (or pressing) either zero or one of the (finger-operated) input sensors 202 and 204 in combination with either zero, one or two of the (thumb-operated) directional input sensors 206 and 208. Also, in Table 3, the English alphabet is laid out on the (finger-operated) input sensors 202 and 204 in the same way as for the dictionary input mode and sequential key input mode (i.e. "a", "b" and "c" are associated with the L4 input sensor, "d", "e" and "f" are associated with the L3 input sensor, and so on).

The key layout described in Table 3 aims to avoid pressing more than one (finger-operated) input sensor 202 and 204 at a time in order to produce input. However, other schemes could be devised and implemented by the input control module 110 (e.g. via software) which may require a user to simultaneously operate a plurality of the (finger-operated) input sensors 202 and 204 in order to provide input (e.g. character input).

Figure 9 is a flowchart of the chording input process 900 performed by the processor 112 under the control of the input control module 110. The chording input process 900 begins at step 902 with the input control module 110 associating each different character with a unique combination of input sensors 202, 204, 206 and 208. At step 904, the input control module 110 receives input data from the input apparatus 102. At step 906, the input control module 110 analyses the input data from the input apparatus 102 and determines whether the input represents a known combination (e.g. as defined in Table 3). If there is a match, step 906 proceeds to step 908. Otherwise, step 906 proceeds to step 910.

At step 908, the input control module 110 retrieves (e.g. based on Table 3) the character associated with the combination represented by the input data. The input control module 110 then generates commands or instructions for providing the retrieved character as input to a process performed by the processor 112 under the control of an application module. These instructions direct the processor 112 to generate an updated graphical display interface that includes the character provided as input. Step 908 then proceeds to step 904 to begin receiving and processing further input.

At step 910, the input control module 110 analyses the input data received from the input apparatus 102 and assesses whether the input represents an action to end the input process 900. If so, process 900 ends. Otherwise, step 910 proceeds to step 904 to begin receiving and processing further input.
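The chord lookup of steps 902 to 908 may be sketched as follows; the chord-to-character table is an illustrative assumption standing in for Table 3:

```python
# Sketch of the chording input of process 900. Each character is associated
# with a unique simultaneous combination of sensors (step 902); the
# combinations shown are assumed, not taken from Table 3.
CHORD_TABLE = {
    frozenset({"L4"}): "a",
    frozenset({"L4", "LT"}): "b",
    frozenset({"L4", "RT"}): "c",
}

def chord_to_character(active_sensors):
    """Steps 906-908: return the character for a known simultaneous
    combination, or None (step 910 path) when it is not defined."""
    return CHORD_TABLE.get(frozenset(active_sensors))
```

As in Table 3, each chord pairs at most one finger-operated sensor with zero or more thumb-operated directional sensors.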

Number input mode

If the user needs to perform a task such as numerical data entry, the number mode provides the most convenient way to do this. It is no coincidence that we use a base 10 number system, given that we have ten digits (fingers and thumbs) on our two hands. In number mode, the ten numerals 1, 2, 3, 4, 5, 6, 7, 8, 9 and 0 may be assigned to each of the user's finger and thumb keys from left to right as shown in Table 4:

Table 4
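The left-to-right assignment of the ten numerals described above may be sketched as follows. The key codes follow the L4..LT / RT..R4 convention used elsewhere in this description; the exact Table 4 layout is assumed:

```python
# Sketch of the number input mode: the numerals 1-9 and 0 assigned left to
# right across the user's ten digits (assumed ordering).
NUMBER_KEYS = ["L4", "L3", "L2", "L1", "LT", "RT", "R1", "R2", "R3", "R4"]
NUMBER_MAP = {key: str((i + 1) % 10) for i, key in enumerate(NUMBER_KEYS)}

def number_for_key(key):
    """Return the numeral assigned to a finger or thumb key, or None."""
    return NUMBER_MAP.get(key)
```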

The other directions of the thumb and finger keys can then be assigned the mathematical operators (+, −, ×, ÷) and be used to navigate between cells in a spreadsheet application. Numerical data entry performed with the device in this mode would be significantly faster and have a higher rate of accuracy (i.e. a lower rate of typos) than numerical data entry performed with the "standard" numerical keypad found on most QWERTY keyboards.

Punctuation input

Since the characters in an alphabet are associated with the input sensors 202, 204, 206 and 208 of the input device 106 and 108 in alphabetical order from left to right, the arrangement of the characters on the input sensors 202, 204, 206 and 208 should be easy to memorise by a user. However, there is no such mnemonic for the punctuation characters, since these do not have any standard "order" in which they occur, as the letters of the alphabet do. Thus, the locations of the punctuation characters could be more difficult to memorise.

To overcome this difficulty, another function can be implemented using the input apparatus 102, which defines a generic punctuation key combination (which is activated by the user operating a predefined one or more of the input sensors 202, 204, 206 and 208).

When this punctuation key combination is operated by a user, the input control module 110 generates commands or instructions for the processor 112 to generate a graphical display interface including a table of available punctuation characters (e.g. a pop-up menu on a computer display). The user then scrolls through the punctuation characters using one of the directional control keys 206 or 208, and selects the desired punctuation character by depressing the corresponding lever of that directional control key 206 or 208.
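The scroll-and-select behaviour of the pop-up punctuation menu can be sketched as follows. This is a minimal illustration only; the character table and the way directional-key presses map to `scroll` and `select` calls are assumptions.

```python
# Sketch of the pop-up punctuation menu driven by the directional
# control keys 206 and 208. The character set is illustrative.
PUNCTUATION = [".", ",", ";", ":", "!", "?", "'", '"']

class PunctuationMenu:
    def __init__(self, characters=PUNCTUATION):
        self.characters = list(characters)
        self.index = 0

    def scroll(self, step):
        # A directional key press moves the highlight, wrapping around.
        self.index = (self.index + step) % len(self.characters)

    def select(self):
        # Depressing the lever confirms the highlighted character.
        return self.characters[self.index]
```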

Languages other than English

Since the functions of the individual input sensors 202, 204, 206 and 208 of the input apparatus 102 are not fixed, as they are on a traditional keyboard, but rather are defined by software depending on the mode selected, the input apparatus 102 is adaptable for providing input in languages other than English, which may have alphabets with more or fewer letters than English, or even in languages which do not have alphabets as such, for example Chinese. The different input actions performed by each individual input sensor 202, 204, 206 and 208 may be assigned to (or associated with) a different predefined structural or categorical identifier for a particular input method for entering ideographic characters. This association may be performed under the control of the input control module 110. For Chinese language input, the different input actions of each input sensor 202, 204, 206 and 208 may be assigned to a different type of brush stroke or radical (which make up different Chinese characters) in accordance with a Chinese input method, such as Wubihua or any other Chinese character input method. In use, the input control module 110 may generate a graphical user interface display including a list of ideographic characters based on the user's input actions using the input sensors 202, 204, 206 and 208. A user may select an appropriate character for input from the displayed list (e.g. by activating either of the thumb controlled sensors 206 and 208). In a similar manner, the input actions of the input sensors 202, 204, 206 and 208 may be adapted for inputting ideographic characters of other languages, such as Japanese and Korean.
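The stroke-based scheme described above can be sketched as a lookup from input actions to stroke categories, followed by prefix matching against a character dictionary. The key names, stroke codes and the tiny dictionary below are illustrative assumptions, not a real Wubihua implementation.

```python
# Sketch of stroke-category input in the style of Wubihua: each input
# action maps to one of the five stroke categories, and the candidate
# list is filtered by stroke-sequence prefix. All data is illustrative.
STROKES = {
    "index-down":  "h",   # horizontal stroke
    "middle-down": "v",   # vertical stroke
    "ring-down":   "l",   # left-falling stroke
    "pinky-down":  "d",   # dot / right-falling stroke
    "index-fwd":   "t",   # turning stroke
}

# character -> its stroke sequence (simplified, illustrative)
DICTIONARY = {"十": "hv", "一": "h", "二": "hh"}

def candidates(actions):
    """Return characters whose stroke sequence starts with the input so far."""
    prefix = "".join(STROKES[a] for a in actions)
    return [ch for ch, seq in DICTIONARY.items() if seq.startswith(prefix)]
```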

As described above, different input actions of the input sensors 202, 204, 206 and 208 of the input devices 106 and 108 can be used to provide input representing different letters of the English alphabet. In another representative embodiment, the sequence of letters provided by a user (entered using the input sensors 202, 204, 206 and 208 when configured in the above manner) is processed or interpreted by a module (e.g. the input control module 110) to translate or convert the sequence of letters into one or more ideographic characters. For example, the sequence of letters may represent one or more words in pinyin, which are converted into one or more corresponding Chinese characters. A similar approach can be used to provide ideographic character input in other languages, such as Japanese and Korean.
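The pinyin conversion step above can be sketched as a buffered lookup. The two-entry table is a tiny illustrative stand-in for a real input-method dictionary, and the function name is an assumption.

```python
# Sketch of pinyin-to-character conversion: letters entered in alphabet
# mode are buffered and looked up in a pinyin table. Illustrative only.
PINYIN_TABLE = {
    "ni": ["你"],
    "hao": ["好", "号"],
}

def convert(letters):
    """Return candidate Chinese characters for a buffered pinyin syllable."""
    return PINYIN_TABLE.get("".join(letters), [])
```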

Music input mode

The input device described here lends itself naturally to use as a musical instrument. There are many and various ways in which this could be accomplished. For example, the software could assign functions to the keys of the device such that it emulates a traditional musical instrument such as a flute or a piano.

If a flute, or similar instrument, is being emulated, then the finger keys would correspond to the note(s) produced on the traditional instrument when those fingers activate the levers, or cover the holes of the traditional instrument. In this case the thumb keys could be used for controlling the volume of each note, and other musical nuances.
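The flute emulation described above can be sketched as a note map for the finger keys with a thumb-controlled volume adjustment. The note assignments and key names below are assumptions for illustration, not the fingering of any particular flute.

```python
# Sketch of flute emulation: each finger key sounds a note and the
# thumb keys nudge the volume. Note assignments are illustrative.
FLUTE_NOTES = {
    "L-index": "C5", "L-middle": "B4", "L-ring": "A4",
    "R-index": "G4", "R-middle": "F4", "R-ring": "E4", "R-pinky": "D4",
}

def play(key, volume=0.8):
    """Return the (note, volume) event for a finger-key press."""
    return (FLUTE_NOTES[key], volume)

def thumb_volume(current, direction, step=0.1):
    """A thumb key nudges the volume up or down, clamped to [0, 1]."""
    return min(1.0, max(0.0, current + direction * step))
```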

If a piano keyboard is being emulated, then the gross movement of the user's hands can be used to emulate shifting the hands to a different octave on the traditional keyboard. In the embodiment of the invention in which each finger key has more than one input direction, a forward movement of each finger can be used to emulate the black keys of a piano keyboard, and the downward movement of each finger can be used to emulate the white keys.

The real strength of the music mode, however, would be not to emulate an existing instrument, but to use the available input motions of the user's hands along with appropriate software to create entirely new instruments. The possibilities here are limited only by imagination.

Other modes

Other modes of operation for the input apparatus 102 are possible depending on the particular application for which the input apparatus 102 is being used. For example, particular software packages (e.g. computer technical drawing or drafting software, or computer games) may redefine the functions of each key (i.e. define specialised functions associated with the operation of specific combinations of one or more of the input sensors 202, 204, 206 and 208), and even allow switching between standard and additionally defined modes within a particular software application. These features can be implemented in the input control module 110.
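The application-defined modes described above can be sketched as a small registration interface on the input control module. The class and method names are hypothetical; this is only an illustration of the idea that software packages define and switch their own key bindings.

```python
# Sketch of application-defined modes: a software package registers its
# own bindings for sensor combinations and switches modes at will.
class InputControlModule:
    def __init__(self):
        self.modes = {"standard": {}}
        self.active = "standard"

    def register_mode(self, name, bindings):
        """Let a software package define functions for key combinations."""
        self.modes[name] = dict(bindings)

    def switch_mode(self, name):
        self.active = name

    def handle(self, combo):
        """Return the function bound to a sensor combination, if any."""
        return self.modes[self.active].get(combo)
```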

Advantages

The input apparatus 102 described herein takes the typical anatomical features of human hands, along with the 26-letter English alphabet, the 10-digit number system and other common functions required for computer data input and manipulation, and creates an input apparatus 102 which is comfortable, easy and natural to use. The input apparatus 102 provides even greater productivity by doubling as not one, but two pointing devices (when the input devices 106 and 108 are equipped with motion sensors). Each input device 106 and 108 serves as a pointing device which controls one of two pointers on the computer screen.

Modifications and improvements to the invention will be readily apparent to those skilled in the art. For example, the input devices 106 and 108 could be in the form of a glove that receives a user's hand, rather than physical devices that are gripped by a user's hand. Such modifications and improvements are intended to be within the scope of this invention.

In this specification, where a document, act or item of knowledge is referred to or discussed, this reference or discussion is not an admission that the document, act or item of knowledge or any combination thereof was, at the priority date, publicly available, known to the public, part of common general knowledge, or known to be relevant to an attempt to solve any problem with which this specification is concerned. The word 'comprising' and forms of the word 'comprising' as used in this description and in the claims do not limit the invention claimed to exclude any variants or additions.