

Title:
TEXT INPUT METHOD FOR AUGMENTED REALITY DEVICES
Document Type and Number:
WIPO Patent Application WO/2022/246334
Kind Code:
A1
Abstract:
Systems and methods are provided for generating a virtual keyboard for text input on virtual reality and augmented reality devices. The systems and methods disclosed herein can generate a virtual keyboard on a mobile device. The virtual keyboard comprises an operation area, a plurality of virtual key areas, and a plurality of borders, each border at an interface between the operation area and each virtual key area of the plurality of virtual key areas. The systems and methods provide for detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders, configuring a confirmation criteria, detecting a second trajectory of the user input that crosses the first border, selecting an input key of the virtual keyboard based on detecting the first and second trajectories that satisfy the confirmation criteria, and displaying text based on the selected input key.

Inventors:
MEI CHAO (US)
XU BUYI (US)
XU YI (US)
Application Number:
PCT/US2022/031890
Publication Date:
November 24, 2022
Filing Date:
June 02, 2022
Assignee:
INNOPEAK TECH INC (US)
International Classes:
G06F3/04886
Foreign References:
US20190073117A12019-03-07
US20130271379A12013-10-17
Attorney, Agent or Firm:
CATANESE, Mark, W. (US)
Claims:
Claims

What is claimed is:

1. A method for text input comprising: generating a virtual keyboard on a mobile device, wherein the virtual keyboard comprises an operation area, a plurality of virtual key areas, and a plurality of borders, each border at an interface between the operation area and each virtual key area of the plurality of virtual key areas; detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders; configuring a confirmation criteria; detecting a second trajectory of the user input that crosses the first border; selecting an input key of the virtual keyboard based on detecting the first and second trajectories that satisfy the confirmation criteria; and displaying text based on the selected input key.

2. The method of claim 1, wherein detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders comprises: detecting that the user input crosses the first border along the first trajectory by moving from the operation area into a first virtual key area of the plurality of virtual key areas; and triggering a pre-selection state in response to detecting the user input crossing the first border along the first trajectory.

3. The method of claim 2, wherein detecting a second trajectory of the user input that crosses the first border further comprises: detecting that the user input crosses the first border along the second trajectory by moving from the first virtual key area into the operation area; and confirming the selection of the input key in response to detecting the user input crossing the first border along the second trajectory.

4. The method of any one of claims 1-3, further comprising: obtaining a threshold angle; detecting an angle between the first trajectory and the second trajectory; and selecting the input key of the virtual keyboard responsive to determining that the angle is less than or equal to the threshold angle.

5. The method of any one of claims 1-3, further comprising: obtaining a first designated direction component as the confirmation criteria; triggering a pre-selection state in response to detecting the first trajectory comprises the first designated direction component; and confirming the selection of the input key in response to detecting the user input crossing the first border along the second trajectory.

6. The method of claim 5, further comprising: obtaining a second designated direction component as the confirmation criteria, wherein confirming the selection of the input key in response to detecting the user input crossing the first border along the second trajectory is responsive to detecting the second trajectory comprises the second designated direction component.

7. The method of any one of claims 1-3, wherein the confirmation criteria is configured responsive to detecting the first trajectory of a user input on the mobile device crosses the first border.

8. The method of claim 7, wherein each of the plurality of borders comprises a plurality of edges, wherein the method comprises: detecting the first trajectory crosses a first edge of the first border; setting the first edge as a validation boundary; detecting the second trajectory of the user input crosses the first edge; and selecting the input key of the virtual keyboard responsive to detecting that the first and second trajectories cross the first edge.

9. The method of any one of claims 1-8, further comprising: obtaining a time threshold; detecting an amount of time between the first trajectory crossing the first border and the second trajectory crossing the first border; and selecting the input key of the virtual keyboard responsive to determining that the amount of time is less than or equal to the time threshold.

10. The method of any one of claims 1-9, wherein the user input is a continuous user input.

11. The method of any one of claims 1-10, further comprising displaying a graphical representation of the virtual keyboard on a display device that is external to the mobile device.

12. The method of claim 11, wherein the display device is a head-mounted display device, wherein displaying a graphical representation of the virtual keyboard comprises superimposing the virtual keyboard over a field of view of the head-mounted display device.

13. The method of one of claims 11 and 12, further comprising generating a graphical icon representative of the detected user input and displaying the graphical icon on the virtual keyboard at positions corresponding to positions of the user input.

14. The method of any one of claims 1-13, wherein the mobile device comprises a touch screen and the user input is a physical contact with the touch screen.

15. The method of any one of claims 1-14, further comprising displaying the virtual keyboard on a display surface of the mobile device.

16. The method of any one of claims 1-15, further comprising assigning an input key to each of the virtual key areas, wherein input keys comprise one of an alpha-numeric character input key and command input key.

17. A system for text input, the system comprising: a memory configured to store instructions; and one or more processors communicably coupled to the memory and configured to execute the instructions to perform a method comprising: generating a virtual keyboard on a mobile device, wherein the virtual keyboard comprises an operation area, a plurality of virtual key areas, and a plurality of borders, each border at an interface between the operation area and each virtual key area of the plurality of virtual key areas; detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders; configuring a confirmation criteria; detecting a second trajectory of the user input that crosses the first border; selecting an input key of the virtual keyboard based on detecting the first and second trajectories that satisfy the confirmation criteria; and displaying text based on the selected input key.

18. The system of claim 17, wherein the method further comprises: obtaining a threshold angle; detecting an angle between the first trajectory and the second trajectory; and selecting the input key of the virtual keyboard responsive to determining that the angle is less than or equal to the threshold angle.

19. The system of claim 17, wherein the method further comprises: obtaining a first designated direction component as the confirmation criteria; triggering a pre-selection state in response to detecting the first trajectory comprises the first designated direction component; and confirming the selection of the input key in response to detecting the user input crossing the first border along the second trajectory.

20. A non-transitory computer-readable storage medium storing a plurality of instructions executable by one or more processors, the plurality of instructions when executed by the one or more processors cause the one or more processors to perform a method comprising: generating a virtual keyboard comprising at least one operation area, a plurality of virtual key areas disposed within the operation area, and a plurality of borders at the interface between the at least one operation area and the plurality of virtual key areas; configuring a confirmation criteria responsive to detecting a user input that crosses a border of the plurality of borders; and displaying a graphical representation of the virtual keyboard on a display screen.

Description:
TEXT INPUT METHOD FOR AUGMENTED REALITY DEVICES

Cross-Reference to Related Applications

[0001] This application claims the benefit of U.S. Provisional Application No. 63/196,082 filed June 2, 2021 and titled "COOL DOWN - CONTINUOUS TOUCH - REFINED IN AND OUT (CD-CT-RIO) INTERACTION PARADIGM FOR TEXT INPUT WITH HEAD-MOUNTED DISPLAYS (HMDS)," which is hereby incorporated herein by reference in its entirety.

Technical Field

[0002] The embodiments of the present disclosure relate to the field of vision enhancement technology, and in particular, to systems and methods for text input on virtual reality and augmented reality devices.

Description of the Related Art

[0003] Modern computing and display technologies have facilitated the development of systems for so-called "extended reality" (XR) experiences. An XR experience refers to the full range of real-and-virtual environments generated by computing and display technologies. XR experiences encompass "virtual reality," "augmented reality," and "mixed reality" experiences. With these visual enhancement display technologies, digitally reproduced images or portions thereof are presented to a user in a manner that seems to be, or may be perceived as, real. A virtual reality (VR) experience involves presentation of digital or virtual image information that is not transparent to actual real-world visual input, whereas an augmented reality (AR) experience typically involves presentation of digital or virtual image information as an augmentation to the actual real-world environment around the user. A mixed reality (MR) experience is an extension of AR experiences that enables virtual elements to interact with real-world elements in an environment. Such technologies provide a simulated environment with which a user can interact, thereby providing an immersive experience.

[0004] Text input into visual enhancement display technologies can be a challenge, particularly for head-mounted display devices (HMDs) that include display screens for implementing the visual enhancement display technologies. For example, a physical keyboard and mouse can be connected to an HMD. On a physical keyboard, a user can typically tell, based on the change in resistance of a key, when a key has been sufficiently depressed. This tactile feedback relieves a user from having to constantly look at the physical keyboard to visually verify that input is being entered. Accordingly, the user's eyes are freed up away from the keyboard. However, in many cases it may not be practical or economical to connect external input devices, such as a physical keyboard, to the HMD. For example, in many public environments (e.g., airports, shopping malls, etc.), a physical keyboard would quickly wear out or become damaged due to the volume and diversity of use. As another example, for certain applications and uses of an HMD an external keyboard may be impractical. For example, an AR application may involve using the HMD to view an item under maintenance (e.g., a vehicle or home repair) and to display instructions over the real-world view to assist in the repair. In such an application, retrieving and connecting a physical keyboard could be cumbersome, and the physical keyboard may simply get in the way of the repair.

[0005] Thus, some computing systems use software-based "virtual" keyboards. A virtual keyboard is essentially a replica of a real keyboard (or portion thereof) that is presented to the user, for example, on a touch screen. To enter a character, a user contacts the touch screen at a location for the desired input. Unfortunately, when using a virtual keyboard, there is no way to provide the tactile feedback associated with using a physical keyboard. Thus, a user must focus their attention on the virtual keyboard on the touch screen in order to see what they are typing. This makes it difficult for a user to accurately select inputs while remaining immersed in the simulated experience. Instead, the user must shift their focus to the touch screen to ensure accurate finger or thumb placement for correct text input. Doing so for each entered character is inefficient and potentially burdensome to a user.

Brief Summary

[0006] According to various embodiments of the disclosed technology, systems and methods are provided for generating a virtual keyboard for text input on virtual reality and augmented reality devices.

[0007] In accordance with some embodiments, a method for text input is provided. The method comprises generating a virtual keyboard on a mobile device, wherein the virtual keyboard comprises an operation area, a plurality of virtual key areas, and a plurality of borders, each border at an interface between the operation area and each virtual key area of the plurality of virtual key areas; detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders; configuring a confirmation criteria; detecting a second trajectory of the user input that crosses the first border; selecting an input key of the virtual keyboard based on detecting the first and second trajectories that satisfy the confirmation criteria; and displaying text based on the selected input key.

[0008] In another aspect, a system for text input is provided. The system comprises a memory configured to store instructions and one or more processors communicably coupled to the memory. The one or more processors are configured to execute the instructions to perform a method comprising generating a virtual keyboard on a mobile device, wherein the virtual keyboard comprises an operation area, a plurality of virtual key areas, and a plurality of borders, each border at an interface between the operation area and each virtual key area of the plurality of virtual key areas; detecting a first trajectory of a user input on the mobile device that crosses a first border of the plurality of borders; configuring a confirmation criteria; detecting a second trajectory of the user input that crosses the first border; selecting an input key of the virtual keyboard based on detecting the first and second trajectories that satisfy the confirmation criteria; and displaying text based on the selected input key.

[0009] In another aspect, a non-transitory computer-readable storage medium is provided. The non-transitory computer-readable storage medium stores a plurality of instructions executable by one or more processors that, when executed by the one or more processors, cause the one or more processors to perform a method. The method comprises generating a virtual keyboard comprising at least one operation area, a plurality of virtual key areas disposed within the operation area, and a plurality of borders at the interface between the at least one operation area and the plurality of virtual key areas; configuring a confirmation criteria responsive to detecting a user input that crosses a border of the plurality of borders; and displaying a graphical representation of the virtual keyboard on a display screen.

[0010] Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosed technology. The summary is not intended to limit the scope of any inventions described herein, which are defined solely by the claims attached hereto.

Brief Description of the Drawings

[0011] The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.

[0012] FIG. 1 illustrates an example of a visual enhancement system according to embodiments disclosed herein.

[0013] FIG. 2 illustrates an example virtual keyboard layout according to various embodiments of the present disclosure.

[0014] FIGS. 3 and 4 illustrate additional examples of a virtual keyboard layout according to various embodiments of the present disclosure.

[0015] FIGS. 5-7 illustrate example input key mistouch scenarios that may result in a selection of an unintended input key.

[0016] FIG. 8 illustrates examples of confirmation criteria implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout of FIG. 2.

[0017] FIG. 9 illustrates examples of confirmation criteria implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout of FIG. 2.

[0018] FIG. 10 illustrates another example of a confirmation criteria implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout of FIG. 4.

[0019] FIG. 11 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.

[0020] These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.

Detailed Description

[0021] Embodiments presented herein provide systems and methods configured to generate a virtual keyboard usable to input text in XR applications (e.g., VR, MR, and/or AR applications). Various embodiments provided herein utilize an input detection device communicably coupled to a display device. The input detection device, such as a mobile device, is configured to maintain a virtual keyboard thereon. The input detection device may be configured to detect user inputs thereon, such as physical contact with a touch screen of the mobile device, and convert the detected user inputs to input key selections. The input detection device is communicably coupled to an external display device that presents the virtual keyboard to a user. The display device may be an XR display device, such as a head-mounted display device (HMD), tethered to the input detection device via wired or wireless communication. Embodiments herein may detect continuous user input (e.g., uninterrupted physical contact with the input detection device) executing a selection mechanism to select desired input keys on the virtual keyboard. Additionally, feedback and/or a graphical representation of the location of the user input on the virtual keyboard is presented to the user by the display device. This allows for input key selection without the user altering their gaze to the input detection device. Thus, the user need not monitor input selection by directly viewing the input detection device, and is able to view their surroundings via the display device while simultaneously using the virtual keyboard.

[0022] Existing methods and systems for text input in XR applications are cumbersome and inefficient, especially when the input text is long. In addition, due to the large number of actions associated with handheld controllers, text input with such controllers usually causes rapid user fatigue. For example, a "point and shoot" input approach uses a virtual ray from a handheld controller to aim at input keys on a projected virtual keyboard. Confirmation of the key selection is performed by clicking a trigger button on the handheld controller. Another method uses a virtual ray projected on the display that is indicative of a head direction, derived from an HMD, to point to a key. Confirmation is then performed through activation of a trigger button on a handheld controller or on the HMD itself. In yet another example, multiple handheld controllers may each be assigned a portion of a split virtual keyboard, and button selection is made by sliding the fingertip along the surface of a touchpad on each controller. The confirmation of the text input may be completed by pressing a trigger button. Of the aforementioned approaches, the first causes rapid user fatigue due to the need for numerous handheld controller clicks, as well as frustration due to the imprecision of point-and-shoot aiming; the second increases the possibility of motion sickness because it involves frequent head movements, and increased text input requires faster head movements; and the third is inefficient because, when there are many keys on the keyboard, sliding fingertips across a traditional QWERTY layout to locate a key could result in fatigue and inaccurate text selection.

[0023] In the case of a display device being tethered to a mobile device, one approach is to use an existing text input interface on the mobile device. Generally, mobile devices have a floating full keyboard (e.g., a QWERTY keyboard), a T9 keyboard, a handwriting interface, etc. However, these keyboards require the user to view the keyboard interface on the mobile device screen to ensure accurate key selection and finger placement, at least because the tactile feedback of a conventional physical keyboard cannot be replicated on the keyboard interface. For XR applications, however, the user may want to maintain their field of view (FOV) on the simulated environment within their line of sight to preserve the immersive experience. Thus, the above methods are not ideal.

[0024] Furthermore, these methods utilize the input key selection mechanisms of tap and lift-off, both of which have limitations in XR applications. Tap selection may be less accurate due to the user's inability to directly monitor finger placement. Lift-off selection, meanwhile, may reduce typing efficiency because users must reposition their fingers after each lift-off from the input device, which interrupts planning of the finger movement to the next character.

[0025] Some methods provide for a layout that is different from, but based on, a traditional QWERTY key layout. For example, an altered keyboard layout that is based on a traditional QWERTY key layout is described in International Application No. PCT/US2022/020897, which is incorporated herein by reference in its entirety. However, while such an altered key layout provides various advantages, users may resist deviating from the traditional QWERTY key layout.

[0026] Accordingly, embodiments disclosed herein provide an improved and optimized virtual keyboard and text input method for XR display applications. Embodiments herein utilize a traditional QWERTY key layout or a T9 keyboard layout to leverage users' existing familiarity with these keyboard layouts.

[0027] Embodiments disclosed herein provide for an input detection device (such as a mobile device) that generates and maintains a virtual keyboard, with a graphical representation of the virtual keyboard layout projected on a display device (such as an XR-enabled device) external to the input detection device. In some embodiments, the virtual keyboard may also be displayed on the input detection device (e.g., on a display surface of a mobile device), while in others the virtual keyboard may not be displayed graphically. Embodiments herein also display the location of user input on the virtual keyboard by projecting a graphical representation (e.g., a graphical icon) of the location on the display device. As such, the user may monitor the physical input location relative to the virtual keyboard on the input detection device via the projected graphical representation on the display device. By displaying the user input location on the virtual keyboard projected by the display device, the user does not need to monitor actual input placement or interrupt input movement planning.

[0028] Additionally, embodiments disclosed herein may utilize an "In 'n out" selection mechanism for input key selection, which provides for improved key input while reducing inaccurate key selection. The selection mechanism according to various embodiments detects a continuous user input (e.g., continuous physical contact between the user input, such as a finger or thumb, and the input detection device) that moves a contact point from an idle state in an operation area into one of a plurality of virtual key areas, each virtual key area assigned an input key of the virtual keyboard. Based on the movement, embodiments herein trigger a pre-selection state and designate the input key of the one virtual key area as a candidate input key, if certain confirmation criteria are satisfied. Moving the contact point out of the one of the plurality of virtual key areas back into the non-key area confirms the pre-selection state and executes the input key (e.g., in the case of character input keys, enters the character for text input). In this way, the user need not lift their finger or tap to execute an input key selection. Thus, the user need not look at the input detection device to confirm finger placement and accurate text entry.

[0029] However, in certain scenarios, the above selection mechanism may result in selecting an unintended input key. For example, in some cases passing through a virtual key area to reach an intended key may result in selecting the unintended key. As another example, unintentional movement of a contact may result in selecting unintended keys. In yet another example, a change in direction of contact movement (e.g., a sudden realization that a different key is intended) may result in selecting an unintended key.
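
As an illustration only, the following is a minimal Python sketch of how the "In 'n out" selection flow of paragraph [0028] might be modeled. The names (Region, KeySelector, on_contact_moved) are assumptions for this sketch and are not part of the disclosure; confirmation criteria are deliberately omitted here and illustrated after paragraph [0030] below.

```python
# Hypothetical sketch of the "In 'n out" selection mechanism. Confirmation
# criteria are omitted; a key is pre-selected on entering a virtual key area
# and confirmed when the continuous contact moves back into the operation area.
from enum import Enum


class Region(Enum):
    OPERATION_AREA = "operation"  # keyboard area with no input keys
    KEY_AREA = "key"              # one of the virtual key areas


class KeySelector:
    def __init__(self):
        self.candidate_key = None  # key pre-selected but not yet confirmed

    def on_contact_moved(self, region, key=None):
        """Update the selection state as a continuous contact crosses a border."""
        if region is Region.KEY_AREA and self.candidate_key is None:
            # First trajectory: contact crossed a border from the operation
            # area into a virtual key area -> trigger the pre-selection state.
            self.candidate_key = key
            return None
        if region is Region.OPERATION_AREA and self.candidate_key is not None:
            # Second trajectory: contact crossed back into the operation area
            # -> confirm and execute the pre-selected key.
            confirmed, self.candidate_key = self.candidate_key, None
            return confirmed
        return None
```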

[0030] Accordingly, embodiments disclosed herein configure one or more confirmation criteria as part of the selection mechanism, which are configured to avoid and/or mitigate the above and other mistouch scenarios. Confirmation criteria may include, but are not limited to, designating certain directional components, a confirmation time condition, setting confirming movements based on prior movements, and/or an angular threshold between the trajectory that triggers a pre-selection state and the trajectory that confirms that state. Embodiments herein implement one or more confirmation criteria and confirm that each criterion is satisfied before confirming a pre-selection. That is, even though the selection mechanism may be satisfied by moving the contact point out of the one of the plurality of virtual key areas back into the non-key area, the pre-selection state will not be confirmed unless the one or more confirmation criteria are also satisfied.
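
As one hedged illustration of how such confirmation criteria might be evaluated, the sketch below checks an angular threshold between the two trajectories and a designated direction component. The measurement convention (comparing the inbound trajectory against the reversed outbound trajectory) and the default values are assumptions, not parameters taken from the disclosure.

```python
# Illustrative confirmation-criteria checks; thresholds and conventions are
# assumptions for this sketch only.
import math


def angle_between(v1, v2):
    """Angle in degrees between two 2-D trajectory vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag = math.hypot(v1[0], v1[1]) * math.hypot(v2[0], v2[1])
    if mag == 0.0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))


def satisfies_angle_criterion(first_traj, second_traj, threshold_deg=30.0):
    # Assumption: the angle is measured between the inbound trajectory and the
    # reversed outbound trajectory, so a clean "in and back out" motion yields
    # a small angle that is at or below the threshold.
    reversed_second = (-second_traj[0], -second_traj[1])
    return angle_between(first_traj, reversed_second) <= threshold_deg


def satisfies_direction_criterion(traj, designated=(0.0, 1.0)):
    # Assumption: a designated direction component is present when the
    # trajectory has a positive projection onto the designated direction.
    return (traj[0] * designated[0] + traj[1] * designated[1]) > 0.0
```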

[0031] It should be noted that the terms "optimize," "optimal" and the like as used herein can be used to mean making or achieving performance as effective or perfect as possible. However, as one of ordinary skill in the art reading this document will recognize, perfection cannot always be achieved. Accordingly, these terms can also encompass making or achieving performance as good or effective as possible or practical under the given circumstances, or making or achieving performance better than that which can be achieved with other settings or parameters.

[0032] FIG. 1 illustrates an example of a visual enhancement system 100 according to embodiments disclosed herein. The visual enhancement system 100 includes a display device 110 and an input detection device 120. The display device 110 is communicatively coupled to the input detection device 120 by a communications link 130, such as by wired or wireless connectivity. For example, the communications link 130 may be implemented using one or more wireless communication protocols, such as Bluetooth®, Wi-Fi, cellular communications protocols (e.g., 4G LTE, 5G, and the like), and so on. Alternatively, or in combination, the communications link 130 may comprise a data cable, such as a Universal Serial Bus (USB) cable, an HDMI cable, and so on.

[0033] The input detection device 120 may be implemented as any of the processing systems described below. For example, the mobile device may be a mobile smart phone, tablet computer, personal computer, laptop computer, personal digital assistant, smart watch (or other wearable smart device), and so on.

[0034] The input detection device 120 may be a computer or processing system, such as the computer system 1100 of FIG. 11. In various embodiments and as illustrated in FIG. 1, the input detection device 120 may be a mobile device, such as, but not limited to, a mobile telephone, tablet computer, wearable smart device (e.g., a smartwatch and the like), etc. However, the input detection device 120 may be implemented as any computing system known in the art, for example, a personal computer, laptop computer, personal digital assistant, and so on. The input detection device need only be configured to detect user inputs and convert those user inputs into input key selections to facilitate text entry thereon. Example user inputs may be, for example, physical contact with a touch screen or other responsive surface via, for example, a finger, thumb, appendage, or user input device (e.g., a stylus or pen); voice command inputs; gesture command inputs; and the like. Embodiments herein refer to physical or direct contact with the input detection device, which refers to any contact on the input detection device (e.g., a touch screen) that is detected as an input. Thus, a physical or direct contact from the finger need not require the finger to directly contact the device (e.g., when the finger is covered by a glove or other material), so long as the contact is a result of the user exerting force on the input detection device via an appendage or input device. As set forth above, the input detection device 120 may be any computing device; however, the input detection device 120 will be referred to herein as mobile device 120 for illustrative purposes only.

[0035] FIG. 1 illustrates an example architecture of the mobile device 120 that may facilitate text input via a virtual keyboard. The mobile device 120 includes sensors 121, a sensor interface 122, a clock 132, a virtual keyboard module 124, application(s) 126, and a graphics engine 128. Generally, the components of system 100, including sensors 121, sensor interface 122, virtual keyboard module 124, application(s) 126, and graphics engine 128, interoperate to implement various embodiments for generating a virtual keyboard that is maintained on the mobile device 120, displaying the virtual keyboard to a user, and receiving user input.

[0036] Sensors 121 can be configured to detect when a physical object (e.g., one or more fingers, one or more stylus pens, or any other input object or device) has come into physical contact with a portion of display surface 125. The display surface 125 may be a multi-touch display surface configured to detect contact from one or more physical objects (e.g., multiple fingers or pens) with the display surface 125. For example, sensors 121 can detect when one or more fingers of a user come into contact with display surface 125. Sensors 121 can be embedded in the display surface 125 and can include, for example, pressure sensors, temperature sensors, image scanners, barcode scanners, etc., that interoperate with sensor interface 122 to detect multiple simultaneous inputs.

[0037] The display surface 125 may include sensors for implementing a touch screen interface. For example, the sensors 121 may be implemented as resistive sensors, capacitive sensors, optical imaging sensors (e.g., CMOS sensors and the like), dispersive signal sensors, acoustic pulse recognition sensors, and so on for touch screen applications. For example, display surface 125 can include an interactive multi-touch surface. Thus, it may be that display surface 125 also functions as a presentation surface to display video output data to the user of the visual enhancement system 100.

[0038] Sensors 121 can be included (e.g., embedded) in a plurality of locations across display surface 125. Sensors 121 can detect locations where physical contact with the display surface 125 has occurred. The density of sensors 121 can be sufficient such that contact across the entirety of display surface 125 can be detected. Thus, sensors 121 are configured to detect and differentiate between simultaneous contact at a plurality of different locations on the display surface 125.

[0039] Sensor interface 122 can receive raw sensor signal data from sensors 121 and can convert the raw sensor signal data into contact input data (e.g., digital data) that can be compatibly processed by other modules of mobile device 120. Sensor interface 122 or the other modules can buffer contact input data as needed to determine changes in contact on display surface 125 over time. For example, sensor interface 122 or the other modules can determine a change in position of a contact with the display surface 125 over time.

[0040] For example, raw sensor signal data from sensors 121 can change as new contacts are detected, as existing contacts are moved while maintaining continuous and uninterrupted contact (e.g., continuous contact of a user's finger with the display surface 125 while the finger is moved across the display surface 125), and as existing contacts are released (e.g., a finger causing contact is lifted off from the display surface 125) on display surface 125. Thus, upon receiving an indication of contact on display surface 125, sensor interface 122 can initiate buffering of raw sensor signal data (e.g., within a buffer in system memory of the mobile device 120). As contacts on display surface 125 change, sensor interface 122 can track the changes in raw sensor signal data and update locations and ordering of detected contacts within the buffer. Thus, in some embodiments, the sensor interface 122 can track trajectories of detected contacts on the display surface 125 (e.g., movements of the contacts) by buffering the raw sensor data.
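
For illustration, the buffering and trajectory tracking described in paragraph [0040] might be sketched as follows; the ContactBuffer class and its methods are assumptions for this sketch and do not represent the actual interface of sensor interface 122.

```python
# Hypothetical contact buffer: time-stamped positions are recorded while a
# continuous contact is maintained, so trajectories can be derived later.
from collections import deque


class ContactBuffer:
    def __init__(self, maxlen=256):
        self._points = deque(maxlen=maxlen)  # (timestamp, x, y) samples

    def on_contact(self, timestamp, x, y):
        """Append a sample while physical contact with the surface continues."""
        self._points.append((timestamp, x, y))

    def on_release(self):
        """Contact lifted off: clear the buffered trajectory."""
        self._points.clear()

    def last_displacement(self):
        """Vector between the two most recent samples (a local trajectory)."""
        if len(self._points) < 2:
            return (0.0, 0.0)
        (_, x0, y0), (_, x1, y1) = self._points[-2], self._points[-1]
        return (x1 - x0, y1 - y0)
```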

[0041] For example, sensor interface 122 can determine that contact at a first point in time was detected at a first location and then the contact was subsequently moved to a second location. The sensor interface 122 can determine that the contact between the first location and second location was a continuous contact with the display surface 125 (e.g., physical contact between both locations was not interrupted for example by a separation or lift-off). The first location may be a first area of a virtual keyboard absent of input keys and the second location may be an area corresponding to an input key. Upon detecting movement of the contact into the input key area, sensor interface 122 can convert the contents of the buffer to candidate input key data (e.g., input data representing the input key corresponding to the second location) for a pre-selection state. Sensor interface 122 may then send the candidate input key data to other modules at mobile device 120. The candidate input key data may be used by the other modules to identify or display (e.g., on mobile device 120 and/or display device 110) the candidate input key in a pre-selection state.

[0042] Subsequently, the sensor interface 122 can determine that the contact was moved, without separating the physical contact from the display surface 125, to a third location outside of the area corresponding to the input key. The continuous contact and locations may be stored in a buffer. Upon receiving an indication that the third location is in the area absent any input keys (e.g., the area in which the first location was detected), sensor interface 122 can convert the contents of the buffer to input key data (e.g., input data representing the confirmation of the candidate input key data). Sensor interface 122 may then send the input key data to other modules at mobile device 120. Other modules can buffer the input key data as needed to determine changes at a location over time. The input key data may be used by the other modules to execute the selected input key for text entry.

[0043] Virtual keyboard module 124 is configured to maintain a virtual keyboard within mobile device 120. For example, the virtual keyboard module 124 may generate data defining the virtual keyboard (e.g., virtual keyboard data). The virtual keyboard module 124 executes software to create the virtual keyboard. For example, the virtual keyboard module 124 may define areas of the display surface 125 for input keys (e.g., alpha-numeric character input keys and/or command input keys, such as, but not limited to, an enter command, backspace command, space command, etc.) of the virtual keyboard (e.g., each area assigned an input key). User interaction with each defined region (e.g., contact by a user finger or other input device with the region of display surface 125) may be converted by the virtual keyboard module 124 to a corresponding input key. The virtual keyboard module 124 may be stored in a memory (e.g., random access memory (RAM), cache, and/or other dynamic storage devices).

[0044] Virtual keyboard module 124 is configured to present a graphical representation of the virtual keyboard on the display device 110. For example, virtual keyboard module 124 may generate image data of the virtual keyboard (e.g., virtual keyboard image data) for rendering a visualization of the virtual keyboard on the display device 110. In some embodiments, the virtual keyboard module 124 communicates the virtual keyboard image data to the display device 110 via the wired or wireless connection. The display device 110 may receive the virtual keyboard image data and generate the graphical representation of the virtual keyboard, which is visually displayed to a user of the display device 110 via display screen(s) 111.

[0045] In some embodiments, the virtual keyboard module 124 may be configured to generate a graphical representation of the virtual keyboard on the display surface 125 of the mobile device 120. For example, the virtual keyboard module 124 may provide the virtual keyboard image data to the graphics engine 128 which converts the virtual keyboard data into a visualization of the virtual keyboard on the display surface 125.

[0046] In some embodiments, the virtual keyboard module 124 need not display the virtual keyboard on the display surface 125. In such cases, the virtual keyboard module 124 assigns the sub-areas of the display surface 125 as set forth above and uses user interaction with each sub-area to input text. The display surface 125 may display a solid color or any desired image while the user interacts with the display surface 125 to select input keys from the virtual keyboard.

[0047] In some embodiments, the location of the virtual keyboard within the display surface 125 may be set in advance, for example, with a pre-defined layout, orientation, and location within the display surface. In another embodiment, the location of the virtual keyboard within the display surface 125 may be determined based on an initial position of contact with the display surface 125 by the input device. For example, upon initializing the virtual keyboard, the virtual keyboard module 124 may receive contact input data from the sensor interface 122 for a first contact position on display surface 125. The virtual keyboard module 124 may determine the first contact position as a center position of the virtual keyboard and generate the virtual keyboard around that center position, such that the first contact position is centrally located within the virtual keyboard.
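
A minimal sketch, assuming a rectangular keyboard clamped to the display surface, of the contact-centered placement described in paragraph [0047]; the function name and the clamping behavior are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical placement helper: center the keyboard on the first contact
# position, keeping the keyboard rectangle inside the display surface.
def place_keyboard(first_contact, keyboard_size, surface_size):
    cx, cy = first_contact      # first contact position (x, y)
    kw, kh = keyboard_size      # keyboard width and height
    sw, sh = surface_size       # display surface width and height
    x = min(max(cx - kw / 2.0, 0.0), sw - kw)
    y = min(max(cy - kh / 2.0, 0.0), sh - kh)
    return (x, y)               # top-left corner of the keyboard
```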

[0048] Virtual keyboard module 124 can generate the virtual keyboard in response to selection of an application data field within an application 126. For example, an application 126 can present and maintain an application user interface on display surface 125 and/or display device 110. The user may select an application data field for purposes of entering text. In response to the selection, virtual keyboard module 124 can generate virtual keyboard data, maintain the virtual keyboard in the mobile device 120, and communicate the virtual keyboard data to the display device 110 for presentation to the user.

[0049] As described below in connection with FIGS. 2-10, the virtual keyboard may be configured based on a QWERTY keyboard. For example, the embodiments disclosed herein may utilize a QWERTY keyboard or split QWERTY keyboard layout. Alternatively, the virtual keyboard can be configured based on any type of keyboard layout, such as, but not limited to, a T9 keyboard. Accordingly, the virtual keyboard can include function keys, application keys, cursor control keys, enter keys, a numeric keypad, operating system keys, etc. The virtual keyboard can also be configured to present characters of essentially any alphabet such as, for example, English, French, German, Italian, Spanish, Chinese, Japanese, etc.

[0050] Generally, virtual keyboard module 124 can receive input contact data (e.g., data representing selecting and confirming a virtual key) from sensor interface 122. From the input data, virtual keyboard module 124 can generate the appropriate character code or command input for a character from essentially any character or command set, such as, for example, Unicode, ASCII, EBCDIC, ISO-8859 character sets, ANSI, Microsoft® Windows® character sets, Shift JIS, EUC-KR, etc. The virtual keyboard module 124 can send the character and/or command code to application 126. Application 126 can receive the character and/or command code and present the corresponding character in the application data field and/or execute the command (e.g., an enter command, backspace, delete, etc.).
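
As a hedged illustration of this dispatch step, the sketch below maps a confirmed key either to a command or to a character handed to the application; the COMMANDS table, dispatch_key, and the application object's methods are hypothetical names, not the interfaces of application 126.

```python
# Illustrative dispatch of a confirmed input key; key identifiers and the
# application object's methods are assumptions for this sketch.
COMMANDS = {"BACKSPACE": "backspace", "ENTER": "enter", "SPACE": "space"}


def dispatch_key(key_id, application):
    if key_id in COMMANDS:
        application.execute_command(COMMANDS[key_id])  # e.g., delete last char
    else:
        application.insert_text(key_id)                # e.g., the letter "a"
```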

[0051] Alternatively to, or in combination with, sending a character code to application 126, the virtual keyboard module 124 can send a character and/or command code to a replication window in the application user interface and/or in the display device 110, which can receive the character code and present the corresponding character on a display (e.g., display 111) and/or execute the command. Accordingly, a user can be given a visual indication of the character that was sent to application 126, without having to alter their field of view to look at display surface 125.

[0052] Virtual keyboard module 124 can also buffer one or more character and/or command codes as candidate inputs until an indication is received that confirms the input selection or cancels the selection. The indication may confirm that the user intended to select the candidate character and/or command input key. In another case, the indication may indicate that the user did not intend the selected input key. The indication can result from satisfying a logical condition, for example, a selection mechanism including various conditions and/or confirmation criteria as disclosed herein and described in connection with FIGS. 2-10. Virtual keyboard module 124 can present a sequence of buffered character codes for verification (e.g., displayed in a replication window). Then, in response to an indication confirming selection, virtual keyboard module 124 can send the buffered sequence of character codes to application 126.

[0053] In various embodiments, the virtual keyboard module 124 may configure a confirmation criteria for the selection mechanism, examples of which are described in connection with FIGS. 2-10 below. In some embodiments, the confirmation criteria may be configured or set in response to detecting user inputs, such as upon sensor interface 122 converting contents of the buffer to candidate input key data in response to detecting movement of a contact into an input key area. In other embodiments, the confirmation criteria may be configured in advance. In any case, the virtual keyboard module 124 may provide the confirmation criteria to the sensor interface 122, which may determine whether or not user inputs (e.g., physical contacts with the display surface 125 and/or trajectories of the contacts) satisfy the confirmation criteria. For example, the sensor interface 122 may use contacts and locations stored in the buffer, subsequent to the contact entering the input key area, to track user inputs. In a case that tracked inputs satisfy the confirmation criteria, sensor interface 122 can convert the contents of the buffer to input key data (e.g., input data representing the confirmation of the candidate input key data). In a case that the confirmation criteria are not satisfied, the sensor interface 122 can convert the contents of the buffer to cancel key data (e.g., input data representing cancellation of the candidate input key). Sensor interface 122 may then send the cancel key data to other modules at mobile device 120. The cancel key data may be used by the other modules to cancel the pre-selection state and cancel input of the candidate input key.

[0054] The clock 132 may be a system clock or a global clock. The clock 132 may provide raw time data to the sensor interface 122. In some embodiments, the sensor interface 122 may convert the raw time data to a timing or time stamp and associate the timing or time stamp with user inputs that occurred at the associated time. For example, responsive to the sensor interface 122 generating candidate input key data for a pre-selection state, a time stamp corresponding to the point in time when the contact moved into the input key area can be associated with the pre-selection state. Similarly, responsive to the sensor interface 122 generating input key data for confirming a candidate input key, a time stamp corresponding to the point in time when the buffer data is converted to input key data can be associated with the confirmation. Additionally, raw sensor signal data from sensors 121 can be associated with corresponding timings from raw time data from clock 132. As such, the sensor interface 122 may use the time data to track timings between tracked contacts.
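
A one-function sketch of the timing use described here: the time stamp of the pre-selection crossing is compared with the time stamp of the confirming crossing against a time threshold. The 300 ms default is an assumption, not a value from the disclosure.

```python
# Hypothetical time-threshold confirmation criterion.
def within_time_threshold(preselect_ts, confirm_ts, threshold_s=0.3):
    """True if the confirming crossing follows the pre-selection crossing
    within the threshold (timestamps in seconds)."""
    return (confirm_ts - preselect_ts) <= threshold_s
```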

[0055] Upon receiving a character code, application 126 can apply application data field logic (e.g., data typing rules, validation rules, etc.) to input text. For example, when receiving character codes for text, application 126 can input codes for letters (e.g., the letter "a") and numbers (e.g., a "2"). Accordingly, a user is given a visual indication on display device 110 of the character actually displayed at application 126, without having to alter their field of view to look at the mobile device 120.

[0056] FIGS. 2-4 depict various examples of virtual keyboards that may be generated by the virtual keyboard module 124. FIGS. 2-4 may illustrate the graphical representation of the virtual keyboard as generated at the display device 110 and/or the graphical representation of the virtual keyboard on the mobile device 120.

[0057] The display device 110 includes a frame 112 supporting one or more display screen(s) 111. The frame 112 also houses a local processing and data module 114, such as one or more hardware processors and memory (e.g., non-volatile memory, such as flash memory), both of which may be utilized to assist in the processing, buffering, caching, and storage of data and the generation of content on display screen(s) 111. The local processing and data module 114 may be operatively coupled to the mobile device 120 via communications link 130. Generally, the components of the display device 110, including the local processing and data module 114 and display screen(s) 111, interoperate to implement various embodiments for displaying the virtual keyboard to a user based on data from the virtual keyboard module 124.

[0058] The display device 110 may include one or more display screen(s) 111, and various mechanical and electronic components and systems to support the functioning of display screen(s) 111. The display screen(s) 111 may be coupled to a frame 112, which may be wearable by a user (not shown) and which is configured to position the display screen(s) 111 in front of the eyes of the user. The display screen(s) 111 may be one or more of an Organic Light-Emitting Diode (OLED) display, a Liquid Crystal Display (LCD), a laser display, and so on. While two display screens are shown in FIG. 1, other configurations are possible, for example, one, two, three, or more display screens. Through the display screen(s) 111, content may be displayed and presented in front of the user's eyes, such that displayed content can fill or partially fill the user's field of vision. The display device 110 may be a device capable of providing an XR experience. For example, the display device 110 may be a mixed reality display (MRD) and/or a virtual reality display (VRD), for example MR devices (e.g., Microsoft Hololens 1 and 2, Magic Leap One, Nreal Light, Oppo Air Glass, etc.) and/or VR glasses (e.g., HTC VIVE, Oculus Rift, SAMSUNG HMD Odyssey, etc.). In various embodiments, the display device 110 may be a head-mounted display (HMD) worn on the head of the wearer.

[0059] The local processing and data module 114 includes at least a rendering engine 116. The rendering engine 116 receives virtual keyboard image data from the virtual keyboard module 124 and converts the virtual keyboard image data into graphical data. The graphical data is then output to the display screen(s) 111 for generating a representation of the virtual keyboard displayed on the display screen(s) 111. In some embodiments, the virtual keyboard module 124 may be included in the local processing and data module 114. In this case, the functions of the virtual keyboard module 124 may be executed on the display device 110 instead of the mobile device 120. For example, the sensor interface 122 may communicate contact input data to the virtual keyboard module on the display device 110 via the wired and/or wireless connection. The virtual keyboard module may then operate as described above and output virtual keyboard data to the rendering engine 116.

[0060] The display device 110 may also include one or more outward-facing imaging systems 113 configured to observe the surroundings in the environment (e.g., a 3D space) around the wearer. For example, the display device 110 may comprise one or more outward-facing imaging systems disposed on the frame 112. In some embodiments, an outward-facing imaging system can be disposed at approximately a central portion of the frame 112 between the eyes of the user, as shown in FIG. 1. Alternatively or in combination, the outward-facing imaging system can be disposed on one or more sides of the frame 112 adjacent to one or both eyes of the user. While example arrangements of the outward-facing imaging system are provided above, other configurations are possible. For example, the outward-facing imaging system 113 may be positioned in any orientation or position relative to the display device 110.

[0061] In some embodiments, the outward-facing imaging system 113 captures an image of a portion of the world in front of the display device 110. The entire region available for viewing or imaging by a viewer may be referred to as the field of regard (FOR). In some implementations, the FOR may include substantially all of the solid angle around the display device 110 because the display may be moved about the environment to image objects surrounding the display (in front, in back, above, below, or on the sides of the wearer). The portion of the FOR in front of the display system may be referred to as the field of view (FOV), and the outward-facing imaging system 113 may be used to capture images of the FOV. Images obtained from the outward-facing imaging system 113 can be used as part of XR applications (e.g., as images onto which virtual objects are superimposed). For example, in AR and/or MR applications, the virtual keyboard may be superimposed over the images obtained from the outward-facing imaging system 113. In this way, the user may view the virtual keyboard without having to alter their field of view to look at display surface 125 of the mobile device 120.

[0062] In some implementations, the outward-facing imaging system 113 may be configured as a digital camera comprising an optical lens system and an image sensor. For example, light from the world in front of the display screen(s) 111 (e.g., from the FOV) may be focused by the lens of the outward-facing imaging system 113 onto the image sensor. In some embodiments, the outward-facing imaging system 113 may be configured to operate in the infrared (IR) spectrum, visible light spectrum, or in any other suitable wavelength range or range of wavelengths of electromagnetic radiation. In some embodiments, the imaging sensor may be configured as either a CMOS (complementary metal-oxide-semiconductor) or CCD (charge-coupled device) sensor. In some embodiments, the image sensor may be configured to detect light in the IR spectrum, visible light spectrum, or in any other suitable wavelength range or range of wavelengths of electromagnetic radiation.

[0063] In some embodiments, the display device 110 and/or mobile device 120 may also include microphones, speakers, actuators, inertial measurement units (IMUs), accelerometers, compasses, global positioning system (GPS) units, radio devices, and/or gyroscopes. The data at the local processing and data module 114 may include data a) captured from sensors on the display device 110 (which may be, e.g., operatively coupled to the frame 112), such as image capture devices (e.g., outward-facing imaging system 113), microphones, IMUs, accelerometers, compasses, GPS units, radio devices, and/or gyroscopes; and/or b) acquired from sensors at the mobile device 120 (e.g., image capture devices, microphones, IMUs, accelerometers, compasses, GPS units, radio devices, and/or gyroscopes) and/or processed by the mobile device 120.

[0064] While an example device is described herein, it will be understood that the methods and devices disclosed herein are not limited to MR and/or AR devices or head mounted devices. Other configurations are possible, for example, applications in VR devices.

[0065] FIG. 2 illustrates an example virtual keyboard layout 200 according to various embodiments of the present disclosure. The layout 200 comprises one or more keyboard areas 210 and 220, each comprising a plurality of sub-areas generated within a display area. The sub-areas comprise at least one operation area and a plurality of virtual key areas generated within the operation area. A validation boundary may be configured at the interface between the operation area and each of the virtual key areas. In various embodiments, the layout 200 may be generated and maintained by the virtual keyboard module 124 of FIG. 1.

[0066] For example, the layout 200 includes a first keyboard area 210 and a second keyboard area 220 spaced apart from each other within the display area 201. Keyboard area 210 comprises a first plurality of virtual key areas 216a-216n (collectively referred to herein as first virtual key areas 216) that are positioned within a first operation area 212. Each virtual key area 216 includes a border; for example, virtual key area 216a comprises border 218a. In this example, the border 218a comprises a box or rectangular shape that includes four edges 218a-1 through 218a-4. Similarly, virtual key areas 216b through 216n each comprise a border 218b through 218n, respectively (collectively referred to as borders 218).

[0067] Similarly, second keyboard area 220 comprises a second plurality of virtual key areas 226a-226n (collectively referred to herein as second virtual key areas 226) that are within a second operation area 222. Each virtual key area 226 includes a border; for example, virtual key area 226a comprises border 228a. In this example, the border 228a comprises a box or rectangular shape that includes four edges 228a-1 through 228a-4. Similarly, virtual key areas 226b through 226n each comprise a border 228b through 228n, respectively (collectively referred to as borders 228).
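
For illustration, the layout just described (keyboard areas containing an operation area and virtual key areas whose rectangular borders have four edges) might be represented with data structures along the following lines; the field names are assumptions for this sketch and are not part of the disclosure.

```python
# Hypothetical data structures for the layout of FIG. 2.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Edge:
    name: str                   # e.g., "top", "bottom", "left", or "right"
    start: Tuple[float, float]  # (x, y) of one endpoint
    end: Tuple[float, float]    # (x, y) of the other endpoint


@dataclass
class VirtualKeyArea:
    key: str                    # assigned character or command input key
    edges: List[Edge]           # the four edges forming the area's border


@dataclass
class KeyboardArea:
    operation_area: Tuple[float, float, float, float]  # (x, y, width, height)
    key_areas: List[VirtualKeyArea]
```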

[0068] In the illustrative example of FIG. 2, the first and second keyboard areas 210 and 220 comprise twenty-six character key areas and multiple command keys (e.g., space, backspace, and return in this example) mapped onto the first and second virtual key areas 216 and 226. Locations for each character key in the respective keyboard may be determined based on a traditional QWERTY layout in this illustrative example, split between the first and second keyboard areas 210 and 220. Accordingly, in various embodiments and with reference to FIG. 1 above, the virtual keyboard may assign a character or command input to a virtual key area 216 or 226 to mimic a traditional QWERTY layout and key placement thereon.

[0069] As such, embodiments herein allow users to rely on familiarity with the traditional QWERTY layout so as to optimize usage of the virtual keyboard and reduce the learning curve of new text input methods. However, implementations herein are not limited to only the key placement as shown in FIG. 2. Character and command input positions relative to each other may be adjusted and customized as desired by a user. Thus, the character and command inputs may be assigned to any virtual key area 216 or 226 as desired. Furthermore, the shapes and sizes of the virtual key areas may also be adjustable. For example, fewer or more than twenty-six virtual key areas may be used and assigned as desired. Furthermore, while the traditional English QWERTY layout is discussed herein, the embodiments herein may be used for any language as desired.

[0070] As noted above, the layout 200 may be generated and maintained by the virtual keyboard module 124 of FIG. 1. For example, virtual keyboard module 124 may generate image data of the layout 200 for rendering the virtual keyboard on the display device 110 (e.g., via rendering engine 116). In this case, display area 201 may represent edges of an FOV displayed or viewed on the display screen(s) 111. The user may then visually perceive the virtual keyboard while using the display device 110 and need not alter their gaze from the intended FOV. In this case, the first and second keyboard areas 210 and 220 may be superimposed over a background 205, which may be the image or real-world environment contained within the FOV of the viewer as viewed through the display device 110. For example, in MR and AR applications the background 205 may be an image of the FOV of the surroundings as seen through the display device 110 and/or as captured by the outward-facing imaging system 113. In VR applications, the background 205 may be a generated and rendered image over which the layout 200 is superimposed.

[0071] Additionally, the virtual keyboard module 124 may maintain the layout 200 at the mobile device 120, which may be used for character and command input selection to facilitate text input while viewing the FOV of the display device 110. For example, the display surface 125 may be converted to an operation interface and areas of the display surface 125 assigned to sub-areas of the layout 200. For example, the display area 201 may correspond to the display surface 125, and sub-areas of the display surface 125 may be assigned to correspond to the first operation area 212 and the areas surrounding it assigned to the virtual key areas 216, with the validation boundary 218 therebetween. Similarly, a distinct area of the display surface 125 may be assigned as the second keyboard area 220 in a similar manner. The relative position of areas assigned to each respective keyboard may be pre-determined and based on the screen size of the display surface 125 and/or the user's hand size.

[0072] In another example, the position of areas assigned to the first and second keyboard areas 210 and 220 on the display surface 125 may be based on a first contact position by the user (e.g., the initial point of contact by the user's finger, thumb, or other input device). For example, a first contact position may be registered by the virtual keyboard as a center of either the first or second keyboard and the virtual keyboard generated based on the registered position. The display surface 125 may be divided into a left half and a right half. A first contact position detected on the left may result in registering a center position of the first keyboard area 210, while a first contact position detected on the right may result in registering a center position for the second keyboard area 220.
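
As one way to realize the first-contact registration described in paragraph [0072], the sketch below anchors the left or right keyboard area at the initial point of contact depending on which half of the display surface was touched. The function name and the 1080-pixel surface width in the usage example are assumptions for illustration:

def register_keyboard_center(contact_x, contact_y, surface_width):
    """Return which keyboard area the first contact anchors, and its registered center."""
    if contact_x < surface_width / 2:
        return "first_keyboard_area", (contact_x, contact_y)    # e.g., keyboard area 210
    return "second_keyboard_area", (contact_x, contact_y)       # e.g., keyboard area 220

# Example: a first touch at (180, 620) on a surface 1080 pixels wide anchors the
# first (left) keyboard area around that point.
area, center = register_keyboard_center(180, 620, surface_width=1080)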

[0073] In some examples, as described above, the virtual keyboard module 124 outputs the virtual keyboard data to the graphics engine 128. In this case, the graphics engine 128 may render the layout 200 onto the display surface 125 over background 205. The background 205 may be a solid color background or other background as desired. In other examples, the actual layout 200 does not need to be displayed on the display screen.

[0074] First and second keyboard areas 210 and 220 also include graphical icons 215 and 225, respectively, displayed thereon. The graphical icons 215 and 225 are graphical representations of a position of user input on the virtual keyboard. Thus, each graphical icon 215 and 225 indicates a position of a user input relative to each keyboard area 210 and 220, respectively. For example, graphical icon 215 represents a position of a user input detected to contact an area corresponding to the first keyboard area 210 and the graphical icon 225 represents a position of a user input on an area corresponding to the second keyboard area 220. As shown in FIG. 2, the graphical icons 215 and 225 are circular icons having a solid color (e.g., black, blue, yellow, green, etc.). However, graphical icons 215 and 225 may be any shape and/or fill pattern as desired.

[0075] With reference to FIG. 1, for example, sensors 121 of the display surface 125 may detect a physical contact with the display surface by a physical object. The contact is provided to sensor interface 122, which provides contact input data including location on the display surface 125 to the virtual keyboard module 124. The virtual keyboard module 124 generates virtual keyboard data including the contact location data (e.g., a location on the display surface 125), which is used to render the graphical icon. The location may be provided as coordinates on a coordinate system based on the display surface 125. The graphical icon is then displayed in the display device 110 (and optionally on the mobile device 120) at a location relative to the virtual keyboard based on the location on the display surface 125. As long as physical and direct contact is maintained on the display surface 125, a graphical icon corresponding to the contact is displayed.
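
Paragraph [0075] describes mapping a contact location on the display surface 125 to a position for the graphical icon relative to the rendered virtual keyboard. A minimal sketch of one such mapping, assuming a simple proportional transform between two rectangles and hypothetical function and parameter names, is:

def surface_to_keyboard_coords(contact, surface_rect, keyboard_rect):
    """Map a contact (x, y) inside surface_rect proportionally into keyboard_rect.

    Each rect is (x, y, width, height); contact is (x, y) on the display surface.
    """
    cx, cy = contact
    sx, sy, sw, sh = surface_rect
    kx, ky, kw, kh = keyboard_rect
    u = (cx - sx) / sw          # normalized horizontal position, 0..1
    v = (cy - sy) / sh          # normalized vertical position, 0..1
    return (kx + u * kw, ky + v * kh)

# Example: a touch at (270, 960) on a 1080x1920 surface maps to the corresponding
# point inside a keyboard area rendered at (100, 400) with size 400x300.
icon_pos = surface_to_keyboard_coords((270, 960), (0, 0, 1080, 1920), (100, 400, 400, 300))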

[0076] In various embodiments, graphical icon 215 may be a result of a user's thumb, finger, or input device contacting the left half of the display surface 125. Similarly, graphical icon 225 may be a result of a user's thumb, finger, or input device contacting the right half of the display surface 125. In this way, the user may utilize the mobile device 120 using both hands and select character and command input keys using both hands simultaneously, while being able to perceive the location of each contact relative to the sub-areas of each keyboard area 210 and 220 via the display device 110. At the same time, the user can operate the mobile device 120 to input text without a need for the user to gaze onto the mobile device and avert their gaze from the intended FOV.

[0077] FIGS. 3 and 4 illustrate examples of different virtual keyboard layouts according to various embodiments of the present disclosure.

[0078] FIG. 3 illustrates a virtual keyboard layout 300, which is substantially the same as virtual keyboard layout 200 except as provided herein. For example, virtual keyboard layout 300 comprises one keyboard area 310 as opposed to two. The keyboard area 310 may be substantially similar to virtual keyboard area 210, which includes a plurality of virtual keyboard areas 316a-n and operational area 312 that may be substantially similar to virtual keyboard areas 216 and operational area 212, respectively. Each virtual keyboard area 316a-n comprises border 318a-n, which are similar to the borders 218a-n described above (e.g., each border 318 may comprise multiple edges, such as border 318a being made up of edges 318a-1 through 318a-4). The virtual keyboard layout 300 provides for all twenty-six character key areas and command keys within a single keyboard area 310. While select command keys are shown in FIG. 3, other command keys may be mapped as desired. For example, a full keyboard layout may be provided in the single keyboard area 310. The key areas provided may be based on the size of the display surface 125 so as to provide a workable area (e.g., fewer key areas for a cell phone display surface than provided for a tablet display surface).

[0079] FIG. 4 illustrates another virtual keyboard layout 400, which is substantially the same as virtual keyboard layout 200 except that the virtual key areas 416a-n comprise a circular or oval shape. Thus, for example, each virtual key area 416a-n and 426a-n comprises borders 418a-n and 428a-n, respectively, each of which consists of a single edge.

[0080] While specific examples of virtual keyboard layouts are provided herein, other implementations are possible within the scope of the present disclosure. For example, virtual key areas may comprise any desired shape (e.g., triangular, pentagonal, hexagonal, etc.) and are not limited to the examples provided herein.

[0081] Referring back to FIG. 2, each border 218a-n and 228a-n may be configured as a validation boundary for use in a character or command input key selection mechanism. In some implementations, each border 218a-n (e.g., all edges of a given virtual key area) may be validation boundaries.

[0082] For example, in response to detecting physical contact with the display surface 125 in an area corresponding to the first and second keyboard areas 210 and 220, graphical icons 215 and 225 are generated. Upon initial contact, the graphical icons 215 and 225 may be in an idle state. Idle state may refer to a physical contact that does not move, or may refer to a physical contact that remains within the operation area of the respective keyboard area. For example, the graphical icon 215 may not change position, and may represent an idle state of the contact corresponding to icon 215. As another example, the contact may be in an idle state in the case that the graphical icon 215 remains within the operation area 212, as shown in FIG. 2.

[0083] To select an input character or command (e.g., select character input key "T"), the user may move a physical contact (referred to herein as a contact point) from a position P1 in the operation area 212 to position P2 in the virtual key area for character key "T" and then to position P3 back into the operation area 212, all while maintaining continuous and uninterrupted contact with the display surface 125. At position P1, graphical icon 215 is at an idle state or initial contact position by a physical contact with the display surface 125. To select the character input key "T" assigned to the virtual key area, the user moves the contact point from position P1 to position P2 in the virtual key area for the character input key "T". This change in position includes crossing a border of the virtual key area, as shown by the arrow extending from P1 to P2 along a first trajectory. In this example, responsive to crossing the border, the virtual keyboard may configure or set the border as a validation boundary. Thus, when the virtual keyboard detects that the contact point is moved from operation area 212 into the virtual key area for the character input key "T", the virtual keyboard triggers a pre-selection state and designates the character input key "T" as a candidate input key. As such, the first trajectory may be referred to as a pre-selection trajectory. In some embodiments, the pre-selection state and candidate input key designation may be triggered in response to detecting that the contact point crossed the validation boundary.

[0084] The virtual keyboard may then set a confirmation criteria as crossing the validation boundary from the virtual key area back into the operation area 212. Thus, subsequently to moving the contact point into the virtual key area of input key "T", the user may move the contact point from the virtual key area for the character input key "T" back into the operation area 212 by crossing the validation boundary. When the virtual keyboard detects this movement, the virtual keyboard determines that the confirmation criteria is satisfied, confirms the candidate input key, and enters the selected input key for text entry. For example, as shown in FIG. 2, the user moves the contact point from position P2 to position P3 in the operation area 212 via the border (set as a validation boundary), as shown by the arrow extending between P2 and P3 along a second trajectory. Detecting this movement satisfies the confirmation criteria and triggers the virtual keyboard to confirm the selection of input key "T" as input. In some embodiments, the confirmation may be triggered in response to detecting that the contact point crossed the validation boundary. As such, the second trajectory may be referred to as a confirmation trajectory. Upon the contact point returning to the operation area 212, the idle state may be triggered, and a subsequent input key may be selected by moving the contact point into the same or another virtual key area.

[0085] However, in some implementations of a virtual keyboard layout, such as that shown in FIG. 2, mistouches may become an issue. Mistouches may refer to a scenario where a user inadvertently inputs an unintended character or command input key, for example, through inadvertently selecting an unintended input key. FIGS. 5-7 illustrate various example input key mistouch scenarios that may result in a selection of an unintended input key.
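
A minimal sketch of the basic two-trajectory mechanism described in paragraphs [0083]-[0084] follows. It is not the disclosed implementation; the SelectionTracker class, the key_at interface, and the use of successive contact positions are assumptions made for illustration:

class SelectionTracker:
    """Tracks one contact point against one keyboard area."""

    def __init__(self, keyboard):
        self.keyboard = keyboard      # object exposing key_at(x, y) -> key label or None
        self.candidate = None         # key label currently in a pre-selection state, if any

    def on_contact_moved(self, x, y):
        """Feed successive contact positions; return a confirmed key label or None."""
        key = self.keyboard.key_at(x, y)
        if self.candidate is None and key is not None:
            # First trajectory: the contact crossed a border from the operation area
            # into a virtual key area, which triggers the pre-selection state.
            self.candidate = key
            return None
        if self.candidate is not None and key is None:
            # Second trajectory: the contact crossed back into the operation area,
            # which satisfies the confirmation criteria and confirms the candidate.
            confirmed, self.candidate = self.candidate, None
            return confirmed
        return None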

[0086] FIG. 5 illustrates an example scenario where a user moves a physical contact through an unintended input key, while maintaining continuous and uninterrupted contact, thereby satisfying the selection mechanism as described above in connection with FIG. 2. FIG. 5 depicts the virtual keyboard layout 200 described above, where each border 218a is set as a validation boundary. That is, the entire perimeter of the virtual key areas 216 and 226 may be set as validation boundaries. Under this configuration, a continuous, physical contact may cross a first edge of a virtual key area corresponding to an unintended input key, which triggers a pre-selection state that designates the unintended input key as a candidate input key. The continuous, physical contact then continues through the virtual key area of unintended input key and crosses a second edge of the virtual key area, which triggers a confirmation of the selection of the unintended key. The selection of the unintended key is a mistouch.

[0087] For example, as shown in FIG. 5, a user may wish to select character input key "T" after selecting character input key "C." To do this, the user may swipe (e.g., apply a continuous, physical contact) from the virtual key area 216n corresponding to input key "C" toward virtual key area 516b corresponding to input key "T", which passes through virtual key area 516a corresponding to "F", as shown in FIG. 5. Thus, the contact crosses edge 518a-1, thereby triggering a pre-selection state and designating "F" as a candidate input key. Then, the contact crosses edge 518a-2 as the swipe exits the virtual key area 516a, which confirms the candidate input key and selects input key "F". The selection of input key "F" is a mistouch as the user intended to swipe from "C" to "T."

[0088] FIG. 6 illustrates another example scenario where a user moves a physical contact toward a first input key, realizes the next input should be a different input key instead of the first, and changes direction to move to the different input key, all while maintaining continuous and uninterrupted contact. Similar to FIG. 5, FIG. 6 depicts the virtual keyboard layout 200 described above, where each border 218a is set as a validation boundary. In some situations, the movement toward the first input key may cross an edge of an intermediate input key, which triggers a pre-selection state for the intermediate input key. The change in direction may result in confirming the pre-selection state, thereby satisfying the selection mechanism as described above. The change in direction may be due to a sudden realization that a different input key is to be the next input. The selection of the intermediate input key is a mistouch.

[0089] For example, as shown in FIG. 6, a user may confirm selection of input key "C" and then swipe (e.g., apply a continuous, physical contact) along a trajectory from point P5 toward character input key "T". In this example, the swipe crosses edge 618a-1 of virtual key area 616 corresponding to intermediate character input key "F", thereby triggering a pre-selection state and designating "F" as a candidate input key. Then at point P6, the user may suddenly realize that the intended input key is "B" as opposed to "T" and change direction toward character input key "B". The swipe may then cross the edge 618a-1 again, which confirms the candidate input key and selects input key "F". The selection of input key "F" is a mistouch as the user intended to swipe from "C" to "B."

[0090] FIG. 7 illustrates yet another example scenario where a physical contact unintentionally enters a virtual key area, while maintaining continuous and uninterrupted contact, and unintentionally selects the corresponding input key. For example, after confirming a selection of a character input key and while an appendage of a second hand is actively selecting and confirming input keys, the user may unconsciously maintain physical contact between an appendage of a first hand and the display surface. The unconscious contact may unintentionally move into a virtual key area of an unintended input key. Unintended input keys may be selected, for example, where unconscious movement of the appendage of the first hand satisfies the selection mechanism as described above (e.g., moving into and out of virtual key areas).

[0091] For example, FIG. 7 depicts the virtual keyboard layout 200 described above, where each border 218a is set as a validation boundary. The user may complete selection and confirmation of a first input key (e.g., input key "C"). Then, while moving icon 225 on the keyboard area 220 to select one or more intended input keys, the user may unconsciously initiate or maintain a physical contact with the keyboard area 210 shown as icon 215. The user may move icon 215 unintentionally into virtual key area 716 to preselect input key "F" (as shown by the hatched virtual key area). The icon 215 may linger within virtual key area 716 while one or more intended input key selection operations are performed on the keyboard area 220. At some time later, the user may then unconsciously (or consciously) move icon 215 out of virtual key area 716, thereby selecting unintended input key "F".

[0092] The embodiments disclosed herein provide for an improved selection mechanism to avoid and mitigate mistouches, such as those from the above scenarios, that result in selection of unintended input keys. By mitigating the mistouches, text input accuracy and user acceptance of the selection mechanism may be increased. FIGS. 8-10 illustrate example embodiments of the selection mechanism, according to embodiments disclosed herein, that may be used to mitigate mistouches or selections of unintended input keys.

[0093] For example, FIG. 8 illustrates examples of confirmation criteria implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout 200 of FIG. 2. While the embodiments of the selection mechanism shown in FIG. 8 are described with reference to the example layout 200 of FIG. 2, the embodiments herein are equally applicable to other layouts (e.g., layout 300 of FIG. 3, layout 400 of FIG. 4, etc.).

[0094] As alluded to above in connection with FIG. 1, the virtual keyboard may configure confirmation criteria that, if satisfied, trigger confirmation of a pre-selection state.

One example of a confirmation criteria is a confirmation time condition (also referred to as a cool-down time condition) for confirming a pre-selection state of a candidate input key. The confirmation time condition may be a time threshold (e.g., an amount of time) within which a candidate input key may be confirmed by satisfying the selection mechanism. If the selection mechanism is not satisfied within the time threshold, the pre-selection state may be cancelled.

[0095] For example, to select an input character or command (e.g., select character input key "F"), the user may move a physical contact from position P7 in the operation area 212 to position P8 in the virtual key area 816 for character key "F" and then to position P9 back into the operation area 212, all while maintaining continuous and uninterrupted contact with the display surface 125. As described above, to select the character input key "F", the user moves the contact point from position P7 to position P8 in virtual key area 816 along a first trajectory (or pre-selection trajectory). The virtual keyboard detects that the contact point is moved from operation area 212 into the virtual key area 816 by crossing edge 818a-1 and triggers a pre-selection state that designates the character input key "F" as a candidate input key. In some embodiments, responsive to detecting the contact crossed edge 818a-1 into virtual key area 816 (e.g., triggering pre-selection state), clock 132 may be initialized and a timer started. In another example, a time stamp of the time at which the contact crossed edge 818a-1 may be associated with the pre-selection state.

[0096] In either case, subsequently the user may move the contact point from the virtual key area 816 back into the operation area 212 by crossing the validation boundary (e.g., edge 818a-1 in this example) along a second trajectory (or confirmation trajectory). In some embodiments, responsive to detecting the contact crossed edge 818a-1 into operation area 212, the clock 132 may be stopped and the amount of time between initializing and stopping the clock 132 may be recorded. In another example, a time stamp of the time at which the contact crossed edge 818a-1 back into operation area 212 may be associated with the movement exiting the virtual key area 816.

[0097] Detecting this movement may trigger the virtual keyboard to confirm the selection of input key "F" as input, if the confirmation time condition is satisfied. For example, confirmation of the selection is triggered if the virtual keyboard detects the contact crossed over the edge 818a-1 within the time threshold; otherwise the pre-selection of input key "F" is cancelled. For example, the amount of time between entering and exiting the virtual key area 816 is compared to the time threshold. The amount of time may be based on a recorded amount of time based on starting and stopping a timer by clock 132. In another example, the amount of time may be determined as the difference between the time stamp associated with exiting the virtual key area 816 and the time stamp associated with the pre-selection state. In either case, responsive to the amount of time being less than or equal to the time threshold, the selection of input key "F" is confirmed. Otherwise, the pre-selection of input key "F" is cancelled once the time threshold expires (e.g., the input key cools itself down).
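
The following is a minimal sketch of the confirmation time (cool-down) condition of paragraphs [0095]-[0097]. The class name, the use of timestamps from time.monotonic() in place of a dedicated clock such as clock 132, and the 300-millisecond default threshold are assumptions for illustration only:

import time

class TimedSelectionTracker:
    def __init__(self, keyboard, time_threshold_s=0.3):
        self.keyboard = keyboard              # exposes key_at(x, y) -> key label or None
        self.time_threshold_s = time_threshold_s
        self.candidate = None
        self.entered_at = 0.0                 # timestamp of crossing into the key area

    def on_contact_moved(self, x, y):
        """Feed successive contact positions; return a confirmed key label or None."""
        key = self.keyboard.key_at(x, y)
        now = time.monotonic()
        if self.candidate is None and key is not None:
            self.candidate = key              # pre-selection state triggered
            self.entered_at = now             # analogous to starting a timer
            return None
        if self.candidate is not None and key is None:
            elapsed = now - self.entered_at
            confirmed, self.candidate = self.candidate, None
            # Confirm only if the exit occurred within the time threshold; otherwise
            # the key has "cooled down" and the pre-selection is dropped.
            return confirmed if elapsed <= self.time_threshold_s else None
        return None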

[0098] The time threshold may be set as desired to provide an adequate amount of time to confirm a selected input key, while being short enough so as to cancel pre-selection states of unintended input keys. For example, the time threshold in some implementations may be 1 second or less, 500 milliseconds or less, 200 milliseconds or less, etc. The time threshold may be adjustable, for example, such that the time threshold may be shortened as the user's comfort and accuracy with the selection mechanism increases. That is, as the user becomes more familiar with the selection mechanism, the length of time needed to confirm pre-selection states may be lessened, which may result in fewer confirmations of unintended input keys.

[0099] Embodiments herein utilizing the confirmation time condition may reduce mistouches stemming from unintentionally or unconsciously contacting the display surface 125, such as the mistouches described in connection with FIG. 7. For example, when a user leaves the icon 215 inside a key due to unconsciously contacting the display surface, the time threshold may be set such that it is essentially impossible to meet the confirmation time condition to confirm the unintended input key. The unintended input key will likely cool itself down due to the contact lingering for an amount of time that exceeds the time threshold. The unconscious contact usually lasts at least the amount of time the user takes to successfully locate and swipe to another key, and the time threshold may be set to be shorter than the time to do so.

[00100] Furthermore, embodiments utilizing the confirmation time condition may reduce the occurrence of mistouches stemming from direction change mistouches (e.g., as described in connection with FIG. 6). For example, as described above, the time threshold requires movement of a physical contact to confirm a pre-selection state within the time threshold. When a user realizes that a current contact position is within a virtual key area for an unintended input key (e.g., input key "F" in FIG. 6) or the trajectory is heading toward an unintended input key (e.g., input key "T" in FIG. 6), the user must perform decision making (e.g., recognizing the mistake and determining the intended input key) and trajectory replanning (e.g., planning of contact movement to reach the intended input key, such as input key "B" in FIG. 6). The decision making and trajectory replanning will cause a delay that is likely to be longer than the time threshold. Furthermore, the time threshold may be set so as to be less than this delay, which may be determined empirically based on studies of the keyboard layout over a population of users. Since the time threshold may be set to less than the delay, when the user changes the movement trajectory and moves the physical contact out of the virtual key area for input key "F", the pre-selection state of input key "F" will be cancelled and confirmation of the input key does not occur.

[00101] In some embodiments, a visual marker may be generated and overlaid on the virtual key area in which a physical contact is detected. For example, the virtual keyboard may detect a contact within a virtual key area and the graphics engine 128 and/or rendering engine 116 may generate a marker (shown as a grey coloring applied to virtual key area 816 of FIG. 8) that is overlaid on the detected virtual key area. The marker may be presented to the user on the displays 111 so that the user may recognize the virtual key area corresponding to the physical contact. This may be in combination with using the icons 215 and/or 225. In some embodiments, the marker may be generated in response to detecting that the physical contact crossed the edge. The marker may comprise one or more of, for example, applying a color to the virtual key area, changing the brightness, changing the font or font size of the input key, changing the line width of the shape of the virtual key area, changing the shape of the virtual key area, or causing the virtual key area to pulsate (e.g., rotational wiggle, pulsing from larger to smaller size, flashing colors, etc.), among others.

[00102] Additionally, the marker may be used to visually represent the amount of time remaining until the time threshold expires. For example, upon first crossing an edge, the virtual key area may change from a first color to a second color. As the amount of time that the physical contact remains in the virtual key area increments closer to the time threshold, the color may alternately switch between the first and second colors at an increasing rate until the time threshold is reached. As another example, the brightness of the virtual key area may be increased upon first crossing the edge, and as the time increases the brightness may be lessened until the time threshold is reached to indicate a cooling-off of the input key. As another example, a timer may be generated that displays the amount of time. The timer may be in the form of a numerical countdown and/or a graphical icon that is either shortened or lengthened as the amount of time in the virtual key area approaches the time threshold. While certain examples are provided herein, it will be appreciated that other implementations are possible within the scope of the embodiments disclosed herein.
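
As a small illustration of the brightness example in paragraph [00102], one possible (hypothetical) mapping from elapsed time to marker brightness, fading toward zero as the key "cools off," is sketched below; the function name and linear falloff are assumptions:

def marker_brightness(elapsed_s, time_threshold_s, max_brightness=1.0):
    """Brightness falls linearly from max to 0 as elapsed time approaches the threshold."""
    remaining = max(0.0, time_threshold_s - elapsed_s)
    return max_brightness * (remaining / time_threshold_s)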

[00103] FIG. 8 also illustrates another mistouch mitigation technique, whereby configuring the confirmation criteria may include setting a validation boundary based on detecting a physical contact that crosses edge 818a-1. In the illustrative example shown in FIG. 8, responsive to detecting the physical contact crossed edge 818a-1 of a virtual key area 816, the virtual keyboard module may set the edge 818a-1 as the validation boundary. In this case, the other edges of border 818 of the virtual key area 816 do not operate as validation boundaries. Thus, confirmation of a pre-selection state may only occur upon detecting a physical contact that crosses back over the same edge 818a-1. If the contact crosses another edge, then the pre-selection state is cancelled, thereby cancelling designation of the input key as a candidate input key.
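
A minimal sketch of the same-edge restriction of paragraph [00103], assuming rectangular key areas and hypothetical function names, is shown below. Diagonal crossings resolve to the first matching edge in this simplified check:

def crossed_edge(rect, prev, curr):
    """Return which edge of rect ("left", "right", "top", "bottom") the segment prev->curr
    crosses, assuming exactly one of the two points lies inside the rectangle."""
    x, y, w, h = rect
    px, py = prev
    cx, cy = curr
    if (px < x) != (cx < x):
        return "left"
    if (px > x + w) != (cx > x + w):
        return "right"
    if (py < y) != (cy < y):
        return "top"
    if (py > y + h) != (cy > y + h):
        return "bottom"
    return None

def confirm_same_edge(rect, entry_prev, entry_curr, exit_prev, exit_curr):
    """True only if the exit trajectory crosses the same edge as the entry trajectory."""
    entry_edge = crossed_edge(rect, entry_prev, entry_curr)   # this edge becomes the validation boundary
    exit_edge = crossed_edge(rect, exit_prev, exit_curr)
    return entry_edge is not None and entry_edge == exit_edge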

[00104] By restricting the validation boundary to the one edge used for triggering the pre-selection state and confirming the candidate input key (e.g., satisfying the selection mechanism), embodiments herein may be used to mitigate mistouches due to a physical contact moving through an unintended input key, such as that described in connection with FIG. 5. For example, by setting the validation boundary as described in connection with FIG. 8, a user can only confirm the candidate input key by swiping out of the virtual key area through the same edge that the swipe entered. Therefore, swiping through a key will not satisfy the selection mechanism and will not confirm an unintended input key.

[00105] Some embodiments of the selection mechanism configure a validation boundary based on any direction from which a pre-selection trajectory enters a virtual key area, as shown in FIG. 8. For example, FIG. 8 illustrates a first validation boundary that is set based on detecting the pre-selection trajectory, having an upward direction component, crossing edge 818a-1 in keyboard area 210, and a second validation boundary that is set based on detecting a pre-selection trajectory, including a downward direction component, of a contact (e.g., icon 225) crossing edge 828a-1 in keyboard area 220. As in keyboard area 210 above, responsive to detecting the physical contact crossed edge 828a-1 of a virtual key area 826, the virtual keyboard module configures the edge 828a-1 (e.g., the upper edge of the virtual key area 826) as the validation boundary, while the other edges are not configured as validation boundaries. Thus, in the keyboard area 220 the upper edge 828a-1 of the virtual key area 826 is set as the second validation boundary with a pre-selection trajectory having a downward direction component, while in the keyboard area 210 the lower edge 818a-1 of the virtual key area 816 is set as the first validation boundary with a first trajectory having an upward direction component.

[00106] In some embodiments, a confirmation criteria may include requiring trajectories to include a designated direction component. That is, some embodiments of the selection mechanisms disclosed herein may restrict setting the validation boundary to the upper edge only, a left edge only, a right edge only, or a lower edge only. While trading off some flexibility in how a contact may enter a virtual key area to trigger a pre-selection state, these embodiments may offer improved accuracy and reduced mistouches.

[00107] For example, FIG. 9 illustrates examples of confirmation criteria implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout 200 of FIG. 2. FIG. 9 depicts a selection mechanism, where a confirmation criterion is configured by designating a direction component for triggering the pre-selection state. Accordingly, the selection mechanism of FIG. 9 restricts the direction of pre-selection trajectories to the designated direction component.

[00108] For example, the virtual keyboard designates at least one direction component for pre-selection trajectories. For a physical contact movement that crosses a border (e.g., border 218 or 228) of a virtual key area to trigger a pre-selection state, the trajectory of the contact must include the at least one designated direction component. In the illustrative example of FIG. 9, the downward direction component is designated to restrict pre-selection trajectories, such that only trajectories that include a downward direction component, while crossing a border of a virtual key area, can trigger pre-selection states. While the trajectories may include horizontal components, the trajectories must include a downward component in this example.

[00109] In some embodiments, confirmation trajectories may be restricted to a direction component that is the reverse or opposite to that of the pre-selection trajectories. That is, for a physical contact movement that crosses a border (e.g., border 218 or 228) of a virtual key area to trigger confirmation of the pre-selection state, the trajectory of the contact must include the direction component that is the reverse or opposite of the designated direction component. For example, if pre-selection trajectories are restricted to the downward direction component as in the example of FIG. 9, the confirmation trajectories may be restricted to the upward direction component.
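
A minimal sketch of the direction-component restriction of paragraphs [00108]-[00109] follows: a pre-selection trajectory must include a designated component (downward in FIG. 9's example), and the confirmation trajectory must include the opposite component. The function names and the sign convention (y increasing downward, as is common for touch coordinates) are assumptions:

def has_component(start, end, component):
    """Check whether the displacement from start to end includes the named component."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return {
        "down": dy > 0,
        "up": dy < 0,
        "left_to_right": dx > 0,
        "right_to_left": dx < 0,
    }[component]

def pre_selection_allowed(start, end):
    # Only trajectories including a downward component may trigger a pre-selection state.
    return has_component(start, end, "down")

def confirmation_allowed(start, end):
    # Confirmation trajectories are restricted to the opposite (upward) component.
    return has_component(start, end, "up")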

[00110] In the illustrative example of FIG. 9, trajectories 902, 904, and 906 are provided that each correspond to a movement of a physical contact from operation area 212 or 222 into a virtual key area for input keys "D", "H", and "M", respectively. Each trajectory 902, 904, and 906 includes a downward component, such that the first trajectory (or pre-selection trajectory) into the virtual key areas satisfies the selection mechanism for triggering a pre-selection state for the corresponding input key.

[00111] Due to the designation of a direction component (e.g., directional restriction of trajectories) and the fact that confirmation is triggered by the same edge that was crossed to enter the virtual key area, mistouches due to passing through unintended input keys (e.g., FIG. 5) can be avoided. For example, in the case of input key "D", the trajectory 902 passes through input key "E," while the exit trajectory 908 passes through input key "R". Passing through input key "E" may trigger a pre-selection state, but this state is not confirmed because the movement exits via a different edge and/or the confirmation trajectory does not include an upward component. Passing through input key "R" does not trigger a pre-selection state because the trajectory 908 does not include a downward component, only upward and horizontal components. Furthermore, embodiments including the directional restriction of trajectories may mitigate direction change mistouches (e.g., FIG. 6). For example, after confirming input key "D", if the contact enters "R" from the lower edge of the virtual key area and the user decides to change course to input key "F" (or any other key), exiting the virtual key area for input key "R" will not confirm a selection since a pre-selection was not triggered as noted above. Similarly, mistouches of input keys "Y", "U", and "J" can be avoided.

[00112] While FIG. 9 illustrates designation of the downward component for triggering a pre-selection state, embodiments herein are not so limited. That is, other direction components may be designated as desired. For example, the upward direction component may be used instead of the downward direction component, or a right-to-left or left-to-right horizontal direction component may be used. In some implementations, multiple direction components may be designated to further restrict and mitigate mistouches. For example, the downward direction component and the left-to-right horizontal direction component (as shown in FIG. 9) may both be designated such that a trajectory into a virtual key area must include both direction components.

[00113] In some embodiments, instead of or in combination with designating a direction component, the virtual keyboard may restrict validation boundaries to a designated edge. For example, in the case of FIG. 9, the upper edge of each virtual key area may be designated as validation boundaries, thereby restricting the direction into which a physical contact may enter a virtual key area. These embodiments may avoid mistouches in a manner that is similar to the direction component designation described above.

[00114] While the embodiments of the selection mechanism shown in FIG. 9 are described with reference to the example layout 200 of FIG. 2, the embodiments herein are equally applicable to other layouts (e.g., layout 300 of FIG. 3, layout 400 of FIG. 4, etc.).

[00115] In some embodiments disclosed herein, confirmation criteria for confirming a pre-selection state may be based on an angle (a) formed between a first trajectory and a second trajectory. The first trajectory may be a trajectory of a contact or swipe that crosses the edge into the virtual key area (e.g., a pre-selection trajectory) and the second trajectory may be a trajectory of a contact or swipe that crosses the edge while exiting the virtual key area (e.g., a confirmation trajectory). In various embodiments, the angle (a) is compared to a threshold angle and a pre-selection state may be confirmed if the angle (a) is less than or equal to the threshold angle. Otherwise, the pre-selection state may be cancelled.

[00116] For example, FIG. 10 illustrates another example of a confirmation requirement implemented by selection mechanisms, according to embodiments of the present disclosure, provided on the virtual keyboard layout 400 of FIG. 4. FIG. 10 depicts a selection mechanism, where a confirmation criterion is configured by comparing an angle (a) with a threshold angle to determine whether to confirm a pre-selection state or not. While the embodiments of the selection mechanism shown in FIG. 10 are described with reference to the example layout 400 of FIG. 4, the embodiments herein are equally applicable to other layouts (e.g., layout 300 of FIG. 3, layout 200 of FIG. 2, etc.).

[00117] Similar to FIG. 8 above, to select an input character or command (e.g., select character input key "X" in the example of FIG. 10), the user may move a physical contact from position P10 in the operation area 412 to position P11 in the virtual key area 416b for character key "X" and then to position P12 back into the operation area 412, all while maintaining continuous and uninterrupted contact with the display surface 125. As described above, to select the character input key "X", the user moves the contact point from position P10 to position P11 along a first trajectory 1015 (or pre-selection trajectory) through entry point 1010. The virtual keyboard detects that the contact point is moved from operation area 412 into the virtual key area 416b by crossing border 418b and triggers a pre-selection state that designates the character input key "X" as a candidate input key.

[00118] Subsequently, the user may move the contact point from the virtual key area 416b back into the operation area 412 by crossing the border 418b along a second trajectory 1025 (or confirmation trajectory) through exit point 1020. Detecting this movement may trigger the virtual keyboard to confirm the selection of input key "X" as input based on the angle (a) formed between the first trajectory 1015 and the second trajectory 1025. The angle (a) may be determined by the virtual keyboard by extending a line from entry point 1010 on border 418b to point P11 and extending a line from point P11 to exit point 1020 on the border 418b and determining the angle therebetween. The virtual keyboard may then compare the angle (a) to the threshold angle. Responsive to determining that the angle (a) is less than or equal to the threshold angle, the selection of input key "X" is confirmed. Otherwise, the pre-selection of the input key "X" is cancelled upon exiting the virtual key area 416b.
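
One way to compute the angle (a) of paragraph [00118] is sketched below, interpreting it as the angle at the innermost contact position (here P11) between the segment toward the entry point and the segment toward the exit point, so that a tight turn back toward the entry yields a small angle (confirm) while a straight pass-through yields an angle near 180 degrees (cancel). The function names and the 45-degree default threshold are assumptions:

import math

def angle_between(entry_point, turn_point, exit_point):
    """Angle in degrees between the vectors turn->entry and turn->exit."""
    v1 = (entry_point[0] - turn_point[0], entry_point[1] - turn_point[1])
    v2 = (exit_point[0] - turn_point[0], exit_point[1] - turn_point[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_a))

def confirm_by_angle(entry_point, turn_point, exit_point, threshold_deg=45.0):
    """Confirm the candidate key only if the turn back toward the entry is tight enough."""
    return angle_between(entry_point, turn_point, exit_point) <= threshold_deg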

[00119] The threshold angle may be set as desired to provide an adequate angle to confirm a selected input key, while being small enough so as to cancel pre-selection states of unintended input keys. For example, the threshold angle in some implementations may be 90 degrees or less, 45 degrees or less, etc. The threshold angle may be adjustable, for example, such that the threshold angle may be reduced as the user's comfort and accuracy with the selection mechanism increases. That is, as the user becomes more familiar with the selection mechanism, the angle needed to confirm pre-selection states may be lessened, which may result in fewer confirmations of unintended input keys.

[00120] By applying a threshold angle between the first and second trajectories 1015 and 1025, embodiments herein may mitigate mistouches due to a physical contact moving through an unintended input key, such as that described in connection with FIG. 5. For example, by setting the threshold angle as described in connection with FIG. 10, a user can only confirm the candidate input key if the exit trajectory is within the threshold angle of the entry trajectory. Therefore, swiping through a key will not satisfy the selection mechanism and will not confirm an unintended input key.

[00121] FIG. 11 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.

[00122] FIG. 11 depicts a block diagram of an example computer system 1100 in which various of the embodiments described herein may be implemented. For example, computer system 1100 may be the input detection device or mobile device 120 and/or the display device 110 of FIG. 1. Certain components of computer system 1100 may be applicable for implementing the mobile device 120, but not for the display device 110 (for example, cursor control 1116). As such, the computer system 1100 includes a bus 1102 or other communication mechanism for communicating information, and one or more hardware processors 1104 coupled with bus 1102 for processing information. Hardware processor(s) 1104 may be, for example, one or more general purpose microprocessors.

[00123] The computer system 1100 also includes a main memory 1106, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 1102 for storing information and instructions to be executed by processor 1104. Main memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104. For example, main memory 1106 may maintain the virtual keyboard as described above. Such instructions, when stored in storage media accessible to processor 1104, render computer system 1100 into a special-purpose machine that is customized to perform the operations specified in the instructions.

[00124] The computer system 1100 further includes a read only memory (ROM) 1108 or other static storage device coupled to bus 1102 for storing static information and instructions for processor 1104. A storage device 1110, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 1102 for storing information and instructions.

[00125] The computer system 1100 may be coupled via bus 1102 to a display 1112, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 1114, including alphanumeric and other keys, is coupled to bus 1102 for communicating information and command selections to processor 1104. Another type of user input device is cursor control 1116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1104 and for controlling cursor movement on display 1112. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.

[00126] The computing system 1100 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[00127] In general, the words "component," "engine," "module," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.

[00128] The computer system 1100 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1100 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1100 in response to processor(s) 1104 executing one or more sequences of one or more instructions contained in main memory 1106. Such instructions may be read into main memory 1106 from another storage medium, such as storage device 1110. Execution of the sequences of instructions contained in main memory 1106 causes processor(s) 1104 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

[00129] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1110. Volatile media includes dynamic memory, such as main memory 1106. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.

[00130] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1102. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[00131] The computer system 1100 also includes a communication interface 1118 coupled to bus 1102. Communication interface 1118 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 1118 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1118 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 1118 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[00132] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 1118, which carry the digital data to and from computer system 1100, are example forms of transmission media.

[00133] The computer system 1100 can send messages and receive data, including program code, through the network(s), network link and communication interface 1118. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 1118.

[00134] The received code may be executed by processor 1104 as it is received, and/or stored in storage device 1110, or other non-volatile storage for later execution.

[00135] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.

[00136] As used herein, the terms circuit and component might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present disclosure. As used herein, a component might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a component. Various components described herein may be implemented as discrete components or described functions and features can be shared in part or in total among one or more components. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application. They can be implemented in one or more separate or shared components in various combinations and permutations. Although various features or functional elements may be individually described or claimed as separate components, it should be understood that these features/functionality can be shared among one or more common software and hardware elements. Such a description shall not require or imply that separate hardware or software components are used to implement such features or functionality.

[00137] Where components are implemented in whole or in part using software, these software elements can be implemented to operate with a computing or processing component capable of carrying out the functionality described with respect thereto.

[00138] In this document, the terms "computer program medium" and "computer usable medium" are used to generally refer to transitory or non-transitory media. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions, embodied on the medium, are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable a computing component to perform features or functions of the present disclosure as discussed herein.

[00139] It should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Instead, they can be applied, alone or in various combinations, to one or more other embodiments, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments.

[00140] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing, the term "including" should be read as meaning "including, without limitation" or the like. The term "example" is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof. The terms "a" or "an" should be read as meaning "at least one," "one or more" or the like; and adjectives such as "conventional," "traditional," "normal," "standard," and "known," and terms of similar meaning, should not be construed as limiting the item described to a given time period or to an item available as of a given time. Instead, they should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.

[00141] The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term "component" does not imply that the aspects or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various aspects of a component, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.