Title:
VOICE CONTROLLED NAVIGATION AND DATA ENTRY SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2015/084141
Kind Code:
A1
Abstract:
A voice controlled system (10) and method for performing navigation and data entry in form based applications comprises a user interface module (11) to access the user interface components, a voice recognition module (13) to receive voice data input by the user from an input device, and a navigation and data entry module (12) operable to identify the user interface components on which a task is to be performed, based on the grammar of the voice data input, to perform command instructions, and to perform data entry (49) if the recognized content of the voice input data does not match (43) the type of command of the grammar file, by finding the component in a UI registry unit (18) to be focused (44) and validating the voice data input by applying a set of rules defined by the components to perform conversions of data.

Inventors:
A L RAMACHANDRAN ARVIN (MY)
Application Number:
PCT/MY2014/000148
Publication Date:
June 11, 2015
Filing Date:
June 03, 2014
Assignee:
MIMOS BERHAD (MY)
International Classes:
G10L15/00; G10L15/26
Domestic Patent References:
WO2008028029A2, 2008-03-06
Foreign References:
US20090254347A1, 2009-10-08
Other References:
None
Attorney, Agent or Firm:
MIRANDAH, Patrick (Suite 3B-19-3 Plaza Sentra, Jalan Stesen Sentral 5 Kuala Lumpur, MY)
Claims:
CLAIMS

1. A voice controlled system (10) for performing navigation and data entry in form based applications, the system (10) comprising:

a user interface module (11) that is programmed to access the user interface components that were previously registered to a computing device by a user, and to display the content on the computing device to the user;

a voice recognition module (13) that is programmed to receive voice data input by the user from an input device; and a navigation and data entry module (12) connected to said user interface module (11) and said voice recognition module (13), said navigation and data entry module being operable to: identify said user interface components on which a task is to be performed, based on the grammar of said voice data input, to perform command instructions; and

validate said voice data input by applying a set of rules defined by said components to perform conversions of data.

2. The voice controlled system (10) as claimed in claim 1, wherein said navigation and data entry module (12) comprises: a user interface (UI) registry unit (18) for mapping said user interface components;

a grammar unit (20) which includes a grammar file of phrases or combinations of words and is operable to classify and identify the tasks to be performed;

a validator unit (21) which includes said set of rules for validating voice data input;

a processor (19) connected to said units (18, 20, 21) and operable to perform said command instructions and conversions of data; and

a facade unit (17) which includes a user interface connector (17a) for connecting said user interface module (11) to said UI registry unit (18), and a voice recognition connector (17b) for providing a connection that will continuously observe and receive voice data input from said voice recognition module (13).

3. The voice controlled system (10) as claimed in claim 2, wherein said command instructions performed by said processor (19) include navigation, action, data manipulation and recording.

4. The voice controlled system (10) as claimed in claim 2, wherein said conversions of data performed by said processor (19) include data entry and transformation of data.

5. The voice controlled system (10) as claimed in claim 2, wherein said validator unit (21) is operable based on said set of rules on the parameters of data type, word length and predefined values properties.

6. The voice controlled system (10) as claimed in claim 1, wherein said user interface components include text field, text area, lists, radio buttons, tabs, sliders, date pickers, buttons, frames and dialogs.

7. A voice controlled method for performing navigation and data entry in form based applications, the method comprising the steps of:

communicating voice input data (31) from an input device to a voice recognition module (13) to receive recognized content of the voice input data;

identifying the user interface components (33), from the recognized content of the voice input data, on which a task is to be performed, based on the grammar file from a grammar unit (20) of a navigation and data entry module (12), and performing command instructions (34, 35, 36, 37); and

performing data entry (42) if said recognized content of the voice input data does not match (43) the type of command of said grammar file, by finding the component in a UI registry unit (18) to be focused (44) and validating said voice data input by a validator unit (21) that applies a set of rules defined by said component for performing conversions of data.

8. The voice controlled method as claimed in claim 7, wherein said command instructions further comprise at least one of: a navigation command (34), where the operations include at least one of focusing on a frame, dialog or tab using its title, focusing a component which is not a container in a form using the name of its label, and focusing on the previous or next component to navigate from the currently focused component (38); an action command (36), where the operations include at least one of exiting the application, submitting a form, cancelling a form and closing a form or dialog (40); a data manipulating command (37), where the operations include at least one of clearing data completely from the focused component and deleting data from the focused component one character at a time (39); and a recording command (35), where the operations include at least one of stopping a recording and starting a recording (41).

9. The voice controlled method as claimed in claim 7, wherein said conversions of data are data entry and transformation of data.

10. The voice controlled method as claimed in claim 9, wherein the operations of said data entry and transformation of data process (42) include at least one of entering data into a focused field, ensuring that data is not entered after a stop record command is issued, converting voice input data by identifying the focused component's data type, and adding a space in the focused component.

Description:
Voice Controlled Navigation and Data Entry System and Method

Field of Invention

The present invention relates generally to the field of navigation and data entry, and more particularly, to a voice controlled navigation and data entry system and method.

Background of the Invention

A user typically interacts with a computer through a user interface using a graphical input device. The graphical input device could be a keyboard, mouse, microphone, dials or function keys. With these graphical input devices, users are able to access the applications to control the components and to activate commands to the computer.

Today, form based applications are usually driven by a keyboard for navigation and data entry. In a data entry system, data must be input into the correct field in the correct record. Some systems can be relatively complicated to use, requiring skilled users and careful input of information. Therefore, the entry of data into a database using a keyboard can be tedious and troublesome, particularly if the person entering the data has to multitask through the use of the keyboard while simultaneously focusing on another task.

Therefore, it would be desirable to provide a system and method to perform navigation and data entry in desktop form applications using voice data input, providing increased efficiency of interaction for all users. A voice controlled system may increase the efficiency of performing certain tasks and is easy for an unskilled user to use. Hands-free data entry can also be faster for those who are not keyboard savvy. Various voice controlled technologies have been developed; however, existing systems provide limited support for controlling applications using voice input data.

The object of the present invention is to provide a system and method to perform navigation and data entry in desktop form applications using voice data input. Another object of the present invention is to provide a system and method that reliably and effectively implements system functions based on the grammar of the voice based input and predefined values properties. A further object of the present invention is to provide a system and method which are able to identify and classify various phrases to perform data entry, data manipulation, navigation and submission of data.

Summary of Invention

In view of the foregoing, embodiments herein provide a voice controlled navigation and data entry system and method.

In an aspect, a voice controlled system for performing navigation and data entry in form based applications comprises a user interface module to access the user interface components that were previously registered to a computing device by a user and to display the content on the computing device to the user, a voice recognition module to receive voice data input by the user from an input device, and a navigation and data entry module operable to identify the user interface components on which a task is to be performed, based on the grammar of the voice data input, to perform command instructions, and to validate the voice data input by applying a set of rules defined by the components to perform conversions of data. Preferably, the navigation and data entry module comprises a user interface (UI) registry unit for mapping the user interface components, a grammar unit which includes a grammar file of phrases or combinations of words and is operable to classify and identify the tasks to be performed, a validator unit which includes the set of rules for validating voice data input, and a processor connected to the units and operable to perform the command instructions, such as navigation, action, data manipulation and recording, and conversions of data, such as data entry and transformation of data.

In another aspect of the invention, a voice controlled method for performing navigation and data entry in form based applications comprises the steps of communicating voice input data from an input device to a voice recognition module, identifying the user interface components on which a task is to be performed, based on the grammar of the voice data input, for performing command instructions by a navigation and data entry module, and validating the voice data input by applying a set of rules defined by the user interface components for performing conversions of data. Preferably, the command instructions include at least one of a navigation command, an action command, a data manipulating command and a recording command. Preferably, the conversions of data are data entry and transformation of data.

Preferably, the set of rules is a set of parameters of data type, word length and predefined values properties for validation of the voice input data.

These and other objects and features of the present invention will be more fully understood from the following detailed description, which should be read in light of the accompanying drawings, in which corresponding reference numerals refer to corresponding parts throughout the several views.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the voice-controlled system of the present invention;

Figure 2 shows a diagram of the navigation and data entry module of the voice-controlled system;

Figure 3 depicts a flowchart of the operation of the semantic processor and method of the present invention in performing navigation in desktop form applications;

Figure 4 is a flowchart of the operation of the semantic processor and method of the present invention in performing data entry in desktop form applications; and

Figure 5 illustrates a flowchart of the operation of the UI registry unit in performing the registration of user interface components.

Detailed Description of the Preferred Embodiments

The present invention will now be described in detail with reference to the accompanying drawings.

Referring to Figure 1, a voice-controlled system (10) to perform navigation and data entry based on the grammar of the voice input in desktop form applications in accordance with the present invention includes a user interface module (11), a navigation and data entry module (12) and a voice recognition module (13). The user interface module (11) is connected to the navigation and data entry module (12), and registers the user interface components to the navigation and data entry module (12). The user interface components include, but are not limited to, text fields, text areas, lists, radio buttons, tabs, sliders, date pickers, buttons, frames and dialogs.

The voice-controlled system (10) can be connected to a plurality of graphical input devices such as a keyboard (14), a microphone (15) and a headset (16). The voice recognition module (13) is connected to the navigation and data entry module (12) in the system (10). The voice recognition module (13) may receive voice data input by a user from the microphone (15) and convert the voice input to text. The microphone (15) receives units of speech, for example words or phrases, spoken by the user. Each unit of speech is either a command or text.

The navigation and data entry module (12) of the system (10) includes a facade unit (17), a user interface (UI) registry unit (18), a semantic processor (19), a grammar unit (20) and a validator unit (21), as shown in Figure 2. The facade unit (17) provides a user interface (UI) connector (17a) and a voice recognition connector (17b). The UI connector (17a) provides a connection between the user interface, such as the microphone (15) device, and the user interface module (11), so that the user interface components can be read into the UI registry unit (18). The UI connector (17a) is a software-language-specific method in which the parent frame of the user interface is passed as the parameter. The voice recognition connector (17b) provides a connection that will continuously observe and receive words from the voice recognition module (13) when the user uses the microphone (15).

The UI registry unit (18) is connected to the facade unit (17) for mapping all the user interface components. The container components are keys and the child components are values, and this process is recursive from the parent to the last child. All user interface components have properties added to them, through extension of the component class, to specify the max length and data type for validating data entry. The containers include, but are not limited to, tabs, frames and dialogs that hold other user interface components.
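The mapping described above can be sketched as follows; all class and function names here are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch: container components are keys and child components are
# values, recursively from parent to last child, and components carry max
# length / data type properties through an extension of the component class.

class UIComponent:
    def __init__(self, name, label=None):
        self.name = name
        self.label = label

class ValidatedComponent(UIComponent):
    """Component class extended with data-entry validation properties."""
    def __init__(self, name, label=None, max_length=None, data_type=str):
        super().__init__(name, label)
        self.max_length = max_length   # maximum word length for data entry
        self.data_type = data_type     # expected data type of entered data

class Container(UIComponent):
    """A container (tab, frame or dialog) holding other components."""
    def __init__(self, name, children=()):
        super().__init__(name)
        self.children = list(children)

def build_registry(container, registry=None):
    """Map each container (key) to its child component names (values),
    recursing from the parent container down to the last child."""
    if registry is None:
        registry = {}
    registry[container.name] = [child.name for child in container.children]
    for child in container.children:
        if isinstance(child, Container):
            build_registry(child, registry)
    return registry
```

For a frame holding a tab and a text field, `build_registry` would produce a map with one entry per container, each listing its direct children.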

The semantic processor (19) of the navigation and data entry module (12) performs data entry, data manipulation, navigation and submission of data. The facade unit (17), UI registry unit (18), grammar unit (20) and validator unit (21) are connected to the semantic processor (19). The semantic processor (19) utilizes the grammar unit (20) and validator unit (21), which define a rule base for interpreting voice inputs. The grammar unit (20) includes phrases or combinations of words used to identify the tasks to be performed by the semantic processor (19) based on the voice input data by the user. The grammar unit (20) performs several identification and classification functions, including identifying which user interface component needs to be focused, identifying if the form must be submitted or cancelled, identifying if recording can be continued or is to be stopped, and identifying if a screen or tab should be navigated to.
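The classification step can be sketched as follows; the phrases below are illustrative examples only, not the grammar file of the application:

```python
# Hypothetical sketch of the grammar unit's classification: a grammar "file"
# of phrases maps recognized speech to a command type; content that matches
# no phrase is treated as data entry.

GRAMMAR = {
    "navigation": ("go to", "next field", "previous field", "focus"),
    "action": ("submit form", "cancel form", "close dialog", "exit application"),
    "data manipulation": ("clear field", "delete character"),
    "recording": ("start recording", "stop recording"),
}

def classify(recognized_text):
    """Return the command type for the recognized content, or None when the
    content matches no command and should be passed to data entry."""
    text = recognized_text.lower().strip()
    for command_type, phrases in GRAMMAR.items():
        if any(text.startswith(phrase) for phrase in phrases):
            return command_type
    return None
```

Under this sketch, "submit form" classifies as an action command, while free text such as a person's name classifies as nothing and falls through to data entry.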

The validator unit (21) of the system (10) is used to check whether data input adheres to the rules defined by the user interface component, and converts the data if necessary. The incoming data is validated by the validator unit (21), based on the rules on the parameters of data type, word length and predefined values properties, prior to setting the value of a component.
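The three rules (data type, word length, predefined values) can be sketched as a single validation function; the `FieldRules` class and `validate` function are illustrative assumptions, not from the application:

```python
# Hypothetical sketch of the validator unit's rules, applied prior to
# setting a component's value.

class FieldRules:
    """Validation properties attached to a user interface component."""
    def __init__(self, data_type=str, max_length=None, predefined_values=None):
        self.data_type = data_type                  # expected data type
        self.max_length = max_length                # maximum word length
        self.predefined_values = predefined_values  # allowed values, if any

def validate(rules, text):
    """Return the converted value if the incoming data adheres to the rules,
    or None if validation fails."""
    try:
        value = rules.data_type(text)   # can the data be converted to the type?
    except (TypeError, ValueError):
        return None
    if rules.predefined_values is not None and value not in rules.predefined_values:
        return None                     # does the data match a predefined value?
    if rules.max_length is not None and len(text) > rules.max_length:
        return None                     # does the length adhere to max length?
    return value
```

For example, `validate(FieldRules(data_type=int, max_length=3), "42")` yields the converted integer, while non-numeric input for the same field yields None.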

The semantic processor (19) of the navigation and data entry module (12), as shown in Figures 2 and 3, is the main core of the system (10) and is responsible for navigation, data entry and editing. The voice data having recognized content from the voice recognition module (13) is the input to the navigation and data entry module (12). The voice data is first fed (31) to the voice recognition connector (17b) of the facade unit (17) and then to the semantic processor (19) of the navigation and data entry module (12). The voice recognition connector (17b) of the facade unit (17) will check whether the recording is active (32) by continuously observing and receiving voice data from the voice recognition module (13) when the user uses the microphone (15). The accuracy of the voice data received is dependent on the capabilities of the voice recognition module (13).

The semantic processor (19) uses the recognized content of the voice data to identify (33) the type of command based on the grammar file from the grammar unit (20) and to classify the recognized content into different types of commands, including but not limited to navigation (34), recording (35), action (36) and data manipulation (37). If the voice data from the voice recognition module (13) matches the input with the commands, the semantic processor (19) will perform the appropriate function on the component by connecting to the UI registry unit (18). If the recognized content is a navigation command, the semantic processor (19) will perform the navigation function, such as traversing user interface components in forms or identifying frames or tabs and bringing them to the front (38). The semantic processor (19) of the present invention is able to focus on a frame, dialog or tab using its title, to focus a component which is not a container in a form using the name of its label, and to focus on the previous or next component to navigate from the currently focused component, based on the matching grammar file from the grammar unit (20).
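The dispatch performed by the semantic processor (steps 33 to 37) can be sketched as follows; all handler names and the state structure are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch: recognized content classified as a command is routed
# to the matching function; unmatched input falls through to data entry.

def navigate(state, text):
    """Navigation command (34): focus the previous or next component."""
    order = state["components"]             # components in tab order
    i = order.index(state["focused"])
    if "next" in text:
        state["focused"] = order[(i + 1) % len(order)]
    elif "previous" in text:
        state["focused"] = order[(i - 1) % len(order)]
    return "navigation"

def record(state, text):     return "recording"          # (35) start/stop recording
def act(state, text):        return "action"             # (36) submit/cancel/close
def manipulate(state, text): return "data manipulation"  # (37) clear/delete
def enter(state, text):      return "data entry"         # (42) unmatched input

HANDLERS = {"navigation": navigate, "recording": record,
            "action": act, "data manipulation": manipulate}

def process(state, command_type, text):
    """Dispatch a classified command; fall back to data entry when the
    content matched no command type (command_type is None)."""
    return HANDLERS.get(command_type, enter)(state, text)
```

Saying "next field" while the first component is focused would, under this sketch, move the focus to the second component; an unclassified utterance is routed to data entry instead.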

The semantic processor (19) will also perform the recording function (41), which includes stopping and starting a recording; the data manipulation function (39), which includes clearing data completely from the focused component and deleting data from the focused component one character at a time; and the actions function (40), which includes exiting the application, submission of a form, cancellation of a form and closing a form or dialog, based on the matching grammar file from the grammar unit (20).

If the voice data input does not match (43) the type of command of the grammar file from the grammar unit (20), the input will be passed to the data entry process (42) as shown in Figure 4. The semantic processor (19) is able to perform the data entry and transformation of data functions, which include entering data into a focused field, ensuring that data is not entered after a stop record command is issued, converting voice input data by identifying the focused component's data type, and adding a space in the focused component. In the data entry process, data will be entered into user interface components that are not containers.

The semantic processor (19) will find the component in the UI registry unit (18) to be focused (44). If the component exists (45), the semantic processor (19) will utilize the validator unit (21) to find the validation properties of the component (46) and to perform validation on the data input (47) as shown in Figure 4. These steps include validation based on the parameters of data type, in which it identifies whether the incoming data is, or can be converted to, the data type property specified in the component; values, in which it identifies whether the incoming data matches any predefined values of the component; and word length, in which it identifies whether the incoming data's word length adheres to the max length field property specified in the component. If the data is valid (48), the data will be converted as per the component (49) and filled in the field (50). The entire process will be terminated if the data input is invalid.

Figure 5 shows a flowchart of the steps of registering components to the UI registry unit (18) of the navigation and data entry module (12) from the user interface module (11). The process begins with registering the parent container components of the application after all children components have been added (51). If the container components have children (52), the UI registry unit (18) will register the container name (53), followed by the steps of registering the child components' names, valid values and data types (54), and registering the relationship of parent and child as a map with the parent as key and the list of children as value (55). The process is recursive from the parent component to the last child component.
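The data entry process (steps 42 to 50) described above can be sketched end to end as follows; the `Field` class and `perform_data_entry` function are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch of the data entry process: find the focused component
# in the registry, validate the input against its properties, convert it as
# per the component, and fill the field.

class Field:
    """A non-container user interface component accepting data entry."""
    def __init__(self, name, data_type=str, max_length=None):
        self.name = name
        self.data_type = data_type
        self.max_length = max_length
        self.value = None  # set on successful data entry (50)

def perform_data_entry(fields, focused_name, text):
    """Return True when the data is validated, converted and filled into the
    focused field; False terminates the process for invalid input."""
    component = fields.get(focused_name)          # find component to focus (44)
    if component is None:                         # component does not exist (45)
        return False
    if component.max_length is not None and len(text) > component.max_length:
        return False                              # word length validation (47)
    try:
        value = component.data_type(text)         # convert as per component (49)
    except (TypeError, ValueError):
        return False                              # data type validation failed (48)
    component.value = value                       # fill in the field (50)
    return True
```

Under this sketch, speaking "27" while an integer age field is focused fills the field with the converted value, while non-numeric or over-length input terminates the process.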

While the disclosed system has been particularly shown and described with respect to the preferred embodiments, it is understood by those skilled in the art that various modifications in form and detail may be made therein without departing from the scope of the invention. Accordingly, modifications such as those suggested above but not limited thereto are to be considered within the scope of the invention, which is to be determined by reference to the appended claims.