

Title:
COMPUTING DEVICES WITH IMPROVED INTERACTIVE ANIMATED CONVERSATIONAL INTERFACE SYSTEMS
Document Type and Number:
WIPO Patent Application WO/2019/143397
Kind Code:
A1
Abstract:
A conversational interface system including an interactive virtual avatar and method for completing and updating fillable forms and database entries. The conversational interface provides a user with the option of inputting data in either text or voice form, and logs user response data and populates fields within form documents. As the user progresses through the system, instructions and guidance are consistently provided via the interactive avatar presented within the system web browser. Without exiting the system, the conversational interface validates user input data types and data while updating entries within cloud-based databases.

Inventors:
DOHRMANN ANTHONY (US)
CHASKO BRYAN J (US)
BLAKE SAMUEL (US)
Application Number:
PCT/US2018/057814
Publication Date:
July 25, 2019
Filing Date:
October 26, 2018
Assignee:
SAMEDAY SECURITY INC (US)
International Classes:
G06F3/048
Foreign References:
US20130167025A12013-06-27
US20170192950A12017-07-06
US20020062342A12002-05-23
Other References:
See also references of EP 3740856A4
Attorney, Agent or Firm:
HANKS, Mackenzie et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A computing device comprising a display screen, the computing device being configured to dynamically display a specific, structured interactive animated conversational graphical interface paired with a prescribed functionality directly related to the interactive graphical user interface's structure.

2. The computing device of claim 1, being any form of a computing device, including a personal computer, laptop, tablet, or mobile device.

3. The computing device of claim 1, where upon initiation, a user is provided one or more options to select a desired method for data entry, including voice, type, touch or combinations thereof without having to switch back and forth.

4. The computing device of claim 1, further comprising a real-time updated user interface capable of displaying a plurality of user entries within a web browser based dialogue box.

5. The computing device of claim 1, further comprising user provided data being immediately validated based on characteristics defined within the specific, structured interactive animated conversational graphical interface.

6. The computing device of claim 1, further comprising user provided data being further validated against external data stored in a cloud-based database.

7. The computing device of claim 1, further comprising a user's provision of valid data allowing for the user's progression to a next item within a form.

8. The computing device of claim 1, further comprising a user providing invalid data resulting in the computing device not allowing the user to progress to a next item within the form.

9. The computing device of claim 1, further comprising the specific, structured interactive animated conversational graphical interface completing and updating a database entry.

10. The computing device of claim 9, further comprising forms being immediately transmitted to cloud-based storage upon completion of all data input.

11. The computing device of claim 10, further comprising previous entries being updated based on new user input.

12. The computing device of claim 1, further comprising the specific, structured interactive animated conversational graphical interface converting text data to voice data for storage and for use in human conversation.

13. The computing device of claim 1, further comprising the specific, structured interactive animated conversational graphical interface converting response data to audio files using cloud-based text-to-speech solutions capable of being integrated into a web browser based avatar.

14. The computing device of claim 13, further comprising the converted audio files being transmitted to cloud-based storage for retention and later use.

15. The computing device of claim 1, further comprising the specific, structured interactive animated conversational graphical interface including a virtual avatar for providing guidance and feedback to a user during utilization of the specific, structured interactive animated conversational graphical interface.

16. The computing device of claim 1, further comprising the specific, structured interactive animated conversational graphical interface displaying an ECI avatar within a web browser.

17. The computing device of claim 16, further comprising the ECI avatar providing step-by-step verbal instructions to a user.

18. The computing device of claim 17, further comprising the step-by-step verbal instructions being converted to text displayed within the web browser in real-time.

19. The computing device of claim 18, further comprising the user making form related inquiries of the ECI avatar for clarification.


Description:
COMPUTING DEVICES WITH IMPROVED INTERACTIVE ANIMATED CONVERSATIONAL INTERFACE SYSTEMS

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the priority benefit of U.S. Provisional Patent Application Serial No. 62/618,550 filed on January 17, 2018 and titled "Interactive Animated Conversational Interface System," which is hereby incorporated by reference in its entirety.

FIELD OF THE TECHNOLOGY

[0002] Embodiments of the disclosure relate to computing devices with improved interactive animated conversational interface systems.

SUMMARY

[0003] Provided herein are exemplary systems and methods including an interactive conversational text-based interface (ECG_Forms) and a three-dimensional Electronic Caregiver Image (ECI) avatar that allows a user to complete various forms using voice conversation and cloud-based talk-to-text technology. Through the system, the ECI avatar may communicate in multiple languages. The system provides a user with the option of selecting methods for data input comprising either traditional type-based data entry or voice communication data entry. Following the user input of data, the system uses cloud-based database connectivity to review user input and provide redundancy against data input errors. When errors are discovered by the system, feedback is provided to the user for correction of the errors. To assess data for accuracy in real time, the system utilizes a catalogue of inputs to determine whether a data type input by the user matches a defined catalogue data type. As such, through the use of cloud-based applications, the system completes data assessment, executes the continuation decision process and provides a response to the user in less than 1.0 second. Once data has been assessed for accuracy and all user data are entered into the system, the system encrypts user input data and proceeds with transmitting the data to a cloud-based primary key design database for storage. The system also provides a company web browser comprising the three-dimensional Electronic Caregiver Image (ECI) avatar for interactive communication with the user. This ECI avatar provides the user with an interactive experience during which the user is guided through the completion of the process. As the process is completed by the user, the ECI avatar provides real-time feedback in conversational form in an effort to simplify and streamline the form completion by the user.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.

[0005] Figure 1 details the connectivity and processes associated with an ECG_Forms text-based conversational interface.

[0006] Figure 2A depicts successful completion of a data form.

[0007] Figure 2B shows an exemplary specific, structured interactive animated conversational graphical interface including an avatar, depicting the result of the input of invalid data.

[0008] Figure 3 depicts an exemplary architecture for further validating user input.

[0009] Figure 4 shows an exemplary architecture for the conversion of input from a user to speech configured for an ECI Avatar.

[0010] Figures 5-19 show exemplary specific, structured interactive animated conversational graphical interfaces with the ECI avatar.

DETAILED DESCRIPTION

[0011] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be apparent, however, to one skilled in the art, that the disclosure may be practiced without these specific details. In other instances, structures and devices may be shown in block diagram form only in order to avoid obscuring the disclosure.

[0012] Various exemplary embodiments described and illustrated herein relate to a computing device comprising a display screen, the computing device being configured to dynamically display a specific, structured interactive animated conversational graphical interface paired with a prescribed functionality directly related to the interactive animated conversational graphical user interface's structure. Accordingly, a user is provided with an interactive conversational interface comprising the Electronic Caregiver Forms (ECG_Forms), text-based conversation, and the Electronic Caregiver Image (ECI), which comprises a three-dimensional avatar paired with voice-driven interaction, all of which may be presented within a web browser.

[0013] User data input into document fields is typically tedious and boring for the user. It is also highly prone to human error. As such, text-based conversational "chatbots" have become an increasingly popular interactive option for replacing simple keystroke text entry by users.

[0014] As chatbot programs have developed in recent years, they have been incorporated in the effective simulation of logical conversation during human/computer interaction. The implementation of these chatbots has occurred via textual and/or auditory methods, effectively providing human users with practical functionality in information acquisition activities. In most cases today, chatbots function simply to provide a conversational experience during the obtaining of data from a user.

[0015] As chatbot programs have progressed, the knowledge bases associated with their capabilities have become increasingly complex, but the ability to validate user responses in real time remains limited. Additionally, the capability of chatbot programs to be functionally incorporated across vast networks is significantly lacking. As such, most chatbot programs cannot be incorporated across multiple systems in a manner that allows them to collect user data while simultaneously verifying the type of data input by the user, transmit data input by the user to various storage sites for further data validation, store data offsite in cloud-based storage solutions and overwrite existing stored data based on new user inputs, all while providing a virtual avatar which guides the user through the data entry process.

[0016] Figure 1 illustrates an exemplary system in which a user 2 utilizes a computing device or connected device 3 to connect to the internet 4 to access relevant services necessary to complete various forms. Upon connection to the internet 4, the user 2 is provided access to cloud-based applications 5, which comprise conversational decision trees providing the capability of communicating through both voice and text, as illustrated by ECG_Forms Conversational Interface 6. According to various exemplary embodiments, this allows for conversational speech communication to be carried out between user 2 and computing device 3.
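The conversational decision trees described above are not specified in detail in the application; a minimal sketch of one might look like the following, where the `FormNode` structure and its field names are purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FormNode:
    """One prompt in a conversational decision tree (illustrative model)."""
    prompt: str
    expected_type: str                      # e.g. "words", "numbers", "email"
    next_node: Optional["FormNode"] = None  # where the conversation goes next

# A tiny two-step tree: ask for a name, then an email address.
email_node = FormNode(prompt="What is your email address?", expected_type="email")
name_node = FormNode(prompt="What is your name?", expected_type="words",
                     next_node=email_node)

def walk(node: Optional[FormNode]) -> list[str]:
    """Collect the prompts in the order the interface would ask them."""
    prompts = []
    while node is not None:
        prompts.append(node.prompt)
        node = node.next_node
    return prompts
```

A real implementation would branch on user answers rather than follow a single chain, but the linear case is enough to show the prompt-by-prompt progression the interface relies on.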

[0017] In Figure 1, according to various exemplary embodiments, ECG_Forms Conversational Interface 6 functions to request data input from user 2. Following this request, ECG_Forms Conversational Interface 6 waits for a response from user 2. Upon receiving a response, Data Intake System 7 intakes this data into the system. Once data from user 2 is taken into ECG_Forms Conversational Interface 6, the system compares the data type (for example, words, numbers, email, etc.) of the input to the data found in Defined Data Input Type 8 to assess the validity of user input types. Once the type of user input is determined to be valid, Progression Decision Program 9 is activated, resulting in ECG_Forms Conversational Interface 6 moving on to the next item to be inquired of user 2.
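The intake-validate-progress loop just described can be sketched as follows. The catalogue contents and validator rules here are assumptions for illustration; the application only states that input types (words, numbers, email, etc.) are checked against Defined Data Input Type 8:

```python
import re

# Illustrative stand-in for "Defined Data Input Type 8": a catalogue mapping
# field names to simple type validators (the actual catalogue is unspecified).
CATALOGUE = {
    "name": lambda s: bool(re.fullmatch(r"[A-Za-z][A-Za-z .'-]*", s)),
    "age": lambda s: s.isdigit() and 0 < int(s) < 130,
    "email": lambda s: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s)),
}

def progression_decision(field: str, user_input: str) -> bool:
    """Return True (advance to the next form item) only when the input
    matches the defined data type for the field, as in Figure 1."""
    validator = CATALOGUE.get(field)
    return validator is not None and validator(user_input.strip())
```

A "no" decision here corresponds to the invalid-data path shown in Figure 2B, where the user is asked to correct the entry before the form advances.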

[0018] Figure 2A shows the result of the successful completion of a form.

[0019] Figure 2B shows an exemplary specific, structured interactive animated conversational graphical interface including an avatar, depicting what occurs when data input by user 2 is deemed invalid by Defined Data Input Type 8 (FIG. 1), resulting in a "no" decision from Progression Decision Program 9 (FIG. 1).

[0020] Figure 3 depicts an exemplary architecture for further validating user input. This is achieved as user 2 inputs data into computing device 3. This data is transmitted to ECG_Forms Conversational Interface 6. Cloud-Based Applications 5 are communicatively coupled to Database Storage Solutions 10, which comprises defined data specifications and previously stored inputs from user 2. As ECG_Forms Conversational Interface 6 processes data input into the system by user 2, it also compares the data to data stored in Database Storage Solutions 10 for validation. Upon ECG_Forms Conversational Interface 6 determining that input data from user 2 is valid, the input data progresses across the entirety of the form being completed, and Cloud-Based Applications 5 and Compute 11 functions (as housed in Cloud-Based Applications 5) are called and result in ECG_Forms Conversational Interface 6 transmitting the completed data form for storage in Database Storage Solutions 10.
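The Figure 3 flow, validation of each entry against previously stored data followed by transmission of the completed form, might be sketched as below. The in-memory dicts stand in for Database Storage Solutions 10, and the sample records are hypothetical; no real cloud API is shown:

```python
# Prior data for a user, standing in for previously stored inputs.
stored_records = {"user-2": {"email": "user2@example.com"}}
completed_forms = []  # stands in for cloud-based form storage

def validate_against_storage(user_id: str, field: str, value: str) -> bool:
    """An entry conflicts only if storage already holds a different value."""
    prior = stored_records.get(user_id, {})
    return field not in prior or prior[field] == value

def submit_form(user_id: str, form: dict) -> bool:
    """Transmit the completed form only when every field validates."""
    if all(validate_against_storage(user_id, f, v) for f, v in form.items()):
        completed_forms.append({"user": user_id, **form})
        return True
    return False
```

This mirrors the described sequence: per-field validation against stored data first, then a single transmission of the whole form once it is complete.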

[0021] Figure 4 shows an exemplary architecture for the conversion of input from user 2 to speech configured for an ECI Avatar (FIG. 5). This occurs through Cloud-Based Applications 5 (FIG. 3). The data input by user 2 into Computing Device 3 and transmitted to ECG_Forms Conversational Interface 6 is further processed by a Cloud-Based Text-to-Speech Application and converted into an audio file. Once the conversion to audio has been completed, the newly created audio file is transmitted to Database Storage Solutions 10 for storage and for recall by the ECI Avatar (FIG. 5) when needed.

[0022] Figures 5-19 show exemplary specific, structured interactive animated conversational graphical interfaces with the ECI avatar.
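The convert-then-store text-to-speech step from Figure 4 can be sketched as follows. The `synthesize` function is a placeholder, not a real cloud call, and the key scheme is an assumption; a production system would invoke a cloud text-to-speech service (such as Amazon Polly or Google Cloud Text-to-Speech) at that point:

```python
import hashlib

audio_store = {}  # stands in for Database Storage Solutions 10

def synthesize(text: str) -> bytes:
    """Placeholder for a cloud text-to-speech call; returns fake 'audio'."""
    return text.encode("utf-8")

def convert_and_store(text: str) -> str:
    """Convert response text to an audio file and store it under a key
    the ECI avatar can use to recall it later (per Figure 4)."""
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12] + ".mp3"
    audio_store[key] = synthesize(text)
    return key
```

Keying the stored file by a hash of its text lets the avatar reuse a previously synthesized clip instead of converting the same response twice.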

[0023] According to various exemplary embodiments, a three-dimensional Electronic Caregiver Image (ECI) avatar as depicted in Figure 5 functions to guide the user (such as user 2 in FIGS. 1, 3 and 4) through the data entry process in an effort to reduce user errors in completing documents. This is achieved through the utilization of multiple cloud-based resources (such as Cloud-Based Applications 5 in FIG. 3) connected to the conversational interface system. For the provision of ECI responses from the avatar to user inquiries, either Speech Synthesis Markup Language (SSML) or basic text files are read into the system and an audio file is produced in response. As such, the aspects of the avatar's response settings such as voice, pitch and speed are controlled to provide unique voice characteristics associated with the avatar during its response to user inquiries.
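The description mentions that avatar responses are produced from SSML or basic text, with voice, pitch, and speed controlled to give the avatar its characteristic voice. A minimal sketch of building such an SSML payload follows; the prosody values are illustrative defaults, not settings from the application:

```python
from xml.sax.saxutils import escape

def ssml_response(text: str, pitch: str = "+0%", rate: str = "medium") -> str:
    """Wrap avatar response text in SSML prosody settings so that pitch
    and speaking rate can be tuned per the avatar's voice profile."""
    return (f'<speak><prosody pitch="{pitch}" rate="{rate}">'
            f"{escape(text)}</prosody></speak>")
```

The resulting markup would then be handed to the text-to-speech service, which reads the prosody attributes when producing the audio file.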

[0024] While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.