

Title:
NATURAL LANGUAGE INTERFACE FOR TROUBLESHOOTING A MOBILE DEVICE
Document Type and Number:
WIPO Patent Application WO/2019/162700
Kind Code:
A1
Abstract:
We describe a method for improving the performance of a mobile device and a mobile device itself. The mobile device (40) comprises: a user interface (24) for receiving user input; a diagnostic module (30) for launching one or more diagnostic tests on the mobile device; and a natural language module (20) for a user to engage in a natural language conversation. The natural language module is configured to receive a message from a user via the user interface; send the message to an interpretation module (10) to interpret the message and determine a response; receive the determined response from the interpretation module (10) wherein the determined response comprises machine readable instructions identifying a first unique identifier; identify a first diagnostic test of the diagnostic module from the first unique identifier; send instructions to the diagnostic module (30) to automatically launch the identified first diagnostic test; receive results of the identified first diagnostic test from the diagnostic module; send the results to the interpretation module (10); receive a reply from the interpretation module (10) based on the results; wherein the reply comprises at least one of machine readable instructions identifying a second unique identifier and machine readable instructions to display content to a user in the user interface (24); and when the reply comprises machine readable instructions to display content, display content in the user interface (24).

Inventors:
HOWELL, Emlyn Richard (C/O Support Robotics Ltd, St. John Innovation Centre, Cowley Road, Cambridge, Cambridgeshire CB4 0WS, GB)
GIBSON, William Anthony (C/O Support Robotics Ltd, St. John Innovation Centre, Cowley Road, Cambridge, Cambridgeshire CB4 0WS, GB)
Application Number:
GB2019/050517
Publication Date:
August 29, 2019
Filing Date:
February 26, 2019
Assignee:
SUPPORT ROBOTICS LTD (St. John Innovation Centre, Cowley Road, Cambridge, Cambridgeshire CB4 0WS, GB)
International Classes:
G06F11/07; G06F16/332; H04M3/493; G06F17/27; G10L15/22
Foreign References:
US 2011/0010164 A1 (2011-01-13)
US 2017/0177561 A1 (2017-06-22)
US 2008/0294423 A1 (2008-11-27)
US 8423012 B1 (2013-04-16)
US 2010/0112997 A1 (2010-05-06)
US 2005/0245246 A1 (2005-11-03)
Attorney, Agent or Firm:
APPLEYARD LEES IP LLP (15 Clare Road, Halifax, Yorkshire HX1 2HY, GB)
Claims:
CLAIMS

1. A mobile device comprising:

a user interface for receiving user input;

a diagnostic module for launching one or more diagnostic tests on the mobile device; and

a natural language module for a user to engage in a natural language conversation; wherein the natural language module is configured to

receive a message from a user via the user interface;

send the message to an interpretation module to interpret the message and determine a response;

receive the determined response from the interpretation module wherein the determined response comprises machine readable instructions identifying a first unique identifier;

identify a first diagnostic test of the diagnostic module from the first unique identifier;

send instructions to the diagnostic module to automatically launch the identified first diagnostic test;

receive results of the identified first diagnostic test from the diagnostic module;

send the results to the interpretation module;

receive a reply from the interpretation module based on the results; wherein the reply comprises at least one of machine readable instructions identifying a second unique identifier and machine readable instructions to display content to a user in the user interface; and

when the reply comprises machine readable instructions to display content, display content in the user interface.

2. The mobile device of claim 1, wherein the determined response comprises machine readable instructions to display content to a user; and

wherein the natural language module is configured to

display the content in the user interface, wherein at least a portion of the displayed content is actively linked to the first unique identifier; and

receive user input based on the portion of the displayed content.

3. The mobile device of claim 1 or claim 2, wherein when the reply based on the results of the first identified diagnostic test comprises machine readable instructions identifying a second unique identifier and machine readable instructions to display content, at least a portion of the displayed content is actively linked to the second unique identifier.

4. The mobile device of claim 3, wherein the second unique identifier identifies a second diagnostic test of the diagnostic module.

5. The mobile device of claim 4, wherein the natural language module is further configured to:

identify the second diagnostic test of the diagnostic module from the second unique identifier;

send instructions to the diagnostic module to automatically launch the identified second diagnostic test;

receive results of the identified second diagnostic test from the diagnostic module;

send the results to the interpretation module;

receive a reply from the interpretation module based on the results of the second diagnostic test; wherein the reply based on the results of the second diagnostic test comprises at least one of machine readable instructions identifying a third unique identifier and machine readable instructions to display content to a user.

6. The mobile device of any one of claims 3 to 5, wherein at least one of the second and third unique identifiers identifies an adjustment to a setting of the mobile device.

7. The mobile device of claim 6, wherein the adjustment is an adjustment of volume for the mobile device.

8. The mobile device of any one of claims 2 to 6, wherein at least one of the second and third unique identifiers identifies an instruction to the mobile device to perform at least one of the following actions: initiate a call, launch a website or start a wizard.

9. The mobile device of claim 3, wherein the second unique identifier identifies a second diagnostic test to be performed on another device which is connected to the mobile device and wherein the mobile device is configured to send instructions to the other device to automatically launch the identified second diagnostic test.

10. The mobile device of any one of the preceding claims, wherein the content to be displayed is selected from a text message or an icon.

11. The mobile device of any one of the preceding claims further comprising the interpretation module.

12. The mobile device of claim 11, wherein the interpretation module is configured to update a state in a state model and to determine the response and/or the reply based on the updated state in the state model.

13. The mobile device of claim 11 or claim 12, wherein the interpretation module is configured to determine the response and/or the reply using a look-up table.

14. A system comprising a mobile device according to any one of claims 1 to 13 and a computing device comprising the interpretation module.

15. The system of claim 14, wherein the interpretation module is configured to update a state in a state model and to determine the response and/or the reply based on the updated state in the state model.

16. The system of claim 14 or claim 15, wherein the interpretation module is configured to determine the response and/or the reply using a look-up table.

17. A method for improving the performance of a mobile device, the method comprising receiving a message from a user via a user interface on the mobile device;

sending the message from a natural language module on the mobile device to an interpretation module to interpret the message and determine a response;

receiving, at the natural language module, the determined response from the interpretation module wherein the determined response comprises machine readable instructions identifying a first unique identifier;

identifying, using the natural language module, a first diagnostic test of the diagnostic module from the first unique identifier;

sending instructions from the natural language module to the diagnostic module to automatically launch the identified first diagnostic test;

receiving, at the natural language module, results of the identified first diagnostic test from the diagnostic module;

sending the results to the interpretation module;

receiving, at the natural language module, a reply from the interpretation module based on the results; wherein the reply comprises at least one of machine readable instructions identifying a second unique identifier and machine readable instructions to display content to a user in the user interface; and

when the reply comprises machine readable instructions to display content, displaying content in the user interface.

18. A machine-readable medium comprising computer code thereon which when running on a mobile device causes the mobile device to perform the method of claim 17.

Description:
NATURAL LANGUAGE INTERFACE FOR TROUBLESHOOTING

A MOBILE DEVICE

TECHNICAL FIELD

[0001] The present invention relates to a mobile device and a method for improving performance thereof.

BACKGROUND

[0002] Call centre support for mobile devices often results in low customer satisfaction scores. This can be somewhat improved by using remote support tools, where a human operator can run tests on a remote device from a call centre. The remote operator can view results and suggest further action. In some cases, the operator may be able to remotely update settings which may be causing the reported issue. However, use of these advanced tools and troubleshooting techniques can require significant training, which may be undesirable or impractical.

[0003] Another existing solution is to use a self-service app where a workflow (or wizard) assists the user in solving their issue. This is achieved by selecting options which systematically narrow down the likely cause of an issue. Tests can also be run at appropriate points to identify or rule out settings problems or hardware faults. This solution is useful for some cases, but some users prefer to use a natural language interface as opposed to a drill-down menu or wizard.

[0004] Another mobile device diagnostic and remediation system is described in US8423012. The device comprises an application which sends a chat message via the radio transceiver identifying a problem of the device. The application further receives a request for one of a version identity of a preferred roaming list stored on the device, an identity of a firmware version installed on the device, an identity of the device model, and a mobile equipment identity and transmits this information. Another system and method of performing device-initiated diagnostic or configuration management on a mobile device is described in US2010/0112997. The system communicates a message relating to a problem to a remote server on a network, initiates a remedial action at the server, and changes the configuration of the mobile device based on the remedial action.

[0005] The present applicant has recognised the need for an alternative solution.

SUMMARY

[0006] According to the present invention there is provided an apparatus (i.e. a mobile device or system) and method as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims, and the description which follows.

[0007] We describe a mobile device comprising: a user interface for receiving user input; a diagnostic module for launching one or more diagnostic tests on the mobile device; and a natural language module for a user to engage in a natural language conversation. The natural language module is configured to receive a message from a user via the user interface; send the message to an interpretation module to interpret the message and determine a response; receive the determined response from the interpretation module wherein the determined response comprises machine readable instructions identifying a first unique identifier; identify a first diagnostic test of the diagnostic module from the first unique identifier; send instructions to the diagnostic module to automatically launch the identified first diagnostic test; receive results of the identified first diagnostic test from the diagnostic module; send the results to the interpretation module; receive a reply from the interpretation module based on the results; wherein the reply comprises at least one of machine readable instructions identifying a second unique identifier and machine readable instructions to display content to a user in the user interface; and when the reply comprises machine readable instructions to display content, display content in the user interface.

[0008] We also describe a method for improving the performance of a mobile device, the method comprising receiving a message from a user via a user interface on the mobile device; sending the message from a natural language module on the mobile device to an interpretation module to interpret the message and determine a response; receiving, at the natural language module, the determined response from the interpretation module wherein the determined response comprises machine readable instructions identifying a first unique identifier; sending instructions from the natural language module to the diagnostic module to automatically launch the identified first diagnostic test; receiving, at the natural language module, results of the identified first diagnostic test from the diagnostic module; sending the results to the interpretation module; receiving, at the natural language module, a reply from the interpretation module based on the results; wherein the reply comprises at least one of machine readable instructions identifying a second unique identifier and machine readable instructions to display content to a user in the user interface; and when the reply comprises machine readable instructions to display content, display content in the user interface.

[0009] The natural language module may be a frontend module of a natural language program, such as a chatbot. The interpretation module may be a backend module of the natural language program. The message which is input from the user may be a natural language question identifying one or more problems with the mobile device. In this way, the mobile device uses a combination of natural language (chatbot) and diagnostics functionality, which is able to identify problems using a combination of natural language and diagnostic tests, without the need for a call centre, human operator or wizard-based UI. The system is relatively easy for a user to use and is effective in identifying problems. The natural language processing may be a standard process, for example as used in US2005/0245246, which describes a system of accessing services, applications or content by interpreting natural language message queries.

[0010] The determined response may further comprise machine readable instructions to display content to a user and the natural language module may be further configured to display the content in the user interface. Furthermore, the determined response may further comprise machine readable instructions to link at least a portion of the displayed content to the first unique identifier and the natural language module may be further configured to receive user input based on the linked portion of the displayed content.

[0011] The content may be a text message or any other suitable visual representation, e.g. an icon. An example of a text message to be displayed to a user is "This could be a problem with your device, or it could be a problem with your settings or with an app. Let's start by checking your settings. Tap here to check your settings". The portion of the text message which is actively linked to the first unique identifier may be "tap here to check your settings" and the portion may be actively linked so that the natural language module is able to determine when a user clicks on the text portion. An alternative example is "Let's check your loudspeaker. Tap here to start the loudspeaker test" and the portion of the text message which is actively linked to the first unique identifier may be "tap here to start the loudspeaker test". Alternatively, the whole of the text message may be linked to the first unique identifier so that effectively only active content is displayed to the user. It will be appreciated that these are merely examples.

[0012] Before the determined response which identifies a first unique identifier is received, the steps of receiving a message from the user and sending the message to an interpretation module may result in receipt of a determined response comprising machine readable instructions to display content to the user. For example, the content may request further clarification from the user. The natural language module may be configured to display the content to the user and this may result in the initial step of receiving a message from a user identified above.

[0013] The machine readable instructions identifying a first unique identifier may be in the form of '<a href="fixgenius://check-music-settings">'. It will be appreciated that this is merely one suitable example. The "href" component of the machine readable instructions enables the natural language module to parse the message to locate the "check-music-settings" diagnostic test. In other words, the details of the diagnostic test(s) may be encoded to run within (or alongside) the preconfigured chatbot responses. In an initial stage, the mobile device, for example the natural language module or the diagnostic module, may be configured to determine all the diagnostic tests which are available using the diagnostic module. The mobile device, for example the natural language module or the diagnostic module, may be configured to build a one-to-one mapping of all the determined diagnostic tests to map each diagnostic test to a unique identifier, i.e. the identifier is only used once in the mapping. Each unique identifier may identify a single test or multiple tests which are performed sequentially, simultaneously or a combination of the two. Each unique identifier may be used multiple times if the same test is run multiple times.
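The parsing step described above can be sketched as follows. This is an illustrative Python sketch only (an on-device implementation would more likely be in Java or Kotlin); the "fixgenius://" scheme and "check-music-settings" identifier appear in the specification, but the function names and the mapping entries are hypothetical.

```python
import re

# Hypothetical one-to-one mapping of unique identifiers to diagnostic tests,
# built in an initial stage by querying the diagnostic module (see [0013]).
TEST_MAPPING = {
    "check-music-settings": "MusicSettingsTest",
    "check-loudspeaker": "LoudspeakerTest",
}

def parse_instruction(response_text):
    """Extract the unique identifier from an href-encoded instruction such as
    <a href="fixgenius://check-music-settings">."""
    match = re.search(r'href\s*=\s*"fixgenius://([^"]+)"', response_text)
    return match.group(1) if match else None

def resolve_test(unique_id):
    """Look up the diagnostic test identified by the unique identifier."""
    return TEST_MAPPING.get(unique_id)
```

A response containing `<a href="fixgenius://check-music-settings">` would thus resolve to the music-settings diagnostic test via the mapping.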

[0014] The natural language module may be configured to send instructions to the diagnostic module using a function call. For example, the instructions may be in the form "run diagnostic test - check-music-settings". The results may be received as a callback to the function call. Alternatively, other similar mechanisms may be used to send the instructions and receive the results, for example a local network message and/or IPC call.
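The function-call-with-callback mechanism might look roughly like the following. This is a minimal sketch, not the actual implementation: the class, method and result fields are hypothetical, and the test result is simulated rather than produced by a real diagnostic.

```python
class DiagnosticModule:
    """Hypothetical diagnostic module: runs a named test and reports the
    results back through a callback, as described in [0014]."""

    def run_diagnostic_test(self, test_id, callback):
        # On a real device this would launch the actual test; here we
        # simulate a result purely for illustration.
        results = {"test": test_id, "status": "completed", "passed": True}
        callback(results)

received = []

def on_results(results):
    # The natural language module would forward these results to the
    # interpretation module; here we just record them.
    received.append(results)

DiagnosticModule().run_diagnostic_test("check-music-settings", on_results)
```

The callback decouples launching the test from handling its results, which also suits the alternative transports mentioned above (local network message or IPC call).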

[0015] The reply based on the results of the first diagnostic test may comprise machine readable instructions identifying a second unique identifier. At least a portion of the content displayed in response to the reply may be actively linked to the second unique identifier. The second unique identifier may identify a second diagnostic test of the diagnostic module. This may be particularly useful when the first identified test has not conclusively identified any problems with the mobile device. In these circumstances, the natural language module may be further configured to: identify the second diagnostic test of the diagnostic module from the second unique identifier; send instructions to the diagnostic module to automatically launch the identified second diagnostic test; receive results of the identified second diagnostic test from the diagnostic module; send the results to the interpretation module; receive a reply from the interpretation module based on the results of the second diagnostic test; wherein the reply based on the results of the second diagnostic test comprises at least one of machine readable instructions identifying a third unique identifier and machine readable instructions to display content to a user.

[0016] The natural language module may be configured to identify the second diagnostic test of the diagnostic module from the second unique identifier, for example, in response to the user input based on the portion of the displayed content which is linked to the second unique identifier. In other words, the diagnostic test is triggered by user input. Alternatively, the natural language module may be configured to identify the second diagnostic test of the diagnostic module from the second unique identifier at the same time as displaying any content which is contained in the reply. In this way the second diagnostic test may be triggered automatically without user input.

[0017] It will be appreciated that multiple diagnostic tests may be carried out. Furthermore, over the course of the conversation between the user and the natural language module, the responses and replies will provide a mix of displayed content and unique identifiers, and some of them may result in at least a portion of the displayed content being actively linked to a unique identifier so that user input is required to trigger the action related to the unique identifier. The ID-based system makes it easy to integrate some or all of the displayed content (e.g. a text message) with the active content, namely the link to a diagnostic test. Thus, the natural language module may be configured to repeat any or all of the steps of receiving of responses or replies, displaying of content, identifying diagnostic tests, sending instructions, receiving results and sending results as necessary. This will result in the sending of multiple, i.e. subsequent, unique identifiers, each of which uniquely identifies a diagnostic test(s).

[0018] The second or third or any subsequent unique identifier may identify an adjustment to a setting of the mobile device. This may be particularly useful when the one or more diagnostic tests have identified a problem and the problem can be solved by automatically adjusting a setting of the mobile device. For example, the problem may be that the volume has been set to mute and the solution is to adjust the volume. Thus, the problem can be identified and solved automatically.

[0019] The second or third or any subsequent unique identifier may identify an instruction to the mobile device to initiate a call. The call may be to a local store. For example, an example of content such as a text message to be displayed to a user is "There seems to be a problem with your device. We can book it in for a repair now. Tap here to book your device in for a repair". The portion of the text message which is actively linked to the unique identifier may be "tap here to book your device in for a repair" and the portion may be actively linked so that the device automatically calls the local store. This may be particularly useful when the one or more diagnostic tests have identified a problem and the problem cannot be solved by adjusting a setting of the mobile device. Alternatively, the subsequent unique identifier may be used to identify a web page, launch a wizard in the app or take any other suitable action.

[0020] Any one of the unique identifiers may identify an instruction to the mobile device to initiate a diagnostic test which relates to another device on the local network. The second (or third) unique identifier may identify a second diagnostic test to be performed on another device which is connected to the mobile device. Alternatively, the second diagnostic test may be run on the mobile device but may request information from the other device which is connected to the mobile device. The mobile device, for example the natural language module, may be configured to send instructions to the other device to automatically launch the identified second diagnostic test or to request information, e.g. a specific settings value. The mobile device may receive results of the identified second diagnostic test or the requested information. The results and/or requested information may be sent to the interpretation module which may then confirm that they are acceptable. The diagnostic test may check the connectivity status of another device on the local network or may check or set other settings of another device on the local network. The other device may be any device, e.g. a router, modem, Wi-Fi booster, set-top box, connected thermostat or doorbell.

[0021] The interpretation module may be located remotely from the mobile device, for example on a computing device such as a remote server which may be run by a third party, or may be located within the mobile device. The interpretation module may be configured to update a state in a state model and to determine the response and/or the reply based on the updated state in the state model. The state model may be stored in the remote server when the interpretation module is located on a remote server or may be stored on the mobile device. The state model may comprise a description of every possible state and how the model transitions from one state to another. The interpretation module may be configured to determine the response and/or reply using a look-up table. For example, the state model may include a look-up table listing each state against a corresponding response and/or reply.
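A state model of this kind, with a look-up table of per-state replies and the transitions between states, can be sketched as below. The state names, events and reply texts are hypothetical examples consistent with the conversation described earlier, not content from the specification.

```python
# Hypothetical state model: each state maps, via a look-up table, to a reply
# (content to display and/or a unique identifier), together with the
# transitions that move the conversation to the next state (see [0021]).
STATE_TABLE = {
    "start": {
        "reply": {"content": "Let's start by checking your settings.",
                  "unique_id": "check-music-settings"},
        "transitions": {"test_passed": "settings_ok",
                        "test_failed": "settings_problem"},
    },
    "settings_ok": {
        "reply": {"content": "Your settings look fine."},
        "transitions": {},
    },
    "settings_problem": {
        "reply": {"content": "We found a settings problem.",
                  "unique_id": "fix-music-settings"},
        "transitions": {},
    },
}

def determine_reply(state):
    """Look-up table step: state -> response and/or reply."""
    return STATE_TABLE[state]["reply"]

def update_state(state, event):
    """Transition step: advance the state model on an event (e.g. a
    diagnostic test result); unknown events leave the state unchanged."""
    return STATE_TABLE[state]["transitions"].get(event, state)
```

Each incoming message or test result updates the state, and the updated state then selects the next reply from the table.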

[0022] We also describe a computer program product, including a non-transitory computer readable medium (e.g. a removable storage medium such as one or more DVDs, CD-ROMs, disks etc.). The computer program product may be installed by any suitable software installation procedure. The computer readable medium provides at least a portion of the software instructions for the present invention. We also describe a system comprising the mobile device and a remote computing device comprising the interpretation module as described above.

BRIEF DESCRIPTION OF DRAWINGS

[0023] Figure 1A is a block diagram of a system incorporating a mobile device;

[0024] Figure 1B is a flowchart of a first method carried out in the system of Figure 1A;

[0025] Figures 2A to 2D are screenshots of the mobile device of Figure 1A implementing the method of Figure 1B;

[0026] Figure 3A is a flowchart of a second method carried out in the system of Figure 1A;

[0027] Figure 3B is a flowchart of a continuation of the first method of Figure 1B;

[0028] Figures 4A to 4E are screenshots of the mobile device of Figure 1A implementing the method of Figures 1B and 3B; and

[0029] Figure 5 shows an example of a state model which may be used.

DETAILED DESCRIPTION OF DRAWINGS

[0030] Figure 1A shows a schematic block diagram of a system including a mobile device 40. A mobile device is a portable and typically handheld computing device (or electronic device - the words can be used interchangeably) such as a mobile phone, particularly a smartphone, or a small computer, such as a tablet. In the arrangement shown, the mobile device 40 comprises a frontend module 20 and a diagnostics module 30. The mobile device also comprises a processor 38 (i.e. a microprocessor, central processing unit (CPU) or similar hardware) and memory 40. The memory may comprise volatile memory and/or non-volatile memory. Volatile memory is computer memory that requires power to maintain the stored information, unlike non-volatile memory. Volatile memory thus retains its contents while powered on, but when the power is interrupted, the stored data is quickly lost. These components are shown as separate components but can be combined into a single module as required. Similarly, each module could also be split into multiple components; for example, the diagnostics module 30 may be split into a separate component (or engine) for each diagnostic test.

[0031] The frontend module 20 comprises a user interface 24 which as described below may show active content for launching diagnostic test(s), display the status of diagnostic tests and/or display any results from the diagnostic tests. The frontend module 20 also comprises a user or client library 22 which stores a mapping between the diagnostic tests and their ID as explained below. The mapping can be created in the diagnostic module or elsewhere in the system. The client library 22 may be stored in a separate memory for the frontend module or in memory 40 for the mobile device. An example client library is shown in the table below:

[0032] In the example client library above, the unique ID for each test is similar to the name of the test, which may make it easier for a user to recognise the test being performed if they are aware of the ID. However, it will be appreciated that this does not need to be the case and any unique identifier, e.g. a number, may be used depending on the most appropriate format. The different tests may be grouped into classes. Again, in this example, all of the tests are in different classes because of the nature of the tests. However, more than one test may be included in each class to simplify the structure of the library. For example, there may be a plurality of tests which can be run on the loudspeaker but they may be grouped together in the LoudspeakerTest.class.
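The client library table referenced above does not appear to have survived extraction. Based on the surrounding description, its entries would have roughly the following shape; this Python sketch is illustrative only, and apart from "check-music-settings" and "LoudspeakerTest.class" (which appear in the source) every ID and class name is hypothetical.

```python
# Illustrative client library: maps each unique test ID to the class that
# implements it. As described in [0032], the IDs resemble the test names,
# and several tests may share one class (e.g. multiple loudspeaker tests
# grouped into LoudspeakerTest.class).
CLIENT_LIBRARY = {
    "check-music-settings": "MusicSettingsTest.class",
    "loudspeaker-test": "LoudspeakerTest.class",
    "loudspeaker-frequency-test": "LoudspeakerTest.class",  # same class
}

def lookup_test_class(unique_id):
    """Resolve a unique identifier to the test class that implements it."""
    return CLIENT_LIBRARY.get(unique_id)

def ids_for_class(class_name):
    """All unique IDs grouped under a given test class."""
    return sorted(uid for uid, cls in CLIENT_LIBRARY.items()
                  if cls == class_name)
```

The mapping stays one-to-one from ID to test, while still allowing many IDs to point into the same class.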

[0033] The diagnostics module also comprises a user interface 34 which as described below may show diagnostic test screens or generate diagnostic user interface components which may be embedded in the frontend user interface 24. There is also a diagnostics engine 32 which runs one or more diagnostic tests to identify possible hardware faults or undesirable device settings. The frontend module 20 and the diagnostics module 30 communicate with each other using a communications interface which is termed a bridge 26, 36. A function call (or similar instructions) may be sent from the frontend module to activate a test in the diagnostics module and a function call may be returned from the diagnostics module to the frontend module with the results of the test. If the modules are integrated into a single module, it will be appreciated that no bridges 26 are needed.

[0034] The messages which are displayed in the user interface 24 of the frontend module 20 may be generated using a chatbot (or similar) program which may be run by a third party. The frontend module may thus be termed a natural language module and the terms are used interchangeably throughout the specification. An example of a suitable chatbot program is Dialogflow™. The user interface 24 of the frontend module 20 thus allows a user to communicate with the mobile device in a conversational style by displaying natural language messages to the user and by allowing a user to input natural language messages through the user interface 24. The chatbot program may be provided using a separate backend module 10 which may be remotely located (i.e. located at a different location from the mobile device). In the example of Figure 1A, the backend module 10 communicates with the frontend module 20 via a network such as the internet, e.g. using HTTPS communication. The backend module 10 may thus be provided on a separate server or computing device (i.e. electronic device) which may provide advantages such as increased processing power. Alternatively, the backend module 10 may be integrated onto the mobile device 40 which may provide different advantages such as increased security or use of the functionality without an internet connection.

[0035] The backend module 10 comprises a processor 18 (i.e. CPU or similar) and a state model 12 which may be stored in memory, which may include volatile and/or non-volatile memory. The state model 12 comprises a plurality of states, each of which represents a possible state of the natural language conversation of the chatbot program. There may also be a look-up table which maps each state to a natural language response and/or machine readable instructions for launching diagnostic operations. The natural language conversation is parsed by a natural language parser 14, e.g. Dialogflow by Google™. It will be appreciated that any appropriate natural language parser may be used. As explained below, the backend module 10 may thus be used to interpret the incoming message from the user and thus the backend module 10 may be termed an interpretation module and the terms may be used interchangeably throughout the specification.

[0036] The system may also comprise another device 140 which is connected to the mobile device 40. It will be appreciated that although only one device is shown, there may be multiple devices. Examples of other devices 140 include a router, modem, Wi-Fi booster, set-top box, connected thermostat or doorbell. The device 140 may include any standard components; for the sake of simplicity, only a diagnostics engine 132, which may run one or more diagnostic tests to identify possible hardware faults or undesirable device settings, and a bridge 136, via which the mobile device 40 communicates with the other device 140, are shown. A function call (or similar instructions) may be sent from the frontend module of the mobile device to activate a diagnostic test for the other device and a function call may be returned to the frontend module with the results of the test. Activation of the diagnostic test for the other device may comprise running the diagnostic test on the mobile device so as to request information, e.g. setting information, from the other device. Alternatively, activation of the diagnostic test for the other device may comprise instructing the diagnostic module on the other device to run the test. The diagnostic test may relate to settings, including connectivity, on the other device 140 and the instruction to run the test or request for information and the results/information may be communicated between the other device and the mobile device. The frontend module may process the received results/information in a similar manner to those received in relation to the mobile device itself.

[0037] Figure 1b illustrates how the components illustrated in Figure 1a may communicate with one another. Before the process can begin, the frontend module needs to determine from the diagnostic module how many diagnostic tests can be run by the diagnostic module and to build a mapping for these tests which will be used later in the process. This is illustrated by a single step S100 but may comprise several individual steps.
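
Step S100 above can be sketched as a start-up query that builds a mapping from unique identifier to test. The stub module and the `list_tests` method name are illustrative assumptions; the specification does not define how the diagnostics module enumerates its tests.

```python
# Sketch of step S100: at start-up the frontend queries the diagnostics
# module for its available tests and builds a mapping from unique ID to
# test, used later in look-up steps such as S120. Names are assumptions.

class DiagnosticsModule:
    """Stub diagnostics module exposing its available tests."""

    def list_tests(self):
        # In practice this would enumerate the tests built into the
        # module; these two IDs echo examples used elsewhere in the text.
        return [
            ("check-music-setting-diagnostic", lambda: "pass"),
            ("check-loudspeaker", lambda: "fail"),
        ]


def build_test_mapping(module):
    """Build {unique_id: test} so later active content can be resolved."""
    return dict(module.list_tests())


mapping = build_test_mapping(DiagnosticsModule())
print(sorted(mapping))  # ['check-loudspeaker', 'check-music-setting-diagnostic']
```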

[0038] Once the user has launched the chatbot program, the user can input a natural language message which describes a problem, e.g. “my music won’t play”, to the frontend module (step S102). The message is then sent to the backend module (step S104) from the frontend module, for example using HTTPS or a similar secure protocol. The backend module then parses the text to interpret the message (step S106) and changes the state in the state model if needed (step S108). Once the state is updated as appropriate, the backend module determines the correct response to the input message (step S110). The determining step may comprise using a look-up table which may be stored in memory on the backend module.

[0039] The response is then sent from the backend module to the frontend module (step S112), for example using HTTPS or a similar secure protocol. The response may comprise instructions to the frontend module to display a natural language text message to a user in the user interface of the frontend module, for example “This could be a problem with your device, or it could be a problem with your settings or with an app. Let’s start by checking your settings.” Together with the text, the response may also comprise active content which can be selected by a user to trigger a diagnostic test. The message may thus comprise a portion which provides instructions to the frontend module, e.g. <br><br><a href=”fixgenius://check-music-setting-diagnostic”>. The “href” component has a special meaning and these instructions link to a test which can be run by the diagnostic module. The link comprises a unique identifier (or ID) which uniquely identifies the test. The response may also comprise instructions to display a message to the user, e.g. “tap here to check your settings”, which when selected operates the link and the diagnostic test(s). Alternatively, the diagnostic test may be run automatically as soon as the response is received (i.e. without user instructions). The response is part of a natural language conversation between the user and the chatbot.
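
Extracting the unique identifier from active content of the kind shown above might look as follows. The `fixgenius://` scheme is taken from the example in the text; the regular-expression approach and function name are assumptions about one possible implementation.

```python
# Sketch of how the frontend might extract unique test identifiers from
# active content such as <a href="fixgenius://check-music-setting-diagnostic">.
# The parsing approach is an illustrative assumption.

import re


def extract_test_ids(response_text):
    """Return the unique identifiers embedded in fixgenius:// links."""
    return re.findall(r'href="fixgenius://([^"]+)"', response_text)


response = ("This could be a problem with your settings. "
            '<a href="fixgenius://check-music-setting-diagnostic">'
            "tap here to check your settings</a>")
print(extract_test_ids(response))  # ['check-music-setting-diagnostic']
```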

[0040] Once the frontend module has received the response, it parses the response (step S114) and extracts the text to be displayed to the user as well as any active content. The frontend module then displays any extracted content, e.g. text together with any active content (step S116). The text or other representation of the active content may be highlighted for user interaction. The frontend module then processes the user input when it is received (step S118) from the displayed active content, e.g. the user taps on the displayed message “tap here to check your settings”. Using the client library, the frontend module looks up the test(s) (step S120) which corresponds to the unique identifier linked to the user input. Alternatively, the test(s) may be automatically triggered without user input, with the frontend module looking up the test(s) which correspond to the unique identifier on receipt of the response. The response may still comprise text to be displayed to a user but none of the text may be linked to the unique identifier as active content as described above. The test might run as a small window within the chatbot discussion user interface.

[0041] Once the test(s) has been identified, the frontend module automatically sends a function call to the diagnostics module (in the mobile device or other device) to trigger the one or more identified test(s) (step S122). The diagnostics module receives the function call, e.g. via the bridges described above, and automatically invokes and begins the identified test(s) (step S124). The test may include a short test, e.g. one which runs in a few seconds or less, such as a check to see if the music volume is set to mute. Alternatively, the test may include one which runs at repeated intervals over several hours, e.g. a day. A longer test may be useful for detecting intermittent errors, particularly ones which are not occurring at the time the user initiates the conversation but have recently occurred. The advantage of this automatic triggering of the test is that the system is not reliant on a human operator to trigger the test or interpret the results and is thus likely to be more reliable, particularly for longer tests.
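
A longer test of the kind described above, sampling a check at repeated intervals to catch intermittent faults, can be sketched as follows. The intervals are shortened here so the example runs quickly; real values might be minutes or hours, and the function name and signature are assumptions.

```python
# Sketch of a repeated-interval test for intermittent faults. The check
# function, run count and interval are illustrative assumptions.

import time


def repeated_test(check, runs=3, interval=0.01):
    """Run `check` several times; fail if any run detects the fault."""
    outcomes = []
    for _ in range(runs):
        outcomes.append(check())
        time.sleep(interval)  # in practice, minutes or hours apart
    return "pass" if all(outcomes) else "fail"


flaky = iter([True, False, True])  # simulated intermittent fault
print(repeated_test(lambda: next(flaky)))  # fail
```

A single-shot check would miss the middle failure; sampling over time is what lets the test catch a fault that is not occurring at the moment the conversation starts.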

[0042] Once the diagnostic test has been completed, the diagnostic module sends the results to the frontend module as a callback from the function call (step S126). For example, the test result may be a pass and thus the format may be Callback:Test result (pass). The frontend module then sends the results to the backend module (step S128) in a machine readable format, for example in the format HTTPS:{“testresult”:”pass”}. The backend module then updates the state model as necessary (step S130) and determines the appropriate response (step S132), e.g. using the look-up table as before. The response is then sent to the frontend module (step S134) which then repeats the steps of parsing the content of the response and displaying the extracted content.
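
The machine-readable result message described in paragraph [0042] can be sketched with standard JSON serialisation. The `{"testresult": "pass"}` shape follows the example in the text; the transport (the HTTPS request itself) is omitted and the function names are assumptions.

```python
# Sketch of the machine-readable result sent from the frontend to the
# backend over HTTPS (step S128), following the format in the text:
# HTTPS:{"testresult":"pass"}. Function names are assumptions.

import json


def format_result(result):
    """Frontend side: serialise a test result as an HTTPS request body."""
    return json.dumps({"testresult": result})


def parse_result(body):
    """Backend side: recover the result so the state model can update."""
    return json.loads(body)["testresult"]


body = format_result("pass")
print(body)                # {"testresult": "pass"}
print(parse_result(body))  # pass
```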

[0043] Figures 2a to 2d show screenshots of the user interface 50 on a mobile device as the steps of Figure 1b are carried out. As shown in Figure 2a, once the chatbot program has been triggered, the user interface 50 displays an input box 52 in which a user can enter a natural language question such as “my music doesn’t play”. The question may be input using a keyboard or by voice input or any other suitable user input. Figure 2b shows the result of the step S116 in Figure 1b. The message from the user is displayed in a first chat box 54 and the response is displayed in a second chat box 56 which in this arrangement is below the first chat box 54, although it will be appreciated that this is just one suitable arrangement. The input box in which a user can input messages is still present on the user interface too. The message displayed to the user comprises normal text, i.e. “let’s start by checking your settings”, together with text which represents active content in the form of “tap here to run the tests” which is linked to a test as described above.

[0044] After the test has been run, a response from the backend module is sent in step S134 as described above. In this example, the test showed a problem with the volume setting. Figure 2c shows the message which is displayed to the user which is generated from the response from the backend module. The message is displayed in a third box 58 which in this arrangement is below the second chat box 56. Once again, the message to the user comprises normal text which in this case explains the result of the test, i.e. “your music volume is set to 0 (mute), select one of the following to fix it:”. There is also text which represents active content which is listed as one or more fixes or automatic adjustments. There is also a choice of three different text messages, “set music volume to 25% (or 50% or 100%)”, each of which is linked to the appropriate automatic adjustment on the mobile device. Like the diagnostic test, each automatic adjustment has a unique identifier.

[0045] A user then selects the appropriate fix, e.g. clicks on “set music to 50%”. This triggers another function call (or similar mechanism), this time directly to the mobile device processor which triggers the requested fix. Alternatively, the function call may route through the diagnostic module to the device operating system and on to the processor. The result of the fix is passed back to the frontend module which sends the results to the backend module. The state is updated as necessary and a response is sent to the frontend module. Figure 2d shows the result of the frontend module parsing the response. In this case, there is only text to be displayed without active content and the text notifies the user that the fix has been successfully applied.
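
The fix mechanism of paragraphs [0044] and [0045], where each automatic adjustment has its own unique identifier, can be sketched as a table of adjustments keyed by ID. The fix IDs and the dictionary-based device state are illustrative assumptions.

```python
# Sketch of applying a user-selected fix: each automatic adjustment has
# a unique identifier, and selecting the active content triggers the
# corresponding adjustment. All names here are illustrative assumptions.

FIXES = {
    "set-music-volume-25": lambda device: device.update(music_volume=25),
    "set-music-volume-50": lambda device: device.update(music_volume=50),
    "set-music-volume-100": lambda device: device.update(music_volume=100),
}


def apply_fix(fix_id, device_state):
    """Apply the adjustment identified by its unique ID, then report
    the outcome back so it can be forwarded to the backend module."""
    FIXES[fix_id](device_state)
    return {"fixresult": "applied"}


device = {"music_volume": 0}
result = apply_fix("set-music-volume-50", device)
print(device["music_volume"])  # 50
```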

[0046] Figure 3a shows an alternative to the method of Figure 1b. As before, the method is triggered when the user has launched the chatbot program and input a natural language message to the frontend module (step S202). The message is then sent to the backend module (step S204) from the frontend module. The backend module then parses the text to interpret the message (step S206) and changes the state in the state model if needed (step S208). Once the state is updated as appropriate, the backend module determines the correct response to the input message (step S210).

[0047] The response is then sent from the backend module to the frontend module (step S212). The response may comprise a text message to display to a user in the user interface of the frontend module, for example “Let’s check your loudspeaker.” Together with the text, the response may also comprise active content which can be selected by a user to trigger a diagnostic test. The message may thus comprise a portion which provides instructions to the frontend module, e.g. <a href=”fixgenius://check-loudspeaker”>. Once again there is a unique identifier which uniquely identifies the test. The response may also comprise instructions to display a message to the user, e.g. “tap here to start the loudspeaker test”, which when selected operates the link.

[0048] Once the frontend module has received the response, it parses the response (step S214) and extracts the text to be displayed to the user as well as the text representing the active content. The frontend module then displays any text together with any active content (step S216). The frontend module then processes the user input when it is received (step S218) from the displayed active content, e.g. the user taps on the displayed message “tap here to start the loudspeaker test”. Using the client library, the frontend module looks up the test (step S220) which corresponds to the unique identifier linked to the user input.

[0049] Once the test has been identified, the frontend module automatically sends a function call (or similar instructions) to the diagnostics module to trigger the identified test (step S222). In this case, the test requires user input to determine whether or not the test is successful and thus when running the test at step S224, the user interface also displays a button or similar for a user to tap. This button may be generated in the user interface of the diagnostics module and embedded in the user interface of the frontend module. At step S226, the diagnostic module determines whether or not it has received user input and what the result of the test is based on the user input.

[0050] Once the user input is received and processed, the diagnostic module sends the results to the frontend module as a callback from the function call (step S228). For example, the test result may be a fail and thus the format may be Callback:Test result (fail). The frontend module then sends the results to the backend module (step S230) in a machine readable format, for example in the format HTTPS:{“testresult”:”fail”}. The backend module then updates the state model as necessary (step S232) and determines the appropriate response (step S234), e.g. using the look-up table as before. The response is then sent to the frontend module (step S236) which then repeats the steps of parsing the content of the response and displaying the extracted content.
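
The interactive test of paragraphs [0049] and [0050], in which the result depends on the user's answer rather than an automatic check, can be sketched as follows. The user input is simulated by a callable; in a real implementation it would come from the "yes"/"no" buttons shown in the diagnostics user interface.

```python
# Sketch of an interactive diagnostic test: the result is decided by
# the user's yes/no input (step S226) and reported back as a callback
# (step S228). Function names and the input mechanism are assumptions.

def loudspeaker_test(get_user_input):
    """Decide the test result from the user's answer to the on-screen
    question, e.g. "can you hear the playing music?" with yes/no buttons."""
    answer = get_user_input()
    return "pass" if answer == "yes" else "fail"


def on_result(result):
    # Stand-in for the callback to the frontend module, following the
    # Callback:Test result (fail) format given in the text.
    return f"Callback:Test result ({result})"


print(on_result(loudspeaker_test(lambda: "no")))  # Callback:Test result (fail)
```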

[0051] It will be appreciated that more than one diagnostic test can be used to determine the cause of the problem which the user has reported. For example, Figure 3b shows how the test described in Figure 3a can be triggered after the test in Figure 1b. Figure 3b thus shows the steps after the response has been received in step S134 of Figure 1b. The frontend module parses the response (step S314) and extracts the text to be displayed to the user as well as the active content. The frontend module then displays any text together with any active content (step S316). The frontend module then processes the user input when it is received (step S318) from the displayed active content, e.g. the user taps on the displayed message “tap here to start the loudspeaker test”. Using the client library, the frontend module looks up the test (step S320) which corresponds to the unique identifier linked to the active content. [0052] Once the test has been identified, the frontend module automatically sends a function call to the diagnostics module to trigger the identified test (step S322). In this case, the test requires the user input to determine whether or not the test is successful and thus when running the test at step S324, the user interface also displays a button or similar for a user to tap. At step S326, the diagnostic module determines whether or not it has received user input and what the result of the test is based on the user input.

[0053] Once the user input is received and processed, the diagnostic module sends the results to the frontend module as a callback from the function call (step S328). The frontend module then sends the results to the backend module (step S330) in a machine readable format. The backend module then updates the state model as necessary (step S332) and determines the appropriate response (step S334), e.g. using the look-up table as before. The response is then sent to the frontend module (step S336).

[0054] The frontend module then repeats the steps of parsing the content of the response (step S338) and displaying the extracted content (step S340). In this example, the response may comprise instructions to display natural language, e.g. “there seems to be a problem with your phone. We can book it in for a repair now, or you can visit your local store”. There is also active content which is provided in instructions in the response to the frontend module which identify a unique action to be carried out, e.g. <a href=”fixgenius://launch-repair-booking”>. The text which is displayed with the link is “tap here to book your device in for a repair” and this text is also identified in instructions embedded in the response. When the frontend module determines that the user input is received (step S362), the frontend module communicates with the processor of the mobile device to launch the link (step S364) and to make the call to the local store. This call is triggered automatically by the selection of the link by the user, e.g. by a function call direct to the mobile device processor to trigger a call. Alternatively, the active content may contain a unique identifier which may be used to identify a web page, launch a wizard in the app or take any other suitable action.

[0055] Figures 4a to 4e illustrate the method of Figure 1b combined with Figure 3b. Figures 4a to 4e show screenshots of the user interface 150 on a mobile device. As shown in Figure 4a, once the chatbot program has been triggered, the user interface 150 displays an input box 152 in which a user can enter a natural language question such as “my music doesn’t play”. Figure 4b shows the result of the step S116 in Figure 1b. The message from the user is displayed in a first chat box 154 and the response is displayed in a second chat box 156. The message displayed to the user comprises normal text, i.e. “This could be a problem with your setting or with an app. Let’s start by checking your settings”, together with active content in the form of the text “tap here to check your settings” which is linked to a test as described above.

[0056] After the test has been run, a response from the backend module is sent in step S134 as described above. In this case, the test showed no problems. Figure 4c shows the message to the user which is generated from the response from the backend module. The message is displayed in a third box 158 which in this arrangement is below the second chat box 156. Once again, the message to the user comprises normal text which in this case explains the result of the test, i.e. “no problems were detected. Let’s check your loudspeaker”. There is also active content in the form of the text “tap here to start the loudspeaker test” linked to the appropriate automatic test on the mobile device. For example, if the user can’t hear the music they select “no”. This triggers an update to the state model to “device fault”, and a response which offers the user a chance to book the device in for repair.

[0057] In this case, the test requires interaction from the user to determine the result of the test as described in relation to Figure 3b. Thus the test is an activity with which the user needs to engage rather than a background test such as that described above in relation to Figures 2a to 2d. Figure 4d shows the display which is presented to the user while the interactive test is being run (e.g. as at step S324 in Figure 3b). The user interface comprises a text display, “can you hear the playing music?”, together with user input mechanisms in the form of “yes” and “no” buttons 160. Thus, in this example the display is launched in a new screen rather than being embedded in the user interface for the frontend module. A user clicks on the appropriate button, in this case “no” because there is no sound. The diagnostics module then sends a callback to the frontend module with the test results so that the frontend module can send a response to the backend module to update the state model and select an appropriate response.

[0058] Figure 4e shows the display after the response from the backend module has been received and processed by the frontend module. The user interface 150 shows a fourth box 162 with text explaining the test results, e.g. “there seems to be a problem with your phone. We can book it in for a repair now, or you can visit your local store”. There is also active content with the text “tap here to book your device in for a repair” which as explained above can be used by the user to automatically trigger a phone call to the repair shop.

[0059] Figure 5 is an example of a state model which may be used in the method and apparatus described above. The state model comprises a plurality of states and actions for moving between the various states. To reach the first state in the model, the “draw user interface model” state 200, the first action S400 is to start and launch the user interface for the frontend module. To move to the second state in the model, the “waiting for user input” state 202, the action is that the model is drawn S402. From the second state, there are two accessible states: the “looking up selected active content ID” state 204 which is accessed by a user selecting active content S404 and the “sending message to chatbot backend” state 206 which is accessed by a user entering a message.

[0060] If it is determined that the user has selected active content, the state model is updated to move the state to “looking up selected active content ID” 204. As explained above, the active content may be for a test and/or a fix. Accordingly, there are two accessible states: the “applying fix” state 208 which is reached by the active content applying a fix S408 and the “running diagnostic test” state 210 which is reached by the active content launching a diagnostic test. If it is determined that a fix is applied, the next state which is accessible is the “sending message to chatbot backend” state 206 and this is reached once the fix is applied S412. Similarly, the next state from the “running diagnostic test” state 210 is also the “sending message to chatbot backend” state 206 and this is reached either if the test fails S414 or if the test is passed S416.

[0061] From the “sending message to chatbot backend” state 206, only one accessible state, which is the “waiting for backend response” state 212, is shown in the Figure and this is accessed by a message being sent S418. An example of this is step S104 in Figure 1b in which the user sends the initial message to the backend module. Step S104 represents a transition from the “draw UI model” state 200 through the “waiting for user input” state 202 straight to the “sending message to chatbot backend” state 206. By contrast, another example of the transition from the “sending message to chatbot backend” state 206 to the “waiting for backend response” state 212 occurs in step S128 of Figure 1b in which the successful test results are sent to the backend module. Step S128 represents a transition from the “waiting for user input” state 202 to the “sending message to chatbot backend” state 206 via the “looking up selected active content ID” state 204 and the “running diagnostic test” state 210. It will also be appreciated that the user may enter a new message even when the state machine is in any other state and in these circumstances it will be possible to transition to the “sending message to chatbot backend” state 206 from any other state. However, these transitions have been omitted from the Figure for simplicity.

[0062] When the backend module has sent a response S418, for example either as in step S112 or step S134 in Figure 1b, the model can be updated to the “processing backend response” state 214. As shown in Figure 1b, the processing may include parsing the response, displaying the extracted content, etc. Once the response has been processed, the user interface model can be updated with the received message and/or active content S420 and the state can be updated to the initial “draw UI model” state 200.
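
The state model of Figure 5 as described in paragraphs [0059] to [0062] can be sketched as a transition table keyed by (state, action) pairs. The state names follow the figure description; the action labels and the table form itself are illustrative assumptions.

```python
# Sketch of the Figure 5 state model as a transition table.
# State names follow the description; action labels are assumptions.

TRANSITIONS = {
    ("draw UI model", "model drawn"): "waiting for user input",
    ("waiting for user input", "select active content"): "looking up selected active content ID",
    ("waiting for user input", "enter message"): "sending message to chatbot backend",
    ("looking up selected active content ID", "apply fix"): "applying fix",
    ("looking up selected active content ID", "launch test"): "running diagnostic test",
    ("applying fix", "fix applied"): "sending message to chatbot backend",
    ("running diagnostic test", "test passed"): "sending message to chatbot backend",
    ("running diagnostic test", "test failed"): "sending message to chatbot backend",
    ("sending message to chatbot backend", "message sent"): "waiting for backend response",
    ("waiting for backend response", "response received"): "processing backend response",
    ("processing backend response", "UI updated"): "draw UI model",
}


def step(state, action):
    """Follow one action (edge) from the current state."""
    return TRANSITIONS[(state, action)]


# The path taken for step S128: active content selected, the diagnostic
# test run and passed, and the result sent to the backend.
state = "waiting for user input"
for action in ("select active content", "launch test", "test passed", "message sent"):
    state = step(state, action)
print(state)  # waiting for backend response
```

As the text notes, a real implementation would also allow a transition to “sending message to chatbot backend” from any state when the user types a new message; those edges are omitted here for the same reason they are omitted from the Figure.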

[0063] It will be appreciated that this is just one example of a state model and other implementations may be used. The updating may be done by one or both of the frontend and backend modules depending on the nature of the model. If the backend module alone updates the state model, multiple states may be stepped through in a single update depending on the steps being carried out at the frontend module.

[0064] At least some of the example embodiments described herein may be constructed, partially or wholly, using dedicated special-purpose hardware. Terms such as ‘component’, ‘module’ or ‘unit’ used herein may include, but are not limited to, a hardware device, such as circuitry in the form of discrete or integrated components, a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks or provides the associated functionality. In some embodiments, the described elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processors. These functional elements may in some embodiments include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Although the example embodiments have been described with reference to the components, modules and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements. Various combinations of optional features have been described herein, and it will be appreciated that described features may be combined in any suitable combination. In particular, the features of any one example embodiment may be combined with features of any other embodiment, as appropriate, except where such combinations are mutually exclusive. Throughout this specification, the term “comprising” or “comprises” means including the component(s) specified but not to the exclusion of the presence of others.

[0065] Attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

[0066] All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.

[0067] Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

[0068] The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.