

Title:
A DYNAMICALLY NETWORKED SOCIAL PLATFORM WITH A PREDICTIVE MODULE FOR SERVICES DELIVERY
Document Type and Number:
WIPO Patent Application WO/2019/059755
Kind Code:
A1
Abstract:
This invention relates to a method and system for predicting a service required by a user. The method comprises receiving input data from a mobile device, determining user state and action of user using a context engine, and predicting a service based on the user state and action using a deterministic model, probabilistic model and/or machine learning model. The method further comprises executing an appropriate action based on the outcome of the prediction, verifying actual action from user against appropriate action, and storing and transmitting of the outcome, appropriate action and actual action in the memory and a server respectively.

Inventors:
LIM, Chern Chuen (NO. 23, Jalan Firma 2 Kawasan Perindustrian Tebrau 2, Joho, Johor Bahru, 81100, MY)
Application Number:
MY2018/000028
Publication Date:
March 28, 2019
Filing Date:
September 25, 2018
Assignee:
MANJA TECHNOLOGIES SDN BHD (NO. 23, Jalan Firma 2 Kawasan Perindustrian Tebrau Iv, Joho, Johor Bahru, 81100, MY)
International Classes:
G06N5/04; G06F3/048; G06Q30/00; G06Q50/10
Domestic Patent References:
WO2016069956A1 (2016-05-06)
WO2014124333A1 (2014-08-14)
WO2013169912A2 (2013-11-14)
Foreign References:
US20120084248A1 (2012-04-05)
EP2608502A2 (2013-06-26)
Attorney, Agent or Firm:
KHOR, Pauline Hong Ping (Suite 33.01, Level 33, The Gardens North Tower, Mid Valley City, Lingkaran Syed Putra, Kuala Lumpur, 59200, MY)
Claims:
Claims

1. A system for predicting a service required by a user comprising:

a mobile device having a processor, a memory and instructions stored on the memory and executable by the processor to:

receiving input data;

determining a user state and action to obtain a context tag;

comparing the context tag with current user state;

executing a prediction module in response to determining a difference in the context tag with respect to current user state;

receiving a predicted outcome and an appropriate action from the prediction module;

executing the appropriate action;

verifying actual action from user against the appropriate action;

storing the context tag, predicted outcome, appropriate action and actual action in the memory; and

transmitting the context tag, predicted outcome, appropriate action and actual action to a server.

2. The system according to claim 1 wherein the prediction module comprises instructions to:

receiving the context tag and a total set of existing context tags;

applying a machine learning (ML) model using a learnt model received from the server to obtain a ML outcome;

retrieving the appropriate action associated to the ML outcome in response to the ML outcome being above a certain ML threshold;

applying a probabilistic model in response to the ML outcome being below the certain ML threshold to obtain a probabilistic outcome;

retrieving the appropriate action associated to the probabilistic outcome in response to the probabilistic outcome being above a certain probabilistic threshold;

applying a deterministic model in response to a probabilistic outcome from the probabilistic model being below a certain probabilistic threshold; and

retrieving the appropriate action associated to the deterministic outcome in response to the probabilistic outcome being above a certain probabilistic threshold.

3. The system according to claim 1 wherein the instruction to transmitting the predicted outcome, appropriate action and actual action to a server comprises instructions to:

determining if there is enough new data to be sent to the server;

obtaining the raw and related data in response to there being enough new data;

applying a one-time identifier and encrypting the raw and related data such that identity of user is masked.

4. The system according to claim 1 further comprising the server having a processor, a memory and instructions stored on the memory and executable by the processor to:

receiving the context tag, predicted outcome, appropriate action and actual action from the mobile device;

updating a learnt model of the user using a machine learning module; and

storing the context tag, predicted outcome, appropriate action and actual action.

5. The system according to claim 4 wherein the instruction to updating a learnt model of the user using a machine learning module comprises instructions to:

retrieving datasets associated to the mobile device and appending the data received from the mobile device onto the retrieved datasets;

determining if the appended datasets is above a predetermined number of datasets;

training a user specific learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets;

updating the user specific learnt model in the dataset associated to the mobile device; and

transmitting the user specific learnt model to the mobile device.

6. The system according to claim 4 wherein the instruction to updating a learnt model of the user using a machine learning module comprises instructions to:

retrieving datasets from a group relevant to the mobile device and appending the data received from the mobile device onto the retrieved datasets;

determining if the appended datasets is above a predetermined number of datasets;

training a generalized learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets;

updating the generalized learnt model in the dataset associated to the mobile device; and

transmitting the generalized learnt model to the mobile device.

7. A method specifically for predicting a service required by a user using a mobile device that is connectable to a server comprising:

receiving input data;

determining a user state and action to obtain a context tag;

comparing the context tag with current user state;

executing a prediction module in response to determining a difference in the context tag with respect to current user state;

receiving a predicted outcome and an appropriate action from the prediction module;

executing the appropriate action;

verifying actual action from user against the appropriate action;

storing the context tag, predicted outcome, appropriate action and actual action; and

transmitting the context tag, predicted outcome, appropriate action and actual action to the server.

8. The method according to claim 7 wherein the prediction module executes steps comprising:

receiving the context tag and a total set of existing context tags;

applying a machine learning (ML) model using a learnt model received from the server to obtain a ML outcome;

retrieving the appropriate action associated to the ML outcome in response to the ML outcome being above a certain ML threshold;

applying a probabilistic model in response to the ML outcome being below the certain ML threshold to obtain a probabilistic outcome;

retrieving the appropriate action associated to the probabilistic outcome in response to the probabilistic outcome being above a certain probabilistic threshold;

applying a deterministic model in response to a probabilistic outcome from the probabilistic model being below a certain probabilistic threshold; and

retrieving the appropriate action associated to the deterministic outcome in response to the probabilistic outcome being above a certain probabilistic threshold.

9. The method according to claim 7 wherein the step of transmitting the predicted outcome, appropriate action and actual action to the server comprises:

determining if there is enough new data to be sent to the server;

obtaining the raw and related data in response to there being enough new data;

applying a one-time identifier and encrypting the raw and related data such that identity of user is masked.

10. The method according to claim 7 further comprising the server to:

receiving the context tag, predicted outcome, appropriate action and actual action from the mobile device;

updating a learnt model of the user using a machine learning module; and

storing the context tag, predicted outcome, appropriate action and actual action.

11. The method according to claim 10 wherein the step of updating a learnt model of the user using a machine learning module comprises:

retrieving datasets associated to the mobile device and appending the data received from the mobile device onto the retrieved datasets;

determining if the appended datasets is above a predetermined number of datasets;

training a user specific learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets;

updating the user specific learnt model in the dataset associated to the mobile device; and

transmitting the user specific learnt model to the mobile device.

12. The method according to claim 10 wherein the step of updating a learnt model of the user using a machine learning module comprises:

retrieving datasets from a group relevant to the mobile device and appending the data received from the mobile device onto the retrieved datasets;

determining if the appended datasets is above a predetermined number of datasets;

training a generalized learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets;

updating the generalized learnt model in the dataset associated to the mobile device; and

transmitting the generalized learnt model to the mobile device.

Description:
A DYNAMICALLY NETWORKED SOCIAL PLATFORM WITH A PREDICTIVE MODULE FOR SERVICES DELIVERY

Field of invention

This invention relates to a method and a system that predicts the intention of a service, as well as the personalized parameters for a service required by a user, in a manner which is extensible by a reusable open framework, whilst still maintaining the privacy of users on mobile or other portable/wearable devices.

Background

There are currently a few different options for interfacing between users and intelligent devices, as well as between users. One example is that provided by Siri, Google Now and the like, where these assistants provide users with a single interface to understand users in order for users to carry out some functionality. However, these platforms are usually single directional, involving users issuing instructions to the assistant. In Alexa, the concept behind the devices is the use of a single physical interface to connect users with other devices or information at a particular interface. Common group messaging platforms (interfacing users with users) such as WhatsApp, Google Chat and Allo enable users to interface with other users within their own social circle, where the main form of machine-human interface is the implementation of chat-bots. On-demand operating systems such as iOS and Google Android allow users to access "apps" to deliver services. However, the above-mentioned examples do not enable users to be immediately networked with whatever is of interest around them in order to access services in a seamless manner.

Thus, those skilled in the art are constantly striving to provide a method and a system that predicts a service required by a user more efficiently and accurately.

Summary of Invention

The above and other problems are solved and an advance in the art is provided by a method and/or a system in accordance with this disclosure. A first advantage of a method and/or system in accordance with embodiments of this disclosure is that it allows machines to interact with humans to obtain services or information, where various applications are activated automatically or predictively suggested so that applications can be started on an on-demand basis, and allows information and services to be accessed in a more seamless manner. A second advantage is that the system and method allow for a more pre-emptive activation of functions and provision of inputs. A third advantage is that the system and method allow for a scalable framework for other service providers to live on the platform. A fourth advantage is that the system and method allow early suggestions to be provided without a trained data model. A fifth advantage is that the system and method allow other inputs/specifications of a service to be pre-empted for personalized service, and allow multiple service providers to assist users through an easy-to-scale framework while maintaining privacy.

A first aspect of the disclosure relates to a system for predicting a service required by a user comprising: a mobile device having a processor, a memory and instructions stored on the memory and executable by the processor to: receive input data; determine user state and action of user using a context engine; transmit the user state and action to a prediction module; receive an outcome from the prediction module; execute an appropriate action based on the outcome of the prediction module; verify actual action from user against appropriate action; store the outcome, appropriate action and actual action in the memory; and transmit the outcome, appropriate action and actual action to a server.

A second aspect of the disclosure relates to a system for predicting a service required by a user comprising: a mobile device having a processor, a memory and instructions stored on the memory and executable by the processor to: receiving input data; determining a user state and action to obtain a context tag; comparing the context tag with current user state; executing a prediction module in response to determining a difference in the context tag with respect to current user state; receiving a predicted outcome and an appropriate action from the prediction module; executing the appropriate action; verifying actual action from user against the appropriate action; storing the context tag, predicted outcome, appropriate action and actual action in the memory; and transmitting the context tag, predicted outcome, appropriate action and actual action to a server.
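The device-side sequence recited in this aspect can be sketched, purely for illustration, as one processing cycle. All function and field names below are hypothetical and are not prescribed by the disclosure; the callable parameters stand in for the prediction module, the action executor and the observer of the user's actual action:

```python
# Minimal, self-contained sketch of the device-side cycle: compare the
# context tag with the current user state, predict only on a change,
# execute and verify the suggested action, then store and queue the record.

def run_device_cycle(context_tag, current_state, predict, execute, observe,
                     memory, server_queue):
    """One cycle of the second-aspect flow; names are illustrative only."""
    # Execute the prediction module only when the context tag differs
    # from the current user state.
    if context_tag == current_state:
        return None

    predicted_outcome, appropriate_action = predict(context_tag)
    execute(appropriate_action)

    # Verify the user's actual action against the suggested one.
    actual_action = observe()
    record = {
        "context_tag": context_tag,
        "predicted_outcome": predicted_outcome,
        "appropriate_action": appropriate_action,
        "actual_action": actual_action,
    }
    memory.append(record)        # store in the device memory
    server_queue.append(record)  # queue for transmission to the server
    return record
```

When the context tag matches the current state, no prediction is made and nothing is recorded, which mirrors the "in response to determining a difference" condition in the claim.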

According to an embodiment of the second aspect of the disclosure, the prediction module comprises instructions to: receiving the context tag and a total set of existing context tags; applying a machine learning (ML) model using a learnt model received from the server to obtain a ML outcome; retrieving the appropriate action associated to the ML outcome in response to the ML outcome being above a certain ML threshold; applying a probabilistic model in response to the ML outcome being below the certain ML threshold to obtain a probabilistic outcome; retrieving the appropriate action associated to the probabilistic outcome in response to the probabilistic outcome being above a certain probabilistic threshold; applying a deterministic model in response to a probabilistic outcome from the probabilistic model being below a certain probabilistic threshold; and retrieving the appropriate action associated to the deterministic outcome in response to the probabilistic outcome being above a certain probabilistic threshold.
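The tiered fallback recited in this embodiment (ML model first, then probabilistic model, then deterministic model) can be sketched as follows. The model callables, the threshold values and the action lookup table are hypothetical placeholders, not part of the disclosure:

```python
# Illustrative sketch of the prediction cascade: each tier is consulted
# only when the previous tier's outcome falls below its threshold, with
# the deterministic model as the unconditional fallback.

def predict_action(context_tag, ml_model, prob_model, det_model, actions,
                   ml_threshold=0.8, prob_threshold=0.6):
    # 1) Machine-learning model using the learnt model from the server.
    ml_outcome, ml_score = ml_model(context_tag)
    if ml_score >= ml_threshold:
        return actions[ml_outcome]

    # 2) Probabilistic model when the ML outcome is below its threshold.
    prob_outcome, prob_score = prob_model(context_tag)
    if prob_score >= prob_threshold:
        return actions[prob_outcome]

    # 3) Deterministic model as the final fallback.
    det_outcome = det_model(context_tag)
    return actions[det_outcome]
```

This ordering lets a device fall back to rule-based behaviour before any learnt model has been trained, which matches the disclosure's stated advantage of providing early suggestions without a trained data model.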

According to an embodiment of the second aspect of the disclosure, the instruction to transmitting the predicted outcome, appropriate action and actual action to a server comprises instructions to: determining if there is enough new data to be sent to the server; obtaining the raw and related data in response to there being enough new data; applying a one-time identifier and encrypting the raw and related data such that identity of user is masked.
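This transmission step can be sketched as follows. The batch size is a hypothetical placeholder, and the `encrypt` callable stands in for a vetted symmetric cipher (e.g. AES via a cryptography library), which the disclosure does not specify:

```python
# Sketch of the upload step: send only when enough new data has
# accumulated, strip persistent identity, tag the batch with a fresh
# one-time identifier, and encrypt the payload.

import json
import secrets

def prepare_upload(new_records, encrypt, min_batch=50):
    # Only transmit when there is enough new data.
    if len(new_records) < min_batch:
        return None

    one_time_id = secrets.token_hex(16)  # fresh identifier per upload
    payload = {
        "id": one_time_id,  # no persistent user identifier is sent
        "records": [
            {k: v for k, v in r.items() if k != "user_id"}  # mask identity
            for r in new_records
        ],
    }
    return encrypt(json.dumps(payload).encode("utf-8"))
```

Because the identifier is regenerated for every batch, the server can correlate records within an upload but cannot link uploads back to a named user, which is the masking effect the embodiment describes.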

According to an embodiment of the second aspect of the disclosure, the system further comprises the server having a processor, a memory and instructions stored on the memory and executable by the processor to: receiving the context tag, predicted outcome, appropriate action and actual action from the mobile device; updating a learnt model of the user using a machine learning module; and storing the context tag, predicted outcome, appropriate action and actual action.

According to an embodiment of the second aspect of the disclosure, the instruction to updating a learnt model of the user using a machine learning module comprises instructions to: retrieving datasets associated to the mobile device and appending the data received from the mobile device onto the retrieved datasets; determining if the appended datasets is above a predetermined number of datasets; training a user specific learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets; updating the user specific learnt model in the dataset associated to the mobile device; and transmitting the user specific learnt model to the mobile device.
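The server-side retraining condition in this embodiment can be sketched as follows. The `train` callable, the in-memory dataset layout and the threshold value are hypothetical; the disclosure only requires that retraining happen once the appended dataset exceeds a predetermined size:

```python
# Sketch of the server-side update: append incoming records to the
# device's dataset and retrain the user-specific learnt model only once
# enough data has accumulated; the new model is then returned so it can
# be transmitted back to the mobile device.

def update_user_model(datasets, device_id, new_records, train,
                      min_datasets=100):
    dataset = datasets.setdefault(device_id, {"records": [], "model": None})
    dataset["records"].extend(new_records)

    # Train a user-specific model only above the predetermined size.
    if len(dataset["records"]) >= min_datasets:
        dataset["model"] = train(dataset["records"])
        return dataset["model"]  # to be transmitted to the mobile device
    return None
```

The generalized-model embodiment that follows works the same way, except that the retrieved records come from a group of devices relevant to the user rather than from that one device.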

According to an embodiment of the second aspect of the disclosure, the instruction to updating a learnt model of the user using a machine learning module comprises instructions to: retrieving datasets from a group relevant to the mobile device and appending the data received from the mobile device onto the retrieved datasets; determining if the appended datasets is above a predetermined number of datasets; training a generalized learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets; updating the generalized learnt model in the dataset associated to the mobile device; and transmitting the generalized learnt model to the mobile device.

A third aspect of the disclosure relates to a method specifically for predicting a service required by a user using a mobile device that is connectable to a server comprising: receiving input data; determining a user state and action to obtain a context tag; comparing the context tag with current user state; executing a prediction module in response to determining a difference in the context tag with respect to current user state; receiving a predicted outcome and an appropriate action from the prediction module; executing the appropriate action; verifying actual action from user against the appropriate action; storing the context tag, predicted outcome, appropriate action and actual action; and transmitting the context tag, predicted outcome, appropriate action and actual action to the server.

According to an embodiment of the third aspect of the disclosure, the prediction module executes steps comprising: receiving the context tag and a total set of existing context tags; applying a machine learning (ML) model using a learnt model received from the server to obtain a ML outcome; retrieving the appropriate action associated to the ML outcome in response to the ML outcome being above a certain ML threshold; applying a probabilistic model in response to the ML outcome being below the certain ML threshold to obtain a probabilistic outcome; retrieving the appropriate action associated to the probabilistic outcome in response to the probabilistic outcome being above a certain probabilistic threshold; applying a deterministic model in response to a probabilistic outcome from the probabilistic model being below a certain probabilistic threshold; and retrieving the appropriate action associated to the deterministic outcome in response to the probabilistic outcome being above a certain probabilistic threshold.

According to an embodiment of the third aspect of the disclosure, the step of transmitting the predicted outcome, appropriate action and actual action to the server comprises: determining if there is enough new data to be sent to the server; obtaining the raw and related data in response to there being enough new data; applying a one-time identifier and encrypting the raw and related data such that identity of user is masked.

According to an embodiment of the third aspect of the disclosure, the method further comprises the server to: receiving the context tag, predicted outcome, appropriate action and actual action from the mobile device; updating a learnt model of the user using a machine learning module; and storing the context tag, predicted outcome, appropriate action and actual action.

According to an embodiment of the third aspect of the disclosure, the step of updating a learnt model of the user using a machine learning module comprises: retrieving datasets associated to the mobile device and appending the data received from the mobile device onto the retrieved datasets; determining if the appended datasets is above a predetermined number of datasets; training a user specific learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets; updating the user specific learnt model in the dataset associated to the mobile device; and transmitting the user specific learnt model to the mobile device.

According to an embodiment of the third aspect of the disclosure, the step of updating a learnt model of the user using a machine learning module comprises: retrieving datasets from a group relevant to the mobile device and appending the data received from the mobile device onto the retrieved datasets; determining if the appended datasets is above a predetermined number of datasets; training a generalized learnt model for the mobile device in response to the appended datasets being above a predetermined number of datasets; updating the generalized learnt model in the dataset associated to the mobile device; and transmitting the generalized learnt model to the mobile device.

Brief description of drawings

The above and other features and advantages of a method and a system in accordance with this invention are described in the following detailed description and are shown in the following drawings:

Figure 1 illustrating devices and systems communicatively connected to a network to provide or perform a system and/or method in accordance with this disclosure;

Figure 2 illustrating a block diagram of a processing system of the server for providing a system in accordance with an embodiment of this disclosure;

Figure 3 illustrating a block diagram of components in a mobile device for performing processes to provide a system in accordance with an embodiment of this disclosure;

Figure 4 illustrating an application executable by the mobile device in accordance with this disclosure;

Figure 5 illustrating an application executable by the server in accordance with this disclosure;

Figure 6 illustrating a process flow performed by a decision module in the mobile device in accordance with this disclosure;

Figure 7 illustrating a process flow performed by a context engine in the mobile device in accordance with this disclosure;

Figure 8 illustrating a process flow performed by a prediction module in the mobile device in accordance with this disclosure;

Figure 9 illustrating a process flow performed by an automated state-actions module in the mobile device in accordance with this disclosure;

Figure 10 illustrating a process flow performed by a decision module in the server in accordance with this disclosure;

Figure 11 illustrating a process flow performed by a machine learning module in the server in accordance with this disclosure;

Figure 12 illustrating a process flow performed by a suggestion module and a core messaging module in the server in accordance with this disclosure;

Figure 13 illustrating another process flow performed by a suggestion module and a core messaging module in the server in accordance with this disclosure;

Figure 14 illustrating the overall flow of data and the structuring of data in accordance with this disclosure;

Figure 15 illustrating an example of a use-case in accordance with this disclosure;

Figure 16 illustrating a generalized use-case of user interaction with machine or people for information in accordance with this disclosure;

Figure 17 illustrating images of how the generic components of an "information type" can be implemented in accordance with this disclosure; and

Figure 18 illustrating a process performed by an application in the mobile device in accordance with this disclosure.

Detailed description

This invention relates to a method and a system that predicts a service required by a user.

It is envisioned that a system and/or method in accordance with embodiments of this disclosure may be used to predict a service required by a user. Figure 1 illustrates system 100 for predicting a service required by a user in accordance with this disclosure. The system 100 includes a server 110 and mobile devices 120. The server 110 is communicatively connected to third party service providers 130 that may or may not be providing services to the mobile devices 120.

Server 110 is a typical processing system such as a desktop computer, laptop computer, or other computer terminal capable of handling large data storage and processing needs. Server 110 is communicatively connected to a network 140 via either a wired or wireless connection to communicate with mobile devices 120 and third party service providers 130. Server 110 executes applications that perform the required processes in accordance with this disclosure. One skilled in the art will recognize that although only one server 110 is shown, any number of processing systems may be connected and/or operating in parallel to perform the applications for providing embodiments of this disclosure without departing from this disclosure. Further details of the server 110 will be described below with reference to Figure 2.

Mobile devices 120 may be a mobile phone, a personal digital assistant (PDA), a portable computer, a tablet or other similar mobile device without departing from the disclosure. The mobile device 120 is equipped with a telecommunication network interface in order to communicate with the server 110 and third party service providers 130. Further details of the mobile device 120 will be described below with reference to Figure 3.

Processes are stored as instructions in a media that are executed by a processing system in server 110 or a virtual machine running on the server 110 to provide the method and/or system in accordance with this disclosure. The instructions may be stored as firmware, hardware, or software. Figure 2 illustrates processing system 200, such as the processing system in server 110, that executes the instructions to perform the processes for providing a method and/or system in accordance with this invention. One skilled in the art will recognize that the exact configuration of each processing system may be different and the exact configuration of the processing system in each device may vary. Thus, processing system 200 shown in Figure 2 is given by way of example only.

Processing system 200 includes Central Processing Unit (CPU) 205. CPU 205 is a processor, microprocessor, or any combination of processors and microprocessors that execute instructions to perform the processes in accordance with the present invention. CPU 205 connects to memory bus 210 and Input/Output (I/O) bus 215. Memory bus 210 connects CPU 205 to memories 220 and 225 to transmit data and instructions between the memories and CPU 205. I/O bus 215 connects CPU 205 to peripheral devices to transmit data between CPU 205 and the peripheral devices. One skilled in the art will recognize that I/O bus 215 and memory bus 210 may be combined into one bus or subdivided into many other busses and the exact configuration is left to those skilled in the art.

A non-volatile memory 220, such as a Read Only Memory (ROM), is connected to memory bus 210. Non-volatile memory 220 stores instructions and data needed to operate various sub-systems of processing system 200 and to boot the system at start-up. One skilled in the art will recognize that any number of types of memory may be used to perform this function.

A volatile memory 225, such as Random Access Memory (RAM), is also connected to memory bus 210. Volatile memory 225 stores the instructions and data needed by CPU 205 to perform software instructions for processes such as the processes required for providing a system in accordance with this invention. One skilled in the art will recognize that any number of types of memory may be used as volatile memory and the exact type used is left as a design choice to those skilled in the art.

I/O device 230, keyboard 235, display 240, memory 245, network device 250 and any number of other peripheral devices connect to I/O bus 215 to exchange data with CPU 205 for use in applications being executed by CPU 205. I/O device 230 is any device that transmits and/or receives data from CPU 205. Keyboard 235 is a specific type of I/O device that receives user input and transmits the input to CPU 205. Display 240 receives display data from CPU 205 and displays images on a screen for a user to see. Memory 245 is a device that transmits and receives data to and from CPU 205 for storing data to a media. Network device 250 connects CPU 205 to a network for transmission of data to and from other processing systems.

Figure 3 illustrates an example of a processing system in the mobile device 120. Processing system 300 represents the processing systems in the mobile device 120 that execute instructions to perform the processes described below in accordance with embodiments of this disclosure. One skilled in the art will recognize that the instructions may be stored and/or performed as hardware, firmware, or software without departing from this invention. Further, one skilled in the art will recognize that the instructions may be installed as a software application that can be retrieved from a third party provider, such as the App Store managed by Apple Inc. or Google Play managed by Google Inc., without departing from this invention. One skilled in the art will recognize that the exact configuration of each processing system may be different and the exact configuration executing processes in accordance with this invention may vary; processing system 300 shown in Figure 3 is provided by way of example only.

Mobile device 120 includes a processor 310, a radio transceiver 320, an image capturing device 330, a display 340, a keypad 350, a memory 360, an audio module 370, a Near Field Communication (NFC) module 380, and an I/O device 390.

The radio transceiver 320, image capturing device 330, display 340, keypad 350, memory 360, audio module 370, NFC module 380, I/O device 390 and any number of other peripheral devices connect to processor 310 to exchange data with processor 310 for use in applications being executed by processor 310.

The radio transceiver 320 is connected to an antenna which is configured to transmit outgoing voice and data signals and receive incoming voice and data signals over a radio communication channel. The radio communication channel can be a digital radio communication channel such as a CDMA, GSM, LTE channel or any other subsequent generation of telecommunication network channels (such as 5G networks) that employs both voice and data messages using conventional techniques.

The image capturing device 330 is any device capable of capturing still and/or moving images, such as complementary metal-oxide semiconductor (CMOS) or charge-coupled device (CCD) type cameras. The display 340 receives display data from processor 310 and displays images on a screen for a user to see. The display 340 may be a liquid crystal display (LCD) or organic light-emitting diode (OLED) display. The keypad 350 receives user input and transmits the input to processor 310. In some embodiments, the display 340 may be a touch sensitive surface that functions as a keypad to receive user input.

The memory 360 is a device that transmits and receives data to and from processor 310 for storing data. The audio module 370 may include a microphone, an earpiece and a headset. A microphone is a device that transmits audio data to processor 310. An earpiece is a device that receives audio data from the processor. The headset is a device that transmits and receives audio data to and from the processor 310. The NFC module 380 is a module that allows the mobile device 120 to establish radio communication with another similar device by touching the devices together or by bringing them within close proximity. The NFC module 380 enables the mobile device 120 to make contactless communication with another mobile device 120.

Other peripheral devices that may be connected to processor 310 include a Bluetooth transceiver, a Wi-Fi transceiver and a Global Positioning System (GPS).

The processor 310 is a processor, microprocessor, or any combination of processors and microprocessors that execute instructions to perform the processes in accordance with the present invention. The processor has the capability to execute various application programs that are stored in the memory 360. These application programs can receive inputs from the user via the display 340 having a touch sensitive surface or directly from a keypad 350. Some application programs stored in the memory 360 that can be performed by the processor 310 are application programs developed for iPhone, Android, Windows Mobile, Blackberry or other mobile platforms.

Figure 4 illustrates a program 400 stored in memory or virtual memory of the mobile device 120 for performing the processes in accordance with the disclosure. Program 400 includes a prediction module 410, data points 420, relationship data 430, a context engine 440, an automated state-actions module 445, an update userstate module 450 and a decision module 460. Briefly, the processes executed by these modules are as follows:

1) Prediction module 410 predicts the services required by the user based on input data, data points 420 and relationship data 430. Certain prediction algorithms are implemented in the prediction module 410. Further details will be described below.

2) Data points 420 store a first relevant data of the user. The first relevant data are "strong intent/preferences" information which are determined to be attached to items/artifacts which are processed/managed/displayed on the platform. The information here can include personal preferences for likes/dislikes of messages/information attached to messages, or it can be explicit information which are specified by users, such as age, home address etc.

3) Relationship data 430 stores a second relevant data of the user. The second relevant data are the raw data which are captured every time a meaningful action/input is captured. This data relationship forms the basis of the "probabilistic model" of predicting inputs or user intents. The data captured here is also similarly used for the training of a machine learnt model by the server for this purpose. Additionally, the data here are also aggregated for people in a particular region/globally/any other grouping in order to aggregate the general preferences of users. This allows for implementation of a deterministic model which better suits the general set of users of a particular use-case. The second relevant data may contain any other data which may or may not be explicitly determined by the user but are also pertinent, e.g. GPS locations or processed/raw sensor data.

4) Context engine 440 tracks various raw input data to determine significant information about a user. This module is kept updated about changes or events which are significant to a user. The changes or events can be detected through constant scanning or through setting of listeners to conditions for various events. The purpose of this engine is to ensure that the "user state" or the "detected events" are kept live, and can therefore be reacted to in the relevant "State->Actions" automated actions management module. Each update of the condition is then determined to be significant or not in the "State-Actions" module. Each update of the "User State"/"Detected Events" updates the "Context Tags", which are then used to make decisions in the "Automated State-Actions" module 445. Essentially, this module checks for certain changes or events of a user and triggers the prediction module 410 to predict a relevant service. The relevant context tags are then stored accordingly. Further details will be described below.

5) Automated State-Actions module 445 allows for certain messages to be activated in order to present information at the right time and place. This module sends relevant client-side information to the server, which would allow for any registered events to be responded to the user. The state-actions which are selected by a user are determined by the "Channels" which a user is tuned into, as well as the changes to "context tags" of users. The relevant actions which are activated (if any) are determined by the relevant deterministic rule/probabilistic intent/machine learnt action.

6) "Update Userstate" module 450 updates the client side of the application with the relevant raw data, which would update the relevant Data Points 420 and Relationship Data 430. This would in turn affect the behavior of the Context Engine 440 and the activation of various actions from the "Automated State-Actions" module 445.

Each of the 6 modules is communicatively connected to the decision module 460 which will execute the course of action based on the information received from the relevant modules.

Figure 5 illustrates a program 500 stored in memory or virtual memory of the server 110 for performing the processes in accordance with the disclosure. Program 500 includes a user data management module 510, a suggestion management module 520, user referenced data 530, a core messaging module 540, a structured data model 550, a machine learning module 555 and a decision module 560. Briefly, the processes executed by these modules are as follows:

1) User data management module 510 stores a first relevant data of users. The first relevant data includes the information of all users received from the various mobile devices. The user data management module stores information of users anonymized in any way or form which allows for individual action points of a user to be separated from identifiable user data. This then allows for high-level aggregation of information, which can then be used for the generalized model of machine learning or for analysis separately in order for the deterministic machine learning model to be determined or otherwise. Additionally, the anonymized data would be used to train individualized models through Machine Learning Module 555. This enables information which is related to users to be used for suggestions.

2) Suggestion Management module 520 stores the processes for querying and creating predetermined messages for communicating with mobile devices 120. The querying and creating of pre-determined messages is performed through querying third party "Services Provider Actions". The Suggestion Management module 520 would determine the correct type of actions which are to be presented to the user based on rules which are indicated by the relevant user's stored preferences, e.g. which assistants (i.e. Apple (Siri), Microsoft (Cortana), Amazon (Alexa), Google (Google Assistant)) the user is following or which channels the user is tuned into.

3) User referenced data 530 stores a second relevant data of each user.

The second relevant data includes the information of each user received from the mobile device. Such data are stored in a manner which is not linked to a user's credentials, but in a manner where data can be queried from the portable device. For example, a series of numbers may be used to replace the user's credentials. Masking of user credentials is widely known and is thus omitted for brevity.

4) The Core messaging module 540 enables all information to/from both human users and other representations of bots/assistants to be represented in a uniform manner. The core messaging module 540 includes a "Response Management" that allows for messages which are presented to be responded to. This response is then relayed to the relevant Service Provider accordingly, which may then in turn lead to another "Reply" from the Service Provider through the Suggestion Management Module 520. The core messaging loop between User Response and Service Provider Response may continue until a user obtains the information/service they need.

5) Structured data model 550 allows for data from the mobile devices to be kept in their respective structured format. This includes information from the Relationship data, as well as datapoints, to be stored in their respective formats. The data here can be further interpreted for other purposes on the server side, e.g. grouping of users into Smart Communities.

6) Machine Learning Module 555 updates the relevant data model and updates the User State Engine, which allows for the data to feed into a machine learning model accordingly. Further details on the machine learning module 555 will be described below. Each of the 6 modules is communicatively connected to the decision module 560 which will decide on the course of action based on the information received from the mobile device and the relevant modules. Program 500 receives data from the mobile devices 120 and uses a supervised learning technique to train and obtain a learnt model for each user using the machine learning module 555. The learnt model will be transmitted to the relevant user when the mobile device is communicatively connected to the server 110.

The program 400 of the mobile device 120 will first be described followed by the program 500 of the server 110.

Program 400

The program 400 receives input data from a user. The input data will be processed by the decision module 460 which will in turn retrieve further information from the data points 420 and/or relationship data 430 and transmit the relevant information to the prediction module 410 to determine a final result.

In brief, the prediction module 410 includes three models, namely, a deterministic model, a probabilistic model and a machine learning model. The machine learning model is available when the learnt model is received from the server 110. In cases where the learnt model is not available, the prediction module 410 will fall back on the probabilistic predictions or the deterministic model.

Figure 6 illustrates a process 600 performed by the decision module 460 in accordance with this disclosure. Process 600 begins with step 605 by receiving the input data.

In step 610, process 600 determines the user state and action using the context engine 440 to determine the relevant context tag. Context tags are a means for the event to be communicated in order for the necessary "actions/intents/inputs" to be predicted. This is necessary since the input data is obtained from various sources and hence converting the input data into a common form (in this instance, context tags) is required. The user state refers to the various states of the users being determined in real time. Various variables are used to determine a user state. These variables can be behavioral, situational or otherwise. The context engine 440 would be tracking various raw input data to determine significant information about a user. As an example, assume the intent of travelling to a location, where a few variables would be taken into consideration in order to determine if a user is going to go to a certain location. The variables include, for example, the time, the user's location, day of the week, weather condition etc. Based on these variables, the context engine 440 is able to determine the user state using "Location", "Day of week" and "Weather condition". Further details will be described below with reference to figure 7.
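As an illustration of how the context engine might convert raw variables such as location, day of week and weather into context tags, a minimal sketch is given below; the function name, tag format and thresholds are assumptions for illustration, not part of the claimed system.

```python
# Hypothetical sketch: deriving context tags from raw input variables.
# Tag names and the day-of-week convention (Mon=0 .. Sun=6) are assumptions.

def derive_context_tags(raw):
    """Convert raw sensor/user inputs into a set of context tags."""
    tags = set()
    if "location" in raw:
        tags.add(f"Location:{raw['location']}")
    if "day_of_week" in raw:
        tags.add("Weekday" if raw["day_of_week"] < 5 else "Weekend")
    if "weather" in raw:
        tags.add(f"Weather:{raw['weather']}")
    return tags

tags = derive_context_tags({"location": "HOME", "day_of_week": 2, "weather": "Rainy"})
# tags == {"Location:HOME", "Weekday", "Weather:Rainy"}
```

The resulting tag set would then be compared against the previous user state to decide whether a significant change has occurred.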

The data received from the context engine would be transmitted to the prediction module 410 in step 615 if there are significant changes in the context tag with respect to certain requirements such as the current user state. The data from the context engine is compared to various user data, including the data points such as what the user's preferences are, including what "Channels" a user is tuned into, as well as what Service Providers a user would like to serve them. It is important to note that "Channels" and Service Provider/Assistant selections are generic functionalities which are provided on the platform. "Channels" are provided on the platform to broadly allow various specific "Situational Grouping" of use cases to be served, which allows for a more targeted grouping of users through what "Channels" a user is tuned into, as well as what situations, as defined by the User Context Tags, to serve a user. This allows for Service Providers/Assistants to target specific audiences in various situations for a variety of use cases and situations specific to a particular group of users. It is also this generic functionality which allows for both messages from "Assistants" and messages from "Smart Communities" to be represented on a singular platform and user interface. Specifically, the data from the context engine include the relevant context tag that has changed. The total set of existing context tags is provided to the Decision Module, which in turn activates the necessary response as per the prediction, via the prediction module, to enable the relevant predicted output. As mentioned above, the prediction module 410 includes the deterministic model, probabilistic model and machine learning model. The predicted outcomes from the machine learning model would be prioritized. However, if the learnt model is not mature enough, the probabilistic model would be used accordingly. In addition, if the probabilistic model is also unable to provide a satisfactory outcome, the deterministic model would be activated. The models would use the same "User State" context to activate logic, which would then enable consistent data to be used for both the probabilistic prediction, as well as the training of the machine learning model in the back-end. This is advantageous because it simplifies the design of the system and simplifies the dataset, enabling data to be propagated and reused for multiple purposes. Additionally, this set-up allows for some deterministic rules which would enable the software to at least start working with some generalized rules, so that personalized experiences can be provided from the start, and not just when there is enough data about a user. Table 1 below shows how the "User context" structure is reused for the purposes of predicting inputs/intents/actions from users for all 3 models of prediction.
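The prioritization described above (machine learnt model first, then probabilistic, then deterministic) can be sketched as follows; the function names and placeholder models are illustrative assumptions, while the 70% threshold follows the example threshold given later in the disclosure.

```python
# Minimal sketch of the three-tier fallback. The two placeholder predictors
# stand in for the real on-device models and are assumptions.

def probabilistic_predict(tags):
    # Placeholder: would run the on-device Naive Bayes predictor.
    return ("suggest_route", 0.4)

def deterministic_predict(tags):
    # Placeholder: would apply platform-defined rules on the context tags.
    return "suggest_public_transport"

def predict(tags, learnt_model=None, threshold=0.70):
    """Prefer the machine-learnt model, then probabilistic, then deterministic."""
    if learnt_model is not None:
        action, confidence = learnt_model(tags)
        if confidence >= threshold:
            return ("machine_learnt", action)
    action, confidence = probabilistic_predict(tags)
    if confidence >= threshold:
        return ("probabilistic", action)
    return ("deterministic", deterministic_predict(tags))
```

Because all three tiers consume the same context-tag structure, the fallback needs no data conversion between models, which is the design advantage noted above.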

In step 620, process 600 receives an outcome from the prediction module and executes an appropriate action. In particular, the decision module 460 retrieves relevant data from data points 420 that is associated with the outcome received from the prediction module. The relevant data refers to the predicted service or services that may be required by the user. The output from the Prediction Module is a ranked list of probable actions, and process 600 makes the necessary calls to one or more "Service Providers" which are then requested to service the user accordingly.

In step 625, process 600 verifies if the user requires the predicted service or services. If the user uses the predicted service or services, process 600 triggers the automated state-actions module 445 and the update userstates module 450. Specifically, the automated state-actions module 445 forwards the relevant context tag and predicted action and/or actual action required by the user to the server 110 while the update userstates module 450 updates the relevant Data Points 420 and Relationship Data 430. Additionally, the Suggestion Management module 520 of the server may be executed during this step and, depending on the objectives of the messaging platform, may prioritize certain responses from various service providers accordingly. It is also the case that such responses are then represented to the user through the Core Messaging Module 540 of the server, where the responses from the Service Providers and the replies from the users are managed accordingly. It is to be noted that the responses (i.e. raw data) from the users may change the Context Tags of the user (which represent user state broadly), which may then reset the process 600 accordingly. This means that process 600 repeats from step 605 after step 625 upon receiving new raw data, such as additional user inputs or user responses.

In short, process 600 determines the user state and action using the context engine 440 to determine the relevant context tag and compares the relevant context tag with the current user state. If there are significant changes in the relevant context tag with respect to the current user state, the relevant context tag and the changes would be forwarded to the prediction module to predict an outcome and determine an appropriate action. Process 600 then executes an appropriate action based on the predicted outcome. Process 600 then verifies the actual action from the user against the appropriate action and stores the predicted outcome, appropriate action and actual action in the memory. The predicted outcome, appropriate action and actual action are also transmitted to the server 110.
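The execute-and-verify portion of steps 620 and 625 summarized above can be sketched as follows; the function and record names are hypothetical, chosen only to mirror the verification and storage described in the text.

```python
# Illustrative sketch of steps 620-625: execute the top-ranked action and
# verify it against what the user actually did. Names are assumptions.

def execute_and_verify(ranked_actions, actual_action, store):
    predicted = ranked_actions[0]            # appropriate action (step 620)
    matched = (predicted == actual_action)   # verification (step 625)
    record = {"predicted": predicted, "actual": actual_action, "matched": matched}
    store.append(record)                     # kept in memory and sent to the server
    return record
```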

Figure 7 illustrates a process 700 performed by the context engine 440 in accordance with this disclosure. Process 700 begins with step 705 by receiving the input data and determining the relevant context tag.

In step 710, process 700 determines if there are any significant changes to the data. Specifically, process 700 determines if there are significant changes in the relevant context tag with respect to certain requirements such as the current user state. For example, if the location of the mobile device has changed to a location that is outside of a 1 kilometer radius from a reference location, the prediction module will be activated to predict a service for the user. If the current date falls on a certain day of the week, the prediction module will be activated to predict a service for the user. If the user has updated additional data which would update the user's preferences or other clustered information which would impact the relevant personalized context tags, or if the weather condition has changed (information of which may be derived from available online websites and retrieved by a suitable API), the prediction module will be activated to predict a service for the user.
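The significance checks of step 710 might be sketched as follows; the distance approximation and the dictionary layout are assumptions, while the 1 kilometer radius and the weather/day-of-week triggers follow the examples above.

```python
import math

# Sketch of the step 710 significance checks. The equirectangular distance
# approximation is an assumption, adequate for a ~1 km threshold.

def distance_km(a, b):
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371.0   # Earth radius in km

def is_significant_change(state, reference):
    if distance_km(state["location"], reference["location"]) > 1.0:
        return True                     # moved outside the 1 km radius
    if state["weather"] != reference["weather"]:
        return True                     # weather condition changed
    return state["day_of_week"] in reference.get("trigger_days", ())
```

When `is_significant_change` returns true, the context engine would instruct the decision module to activate the prediction module, as described in step 715.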

In step 715, if process 700 determines a significant change of user state, process 700 proceeds to step 720 to instruct the decision module to activate the prediction module. Otherwise, process 700 proceeds to step 730 and updates context tags accordingly.

Step 720 basically makes a call to process 800 via the decision module, where some "actions/intents/inputs" are provided as outputs. Broadly, such changes would be compared to the channels that a user is tuned into, as well as the Service Providers/Assistants which a user is following, in order to determine if the changes are significant to the user. Such changes can be significant to both Assistants and Smart Communities. The input into process 800 would include the relevant context tag that is changed. The total set of existing context tags is provided to the Decision Module, which in turn activates the necessary response as per the prediction, via the prediction module, to enable the relevant predicted output. Context tags are a means for the event to be communicated in order for the necessary "actions/intents/inputs" to be predicted. As per table 1 above, "Location Change to <Location A>" or "Weather at location is now Rainy" or "New Relevant Content" are context tags which can be used as inputs for specifying the deterministic rules, probabilistic input tables for the Naive Bayes predictor, or inputs into the neural net used for supervised machine learning. In step 725, process 700 receives data from the decision module and updates the user model with appropriate predicted actions.

Process 700 ends after step 730.

It should be noted that one skilled in the art would understand that a similar alternative set-up would be for process 700 (and therefore process 800) to run securely on the server as well, as long as the same processes are run with the user data stored separately and securely, and the data can only be unencrypted if the appropriate signatures from the user's devices are provided, except that this set-up may impact the overall responsiveness of the application.

Figure 8 illustrates a process 800 performed by the prediction module 410 in accordance with this disclosure. The prediction module 410 includes three models, namely, a deterministic model, a probabilistic model and a machine learning model. Process 800 begins with step 805 by receiving the user state and action from the decision module 460.

In step 810, process 800 activates the machine learning model. The machine learning model is performed based on the learnt model received from the server 110. Although any form of machine learning can be applied, it is likely that a Deep Feed-Forward Neural Network, e.g. a Multilayer Perceptron (MLP), can be structured with the necessary "Context Tags" and other Structured Data as inputs, with the "Predicted User Input/Actions/Intent" provided as an output of the Artificial Neural Network. It is important to note that the already trained model is utilized by the mobile device, which reduces computational effort and response time. During this step, it is also necessary to obtain the logic state of the output neurons and access the real-valued activation values computed for them. It is possible to use the softmax function for the activation function of the output layer and train the net with the cross-entropy loss function. The softmax function normalizes a K-dimensional vector of arbitrary real values into a K-dimensional vector whose components sum to 1, which can then be utilized for calculating the probability of a classified output. This derived probability score can be compared with a threshold, e.g. 70% or above, to determine if the prediction should be utilized.
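A minimal sketch of the softmax normalization and threshold gating described above is given below; the helper names are assumptions, and the 70% threshold follows the example in the text.

```python
import math

# Sketch: normalize output-layer activations with softmax and gate the
# top class on a probability threshold. Function names are assumptions.

def softmax(activations):
    m = max(activations)                       # subtract max for stability
    exps = [math.exp(a - m) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

def accept_prediction(activations, threshold=0.70):
    """Return (class_index, probability) if confident enough, else None."""
    probs = softmax(activations)
    best = max(range(len(probs)), key=probs.__getitem__)
    return (best, probs[best]) if probs[best] >= threshold else None
```

When `accept_prediction` returns `None`, process 800 would fall through to the probabilistic model in step 825.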

In step 815, the result from the machine learning model is compared with a threshold value. As mentioned above, an example of a reasonable threshold is 70% or above. If the result is lower than the threshold value, process 800 proceeds to step 825. If the result is higher than the threshold value, process 800 proceeds to step 820 and retrieves an action associated with the result. In this instance, the action may be to suggest to the decision module the likely routes which a user may want to take between 2 locations. The appropriate context tags as well as other Structured Data, such as the historical count and route options, can be provided as inputs, with the output being, for example, the most preferred routes.

In step 825, process 800 activates the probabilistic model. In the probabilistic model, the relevant table containing the columns of context tag variables and other structured data on which the particular output depends is processed in order to provide the most likely outcome. In this case, for example, the historical records of the "Origin", "Destination", "Time of day", "Day", etc. and other user context data which determined the preferred route would be used to determine the most likely route which would be picked. For example, if using a Naive Bayes predictor, the most likely outcome would be max(P(d|h) * P(h)), where h is the event of a route with particular characteristics being taken, and d is the state of the "context tags" at the point of that event being true; the relevant conditional probabilities are calculated and the prediction function is carried out by choosing the maximum of the values. The calculated conditional probability would be normalized accordingly, and this derived probability can be compared with a threshold, e.g. 70% or above, to determine if it would be utilized.

For the purposes of this description, a brief description of the Naive Bayes Classifier implemented in the embodiments of this disclosure will now be given. The variables which are to be stored as a table for each predictor would be as follows.

The Naïve Bayes representation of probability, P(c|x) = P(x|c) · P(c) / P(x), can be utilized, and as an explanation, the variables would be structured as follows:

1) P(x|c) is the likelihood, which would be calculated for each response variable; this value is the probability of observing the predictor variable given the class.

2) P(c) is the class prior probability, which is the probability of the class occurrence.

3) P(x) is the predictor prior probability, which is the probability of the predictor variable occurrence.

4) P(c|x) is the posterior probability, i.e. the probability of the intent/variable given the predictor.

Variables are designed to be one of the "user context variables", which is what the Context Engine determines them to be. Once the associated variables for a "predictor" (which could be e.g. a predicted intent, location, person to contact etc.) are determined, this enables the relevant information to be tracked.

This probabilistic model is applicable for predicting both the intents of users, as well as the preferences of users. Developing a simple probabilistic predictor which is able to sit on the mobile device allows for the right information to be suggested to users even when the user is offline, i.e. not connected to the server.
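A minimal sketch of such an on-device Naive Bayes predictor over context-tag variables is given below; the class layout is an assumption, and add-one smoothing is an added assumption since the disclosure does not specify a smoothing scheme.

```python
from collections import Counter, defaultdict

# Sketch of a tabular Naive Bayes predictor over context-tag variables.
# It scores each outcome h by P(h) * product of P(d_i|h) and picks the max.

class NaiveBayesPredictor:
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)  # per-class (var, value) counts
        self.total = 0

    def observe(self, tags, outcome):
        # tags: dict of context variables, e.g. {"Day": "Mon", "Weather": "Rainy"}
        self.class_counts[outcome] += 1
        self.total += 1
        for var, value in tags.items():
            self.feature_counts[outcome][(var, value)] += 1

    def predict(self, tags):
        best, best_score = None, 0.0
        for c, n in self.class_counts.items():
            score = n / self.total                       # P(h)
            for var, value in tags.items():              # P(d|h), add-one smoothed
                score *= (self.feature_counts[c][(var, value)] + 1) / (n + 2)
            if score > best_score:
                best, best_score = c, score
        return best
```

Because the counts are small tables, such a predictor can run entirely on the mobile device, matching the offline operation described above.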

In step 830, the result from the probabilistic model is compared with a threshold value. As mentioned above, an example of a reasonable threshold is 70% or above. If the result is lower than the threshold value, process 800 proceeds to step 840. If the result is higher than the threshold value, process 800 proceeds to step 835 and retrieves an action associated with the result. Similar to the above example, the action may be to suggest to the decision module to activate a particular service required by the user. For example, a user may historically prefer travelling through a particular itinerary due to a number of factors. The travel notifications related to a suggested itinerary would then be provided to the user with the most likely route being prioritized.

In step 840, process 800 activates the deterministic model and determines an appropriate action. The deterministic rules can be defined broadly by the platform based on data analytics which would optimize the manner in which the "deterministic rules" are created. The manner in which the deterministic rules are created also relies on the context tags. For example, it may be determined that, in general, if the weather is sunny and it is a weekday, it may be suggested that "Sunny=TRUE" + "Weekday=TRUE" + "Origin=HOME" + "Destination=WORK" -> OUTPUT "TripType=PUBLIC TRANSPORT" + "Preference=LOWEST COST". In this manner, it allows for user context tags to be used for all forms of user suggestions, from deterministic, to probabilistic, to machine learnt. The deterministic rules are generally used when information about the user is not sufficient for the machine learning model and probabilistic model. Hence, the deterministic model is based on general context tags, i.e. the aggregated context tags of various users.
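The example rule above might be encoded as follows; the rule representation (a condition set paired with an output) is an illustrative assumption, while the rule itself follows the text.

```python
# Sketch of deterministic rules as (required context tags, output) pairs.
# The encoding is an assumption; the example rule mirrors the text.

RULES = [
    ({"Sunny=TRUE", "Weekday=TRUE", "Origin=HOME", "Destination=WORK"},
     {"TripType": "PUBLIC TRANSPORT", "Preference": "LOWEST COST"}),
]

def deterministic_predict(context_tags):
    for required, output in RULES:
        if required <= context_tags:   # all rule conditions are present
            return output
    return None                        # no rule matched
```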

In step 845, process 800 forwards the result to the decision module. The result is a predicted action to be activated by the decision module.

Process 800 ends after step 845.

Figure 9 illustrates a process 900 performed by the automated state-actions module 445. Process 900 begins with step 905 by determining if there is enough new data to be sent to the server. If there is enough new data, process 900 obtains the raw and related data from the user's devices.

In step 910, based on a one-time identifier, the relevant data from the user's device is then encrypted and sent to the server in order for the overall user data to be compiled and strung together with the other raw data previously sent. Assuming that all the relevant security and privacy capabilities are in place (e.g. SSL, encrypted and isolated key store, encrypted data etc.), this provides a means for information from the previous batches to be combined in a manner which maintains the privacy of the data. The key enabler here is that the key and signatures are kept separately, and would require authentication from the user for access and processing. This would be most conveniently done through the user's devices. Essentially, the identity of the user is masked so that data sent to the server is anonymized.
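One way a one-time identifier could let batches be strung together server-side without exposing user credentials is sketched below; the key-derivation scheme is purely an assumption, and a real implementation would also need the encryption and key management described above.

```python
import hashlib

# Illustrative sketch only: derive a per-batch identifier from a secret that
# never leaves the device. The derivation scheme is an assumption.

def one_time_identifier(user_secret: bytes, batch_number: int) -> str:
    # The server sees only the derived identifier, not user_secret.
    salt = batch_number.to_bytes(8, "big")
    return hashlib.sha256(salt + user_secret).hexdigest()

def prepare_batch(user_secret, batch_number, raw_records):
    return {"id": one_time_identifier(user_secret, batch_number),
            "records": raw_records}   # would be encrypted before sending
```

A batch's identifier is stable for the device but differs between batches, so the server can compile data without linking it to identifiable credentials.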

Process 900 ends after step 910.

The processes performed by the server 110 will now be described as follows. Figure 10 illustrates a process 1000 performed by the decision module 560 of the server 110 in accordance with this disclosure. Process 1000 begins with step 1005 by receiving the data from the mobile device 120. Upon receiving the data, process 1000 stores the data in the user data management module 510 and the user reference data 530.

In step 1010, process 1000 updates the learnt model of the user using the machine learning module 555. The data received from the mobile device will be prepared and used for training a learnt model for the user accordingly. In this case, any appropriate machine learning technique for training would be applied. In this instance, a Deep Feed-Forward Neural Network, e.g. a Multilayer Perceptron (MLP), may be utilized. Broadly, there would be two types of classifiers, a "Generalised model" and a "User specific" model, which will affect the following processes. This allows for the training to be applied accordingly, so that the model can be kept up to date.

In the user specific model, the appropriate training would be performed for the neural net, and the trained model is then passed to the device for use in the future.

In addition, a "Generalised model" is used, which would enable groups of users to be clustered accordingly depending on the outcome (actions/intents/inputs) of the user. In this generalized model, based on the user data provided, the user would be classified accordingly. This can be through data which is provided across the generalized public or otherwise.

After the classification of the user, the classified identifier is sent to the mobile device, which would enable this variable to be provided into the generalized neural net which is already present on the mobile device. The details of the training of the learnt model using MLP are omitted for brevity.

After the first learnt model is determined, a retrain or update of the user specific model and generalized model is triggered only if there is sufficient data. The re-trained user specific model and generalized model would then be sent to the user's device accordingly in step 1015, so that they can be used for future predictions going forward. The user specific model is stored in the user reference data 530 and the generalized model is stored in the user data management module 510. Further details of the machine learning module will be described below with reference to figure 11. In step 1020, process 1000 activates the suggestion module and core messaging module. This step is performed when requested by the mobile device. Further details of step 1020 will be described below with reference to figure 12.

After steps 1015 and 1020, process 1000 repeats from step 1005 to receive new data from the mobile devices.

Figure 11 illustrates a process 1100 performed by the machine learning module 555 in accordance with this disclosure. Process 1100 begins with step 1105 to determine if there are enough training datasets to train a "Generalised model" and/or a "User specific" model. Specifically, for the user specific model, process 1100 will retrieve the datasets from the user reference data 530 and append the new dataset to the retrieved datasets. If the revised datasets are above a predetermined number of datasets, process 1100 will proceed to step 1110 to train a learnt model for the specific user. For the generalized model, process 1100 will retrieve the datasets from the user data management module 510 and append the new dataset to the retrieved datasets. In this embodiment, the datasets from the user data management module 510 may be from a general pool of datasets. Alternatively, datasets from the user data management module 510 may be from a group of users that are of certain relevance to the user (i.e. in the same communities). If the revised datasets are above a predetermined number of datasets, process 1100 will proceed to step 1120 to train a learnt model for the generalized users. If there are not sufficient datasets for both specific and generalized models, process 1100 ends and waits for the next instructions from the decision module 560. Determination of which approach to use, and the amount of data required, would be obtained experimentally, and it really depends on the complexity of the problem and the exact learning algorithm applied.
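The dataset-sufficiency gate of step 1105 can be sketched as follows; the minimum record counts are placeholder values, since the text notes the required amount of data is determined experimentally.

```python
# Sketch of the step 1105 gate. The thresholds below are placeholders; the
# disclosure states the required amounts would be found experimentally.

MIN_USER_RECORDS = 500
MIN_GENERAL_RECORDS = 10_000

def training_plan(user_records, general_records):
    plan = []
    if len(user_records) >= MIN_USER_RECORDS:
        plan.append("train_user_specific")    # proceed to step 1110
    if len(general_records) >= MIN_GENERAL_RECORDS:
        plan.append("train_generalized")      # proceed to step 1120
    return plan                               # empty: wait for more data
```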

In step 1110, process 1100 trains a user specific learnt model based on the datasets of the user. As mentioned above, any appropriate machine learning technique for training may be applied. In this instance, a deep feed-forward neural network, e.g. a multilayer perceptron (MLP), is implemented. The details of the training of the user specific learnt model using MLP are omitted for brevity.

In step 1115, the user specific learnt model is updated in the user reference data 530 and transmitted to the mobile device.

In step 1120, process 1100 trains a generalized learnt model based on the datasets of the various users from the user data management module 510. As mentioned above, any appropriate machine learning technique for training may be applied. In this instance, a deep feed-forward neural network, e.g. a multilayer perceptron (MLP), is implemented. The details of the training of the generalized learnt model using MLP are omitted for brevity. In step 1125, the generalized learnt model is updated in the user data management module 510 and transmitted to the mobile device.
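Although the training details are omitted, the inference side of the MLP named above can be sketched minimally. The architecture, weights and feature shapes below are purely illustrative assumptions; in a real deployment they would be learnt from the user reference data 530 or the user data management module 510.

```python
import math

# Minimal forward pass of a single-hidden-layer feed-forward network of
# the kind the disclosure names (MLP). Weights here are hypothetical
# placeholders, not learnt values.

def mlp_predict(x, w_hidden, b_hidden, w_out, b_out):
    """x: input feature vector; returns a probability-like score in (0, 1)."""
    # Hidden layer with ReLU activation
    hidden = [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + b)
              for col, b in zip(w_hidden, b_hidden)]
    # Output layer with sigmoid activation
    z = sum(h * w for h, w in zip(hidden, w_out)) + b_out
    return 1.0 / (1.0 + math.exp(-z))

# Toy usage: 2 input features -> 2 hidden units -> 1 output score
score = mlp_predict([1.0, 0.5],
                    w_hidden=[[0.4, -0.2], [0.1, 0.3]],
                    b_hidden=[0.0, 0.1],
                    w_out=[0.8, -0.5],
                    b_out=0.05)
assert 0.0 < score < 1.0
```

The same forward pass serves both the user specific and generalized learnt models; only the weights transmitted to the mobile device differ.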

Process 1100 ends after steps 1115 and 1125, or when there are not enough datasets for training a learnt model.

Figure 12 illustrates a process 1200 performed by the core messaging module in accordance with this disclosure. Accordingly, smart communities channels/groupings are achieved through the sending of messages through process 1200. In other words, process 1200 is able to group a user into a particular group. This is especially useful in that a user is automatically grouped according to his/her comments via the messaging platform. Based on the communities channels/groupings, the user data management module 510 can further be segregated so that the generalized learnt model can be further trained according to the relevant communities channels/groupings. For purposes of this description, communities channels refer to the Smart Community which the user is tuned into. The messaging platform may be one provided by the application or any other third parties such as from Apple (Message), Facebook (Facebook Messenger or WhatsApp), etc. It is important to note that due to the conversational nature of current Messengers, they are not sufficient for "Dynamic Groups" (this is because a user may be in and out of groups at any point in time, depending on their data pattern for the relevant channel). Therefore, it is suggested that the "Threaded" uniform messaging component introduces User Interface Elements as per figure 17, which would allow for a consistent view of messages regardless of whether or not they are from Assistants or from Smart Communities. Communities groupings refer to users with machine-calculated groups such as similar activities, preferences, likes and dislikes, etc. It should be noted that the sending and receiving would be done based on the various data which are specific to a user, i.e. the user's user referenced data and the user's structured data model, that are derived from data which are sent about a user.

Process 1200 begins with step 1205 by extracting the parameters from the message received from the mobile device. In particular, process 1200 determines the appropriate parameters that are attached to a message received from the mobile device. This can be done with a simple mapping table which determines the necessary/relevant information which is mandatory for a particular grouping. For example, if a Dynamic Group (i.e. one of the Smart Community Groups) relies on the "location" to group users, then the message of the user would have the location parameter attached to the message. As another example, if the message belongs to perhaps a "food recommendations" group, it would be the case that the user's food preferences (based on the user's data model) can be clustered, and the appropriate clustered group criteria or identifier which the particular message grouping is based on would then need to have the relevant cluster's ID added to the message as "Message Tags". With the use of the Context Tags history or other relevant Structured Data history of the user to group users, it would allow for dynamic grouping based on the user's previous activities, instead of requiring users to specify what groups they belong to. Further details of the retrieval of the parameters will be described below with reference to figure 13.
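Step 1205's mapping table and tag attachment can be sketched as below. The group names, mandatory fields and context keys are assumptions drawn from the examples in the text ("location" for a Dynamic Group, a cluster ID for food recommendations), not a definitive schema.

```python
# Illustrative sketch of step 1205: a mapping table naming the mandatory
# parameters per Smart Community grouping, used to attach "Message Tags"
# to an outgoing message. Field names are assumptions for illustration.

GROUP_MANDATORY_PARAMS = {
    "LocalCommunity": ["location"],
    "FoodRecommendations": ["food_cluster_id"],
}

def attach_message_tags(message, group, user_context):
    """Copy each mandatory parameter for `group` onto the message as tags."""
    tags = {}
    for param in GROUP_MANDATORY_PARAMS.get(group, []):
        if param not in user_context:
            raise ValueError(f"missing mandatory parameter: {param}")
        tags[param] = user_context[param]
    return {**message, "message_tags": tags}

msg = attach_message_tags({"text": "great laksa here"},
                          "FoodRecommendations",
                          {"food_cluster_id": 827913,
                           "location": (1.48, 103.76)})
assert msg["message_tags"] == {"food_cluster_id": 827913}
```

Only the parameters the grouping declares mandatory are attached, which matches the text's point that each grouping decides the relevant information.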

It should be noted that the information to be attached to a message can be explicitly provided by a user, e.g. straightforward location information, or it could be implicitly provided by the user. E.g. food characteristics of the message being sent to the "Food Recommendations" grouping would be mandatory information, which would be determined based on another program which classifies the food message into a particular cluster and then attaches the classified cluster information to the message.

In step 1208, process 1200 processes the parameters. Specifically, a mapping table which shows the criteria on which each messaging group is based is retrieved. A mapping table can look as follows:

Based on the relevant criteria, the relevant grouping logic is executed in order for the relevance of a message to a user to be determined accordingly. For example, in the mapping table of the Messaging group, there would also be a mapping to the type of "Grouping Method" which points to the logic which should be activated to test for relevancy of the message in the central datamodel for each user, based on the Structured Data history of a user. In the case of the "LocalCommunity" grouping, a user would belong to a group based on the user's historical location information, i.e. the criteria for this grouping could be based on a high enough frequency of the user at various locations in the past 30 days, which would determine if the messages sent to the central messaging datamodel with the location parameter attached match the location grouping criteria of the user. It should be noted that the dynamic grouping of data is, many times, derived through the predictive model, as the "Audience Tagging" would usually depend on some context tags of a user, which are subject to some form of clustering. In the derivation of context tags, it is common for clustering to be used as a means of deriving meaning which is then captured as personalised Context Tags, which is performed through process 700. The Context Tags which are personalized could be something like "WITHIN_LOCALCOMMUNITY = TRUE", which can be used as inputs for predictive or ML classification. Additionally, such personalized Context Tags may be derived from the relevant probabilistic model, deterministic rules or otherwise. For an example of how the clustered personalized Context Tags are utilized, i.e. for the Local Community Channel, there would be a matching set of "audience tags"; e.g. for the local community, it could be the case that we would like to provide users only with information around where they frequent.
Therefore, it would be the case that there would be a polyline as tag information, e.g. "LOCALCOMMUNITY={[x1,y1], [x2,y2], [x3,y3], ...}" (that is, through clustering of user locations which are frequented by a user), which can be attached to a user as an "Audience Tag". The list of [x,y] are calculated polylines of locations, which are labels which are attached to a user. In the LocalCommunity channel, the user's polyline could be calculated by referring to the user's data to determine the locations where the user would go past on at least 70% of the days in the past 30 days, for example, which indicates frequency. As another example, for the "Food Recommendations" channel/grouping, the message with the food cluster information would be processed according to step 1215 in process 1200. It is seen that as data is obtained through process 700 (perhaps as users provide food ratings), clustered groupings of user actions/preferences can be obtained as the user model and Context Tags are updated accordingly, and it would be possible for these to be kept updated and used as inputs going forward. Therefore, for message retrieval, there would be a match based on the user's preferences in the structured datamodel of the user, i.e. messages which match the food clusters which are relevant to a user would be picked up as relevant to the user. For Food Recommendations, therefore, some other information such as the relevant cuisine or cluster of food which a user likes, e.g. {[ASIA_VEGETARIAN],[cluster:827913]}, would be the "Audience Tag" which is attached to the user. It is important to note that the above user tagging would be run periodically, which would process the user's tags based on the user's historical structured data, e.g. preferences or travelling history, which would help with the message retrieval process. Step 1208 may be performed periodically, either asynchronously after step 1205 or otherwise.
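The "frequented on at least 70% of the past 30 days" rule for deriving the LocalCommunity audience tag can be sketched as follows. The grid-cell representation of locations is an assumption; the text only describes clustering of frequented locations into polyline points.

```python
from collections import Counter

# Sketch of the LocalCommunity audience-tag derivation: a location earns
# a place in the user's LOCALCOMMUNITY tag if the user visited it on at
# least 70% of the past 30 days. Locations are modelled as (x, y) grid
# cells for illustration.

def local_community_tag(daily_locations, min_fraction=0.7):
    """daily_locations: one set of (x, y) cells per day (up to 30 days).
    Returns the sorted points forming the user's audience tag."""
    days = len(daily_locations)
    counts = Counter(cell for day in daily_locations for cell in set(day))
    return sorted(cell for cell, n in counts.items()
                  if n / days >= min_fraction)

# A user at home (0,0) every day and at a cafe (2,3) on 21 of 30 days:
history = [{(0, 0), (2, 3)} if d < 21 else {(0, 0)} for d in range(30)]
assert local_community_tag(history) == [(0, 0), (2, 3)]
```

Run periodically, as the text suggests, this keeps the user's audience tag tracking their recent travel history without any explicit user input.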

In step 1210, process 1200 stores the parameters and audience tags determined from steps 1205 and 1208 respectively into the central core messaging module, where data from all sources are stored when messages are sent. In Smart Communities, instead of representing Service Providers as the senders of messages, messages are essentially replaced by User messages, i.e. using the same messaging system core which allows for ad-hoc messaging from Service Providers (based on user context tags) to users at various user contexts, the same messaging system core allows for ad-hoc messaging from users to other users as well.

In step 1215, process 1200 appends the messages associated with the user with the new message received from the user. Essentially, any additional information which may be required by the suggestion module is attached to the system, including any of the required parameters which were previously attached to the message in step 1205. This could be based on a table mapping of the types of parameters which are required by each specific grouping of a particular Smart Community Group. For example, for Smart Communities, a table mapping of the "location" information is considered mandatory, in which case it would be necessary for the user to also provide the appropriate "Location" information which the message is about, such that this criterion can be utilized in the "Receiving Process". The user message is then stored in a Central Messaging datamodel in the suggestion module.

In step 1220, the retrieval of the messages will be activated periodically, either asynchronously or otherwise, in order for the necessary users to retrieve the messages based on the relevant criteria of the messaging channel or grouping. Further details of the retrieval of the messages will be described below with reference to figure 13.

Figure 13 illustrates a message receiving process 1300 performed by the suggestion module and core messaging module in accordance with this disclosure. Process 1300 begins with step 1305 by retrieving all the audience tags and the messages from a user. In step 1310, process 1300 determines the relevance of the message to an audience tag. Specifically, the relevant tags, i.e. the topics and audiences, would be matched with the relevant user group to determine if the message is relevant to the user. For example, for the "LocalCommunity" Group, there would be a mapping which provides that "Location" is the grouping criterion for this group. It would be the case that the necessary audience tags which are relevant for a user are also processed and tagged accordingly to the user. If it is determined that the necessary "Audience Tags" match those for the relevant Topic, then the message would be considered relevant for the user.
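Step 1310's relevance test can be sketched as a match between a message's tags and the user's audience tags under the group's criterion. The criterion names and data shapes are illustrative assumptions consistent with the earlier examples.

```python
# Sketch of step 1310 (process 1300): decide whether a message is
# relevant to a user by matching its tagged criterion value against the
# user's audience tags for that group. Field names are assumptions.

GROUPING_CRITERIA = {"LocalCommunity": "location",
                     "FoodRecommendations": "food_cluster_id"}

def message_relevant(message, user_audience_tags, group):
    """True if the message's criterion value is in the user's tags."""
    criterion = GROUPING_CRITERIA[group]
    value = message.get("message_tags", {}).get(criterion)
    return value is not None and value in user_audience_tags.get(group, set())

user_tags = {"LocalCommunity": {(0, 0), (2, 3)}}
msg = {"message_tags": {"location": (2, 3)}}
assert message_relevant(msg, user_tags, "LocalCommunity")
assert not message_relevant(msg, user_tags, "FoodRecommendations")
```

A relevant message would then pass to the freshness check and retrieval steps described next.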

In step 1315, if the message is relevant to a user but the post is considered old, then process 1300 ends. The post is considered old if the message has been read and the time of the post is more than a predetermined period of time ago. If the message is relevant and the post is new or has not been read by the user, then it would be considered as fresh, and the process proceeds to step 1315.

In step 1315, process 1300 performs a look-up and retrieves the relevant message from the central message datastore in the suggestion module, and transmits the message to the relevant user accordingly. Process 1300 ends after step 1315.

Figure 14 illustrates the overall flow of data, and the structuring of data which would run for each user connected to the platform as the user utilises the application on a regular basis. Each user action will result in a look-up to determine any possible user actions, such that the relevant user intent is picked up and eventually stored in the user datamodel. It is the intention that, depending on actions such as time and when and where users travel to, the system and method according to this disclosure use such data to automatically gather user-specific data without user inputs, e.g. by knowing the venue and where users go for lunch, the time, type of cuisine, willingness to travel, etc. This is generally the case for services, because users have to be physically at a location, e.g. transportation. Traditionally, applications require a lot of information about a user in order to personalise; in this case, the type of places and the possible preferences when it comes to meals, entertainment, travel mode, type of work, etc. can be inferred through the travel patterns of the user. The user's actions (both recorded through user state changes as well as through actual explicit actions) on a user's device are predicted, and the user's actual actions are recorded as shown in figure 14, which shows how prediction and updating is performed accordingly.

Other information

1. User-intent centric design of user workflow.

The system and method described above may be focused on getting things done for users as a sequence of steps. This is the case for most, if not all, use cases where users are looking at fulfilling an outcome, for example, obtaining services such as "arrive at a location at some time". The steps are:

(1) Determine what travel options there are

(2) Prepare for travel

(3) Head to pick-up point

(4) Travel first leg

(5) Travel second leg

(6) Arrive at Location (Destination)

Each use-case ends with a "task success", and at each step, it is possible for a series of actions to be performed by a user. This could be the consuming of services at each step, or the retrieving of information. This can be illustrated using figure 15. In figure 15, the use-case can be applied to a multitude of applications, which starts with an intent and ends with a user accomplishing what they initially intended to, for example, have a meal, go to a location or purchase an item. The system and method according to this disclosure views user intent as the start of any possible workflow, and user intents are predicted by the user actions, and such user intents are what activates the "workflow model" that helps provide the right information at the right time to users.
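The travel use-case above can be sketched as an ordered workflow, where each state is a point at which services or information may be offered. The step identifiers follow the numbered list; the state-advancing logic is an illustrative assumption.

```python
# Sketch of the travel workflow from the numbered steps above: each
# user state maps to a step, and advancing past the last state is the
# "task success". The hook where services/info are offered per state is
# left as a comment.

TRAVEL_WORKFLOW = ["determine_travel_options", "prepare_for_travel",
                   "head_to_pickup_point", "travel_first_leg",
                   "travel_second_leg", "arrive_at_destination"]

def advance(state):
    """Return the next user state, or None on task success."""
    i = TRAVEL_WORKFLOW.index(state)
    return TRAVEL_WORKFLOW[i + 1] if i + 1 < len(TRAVEL_WORKFLOW) else None

state = TRAVEL_WORKFLOW[0]
visited = []
while state is not None:
    visited.append(state)      # point where services/info would be offered
    state = advance(state)
assert visited == TRAVEL_WORKFLOW
```

In the platform's terms, a predicted user intent would select which workflow to activate, and each state transition corresponds to a "change in state" observed by the context engine.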

Figure 15 shows the relationships between actions and steps in the performing of actions for each step. The user action prediction and updating appears for each state of the user workflow. It is important to note that the "steps" and "actions" as shown in figure 15 map to the "user state" at each step and the "actions" in the system and method as described above. The system and method described above assist in providing suggestions as to the actions which are provided by the users through a "change in state" or an "explicit action call" from the platform.

The platform defines the use-cases which are supported, and the initial steps which form the overall processes for a user's workflow for getting things done, which may be modified by each user as the system and method according to this disclosure is utilized. Third party applications are seen as "service providers" which are provided with integration points so that the correct information can be served to users at the right place and time.

This also means that applications are not implemented for on-demand access to services or information as they are now; instead, applications are implemented by developers in line with the steps which are defined on the platform (and personalized for the user), and act more like "service providers" which provide their services or information at the corresponding step of the workflow. In short, the application according to this disclosure is able to complement the services provided by other third party applications. Another benefit of this approach is that each action may lead into a set of workflows in and of itself, and that the user-specific predictive datamodel can be averaged out amongst the population average to be used as the standard offering for users when using the application according to this disclosure. The updating of the user model actually leads to the updating of the user's saved model.

As widely known, machine learning requires a substantial amount of data in order to create a user specific learnt model that is able to accurately predict a service required by the user. Hence, it is pertinent that the probabilistic and deterministic models are at least accurate to a certain extent such that the user continues to use the application according to this disclosure, so that more datasets can be obtained from the user. The use of the generalized learnt model is also with the intention of ensuring a certain level of user experience so that the user continues to use the application according to this disclosure. The generalized learnt model can also account for unexpected events to counter-balance the prediction via the user specific learnt model.

2. Three main user interface components: user interfaces which are built around the use cases of users, and which shall work across various types of user devices.

The generalized use-case of user interaction with machines or people for information is shown in figure 16. The platform shall implement 3 entry points which can be applied for users who are using any type of device to access services/information. At a high level, the many steps which users utilize to achieve a particular outcome can be categorized into one of 4 steps, i.e. User Intent, Service Discovery, Service Initiation, and Service Interaction. The system and method according to this disclosure leverage on this behaviour of people to introduce 3 types of interfaces which users would be able to use as entry points into the platform, i.e. the "Live Feed", the "Ask Assistant" and "Conversations", as shown in figure 18.

"Live Feed": This is a functionality which provide users with top intents which tries to predict and present user intents as early as possible. The "Live Feed" is an Important component which looks at both the relevancy as we!! as the tiveness" of messages to determine if the messages are sii!S relevant to a user.

Using the relevant learnt model about a user, an interface which provides user intent-specific functionalities would allow for instant access for users. This interface would be designed around providing suggestions to users about what they would like to immediately access. This form of access allows the system to prompt users whether or not they would like particular information or services as they move through each step of the service provision. It is the intention that the "Live Feed" operates well for both visual and non-visual based machines. The way that the suggestive information can be presented to users for different device types could be as follows.

"Ask Assistant": This mode of interacting with the platform allow users to immediately access functionalities, in which users already have an intent, and knows what services they want to access.

This interface allows the main assistant to be interacted with in order to access information. It is through this interface that users are able to directly access services, or otherwise to discover the available range of services or information through the appropriate interface. The types of possible interfaces to allow users to discover information could be as follows.

"Conversation": For service fulfillment/interaction, a series of conversation-based interfaces which allow users to interact with bots or people to carry out services.

A generic chat/messaging application which allows different types of grouping to be obtained depending on the required purpose. As part of the design of the operating platform, it is the intention that messaging-based interfaces are made the primary means of obtaining information or services, and users are able to interact with other service providers through bots, or be grouped with other users dynamically through a criteria-based search through the user models.

The interface provides the ability for users to interact with services, whereby a chat-based interface will be provided. The design of the chat interface allows for both conversations with people as well as conversations with bots, in which one-to-one exchanges with bots can be simulated such that the user can interact with bots or people in one-to-one conversations or as a group.

This interface leverages on native interfaces which allow users to interact in a more conversational manner to obtain information or services.

The main entry point into this workflow is the starting of a request, and depending on the user intent and the action, the appropriate chat group can be dynamically grouped. In the starting of the chat group, it is possible that chat groups are started for:

1. Uniquely identified users which are grouped together

2. Dynamically determined users which are grouped together in a criteria-based manner

3. Chatbots which are implemented by service providers or otherwise

Figure 18 illustrates a process 1800 performed by an application in the mobile device in accordance with this disclosure. The application may be provided by a third party service provider. Alternatively, a new module may be provided as part of program 400 to provide the live feed service.

Process 1800 begins with step 1805 by determining the type of trigger initiating the live feed application. Specifically, process 1800 determines if the user initiated a direct or indirect request to open the live feed application. A direct request refers to the user switching on the live feed application to browse for information. An indirect request refers to switching on the live feed application due to a certain service requested by the user. For example, a user may request a certain service and/or ask the relevant assistant accordingly. Based on the inputs from the user, process 600 may have predicted a certain outcome where the appropriate action is to open the live feed application.

If the live feed application is triggered by indirect request, process 1800 proceeds to step 1810 to provide the relevant messages and services to the user.

If the live feed application is triggered by direct request, process 1800 proceeds to step 1815.

In step 1815, process 1800 executes process 600 to predict and obtain the most likely messages which the user would be interested in. Specifically, while browsing the application, every step triggered by the user would be received by the context engine, which will in turn trigger the prediction module to predict and obtain an appropriate action. In this instance, the appropriate actions are news feeds or information that are relevant to the user. Process 1800 then proceeds to display the relevant news feed or information.

In step 1815, process 1800 displays the top few relevant news feeds or information to the user accordingly on a "Live Feed" panel which provides a summary of the predictive messages to serve users or to offer services. This way, the most likely responses which a user would like are presented to the user without any additional user inputs.

In step 1820, if the user provides any additional inputs, as per usual, they will be received by process 600 to predict and obtain the most likely news feed, information or messages which the user would be interested in. Otherwise, process 1800 ends.
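The branch on trigger type in process 1800 can be sketched as below. The `predict_relevant_items` callable stands in for process 600, and the trigger labels and `top_n` cutoff are assumptions for illustration.

```python
# Sketch of process 1800's dispatch on trigger type (steps 1805-1815).
# `predict_relevant_items` is a placeholder for process 600's prediction.

def handle_live_feed_open(trigger, predict_relevant_items, top_n=3):
    if trigger == "indirect":
        # step 1810: a prior prediction already chose to open the feed,
        # so serve the messages/services tied to the requested service
        return predict_relevant_items(reason="requested_service")[:top_n]
    if trigger == "direct":
        # step 1815: predict the messages the user most likely wants
        return predict_relevant_items(reason="browsing")[:top_n]
    raise ValueError(f"unknown trigger: {trigger}")

fake_predict = lambda reason: [f"{reason}-item-{i}" for i in range(5)]
assert handle_live_feed_open("direct", fake_predict) == [
    "browsing-item-0", "browsing-item-1", "browsing-item-2"]
```

Further user inputs (step 1820) would simply re-enter the prediction path, consistent with the loop the text describes.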

3. Unified Structure for different types of messaging to be grouped within a singular directory to simplify user access

There would be different types of chat groups which would be accessible through the "Conversations" interface, e.g. dynamic groups, user-based groups, chat-bot groups, etc. This disclosure also aims to create a structure which enables different types of conversations to be implemented as a singular interface.

It is therefore the intention of the disclosure to include a uniform manner for sending and receiving of such messages from the dynamic network. Therefore, a uniform grouping functionality will be introduced such that users only have to understand one type of conversation grouping, as well as enable information to be replicated and reused across the different type of channels.

One way of achieving this outcome is to utilize a use-case related grouping, which shall be referred to as "channels", as well as an "audience" tag. Such a referencing shall enable users to utilize a single approach for a wide range of conversations/interactions through a uniform approach. In addition, generic types of information which are utilized across different sources shall also be defined, which allows for an additional level of grouping that makes information more specific for users, and which shall be referred to as "information type". "Channels" relate to the use-case which the user is interested in engaging in, and the "audience" refers to the dynamic grouping of people which would support the particular user intent. "Information Types" are the generic representations of more complex information which enable a seamless interaction of users across the different channels/groupings of information.

Examples of "channels" are Updates, Arrivaifsfoiices, Recommendations, etc. Examples of "audience" are LocalCommunity {Frequent Similar Locations), FoodCommuniiy (Similar Food Interests), etc. Examples of Information Type <Non- interactive) are, Places, Weblinks, Sightings/images, Events, etc. Examples of

Information Type {Interactive) are Journey Summary, MessageTbread, etc.
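The unified structure can be sketched as a message record carrying a channel, audience tags and an information type, so a single interface can render every conversation kind. The allowed values below mirror the examples in the text; treating them as a fixed vocabulary is an assumption for illustration.

```python
# Minimal sketch of the unified message structure: every message carries
# a channel (use-case), audience tags (dynamic grouping) and an
# information type, so one "Conversations" interface can render all of
# them. The vocabularies mirror the examples given in the text.

CHANNELS = {"Updates", "ArrivalNotices", "Recommendations"}
INFO_TYPES = {"Places", "Weblinks", "Events",          # non-interactive
              "JourneySummary", "MessageThread"}       # interactive

def make_message(channel, audience, info_type, payload):
    if channel not in CHANNELS:
        raise ValueError(f"unknown channel: {channel}")
    if info_type not in INFO_TYPES:
        raise ValueError(f"unknown information type: {info_type}")
    return {"channel": channel, "audience": audience,
            "information_type": info_type, "payload": payload}

m = make_message("Recommendations", ["FoodCommunity"], "Places",
                 {"name": "Hawker Centre", "rating": 4.5})
assert m["channel"] == "Recommendations"
```

Because every message shares this shape, the same rendering and grouping code can serve assistant messages, dynamic-group messages and bot messages alike, which is the stated aim of the unified structure.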

The intention of this disclosure is to be able to create common user interface constructs at the higher levels, and to enable support of consumer access to information and services through common constructs, such that the user's access to information and services (which are provided as messages) is through a single layer of interactions.

With this design, highly customizable "message-like" elements are able to be implemented, which are also conversational in behaviour and can enable additional interactions. The images in figure 17 show how the generic components of an "information type" can be implemented. It is the intention that the basic information type is laid out as uniformly as possible, such that a multitude of different types of information can be presented and interacted with. This allows for a large variety of information/services provision which would enable this messaging interface to be utilized for a large variety of use-cases.

4. Capabilities which enable/simplify the effort of physical businesses/organizations to easily network with users to perform/provide relevant services/information

This can be illustrated by the setting up of locations as "Communities", which allows for locations to be identified as "audience tags". Due to the process of grouping users based on a user's characteristics, it enables messages and bots/service providers to provide information/services to users based on their location. By enabling this functionality, businesses/organizations are able to mark out areas of interest, which enables simplified interaction of users with their surroundings, i.e. homes, offices or other businesses/merchants. Enabling this process allows businesses/organizations of various locations to immediately offer services/information to users using the messaging capabilities set out above.

Another point to note is that frequently visited places are used as a physical "proof of work" which is only enabled when a user is at a location frequently enough. This leads to less noise and less misuse of this functionality, allowing only genuine users to create groups. In addition, it could also be the case that businesses need to verify their operations before they are entitled to install the relevant device to reach out to users accordingly. It would also naturally be the case that users who join need to be able to prove that they satisfy the requirements of that community, e.g. that they will need to show that they have the relevant proof-of-work in regards to the frequency of travel or otherwise.

5. Ability for physical businesses/organizations to enable instant connection for communications with users, which allows for seamless interaction, and in a manner which fits in with the messaging constructs described above.

Due to the process of grouping users based on a user's characteristics, this enables businesses to participate as a user, and also enables them to instantly be networked as a user.

For businesses which provide digital information/services, it is described above that the centralised "data structuring" information would be able to convert the digital goods provided into messages which users are able to discover or interact with. For physical product/service providers, the process is similar to the above for service providers/bots to service users accordingly. However, it should be noted that there are additional steps to also link information such as the location of the business as well as other characteristics.

Although businesses may choose to represent themselves as a "bot/service provider", it may be the case that businesses choose to interact as a user, which could be good for situations such as "LIVE Support/Chat", where human business customer service personnel get in touch with the user. Similar to human users, i.e. where bots/service providers are able to service users, bots/service providers can also service physical businesses. The only difference is in how the business sets up its properties. Information such as its location and type of business would be added to its user profile to allow it to be set up as a user for bot services to start serving (including to automatically serve customers), or even allow employees of a business to automatically be connected to a user (and represented as the business). This allows customer service staff, for example, to instantly be shown as being "online" when the staff reports to work, and also to talk to the user to support the user using the "direct message" functionality, or to participate in the relevant channel as the business.

The above is a description of embodiments of a system in accordance with the disclosure as set forth below. It is envisioned that those skilled in the art can and will design alternative embodiments of this disclosure, based upon this disclosure, that infringe on this disclosure as set forth in the following claims.