

Title:
SEGMENT TARGETING APPARATUS, SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2018/006099
Kind Code:
A1
Abstract:
The disclosure includes an apparatus, system and method for providing an end-to-end, multi-channel artificial intelligence (AI) engine capable of enabling website publishers to at least identify, engage, and converse with users. The apparatus, system and method may include recording each conversation between the user and the AI engine; extracting, via at least neuro-linguistic processing and machine learning, of user intent from the recorded conversation; assigning the user intent and a conversation identifier into a user profile having secondary data associated therewith that is uniquely associated with that user, wherein the assigning comprises generating a hashed alphanumeric key for each identified user, user intent, and conversation; creating a detailed profile in accordance with the user intent, conversation identifier, and the user profile indicative of at least preferences of the user; and spawning at least one targeting model comprising a decision tree applied to the preferences of the user, wherein the decision tree applies criteria that results in at least one of a permanent assigning of the discerned preferences to the user profile, an action or inaction by the AI engine in relation to the user, or a discarding of the discerned preferences as irrelevant.

Inventors:
MELLINGER JONATHAN (US)
TEWKSBURY MARCUS (US)
Application Number:
PCT/US2017/040620
Publication Date:
January 04, 2018
Filing Date:
July 03, 2017
Assignee:
MELLINGER JONATHAN (US)
TEWKSBURY MARCUS (US)
International Classes:
G06E1/00
Foreign References:
US20120221502A12012-08-30
US20030014489A12003-01-16
US20100058183A12010-03-04
US20140013249A12014-01-09
US20120166180A12012-06-28
Attorney, Agent or Firm:
MCWILLIAMS, Thomas (US)
Claims:
CLAIMS

What is claimed is:

1. An end-to-end, multi-channel artificial intelligence (AI) method capable of enabling website publishers to at least identify, engage, and converse with users, comprising, upon execution of non-transitory computing code from at least one computing memory by at least one computing processor, the steps of:

recording each conversation between the user and the AI engine;

extracting, via at least neuro-linguistic processing and machine learning, of user intent from the recorded conversation;

assigning the user intent and a conversation identifier into a user profile having secondary data associated therewith that is uniquely associated with that user, wherein the assigning comprises generating a hashed alphanumeric key for each identified user, user intent, and conversation;

creating a detailed profile in accordance with the user intent, conversation identifier, and the user profile indicative of at least preferences of the user;

spawning at least one targeting model comprising a decision tree applied to the preferences of the user, wherein the decision tree applies criteria that results in at least one of a permanent assigning of the discerned preferences to the user profile, an action or inaction by the AI engine in relation to the user, or a discarding of the discerned preferences as irrelevant.

2. The method of claim 1, wherein the inaction comprises allowing for the user to continue communicating with a chatbot.

3. The method of claim 1, wherein the action comprises returning a pre-defined app.

4. The method of claim 3, wherein the pre-defined app comprises audio or video content.

5. The method of claim 3, wherein the pre-defined app comprises marketing-specific content or a survey.

6. The method of claim 1, wherein the decision tree comprises generation of a relevancy score of the discerned preferences to the user profile.

7. The method of claim 1, wherein the multi-channels comprise at least two of web, mobile, and SMS.

8. The method of claim 1, wherein an AI engine of the method comprises a chatbot.

9. The method of claim 1, wherein the website publishers comprise one of brands or enterprises.

10. The method of claim 1, wherein the conversation is for marketing purposes of the publisher.

11. The method of claim 1, wherein the user intent comprises at least one of sentiment, likes, dislikes, needs, and wants.

12. The method of claim 1, wherein the user profile comprises personally identifiable information.

13. The method of claim 1, wherein the conversation identifier comprises geolocation and time of day of the conversation.

Description:
SEGMENT TARGETING APPARATUS, SYSTEM AND METHOD

This application claims the benefit of priority to US Provisional Patent Application Ser. No. 62/357,749, entitled SEGMENT TARGETING APPARATUS, SYSTEM AND METHOD and filed on July 1, 2016, the entirety of which is incorporated herein by reference as if set forth herein.

[2] The present disclosure generally relates to targeted content, and more particularly to a segment targeting apparatus, system and method that is based on intent development.

Advertising provided while "surfing the web" has become exceedingly prevalent in modern telecommunications. In currently known embodiments, an ad or ads, pop-up ads, springing ads, and the like are frequently provided to internet users surfing on a Web browser, often in a targeted manner. These advertisements are typically targeted based on demographic data indicated by so-called "cookies", which cookies are indicative of prior internet usage by the user of the subject web browser.

Advertisers often purchase access to particular segments of society in order to target their advertisements, wherein the accessed segments may be discerned based on the aforementioned cookies. For example, based on web surfing, and particularly on websites accessed, cookies may indicate the age range, demographic status, geographic preferences, likes and dislikes, and so on of a user using the browser to surf the internet. As such, advertisers may target advertising to that particular user in that particular segment based on the user's tracked data.

[5] However, with the advent of alternative query and user interface (UI) methodologies, such as voice actuated personal assistants, chat boxes, app based messaging, and the like, cookie technology will be largely inapplicable to allow for accumulation of user preference data based on usage. Therefore, the need exists for a new intent development system that may be used with alternative user interface methodologies distinct from the web browser interfaces used previously, to allow for targeted user advertisements based on the developed intent.

SUMMARY OF THE DISCLOSURE

[6] The disclosure includes an apparatus, system and method for providing an end-to-end, multi-channel artificial intelligence (AI) engine capable of enabling website publishers to at least identify, engage, and converse with users. The apparatus, system and method may include recording each conversation between the user and the AI engine; extracting, via at least neuro-linguistic processing and machine learning, of user intent from the recorded conversation; assigning the user intent and a conversation identifier into a user profile having secondary data associated therewith that is uniquely associated with that user, wherein the assigning comprises generating a hashed alphanumeric key for each identified user, user intent, and conversation; creating a detailed profile in accordance with the user intent, conversation identifier, and the user profile indicative of at least preferences of the user; and spawning at least one targeting model comprising a decision tree applied to the preferences of the user, wherein the decision tree applies criteria that results in at least one of a permanent assigning of the discerned preferences to the user profile, an action or inaction by the AI engine in relation to the user, or a discarding of the discerned preferences as irrelevant.
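Purely for illustration, the following Python sketch shows one way the summarized pipeline might be organized; it is not the disclosed implementation, and the function and field names (hashed_key, process_conversation, extract_intent, and the profile keys) are hypothetical.

    import hashlib
    import uuid

    def hashed_key(*parts):
        # One possible way to derive a hashed alphanumeric key for a user,
        # user intent, or conversation, as described in the summary.
        return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

    def process_conversation(user_id, transcript, extract_intent):
        # Record the conversation, extract intent (the NLP/ML step is passed
        # in as extract_intent), and assign keyed identifiers into a profile.
        conversation_id = str(uuid.uuid4())
        intent = extract_intent(transcript)
        return {
            "user_key": hashed_key(user_id),
            "conversation_key": hashed_key(user_id, conversation_id),
            "intent_key": hashed_key(user_id, conversation_id, intent),
            "intent": intent,
            "transcript": transcript,
        }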

[7] Thus, the embodiments provide at least a new intent development system that may be used with alternative user interface methodologies distinct from the web browser interfaces used previously, to allow for targeted user advertisements based on the developed intent.

BRIEF DESCRIPTION OF THE DRAWINGS

The following detailed description of preferred embodiments of the invention will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings embodiments which are presently preferred. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.

FIG. 1 is an exemplary illustration of aspects of the instant invention;

FIG. 2 is an exemplary illustration of aspects of the instant invention;

FIG. 3 is an exemplary illustration of aspects of the instant invention; and

FIG. 4 is an exemplary illustration of aspects of the instant invention.

DETAILED DESCRIPTION

[13] The figures and descriptions provided herein may have been simplified to illustrate aspects that are relevant for a clear understanding of the herein described devices, systems, and methods, while eliminating, for the purpose of clarity, other aspects that may be found in typical similar devices, systems, and methods. Those of ordinary skill may recognize that other elements and/or operations may be desirable and/or necessary to implement the devices, systems, and methods described herein. Because such elements and operations are well known in the art, and because they do not facilitate a better understanding of the present disclosure, a discussion of such elements and operations may not be provided herein. However, the present disclosure is deemed to inherently include all such elements, variations, and modifications to the described aspects that would be known to those of ordinary skill in the art.

[14] The terminology used herein is for the purpose of describing particular exemplary embodiments only and is not intended to be limiting. For example, the words "right", "left", "lower", and "upper" designate directions in the drawings to which reference is made. The terminologies "a" and "an", as used herein, imply "at least one," unless the corresponding disclosure explicitly is limited to the singular. Further, as used herein, the singular forms "a", "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise.

[15] The terms "comprises," "comprising," "including," and "having," are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.

When an element is referred to as being "on", "engaged to", "connected to" or "coupled to" another element, it may be directly on, engaged, connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly engaged to", "directly connected to" or "directly coupled to" another element, there may be no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.). As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Although the terms first, second, third, etc., may be used herein to describe various elements, functions, steps, components, regions, layers and/or sections, these elements, components, functions, steps, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, function, step, region, layer or section from another element, component, region, layer or section. Thus, terms such as "first," "second," and other numerical terms, when used herein, do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the exemplary embodiments.

Computer-implemented platforms, engines, systems and methods of use are disclosed herein. Described embodiments of these platforms, engines, systems and methods are intended to be exemplary and not limiting. As such, it is contemplated that the herein described systems and methods may be adapted to provide server-based and cloud-based valuations, interactions, data exchanges, and the like, and may be extended to provide enhancements and/or additions to the exemplary platforms, engines, systems and methods described. The disclosure is thus intended to include all such extensions.

Furthermore, it will be understood that the terms "module" or "engine", as used herein, do not limit the functionality to particular physical modules, but may include any number of tangibly-embodied software and/or hardware components having a transformative effect on at least a portion of a system. In general, a computer program product in accordance with one embodiment may comprise a tangible computer usable medium (e.g., standard RAM, an optical disc, a USB drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by a processor (working in connection with an operating system) to implement one or more functions and methods as described below. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via C, C++, C#, Java, Actionscript, Objective-C, Javascript, CSS, XML, etc.), by way of non-limiting example.

Figure 1 depicts an exemplary computing system 1100 for use in association with the herein described systems and methods. Computing system 1100 is capable of executing software, such as an operating system (OS) and/or one or more computing applications 1190, such as applications applying the algorithms discussed herein, and may execute such applications using data, such as may be gained via the I/O port.

The operation of exemplary computing system 1100 is controlled primarily by computer readable instructions, such as instructions stored in a computer readable storage medium, such as hard disk drive (HDD) 1115, optical disk (not shown) such as a CD or DVD, solid state drive (not shown) such as a USB "thumb drive," or the like. Such instructions may be executed within central processing unit (CPU) 1110 to cause computing system 1100 to perform the operations discussed throughout. In many known computer servers, workstations, personal computers, and the like, CPU 1110 is implemented in an integrated circuit called a processor.

It is appreciated that, although exemplary computing system 1100 is shown to comprise a single CPU 1110, such description is merely illustrative, as computing system 1100 may comprise a plurality of CPUs 1110. Additionally, computing system 1100 may exploit the resources of remote CPUs (not shown), for example, through communications network 1170 or some other data communications means.

In operation, CPU 1110 fetches, decodes, and executes instructions from a computer readable storage medium, such as HDD 1115. Such instructions may be included in software such as an operating system (OS), executable programs such as the applications discussed above, and the like. Information, such as computer instructions and other computer readable data, is transferred between components of computing system 1100 via the system's main data-transfer path. The main data-transfer path may use a system bus architecture 1105, although other computer architectures (not shown) can be used, such as architectures using serializers and deserializers and crossbar switches to communicate data between devices over serial communication paths. System bus 1105 may include data lines for sending data, address lines for sending addresses, and control lines for sending interrupts and for operating the system bus. Some busses provide bus arbitration that regulates access to the bus by extension cards, controllers, and CPU 1110.

Memory devices coupled to system bus 1105 may include random access memory (RAM) 1125 and/or read only memory (ROM) 1130. Such memories include circuitry that allows information to be stored and retrieved. ROMs 1130 generally contain stored data that cannot be modified. Data stored in RAM 1125 can be read or changed by CPU 1110 or other hardware devices. Access to RAM 1125 and/or ROM 1130 may be controlled by memory controller 1120. Memory controller 1120 may provide an address translation function that translates virtual addresses into physical addresses as instructions are executed. Memory controller 1120 may also provide a memory protection function that isolates processes within the system and isolates system processes from user processes. Thus, a program running in user mode may normally access only memory mapped by its own process virtual address space; in such instances, the program cannot access memory within another process' virtual address space unless memory sharing between the processes has been set up. In addition, computing system 1100 may contain peripheral communications bus 135, which is responsible for communicating instructions from CPU 1110 to, and/or receiving data from, peripherals, such as peripherals 1140, 1145, and 1150, which may include printers, keyboards, and/or the sensors discussed herein throughout. An example of a peripheral bus is the Peripheral Component Interconnect (PCI) bus.

Display 1160, which is controlled by display controller 1155, may be used to display visual output and/or presentation generated by or at the request of computing system 1100, responsive to operation of the aforementioned computing program. Such visual output may include text, graphics, animated graphics, and/or video, for example. Display 1160 may be implemented with a CRT-based video display, an LCD or LED-based display, a gas plasma-based flat-panel display, a touch-panel display, or the like. Display controller 1155 includes electronic components required to generate a video signal that is sent to display 1160.

Further, computing system 1100 may contain network adapter 1165 which may be used to couple computing system 1100 to external communication network 1170, which may include or provide access to the Internet, an intranet, an extranet, or the like. Communications network 1170 may provide user access for computing system 1100 with means of communicating and transferring software and information electronically. Additionally, communications network 1170 may provide for distributed processing, which involves several computers and the sharing of workloads or cooperative efforts in performing a task. It is appreciated that the network connections shown are exemplary and other means of establishing communications links between computing system 1100 and remote users may be used.

Network adaptor 1165 may communicate to and from network 1170 using any available wired or wireless technologies. Such technologies may include, by way of non-limiting example, cellular, Wi-Fi, Bluetooth, infrared, or the like. It is appreciated that exemplary computing system 1100 is merely illustrative of a computing environment in which the herein described systems and methods may operate, and does not limit the implementation of the herein described systems and methods in computing environments having differing components and configurations. That is to say, the inventive concepts described herein may be implemented in various computing environments using various components and configurations.

In the presently known art, cookies, which are typically JavaScript components, are used to track a user's usage of the internet, such as may occur over network adaptor 1165 of computing system 1100 as discussed above. This data is then discerned in real time in order to assess likely desires or preferences of the user so that advertising can be targeted specifically to that user. It goes without saying that advertising experiences a heightened impact when targeted to a user most likely to respond to that advertisement.

However, with the advent of alternative user interface methodologies to allow for searching, assistance, content provision, or discussion, such as UIs in the form of Siri from Apple, Alexa from Amazon, chat boxes, messaging applications, voice directed remote controls for television, set top boxes, or radio, and the like, the use of cookies to track interaction with the alternative UI may not be available. Nevertheless, the need to target advertising to users of these alternative user interfaces is present.

Therefore, the present invention may use voice or text query input to alternative user interfaces to assess a user's intent, thereby generating "cookie-esque" data that may be compared, such as in a flat data file format, to available cookie pool data, such as in order to allow advertisements to be pulled from the typical cookie pool data in a manner that will optimally target the pulled content to the user of the alternative user interface.
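As one hedged illustration of the flat data file comparison described above, the Python sketch below writes an intent record as a delimited row and checks it against a segment specification; the column names and the dictionary-based segment format are assumptions made for the example, not details taken from the disclosure.

    import csv

    # Hypothetical flat-file layout for "cookie-esque" intent records.
    FIELDS = ["user_key", "age_range", "geo", "interest", "intent"]

    def append_intent_row(path, record):
        # Append one intent record to a flat data file for later comparison
        # against cookie pool data.
        with open(path, "a", newline="") as f:
            csv.DictWriter(f, fieldnames=FIELDS).writerow(record)

    def matches_segment(record, segment):
        # A segment is modeled here as a dict of required field values,
        # loosely mirroring how a cookie pool segment might be specified.
        return all(record.get(field) == value for field, value in segment.items())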

Accordingly and by way of non-limiting example, the user may ask an artificial intelligence assistant for a hotel room reservation, and, based on the user's prior inquiries, the geography in which the hotel room is requested, the user's age and the ages of other known users in the requesting household, the user's prior purchase requests through the AI, and whether the speaking voice is male or female, an advertisement for a hotel room may be targeted directly to a user via the alternative user interface. Accordingly, the present invention provides new systems and methods for receiving and aggregating user query and input data to alternative user interfaces to enable the production of targeted advertisements to the user through classic or alternative user interface means.

Once information is received and discerned, such as via voice recognition (whether through an embedded API, built-in voice recognition, a remote networked engine, or the like), information may move through a database in order to allow for the building of a taxonomy in relation to the query/input. Needless to say, latency reduction is key in the formation of the intent development in light of the taxonomy. Thus, for example, the database taxonomy may indicate that, based on other prior queries and/or a user profile, the requesting user is a male, aged 18-35, who likes to eat fruit and play golf, who asks frequent questions about the hours of local gyms, and is in a high income bracket. Thereby, an advertiser may compare this information to desired information for advertisement targets.
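By way of a hedged sketch only, the taxonomy and the advertiser comparison described above might be represented as follows in Python; the attribute names and the criteria format passed to advertiser_match are illustrative assumptions.

    # Hypothetical taxonomy accumulated from prior queries and the user profile.
    taxonomy = {
        "gender": "male",
        "age_range": "18-35",
        "interests": {"fruit", "golf", "gym"},
        "income_bracket": "high",
    }

    def advertiser_match(taxonomy, criteria):
        # Compare accumulated taxonomy attributes against an advertiser's
        # desired target attributes (both structures are illustrative).
        wanted_interests = criteria.get("interests", set())
        return (
            taxonomy.get("age_range") == criteria.get("age_range")
            and wanted_interests.issubset(taxonomy.get("interests", set()))
        )

    # Example: a local gym campaign targeting 18-35 year olds interested in gyms.
    print(advertiser_match(taxonomy, {"age_range": "18-35", "interests": {"gym"}}))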

In the immediately foregoing example, multiple advertisers may wish to have ads placed to the subject user. For example, Dole pineapples may have a campaign targeted to users similar to the user referenced above; the state of Hawaii may have a vacation campaign targeted to users such as the one above; and a local gym may have a campaign targeted to users such as the one above. That is, each of the foregoing advertisers may have indicated the desire to advertise to a segment of a "cookie" pool that matches the user segment indicated for this exemplary user interacting with any UI, or interacting specifically with an alternative user interface.

Of note, although the disclosed embodiments for intent development may be employed in conjunction with actual cookie data gained through web browsing, the disclosed intent development data, or "cookie-esque" data, discussed herein constitutes a separate data generation mechanism from the cookie data known in the present art. In short, the present system and method generates spoken or typed intent data in accordance with session mining during a use session of an alternative user interface, as that phrase is defined herein. However, the captured intent data provided by the instant systems and methods may be matched to existing cookie pool data, and consequently the "front end" provided by the instant invention may be employed with an existing, cookie-based advertisement "back end" to allow for the provision of targeted advertisements through the disclosed alternative user interfaces. In light of the foregoing, the data necessary to employ and make use of the instant invention, and the generation of such data, is less intensive than the generation of data necessary to employ historical cookies to target advertisements. That is, based on the voice and text recognition already inherent in the APIs of most of the aforementioned alternative user interfaces, the data sought to do the disclosed intent capture is already generated by these alternative user interface vehicles. Therefore, this existing data may be mined for the taxonomy items of interest to allow for the development of user intent, and, as discussed above, once this taxonomy is created it may simply be interfaced to existing back end cookie pool data.

It goes without saying that privacy and security form important aspects of the presently disclosed embodiments. Accordingly, and because the instant intent capture data may be akin to historical cookie data, privacy and security in relation to cookies may also be employed with the instant invention. That is, users of alternative user interfaces may be enabled to opt out of intent capture data, may be forewarned that intent capture data is being captured, or the intent capture features may be "closed" into the API of the device providing the alternative user interface, whereby the fullness of the user's intent capture data will not be available to any third party.

Figure 2 illustrates a system 200 in which the disclosed systems and methods are applicable. As illustrated in Figure 2, a user session may occur through an alternative user interface 202, such as a chat box/bot (as shown), and the API 204 of the alternative user interface device 202 may include a natural language parsing engine 206 whereby the natural language query 208 into the alternative UI 202 is parsed. Moreover, the user session may include a user profile 210, and/or the user profiles of other possible users of the device, wherein the user profiles 210 may include additional information regarding or indicative of the user's intent, such as entered user profile information 210, web browser cookie data for that user previously associated with the user's profile, or the like.
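The following deliberately minimal Python sketch stands in for the natural language parsing engine 206; the keyword-based matching and the intent labels are assumptions chosen only to make the flow concrete, and a real parsing engine would be substantially richer.

    # Hypothetical intent labels and trigger keywords for the sketch.
    INTENT_KEYWORDS = {
        "book_hotel": ("hotel", "room", "reservation"),
        "find_gym": ("gym", "workout", "hours"),
    }

    def parse_query(query):
        # Parse a natural language query 208 into a structured intent record.
        tokens = query.lower().replace("?", "").split()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(keyword in tokens for keyword in keywords):
                return {"intent": intent, "tokens": tokens}
        return {"intent": "unknown", "tokens": tokens}

    print(parse_query("Can you find me a hotel room for next weekend?"))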

The natural language data 206 of the user's query may be structured to allow for a database comparison to assess the user's intent. As shown, an intent development engine 220 may employ various methodologies to use the user's query to capture the user's intent.

The user's intent data and the user's profile data may be combined to formulate a taxonomy of the user's targeting profile 230, which may be compared, as shown, against the desired audience segment 240 for an advertisement. As referenced throughout, this audience segment 240 may be an indicated preference by an advertiser unique to the currently disclosed alternative user interface environment, or may be a historic cookie pool segment choice by the advertiser. Once the user is validated as a target for a targeted advertisement, the targeted advertisement may be produced 250. Of note, the targeted advertisement may be produced via the alternative user interface as defined herein and/or via one or more other interfaces known to be associated with that user.
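As a hedged illustration of the comparison between the targeting profile 230 and the audience segment 240, and of advertisement production 250, the following Python sketch assumes segments are supplied as dictionaries with "criteria" and "advertisement" fields; those names are hypothetical.

    def produce_targeted_ad(targeting_profile, audience_segments):
        # Compare the user's targeting profile 230 against each desired
        # audience segment 240; on a match, return its advertisement 250.
        for segment in audience_segments:
            criteria = segment["criteria"]
            if all(targeting_profile.get(key) == value for key, value in criteria.items()):
                return segment["advertisement"]
        return None  # no segment matched, so no targeted ad is produced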

Figure 3 is a flow diagram illustrating a method according to the embodiments. The method provides an end-to-end, multi-channel (web, mobile, SMS) deployment of artificial intelligence, such as "chatbots", to enable website publishers (such as brands and enterprises) to identify, engage, and converse with (such as for marketing purposes) customers.

As illustrated, at step 302 each conversation between a user and an AI (i.e., a chatbot) is recorded. This provides a detailed record of the conversation between the AI and the user; the use of neuro-linguistic processing and machine learning allows for extraction, by the disclosed engine and system, of sentiment and user intent from these conversations at step 304; and a unique identifier for that user and that conversation with the AI is assigned at step 306.
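Steps 302-306 might be organized as in the short Python sketch below; the record layout and the extract_intent callable are assumptions made for illustration rather than details of the disclosed engine.

    import uuid
    from datetime import datetime, timezone

    def record_conversation(user_key, messages, extract_intent):
        # Step 302: record the conversation; step 304: extract sentiment and
        # intent (extract_intent stands in for the NLP/ML machinery); step 306:
        # assign a unique identifier to the user/conversation pair.
        return {
            "conversation_id": str(uuid.uuid4()),
            "user_key": user_key,
            "messages": list(messages),
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "intent": extract_intent(messages),
        }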

A hashed alphanumeric key for each identity/identifier is generated at step 310. Moreover, because the disclosed engine and system may interoperate and integrate with an enterprise's own environment (such as the enterprise's website, mobile app, etc.), additional personally identifiable information (PII) may be appended to the Identity, such as in a relational database, at step 320. Other datasets may also be appended to this Identity and PII, such as geolocation and time of day, to create a detailed profile at step 322 that indicates, with particularity, WHO said WHAT, WHERE they were, and WHEN they said it, which, when taken in conjunction with other information (such as the user profile), is indicative of likes/dislikes/intent. Accordingly, this information forms an emotional intelligence profile.
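A minimal sketch of steps 310-322 follows, assuming SHA-256 hashing for the identity key and a simple WHO/WHAT/WHERE/WHEN dictionary for the detailed profile; neither choice is stated in the disclosure.

    import hashlib

    def build_detailed_profile(identity, pii, intent, geolocation, timestamp):
        # Step 310: generate a hashed alphanumeric key for the identity.
        identity_key = hashlib.sha256(identity.encode()).hexdigest()
        # Steps 320-322: append PII, geolocation, and time of day to form the
        # WHO/WHAT/WHERE/WHEN detailed profile.
        return {
            "who": {"identity_key": identity_key, **pii},
            "what": intent,
            "where": geolocation,
            "when": timestamp,
        }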

In short, for every instance of Who, What, Where, and When, a decision tree may be spawned from steps 320 and 322 to allow for application of any of various selected targeting or intent assessment models, which may be executed, input and/or called at step 330. Such models may improve aspects of customer engagement based on the customer's desired outcome or key performance indicators (KPIs), by way of non-limiting example.

More particularly, at step 330, the Intents of a specific Identity may be assessed, such as by performing an internal auction or decisioning process (as dictated by the particular model or model of models). The outcome of the modelling may determine the next action of the AI at step 334. For example, the AI may do nothing and the user may continue communicating with the chatbot, or the AI may return a pre-defined application (App) that is to be provided under certain criteria. Such apps may consist of audio or video content, marketing-specific content, surveys, or other information that may be delivered directly into the chat stream of the user and the AI.
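One hedged way to picture the decisioning at steps 330 and 334 is the Python sketch below, in which candidate pre-defined apps are scored for the current Intent and a threshold decides between returning an app and doing nothing; the scoring inputs and the threshold are assumptions, not features recited by the disclosure.

    def decide_next_action(scored_apps, threshold=0.5):
        # Step 330: assess the Intents of an Identity via a simple decisioning
        # pass over candidate pre-defined apps, each paired with a score.
        best_app, best_score = None, 0.0
        for app, score in scored_apps.items():
            if score > best_score:
                best_app, best_score = app, score
        # Step 334: either return the winning app into the chat stream or do
        # nothing, letting the user continue communicating with the chatbot.
        if best_score >= threshold:
            return {"action": "return_app", "app": best_app}
        return {"action": "none"}

    print(decide_next_action({"hotel_video": 0.8, "gym_survey": 0.3}))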

As discussed throughout, each of the foregoing interactions may form a recorded data "event", created and maintained in the disclosed data environment. In embodiments, each Intent may be scored for relevancy to developing the ongoing customer profile at step 350. For example, some interactions between a user and the AI may be meaningless, unrelated to the specific customer, or just noise, but some interactions may provide strong signals of intent and sentiment. Those interactions that are highly indicative of intent/wants/needs/likes/dislikes may be appended to the Identity, and may be shared with enterprise customers as valuable business intelligence insights.

In an exemplary embodiment, the decision tree process may include multiple prongs, such as is illustrated in the logical block diagram of Figure 4. As shown, the logic may include decisions as to intent discovery 402, action mapping 404, automation 406, and dialog 408 with the user and the AI engine.

It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concepts detailed herein throughout. It is understood, therefore, that the inventive embodiments are not limited to the particular embodiments disclosed, but rather that the disclosure is intended to cover modifications within the spirit and scope of the present disclosure. That is, those modifications apparent to the skilled artisan stemming from an understanding of this detailed description are understood to form part of this disclosure.