

Title:
METHOD AND SYSTEM FOR CONTENT-BASED TARGETING OF PROMOTED INFORMATION
Document Type and Number:
WIPO Patent Application WO/2014/123447
Kind Code:
A1
Abstract:
A method for targeting ads to users of touch screen devices. A visual preview of a web page is automatically generated and rendered to the user prior to the actual transfer to the hyperlinked page. Any link on any page has an automatically generated preview. To cover the cost of generating the previews, an advertisement banner is added to each preview. Targeted advertisements, based on content analysis and user preferences, are displayed to the user along with the preview. The user selects a portion of the web page content, and the content portion is analyzed for potential user goals (wishes). Relevant ads are then selected based on the user goals and displayed to the user.

Inventors:
TSYPLIAEV MAXIM VIKTOROVICH (RU)
VERBITSKY ANDREY VLADISLAVOVICH (RU)
Application Number:
PCT/RU2013/000131
Publication Date:
August 14, 2014
Filing Date:
February 19, 2013
Assignee:
OBSCHESTVO S OGRANICHENNOY OTVETSTVENNOSTYU SLICKJUMP (RU)
TSYPLIAEV MAXIM VIKTOROVICH (RU)
VERBITSKY ANDREY VLADISLAVOVICH (RU)
International Classes:
G06Q30/02; G06F17/40
Foreign References:
RU2445704C2 (2012-03-20)
US20100332313A1 (2010-12-30)
RU2393638C2 (2010-06-27)
US20070060114A1 (2007-03-15)
Claims:

1. A method for promoted information targeting, the method comprising: (a) presenting content to a user on a touch screen device; (b) detecting a portion of the content indicated by a touch action of the user; (c) generating metadata reflecting the user goal associated with the content portion; (d) adding user-specific information to the metadata; (e) extracting potential user goals from the metadata; (f) applying an association matrix to the metadata to define resulting user goals; (g) selecting promoted information based on the goals; (h) applying user-specific targeting constraints to the promoted information; and

(i) generating a visual presentation based on templates for the promoted information and presenting it to the user.

2. The method of claim 1, further comprising presenting a list of the goals to the user for selecting one goal, if more than one user goal is determined.

3. The method of claim 1, wherein the user action is recognized by the touch screen device, and identifies a portion of the content.

4. The method of claim 3, wherein the user action is applicable to any part of content displayed on the touch screen device.

5. The method of claim 1, wherein the portion of the content includes a sentence, adjacent sentences, a portion of a sentence, a paragraph, or adjacent paragraphs.

6. The method of claim 1, wherein the portion of the content includes an image, a portion of an image, a video, an animation, a text, a 3D model, or any other visual presentation.

7. The method of claim 1, wherein the metadata and potential user goals in step (c) are pre-calculated for each portion of the content and cached, and then used in subsequent steps.

8. The method of claim 1, wherein the analyzing of the portion of the content is performed using Natural Language Processing (NLP).

9. The method of claim 1, wherein the user-specific targeting constraints are any of:

user gender;

user age;

user location;

user time zone;

user current time;

user time of continuous activity on the touch screen device;

touch screen device operating system;

type of touch screen device; and

wireless service provider.

10. The method of claim 1, further comprising using a link to a page containing the content to generate a preview of the page for a display to a user, wherein the promoted information is inserted into the preview.

11. The method of claim 1, further comprising querying the user to clarify the user potential goals associated with the selected portion of the content.

12. The method of claim 1, further comprising mapping coordinates of the user action, and, upon detection of a mistake with the user action, selecting an area of the screen that contains the most informative content.

13. The method of claim 1, wherein the promoted information is generated based on templates, and is automatically selected depending on a speed of a communications channel, an internet browser type, and user device capabilities.

14. The method of claim 1, further comprising presenting the promoted information in a separate graphical element with support for infinite scrolling and categorization.

15. The method of claim 1, further comprising presenting promoted information in different numbers of columns for different physical screen sizes and layouts.

16. The method of claim 1, wherein the user is provided a mechanism for feedback about relevant and irrelevant promoted information.

17. The method of claim 16, wherein the feedback is used for further targeting of promoted information.

18. The method of claim 16, wherein the system automatically recognizes and classifies a difference between the promoted information identified by the user as relevant and other presented promoted information, and automatically corrects weights of properties of the promoted information in a targeting model.

19. The method of claim 1, wherein the portion of the content in the step (b) is pre-defined as a link to another part of the content, and the visual presentation in step (i) also includes a visual preview of the content to which the link points.

20. The method of claim 19, wherein the preview is generated substantially in real-time by a predefined algorithm, or is already available as an image in storage or through an API call.

21. The method of claim 1, wherein the user is provided a mechanism for feedback about quality of presented promoted information.

22. The method of claim 21, wherein different thresholds with automatic actions for negative and positive feedback are defined, and wherein the thresholds are used to notify the information promoter to stop presenting the promoted information.

23. The method of claim 1, wherein the action is any of: an event registered by an infrared camera, an event registered by an infrared sensor, an event registered by a video camera, an event registered by a microphone, an event registered by a motion sensor, an event registered by a touch sensor, an event registered by a kinesthetic detector, or an event registered by an eye-movement detector, which permits detection of the portion of the content selected by the user and

notification of the system regarding the selection.

24. A system for promoted information targeting, the system comprising: a promoted information server for storing promoted information and for targeting and distributing it to touch screen devices;

wherein the promoted information server is connected to a media server that provides media content to the touch screen devices;

wherein a plurality of promoted information libraries interface between the media server and the promoted information server; wherein the promoted information libraries and promoted information plug-ins analyze the media server content and select promoted information from the promoted information server based on the content analysis; and

wherein the promoted information libraries provide the selected promoted information to the touch screen devices.

25. The system of claim 24, wherein an additional server is used for processing content to generate metadata and for accessing pre-generated metadata associated with the content.

26. The system of claim 24, wherein the selected promoted information is provided to the touch screen devices.

27. The system of claim 24, wherein the promoted information libraries analyze the media content and generate metadata using Natural Language Processing.

28. The system of claim 24, wherein the user is provided a mechanism for feedback about relevant and irrelevant promoted information.

29. The system of claim 28, wherein the system automatically recognizes and classifies a difference between a promoted information identified by the user as relevant and other presented promoted information and automatically corrects weights of properties of the promoted information in a targeting model.

30. The system of claim 24, wherein the user is provided a mechanism for feedback about quality of presented promoted information.

31. The system of claim 30, wherein the feedback is used for future targeting of promoted information.

32. The system of claim 30, wherein different thresholds with automatic actions for negative and positive feedback are defined, and wherein the thresholds are used to notify the information promoter to stop presenting the promoted information.

Description:
METHOD AND SYSTEM FOR CONTENT-BASED TARGETING OF

PROMOTED INFORMATION

The present invention relates to touch screen devices and, more particularly, to targeting promoted information based on user goals on touch screen devices.

A wide range of hyperlinks and active elements are used for web site navigation. These elements allow a user to move from one page to another. While hyperlinks and active elements can be found on almost all web pages, statistics show that about 80% of hyperlinks never get clicked on by users. In other words, most of the effort and time spent by web developers on mapping the pages is wasted.

Furthermore, from a user perspective, conventional hyperlinks and other elements used for site navigation appear to be "blind" (i.e., a user can never really know what he will encounter after clicking on the link). Despite the fact that there are many libraries for preview generation, only a few web sites implement some (usually minimal) previews displaying what is behind a particular link. However, this can only be done for pages within the site. All external links remain blind for the user. This problem becomes even more critical for mobile and touch screen devices, where opening a link takes more time.

Modern search engines such as Google™ and Yahoo™ can build detailed previews, but they work only at the first level of propagation. In other words, once the user selects a particular site with the links, the user cannot know what is hidden behind these links.

Another issue is the use of ads on mobile devices. Google shows ads in search results: when a user enters search terms, Google shows ads along with the results. Other ads are based on page content, user data and geolocation. These ads are ineffective because they do not, by and large, target the user's goals (or desires). For example, neither the page content viewed by the user nor a detailed user profile can help determine that the user is hungry at that moment or that he wants to go on vacation.

Furthermore, the ad banners take up screen space and annoy users. Media and news resources have been losing user revenues. The content may, at times, correspond to some goals and wishes of their users, but users then go to Google to search for details related to the content. Thus, the advertisement revenues go to Google or another search engine - not to the news resource. Monetization of the content on mobile devices is at an even lower level.

Accordingly, a system that provides previews for all the links is desired. Such a system incurs additional costs that can be offset by effective targeted advertisement. Thus, a system that provides previews for all the links in a cost-effective way is needed.

The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.

In the drawings: FIG. 1 illustrates a web page link with an auto-generated preview and an ad banner, in accordance with the exemplary embodiment;

FIG. 2 illustrates content processing using Natural Language Processing (NLP) technology, in accordance with the exemplary embodiment;

FIG. 3 illustrates an active control with ad banners, in accordance with the exemplary embodiment;

FIG. 4 illustrates a system architecture, in accordance with the exemplary embodiment;

FIG. 5 illustrates a flow chart of a method for targeted advertising, in accordance with the exemplary embodiment;

FIG. 6 illustrates how associations and user wishes depend on the user goal directed to a particular product or service;

FIG. 7 illustrates a flow chart of a method for precisely targeted advertising, in accordance with the exemplary embodiment;

FIG. 8 is a block diagram of an exemplary mobile device that can be used in the invention;

FIG. 9 is a block diagram of an exemplary implementation of the mobile device; and

FIG. 10 illustrates a schematic of an exemplary computer system that can be used for implementation of the invention.

According to the exemplary embodiment, a method and system are provided for targeting promoted information (a promoted item, for example, a portion of an image, a video, an animation, text, a 3D model, any other visual presentation, or an advertisement) on touch screen devices (e.g., mobile devices, smartphones, desktops, netbooks, laptops, TV sets, game consoles, etc.).

In one aspect, a visual page preview is auto-generated based on preview templates that are selected for a particular page design. For example, a page describing a product contains a large picture of the product, while a page with an article contains a large header (title) of the article. These particular properties of the pages are used for auto-generation of a detailed informative preview. Based on the page content, more important portions of the content are emphasized and less important ones are omitted in the preview.

According to the exemplary embodiment, a preview template can be manually generated (by a developer) as a part of the web page design. Alternatively, the preview template is generated automatically. The web page areas (text, fonts, pictures etc.) are analyzed and the most important parts are determined. The pre-defined preview layout template describing the most important parts is generated.

According to the exemplary embodiment, a web page visual preview can display a visual presentation based on templates for promoted items, for example, ad banner(s), product or service descriptions, product ratings, reviews, etc. In other words, the web page preview contains revenue-generating data. An exemplary web page link with an auto-generated preview and an ad banner is depicted in FIG. 1. According to the exemplary embodiment, a preview can be pre-defined based on page properties (page type, header, product, screen size, device type, etc.). The web page can have a special built-in tag defining what type of preview needs to be generated for the web page. For example, the tag can indicate a "picture + header," "header-only," or "picture-only" preview type.
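The tag-to-template mapping described above might be sketched as follows; the tag values come from the text, but the mapping structure, function name and default are hypothetical illustrations:

```python
# Sketch of selecting preview contents from the page's built-in preview-type
# tag. The template definitions and the fallback default are assumptions,
# not part of the patent text.

PREVIEW_TEMPLATES = {
    "picture + header": ["image", "title"],
    "header-only": ["title"],
    "picture-only": ["image"],
}

def select_preview_parts(preview_tag, default="picture + header"):
    """Return the page elements to include in the auto-generated preview."""
    # Unknown or missing tags fall back to the richest default template.
    return PREVIEW_TEMPLATES.get(preview_tag, PREVIEW_TEMPLATES[default])
```

A page tagged "header-only" would thus yield a title-only preview, while an untagged page would get the full "picture + header" layout.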

The preview type is used by the preview generation function. When a user clicks on a hyperlink, the system calls a special JavaScript function which generates the preview and ad banners for the page referred to by the hyperlink. The preview generating function uses preview and ad banner generation parameters, such as size, screen position, auto-rotation, device type, browser, etc. A critical parameter is the preview type.

According to the exemplary embodiment, the system can display one or more ad banners connected to the context of the page referred to by the link. Additionally, the ad banners can be selected based on user gender, geolocation data, user history, media site statistics, site section, page metadata, etc. Thus, page previews are auto-generated for all the links on the web page. The context-based banner ads are automatically generated and attached to the page previews. The page code is scanned, and an Application Programming Interface (API) function call with pre-defined parameters is added for each hyperlink or other type of active control that initiates a transition to different content.

Those skilled in the art will appreciate that the proposed page preview generation method does not require additional memory space, since the preview is generated on-the-fly. The page content itself is untouched: when the API is activated, the page does not change its structure or appearance. The ad banners and the previews are not visible on the original page until activated by the user pointing to the hyperlink. The ad banners and the previews appear on top of the web page in a separate window and disappear as soon as the hyperlink is clicked on.

Since the previews and the ads are auto-generated, more ads can be rendered to the users. Each time a user sees a preview, he can see a different ad banner. The hyperlinks become more attractive to the users, because the hyperlinks are not "blind" and give a user a glimpse of what is behind the hyperlink without actually moving from the current page. An interesting preview attracts user attention to the ad banner as well. This is particularly beneficial for mobile and touch screen devices, where users are often reluctant to click (or otherwise select the content, e.g., by using a joystick, a trackball, etc.) on ad links due to bandwidth constraints.

Users can notify the system about their interest in a particular part of the content using, for example: an infrared camera, an infrared sensor or a video camera, where the sensor (a Kinect-type integrated or external device) replaces the physical touch screen, registers the user's gestures and associates them with the context on the display; a microphone, e.g., a Siri-like model (a sensor with a background voice-recognition module, which can be integrated or external), where the user indicates a portion of the context and his desire by voice command; a kinesthetic detector, where the user uses a joystick, a mouse or wearable sensors (special gloves); or an eye-movement detector, where the user uses a device such as Google Glass or a special camera that can recognize, from the position of the eyes, the portion of the context the user is looking at, together with a command (e.g., a double-blink) to activate the solution.

According to the exemplary embodiment, the preview is auto-generated for all the links located on a web page. A generation engine scans the web pages, recognizes the hyperlinks and adds a special code for each hyperlink. The code calls a special API function. Thus, the original web page code is automatically modified. According to the exemplary embodiment, the form of a preview can be defined by the web page designer.

According to another exemplary embodiment, on-demand content-based dynamic advertising is implemented. Any type of content (e.g., video, audio, text, etc.) has a tendency to invoke or increase the desires of people viewing the content, so a direct correlation exists between the content and the desires invoked by it. According to the exemplary embodiment, this correlation is used for targeted advertising. Any portion of the content (context) that creates certain desires and associations can be used for targeting a user.

Note that when an advertisement is relevant in terms of user desires and expectations, it is no longer viewed as an advertisement by the user. Instead, the user accepts the targeted advertisement as useful information. According to the exemplary embodiment, a user is given the opportunity to explicitly indicate a portion of the content (e.g., a paragraph within the text or the name of a product) which sparks some interest (or desire). A set of user goals (wishes) is implemented as a set of metadata that is auto-generated for this content. The system determines the most appropriate metadata for a particular user. The entire content is analyzed. When the user selects a small portion of the content by clicking on it, metadata reflecting this portion is generated and used for targeting the user with advertisement. To do this, the system recognizes parts of sentences, parts of speech, the meaning of words, etc., using Natural Language Processing (NLP) techniques.
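As a minimal sketch of the selection-to-metadata step, simple keyword filtering can stand in for the NLP pipeline described above; a real system would use POS tagging and named-entity recognition, and the stopword list and function name here are illustrative:

```python
# Minimal sketch: turn a user-selected content portion into candidate goal
# keywords. Stands in for a full NLP pipeline; the stopword inventory is an
# assumption for illustration only.

STOPWORDS = {"the", "a", "an", "of", "and", "but", "there", "are", "is",
             "with", "in", "you", "can", "may", "not", "have", "many"}

def extract_goal_metadata(selected_text):
    """Return candidate goal keywords from the selected text portion."""
    words = [w.strip(".,()\"'").lower() for w in selected_text.split()]
    return [w for w in words if w and w not in STOPWORDS]
```

For instance, a clicked phrase "there are many pleasant cafes" would reduce to the candidate goal words "pleasant" and "cafes".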

NLP processing can take significant time depending on the size of the content, and delays can affect the usability of the system. Therefore, the metadata of the content can be pre-generated and cached for subsequent use.
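The pre-generate-and-cache idea can be sketched with an in-process memoization cache; a deployed system would more likely use a shared cache, and the placeholder analysis step below is purely illustrative:

```python
# Sketch of caching per-portion metadata so the expensive NLP step runs at
# most once per content portion. functools.lru_cache provides simple
# in-process memoization; the body is a placeholder for real NLP analysis.

from functools import lru_cache

@lru_cache(maxsize=4096)
def metadata_for_portion(portion_text):
    """Compute (or fetch cached) metadata for one content portion."""
    # Placeholder for the slow NLP step: unique, sorted tokens.
    return tuple(sorted(set(portion_text.lower().split())))
```

Repeated selections of the same portion then hit the cache instead of re-running the analysis.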

Thus, a user is shown highly relevant ads on-demand, where ads are provided in response to a direct request from a customer (as opposed to a compulsory mode) and presented as recommendations rather than as plain ads. These ads are more effective than randomly displayed persistent ads or blinking banners. In other words, the user points to a particular portion of the content and explicitly indicates what he wants to see. This has advantages over Google™ ads, which require a user to switch to Google™ search and enter a search string. This is particularly critical on mobile devices.

Those skilled in the art will appreciate that on-demand ads are advantageous for mobile devices having a small screen size. Constantly hanging ad banners take up screen space and require modification of the original content. According to the exemplary embodiment, the context (i.e., an object, such as a sentence or a portion of a sentence, a portion of an image, a paragraph of an article, a video fragment, an image, a 3D model, etc.) indicated by the user is used for advertising, instead of a single search term taken out of context.

Thus, according to the exemplary embodiment, non-relevant ads are eliminated entirely. The ads are displayed on-demand, instead of the banners that take up the screen space. The ads are displayed according to a portion of the content explicitly selected by a user. Thus, the ads display only relevant information based on the portion of the content as opposed to some general ads based on the entire content.

Knowing the set of wishes the content invokes, ads can be generated and displayed. For example: hotel - booking, reviews; restaurant - menu, reserving a table; product - purchase, reviews; etc. According to the exemplary embodiment, a list of key objects and their categories is auto-generated using Natural Language Processing technology. For example, if an article discusses a ski resort and a user has clicked on a portion describing one of the resort hotels, there is a high probability that the user wants to know more about this hotel, book a room there, buy lift tickets, etc. However, if the user clicks on new skis, he most likely wants to see some reviews and find out where he can buy or rent them. To implement this scenario, the system uses a pre-defined association model, which defines a list of possible actions for each type of object. The association model evolves based on user feedback (i.e., user actions). The association model can be adjusted based on collected statistics and market research.
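The pre-defined association model can be sketched as a simple mapping from object category to likely actions; the categories and actions are the examples from the text, and the data structure itself is an illustrative assumption:

```python
# Sketch of the association model: each object category maps to a list of
# possible user actions, as in the hotel/restaurant/product examples above.
# The dictionary layout is an assumption for illustration.

ASSOCIATION_MODEL = {
    "hotel": ["book a room", "read reviews"],
    "restaurant": ["view menu", "reserve a table"],
    "product": ["purchase", "read reviews"],
}

def possible_actions(object_category):
    """Return the list of actions associated with an object category."""
    return ASSOCIATION_MODEL.get(object_category, [])
```

Feedback-driven evolution of the model would then amount to re-weighting or extending these per-category action lists over time.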

In the case when a user-selected portion of the content is complex and contains multiple potential goals and wishes, the ad rendering process is performed in two steps. In the first step, the system specifies the user's wishes: the user is given a list (words and pictograms) of possible goals, is asked what is of most interest to him at the moment, and selects one or more goals. In the second step, the ads relevant to the selected user goals are generated and rendered to the user.

Note that the NLP technology allows for automated analysis of the portion of the text (selected by the user) and for recognition of the meaning carried by the text. Additionally, the NLP recognizes objects of basic types (an organization, a person, a geographic object, a number/value, money/currency, a percentage, a company, a date, etc., with the list being expandable) and uses metadata and additional targeting data (for example, time, geographic location data, personal data, organizations, etc.). The above process is depicted in FIG. 2.

According to the exemplary embodiment, the content and the associated metadata are used, based on an explicit user choice, instead of a search request. The metadata is a set of formalized features extracted from the content and denoting the meaning of the content. Metadata is presented in a machine-readable form and can include keywords, bigrams, n-grams, recognized named entities, implied categories, as well as author-defined categories and tags with associated distribution frequencies - all of which can be used for determining the relevancy of advertisements.
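The keyword and bigram features with distribution frequencies might be computed as below; real metadata would add named entities and categories, and the function name is illustrative:

```python
# Sketch of building machine-readable metadata: keyword and bigram counts
# with their frequencies, using the standard-library Counter. Named entities
# and implied categories, also mentioned in the text, are omitted here.

from collections import Counter

def build_metadata(text):
    """Return keyword and bigram frequency tables for a piece of content."""
    tokens = text.lower().split()
    keywords = Counter(tokens)                     # unigram frequencies
    bigrams = Counter(zip(tokens, tokens[1:]))     # adjacent-pair frequencies
    return {"keywords": keywords, "bigrams": bigrams}
```

Relevancy of an ad could then be estimated by comparing its own feature table against the content's, e.g. by overlap of high-frequency keywords.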

According to one exemplary embodiment, the entire screen can be converted into one active link instead of using a large number of hyperlinks, most of which are never clicked on. This avoids a constant presence of ads and banners on a small screen. Yet an unlimited number of ad banners can be rendered to users, because the banners appear on-demand and do not take up screen space.

According to the exemplary embodiment, any textual content can be used for targeted advertisements. The system can render to the users several ad banners based on a link context. A screen interface of the exemplary embodiment can use controls (panels, edit boxes, buttons, check boxes, etc.) for additional operations. The user can re-define a request by using a list of options. For example, the system can provide a button "I'm disappointed" or "Not what I am looking for," which means that the customer is not happy with the ad offering. Also, the system can provide a way to select the ad which is close (or closest) to the customer's need but does not quite satisfy him. This gives the customer the ability to evaluate the ads directly in-site with the help of "like" and "dislike" buttons, and this information can then be used as feedback for advertisers, ad targeting and customer rewards. It also provides a "gamification" effect for customers, e.g., it increases customers' engagement with the ads and involves intrinsic motivators, like a sense of autonomy and control. An active control with ad banners is depicted in FIG. 3.

According to the exemplary embodiment, on-demand targeted ads (being visible only temporarily, as opposed to compulsive persistent banners) do not require "shelf space" and do not destroy the original content. A larger number of ads can be rendered to the user, increasing the effect of the advertisement. The ads do not annoy the user (or, at least, annoy the user less than usual advertisements), because they are generated based on the user's preferences. The ads are targeted towards the particular user based on semantic context analysis, optionally without tracking of user activities.

Feedback from the content users is also provided. Customers can click on the portion of the content that somehow stimulates their feelings, which provides additional feedback about the quality of the content. Customers can tell the media resource directly "this is interesting" and "this is not interesting." The history of clicks and the analysis of the "clicked" parts of the content allow recognition of patterns, which can help make the content more attractive. An ad which receives more than a certain number of negative user feedbacks during a period of time T is automatically excluded from the presentation process, and the advertiser is notified. There can be several quality-notification thresholds with pre-defined actions for negative and positive feedback.
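The quality-threshold mechanism for feedback can be sketched as follows; the threshold values and action names are hypothetical, since the text leaves the concrete numbers and actions pre-defined but unspecified:

```python
# Sketch of threshold-driven automatic actions on ad feedback: an ad
# exceeding the negative threshold within the time window is excluded and
# the advertiser notified. Threshold values are illustrative assumptions.

def evaluate_ad(negative_count, positive_count,
                exclude_threshold=50, promote_threshold=100):
    """Return the automatic action for an ad's feedback totals in window T."""
    if negative_count > exclude_threshold:
        return "exclude_and_notify_advertiser"
    if positive_count > promote_threshold:
        return "increase_presentation_weight"
    return "keep"
```

Several such thresholds, each paired with its own action, would implement the multiple quality-notification levels the text mentions.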

According to the exemplary embodiment, a list of options for re-defining the user request is generated. For example, a user clicks on the sentence "Montevideo may not have the sultry allure of Buenos Aires, but there are many pleasant cafes you can settle in with a cortado (espresso with milk) and medialuna (croissant) and watch the world go by." In this sentence there are four potential goals: "Buenos Aires," "pleasant cafes," "cortado (espresso with milk)," and "medialuna (croissant)." All of these options are rendered to the user so that he can refine his request.
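The clarification list for such a complex selection might be produced as sketched below; in a real system the candidate phrases would come from NER and the association model rather than from the fixed inventory assumed here:

```python
# Sketch of extracting candidate goals from a clicked sentence to build the
# clarification list. The phrase inventory is a hard-coded assumption; a
# real system would derive candidates via NLP entity recognition.

GOAL_PHRASES = ["Buenos Aires", "pleasant cafes", "cortado", "medialuna"]

def candidate_goals(sentence):
    """Return the known goal phrases found in the clicked sentence."""
    lowered = sentence.lower()
    return [p for p in GOAL_PHRASES if p.lower() in lowered]
```

For the Montevideo sentence above, all four goal options would be found and offered back to the user.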

The system, in accordance with the exemplary embodiment, eliminates the hyperlinks and turns the entire content into one active link. The system uses an association model, which describes possible wishes and actions that can be triggered in the user by the content object. In other words, the association model indicates user actions if the object is of interest to the user. For example, associations can be hotel - booking a room, restaurant - reserving a table, etc.

The exemplary embodiment employs the NLP method for targeted advertisement. For example, the NLP recognizes that the selected content portion is an object - a restaurant. Then, a list of associations (possible user actions that may be performed if the user is interested in the restaurant) is generated. These actions can be booking a table, reading reviews, comparing with other places, etc. Subsequently, an ad of a company providing these services is displayed to the user.

For example, if the object is a new car, the association list can include finding a dealer, arranging for a test drive, finding a new car loan, etc. Thus, an ad offering these services is generated and rendered to the user. The system architecture is illustrated in FIG. 4. The system includes an ad server 440. The ad server 440 processes the content, and stores and selects ads. Special ads libraries 430 and 450 integrate the ad server 440 with media resources and applications.

The ads libraries 430 and 450 render the ads and process user reactions to the ads. (For example, a user can click on the banner, ignore it, request re-targeting or express his disappointment with the ads.) The ads libraries 430 and 450 are designed to work with mobile devices 460 and media sites 410. According to the exemplary embodiment, the content is stored on a remote media server 420 and is provided to media applications on mobile devices 460 or to media sites 410. System integration with the media server 420 is implemented by the ads libraries 430 and 450. Content analysis is performed on the media server 420, so the computational load on the clients 410 and 460 is minimal. According to the exemplary embodiment, the ads are stored on the ad server 440. The ad server 440 is accessed via universal management portals 470 accessible by the mobile devices 460 and by the media sites 410. After a user clicks on a portion of the content, the NLP module determines objects within the text, classifies the objects and defines the relations between the objects. As a result, user wishes (desires) are associated with sentences and paragraphs of the text. According to the statistics, up to 80% of the words are used as connecting elements that do not reflect user wishes. Thus, the objects that actually trigger user wishes are rather few in number.

Then, the association model with a matrix is created. An association matrix defines relations between objects and the user wishes or desires triggered by these objects. The association matrix also defines the boundaries of these relations. In other words, relations between an object-desire pair and application constraints for this pair are defined. For example, gender, time of year, public events, etc. are important constraints on people's behaviors and desires. Note that the associations are based on a variety of factors and mainly on the type of the object. For example, a hotel object has one set of associations, a restaurant object has another set of associations, a car object has yet another association set, and so on. The association matrix inherits some of the principles from the "interest graph" concept and extends them with several new options. For example, Amy Jo Kim's classification of game player behaviors (compete, explore, collaborate, express) can be used to recognize the words in the content which are associated with each type of behavior (e.g., the words "design, create, build" are related to the "express" mood) and to associate the mood of a user with the services and activities suitable for such a mood. Maslow's pyramid can be used for prioritizing the desires for presentation. Also, relationships among service/product ad categories (based on the statistical popularity of classifications in resources like Pinterest) can be used. Classifications of extrinsic motivators (status, access, power, stuff) and intrinsic motivators (competence, autonomy, relatedness) can also be applied to the content and ads classification.
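An association matrix carrying object-desire pairs together with their application constraints might look like this sketch; the entries, constraint keys and profile fields are illustrative assumptions:

```python
# Sketch of the association matrix: each (object type, desire) pair carries
# applicability constraints such as gender or season. All entries are
# illustrative; a real matrix would be built from sociological data.

ASSOCIATION_MATRIX = {
    ("ski_resort", "book_trip"): {"season": {"winter"}},
    ("cosmetics", "purchase"): {"gender": {"female"}},
    ("restaurant", "reserve_table"): {},  # no constraints on this pair
}

def applicable_desires(object_type, user_profile):
    """Return desires whose constraints all match the given user profile."""
    result = []
    for (obj, desire), constraints in ASSOCIATION_MATRIX.items():
        if obj != object_type:
            continue
        if all(user_profile.get(key) in allowed
               for key, allowed in constraints.items()):
            result.append(desire)
    return result
```

A ski-resort object would thus trigger the "book a trip" desire only for a user profile whose season constraint matches winter.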

As a result of the association, a list of advertisement goals is produced. The association model is formed based on sociological and statistical data. The association model is constantly updated in the process of system deployment. Since the content defines and stimulates users' wishes (desires), the content can be analyzed by the NLP only once. Then, a universal model of user wishes can be created by applying the NLP to the association matrix. The ads can be classified according to their relationship to the user's desires, rather than by product/service classification. In addition to the key words (e.g., a brand, product or service name), each object in the content can be associated with several possible activities that can be performed with the object, i.e., "buy", "booking", "order", "listen", "watch," etc. Also, each object can be associated with some kind of desire, e.g., "safety", "hunger", "thirst", "curiosity", "love," etc. There can also be key engagement words in the content, e.g., "build", "win", "like", "collect," etc. This can help define the current state of the customer and better target the ads.
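Classifying an ad by the desires and activities it addresses, rather than by product category, could be sketched with simple keyword sets. The keyword lists below are illustrative assumptions, not the embodiment's actual vocabulary.

```python
# Sketch of classifying an ad by the desires and activities it addresses,
# instead of by product/service category. Keyword sets are illustrative only.

DESIRE_KEYWORDS = {
    "hunger":    {"meal", "cuisine", "menu", "dinner"},
    "curiosity": {"discover", "explore", "tour"},
}
ACTIVITY_KEYWORDS = {
    "booking": {"reserve", "reservation", "book"},
    "buy":     {"buy", "order", "purchase"},
}

def classify_ad(ad_text):
    """Return (desires, activities) the ad text appears to address."""
    words = {w.strip(".,!").lower() for w in ad_text.split()}
    desires = sorted(d for d, kw in DESIRE_KEYWORDS.items() if words & kw)
    activities = sorted(a for a, kw in ACTIVITY_KEYWORDS.items() if words & kw)
    return desires, activities

desires, activities = classify_ad("Reserve a table and discover French cuisine!")
# → (["curiosity", "hunger"], ["booking"])
```

An ad classified this way can be matched directly against the goal list produced from the content, without the content and the ad having to share a product taxonomy.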

According to the exemplary embodiment, the relevant data for each user needs to be determined by applying the available user data (geolocation data, personal data, time zone, user device type, etc.). Thus, once the model is created for a particular content, it can be used for all users. This saves a lot of advertising resources. According to the exemplary embodiment, a user can re-define his choice at any time. User involvement in receiving the ads removes the negative effect of conventional advertisement forced upon a user. The number of clarifying questions posed to a user is minimized and serves only the purpose of refining the user's wishes and goals. According to the exemplary embodiment, dead-end ad chains are eliminated using the data related to the particular user (geolocation data, personal data, time zone, user device type, etc.). An ad is considered to be a dead end if its application conflicts with at least one known user parameter, such as, for example, age, gender, location, etc. For example, this avoids rendering ads for female cosmetics products to a male user. To make the results more relevant and less annoying for customers, "minus" filters can be used, which exclude some results based on geographical location, time and gender data. The objective is to recognize routine, typical behavior (roots) for a user and exclude from the results the obvious objects the user knows about. For example, for typical behavior during work hours, presenting even relevant advertising for local restaurants around the user can be avoided, and only special offers (if they exist) can be presented. During non-working hours, more shopping/fun/entertainment advertising can be presented than during work hours. If it is recognized that a user is on holiday or is not at home, an emphasis can be placed on local objects that have a high rating among tourists.
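The dead-end and "minus" filtering could be sketched as a predicate over ad and user records. The field names and the specific rules (gender match, city match, work-hours restaurant suppression) are assumptions chosen to mirror the examples in the text, not a definitive implementation.

```python
# Sketch of dead-end / "minus" filtering: an ad is a dead end if it
# conflicts with any known user parameter. Fields and rules are
# illustrative assumptions modeled on the examples above.

def is_dead_end(ad, user):
    # Gender conflict (e.g., female cosmetics ad shown to a male user).
    if ad.get("target_gender") and ad["target_gender"] != user.get("gender"):
        return True
    # Geographic conflict.
    if ad.get("city") and ad["city"] != user.get("city"):
        return True
    # "Minus" filter for routine behavior: during work hours, suppress
    # ordinary local-restaurant ads unless the ad carries a special offer.
    if (ad.get("category") == "restaurant" and user.get("working_hours")
            and not ad.get("special_offer")):
        return True
    return False

ads = [
    {"id": 1, "target_gender": "female", "category": "cosmetics"},
    {"id": 2, "category": "restaurant", "special_offer": True},
    {"id": 3, "category": "restaurant"},
]
user = {"gender": "male", "working_hours": True}
kept = [ad["id"] for ad in ads if not is_dead_end(ad, user)]  # → [2]
```

Note that the filter needs only coarse parameters (gender, city, time of day), consistent with the statement that in-depth personal tracking is not required.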

FIG. 5 illustrates a flow chart of a method for targeted advertising, in accordance with the exemplary embodiment. A media resource adds new content in step 510. The content and metadata are uploaded into the ad server in step 520. The content is analyzed by the NLP for determining user goals (wishes) in step 530. Then, if a user click on text within the page is detected in step 540, the portion of the content and the related metadata (i.e., user goals) are identified in step 550. If a user click on a hyperlink is detected in step 540, a preview of the page referenced by the hyperlink is generated in step 555.

If, in step 560, more than one potential user goal is identified, the potential goals are displayed to the user to select one in step 565. Otherwise, an ad banner is selected in step 570. The ad banner is displayed in step 575 and the process ends in step 595. After the page preview is created in step 555, additional ad banner(s) is selected in step 580. Subsequently, the preview with the ad banner(s) is displayed to the user in step 590. Then, the process ends in step 595.
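The branching in FIG. 5 could be condensed into a small dispatcher. This is a structural sketch only; the helper callback and return shapes are stand-ins for the server-side steps, not the embodiment's actual interfaces.

```python
# Condensed sketch of the FIG. 5 flow: a click on text yields goal-based
# ad selection (with user disambiguation when several goals match, per
# steps 560/565); a click on a hyperlink yields a page preview with its
# own ad banner (steps 555/580/590). Names are illustrative stand-ins.

def handle_click(click_type, goals=None, ask_user=None):
    if click_type == "hyperlink":
        # Steps 555, 580, 590: generate preview and attach banner(s).
        return {"preview": True, "banner": "preview-ad"}
    goals = goals or []
    if len(goals) > 1:          # step 560: ambiguous goals
        goal = ask_user(goals)  # step 565: let the user pick one
    elif goals:
        goal = goals[0]
    else:
        return None             # nothing to target
    # Steps 570, 575: select and display the ad banner for the goal.
    return {"preview": False, "banner": f"ad-for-{goal}"}

result = handle_click("text", goals=["meal", "music"],
                      ask_user=lambda gs: gs[0])
# → {"preview": False, "banner": "ad-for-meal"}
```

The key design point the sketch preserves is that disambiguation happens before ad selection, so the banner is chosen for a single confirmed goal rather than for a mixture.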

Note that in an initial phase of the process, the area selected by the user is determined. The system compensates for user mistakes caused by incorrect finger position on a small screen by approximating the point of touch with several "fictional" touch points with a random shift [0-X]. According to the exemplary embodiment, the minimal target area is a sentence. If a finger touch points to more than one sentence, the system automatically includes the entire paragraph. If the touch occurs between paragraphs, both paragraphs are included for NLP analysis. In the case of tablets with larger screens, or where the text is displayed at high magnification, the position within a sentence is also taken into consideration.
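The touch-target resolution above could be sketched as follows. The geometry model (a callback mapping a point to a paragraph/sentence pair) is a deliberate simplification; the shift distribution and point count are assumptions.

```python
# Sketch of touch-target resolution: the reported touch point is expanded
# with several "fictional" points under a random shift bounded by X, and
# the minimal target grows from a sentence to whole paragraph(s) when the
# points straddle sentence or paragraph boundaries. Layout model assumed.
import random

def fictional_points(x, y, max_shift, n=4, seed=0):
    rng = random.Random(seed)  # seeded only to keep this sketch reproducible
    return [(x + rng.uniform(-max_shift, max_shift),
             y + rng.uniform(-max_shift, max_shift)) for _ in range(n)]

def select_target(points, sentence_of):
    """sentence_of maps a point to a (paragraph_id, sentence_id) pair."""
    hits = {sentence_of(p) for p in points}
    if len(hits) == 1:
        return ("sentence", hits.pop())      # minimal target: one sentence
    # More than one sentence touched: include the whole paragraph(s);
    # a touch between paragraphs includes both paragraphs.
    return ("paragraphs", sorted({para for para, _ in hits}))

pts = fictional_points(100.0, 50.0, max_shift=10.0)
# Hypothetical layout: y < 55 falls in paragraph 0, otherwise paragraph 1;
# sentences are 20-pixel bands.
target = select_target(pts, lambda p: (0 if p[1] < 55 else 1, int(p[1] // 20)))
```

Because the selection unit only ever grows (sentence, then paragraph, then both paragraphs), an imprecise touch errs toward giving the NLP more context rather than the wrong context.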

According to the exemplary embodiment, the media resource-related and content-related data (resource specialization, the resource directory where the content is presented, user behavior history at the resource, etc.) are used for precise targeting. For example, some special directories attract a special audience, e.g., "financial markets", "banking news", "travel", "tech news", "business schools", "small business", "hardware", "design", "gardening", "real estate", "automobiles", which is used in targeting. Note that in-depth information on user personal data or tracking of user actions is not needed for more relevant advertisement.

FIG. 6 illustrates how associations and user wishes depend on the user goal directed to a particular product or service. If a user intends to use a product himself, the user employs one decision process. However, if the user is interested in the product (or service) for somebody else, he employs a different decision process. Unlike traditional "gender-based targeting" (e.g., not showing women's clothes or cosmetics to a male user, and vice versa), the present invention can use a previous history of ad-visiting that does not necessarily reflect the personal interests of the user himself (the user could be looking for a present for a friend and not be interested in the item for himself; in other words, once he makes the purchase and gives the present, he won't necessarily have any interest in this or similar products).

FIG. 7 illustrates a flow chart of a method for precisely targeted advertising, in accordance with the exemplary embodiment. In step 710, the portion of the content selected by the user is recognized. The metadata (goals) associated with the selected portion of the content is determined in step 720. Then, the overall content metadata is applied in step 730. This metadata can be the context of the content. In step 740, additional restrictions (gender, site, device type, etc.) are applied.

If more than one potential user goal is detected in step 750, the possible goals are displayed to the user to select one in step 760, and the process moves to step 770. Otherwise, the association model (matrix) is applied to the goal in step 770. Subsequently, the restrictions (geolocation data, gender, time zone, user device type, website, etc.) are applied to the user goal in step 780. Then, the most relevant ad(s) is found in step 790. For example, when a person has clicked on a sentence with a description of a restaurant and a music group that is playing in that restaurant that night, in the first step the algorithm has to select between "meal" and "music." If the customer selects "meal" and it turns out that the restaurant has a different geographic location than the user (for example, the user is in the US and the restaurant is in France), there is no need to offer that person a booking of a table in that restaurant; rather, it makes more sense to offer him a local restaurant with the same cuisine.
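The restaurant example can be sketched as a goal-plus-restrictions lookup. The ad records and field names are illustrative; the point of the sketch is that the geographic restriction redirects the match from the remote restaurant to a local one with the same cuisine.

```python
# Sketch of step 780/790 for the restaurant example: after the user picks
# the "meal" goal, the geographic restriction rules out the remote
# restaurant and the search falls back to a local ad with the same
# cuisine. Ad records and fields are illustrative assumptions.

ADS = [
    {"id": "fr-paris", "goal": "meal",  "cuisine": "french", "city": "Paris"},
    {"id": "fr-nyc",   "goal": "meal",  "cuisine": "french", "city": "New York"},
    {"id": "band",     "goal": "music", "city": "Paris"},
]

def most_relevant(goal, cuisine, user_city):
    # Restrict to ads matching the goal in the user's own city: there is
    # no point offering a table booking in another country.
    local = [a for a in ADS
             if a["goal"] == goal and a["city"] == user_city
             and a.get("cuisine") == cuisine]
    return local[0]["id"] if local else None

ad = most_relevant("meal", cuisine="french", user_city="New York")  # → "fr-nyc"
```

The restriction step is applied after goal selection, so the same content (restaurant plus band) yields different ads depending on which goal the user confirms and where the user is.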

FIG. 8 is a block diagram of an exemplary mobile device 59 on which the invention can be implemented. The mobile device 59 can be, for example, a personal digital assistant, a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a network base station, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these data processing devices or other data processing devices.

In some implementations, the mobile device 59 includes a touch-sensitive display 73. The touch-sensitive display 73 can implement liquid crystal display (LCD) technology, light emitting polymer display (LPD) technology, or some other display technology. The touch-sensitive display 73 can be sensitive to haptic and/or tactile contact with a user.

In some implementations, the touch-sensitive display 73 can comprise a multi-touch-sensitive display 73. A multi-touch-sensitive display 73 can, for example, process multiple simultaneous touch points, including processing data related to the pressure, degree and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers, chording, and other interactions. Other touch-sensitive display technologies can also be used, e.g., a display in which contact is made using a stylus or other pointing device.

In some implementations, the mobile device 59 can display one or more graphical user interfaces on the touch-sensitive display 73 for providing the user access to various system objects and for conveying information to the user. In some implementations, the graphical user interface can include one or more display objects 74, 76. In the example shown, the display objects 74, 76 are graphic representations of system objects. Some examples of system objects include device functions, applications, windows, files, alerts, events, or other identifiable system objects. In some implementations, the mobile device 59 can implement multiple device functionalities, such as a telephony device, as indicated by a phone object 91; an e-mail device, as indicated by the e-mail object 92; a network data communication device, as indicated by the Web object 93; a Wi-Fi base station device (not shown); and a media processing device, as indicated by the media player object 94. In some implementations, particular display objects 74, e.g., the phone object 91, the e-mail object 92, the Web object 93, and the media player object 94, can be displayed in a menu bar 95. In some implementations, device functionalities can be accessed from a top-level graphical user interface, such as the graphical user interface illustrated in the figure. Touching one of the objects 91, 92, 93 or 94 can, for example, invoke corresponding functionality.

In some implementations, the mobile device 59 can implement network distribution functionality. For example, the functionality can enable the user to take the mobile device 59 and its associated network while traveling. In particular, the mobile device 59 can extend Internet access (e.g., Wi-Fi) to other wireless devices in the vicinity. For example, mobile device 59 can be configured as a base station for one or more devices. As such, mobile device 59 can grant or deny network access to other wireless devices.

In some implementations, upon invocation of device functionality, the graphical user interface of the mobile device 59 changes, or is augmented or replaced with another user interface or user interface elements, to facilitate user access to particular functions associated with the corresponding device functionality. For example, in response to a user touching the phone object 91, the graphical user interface of the touch-sensitive display 73 may present display objects related to various phone functions; likewise, touching of the email object 92 may cause the graphical user interface to present display objects related to various e-mail functions; touching the Web object 93 may cause the graphical user interface to present display objects related to various Web-surfing functions; and touching the media player object 94 may cause the graphical user interface to present display objects related to various media processing functions.

In some implementations, the top-level graphical user interface environment or state can be restored by pressing a button 96 located near the bottom of the mobile device 59. In some implementations, each corresponding device functionality may have corresponding "home" display objects displayed on the touch-sensitive display 73, and the graphical user interface environment can be restored by pressing the "home" display object.

In some implementations, the top-level graphical user interface can include additional display objects 76, such as a short messaging service (SMS) object, a calendar object, a photos object, a camera object, a calculator object, a stocks object, a weather object, a maps object, a notes object, a clock object, an address book object, a settings object, and an app store object 97. Touching the SMS display object can, for example, invoke an SMS messaging environment and supporting functionality; likewise, each selection of a display object can invoke a corresponding object environment and functionality.

Additional and/or different display objects can also be displayed in the graphical user interface. For example, if the device 59 is functioning as a base station for other devices, one or more "connection" objects may appear in the graphical user interface to indicate the connection. In some implementations, the display objects 76 can be configured by a user, e.g., a user may specify which display objects 76 are displayed, and/or may download additional applications or other software that provides other functionalities and corresponding display objects. In some implementations, the mobile device 59 can include one or more input/output (I/O) devices and/or sensor devices. For example, a speaker 60 and a microphone 62 can be included to facilitate voice-enabled functionalities, such as phone and voice mail functions. In some implementations, an up/down button 84 for volume control of the speaker 60 and the microphone 62 can be included. The mobile device 59 can also include an on/off button 82 for a ring indicator of incoming phone calls. In some implementations, a loud speaker 64 can be included to facilitate hands-free voice functionalities, such as speaker phone functions. An audio jack 66 can also be included for use of headphones and/or a microphone.

In some implementations, a proximity sensor 68 can be included to facilitate the detection of the user positioning the mobile device 59 proximate to the user's ear and, in response, to disengage the touch-sensitive display 73 to prevent accidental function invocations. In some implementations, the touch-sensitive display 73 can be turned off to conserve additional power when the mobile device 59 is proximate to the user's ear.

Other sensors can also be used. For example, in some implementations, an ambient light sensor 70 can be utilized to facilitate adjusting the brightness of the touch-sensitive display 73. In some implementations, an accelerometer 72 can be utilized to detect movement of the mobile device 59, as indicated by the directional arrows. Accordingly, display objects and/or media can be presented according to a detected orientation, e.g., portrait or landscape. In some implementations, the mobile device 59 may include circuitry and sensors for supporting a location determining capability, such as that provided by the global positioning system (GPS) or other positioning systems (e.g., systems using Wi-Fi access points, television signals, cellular grids, Uniform Resource Locators (URLs)). In some implementations, a positioning system (e.g., a GPS receiver) can be integrated into the mobile device 59 or provided as a separate device that can be coupled to the mobile device 59 through an interface (e.g., port device 90) to provide access to location-based services.

The mobile device 59 can also include a camera lens and sensor 80. In some implementations, the camera lens and sensor 80 can be located on the back surface of the mobile device 59. The camera can capture still images and/or video.

The mobile device 59 can also include one or more wireless communication subsystems, such as an 802.11b/g communication device 86, and/or a BLUETOOTH communication device 88. Other communication protocols can also be supported, including other 802.x communication protocols (e.g., WiMax, Wi-Fi, 3G, LTE), code division multiple access (CDMA), global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), etc.

In some implementations, the port device 90, e.g., a Universal Serial Bus (USB) port, or a docking port, or some other wired port connection, is included. The port device 90 can, for example, be utilized to establish a wired connection to other computing devices, such as other communication devices 59, network access devices, a personal computer, a printer, or other processing devices capable of receiving and/or transmitting data. In some implementations, the port device 90 allows the mobile device 59 to synchronize with a host device using one or more protocols, such as, for example, TCP/IP, HTTP, UDP or any other known protocol. In some implementations, a TCP/IP over USB protocol can be used.

FIG. 9 is a block diagram 2200 of an example implementation of the mobile device 59. The mobile device 59 can include a memory interface 2202, one or more data processors, image processors and/or central processing units 2204, and a peripherals interface 2206. The memory interface 2202, the one or more processors 2204 and/or the peripherals interface 2206 can be separate components or can be integrated in one or more integrated circuits. The various components in the mobile device 59 can be coupled by one or more communication buses or signal lines.

Sensors, devices and subsystems can be coupled to the peripherals interface 2206 to facilitate multiple functionalities. For example, a motion sensor 2210, a light sensor 2212, and a proximity sensor 2214 can be coupled to the peripherals interface 2206 to facilitate the orientation, lighting and proximity functions described above. Other sensors 2216 can also be connected to the peripherals interface 2206, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functionalities.

A camera subsystem 2220 and an optical sensor 2222, e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips.

Communication functions can be facilitated through one or more wireless communication subsystems 2224, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 2224 can depend on the communication network(s) over which the mobile device 59 is intended to operate. For example, a mobile device 59 may include communication subsystems 2224 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax or LTE network, and a BLUETOOTH network. In particular, the wireless communication subsystems 2224 may include hosting protocols such that the device 59 may be configured as a base station for other wireless devices.

An audio subsystem 2226 can be coupled to a speaker 2228 and a microphone 2230 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.

The I/O subsystem 2240 can include a touch screen controller 2242 and/or other input controller(s) 2244. The touch-screen controller 2242 can be coupled to a touch screen 2246. The touch screen 2246 and touch screen controller 2242 can, for example, detect contact and movement or break thereof using any of multiple touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 2246.

The other input controller(s) 2244 can be coupled to other input/control devices 2248, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the speaker 2228 and/or the microphone 2230.

In one implementation, a pressing of the button for a first duration may disengage a lock of the touch screen 2246; and a pressing of the button for a second duration that is longer than the first duration may turn power to the mobile device 59 on or off. The user may be able to customize a functionality of one or more of the buttons. The touch screen 2246 can, for example, also be used to implement virtual or soft buttons and/or a keyboard.

In some implementations, the mobile device 59 can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, the mobile device 59 can include the functionality of an MP3 player. The mobile device 59 may, therefore, include a 32-pin connector that is compatible with the MP3 player. Other input/output and control devices can also be used.

The memory interface 2202 can be coupled to memory 2250. The memory 2250 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 2250 can store an operating system 2252, such as Darwin, RTXC, LINUX, UNIX, OS X, ANDROID, IOS, WINDOWS, or an embedded operating system such as VxWorks. The operating system 2252 may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system 2252 can be a kernel (e.g., UNIX kernel).

The memory 2250 may also store communication instructions 2254 to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. The memory 2250 may include graphical user interface instructions 2256 to facilitate graphic user interface processing including presentation, navigation, and selection within an application store; sensor processing instructions 2258 to facilitate sensor-related processing and functions; phone instructions 2260 to facilitate phone-related processes and functions; electronic messaging instructions 2262 to facilitate electronic-messaging related processes and functions; web browsing instructions 2264 to facilitate web browsing-related processes and functions; media processing instructions 2266 to facilitate media processing-related processes and functions; GPS/Navigation instructions 2268 to facilitate GPS and navigation-related processes and instructions; camera instructions 2270 to facilitate camera-related processes and functions; and/or other software instructions 2272 to facilitate other processes and functions.

Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures or modules. The memory 2250 can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device 59 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.

Those skilled in the art will appreciate that the proposed system and method allow for effective targeted advertising directed to mobile device users.

With reference to FIG. 10, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 20 (or a server, a laptop, a netbook, etc.) including a processing unit 21, a system memory 22, and a system bus 23 that couples various system components including the system memory to the processing unit 21.

The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory includes read-only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system 26 (BIOS), containing the basic routines that help transfer information between elements within the computer 20, such as during start-up, is stored in ROM 24.

The computer 20 may further include a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD-ROM, DVD-ROM or other optical media. The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the computer 20.

Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 29 and a removable optical disk 31, it should be appreciated by those skilled in the art that other types of computer readable media that can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memories (RAMs), read-only memories (ROMs) and the like may also be used in the exemplary operating environment.

A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35. The computer 20 includes a file system 36 associated with or included within the operating system 35, one or more application programs 37, 37', other program modules 38 and program data 39. A user may enter commands and information into the computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner or the like.

These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or universal serial bus (USB). A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor 47, personal computers typically include other peripheral output devices (not shown), such as speakers and printers.

The computer 20 may operate in a networked environment using logical connections to one or more remote computers 49. The remote computer (or computers) 49 may be another computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 20, although only a memory storage device 50 has been illustrated. The logical connections include a local area network (LAN) 51 and a wide area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, Intranets and the Internet.

When used in a LAN networking environment, the computer 20 is connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the computer 20 typically includes a modem 54 or other means for establishing communications over the wide area network 52, such as the Internet.

The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the computer 20, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.