


Title:
WEB CONTENT ORGANIZATION AND PRESENTATION TECHNIQUES
Document Type and Number:
WIPO Patent Application WO/2021/247256
Kind Code:
A1
Abstract:
An online system identifies allocations of both organic and promoted content on a given page. Candidate allocations of page content are compared against one another and prioritized for overall utility based on objective factors that quantify the page's "look and feel" as measured by machine learning models. The page allocations are computed on an automatic and continuous basis for each user viewing the page. In some embodiments, the page content allocations are based on the individual viewing user's stored characteristics.

Inventors:
YATES ANDREW DONALD (US)
Application Number:
PCT/US2021/033682
Publication Date:
December 09, 2021
Filing Date:
May 21, 2021
Assignee:
PROMOTED AI INC (US)
International Classes:
G06Q30/02; G06F16/9535; G06F16/9538; G06F16/958; G06Q30/00; G06Q99/00
Foreign References:
US20160358229A1 (2016-12-08)
US20200099746A1 (2020-03-26)
US20160291914A1 (2016-10-06)
US20170249658A1 (2017-08-31)
US20190158901A1 (2019-05-23)
US20170024761A1 (2017-01-26)
US20210097126A1 (2021-04-01)
Attorney, Agent or Firm:
PETROVIC, Lena (US)
Claims:
CLAIMS 1. A method to place a promoted content in a webpage comprising: sending a request for a content to an organic system and a promoted system, wherein the promoted system provides a bid for placement of the promoted content associated with the promoted system, and wherein the organic system provides an organic content without placing a bid for placement of the organic content; receiving multiple organic contents and multiple promoted contents from the organic system and the promoted system, respectively; computing a quality score for each organic content among the multiple organic contents and for each promoted content among the multiple promoted contents, wherein the quality score represents a probability that a user engages with the each organic content and the each promoted content; creating a first arrangement by allocating multiple content slots associated with the webpage to the multiple organic contents and the multiple promoted contents based on the quality score associated with the each organic content; determining a utility value associated with a content slot among the multiple content slots by determining a user experience cost between showing the organic content in the content slot and showing the promoted content in the content slot; obtaining a termination condition indicating whether a minimum spacing between the multiple content slots has been achieved; until the termination condition is satisfied, iteratively performing: creating an arrangement associated with the webpage, wherein the arrangement has different promoted content compared to a previous arrangement, wherein the different promoted content includes different spacing between promoted content contained in the arrangement and promoted content in the previous arrangement; calculating a webpage utility score associated with the arrangement based on the user experience cost; comparing the multiple webpage utility scores to obtain the highest score; and selecting a webpage having the highest score to display to the user. 2. A method comprising: sending a request for a content to an organic system and a promoted system, wherein the promoted system provides a bid for placement of a promoted content associated with the promoted system, and wherein the organic system provides an organic content without placing a bid for placement of the organic content; receiving multiple organic contents and multiple promoted contents from the organic system and the promoted system, respectively; computing a quality score for each organic content among the multiple organic contents and for each promoted content among the multiple promoted contents, wherein the quality score represents a probability that a user engages with the each organic content and the each promoted content; creating a first arrangement by allocating multiple content slots associated with a webpage to the multiple organic contents based on the quality score associated with the each organic content; and reordering the multiple content slots by moving the promoted content among the multiple promoted contents based on the quality score associated with the promoted content by: determining an auction bid indicating an increase in a probability that the user engages with the promoted content when the promoted content is reordered to a different content slot; and based on the auction bid, moving the promoted content. 3. 
The method of claim 2, comprising: obtaining a user experience cost and an impact control, wherein the user experience cost represents a difference in a user’s experience between viewing the organic content in a content slot on the webpage and viewing the promoted content in the content slot on the webpage, wherein the impact control represents an effect the difference in the user’s experience has on a webpage utility score; computing a first webpage utility score of the first arrangement based on the auction bid, the user experience cost and the impact control; creating a second arrangement by reallocating the multiple content slots to increase a promoted content load; computing a second webpage utility score of the second arrangement; comparing the first webpage utility score and the second webpage utility score; and selecting a webpage having a higher webpage utility score to display to the user. 4. The method of claim 2, comprising: determining a utility value associated with a content slot among the multiple content slots by determining a user experience cost between showing the organic content in the content slot and showing the promoted content in the content slot; obtaining a termination condition indicating whether minimum spacing between the multiple content slots has been achieved; until the termination condition is satisfied, iteratively performing: creating an arrangement associated with the webpage, wherein the arrangement has different promoted content compared to previous arrangement, wherein the different promoted content includes different spacing between promoted content contained in the arrangement and promoted content in the previous arrangement; calculating a webpage utility score associated with the arrangement based on the user experience cost; comparing the multiple webpage utility scores to obtain the highest score; and selecting a webpage having the highest score to display to the user.

5. The method of claim 2, wherein computing the quality score comprises: gathering data associated with the user comprising an interaction with a previously promoted content, wherein the interaction includes clicking the previously promoted content, sharing the previously promoted content, indicating a preference for the previously promoted content, or posting a comment regarding the previously promoted content; training a machine learning model based on the gathered data; and computing the quality score using the trained machine learning model. 6. The method of claim 2, wherein computing the quality score comprises: gathering data associated with the user comprising an interaction with a previously promoted content, wherein the interaction includes clicking the previously promoted content, sharing the previously promoted content, indicating a preference for the previously promoted content, or posting a comment regarding the previously promoted content; generating training data based on the data associated with the user by standardizing the data associated with the user; based on the training data, predicting a likelihood that a type of benefit to the promoted system occurs for a type of action performed by the user in response to being presented with the promoted content. 7. The method of claim 2, wherein computing the quality score comprises: gathering data associated with the user comprising an interaction with a previously promoted content; generating training data based on the data associated with the user by standardizing the data associated with the user; based on the training data, predicting a likelihood that a type of benefit to the promoted system occurs for a type of action performed by the user within a timeframe in response to the user being presented with the promoted content by: determining a total number of impressions of the promoted content presented to multiple users; obtaining a first probability by determining how many of the total number of impressions result in the type of action being performed by the multiple users; obtaining a second probability by determining a number of the type of action performed by the multiple users and a number of the type of benefit occurring within the timeframe; and determining the likelihood based on the first probability and the second probability. 8. The method of claim 2, wherein computing the quality score comprises: obtaining a dictionary comprising multiple topics associated with the multiple promoted contents; classifying the promoted content into a first subset of topics among the multiple topics; generating a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics; obtaining a second subset of topics among the multiple topics and a second subset of probabilities, wherein each probability in the second subset of probabilities indicates a user’s affinity toward a corresponding topic among the second subset of topics; computing the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics. 9. The method of claim 8, wherein computing the quality score based on the first measure or the second measure of similarity comprises: calculating a dot product between the first subset of probabilities and the second subset of probabilities.

10. The method of claim 8, wherein computing the quality score based on the first measure or the second measure of similarity comprises: calculating a number of overlapping topics between the first subset of topics and the second subset of topics. 11. The method of claim 2, wherein computing the quality score comprises: determining that the promoted content contains an image, a sequence of images, or an audio; determining a degradation of the promoted content; and calculating the quality score based on the degradation of the promoted content, wherein the lower the degradation, the higher the quality score. 12. The method of claim 2, wherein computing the quality score comprises: determining the promoted system providing the promoted content; and calculating the quality score based on the promoted system. 13. The method of claim 2, wherein computing the quality score comprises: determining a quality of a destination to which the promoted content may connect; and computing the quality score based on the quality of the destination. 14. The method of claim 2, wherein computing the quality score comprises: obtaining a dictionary comprising multiple topics associated with the multiple promoted contents; classifying the promoted content into a first subset of topics among the multiple topics; generating a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics; classifying the request into a second subset of topics among the multiple topics; generating a second subset of probabilities indicating a correspondence between the request and a topic in the second subset of topics; and computing the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics. 15. The method of claim 14, wherein computing the quality score based on the first measure or the second measure of similarity comprises: calculating a dot product between the first subset of probabilities and the second subset of probabilities. 16. The method of claim 14, wherein computing the quality score based on the first measure or the second measure of similarity comprises: calculating a number of overlapping topics between the first subset of topics and the second subset of topics. 17. 
A system comprising: at least one hardware processor; and at least one non-transitory memory storing instruction, which, when executed by the at least one hardware processor, causes the system to: send a request for a content to an organic system and a promoted system, wherein the promoted system provides a bid for placement of a promoted content associated with the promoted system, and wherein the organic system provides an organic content without placing a bid for placement of the organic content; receive multiple organic contents and multiple promoted contents from the organic system and the promoted system, respectively; compute a quality score for each organic content among the multiple organic contents and for each promoted content among the multiple promoted contents, wherein the quality score represents a probability that a user engages with the each organic content and the each promoted content; create a first arrangement by allocating multiple content slots associated with a webpage to the multiple organic contents based on the quality score associated with the each organic content; and reorder the multiple content slots by moving the promoted content among the multiple promoted contents based on the quality score associated with the promoted content by: determining an auction bid indicating an increase in a probability that the user engages with the promoted content when the promoted content is reordered to a different content slot; and based on the auction bid, moving the promoted content. 18. The system of claim 17, comprising the instructions to: obtain a user experience cost and an impact control, wherein the user experience cost represents a difference in a user’s experience between viewing the organic content in a content slot on the webpage and viewing the promoted content in the content slot on the webpage, wherein the impact control represents an effect the difference in the user’s experience has on a webpage utility score; compute a first webpage utility score of the first arrangement based on the auction bid, the user experience cost and the impact control; create a second arrangement by reallocating the multiple content slots to increase a promoted content load; compute a second webpage utility score of the second arrangement; compare the first webpage utility score and the second webpage utility score; and select a webpage having a higher webpage utility score to display to the user. 19. The system of claim 17, comprising the instructions to: determine a utility value associated with a content slot among the multiple content slots by determining a user experience cost between showing the organic content in the content slot and showing the promoted content in the content slot; obtain a termination condition indicating whether minimum spacing between the multiple content slots has been achieved; until the termination condition is satisfied, iteratively perform: creating an arrangement associated with the webpage, wherein the arrangement has different promoted content compared to the previous arrangement; calculating a webpage utility score associated with the arrangement; compare the multiple webpage utility scores to obtain the highest score; and select a webpage having the highest score to display to the user. 20. 
The system of claim 17, the instructions to compute the quality score comprising the instructions to: gather data associated with the user comprising an interaction with a previously promoted content, wherein the interaction includes clicking the previously promoted content, sharing the previously promoted content, indicating a preference for the previously promoted content, or posting a comment regarding the previously promoted content; train a machine learning model based on the gathered data; and compute the quality score using the trained machine learning model. 21. The system of claim 17, the instructions to compute the quality score comprising instructions to: gather data associated with the user comprising an interaction with a previously promoted content, wherein the interaction includes clicking the previously promoted content, sharing the previously promoted content, indicating a preference for the previously promoted content, or posting a comment regarding the previously promoted content; generate training data based on the data associated with the user by standardizing the data associated with the user; based on the training data, predict a likelihood that a type of benefit to the promoted system occurs for a type of action performed by the user in response to being presented with the promoted content. 22. The system of claim 17, the instructions to compute the quality score comprising instructions to: gather data associated with the user comprising an interaction with a previously promoted content; generate training data based on the data associated with the user by standardizing the data associated with the user; based on the training data, predict a likelihood that a type of benefit to the promoted system occurs for a type of action performed by the user within a timeframe in response to the user being presented with the promoted content by: determining a total number of impressions of the promoted content presented to multiple users; obtaining a first probability by determining how many of the total number of impressions result in the type of action being performed by the multiple users; obtaining a second probability by determining a number of the type of action performed by the multiple users and a number of the type of benefit occurring within the timeframe; and determining the likelihood based on the first probability and the second probability. 23. The system of claim 17, the instructions to compute the quality score comprising the instructions to: obtain a dictionary comprising multiple topics associated with the multiple promoted contents; classify the promoted content into a first subset of topics among the multiple topics; generate a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics; obtain a second subset of topics among the multiple topics and a second subset of probabilities, wherein each probability in the second subset of probabilities indicates a user’s affinity toward a corresponding topic among the second subset of topics; compute the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics. 24. 
The system of claim 23, the instructions to compute the quality score based on the first measure or the second measure of similarity comprising the instructions to: calculate a dot product between the first subset of probabilities and the second subset of probabilities. 25. The system of claim 23, the instructions to compute the quality score based on the first measure or the second measure of similarity comprising the instructions to: calculate a number of overlapping topics between the first subset of topics and the second subset of topics. 26. The system of claim 17, the instructions to compute the quality score comprising the instructions to: determine that the promoted content contains an image, a sequence of images, or an audio; determine a degradation of the promoted content; and calculate the quality score based on the degradation of the promoted content, wherein the lower the degradation, the higher the quality score. 27. The system of claim 17, the instructions to compute the quality score comprising the instructions to: determine the promoted system providing the promoted content; and calculate the quality score based on the promoted system.

28. The system of claim 17, the instructions to compute the quality score comprising the instructions to: determine a quality of a destination to which the promoted content may connect; and compute the quality score based on the quality of the destination. 29. The system of claim 17, the instructions to compute the quality score comprising the instructions to: obtain a dictionary comprising multiple topics associated with the multiple promoted contents; classify the promoted content into a first subset of topics among the multiple topics; generate a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics; classify the request into a second subset of topics among the multiple topics; generate a second subset of probabilities indicating a correspondence between the request and a topic in the second subset of topics; and compute the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics.
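For illustration only, the arrangement-scoring loop recited in claims 1-4 can be sketched in a few lines of Python. This is a hypothetical, simplified rendering rather than the claimed implementation: the names (ContentItem, webpage_utility, best_arrangement), the particular utility formula, and the spacing-based termination rule are assumptions introduced to make the flow concrete.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class ContentItem:
    item_id: str
    quality: float         # position-independent quality score (cf. claim 2)
    bid: float = 0.0       # auction bid; organic items carry no bid
    promoted: bool = False

def webpage_utility(arrangement, slot_weights, impact_control=1.0):
    """Score one arrangement: engagement value plus promoter bids, discounted
    by the user experience cost of showing a promoted item in a slot that an
    organic item would otherwise occupy (cf. claim 3)."""
    organic = sorted((i for i in arrangement if not i.promoted),
                     key=lambda i: i.quality, reverse=True)
    score = 0.0
    for slot, item in enumerate(arrangement):
        weight = slot_weights[slot]            # more prominent slots weigh more
        score += weight * item.quality
        if item.promoted:
            displaced = organic[slot].quality if slot < len(organic) else 0.0
            ux_cost = max(displaced - item.quality, 0.0)
            score += item.bid - impact_control * weight * ux_cost
    return score

def best_arrangement(organic_items, promoted_items, slot_weights, min_spacing=2):
    """Build arrangements with progressively larger spacing between promoted
    items and keep the highest-scoring one (cf. claims 1 and 4, simplified)."""
    organic_sorted = sorted(organic_items, key=lambda i: i.quality, reverse=True)
    best, best_score = None, float("-inf")
    for spacing in count(min_spacing):
        org, promo, arrangement = iter(organic_sorted), iter(promoted_items), []
        for slot in range(len(slot_weights)):
            source = promo if slot % spacing == spacing - 1 else org
            item = next(source, None) or next(org, None)
            if item is not None:
                arrangement.append(item)
        score = webpage_utility(arrangement, slot_weights)
        if score > best_score:
            best, best_score = arrangement, score
        if spacing >= len(slot_weights):       # spacing can grow no further
            return best
```

Under these assumptions, the arrangement returned is the one whose combined engagement value and promoter bids best outweigh the user experience cost of the organic content it displaces.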

Description:
WEB CONTENT ORGANIZATION AND PRESENTATION TECHNIQUES CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to the U.S. provisional Patent Application Number 63/034,894 filed June 4, 2020, and U.S. Patent Application No. 17/323,146 filed May 18, 2021, both of which are incorporated herein by this reference in their entirety. [0002] TECHNICAL FIELD [0003] The disclosure relates to content presentation and user interfaces. More particularly, this disclosure relates to web content look-and-feel utility. BACKGROUND [0004] Certain online content delivery systems, such as social networking systems, allow their users to connect to and communicate with other online system users. Users may create profiles on such an online system that are tied to their identities and include information about the users, such as interests and demographic information. The users may be individuals or entities such as corporations or charities. Because of the increasing popularity of these types of online content delivery systems and the increasing amount of user-specific information maintained by such online systems, an online system provides an ideal forum for third parties to increase awareness about products or services to online system users. [0005] One illustrative example of such a system is the Amazon platform where users’ searches are populated with a series of content tiles (items for sale). The organization and presentation of the content tiles varies between organic and promoted content. It is difficult to determine balance and positioning between organically searched content and promoted content of variable relevance/utility to the search/page. BRIEF DESCRIPTION OF THE DRAWINGS [0006] FIG.1 is a high-level block diagram of a system environment for an online system, according to an embodiment. [0007] FIG. 2 is an example block diagram of an architecture of the online system 140, according to an embodiment. [0008] FIG. 3 is a block diagram illustrating an exemplary data flow for determining the value of content for available slots. [0009] FIG. 4 is a flowchart illustrating a request and answer content-to-slot ranking system. [0010] FIG. 5 is a flowchart illustrating a bidding series content-to-slot ranking system. [0011] FIG. 6 is an illustrative example of the method of Figure 5. [0012] FIG. 7 depicts a flowchart illustrating competition controls. [0013] FIG. 8 illustrates a flowchart for determining credit account of promoters. [0014] FIG. 9 is a flowchart describing an allocation heuristic for package bidding. [0015] FIG. 10 is a flowchart illustrating a method of executing a continuous auction that enables winning of multiple slots. [0016] FIG. 11 illustrates a flowchart depicting allocation based on projected realization. [0017] FIG. 12 shows a method to place promoted content in a webpage. [0018] FIG. 13 is a block diagram that illustrates an example of a computer system in which at least some operations described herein can be implemented. DETAILED DESCRIPTION [0019] Embodiments of the invention include an online system that performs component optimization of benefit computation for third-party systems. [0020] Web content comes in many forms and exists on numerous platforms. One trait that tends to exist throughout many embodiments is the inclusion of numerous slots for insertion of content. For example, content curation or recommendation tends to include a search tool, or provide recommendations based on some pre-existing user profile. 
Content recommended or returned as a search result should aspire to serve the utility of the webpage. The term “page”, i.e. “webpage”, is used to describe a venue for content distributed via the Internet. A “page” may take many forms including well understood definitions referring to one or more URL locations, or a page may additionally describe hosted video data. In some embodiments, a page is a web- accessible element that a user concentrates their attention on and which includes slots for insertable content. [0021] The slots are filled with either organic or promoted content. Additionally, some slots may have greater prominence on the page (e.g., in a grid or vertical format, those at the top that are visible without scrolling are more prominent; in a video format, inserts that are not skippable are more prominent than a message bubble that appears for a predetermined segment of time during video playback). When each potential content item has a determinable quality value, and the available slots each have a value, it becomes a problem of allocation optimization. [0022] The manner of organizing makes use of a number of factors that develop an overall quality score of given insertable content. The overall quality score is a position-independent metric of user experience in this context. Those factors may include scores for any combination of: predicted positive engagement; predicted negative engagement; quality score; matches user interest; high-quality creative; trusted content creator; negative experience score; spam, distracting; violated policy, e.g., fraction of text; creative type offsets; query relevance/contextual relevance; landing page quality; and/or creator trust. Each factor is determined objectively using measurable techniques known in the art. [0023] The overall quality score is used to estimate how much negative experience would result from showing a given item as compared to another that would otherwise be shown. Content is provided by third-party systems. The online system holds content auctions on a revolving continuous basis for each of the available slots. Each slot goes to content based on bid value and overall quality score of the content using allocation schemes. [0024] Pages are configured using a number of different promoted content loads and each is evaluated for utility. The allocation of page content prioritizes highest utility, measured in converted units that consider user experience, relation of promoted content to original user-dictated parameters, and value offered by promoters. System Architecture [0025] FIG. 1 is a high-level block diagram of a system environment 100 for an online system 140, according to an embodiment. The system environment 100 shown by FIG.1 comprises one or more client devices 110, a network 120, one or more third-party systems 130, and the online system 140. In alternative configurations, different and/or additional components may be included in the system environment 100. In one embodiment, the online system 140 is a social networking system. [0026] The client devices 110 are one or more computing devices capable of receiving user input as well as transmitting and/or receiving data via the network 120. In one embodiment, a client device 110 is a conventional computer system, such as a desktop or laptop computer. Alternatively, a client device 110 may be a device having computer functionality, such as a personal digital assistant (PDA), a mobile telephone, a smartphone, or another suitable device. 
A client device 110 is configured to communicate via the network 120. In one embodiment, a client device 110 executes an application allowing a user of the client device 110 to interact with the online system 140. For example, a client device 110 executes a browser application to enable interaction between the client device 110 and the online system 140 via the network 120. In another embodiment, a client device 110 interacts with the online system 140 through an application programming interface (API) running on a native operating system of the client device 110, such as IOS® or ANDROID™. [0027] The client devices 110 are configured to communicate via the network 120, which may comprise any combination of local area and/or wide area networks, using both wired and/or wireless communication systems. In one embodiment, the network 120 uses standard communications technologies and/or protocols. For example, the network 120 includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network 120 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network 120 may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of the network 120 may be encrypted using any suitable technique or techniques. [0028] One or more third-party systems 130, such as a promoted content provider system, may be coupled to the network 120 for communicating with the online system 140, which is further described below in conjunction with FIG. 2. In one embodiment, a third-party system 130 is an application provider communicating information describing applications for execution by a client device 110 or communicating data to client devices 110 for use by an application executing on the client device. In other embodiments, a third-party system 130 provides content or other information for presentation via a client device 110. A third-party system 130 may also communicate information to the online system 140, such as advertisements, content, or information about an application provided by the third-party system 130. [0029] FIG. 2 is an example block diagram of an architecture of the online system 140, according to an embodiment. The online system 140 shown in FIG. 2 includes a user profile store 205, a content store 210, an action logger 215, an action log 220, an edge store 225, a promoted content request store 230, a web server 235, an impressions log 240, a training data generator 245, benefits predictors 250, an attribution selector 255, and a combined bid generator 260. In other embodiments, the online system 140 may include additional, fewer, or different components for various applications. Conventional components such as network interfaces, security functions, load balancers, failover servers, management and network operations consoles, and the like are not shown so as to not obscure the details of the system architecture. [0030] Each user of the online system 140 is associated with a user profile, which is stored in the user profile store 205.
The users tracked with a user profile are those who view content on pages of the online system. The online system 140 further includes administrator users and interacts with third-party system users who may be alternatively referred to as “promoters.” A user profile includes declarative information about the user that was explicitly shared by the user and may also include profile information inferred by the online system 140. In one embodiment, a user profile includes multiple data fields, each describing one or more attributes of the corresponding user of the online system 140. Examples of information stored in a user profile include biographic, demographic, and other types of descriptive information, such as work experience, educational history, gender, hobbies or preferences, location, and the like. A user profile may also store other information provided by the user, for example, images or videos. In certain embodiments, images of users may be tagged with identification information of users of the online system 140 displayed in an image. A user profile in the user profile store 205 may also maintain references to actions by the corresponding user performed on content items in the content store 210 and stored in the action log 220. The user profile store 205 and the action log 220 are used in aggregate to train a machine learning model to calculate propensity for user action on the online system 140. In some embodiments, the user profile store 205 and the action log 220 are used on an individual basis as input to the trained machine learning model to calculate propensity of action for a given individual user. [0031] The content store 210 stores objects that each represent various types of content. Examples of content represented by an object include a page post, a status update, a photograph, a video, a link, a shared content item, a gaming application achievement, a check-in event at a local business, a brand page, or any other type of content. Online system users may create objects stored by the content store 210, such as status updates, photos tagged by users to be associated with other objects in the online system, events, groups, or applications. In some embodiments, objects are received from third-party systems or third-party applications separate from the online system 140. In one embodiment, objects in the content store 210 represent single pieces of content, or content “items.” [0032] The action logger 215 receives communications about user actions internal to and/or external to the online system 140, populating the action log 220 with information about user actions. Examples of actions include adding a connection to another user, sending a message to another user, uploading an image, reading a message from another user, viewing content associated with another user, or attending an event posted by another user, among others. In addition, a number of actions may involve an object and one or more particular users, so these actions are associated with those users as well and stored in the action log 220. [0033] The action log 220 may be used by the online system 140 to track user actions on the online system 140, as well as actions on third-party systems 130 that communicate information to the online system 140. Users may interact with various objects on the online system 140, and information describing these interactions is stored in the action log 220. 
Examples of interactions with objects include: clicking on content, purchasing items after clicking on content, commenting on posts, sharing links, checking in to physical locations via a mobile device, accessing content items, and any other interactions. Additional examples of interactions with objects on the online system 140 that are included in the action log 220 include: commenting on a photo album, communicating with a user, establishing a connection with an object, adding an event to a calendar, joining a group, creating an event, authorizing an application, using an application, expressing a preference for an object ("liking" the object), and engaging in a transaction. Additionally, the action log 220 may record a user's interactions with promoted content on the online system 140 as well as with other applications operating on the online system 140. In some embodiments, data from the action log 220 is used to infer interests or preferences of a user, augmenting the interests included in the user's user profile and allowing a more complete understanding of user preferences. [0034] The action log 220 may also store user actions taken on a third-party system 130, such as an external website, and communicated to the online system 140. For example, an e-commerce website that primarily sells sporting equipment at bargain prices may recognize a user of an online system 140 through a social plug-in enabling the e-commerce website to identify the user of the online system 140. Because users of the online system 140 are uniquely identifiable, e-commerce websites, such as this sporting equipment retailer, may communicate information about a user's actions outside of the online system 140 to the online system 140 for association with the user. Hence, the action log 220 may record information about actions that users perform on a third-party system 130, including webpage viewing histories, promoted content that was engaged, purchases made, and other patterns from shopping and buying. [0035] In one embodiment, an edge store 225 stores information describing connections between users and other objects on the online system 140 as edges. Some edges may be defined by users, allowing users to specify their relationships with other users. For example, users may generate edges with other users that parallel the users' real-life relationships, such as friends, co-workers, partners, and so forth. Other edges are generated when users interact with objects in the online system 140, such as expressing interest in a page on the online system, sharing a link with other users of the online system, and commenting on posts made by other users of the online system. [0036] In one embodiment, an edge may include various features each representing characteristics of interactions between users, interactions between users and objects, or interactions between objects. For example, features included in an edge describe rate of interaction between two users, how recently two users have interacted with each other, the rate or amount of information retrieved by one user about an object, or the number and types of comments posted by a user about an object. The features may also represent information describing a particular object or user. For example, a feature may represent the level of interest that a user has in a particular topic, the rate at which the user logs into the online system 140, or information describing demographic information about a user.
Each feature may be associated with a source object or user, a target object or user, and a feature value. A feature may be specified as an expression based on values describing the source object or user, the target object or user, or interactions between the source object or user and target object or user; hence, an edge may be represented as one or more feature expressions. [0037] The edge store 225 also stores information about edges, such as affinity scores for objects, interests, and other users. Affinity scores, or “affinities,” may be computed by the online system 140 over time to approximate a user’s affinity for an object, interest, and other users in the online system 140 based on the actions performed by the user. [0038] The promoted content request store 230 stores one or more promoted content requests. Promoted content is content that an entity (i.e., a promoted content provider) presents to users of an online system, which allows the promoted content provider to gain public attention for products, services, opinions, causes, or messages and to persuade online system users to take an action regarding the entity’s products, services, opinions, or causes. [0039] In one embodiment, targeting criteria may specify actions or types of connections between a user and another user or object of the online system 140. Targeting criteria may also specify interactions between a user and objects performed external to the online system 140, such as on a third-party system 130. For example, targeting criteria identifies users that have taken a particular action, such as sent a message to another user, used an application, joined a group, left a group, joined an event, generated an event description, purchased or reviewed a product or service using an online marketplace, requested information from a third-party system 130, installed an application, or performed any other suitable action. Including actions in targeting criteria allows advertisers to further refine users eligible to be presented with advertisement content from an advertisement request. As another example, targeting criteria identifies users having a connection to another user or object or having a particular type of connection to another user or object. [0040] The web server 235 links the online system 140 via the network 120 to the one or more client devices 110, as well as to the one or more third-party systems 130. The web server 235 serves webpages, as well as other web-related content, such as JAVA®, FLASH®, XML, and so forth. The web server 235 may receive and route messages between the online system 140 and the client device 110, for example, instant messages, queued messages (e.g., email), text messages, short message service (SMS) messages, or messages sent using any other suitable messaging technique. A user may send a request to the web server 235 to upload information (e.g., images or videos) that are stored in the content store 210. Additionally, the web server 235 may provide application programming interface (API) functionality to send data directly to native client device operating systems, such as IOS®, ANDROID™, WEBOS®, or RIM®. [0041] The impressions log 240 stores metadata regarding impressions made to users of the online system 140. Impressions made to the users may include impressions of promoted content from one or more third-party systems 130. Upon being presented with a promoted content, a user may perform an action against the promoted content. 
These actions may include viewing the promoted content (for a period of time), clicking or interacting with the promoted content, sharing the promoted content in the online system 140, indicating a preference (e.g., a like) for the promoted content, posting a comment regarding the promoted content, and so on. These actions are stored by the impressions log 240 for each user and each promoted content. [0042] Additionally, after a period of time, which may be a short period of time, e.g., 10 seconds, or a longer period of time, e.g., 2 weeks, the online system 140 may receive from the third-party system 130 that provided the promoted content an indication of a benefit provided to the third-party system 130 by those users who had been presented with the promoted content from the third-party system 130. This benefit may in some cases be referred to as a conversion, and is any event, attribute, or other factor that provides value to a third-party system 130. The benefit may be defined by the third-party system 130 according to an objective or may be a default benefit indicated by the online system 140. Benefits may include an app installation, a purchase, a click, adding an item to a shopping cart, a particular amount of time spent at the third-party system 130, a referral made by the user, signing up for a newsletter, sharing something of the third-party system 130 on a social network, performing an action on the online system 140 (e.g., liking a page on the online system 140 associated with the third-party system 130), visiting a particular webpage of the third-party system 130, a phone call, an internal search on the third-party system 130 using a particular search term, an in-app purchase or event, and so on. The type of benefit provided by each user to a third-party system 130 and the timeframe between the action performed at the online system 140 by a user and the occurrence of the benefit provided by the user to the third-party system 130 are stored in the impressions log 240. [0043] Note that the occurrence of the benefit and the performance of the action need not be initiated by the same client device 110 of the user. For example, a user may view a promoted content on a desktop device but make a purchase on a mobile device. [0044] The impressions log 240 may also store other metadata regarding these actions performed by users and the benefits provided to third-party systems 130. For example, if the action performed by the user is associated with content on the online system 140 (e.g., viewing a video), the impressions log 240 may store information regarding such content, such as a link to the content, the content itself, and so on. The impressions log 240 may store similar information regarding the benefits provided, such as information regarding the purchased item if the benefit provided is a purchase of an item. In some cases, the benefit provided by a user to a third-party system 130 may occur after multiple presentations of promoted content from that third-party system 130 to the user. The impressions log 240 stores in such cases each presentation of promoted content for the user and links this to any eventual occurrence of a benefit provided to the third-party system 130 by the user. [0045] In other embodiments, the impressions log 240 stores more or less information than the information described above. 
For example, the impressions log 240 may store only the action and benefit (along with respective timestamps) as described above without storing a timeframe between the action and the benefit. In one embodiment, the impressions log 240 is stored in offline storage due to storage requirements (i.e., it is not stored in memory). [0046] The training data generator 245 generates training data from the data in the impressions log 240. The training data generator 245 may standardize the data, may convert the data from a sparse data set to a smaller data set, may apply transforms to the data, and may remove unnecessary data to generate the training data. [0047] In one embodiment, the training data generator 245 standardizes the data in the impressions log 240 to generate the training data. In particular, the training data generator 245 may standardize the indicators in the impressions log 240 indicating the benefits provided by users to the third-party system by converting the benefits indicated by the third-party system into standard types. For example, a third-party system 130 may indicate as a benefit “shopped for SKU #12345.” The training data generator 245 may convert this to a benefit of type “purchase.” [0048] In one embodiment, the training data generator 245 converts sparse data in the impressions log 240 into a smaller data set by consolidating one or more types of benefits into a single type. For example, the training data generator 245 may consolidate a benefit indicating “purchase on a mobile device” and “purchase at a non-mobile device” into a single type of benefit indicating a “purchase.” The training data generator 245 may also standardize different actions performed by the user at the online system 140 that are stored in the impressions log 240. [0049] In one embodiment, the training data generator 245 transforms the data in the impressions log 240. In one case, the training data generator 245 converts a timestamp of an action performed by a user at the online system 140 and a timestamp of a benefit received by the third- party system 130 from a user into a timeframe between the action performed and the benefit received. This timeframe indicates the duration of time from an action performed by a user in response to presentation of the (last) promoted content to the user and the occurrence of the benefit to the third-party system 130. For example, a user may click on a promoted content presented to the user at the online system 140 at a time X (e.g., 10:00 a.m., January 1) and later, at a time Y (e.g., 11:00 p.m., January 3), the user makes a purchase at the third-party system 130. The training data generator 245 computes the difference between the time Y and the time X (e.g., 2 days and 13 hours) to determine the timeframe between the click and the purchase. [0050] In one embodiment, the training data generator 245 performs other actions against the impressions log 240 to generate the training data. This might include removing unnecessary data to generate the training data. For example, the training data generator 245 may remove additional information such as a username, promoted content identifier, and so on, when generating an entry in the training data indicating an action performed. [0051] In one embodiment, the training data generator 245 transforms the data in the impressions log 240 into training data that is classified numerically. 
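For illustration, a minimal Python sketch of the training data transformation described in paragraphs [0046] to [0051] is shown below. This is a hypothetical rendering: the mapping tables, field names, and numeric codes are invented for the example and are not taken from the specification.

```python
from datetime import datetime

# Hypothetical mappings; the real taxonomy of benefits and actions is not specified here.
BENEFIT_CANONICAL = {
    "purchase on a mobile device": "purchase",
    "purchase at a non-mobile device": "purchase",
    "shopped for SKU #12345": "purchase",
}
ACTION_CODES = {"click": 1, "view": 2, "share": 3}

def to_training_row(action_type, action_time, benefit_label, benefit_time):
    """Turn one raw impressions-log entry into a numeric training example:
    (action code, canonical benefit type, timeframe in days). Identifiers and
    other unnecessary fields are simply dropped."""
    timeframe_days = (benefit_time - action_time).total_seconds() / 86400
    return (
        ACTION_CODES.get(action_type, 0),
        BENEFIT_CANONICAL.get(benefit_label, benefit_label),
        timeframe_days,
    )

# e.g. a click at 10:00 a.m. January 1 followed by a purchase at 11:00 p.m. January 3
row = to_training_row("click", datetime(2021, 1, 1, 10, 0),
                      "shopped for SKU #12345", datetime(2021, 1, 3, 23, 0))
# row == (1, "purchase", ~2.54)  -- roughly 2 days and 13 hours, as in the example above
```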
For example, the training data generator 245 may convert the types of actions stored in the impressions log 240 into numbers (e.g., a click is represented by “1,” a view only represented by “2”). [0052] In one embodiment, the training data generator 245 generates training data that includes an action performed by a user at the online system 140 (in response to being presented with the promoted content), the type of benefit provided by a user to a third-party system 130, and the timeframe between the action being performed and the benefit being received. [0053] The benefits predictors 250 make predictions for third-party systems 130 regarding the likelihood that a certain type of benefit occurs for a particular type of action performed by a user at the online system 140 in response to being presented with promoted content from the third- party system 130. [0054] Each benefits predictor 250 may be used to generate a separate prediction indicating the likelihood of a certain type of benefit occurring for a third-party system 130 within a certain timeframe in response to a certain type of action being performed by a user at the online system 140 after being presented with a promoted content of the third-party system 130. The timeframes may include 1 day, 2 to 14 days, and 15 days and beyond (or any other combinations of timeframes). As noted, the benefit may be anything that provides a value to a third-party system 130, and an action is an action that a user performs on the online system 140 in relation to being presented with a promoted content from the third-party system 130. [0055] Thus, the online system 140 may have multiple benefits predictors 250, with each benefits predictor 250 predicting the likelihood of a benefit occurring for a different type of benefit, action, and timeframe combination. [0056] To make an accurate prediction of the likelihood, each benefits predictor 250 may select data from the training data that corresponds to a single combination of benefit type, action type, and timeframe range for a particular third-party system 130. In another embodiment, each benefits predictor 250 retrieves similar data directly from the impressions log 240. [0057] An example of a particular benefits predictor is one that estimates the occurrence of a purchase at a third-party system 130 (the benefit) due to a click (an action) by a user against a promoted content provider occurring within the most recent 2 to 14 days (the timeframe). [0058] In one embodiment, each benefits predictor 250 predicts the likelihood by first determining a first probability of the type of action being performed by users given an impression of a promoted content of a third-party system 130 to the users. The benefits predictor 250 may utilize the training data or the data in the impressions log 240 to determine a total number of impressions of promoted content of the third-party system 130 that have been presented to users and determine how many of these resulted in the action being performed by a user. [0059] The benefits predictor 250 may also determine a second probability of the particular type of benefit occurring given that the particular type of action has occurred in the particular timeframe. The benefits predictor 250 may again utilize the training data or the impressions log 240 to determine a number of the particular type of action performed by users for the third-party system 130, and a number of the particular type of benefit occurring within the particular timeframe. 
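The two probabilities described in paragraphs [0058] and [0059], together with their combination by multiplication discussed in paragraph [0060] below, can be sketched as follows. This is a hypothetical illustration; the function name, argument names, and example counts are assumptions.

```python
def predict_benefit_likelihood(impressions, actions, benefits_in_timeframe):
    """Estimate the likelihood that a given benefit type occurs within a
    timeframe for a given action type using the two-step counting described
    above: P(action | impression) * P(benefit within timeframe | action).
    Inputs are simple counts for one (benefit, action, timeframe, promoter)
    combination."""
    if impressions == 0 or actions == 0:
        return 0.0
    p_action_given_impression = actions / impressions
    p_benefit_given_action = benefits_in_timeframe / actions
    return p_action_given_impression * p_benefit_given_action

# e.g. 10,000 impressions -> 300 clicks -> 12 purchases within 2 to 14 days
likelihood = predict_benefit_likelihood(10_000, 300, 12)   # 0.03 * 0.04 == 0.0012
```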
[0060] The benefits predictor 250 may combine these two probabilities (e.g., by multiplying their values together) in order to generate the likelihood of a particular combination of benefit, action, and timeframe for a third-party system 130 given that impressions of the third- party system 130 were made to the users for which the likelihood is being calculated. [0061] In one embodiment, each benefits predictor 250 uses a subset of the training data or the data from the impressions log 240 when determining the above-mentioned probabilities. The subset of the data may be data from a particular range of timestamps (e.g., the last 30 days). [0062] Each benefits predictor 250 may use a different or modified method to generate the predictions. In one embodiment, a benefits predictor 250 includes a regression model (e.g., a multidimensional logistic regression model) fitted to the data in the impressions log 240 or the training data. The regression model is used to determine the probability of a particular benefit occurring for a particular combination of action and timeframes. [0063] In one embodiment, the attribution selector 255 determines an attribution value for one or more of the likelihood predictions made by the benefits predictors 250. The attribution value indicates how significant different combinations of types of actions and timeframes are in causing a type of benefit to occur for a third-party system 130. For example, in the case where the benefit is a purchase for a third-party system 130 selling luxury goods, a click type action and a timeframe of 1 day would likely be less significant in causing the purchase to occur, compared to a 2-to-14- day timeframe with a view action. This may be attributed to the fact that users typically do not purchase expensive luxury goods impulsively but may consider the potential purchase before possibly making it. [0064] In another embodiment, the attribution selector 255 may perform a lift analysis to determine the attribution. The lift analysis determines the “lift,” or increase in the benefit, provided to a third-party system 130 due to the presentation of a promoted content to users. [0065] In one case, to perform the lift analysis, the attribution selector 255 may exclude some users that qualify for the targeting criteria of a promoted content from being shown the promoted content of the third-party system 130. The attribution selector 255 may measure the difference in an amount of benefit provided to the third-party system 130 between the excluded users and the non-excluded users. The benefit measured is the type of benefit predicted by the benefits predictor 250. The difference determined by the attribution selector 255 is the amount of lift that showing the promoted content to users creates. The attribution selector 255 may further exclude those cases of lift where a user was shown multiple instances of promoted content. The attribution selector 255 determines the attribution for each action and timeframe combination based on the amount of lift above the average lift that the particular combination provides. This may indicate that this particular combination of action and timeframe influences users to provide a greater benefit for the third-party system 130. Thus, the attribution for benefits predictors 250 with this combination of action and timeframe may have higher attribution values. [0066] FIG. 3 is a block diagram illustrating an exemplary data flow for determining the value of content according to different actions and timeframes. 
Although certain elements are illustrated in FIG. 3, in other embodiments the elements may be different and the flow of the data through the elements may be different. [0067] Initially, the online system 140 receives the benefits metadata 320 from the third- party system 130A. This benefits metadata 320 includes information regarding the benefit provided by a user to the third-party system 130, may include an identifier of the user (which may be hashed to avoid unnecessary disclosure of personally identifiable information), and includes a timestamp at which the benefit occurred. For example, the benefits metadata 320 may identify that a user made a particular purchase (the benefit) at a particular timestamp. The online system 140 stores this information in the impressions log 240. [0068] The online system 140 also stores actions metadata 310 in the impressions log 240. The actions metadata 310 may include information regarding actions performed by users in response to being presented with promoted content. For example, in relation to the above example, the actions metadata 310 may include an identifier of the user, a click (action) performed by the user against the promoted content, and the timestamp of the click. [0069] The training data generator 245 accesses the benefits metadata 320 and the actions metadata 310 in the impressions log 240 and generates the training data 330 using this metadata. As noted above, the training data generator 245 may perform various transforms and other actions against the data in the impressions log 240 to generate the training data 330. For example, referring again to the above example, the training data generator 245 may include in the training data the information regarding the third-party system 130A indicating a purchase made by the user, the click performed by the user, and a timeframe between the click and the purchase calculated based on the difference of the timestamps indicated in the metadata associated with the benefit and action information. The training data generator 245 may continuously or periodically update the training data based on new information collected in the impressions log 240 and may purge old data from the training data 330. Content Allocation on Pages [0070] FIG. 4 is a flowchart illustrating a request and answer content-to-slot ranking system. A request is for a list of content slots to fill within a given page. The slots on the page have an ordering from first to last (sorted by page utility, e.g., prominence, or as seen first by the user), even though when rendered on a device, the slots may not necessarily be displayed in a linear order in space and time. Slots may be ordered on the page in a list, a position on a grid, a time slot on a timeline, or other concepts of “display ordering.” [0071] In step 402, a request for a content is sent to both the organic and promoted systems in parallel. The request may be based on a search engine query, relation to page query, and/or general solicitation. The request can be in the form of a number of items, a timeline with time slots, or any other structure representing the ordered list of positions with optional contextual information about the display context like grid layout of the page. In some embodiments, requests also include data particular to an active user viewing the page (e.g., contextual information used for selecting the qualified and best content items to present that may be based on a past user history or interaction). 
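As a hypothetical sketch of step 402, the request might be represented as an ordered list of slot positions plus optional display and user context, and dispatched to the organic and promoted systems in parallel. The field names and the threading approach below are illustrative assumptions, not part of the specification.

```python
from concurrent.futures import ThreadPoolExecutor

def build_request(query, num_slots, layout="grid", user_context=None):
    """One possible shape for the step-402 request: an ordered list of slot
    positions plus optional display and user context."""
    return {
        "query": query,
        "slots": list(range(num_slots)),   # ordered first-to-last by prominence
        "layout": layout,
        "user_context": user_context or {},
    }

def fetch_candidates(request, organic_system, promoted_system):
    """Send the same request to the organic and promoted systems in parallel
    and collect their independent responses (steps 402-404)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        organic_future = pool.submit(organic_system, request)
        promoted_future = pool.submit(promoted_system, request)
        return organic_future.result(), promoted_future.result()
```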
[0072] In step 404, both the organic system and the promoted system respond back independently and in parallel with selected item position assignments. Items are typically represented as an ID and metadata to the display system for rendering. The application server receives both organic and promoted responses and merges those responses together to create the final composition. In general, the promotion item position assignments override the organic item position assignments. [0073] In step 406, the platform allocates the organic system responses to slots by overall quality score. In step 408, the platform reorders the allocations of step 406 based on the promoted system response. Reordering is most effective when there is a limit to total retrievable items in the content query. The reordering occurs based on a content auction bid. The bid is the difference in bidder utility from the original position to a higher one (caused via reordering), and accounts for an increased click probability and probability to be shown. If the item was retrieved by the organic system but otherwise would not have been displayed (e.g., because its predicted user quality is too low), then the full bid applies. [0074] Overall quality score refers to an amalgamation of a number of varied measurable statistics. A common method for estimating the “quality” of a content item is to use personalized machine learning models to predict the probability of the user clicking that item. The higher quality items will have a higher likelihood that the user will click or otherwise engage with the item. [0075] Where a given page has many different ways of engaging with content, like clicks, click duration, view time, shares, comments, hides, video plays, conversions, downloads, purchases, and/or saves, there are additional estimators of quality. For example, “weighted engagement” predicts the probability of every important engagement and computes linear weights so that each estimate contributes roughly equally to the final score. Thus, if one event is twice as rare as another, then it has twice the weight. Negative events like hides have negative weights. Combining these estimates and weights renders a weighted average score. Some example embodiments include any combination of: [0076] Capping: Cap the range of each estimate so that an estimate outlier doesn’t dominate the composite score. An example cap is the 99th percentile prediction. This way, multiple event types must all be high to have a high score. [0077] Prior belief: Give a higher weight to higher-intent, less ambiguous actions. These actions tend to be rarer. For example, a click could be by accident. A “save” action requires multiple steps to confirm positive intent. Even if saves are 10% as common as clicks, a save prediction may have 20x the weight of a click. [0078] Mixing events: Do not mix together event predictions that have different average rates without normalization, for example, the probability of a sale on viewing an item. Cheap sales tend to be more common than expensive sales. When the probability of sale is modeled for a user over a mixture of cheap and expensive items, the resulting model merely indicates that cheap items are of higher quality than expensive items for all users, which may not be correct. An example normalizing factor is the average sale rate of the item over all users with appropriate corrections for confounders and use of smoothing priors.
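The weighted engagement score of paragraphs [0075] through [0077] can be sketched as follows. The average event rates, caps, and predictions below are illustrative assumptions: weights inverse to the event rate make a rarer, higher-intent event (a save) contribute as much as a common one (a click), negative events subtract, and capping prevents one outlier prediction from dominating the composite score.

```python
# Illustrative rates, caps, and predictions; none of these numbers come from the specification.
avg_rates = {"click": 0.10, "save": 0.01, "hide": 0.02}   # platform-wide average event rates
caps      = {"click": 0.40, "save": 0.08, "hide": 0.10}   # e.g., 99th-percentile prediction caps
signs     = {"click": +1.0, "save": +1.0, "hide": -1.0}   # negative events get negative weight

def weighted_engagement(predictions: dict) -> float:
    """Composite quality score: cap each event prediction so an outlier cannot
    dominate, then weight it inversely to its average rate so that an event
    twice as rare carries twice the weight."""
    score = 0.0
    for event, p in predictions.items():
        p = min(p, caps[event])                    # capping
        weight = signs[event] / avg_rates[event]   # twice as rare -> twice the weight
        score += weight * p
    return score

print(round(weighted_engagement({"click": 0.12, "save": 0.015, "hide": 0.005}), 2))  # 2.45
```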
[0079] Normalization: If the quality score is compared across users and contexts, then additional normalization may be needed to remove confounders. For example, if all iPhone users click twice as much as all Android users simply because of how the device operates, then without correction, the same item displayed on iPhone versus Android would appear to have twice as much quality contribution from clicking. These corrections may be fitted by standard linear confounder subtraction techniques for average user engagement rates shrunk to priors by demographics and by display context like device type and Internet connection speed. [0080] No interaction with allocation: The underlying prediction (e.g., click) should not depend on the allocation of the item. A technique is to estimate the event probabilities as if the item were placed in the first slot on the first page, or some other constant position. [0081] Disclosed herein are improvements to an overall quality score for use in a promotion system. Using a platform that tunes parameters, an administrator begins with parameters (e.g., a linear combination of weights) from a distribution over users. At the end of treatment, the platform fits a function to the user responses, including time spent and displaced revenue compared to no quality control. Using function maximization techniques, the platform finds the set of parameters (e.g., engagement weights) that maximize the ratio of expected time spent to displaced revenue as a function of displaced revenue (in effect, identifying the most efficient quality score that exchanges short-term revenue for user experience). Parameters are adjusted to either fix the target amount of short-term revenue to displace or find an average parameter over a reasonable range of useful target displaced revenue. Additionally, a condition parameter selects optional features like device type, country, and other contextual and user features. [0082] Additional techniques may provide further insight as to overall quality scores. The above techniques refer to the use of predicted user engagement of different types to form a composite quality score. Other types of quality scores can also be formed using similar techniques. If using a linear (i.e., weighted average) model to combine all quality scores, then all scores need to be transformed to be comparable on similar scales. A mean subtraction, variance division treatment can work, or other similar “standardization” numerical techniques for comparing different distributions. Examples of other types of scores and how they would be computed include: [0083] Matches user interest: The item is classified by different content topics using a supervised trained topic classifier. Generally, there is a dictionary of different topics, and the classifier will return a list of probabilities that the item has each topic or some other “matching” score. The user also has a set of interests, which are topics in the same dictionary. These topics could also have weights or probabilities. The quality score is a function of the predicted item topics and the user interests. Functions include: [0084] Number of overlaps, or dot product of topic vectors; alternatively, a trained function of these two sets to produce a “matching” score. In some embodiments, the training is performed using supervised training, e.g., querying the user whether they are “interested in this?”, or by engagement signals (people who have this interest click on items with these interests), or a combination of these options.
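A minimal sketch of the “Matches user interest” score of paragraphs [0083] and [0084], using the dot product of topic vectors named above. The topic probabilities and interest weights are assumed to be supplied by an upstream classifier and user profile; the values shown are illustrative only.

```python
def topic_match_score(item_topics: dict, user_interests: dict) -> float:
    """Dot product of the item's predicted topic probabilities and the user's
    interest weights over a shared topic dictionary."""
    return sum(p * user_interests.get(topic, 0.0) for topic, p in item_topics.items())

# Illustrative values; a real topic classifier would supply the item probabilities.
item_topics    = {"gardening": 0.8, "power tools": 0.6, "fashion": 0.05}
user_interests = {"gardening": 1.0, "cooking": 0.7}

print(topic_match_score(item_topics, user_interests))  # 0.8
```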
[0085] High-quality creative: The creative (image, text, video) is in high resolution and without notable degradations like poor compression. These can be detected using standard algorithms. Videos and audio are of a suitable length for the platform. Lower quality creatives have a negative quality score. [0086] Trusted content creator: Human curators approve certain trusted content creators (e.g., Disney). All content from these trusted sources is given a higher quality score. Certain topics, as classified using techniques described in “Matches user interest,” may also receive a positive score, for example, arts and crafts on Amazon, because that content aligns with the product design of the system. [0087] Negative or positive future experience score: A supervised model that predicts future product engagement on other items, conditional on observing this content item. If future engagement increases, the score is positive; if future engagement decreases, the score is negative. This is related to the “slate optimization” technique published by YouTube. [0088] Spam, distracting: An “untrusted” score, generated by standard techniques such as whether the content was created by an account in a suspect foreign country or contains forbidden keywords like “weight loss” or “<someone in a given profession> hate him!” Topics that can be classified using techniques similar to “Matches user interest” may be applied in reverse. Thus, if the content item has those topics, the item receives a negative score. [0089] Violated policy, e.g., fraction of text: negative scores are applied to “forbidden topics” like nudity, and/or a rule like “% of image that is text” is applied. [0090] Creative type offsets: Rules for certain creative formats that are a greater or lesser distraction to users. For example, large video ads that fill the screen have a flat negative quality contribution simply because that type of content is a significant interruption. [0091] Query relevance/contextual relevance: In search platforms, whether the content is relevant to this query influences overall quality. The query relevance is trained through use of a “probability of relevance” score, a numeric or ordinal score, or other solutions. Generally, the probability that a human reviewer would label an item, query pair as “irrelevant” is a negative quality score, and probabilities of other labels for definitions of “match” (e.g., related, same category, close match, exact match) have positive quality scores. Platform training on query relevance may further result from using user engagement signals as in “Matches user interest.” Query relevance may be built on top of topic classifier solutions (e.g., using prediction features) as in “Matches user interest.” [0092] Landing page quality: Rather than scoring the quality of the item itself, score the quality of the destination that the item links to, e.g., webpage, mobile app, product recommendation page. Most techniques listed above can be reapplied with appropriate contextual modifications to also score the quality of the landing page and/or destination. [0093] Creator trust: A prior belief that this content creator will create other high-quality content based on the quality scores of past submitted content. An average quality score of other contents using a prior-shrinkage solution based on the number of impressions in the recent past for the creator is an example implementation.
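The prior-shrinkage creator trust score of paragraph [0093] may be sketched as follows. The prior value and prior strength are hypothetical tuning parameters, and the impression count stands in for the creator’s recent history; this is one possible implementation, not the claimed one.

```python
def creator_trust(avg_quality: float, impressions: int,
                  prior_quality: float = 0.5, prior_strength: float = 100.0) -> float:
    """Prior-shrinkage estimate of creator trust: with few recent impressions the
    score stays near the platform prior; with many it approaches the creator's
    own average quality score."""
    return (avg_quality * impressions + prior_quality * prior_strength) / (
        impressions + prior_strength)

print(round(creator_trust(avg_quality=0.9, impressions=20), 3))    # 0.567 -> mostly the prior
print(round(creator_trust(avg_quality=0.9, impressions=5000), 3))  # 0.892 -> mostly observed
```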
[0094] The machine learning models may initially be trained using “priors.” A prior is a default parameter used when there isn’t data to compute a better value. For rates, e.g., click rates, one method to initially populate estimates is to use the platform or advertiser average click-through rate (CTR) and then use a formula like additive smoothing so that as the platform obtains more data, the estimate gradually improves. [0095] FIG. 5 is a flowchart illustrating a bidding series content-to-slot ranking system. Figure 5 replaces steps after 404 of Figure 4 and begins at step 506 as an alternative to steps 406 and onward of Figure 4. Page compositions don’t always have a linear ordering of page utility. In some embodiments, the same content items in different slot allocations have different page utility scores. Expected utility of a content item may be different in different slots. Two overall quality score controls include: 1) a threshold control relating to the expected level of experience in a slot and 2) an impact control relating to the cost of a lower experience in that slot. [0096] The threshold control refers to an expected value for how negative (in a standardized unit, e.g., monetary or utility score) a user’s experience is with a promoted content in comparison to the experience when an organic item is allocated to the given slot on the page. The expected value is at most zero, because promoted content is not expected to be of a higher experience than the organic content. Even in a circumstance where promoted content has a higher experience than the organic content, a high confidence of a fantastic user experience is also an even higher confidence in the absence of a positive user experience. [0097] As an illustrative model, a maximum quality score may be 75% and an inflection point parameter may be 99% of the overall quality score of all prior content items. During configuration of the platform with a given page, an administrator fits a sigmoid curve to the quality score where below the inflection point, there is no change, but above the inflection point, as the quality score increases, the resulting value approaches the maximum quality score. [0098] The threshold values should be calibrated with the actual page experience in real user studies. For example, consider the average probability for showing organic content in a slot. [0099] For control 1, if the expected level is a constant, then all positions are assumed to be of equal user experience. A constant expected level is a good model for “feed-like” pages that encourage browsing and a diversity of content. If constant, then an item in organic display can be shifted to any other position without a loss in user experience. An example of a suitable environment is Amazon’s shopping platform after a first page. [00100] Alternatively, if the expected level is not a constant, then typically top positions have higher expected quality than lower ones. Such circumstances make more sense for “search-like” pages where there is a perceived “best” or “best set” result in roughly descending order. [00101] The impact control refers to how much the expected negative user experience matters to the overall page experience in a slot. The impact control is a platform-specific tuned weight. In general, a negative experience in a top slot matters more than one in a lower slot.
One reason for this is that a user will commonly end a user session (leave the page) if there is a large amount of negative user experience for a content item they observe. If the user sees that negative experience at the beginning of the session (at the top/start of the page), then the loss to the platform is the rest of the session. If the user sees the negative experience at the end of the session, then they may end the session, but there is little of the session remaining, and so the impact is smaller. Estimating the impact of ending a session at different positions is one method for choosing priors for the impact curve per allocation position. [00102] For control 2, the cost of a lower expected display experience in a slot refers to a cost of a negative experience as measured by the page utility score. Each slot may have a different exchange rate of expected display experience at a slot and overall effect on page utility score. [00103] In some embodiments, the impact score is constant across all slots, in which case a degraded experience is equally costly everywhere. In other embodiments, the impact score is variable across slots and will tend to decrease in lower positions. [00104] The overall impact of these two parameters (threshold and impact) is that the platform allows modeling of different sensitivities to negative user experiences generated by promoted content for different slots. Additionally, the platform enables an administrator to access quantitative methods to make sense of why these parameters were selected versus a total black-box solution. [00105] In step 506, where each content item has an overall quality score, for each slot, the platform transforms the overall quality score to expected displayed experience (EDE) based on the slot exchange rate. In step 508, the platform computes the user value of having the EDE at the respective slot. The user value is a numerically represented value to the page’s viewing user of the page display organization. User value is represented in monetary amounts as the bid cost to a content auction at the given slot. In some embodiments, it is preferable to use expected rather than actual organic experience to reduce variance and not degrade promoted experience if organic experience is degraded. [00106] In step 510, the platform reallocates to increase promoted content load. From a top slot, the platform fills in slots between organic content slots with promotions. In step 512, the platform computes the total page utility score of a current allocation. [00107] In step 516, the platform determines whether a minimum spacing (e.g., a predetermined threshold or 0) has been met. If the minimum spacing has not been met, the platform performs step 514 and generates additional allocations, each with varied parameters. In step 514, the platform reduces the spacing between promoted content and increases the exchange rate, and then repeats the allocation and total page utility score computation using the new parameters. For example, the ratio of promoted content to organic content shifts toward promoted content, and the value required by the promoted content needs to become greater to compensate for the increased ratio. [00108] If the minimum spacing is met, the method proceeds to step 518. In step 518, the platform identifies the content allocation that was computed to have the highest page utility score across all slots.
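A simplified sketch of the spacing search of FIG. 5 (steps 506 through 518) is given below. The bids, quality scores, utility model, and exchange-rate schedule are illustrative assumptions rather than the claimed computation; the sketch only shows the shape of the loop: score an allocation, tighten the promotion spacing, raise the exchange rate, and keep the allocation with the highest page utility.

```python
# Illustrative sketch of the spacing search in FIG. 5 (steps 506-518).

def page_utility(spacing, promos, exchange_rate, n_slots=10):
    """Score one allocation: each placed promotion adds its bid and subtracts a
    user-experience cost derived from its quality score and the slot exchange rate."""
    promoted_slots = list(range(0, n_slots, spacing + 1))  # promoted slots at this spacing
    placed = promos[:len(promoted_slots)]                  # insert highest-utility promos first
    utility = 0.0
    for promo in placed:
        ede_cost = (1.0 - promo["quality"]) * exchange_rate  # expected displayed experience cost
        utility += promo["bid"] - ede_cost
    return utility

promos = [{"bid": 4.0, "quality": 0.9}, {"bid": 3.0, "quality": 0.6},
          {"bid": 2.0, "quality": 0.5}, {"bid": 1.0, "quality": 0.3}]

best, spacing, exchange_rate, min_spacing = None, 4, 1.0, 0
while spacing >= min_spacing:                          # step 516: stop at the minimum spacing
    u = page_utility(spacing, promos, exchange_rate)   # steps 506-512
    if best is None or u > best[1]:
        best = (spacing, u)
    spacing -= 1                                       # step 514: reduce spacing between promotions...
    exchange_rate *= 1.5                               # ...and increase the exchange rate

print("best spacing:", best[0], "page utility:", round(best[1], 2))  # best spacing: 3
```

With these assumed numbers, an intermediate spacing maximizes page utility, mirroring the FIG. 6 example in which the third allocation has the greatest utility.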
[00109] If all promoted content is as good (in overall quality score) as organic content, then promotion load will be maximized and user value will always be zero. The exchange rate should increase because the entire presentation is changing as promotion load increases. The method described in Figure 5 assumes that promotions are evenly spaced, and even spacing is an optimal allocation. Promotions do not necessarily have to be evenly spaced. The method may be run with subsets of slots as opposed to the total available slots in order to vary the promotion spacing. That is, the spacing of promotions in the highest-ranked set of slots may vary from the promotion spacing in lower-ranked slots. [00110] FIG. 6 is an illustrative example of the method of Figure 5. In a given example, there are 10 slots available on a page. Organic content is portrayed through blank space. The organic items do not add to page utility. Content items A through E are promoted. The figure displays 5 different allocations of progressively decreasing spacing of promoted content load. The table includes an auction bid for each item (the table in the figure illustrates a single bid auction, such as a Vickrey auction), and a user experience cost of that item, represented by the “Comp” column in the Figure 6 table, based on the quality score of the content and the promotion load. The overall utility score of the content is the combination of the auction bid and the user experience cost. In some circumstances, the user experience cost increases in response to an increased promotion load. In each of the sample allocations, the total utility of the allocation is expressed at the bottom of the table. The third allocation has the greatest utility based on bids and cost to user experience. [00111] Notably, the promoted content is added to each allocation in order of individual utility. Item A has the highest individual utility of the available promoted content (4) and thus is the only promoted content inserted into the allocation with the highest spacing between promoted content. Competition Controls [00112] In some embodiments, a given promoter user may select whether they want to appear as both promoted and organic items on the same page. It’s not obvious which solution is optimal for promoters. If they choose “Best” (as in the best option available to them), then they may be competing with competitors but have better unit economics. If they choose “Both,” then competitors may be blocked, but they may be paying for clicks that they otherwise would have received on the organic listing. Google Ads and Apple App Store use “Both” whereas Amazon usually uses “one or the other” but not necessarily “best.” Disclosed herein, the promoter is enabled, via graphical user interface (GUI) controls, to choose whether they want “best” or “both” auction display options. Given this option, promoting is never less profitable than not promoting under the auction model. This is enforced by comparing the outcome of promotion to non-promotion for every won slot that is also potentially deliverable by the organic system. [00113] FIG. 7 depicts a flowchart illustrating competition controls. In step 702, at delivery time (of content on page), if a promoter is using serial organize, then promoted retrieval prioritizes “both” triggered campaigns/items over “best” triggered campaigns/items if the item is also in the organic response in early deduplication unless other priority rules select otherwise.
In step 704, allocation is performed as described in any other allocation method disclosed herein, including allocation of organic items. If “Best” is selected, the allocation process does not repeat and organic items that are also promotion auction winners are displayed as organic. If “Both” is selected, the allocation process repeats and promotion auction winners that are also organic items are displayed in both a slot for organic content and a slot for promoted content without exclusions. [00114] In step 706, allocation is performed an additional time, but excludes a promotion item winner that could also be allocated as an organic item. In step 708, for each such qualified item (of step 706), the platform compares outcomes (potential allocations) of steps 704 and 706. If 706 is better for the promotion than the allocation of 704, then that promoted item is excluded from the promotion auction. If promotion (as opposed to organic delivery) causes an increase in rank/a positionally higher slot, then the promoted version of the item is “best” delivery. [00115] In Promoted Case: [00116] Value is the item bid at position x minus the Vickrey price; the item does not appear in organic listings [00117] Surplus: V(a,x) – P(x) [00118] In Counter Factual Organic Case: [00119] Value is the item bid at position y, price is zero [00120] Surplus: V(a,y) [00121] x is a higher rank/more valuable than y (or else “a” would decline this auction) [00122] V(a,x) > V(a,y) [00123] V(a,x) – P(x) – V(a,y) + q < 0 [00124] Where “q” is an optional price discount to encourage promotions by the platform and is otherwise zero. [00125] In some configurations, a complementary bid is set to 0 for position x. The rationale is that, even though the organic system ranks lower, the content being delivered by the organic system is a strong prior of user quality. To promote use of the promotion system, set the complementary bid of otherwise-delivered-by-organic-system items to 0 regardless of position. Delivery Billed on Credit to Be Paid by Future Sale [00126] Promoters frequently prefer to manage their expenses, risk and margins by paying for promotions on a per action (e.g., per incremental sale) basis. However, literally billing (and optimizing) for such rare, variable, or delayed events creates challenges for promotion delivery services for a number of reasons. [00127] One such reason is that the time between the promotion impression and the attributed action can be large, especially in circumstances where the promotion is attempting to sell a product. The time may be measured in minutes, hours, or even days. In the meantime, the platform must decide whether to continue showing the promotion or discontinue due to projected insufficient budget. Such projections are often incorrect. [00128] Another reason is that rare events tend toward high variance. For small promotion campaigns, especially those created by small businesses with small budgets (“mom and pop” stores or local restaurants), the variance can result in too many attributed billable actions in a campaign, causing the budget to be exceeded, or too few billable actions, resulting in promotion campaigns that run “for free.” [00129] In order to address the variance and/or gaps in attributed actions, application of a credit account connected to web promotions that is paid by a fraction of future promotion-attributed sales circumvents the billable action inefficiencies and potential exploits or dissatisfying behavior.
The credit account enables small campaigns to be billed for “fractional” actions in aggregate and reflects a risk profile similar to action billing, whether variance arises by random chance or by purposeful exploitive behavior by promoters. [00130] For example, billing occurs to the credit account on a per impression basis, but repayment of the credit account need only occur on a per sale basis. Additionally, the repayment rate for the credit account may be variable. That is, initial campaign conversions (sales) or late campaign conversions may have increased repayment rates (e.g., the repayment rate may be 2:1 initially to compensate for providing the credit account). [00131] The platform assumes the risk of issuing a revolving credit limit for promotions based on its estimate of future attributed actions. The platform also gracefully forgives promotion debt, or allows promotion debt to be paid down at a discount, to address cold start, small business incentives, and platform promotion growth. The credit design makes the billing more visible and easier to manage. Auction design is opaque to promoters, while forgiving or discounting a debt is highly visible and easy to understand. Credit billing and debt forgiveness preserve both the perceived value of the promotions to the promoter and the aggregate demand-based price support in the auction. [00132] Furthermore, since the platform only earns revenue when the promoter makes a sale, this incentivizes the platform to deliver the most sales versus other proxy engagements that may not lead to a sale. The platform alignment facilitates a fully platform-managed optimization portfolio product designed to maximize total platform sales. [00133] In some embodiments, the size and availability of the credit account is used to increase diversity within the content auctions. To encourage promoters of different company sizes and services, the credit limit may be determined by a machine learning model that increases potential credit limits and repayment rates for promoters that increase content auction diversity. The model evaluates continuously and updates dynamically in response to continuous content auctions being executed for the page. [00134] The platform must decide how much credit to issue a promoter. If the limit is too low, then the platform could miss out on future revenue or frustrate promoters. If the limit is too high, then promotions will be shown that cannot be billed, leading to either default, debt relief, or collections activity, and displacing other paying promoters and organic contents for no revenue to the platform. [00135] FIG. 8 illustrates a flowchart for determining the credit account of promoters. In step 802, using machine learning models based on past sales data, customer relationship, and platform projections, the platform estimates a total number of sales of a given promoter’s campaign, the increase in sales attributed to hypothetical future promotions, and the fraction of sales revenue that the platform can expect to receive. [00136] In step 804, during each campaign, the platform evaluates how promoters are utilizing credit. That is, the platform determines whether promoters are issued credit, how much of the credit line is being used by the promoter (on a per impression basis), and how quickly that credit is being repaid through sales. [00137] In step 806, the system determines whether the promoter is likely to pay off their debt through the current promotion scheme.
If the determination of step 806 is positive, in step 808, the platform reissues credit and potentially increases the limit. If the determination of step 806 is negative, in step 810, the platform takes remedial action. Remedial actions include any of: lowering the credit limit; forgiving debt; requiring cash payment; or providing promotion optimization assistance. [00138] Use of the credit platform is intended to boost cold start and exploration into content auctions. The credit platform further supports diversity in strategic demand, and small versus large promoters. [00139] Trying to address subsidies and other strategic “boosts” on the auction side is difficult and confusing for non-experts. Conversely, controlling from the billing side is much easier to manage. Additionally, issuing credit makes for a simpler GUI for promoters to understand. In some embodiments, in order to compensate for credits, the first few sales per period (day) pay down debts at a higher than 1:1 ratio. For example, the first three sales per day pay down promotion debt at 2:1. [00140] Debt repayment rate is easy to apply to all advertisers without subsidizing big sellers who do not need it. This saves the need for a given promoter to “qualify” as a Small Media Business (SMB) or the need to openly bill large businesses more. For cold start promotion campaigns (new promoters), debits to credit lines are lower than a 1:1 ratio, e.g., 0.5:1, up to a volume limit. The shift of repayment rate is easy to visualize and manage in the user interface without hidden mechanisms. [00141] The combination of the above features enables small, new promoters to try the system and submit real bids and demonstrate real behavior while subsidizing them in a controlled, transparent way. Autobidder/Pacer [00142] A “pacer” is an automated economic system that attempts to maximize an objective, usually total profit, given some constraints on how many auctions a bidder (such as a promoter) can enter. Constraints on pacers are typically: [00143] 1) restrictions on duplicated promotions per user [00144] 2) budget limitations [00145] An “autobidder” refers to a system that operates as an agent for the promoter across multiple auctions. The purpose of this series of inventions is that the promoter only needs to enter a single value, their “true” maximum value of winning, and the platform will always make this the best possible bid. If there were only one auction, then prior art inventions would be sufficient to make this true. However, because page content auctions execute multiple simultaneous auctions, additional systems are needed. [00146] If the promoter is bidding for engagement, like a click, then a bid for insertion is a function of the probability of a click. Likewise, the page itself is bidding for user quality in the form of a complementary “user value” bid. The platform or page itself is bidding in the content auction in order to provide an effective minimum bid/quality score combination that promoters must meet before having a winning bid. The platform bids automatically in each content auction with values that vary based on importance of slot. [00147] A platform level autobidder makes use of machine learning to automatically enter bids for each promoter user. Without a universal platform autobidder, there is an incentive for each promoter to increase their profits by lowering their bids. However, promoters will never do this as effectively as a universally coordinated platform bidder.
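The platform’s complementary “user value” bid of paragraph [00146] can be illustrated with the following sketch. The functional form, the slot-importance schedule, and the constants are assumptions chosen only to show how a promotion must clear a slot-dependent minimum bid/quality combination; the actual platform bids are set by machine learning models.

```python
def clears_platform_bid(promoter_bid: float, p_click: float,
                        quality_score: float, slot: int) -> bool:
    """The page enters its own complementary 'user value' bid per slot; a promotion
    competes only if its expected value plus a quality credit clears that minimum.
    The slot schedule and the quality credit below are hypothetical."""
    platform_bid = 1.0 / (slot + 1)                          # more important (top) slots demand more
    effective_bid = promoter_bid * p_click + quality_score   # eCPM bid plus quality credit
    return effective_bid >= platform_bid

print(clears_platform_bid(promoter_bid=2.0, p_click=0.05, quality_score=0.95, slot=0))  # True
print(clears_platform_bid(promoter_bid=2.0, p_click=0.05, quality_score=0.20, slot=0))  # False
```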
[00148] The platform bidder does not prevent promoters from attempting to game the auction system. Rather, the system is designed so that promoters will only be less profitable in expectation by trying to game the system. From an engineering perspective, the platform bidder enables a page administrator to more cleanly isolate different control systems. For example, if the bidding system always chooses the optimal bid for every auction with the input of a single “true maximum” value of a click, then an administrator can build a system on top of this that determines what the true maximum value of a click is for different targeted users without also having an interaction with what to bid. [00149] Other disclosed embodiments solve for optimal allocation, bid, and price for each auction. However, those features are complicated when there are multiple, simultaneous auctions or a series of auctions for the same item (slot) with a different user/iteration of the page. [00150] The platform autobidder is the architecture that promoters engage with to automatically make bids, and the pacer is the portion of the autobidder that regulates the rate at which bids are made based on predetermined parameters. The autobidder is a machine learning model that learns from the decisions it makes in response to user parameters and the resultant outcomes therefrom. [00151] This system is called a “pacer” because, historically, these systems are implemented as control-feedback systems that try to “pace” an interpolated goal over time. The goal could also be a constant cost-per-impression (CPM) price throughout the day. For an average cost-per-action (CPA) pacer, the target is a constant average CPA for a time window. [00152] Below are four different pacer embodiments that operate under different parameters. Those pacer configurations are: [00153] 1) Probabilistic pacer: Randomly determines to not participate in auctions to conserve budget. The simplest implementation is to randomly enter auctions with p(entry) = budget / (budget required to win all auctions at the bid in a time period). A probabilistic pacer can be built on top of other embodiments listed below as an alternative to scaling the bid. [00154] 2) Discount pacer: Raises and lowers the bid by a scalar in real time so that the cumulative percent of budget spent tracks the cumulative percent of auctions for a time period. Herein, a discount pacer bids across different users via the platform autobidder. The result of having a single platform autobidder is that auctions for the least profitable users are lost first to conserve profit for the most profitable users. [00155] 3) Target performance pacer, or average cost pacer: Rather than pacing a cumulative budget spent, the parameters pace a target aggregate metric (e.g., average cost per action, or average cost per impression). Herein, a machine learning model prioritizes total surplus given the pacing goal of the target aggregate metric. For an average cost per action pacer, this is achieved by attempting to win every impression with the same expected cost per action for every impression won. [00156] 4) Option pacer: Rather than pace over all users, attempt to reach each user most efficiently given some estimate of future opportunities to reach that user. Herein, the pacer uses machine learning models to estimate this over short intervals, for example, within the same day, while accounting for restrictions on duplications.
A rational bidder bids lower for a user that will appear again in another auction soon as compared to a user that will never appear again. [00157] A challenge in auction design is that analytical results, namely, the guarantees of Vickrey auctions, are restricted to a single auction. In repeated auctions, there is “option value” of losing the current auction to win a potentially more profitable auction later. Therefore, bidders should bid lower than their true value. The amount less than true value to bid may be approximated using supervised trained models that learn the optimal fraction of the total value to bid per auction while accounting for the “option value discount” and some information about the current auction. [00158] Given a target audience, a value formula (e.g., click bid), a budget description (total amount, if any, and any period of time), and repetition constraints (frequency caps, minimum item spacing rules, item winner diversity controls), the autobidder will optimize for the best bid per auction to maximize the bidder’s sum profit. If an autobidder has an unlimited budget and there is no repetition constraint, then the maximum profit is to enter and win every auction, and there is no “option value” of losing a current auction to win a future one. The rational bid is to bid the maximum “true value” Vickrey auction bid in every auction. [00159] Prior art uses this assumption. However, this assumption is not realistic of actual promotion delivery services on the platform side, which impose diversity, spacing, and repetition controls, nor on the bidder side, which often has a desired repetition pattern to maximize impact for the bidder and to minimize negative values of annoying users. Herein, the autobidder includes a number of machine learning objectives: [00160] For each bidder, individually, the objective is to maximize their total surplus given constraints (e.g., daily budget), given that all other bidders are also doing this simultaneously. The objective is to maximize the total surplus of the entire platform, given a reasonable reserve price. A solution shows empirically in simulation that: [00161] A. The optimal bid is for every bidder to bid their true value without bid shading. [00162] B. The total surplus of the platform is maximized, both for each bidder and for the platform in aggregate. [00163] C. The clearing price of slots is similar for similar slots across auctions. Slots sold to the same user on the same day should all have a similar price, differing primarily by higher costs for lower user value slots and higher costs for higher value allocation slots. Further, slots sold to similar but different users should have similar distributions of prices. [00164] A properly functioning autobidder will produce option discounts that are (a lower discount multiplier refers to lower necessary bid prices, and vice versa): [00165] Lower multipliers (cheaper bid prices) for frequent users than infrequent users, [00166] Lower multipliers earlier in the day or in the beginning of user sessions than later, [00167] Lower multipliers for users with high prices versus users with low prices due to competitive bidding differences, all else being equal. This helps balance bidder demand between users to maximize total surplus and avoid a “flood and starve” promotion delivery pattern. 
Additionally, prices still vary by how much bidders value users; all else being equal, prices for more valuable users (e.g., clickers in the United States without conversion attribution blocking services) will be higher than for other users. [00168] For the same user (user viewing the page), promoter bids with more fixed auction negative complementary bids (worse user value) will also tend to have lower option discounts. This is because bidders pay all the negative user experience value (low utility score requires a higher bid to win the auction). This has the added benefit of avoiding delivering items to users who are least likely to like them. A “fixed auction negative complementary bid” is a per auction, allocation-independent complementary bid that may be different from a user complementary bid. Preventing Autobidder from Colluding with Itself [00169] One challenge an autobidder platform has is to avoid collusion and bidder information leakage. As a result of the autobidder making all bids for all promoters, the models have to avoid “learning” that a system of automated collusion via the platform-level autobidder in which all bidders collude to bid artificially low reduces competitive prices. Here, the “collusion” is from all bidders using the autobidder system, not from any deliberate action by bidders. [00170] The platform autobidder operates with a machine learning model that uses the results of each auction as training data; however, the individual data associated with each promoter user is also stored by the platform autobidder. With perfect data, and configuration to “optimize return for promoters,” the platform autobidder may learn to never offer true value bids for any of the promoter users, thereby colluding with itself. To prevent the autobidder from learning how to collude, the platform may include guidelines that prevent the practice and/or cause individual bid generation routines for each respective promoter user to operate with some degree of independence and less than perfect information. [00171] The platform autobidder is configured to prevent inadvertently leaking “private value” competitive information across bidders. Revealing partner private values discourages bidders from bidding their true value based on competitive information advantages. The resultant effect is to alert competitors to high-value opportunities and reduce the bidders’ advantage. In order to prevent the platform autobidder from colluding with itself, some embodiments are configured to: [00172] Never reveal details of how the system autobidder makes bids. Bidders can only see the outcomes of their own auctions through standard reporting tools. The operation of the platform autobidder is a black box to all auction participants. The autobidder is a system that runs on private servers. [00173] Set reserve prices, which will prevent prices from ever dropping more than that minimum. [00174] Not allow autobidders to know the bids and prices of any other competitors as features in a given auction. The autobidder will only have access to information about the winners of previous auctions, including prices, volume, and time-series data at the user level and user aggregate level. [00175] Auction winner identities will be anonymous and not distinguishable for at least a threshold period. In some embodiments, auctions of a predetermined age provide full information as training data. 
The autobidder is unable to solve for other types of “non-compete” collusion, such as two high bidders entering alternating auctions so that they don’t inflate each other’s prices. [00176] The autobidder has access to a value representing a potential complementary bid from the platform for user quality control (that is, a bidding entity associated with the page itself as opposed to a third-party promoter). The visibility of the complementary bid to the promoter users varies by embodiment. [00177] There is a distinction between choosing to bid on other users of the page versus the same user of the page. Each scenario can include separate machine learning models. [00178] An example design of the platform autobidder includes a set of users (page viewers), each of whom has a set of attributes. These users generate auction opportunities for a period of time. For each of these auction opportunities, a number of bidders will bid in the auction. Each bidder has a true value of the item. Some bidders may have budget limits. Bidders may have some targeting constraints and “affinity” page-viewer-user attributes. There are simple click engagement prediction models that use the collection of bidder affinity attributes and user attributes to generate click predictions from distributions with different expected values. [00179] Characteristics of the users will be: [00180] A set of a dozen or so binary user features. These are used for simulating differences in event prediction. [00181] A classification of new, core, or casual. New users join during process and do not have predetermined characteristics. Core users will frequently generate auctions on many days and many auctions per day. Casual users will generate fewer auctions per day on fewer days. [00182] A time to live, after which the user will stop using the platform. [00183] The goals are to generate an autobidder model that maximizes the total surplus when applied to Vickrey auctions, and to generate an understanding of different marketplace dynamics on the ideal autobidder like user auction generation frequency, number of items per auction, and budget constraints. The platform may also have additional constraints like minimum time period between repeated item winners from the same bidder. [00184] Surplus is just “value - price.” True value is inferred from bidders. It is important that promoters are incentivized to always provide their true value as a bid. The design of the autobidder model training is based on each auction per bidder acting as a training example. The set of features per training example are variable, and can include: [00185] User features: the set of user features; the user ID; how long this user has been on the platform; features representing the characteristics of past auctions for this user, including: prices and/or frequency and recency of auctions; model-based features predicting the likelihood and quality of future auctions. [00186] Auction features: the time of the auction; the demand curve; the allocation- independent complementary bid for user quality from the platform. [00187] Bidder features: the remaining and spent budget, if any; if there is no remaining budget, the ad automatically loses; the label is the sum surplus per auction. [00188] Data learned from future auctions does not influence the parameters of current auctions; however, in some embodiments, the autobidder makes use of a training stage to set initial auction parameters, and a future holdout evaluation that adheres to above constraints. 
The training process is to maximize total surplus in the entire period of simulation, which is achieved by maximizing the expected surplus per auction. The model training may use a reinforcement learning type algorithm to maximize. Dynamic Promotion Load [00189] Changing the allocation pattern of promotion contents mixed together with organic contents can change the probability that the user interacts with promotional items. This invention trains a model to adjust baseline event predictions used in bid formulation (e.g., click) for different promotion allocations to more correctly maximize total marketplace value. [00190] Dynamic promotion loading builds on the above descriptions for dynamically changing the promotion allocation based on the user quality of all deliverable promotions. For example, if all promotions are of terrible quality, maybe you can only show one on the first page. If enough promotions are of fantastic quality, then the entire page could all be promotions. [00191] In a practical application at a per-auction level of dynamic promotion load, multiple advertisers work to be included in allocations. A first user searches on an online shopping platform (the page) for “Lawnmowers.” Ordinarily, the shopping platform will use its organic system to fill the page with ~10 results for “lawnmowers.” However, the shopping platform’s promotion system determines that there are many promoted results—so many, in fact, that it can fill the entire page of ~10 results with all promoted items with the maximum value trade-off between revenue and user experience. [00192] The shopping platform knows this because their promotion system solves for all other allocations of fewer promotions, and the maximum value is to fill the entire page. Furthermore, the shopping platform determines that the maximum value is to show 5 results from “John Deere” and 5 results from “Craftsman.” When computing the prices, John Deere is billed as if it didn’t participate in the auction at all, not as if each lawnmower displayed by John Deere wasn’t in the auction. [00193] If John Deere didn’t participate at all, then the shopping platform would have shown 6 Craftsman lawnmowers, 2 other lawnmowers promoted by Toro and Honda, and 1 organic result. John Deere is billed in total for this auction the difference in value between what was shown and the hypothetical alternative allocation. This total bill is divided among John Deere’s 5 winning items so that each item is billed for its externality impact on user experience and the remaining bill is divided according to proportional bids so that the profit ratio for every 5 winning promotions is equal to the externality accounting. Craftsman is billed in the same way. [00194] The user then searches on the shopping platform for “Coffee Mug.” The shopping platform runs the same system as before, but there is only one promoter for this query, and that promoter has 2 great-quality promotions and 1 low-user-quality promotion, all from promoter “Dropshipr Mugz.” The shopping platform solves for showing 2 of the 3 promotions, one in slot/position 4 and the other in slot/position 10. Dropshipr Mugz is billed the reserve minimum price for both promotions because if Dropshipr Mugz didn’t participate in the auction, there would be no competition, and the impact to the user externality was less than the reserve price in this case. 
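A sketch of the promoter-level counterfactual billing described for the “Lawnmowers” example in paragraphs [00192] and [00193] is given below. The input values are hypothetical, and “ux_cost” stands for each winning item’s user-experience externality; the remainder of the promoter’s total bill is divided across its winning items in proportion to their bids, as stated above.

```python
def bill_promoter(value_without_promoter: float, value_to_others_with_promoter: float,
                  won_items: list) -> dict:
    """Promoter-level counterfactual billing: the promoter owes the value the rest of
    the marketplace loses because it participated. Each winning item first covers its
    own user-experience externality ('ux_cost'); the remainder is split in proportion
    to the items' bids."""
    total_bill = value_without_promoter - value_to_others_with_promoter
    remainder = total_bill - sum(i["ux_cost"] for i in won_items)
    bid_sum = sum(i["bid"] for i in won_items)
    return {i["id"]: i["ux_cost"] + remainder * i["bid"] / bid_sum for i in won_items}

# Hypothetical numbers: the page would be worth 12.0 to everyone else if this promoter
# did not participate, and is worth 6.0 to everyone else with the promoter's items shown.
won = [{"id": "item-1", "bid": 6.0, "ux_cost": 1.0},
       {"id": "item-2", "bid": 2.0, "ux_cost": 1.0}]
print(bill_promoter(12.0, 6.0, won))  # {'item-1': 4.0, 'item-2': 2.0}
```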
[00195] The user next searches for “Lawn Mower Mug.” In this instance, the shopping platform has only one promotion of poor user quality from Dropshipr Mugz. The shopping platform solves that the maximum value is to show this promotion in the last slot. Dropshipr Mugz is billed the reserve price plus the externality on the user, which in this case is nearly all of Dropshipr Mugz’s bid. The profit to Dropshipr Mugz is nearly $0 for winning this auction. The shopping platform gets the $x for showing the promotion, but this is offset by the negative user experience, and so the shopping platform’s expected profit from the promotion system is near zero, too. [00196] In response, the shopping platform is worried about the lack of vendor competition in lawnmowers. The shopping platform increases the promotion system penalty parameters for showing multiple items from the same vendor. One week later, the user searches for “Lawnmowers” again. This time, the shopping platform maximizes value by showing 3 vendors on the main page: 3 lawnmowers from John Deere, 3 from Craftsman, 3 from other vendors, and 1 organic result. Multiple Items from a Promoter Without Self-Competition [00197] A promoter can have multiple promotions compete in the same auction and appear together in the same presentation. Bidders are enabled to generate complementary bidding against duplicates per advertiser to approximate a combinatorial Vickrey auction but with greater computational efficiency so that the solution can be found in a real-time content delivery system. [00198] In auction theory literature and in other types of markets, like electromagnetic spectrum auctions, bidders can enter packages of bids. For example, a bidder could bid a large amount for all users geo-located in Ohio, Michigan, and Indiana as a package, but if that bidder cannot win Ohio, then they don’t want to bid much for Michigan or Indiana, and the bidder never wants to win both scenarios simultaneously. [00199] The state of the art in promoted content auctions does not allow this type of “package” bidding language. Instead, bidders choose an “optimized action,” like a click, and bids are computed in real time as a function of the estimated probability of a click, p(click). Critically, the bid of every item doesn’t depend on the bids of any other items. The auction bid, or per-insertion “eCPM bid,” is p(click) * click_bid. Promoters bid differently in different auctions because p(click) is computed for every user in every auction. [00200] In simple content auctions, auctions use “greedy” allocation. Simply, all items have a single eCPM bid, as well as a complementary bid to control for platform objectives like platform quality. Final bids are sorted and allocated in decreasing bid order. [00201] Herein, bids can depend on the slot/position allocated. For example, the probability of click may be much higher in the top position on the page, or the user quality control complementary bid may be much lower farther down the page, or contents shown lower on the page may not be shown at all if the user doesn’t scroll far enough. [00202] As a result, the system cannot simply sort bids, because the bid is different for every slot. The resulting interaction between allocation and bid can be represented by a matrix where every row is a promoted item competing in the auction, and every column is the position in which the promotion would be allocated.
In some embodiments, the “nonlinear page maximization” is solved efficiently using bipartite graph matching algorithms. The result is that there is no such thing as a “highest” bid, and the price per position doesn’t necessarily decrease from top to bottom. Prices are computed by repeating the entire allocation without the winning promotion, per winning promotion, and billing the difference in value. This is called “page maximization.” [00203] Greedy solutions tend to poorly allocate in top positions, which should have been allocated to very high quality or cost-per-click (CPC) performance ads. Instead, greedy solutions tend to place CPM brand ads in top positions and then fail to allocate CPC performance ads. [00204] In some embodiments herein, the allocation pattern per page itself can be maximized. Maximizing per page allocation allows some pages to have only one promotion, other pages to have a few promotions, and other pages to be nearly all promotions. The platform promotion system determines in real time which page allocation generates the highest available value. In this invention, the function of bidding per item per position can be different. [00205] The page allocation is determined from a machine learning model on different components used to compute bids. An example component computes the probability of click depending on different positions. For example, the probability of click in the top position is different if showing just one promotion on the page versus 50% promotion load. Herein, an “allocation-independent” bid is used to generate top promotion candidates for entry in the auction. For example, click estimates are computed for the top position on the page in the most common allocation pattern on the platform (e.g., 25% promotion/organic load allocation). [00206] In some embodiments, while item bids can depend on the allocation, they don’t depend on the bids on other promoted items. Not having a dependency on bids on other promoted items simplifies the bidding language and computation efficiency. The computation efficiency is a key value because the entire allocation maximization, auction, and pricing must happen in milliseconds. [00207] The bidding platform employs a bidding heuristic for winning multiple items. Winning multiple items per auction, per promoter, enables a limited form of package bidding for each promoter. A general solution for package bidding is a combinatorial auction, which is NP- Hard (infeasibly expensive) to solve. Moreover, for promoted auctions, there is an expectation of diminishing returns for more complex optimizations because of 1) limitations to the correctness of the underlying bidding models, e.g., how good the pCTR model is, and 2) CPU usage and latency are expensive and limited. [00208] The heuristic is to allow package bidding as part of the platform promotion system without changing the promoter bidding interface, i.e., still cost per click (CPC) or cost per impression (CPM). For example, consider the below practical application: [00209] A given user searches on the shopping platform for “Lawnmower” again. This time, by fitting a click prediction model, John Deere has learned that they get an average of 0.5 clicks per search if they win 5 slots, but only 0.05 clicks per search if they win only 1 slot. The person running the John Deere promotion campaign bids $10 per click, but they’ve observed that they prefer to win as many slots as possible per auction to maximize their profit. 
That is, for every additional promotion inserted into an auction, they get more value per winning item on average. [00210] Likewise, Craftsman observes the opposite effect. They get an average of 0.5 clicks per search if they win 5 slots, but they get 0.2 clicks per search if they win only 1 slot. [00211] To respond to promoter needs, the heuristic: [00212] 1) fits a machine learning model to detect this type of repetition-dependent bidding as a function of observable biddable actions, like clicks; and [00213] 2) has an efficient allocation heuristic algorithm to allow a form of packaged bidding to handle the case in (1). [00214] FIG.9 is a flowchart describing an allocation heuristic for package bidding. In step 902, the platform computes promoter bids as a function of bidded actions (e.g., clicks) as normal. In step 904, the platform computes an event prediction function that accounts for multiple presentations from the same promoter, e.g., click prediction if 1 wins, 2 wins, 3 wins, etc. [00215] In step 906, the platform runs the auction as normal using a “single allocation” click prediction bid. In step 908, the platform identifies how many items each promoter won. In the circumstance where the model in step 904 estimates a higher average slot value by winning more slot allocations, then in step 910, the platform increments click probabilities in bids as if +1 more slot allocation was won. Conversely, in the circumstance where the model in step 904 estimates a lower average slot value by winning more slot allocations, then in step 912, the platform decrements click probabilities in bids as if -1 fewer item was won, down to 1. [00216] In step 914, the platform reruns the auction using the new per item bids. In step 916, the platform identifies whether the allocation changed. If so, then the method repeats from step 908. If not, then in step 918, the platform computes the final allocation with the final bids computed from the previous step using the promoter bids set for the number of won slots per promoter. In step 920, the whole cycle repeats until allocation does not change, or there is a maximum number of iterations, or there is a cycle. In step 922, the platform computes final prices using Vickrey pricing by repeating this algorithm for every winning promoter. [00217] Back to the practical example: The user is still searching for “Lawnmower” on the shopping platform. The auction runs as normal in step 906, but John Deere’s per item bids are all raised (to try and win more), and Craftsman’s per item bids are lowered (to try and win fewer items at higher profitability). After some iterations, the allocation settles on 6 John Deere slots, 3 Craftsman slots, and a Toro slot. Multiple Winning Slots per Promoter [00218] Disclosed herein, in promoted content auctions, promoters often choose a single, typically “best” slot, for entry in the auction. In those embodiments, each promoter can win at most one slot with one promoted content per auction. However, some embodiments efficiently deliver multiple winning slots to the same promoter. [00219] The currently described embodiments are particularly useful for shopping-like products, where a large number of related items that are owned by the same promoting interest are shown together. For example, a search for “jeans” is likely to show multiple items from the same brands (e.g., Levi’s). 
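A simplified sketch of the repetition-dependent bidding loop of FIG. 9 (steps 904 through 920), which underlies delivery of multiple winning slots to one promoter, appears below. The click curves, bids, and three-slot greedy auction are illustrative assumptions, and the greedy ranking stands in for the full allocation method; the real system fits the repetition-dependent click model rather than hard-coding it.

```python
def p_click(promoter: str, wins: int) -> float:
    """Step 904: click prediction as a function of how many slots the promoter would
    win in the same presentation (illustrative curves, not fitted models)."""
    curves = {"JohnDeere": [0.05, 0.09, 0.14], "Craftsman": [0.20, 0.18, 0.16]}
    return curves[promoter][max(1, min(wins, 3)) - 1]

def run_auction(assumed_wins, items, click_bid, n_slots=3):
    """Steps 906/914: rank per-item eCPM bids computed at each promoter's currently
    assumed win count and fill the slots greedily (a stand-in for full allocation)."""
    ranked = sorted(items, reverse=True,
                    key=lambda it: click_bid[it[0]] * p_click(it[0], assumed_wins[it[0]]))
    return ranked[:n_slots]

items = [("JohnDeere", f"JD-{k}") for k in range(3)] + [("Craftsman", f"CR-{k}") for k in range(3)]
click_bid = {"JohnDeere": 10.0, "Craftsman": 10.0}

assumed, prev = {"JohnDeere": 1, "Craftsman": 1}, None
for _ in range(10):                                     # step 920: bound the iterations
    allocation = run_auction(assumed, items, click_bid)
    if allocation == prev:                              # step 916: allocation unchanged -> stop
        break
    won = {p: sum(1 for q, _ in allocation if q == p) for p in click_bid}
    # Steps 910/912: promoters whose value grows with more wins bid as if one more slot
    # were won; promoters with diminishing returns bid as if one fewer were won (min 1).
    for p in click_bid:
        if p_click(p, won[p] + 1) > p_click(p, max(won[p], 1)):
            assumed[p] = won[p] + 1
        else:
            assumed[p] = max(won[p] - 1, 1)
    prev = allocation

print(prev)  # with these illustrative curves, Craftsman's items fill the three slots
```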
Multiple Winning Slots per Promoter
[00218] As disclosed herein, in promoted content auctions, promoters often choose a single, typically “best,” slot for entry in the auction. In those embodiments, each promoter can win at most one slot with one promoted content per auction. However, some embodiments efficiently deliver multiple winning slots to the same promoter.
[00219] The currently described embodiments are particularly useful for shopping-like products, where a large number of related items owned by the same promoting interest are shown together. For example, a search for “jeans” is likely to show multiple items from the same brands (e.g., Levi’s).
[00220] A goal of this disclosed embodiment is to bridge the gap between inter-auction repetition controls and intra-auction repetition controls and to build a framework for a “continuous auction” design that produces similar auction results regardless of whether a single large auction with probabilistic opportunities that shrink to zero or a series of smaller auctions is used.
[00221] FIG. 10 is a flowchart illustrating a method of executing a continuous auction that enables winning multiple slots. In step 1002, the platform retrieves the set of all promotions that can be shown in the current request on a per promoter basis. In step 1004, the platform collects promotions into sets on a per promoter basis. Each set is partitioned into all sets of promotions that can be shown together. In a common case, there is one set of all items per promoter; however, in some embodiments, there are limitations within a potential set. For example, if content item A and content item B can never be shown with each other, and there is another content item C, then the set of sets includes {A, C} and {B, C}.
[00222] In step 1006, the platform ranks on a per promoter basis rather than on a per content basis. The embodiment scores the utility of all items (in each set) per promoter for this request and sorts the insertion preferences in decreasing utility order. In some embodiments, the platform optionally caps the set size at a system-parameter maximum number of promotions a given promoter can have (e.g., 3). The scoring of each set may include heuristics to reduce computational cost. Two example heuristics include:
[00223] Method A: For each promoter, compute the page allocation for promoted items just for that promoter with no repetition bias. Rank the promotions in order of non-increasing utility from that allocation.
[00224] Method B: Choose a “representative allocation position” (e.g., middle of the page, most common allocation) and compute the utility of all the promoter’s items at that position. Rank the promotions in order of non-increasing utility at that position. This is more computationally efficient than Method A.
[00225] In step 1008, the platform records duplicate penalties. There is a utility penalty for representing the same promoter in every slot available for promotion. The user experience is reduced as a result of the lack of diversity. Using platform-specific parameters for repetition controls (implemented via machine learning models and/or heuristics), penalties are added to the set-wide utility scores based on the expected negative user experience attributable to duplicates, in the order of the rank from step 1006. The penalties are applied to the all-positions-utilities matrix used in the nonlinear allocation step. The final effect, in dollars, of the repetition depends on the position.
[00226] The penalty process may be performed as a loop. For every slot, for every content item per promoter in order of rank from step 1006, the platform assigns a penalty from 0 to “n,” where the first item always has a resulting duplication penalty of 0. The loop proceeds across every content item in the set. The size of the penalty (0 to n) increases with the size of the set. If the combined utility of the content items in the slots becomes less than 0 during the loop, the loop is terminated. Otherwise, the loop continues to the next slot/item until there are no more items. The steps within the loop include:
[00227] 1) For the current item, compute the all-slots negative expected user experience (value “x”).
[00228] 2) Compute the all-slots utility (using any embodiment described herein to do so).
[00229] 3) Multiply x by a scaling parameter and add an additive parameter (y = ax + b). The parameters are used to adjust the penalty during each iteration.
[00230] 4) Compute the all-positions utility with the adjusted (lower) utility for the current content item from the given promoter.
[00231] 5) Record the difference in utility between (2) and (4).
[00232] 6) Add the item, with the utilities computed in (4), to the list of items to allocate.
[00233] 7) Increment the scaling and/or additive parameters by a platform parameter amount (e.g., cause each iteration through the loop to increase the penalty based on the set size). Various embodiments of (7) can be linear or non-linear.
[00234] In step 1010, the platform allocates items based on the matrix of all-position utilities as described elsewhere in this application. In step 1012, the platform computes prices based on a counterfactual value computation per promoter. Unlike the standard Vickrey method of removing each winning item and recomputing the allocation, each promoter with a winning item is removed and the allocation is recomputed. Prices are therefore computed per promoter, not per item. In step 1014, the platform assigns the price per content item.
[00235] In step 1008, the platform recorded the negative utility attributed to duplication penalties. The platform does not directly add this negative utility to the price in the same manner as is done for quality control (described elsewhere herein). Instead, the platform divides the negative utility among all the promoter’s winning items evenly, accounting for the probability that some of the duplicates will not be shown.
[00236] In a simple case with a two-item set, where every allocated item is always shown and the repetition penalty is $0 for the first item and $1 for the second item, both items will have a price increase of $0.50.
[00237] Using the expected probability of impression given an insertion, the platform computes the cost increase so that, in expectation, the repetition penalty is evenly applied to all items. For example, if the repetition penalty is $1, there are 2 items, and the second item has a 50% probability of being shown, the extra price is then $0.33 for the first item and $0.66 for the second item.
[00238] Because of the nature of the non-linear allocation algorithm, it is possible for a price to exceed a bid. In that case, the platform sets the price equal to the bid and divides the remainder among the remaining items proportionally.
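As a non-limiting illustration of the penalty division described above (the function name is hypothetical), the following Python sketch charges each winning item in inverse proportion to its probability of being shown, so that the expected charge is equal across items and the charges sum to the total repetition penalty:

def split_repetition_penalty(total_penalty, show_probabilities):
    """Divide a promoter's total repetition penalty across its winning items.
    Charges are inversely proportional to each item's probability of being
    shown, so the expected charge per item is equal and the nominal charges
    sum to the total penalty."""
    inverse = [1.0 / p for p in show_probabilities]
    scale = total_penalty / sum(inverse)
    return [scale * w for w in inverse]

print(split_repetition_penalty(1.0, [1.0, 1.0]))   # [0.5, 0.5]: the $0.50 example above
print(split_repetition_penalty(1.0, [1.0, 0.5]))   # [0.33..., 0.66...]: the $0.33/$0.66 example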
Modelling Slot Allocation as a Function of Realization on Slots
[00239] A component in this disclosure is the use of a complete bipartite graph algorithm to prioritize entire-page utility. However, in some embodiments, the utility of a slot is scaled by the probability that the slot is realized, modeled as “p(impression|insertion).” For ordinary page maximization, all slots are modeled as equally valuable. Likewise, for Vickrey auction price computation, all slots are considered equally valuable. For example, there may be three slots (1, 2, 3) and four bidders bidding as follows: A:$4, B:$3, C:$2, D:$1. Ordinary page maximization does not have a concept of “ordering.” Ads A, B, and C will win the auction, but they could be allocated to slots 1, 2, and 3 in any order to achieve the same maximized objective. Likewise, in Vickrey auction pricing, all of A, B, and C will pay D’s bid, $1, because it does not matter where D is allocated.
[00240] However, the true value of a slot varies. Where slots 1, 2, and 3 are, for example, time slots in a streaming video, it is more likely that a viewer will close the video before one or more of those time slots are played. In response, the platform prioritizes allocating higher overall utility promotions into the slots that are more likely to be viewed (render an impression).
[00241] The extent to which the allocation changes is based on the probability that a user will notice (approximated by the probability to view) each promotion insertion. For example, position 1 is perhaps viewed with 100% probability, position 2 with 50% probability, and position 3 with 10% probability, given a new page view. These statistics are collected based on prior views of videos on the given channel. A viewer’s behavior with content is monitored (e.g., whether the user skips a portion of the video or page using the progress or scroll bar) and supplies training data to models that predict the likelihood that a given user will see a given promotion.
[00242] Using the probabilities above as multipliers applied to each slot, the allocation of the page becomes 1:A, 2:B, 3:C, with a total maximum utility of $4 + $1.5 + $0.2 = $5.7.
[00243] In some embodiments (described above), the platform assumes that the probability of a slot materializing is independent of the allocation of promoted items. The assumption is a simple model and is algorithmically efficient. However, a more complex model implements a conditional probability that a future impression materializes given the estimated user-perceived quality of the promotion and of the page presentation prior to the present slot. Using the conditional probability computes a better allocation and price and preserves the encouraged promoter behavior of “bidding true value,” but it is less algorithmically efficient.
[00244] If the promotion is not shown, the corresponding slot does not generate utility for the page. Note that the dollar values provided are overall utility scores. These are not necessarily the absolute value paid by a promoter. The values are modified by the quality of the promotion. Display of a low-quality promotion increases the chances of the user ending the session. When the platform is prioritizing utility, a consideration in a conditional probability model is whether the page will be closed prior to displaying later slots. Thus, the platform prioritizes allocating higher quality promotions (reflected by overall utility scores) into higher value slots so as to improve the chances of realizing impressions on the later, lower valued slots.
[00245] The Vickrey prices also change. Inserting an ad now also displaces other ads to positions where they may not be shown at all. This increases prices in top slots versus lower slots. The ad in the top slot (A) now pays: ($3 + $1 + $0.1 = $4.1) - ($5.7 - $4) = $2.4. This compares to only “$1” without accounting for the probability of impression. Vickrey prices for other slots further account for how the value of the ads they displace changes. For slot 2, the probability of A in slot 1 being shown is still 100%; however, because A is high quality and projected as non-disruptive, the projected realization for slot 2 is also 100%, and the realization of slot 3 further increases from 10% to 20%.
[00246] The resulting price for B in slot 2 is then: ($4 + $2 + $0.2 = $6.2) - ($4 + $0.4) = $1.8. And the price for C in slot 3, the last slot, uses 100% probability everywhere, which is the same as the original “set” solution: $8 - $7 = $1.
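The following Python sketch, provided for illustration only, reproduces the equal-slot baseline above (A, B, and C winning and each paying D’s bid of $1) and the realization-weighted variant with allocation-independent view probabilities (yielding the $5.7 page utility and the $2.4 price for the top ad). The conditional realization model that yields the $1.8 and $1 prices for B and C is not reproduced, and the exhaustive search over permutations is a stand-in for the bipartite matching described above.

from itertools import permutations

def allocate(bids, view_prob):
    # Assign ads to slots to maximize sum(bid * probability the slot renders an impression).
    best, best_value = None, float("-inf")
    for assignment in permutations(bids, len(view_prob)):
        value = sum(bids[ad] * p for ad, p in zip(assignment, view_prob))
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value

def vickrey_price(winner, bids, view_prob):
    # Counterfactual price: value to the others without the winner,
    # minus the value to the others in the winning allocation.
    assignment, value = allocate(bids, view_prob)
    others_with_winner = value - bids[winner] * view_prob[assignment.index(winner)]
    others_without_winner = allocate({a: b for a, b in bids.items() if a != winner}, view_prob)[1]
    return others_without_winner - others_with_winner

bids = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}

equal = [1.0, 1.0, 1.0]                    # baseline: three equally valuable slots
print(allocate(bids, equal))               # A, B, and C win (order is irrelevant), total $9.0
print(vickrey_price("A", bids, equal))     # each winner pays D's bid: $1.0

realized = [1.0, 0.5, 0.1]                 # view probabilities for slots 1, 2, 3
print(allocate(bids, realized))            # A, B, C in slots 1, 2, 3; total approximately $5.7
print(vickrey_price("A", bids, realized))  # 4.1 - 1.7, approximately $2.4, up from $1.0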
[00247] FIG. 11 illustrates a flowchart depicting allocation based on projected realization. In step 1102, the platform computes, for every position on the page, a conditional probability of an impression given a promotion insertion and the impressions viewed so far. Markov models are implemented in which there is some probability of viewing each next position as an impression given that the user is currently viewing the present position. The computation is performed for every slot based on the insertions in the slots that came before. There is an initial base probability of session end for each slot. That base probability is modified by the quality of each potential insertion in the respective slot. The insertion further influences the base probabilities of session end in subsequent slots.
[00248] In step 1104, the platform computes the allocation to prioritize total page value given utilities scaled conditionally in each allocated slot. In step 1106, for each winner in each position after allocation, the platform computes the projected scaled utilities of the best identified allocation. Then, in step 1108, for computing Vickrey prices, the platform scales the utilities of the items allocated to those positions in the final allocation using the impression probabilities conditioned on an impression at the current slot.
Placing Promoted Content in a Webpage
[00249] FIG. 12 shows a method to place promoted content in a webpage. In step 1202, a hardware or software processor executing instructions described in this application can send a request for a content to an organic system and a promoted system. The promoted system provides a bid for placement of a promoted content associated with the promoted system, and the organic system provides an organic content without placing a bid for a placement of the organic content. For example, the request for the content can be a search query, the promoted content can be advertisements related to the search query, and the organic content can be the answers to the search query that are not advertisements.
[00250] In step 1204, the processor can receive multiple organic contents and multiple promoted contents from the organic system and the promoted system, respectively. In step 1206, the processor can compute a quality score for each organic content and for each promoted content, where the quality score represents a probability that a user engages with each organic content and each promoted content.
[00251] In step 1208, the processor can create a first arrangement of the webpage by allocating multiple content slots associated with the webpage to the multiple organic contents based on the quality scores associated with each organic content.
[00252] In step 1210, the processor can reorder the content slots by moving a promoted content among the multiple promoted contents based on the quality score associated with the promoted content. To move the promoted content, the processor can determine an auction bid indicating an increase in a probability that the user engages with the promoted content when the promoted content is reordered to a different content slot. Based on the auction bid, the processor can move the promoted content.
[00253] The quality score can be computed in various ways as described in this application. For example, in one embodiment, to compute the quality score, the processor can gather data associated with the user, such as an interaction with a previously promoted content.
The interaction can include clicking the previously promoted content, sharing the previously promoted content, indicating a preference for the previously promoted content, or posting a comment regarding the previously promoted content. The processor can train a machine learning model based on the gathered data and can compute the quality score using the trained machine learning model.
[00254] In another embodiment, to compute the quality score, the processor can utilize the benefits predictor, as described in this application. As described above, the processor can gather data associated with the user, such as an interaction with a previously promoted content. The processor can generate training data based on the data associated with the user by standardizing the data associated with the user, as described in this application. Based on the training data, the processor can predict a likelihood that a type of benefit to the promoted system occurs for a type of action performed by a user in response to being presented with the promoted content.
[00255] To determine the likelihood that the type of benefit occurs for the type of action performed within a predetermined timeframe, the processor can determine a first probability and a second probability. To determine the first probability, the processor can determine a total number of impressions of the promoted content presented to multiple users, and obtain the first probability by determining a percentage of the total number of impressions resulting in the type of action being performed by the multiple users. To determine the second probability, the processor can determine a number of the type of actions performed by the multiple users and a number of the type of benefits occurring within the timeframe. For example, the second probability can be a ratio of the number of the type of benefit occurring within the timeframe divided by the number of the type of action. The processor can determine the likelihood based on the first probability and the second probability, for example, by multiplying the two.
[00256] In a third embodiment, to compute the quality score, the processor can obtain a dictionary including multiple topics associated with the multiple promoted contents. The topics can include sports, entertainment, technology, science, etc. The processor can classify the promoted content into a first subset of topics among the multiple topics. The processor can generate a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics.
[00257] For example, the promoted content can be related to technology and science but not to sports and entertainment. Consequently, the first subset of topics can include technology and science. The first subset of probabilities can include (0, 0, .7, .9), indicating that the promoted content is not related to sports and entertainment, thus the probability of 0, and is related to technology with .7 probability and to science with .9 probability.
[00258] The processor can obtain a second subset of topics among the multiple topics and a second subset of probabilities. Each probability in the second subset of probabilities can indicate the user’s affinity toward a corresponding topic among the second subset of topics. For example, the second subset of topics can include sports, technology, and science, and the second subset of probabilities can include (1, 0, .8, .3), indicating an affinity of 1 toward sports, 0 toward entertainment, .8 toward technology, and .3 toward science.
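A minimal, non-limiting Python sketch of the two similarity measures elaborated in the following paragraphs (the variable names are hypothetical, and the vectors are taken from the example above) is:

# Topic probabilities for the promoted content and topic affinities for the
# user, over the dictionary (sports, entertainment, technology, science).
content_topic_probs = {"sports": 0.0, "entertainment": 0.0, "technology": 0.7, "science": 0.9}
user_topic_affinity = {"sports": 1.0, "entertainment": 0.0, "technology": 0.8, "science": 0.3}

# First measure of similarity: dot product of the two vectors.
dot_product = sum(content_topic_probs[t] * user_topic_affinity[t]
                  for t in content_topic_probs)            # 0.7*0.8 + 0.9*0.3, approximately 0.83

# Second measure of similarity: number of overlapping (nonzero) topics.
content_topics = {t for t, p in content_topic_probs.items() if p > 0}
user_topics = {t for t, p in user_topic_affinity.items() if p > 0}
overlap = len(content_topics & user_topics)                # technology and science: 2

print(dot_product, overlap)

How the two measures are combined into a single quality score is a platform choice and is not specified here.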
[00259] The processor can compute the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics.
[00260] For example, the first measure of similarity can include a dot product between the first subset of probabilities (0, 0, .7, .9) and the second subset of probabilities (1, 0, .8, .3). The first measure of similarity can have a value of .7*.8 + .9*.3 = 0.83. The second measure of similarity can include a number of overlapping topics between the first subset of topics and the second subset of topics. In the present case, the overlapping topics include technology and science, and the second measure of similarity can have a value of 2.
[00261] In a fourth embodiment, to compute the quality score, the processor can determine that the promoted content contains an image, a sequence of images, or audio, and determine a degradation of the promoted content, such as through compression or copying. The processor can calculate the quality score based on the degradation of the promoted content, where the lower the degradation, the higher the quality score.
[00262] In a fifth embodiment, to compute the quality score, the processor can determine the promoted system providing the promoted content. In other words, the processor can determine the source of the promoted content, such as a reputable company, e.g., Disney, Netflix, Atlantic Records, etc. The processor can calculate the quality score based on the source of the promoted content.
[00263] In a sixth embodiment, to compute the quality score, the processor can determine a quality of a destination to which the promoted content connects and can compute the quality score based on the quality of the destination. For example, the promoted content can include a link to a webpage. To determine the quality of the promoted content, the processor can evaluate the webpage instead of evaluating the link.
[00264] In a seventh embodiment, to compute the quality score, the processor can obtain a dictionary including multiple topics associated with the multiple promoted contents, as described in this application. The processor can classify the promoted content into a first subset of topics among the multiple topics. The processor can generate a first subset of probabilities indicating a correspondence between the promoted content and a topic in the first subset of topics. The processor can classify the request into a second subset of topics among the multiple topics. For example, the request can be a search query, and the processor can determine the topic of the search query. The processor can generate a second subset of probabilities indicating a correspondence between the request and a topic in the second subset of topics. The processor can compute the quality score based on a first measure of similarity between the first subset of probabilities and the second subset of probabilities or a second measure of similarity between the first subset of topics and the second subset of topics. The first measure and the second measure of similarity can be calculated as described in this application.
[00265] The processor can iterate through various configurations before selecting a webpage to present to the user. The webpage can include multiple slots that can accommodate either organic or promoted content. The various slots can have different impacts on the user experience.
For example, the first slot in a linearly ordered list has a higher impact than the last slot in the linearly ordered list.
[00266] In one embodiment, to determine the webpage configuration to present to the user, the processor can obtain a user experience cost and an impact control. The user experience cost represents a difference in a user’s experience between viewing an organic content item in a content slot on the webpage and viewing a promoted content item in the content slot on the webpage. The impact control represents an effect the difference in the user’s experience has on a webpage utility score. As explained in FIG. 6, the webpage utility score is a combination, e.g., an addition, of the bids placed for the slot and the user experience cost. For example, the user experience cost can be modulated by the impact control, e.g., by multiplying the user experience cost by the impact control. The processor can compute a first webpage utility score of the first arrangement based on the auction bid, the user experience cost, and the impact control. The processor can create a second arrangement by reallocating the multiple content slots to increase a promoted content load. The processor can compute a second webpage utility score of the second arrangement, compare the first webpage utility score and the second webpage utility score, and select a webpage having a higher webpage utility score to display to the user.
[00267] In another embodiment, to determine the webpage configuration to present to the user, the processor can determine a utility value associated with a content slot among the multiple content slots by determining the user experience cost between showing the organic content item in the content slot and showing the promoted content item in the content slot. The processor can obtain a termination condition indicating whether minimum spacing between the multiple content slots has been achieved. Until the termination condition is satisfied, the processor can iteratively perform the following two steps. First, the processor can create an arrangement associated with the webpage, where the arrangement has different promoted content compared to the previous arrangement. The different promoted content can include different spacing between promoted content contained in the arrangement and promoted content in the previous arrangement. In addition, the arrangement can have an increased load of the promoted content. Second, the processor can calculate a webpage utility score associated with the arrangement based on the user experience cost. The processor can compare the multiple webpage utility scores to obtain the highest score. The processor can select the webpage having the highest score to display to the user.
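For illustration only, the comparison of arrangements by webpage utility score can be sketched in Python as follows; the names and the simple additive combination rule are hypothetical stand-ins for the platform-specific scoring described above, and organic value terms are omitted for brevity.

def webpage_utility(arrangement, bids, experience_cost, impact_control):
    """arrangement: list of ("organic", item) or ("promoted", item), one per slot.
    experience_cost(slot): the (negative) user experience cost of showing a
    promoted item rather than an organic item in that slot."""
    score = 0.0
    for slot, (kind, item) in enumerate(arrangement):
        if kind == "promoted":
            score += bids[item]                              # value of the winning bid
            score += impact_control * experience_cost(slot)  # experience cost modulated by the impact control
    return score

def select_arrangement(arrangements, bids, experience_cost, impact_control):
    # Compare candidate arrangements (e.g., with increasing promoted content load)
    # and select the one with the highest webpage utility score.
    return max(arrangements,
               key=lambda a: webpage_utility(a, bids, experience_cost, impact_control))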
[00268] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Computer System
[00269] FIG. 13 is a block diagram that illustrates an example of a computer system 1300 in which at least some operations described herein can be implemented. As shown, the computer system 1300 can include: one or more processors 1302, main memory 1306, non-volatile memory 1310, a network interface device 1312, a video display device 1318, an input/output device 1320, a control device 1322 (e.g., keyboard and pointing device), a drive unit 1324 that includes a storage medium 1326, and a signal generation device 1330, all of which are communicatively connected to a bus 1316. The bus 1316 represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. Various common components (e.g., cache memory) are omitted from FIG. 13 for brevity. Instead, the computer system 1300 is intended to illustrate a hardware device on which components illustrated or described relative to the examples of the figures and any other components described in this specification can be implemented.
[00270] The processor 1302 can perform the various methods described in this application, for example, the methods described in FIGS. 4, 5, and 7-12. The processor 1302 can be part of the system 140 in FIG. 1. The main memory 1306, the non-volatile memory 1310, and/or the drive unit 1324 can store the instructions executed by the processor 1302. The network 1314 can be the network 120 in FIG. 1 and can enable communication between the client devices 110 in FIG. 1, the third-party system 130 in FIG. 1, and the online system 140.
[00271] The computer system 1300 can take any suitable physical form. For example, the computing system 1300 can share a similar architecture to that of a server computer, personal computer (PC), tablet computer, mobile telephone, game console, music player, wearable electronic device, network-connected (“smart”) device (e.g., a television or home assistant device), AR/VR system (e.g., head-mounted display), or any electronic device capable of executing a set of instructions that specify action(s) to be taken by the computing system 1300. In some implementations, the computer system 1300 can be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC), or a distributed system such as a mesh of computer systems, or can include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1300 can perform operations in real time, near real time, or in batch mode.
[00272] The network interface device 1312 enables the computing system 1300 to mediate data in a network 1314 with an entity that is external to the computing system 1300 through any communication protocol supported by the computing system 1300 and the external entity. Examples of the network interface device 1312 include a network adaptor card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, and/or a repeater, as well as all wireless elements noted herein.
[00273] The memory (e.g., the main memory 1306, the non-volatile memory 1310, and the machine-readable medium 1326) can be local, remote, or distributed. Although shown as a single medium, the machine-readable medium 1326 can include multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1328. The machine-readable (storage) medium 1326 can include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing system 1300.
The machine-readable medium 1326 can be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium can include a device that is tangible, meaning that the device has a concrete physical form, although the device can change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state. [00274] Although implementations have been described in the context of fully functioning computing devices, the various examples are capable of being distributed as a program product in a variety of forms. Examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and non-volatile memory devices 1310, removable flash memory, hard disk drives, optical disks, and transmission-type media such as digital and analog communication links. [00275] In general, the routines executed to implement examples herein can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1304, 1308, 1328) set at various times in various memory and storage devices in computing device(s). When read and executed by the processor 1302, the instruction(s) cause the computing system 1300 to perform operations to execute elements involving the various aspects of the disclosure. [00276]