

Title:
EFFICIENT OFF-POLICY CREDIT ASSIGNMENT
Document Type and Number:
WIPO Patent Application WO/2020/180522
Kind Code:
A1
Abstract:
Systems and methods are provided for efficient off-policy credit assignment (ECA) in reinforcement learning. ECA allows principled credit assignment for off-policy samples, and therefore improves sample efficiency and asymptotic performance. One aspect of ECA is to formulate the optimization of expected return as approximate inference, where policy is approximating a learned prior distribution, which leads to a principled way of utilizing off-policy samples. Other features are also provided.

Inventors:
LIU HAO (US)
SOCHER RICHARD (US)
XIONG CAIMING (US)
Application Number:
PCT/US2020/019493
Publication Date:
September 10, 2020
Filing Date:
February 24, 2020
Assignee:
SALESFORCE.COM, INC. (US)
International Classes:
G06N3/00; G06N3/08; G06N7/00
Foreign References:
US201962813937P2019-03-05
US201962849007P2019-05-16
US201962852258P2019-05-23
US201916653890A2019-10-15
Other References:
CHEN LIANG ET AL: "Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 July 2018 (2018-07-06), XP081552382
TANMAY GANGWANI ET AL: "Learning Self-Imitating Diverse Policies", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 25 May 2018 (2018-05-25), XP080882578
DILIN WANG ET AL: "Variational Inference with Tail-adaptive f-Divergence", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 October 2018 (2018-10-29), XP081491951
RISHABH AGARWAL ET AL: "Learning to Generalize from Sparse and Underspecified Rewards", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 February 2019 (2019-02-19), XP081365082
CHEN LIANG ET AL: "Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision", PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (VOLUME 1: LONG PAPERS), 23 April 2017 (2017-04-23), Stroudsburg, PA, USA, pages 23 - 33, XP055700157, DOI: 10.18653/v1/P17-1003
ANONYMOUS: "Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization", 25 September 2019 (2019-09-25), XP055699189, Retrieved from the Internet [retrieved on 20200528]
WILLIAMS: "Simple statistical gradient-following algorithms for connectionist reinforcement learning", MACHINE LEARNING, vol. 8, no. 3-4, pages 229 - 256, XP055646574, DOI: 10.1007/978-1-4615-3618-5_2
LIANG ET AL.: "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", vol. 1, 2017, ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, article "Neural symbolic machines: Learning semantic parsers on freebase with weak supervision", pages: 23 - 33
ESPEHOLT ET AL., IMPALA: SCALABLE DISTRIBUTED DEEP-RL WITH IMPORTANCE WEIGHTED ACTOR-LEARNER ARCHITECTURES, 2018
SHANNON: "IRE Nat. Conv. Rec", vol. 4, 1959, article "Coding theorems for a discrete source with a fidelity criterion", pages: 1
COVER ET AL.: "Elements of information theory", 2012, JOHN WILEY-SONS
WANG ET AL.: "Variational inference with tail-adaptive f-divergence", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2018, pages 5742 - 5752
LIANG ET AL., LEARNING DEPENDENCY-BASED COMPOSITIONAL SEMANTICS, COMPUTATIONAL LINGUISTICS, vol. 39, no. 2, 2013, pages 389 - 446
PASUPAT ET AL.: "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", vol. 1, 2015, ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, article "Compositional semantic parsing on semi-structured tables", pages: 1470 - 1480
KINGMA ET AL., A METHOD FOR STOCHASTIC OPTIMIZATION, 2014
Attorney, Agent or Firm:
WOO, Philip W. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for efficient off-policy credit assignment in reinforcement learning, the method comprising:

receiving sample results from one or more agents implementing a learning model, wherein each sample result can be either successful or unsuccessful;

computing a learned prior distribution from the sample results;

computing one or more adaptive weights using the learned prior distribution;

generating an estimate of a gradient using the one or more adaptive weights; and

updating a policy for the learning model using the estimated gradient.

2. The method of claim 1, wherein computing one or more adaptive weights comprises generating at least one of a high-reward result credit and a zero-reward result credit.

3. The method of claim 2, wherein generating an estimate of a gradient comprises generating a high-reward score function gradient using the high-reward result credit to weight successful sample results.

4. The method of claim 2 or 3, wherein generating an estimate of a gradient comprises generating a zero-reward score function gradient using the zero-reward result credit to weight unsuccessful sample results.

5. The method of any of claims 1-4, wherein the learning model is applied to a task of semantic parsing.

6. The method of any of claims 1-5, wherein the sample results correspond to executable software code generated by the learning model based on natural language instructions text.

7. A system for efficient off-policy credit assignment in reinforcement learning, the system comprising:

a memory storing machine executable code; and

one or more processors coupled to the memory and configurable to execute the machine executable code to cause the one or more processors to: receive sample results from one or more agents implementing a learning model, wherein each sample result can be either successful or unsuccessful;

compute a learned prior distribution from the sample results;

compute one or more adaptive weights using the learned prior distribution;

generate an estimate of a gradient using the one or more adaptive weights; and update a policy for the learning model using the estimated gradient.

8. The system of claim 7, wherein the one or more processors are configurable to generate at least one of a high-reward result credit and a zero-reward result credit.

9. The system of claim 8, wherein the one or more processors are configurable to generate a high-reward score function gradient using the high-reward result credit to weight successful sample results.

10. The system of claim 8 or 9, wherein the one or more processors are configurable to generate a zero-reward score function gradient using the zero-reward result credit to weight unsuccessful sample results.

11. The system of any of claims 7-10, wherein the learning model is applied to a task of semantic parsing.

12. The system of any of claims 7-11, wherein the sample results correspond to executable software code generated by the learning model based on natural language instructions text.

13. A non-transitory machine-readable medium comprising executable code which, when executed by one or more processors associated with a computer, is adapted to cause the one or more processors to perform a method for efficient off-policy credit assignment in reinforcement learning, the method comprising:

receiving sample results from one or more agents implementing a learning model, wherein each sample result can be either successful or unsuccessful;

computing a learned prior distribution from the sample results;

computing one or more adaptive weights using the learned prior distribution;

generating an estimate of a gradient using the one or more adaptive weights; and

updating a policy for the learning model using the estimated gradient.

14. The non-transitory machine-readable medium of claim 13, wherein computing one or more adaptive weights comprises generating at least one of a high-reward result credit and a zero-reward result credit.

15. The non-transitory machine-readable medium of claim 14, wherein generating an estimate of a gradient comprises generating a high-reward score function gradient using the high-reward result credit to weight successful sample results.

16. The non-transitory machine-readable medium of claim 14 or 15, wherein generating an estimate of a gradient comprises generating a zero-reward score function gradient using the zero-reward result credit to weight unsuccessful sample results.

17. The non-transitory machine-readable medium of any of claims 13-16, wherein the learning model is applied to a task of semantic parsing.

18. The non-transitory machine-readable medium of any of claims 13-17, wherein the sample results correspond to executable software code generated by the learning model based on natural language instructions text.

Description:
EFFICIENT OFF-POLICY CREDIT ASSIGNMENT

RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 62/813,937, filed March 5, 2019, U.S. Provisional Patent Application No. 62/849,007, filed May 16, 2019, U.S. Provisional Patent Application No. 62/852,258, filed May 23, 2019, and U.S. Nonprovisional Patent Application No. 16/653,890, filed October 15, 2019, all of which are incorporated by reference herein in their entireties.

COPYRIGHT NOTICE

[0002] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

TECHNICAL FIELD

[0003] The present disclosure relates generally to neural networks and deep learning models, and more specifically, to efficient off-policy credit assignment.

BACKGROUND

[0004] The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.

[0005] Artificial intelligence, implemented with neural networks and deep learning models, has demonstrated great promise as a technique for automatically analyzing real-world information with human-like accuracy. In general, such neural network and deep learning models receive input information and make predictions based on the same, for example, to direct the action of an actor or agent. Neural networks and deep learning models can learn through a process of reinforcement learning, where feedback in the form of rewards is provided to the neural network or learning model. The rewards allow or enable the neural network or learning model to gauge or measure the success or failure of its actions. Learning with sparse rewards, however, can be very challenging for a neural network or deep learning model.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Figure 1 is a simplified diagram of a computing device, according to some embodiments.

[0007] Figure 2 is a simplified diagram of a framework for reinforcement learning with efficient off-policy credit assignment, according to some embodiments.

[0008] Figure 3 is a simplified diagram of a method for reinforcement learning with efficient off- policy credit assignment, according to some embodiments.

[0009] Figure 4 is a simplified diagram of a method for gradient estimation, according to some embodiments.

[0010] Figure 5 is a simplified diagram of a method for updating policy, according to some embodiments.

[0011] Figure 6 illustrates an algorithm for efficient credit assignment with learned prior and adaptive weights, according to some embodiments.

[0012] Figure 7 illustrates an algorithm for systematic exploration, according to some embodiments.

[0013] Figure 8 illustrates an algorithm for adaptive weights, according to some embodiments.

[0014] Figure 9 illustrates an algorithm for learned prior, according to some embodiments.

[0015] Figure 10 illustrates an example of a task, for which efficient off-policy credit assignment for reinforcement learning may be applied.

[0016] Figure 11 illustrates example results of ECA models compared to other approaches, according to some embodiments.

[0017] Figure 12 illustrates examples of programs generated from natural language queries using models trained with ECA or other approaches, according to some embodiments.

[0018] In the figures, elements having the same designations have the same or similar functions.

DETAILED DESCRIPTION

[0019] This description and the accompanying drawings that illustrate aspects, embodiments, implementations, or applications should not be taken as limiting— the claims define the protected invention. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail as these are known to one skilled in the art. Like numbers in two or more figures represent the same or similar elements.

[0020] In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.

Overview

[0021] Neural networks and deep learning models can learn, for example, through a process of reinforcement learning. Reinforcement learning attempts to model a complex probability distribution of rewards in relation to a large number of state-action pairs. In reinforcement learning, an agent or actor is run through sequences of state-action pairs. The neural network or model observes the rewards that result, and adapts or modifies its predictions to those rewards until it accurately predicts the best action for the agent to take based on the current state. A reward is the feedback a neural network or learning model receives in order to gauge or measure the success or failure of the agent’s actions.

[0022] A policy is the strategy that the neural network or learning model employs to determine the agent’s next action. In other words, a policy defines how the agent acts from a specific state. Every algorithm for reinforcement learning must follow some policy in order to decide which action(s) to perform at each state. A learning algorithm, or portion thereof, that takes into account the current policy is considered an on-policy learner. In contrast, an off-policy learner learns based on something other than the current policy.

[0023] Policy optimization can be used to improve the performance of a neural network or learning model, for example, in such settings or applications as robotic learning, program synthesis, architecture search, and conversational dialogue. Despite this, policy optimization still often suffers from the need to carefully shape the reward function to guide the policy optimization, which means domain-specific knowledge is required. To mitigate this issue, there has been a recent surge of interest in developing policy optimization algorithms which can learn from a binary signal indicating successful task completion or other unshaped, sparse reward signals in various application domains, including sparse-reward robotic control and weakly-supervised semantic parsing. While utilizing off-policy samples could be helpful for learning, it remains a challenge to efficiently utilize off-policy samples, which leads to poor sample efficiency and hinders further application. It also remains unclear how existing credit assignment methods connect with each other.

[0024] According to some embodiments, the present disclosure provides systems and methods for efficient off-policy credit assignment (ECA) in reinforcement learning. ECA allows principled credit assignment for unsuccessful samples in deterministic environments with discrete actions, and therefore improves sample efficiency and asymptotic performance. One aspect, in some embodiments, is to formulate the optimization of expected return as approximate inference, where the policy is trained to approximate a learned prior distribution. In this manner, off-policy samples can be utilized in deterministic environments to approximate the divergence, instead of only using the successful samples when computing the policy gradient. This provides or leads to higher sample efficiency. ECA can generalize previous credit assignment methods.

[0025] In some examples, ECA is used in the application of weakly-supervised semantic parsing and program synthesis from natural language, where no logical forms are available and only the final binary success or failure feedback is available as supervision. It is demonstrated that ECA in this context significantly outperforms other methods.

[0026] As used herein, the term "network" may comprise any hardware or software-based framework that includes any artificial intelligence network or system, neural network or system, and/or any training or learning models implemented thereon or therewith.

[0027] As used herein, the term "module" may comprise a hardware or software-based framework that performs one or more functions. In some embodiments, the module may be implemented on one or more neural networks.

Computing Device

[0028] According to some embodiments, the systems of the present disclosure— including the various networks, models, and modules— can be implemented in one or more computing devices.

[0029] Figure 1 is a simplified diagram of a computing device 100 according to some embodiments. As shown in Figure 1, computing device 100 includes a processor 110 coupled to memory 120. Operation of computing device 100 is controlled by processor 110. And although computing device 100 is shown with only one processor 110, it is understood that processor 110 may be representative of one or more central processing units, multi-core processors, microprocessors, microcontrollers, digital signal processors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), graphics processing units (GPUs) and/or the like in computing device 100. Computing device 100 may be implemented as a stand-alone subsystem, as a board added to a computing device, and/or as a virtual machine.

[0030] Memory 120 may be used to store software executed by computing device 100 and/or one or more data structures used during operation of computing device 100. Memory 120 may include one or more types of machine readable media. Some common forms of machine readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.

[0031] Processor 110 and/or memory 120 may be arranged in any suitable physical arrangement. In some embodiments, processor 110 and/or memory 120 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like. In some embodiments, processor 110 and/or memory 120 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 110 and/or memory 120 may be located in one or more data centers and/or cloud computing facilities.

[0032] As shown, memory 120 includes an application module 130, a reinforcement learning module 140, and a samples database 150. The reinforcement learning module 140 includes an efficient off-policy credit assignment (ECA) module 145. In some examples, memory 120 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the methods described in further detail herein. In some examples, application module 130, reinforcement learning module 140 and sample database 150 may be implemented using hardware, software, and/or a combination of hardware and software.

[0033] The application module 130 may implement or support an agent or actor that performs an application or task, such as, for example, semantic parsing and program synthesis from natural language. Semantic parsing or program synthesis— which is a form of natural language processing (NLP)— is the mapping from natural language utterances into executable programs. For the task or application performed or provided by application module 130, computing device 100 receives input data 160. For the application of semantic parsing, for example, the input data 160 may include natural language instructions text. The natural language instructions text may comprise one or more utterances or text in natural language form relating to functions, routines, or operations that are performed on a computer. In this case, application module 130 may implement or provide a semantic parser that can operate on or process the natural language instructions text to map or generate software code, routines, or processes for programs that can be executed or run on a computer to perform the functions, routines, or operations. In some examples, the semantic parser of application module 130 considers and maps a natural language instruction or question x to a structured query z in a programming language such as SQL, Python, or other source code. The software code, routines, or processes generated by the semantic parser of application module 130 are provided as output 170 from computing device 100.

[0034] In some examples, the actor or agent of the application module 130 may be weakly supervised; that is, the module 130 does not receive substantial or significant training examples or feedback from human annotators to improve its output or results. Weakly supervised semantic parsing can be formulated as a reinforcement learning problem with sparse reward, where a model, called a policy π, generates a program based on the given context and query and receives sparse feedback on whether the execution of the generated program gives a correct answer (i.e., a "successful" program), and the goal is to learn a policy π that maximizes the expected reward and generalizes to new contexts. Learning with sparse rewards in deterministic environments with discrete actions is challenging, and yet important in combinatorial optimization and semantic parsing.

[0035] Reinforcement learning module 140 provides or supports learning for application module 130, including to adjust or optimize the policy π. To address the issue or problem of weak supervision, reinforcement learning module 140 provides or supports efficient off-policy credit assignment (ECA) in the process of reinforcement learning. Efficient off-policy credit assignment module 145 provides a principled way of utilizing off-policy samples, formulating the optimization of expected return as approximate inference, where the policy approximates a learned prior distribution. This improves sample efficiency and asymptotic performance. Efficient off-policy credit assignment leads to rigorously generalizing previous credit assignment methods. The efficient off-policy credit assignment approach encourages the model itself to cover the target distribution and account for uncertainty. The effectiveness of the efficient off-policy credit assignment approach is demonstrated, in some examples, on weakly-supervised semantic parsing and program synthesis from natural language problems.

[0036] In some embodiments, in the example of semantic parsing, efficient off-policy credit assignment module 145 automatically assigns credit to both successful results (e.g., successful programs) and unsuccessful results (e.g., unsuccessful programs) of past experience based on the divergence between the model distribution itself and an optimal probability distribution over results. These prior results (both successful and unsuccessful) constitute learned prior experience (past experience), and are considered off-policy because they do not necessarily relate or connect with the current policy of the neural network or model. Thus, embodiments of the present disclosure provide an efficient and principled way of using off-policy (past) experience. The results of past experience can be stored in and/or retrieved from the samples database 150. It is shown that the off-policy credit assignment module 145 both distinguishes incorrect results that coincidentally output the correct answer and explores the large search space effectively from sparse rewards.

[0037] In some embodiments, the efficient off-policy credit assignment approach or framework considers or implements reinforcement learning with entropy regularization. Entropy regularization is commonly used to improve policy optimization in reinforcement learning.

Example Application - Weakly-Supervised Semantic Parsing

[0038] Semantic parsing or program synthesis is the mapping from natural language utterances made by a human user into executable programs. Semantic parsing or program synthesis considers the problem of learning to map a natural language instruction or question x to a structured query z in a programming language such as, for example, SQL, Python, or other source code.

[0039] Other work on the statistical learning of semantic parsers utilized supervised learning, where pairs of language utterances and programs are provided to the parsers as training examples. However, supervised learning can be problematic in that it requires collecting training examples at scale from expert human annotators who are familiar with both programming languages and domain knowledge. This has led to a wide range of work on weakly-supervised semantic parsing.

[0040] According to some embodiments described herein, semantic parsing is considered in the weakly supervised setting, where there is no access to the ground-truth program during training and the model is required to learn from weak feedback or a sparse reward. That is, the only feedback is received at the end of the episode when the generated completed program is executed. And even then, the feedback is limited in scope, being simply an indicator or binary signal of whether the task has been completed successfully or unsuccessfully. It has been a challenge to effectively use both successful and unsuccessful past experience in order to improve semantic parsing, and the results obtained therefrom.

[0041] Figure 10 illustrates an example of a semantic parsing task. In this example, an agent is presented with a context x that comprises a natural language question (e.g., "Which player ranked the most?") and a table (with a number of potential answers to the question). The agent is asked to generate a program z = (z_1, z_2, ..., z_n). The agent receives a reward of 1 if execution of z on the relevant data table leads to the correct answer y (e.g., "Nicky English"). The model or agent needs to discover the programs that can generate the correct answer in a given context and to generalize to unseen contexts.

[0042] Reinforcement learning may be applied for the situation or case of weakly supervised semantic parsing. Weakly supervised semantic parsing can be formulated as a reinforcement learning problem with sparse reward, where a model, called a policy, generates a program based on the given context and query and receives sparse feedback on whether the execution of the generated program gives the correct answer. The goal of the agent is to learn a policy that maximizes the expected reward and generalizes to new contexts.

[0043] This problem can be formulated as a Markov decision process over environment states s ∈ S and agent actions (program generation) z ∈ Z, under an unknown dynamic environment defined by a transition probability T(s' | s, z). The agent's action at time step t, z_t, is selected by the conditional probability distribution π_θ(z_t | s_t), called the policy. In some embodiments, an autoregressive model can be used as the policy,

π_θ(z | x) = ∏_t π_θ(z_t | z_<t, x),    (1)

where the state s_t is based on the natural language input and the previous t generation steps, z_<t = (z_1, . . . , z_{t-1}) denotes a prefix of the program z, and x ∈ X denotes the context, which contains both a natural language input and an interpreter on which the program will be executed. The policy satisfies ∀z ∈ Z : π_θ(z | x) > 0 and Σ_{z ∈ Z} π_θ(z | x) = 1. The reward function is sparse. For example, it is natural to define a binary reward that is 1 when the output equals the answer and 0 otherwise. This means the agent will only receive a reward of 1 at the end of an episode if it successfully completes the task. Therefore, R(z) is evaluated by running the complete program z on an interpreter or a database F to see whether the program gives the correct answer:

R(z) = 1 if F(z) = y, and R(z) = 0 otherwise.    (2)
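The binary reward of Equation (2) can be illustrated with a short sketch; the execute_program callable below is a hypothetical stand-in for the interpreter or database F and is not part of the disclosure:

    # Sketch of the binary reward R(z) in Equation (2); execute_program is a
    # hypothetical interpreter call that returns the output of program z on the table.
    def binary_reward(program, table, answer, execute_program):
        try:
            output = execute_program(program, table)
        except Exception:
            return 0.0          # a program that fails to execute earns zero reward
        return 1.0 if output == answer else 0.0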

[0044] In policy gradient methods, a set of candidate policies π_θ(z | x) parameterized by θ is considered. The optimal policy is obtained by maximizing the expected cumulative reward, where the objective is expressed as

J(θ) = E_{x ~ p(x)} E_{z ~ π_θ(z | x)} [R(z)],    (3)

[0045] where p(x) denotes the distribution of x. A straightforward way to estimate Equation 3 is by sampling (x, y) from the training dataset D = {(x_i, y_i)}. The gradient of Equation 3 can be calculated with REINFORCE (described in more detail in Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, 8(3-4):229-256 (1992), which is incorporated by reference herein) and estimated using Monte Carlo samples:

∇_θ J(θ) = E_{(x,y) ~ D} E_{z ~ π_θ(z | x)} [ ∇_θ log π_θ(z | x) R(z) ].    (4)

[0046] Unfortunately, since the search space of programs is very large, most samples z have reward R(z) = 0, and thus do not contribute to the gradient estimation in Equation 4. Moreover, because the variance of score function estimators is very high, it is challenging to estimate the gradient in Equation 4 with a small number of successful programs. Previous methods have proposed to estimate the gradient as a combination of expectations inside and outside a buffer of successful programs; however, those methods are restricted to using successful programs only and suffer from high sample complexity.
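For illustration only, a minimal Monte Carlo estimate of the REINFORCE gradient in Equation (4) might be written as the following surrogate loss; the policy object and its log_prob method are assumptions of this sketch, not elements of the disclosed system:

    import torch

    # Surrogate loss whose gradient is the Monte Carlo estimate of Equation (4):
    # the average of R(z) * grad log pi_theta(z | x) over sampled programs.
    # With sparse rewards, samples with R(z) = 0 contribute nothing, which is the
    # sample-inefficiency problem discussed above.
    def reinforce_loss(policy, contexts, programs, rewards):
        terms = []
        for x, z, r in zip(contexts, programs, rewards):
            log_prob = policy.log_prob(z, x)   # log pi_theta(z | x), a differentiable scalar
            terms.append(-r * log_prob)        # negative sign because optimizers minimize
        return torch.stack(terms).mean()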

[0047] To mitigate or address this challenge, according to some embodiments, systems and methods of the present disclosure utilize both successful and unsuccessful programs in past experience.

Efficient Off-Policy Credit Assignment

[0048] According to some embodiments, systems and methods of the present disclosure provide or implement reinforcement learning, with an efficient off-policy credit assignment, for a neural network agent or model, such as used, for example, for semantic parsing in a weakly-supervised situation.

[0049] System: A corresponding system or framework 200 for the algorithm or approach for reinforcement learning with efficient off-policy credit assignment, according to the present disclosure, is shown in Figure 2. Framework 200 provides for efficient off-policy credit assignment, e.g., for weakly-supervised semantic parsing, according to some embodiments. This framework can be directly applied to combinatorial optimization, machine translation, and other deterministic environments. In some embodiments, the framework 200 provides reinforcement learning for a neural network model or agent. In some examples, portions of framework 200 can be implemented in the computing device 100 as shown in Figure 1.

[0050] In some embodiments, the framework 200 is implemented as an actor-learner model, where each of one or more actors 210a-c performs tasks or takes actions based on one or more policies π, and reinforcement learning is applied so that the neural network model may learn from the experiences (both successful and unsuccessful) of the actors 210.

[0051] Each actor 210 can be an instance or implementation of the application module 130 (Figure 1), performing the associated tasks or actions, such as semantic parsing. For this, the actor 210 may receive natural language instructions text as input data 160. In some examples, the natural language instructions text input 160 can be provided by a human operator, and may comprise one or more utterances or text in natural language form relating to functions, routines, or operations that are performed on a computer. Each actor 210 operates on the natural language instructions text input 160 to develop, derive, or generate predictions or results for executable program code. In some examples, each actor 210 (application module 130) uses the natural language instructions text to map or generate software code, routines, or processes for programs that can be executed or run on a computer to perform the functions, routines, or operations. Some of the program code can be successful, while other program code is unsuccessful. Thus, the samples received from the one or more actors 210 can include both successful programs (where the generated code is appropriate for the natural language instructions text) and unsuccessful programs (where the generated code is not appropriate for the natural language instructions text).

[0052] The framework 200 includes one or more storage areas or buffers which, in some embodiments as shown, can include a high reward programs buffer 220 (buffer B) and a zero reward programs buffer 230 (buffer C). High reward programs buffer 220 may store samples of successful program results, for which a high reward should be assigned or given. Zero reward programs buffer 230 may store samples of unsuccessful program results, for which zero reward should be given. With reference to Figure 1, these buffers 220, 230 can be included or implemented in sample database 150.
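As one possible illustration of the two buffers, sample results can be partitioned by reward; the tuple layout below is an assumption of the sketch rather than the key-variable memory organization described later:

    # Sketch: routing sampled programs into the high-reward buffer B (buffer 220)
    # and the zero-reward buffer C (buffer 230).
    def store_samples(samples, buffer_b, buffer_c):
        for context, program, reward in samples:
            if reward > 0:
                buffer_b.append((context, program, reward))   # successful programs
            else:
                buffer_c.append((context, program, reward))   # unsuccessful programs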

[0053] In some embodiments, the models or actors 210 may employ or implement a seq2seq model as π_θ(z | x), and two key-variable memories as the successful programs buffer 220 (buffer B) and the unsuccessful programs buffer 230 (buffer C), and may be associated with a domain-specific language interpreter (as described in more detail in Liang et al., "Neural symbolic machines: Learning semantic parsers on freebase with weak supervision," In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 23-33, Association for Computational Linguistics, 2017, doi: 10.18653/v1/P17-1003, incorporated by reference herein). In some examples, the code is based on an open-source implementation of Memory Augmented Policy Optimization (MAPO), which implements a distributed actor-learner architecture (as described in more detail in Espeholt et al., "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures," arXiv preprint arXiv:1802.01561 (2018), incorporated by reference herein) to accelerate sampling through distributed actors.

[0054] With reference to Figure 2, the framework 200 also includes a gradient estimation module 250 and a learner module 260. In some embodiments, one or both of the gradient estimation module 250 and the learner module 260 can be implemented in or be part of off-policy credit assignment module 145 (Figure 1). The gradient estimation module 250 generates or derives a general gradient estimation. According to some embodiments, the gradient estimation module 250 periodically estimates a gradient (as described below) using samples from buffers 220 and 230. The resulting gradient is a weighted sum of gradients estimated based on successful and unsuccessful programs in past experience. The weights depend on the function f, the likelihood of samples under the current policy, and stratified sampling. It is demonstrated that this gradient estimation generalizes previous methods in semantic parsing, such as MAPO, Maximize Marginal Likelihood (MML), Iterative Maximum Likelihood (IML), and Reward Augmented Maximum Likelihood (RAML), each of which can be recovered through some particular choice of the function f and weights. Based on the gradient estimation, the learner module 260 updates the policy π for the neural network or model. In this way, reinforcement learning is provided or implemented for the application module 130 (Figure 1).

[0055] Method: Figure 3 is a corresponding method 300 for the system or framework 200 of Figure 2, according to some embodiments. One or more of the processes 310-340 of method 300 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform one or more of the processes 310-340. In some embodiments, method 300 may correspond to the method used or performed by off-policy credit assignment module 145 for reinforcement learning of application module 130.

[0056] At a process 310, samples of results may be received from one or more actors 210 (e.g., implementing application module 130). These samples can be, for example, software code, routines, or processes for one or more executable programs generated or output by the semantic parser of application module 130. Each of these samples can be either successful (e.g., the generated program yields the correct results) or unsuccessful (e.g., the generated program yields wrong results).

[0057] At a process 320, the samples of successful programs and unsuccessful programs are stored in memory. In some examples, samples of successful programs are stored into high reward programs buffer 220 (buffer B), and samples of unsuccessful programs are stored into zero reward programs buffer 230 (buffer C). Both successful and unsuccessful samples of prior experience can be used for reinforcement learning of the model for application module 130.

[0058] At a process 330, the gradient estimation module 250 of the framework 200 periodically estimates a gradient based on the samples of successful programs and unsuccessful programs (stored in buffers 220 and 230). Gradient estimation module 250 applies the efficient off-policy credit assignment (ECA) of the present disclosure which, in some embodiments, generates a general gradient estimate based on weighted samples from past experience. The generation of this gradient estimate is described below in more detail. The gradient estimation generalizes previous methods, and its effectiveness is demonstrated empirically in semantic parsing.

[0059] At a process 340, using the gradient estimate generated by module 250, the learner module 260 updates the policy for the neural network model or actor, as further described herein.
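A skeleton of the loop formed by processes 310-340 might look as follows; the collect_samples and estimate_gradient helpers and the torch-style optimizer interface are assumptions used only to make the sketch self-contained:

    # Skeleton of method 300: receive samples (310), store them (320), estimate a
    # gradient from both buffers (330), and update the policy (340).
    def training_loop(actors, policy, optimizer, num_iterations,
                      collect_samples, store_samples, estimate_gradient):
        buffer_b, buffer_c = [], []
        for _ in range(num_iterations):
            samples = collect_samples(actors, policy)              # process 310
            store_samples(samples, buffer_b, buffer_c)             # process 320
            loss = estimate_gradient(policy, buffer_b, buffer_c)   # process 330 (surrogate loss)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                                       # process 340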

[0060] Some examples of computing devices, such as computing device 100, may include non-transitory, tangible, machine readable media that include executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the processes of method 300. Some common forms of machine readable media that may include the processes of method 300 are, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.

[0061] Entropy Regularized Reinforcement Learning: In some embodiments, the algorithm or approach of the present disclosure utilizes or employs an entropy regularization term to encourage exploration. In this approach, entropy regularization is cast as a general inference problem, where the policy distribution is learned to approximate a prior distribution under a certain divergence measure. In some examples, the prior distribution can be optimized to guide the approximate inference.

[0062] A more general maximum entropy objective (Equation 5) favors stochastic policies by augmenting the expected reward with the relative entropy of the policy, where λ is a regularization weight and H(π_θ(z | x); p(z)) is the relative entropy regularization term between the policy π_θ(z | x) and the prior distribution p(z). The summation over (x, y) ~ D is omitted for notational simplicity in the remainder of this description.

[0063] Lemma 1. Equation 5 is equivalent to minimizing the divergence objective of Equation 6, in which V(x) = λ log ∫_z exp(R(z)/λ) is a "soft" version of the value function, serving as a normalization constant here. From Equation 6, the distribution p(z) is approximated with a simpler distribution from a family {π_θ(z | x) : θ ∈ Θ}, where θ is the parameter to be optimized and π_θ(z | x) is represented as an autoregressive policy as in Equation 1.

[0064] Learned Prior Distribution: According to some embodiments, the prior distribution is learned in order to optimize Equation 3. The learned prior distribution, in some embodiments, can be considered or serve as an initial estimate. The goal or objective is that entropy regularization encourages the policy to be similar to a non-uniform distribution p(z), which assigns higher probability mass to more important programs. Basically, this is formalized in Proposition 1 below.

[0065] Proposition 1. Given a policy π_θ(z | x), the new prior distribution p(z) should be updated as in Equation 7; substituting Equation 7 into Equation 6 leads to a mutual information regularization.

[0066] Proposition 1 states that optimizing p(z) is equivalent to mutual information regularization. Alternatively, optimizing π_θ(z | x) and p(z) leads to a complex mixture distribution for p(z), increasing the expressive power of the prior distribution for credit assignment. Therefore, the optimization of Equation 3 becomes or results in maximizing the mutual-information-regularized expected reward given by Equation 9.

[0067] Equation 9 draws a connection with rate distortion theory (as described in more detail, for example, in Shannon, "Coding theorems for a discrete source with a fidelity criterion," IRE Nat. Conv. Rec., 4(142-163):1 (1959), and Cover et al., Elements of Information Theory, John Wiley & Sons (2012), both of which are incorporated by reference herein); intuitively, the policy π_θ(z | x) is encouraged to discard reward-irrelevant information in the context, subject to a limited channel capacity given by I(x; z). This has been observed in the widely used Maximize Marginal Likelihood (MML) method, whose objective is to maximize the log marginal likelihood J_MML with respect to θ. The gradients of J_MML have the form of score-function gradients weighted by a factor w(z).

The weight w(z) (associated or related with the credit given to samples) is basically a "normalized" likelihood of π_θ(z | x) on the program space Z. An algorithm (Algorithm 3) for generating the adaptive weights or credits for successful (high-reward) and unsuccessful (zero-reward) samples or programs is illustrated in Figure 8. Theorem 1 shows that an information-theoretic prior may be a similar "normalized" likelihood of π_θ(z | x) on the context space X. More empirical findings on this are described below. To stabilize the algorithm, the prior distribution is gradually changed according to Equation 11, where η is a changing rate in [0,1]. The algorithm alternately learns the policy distribution π_θ(z | x) with Equation 6 and updates the prior distribution p(z) with Equation 11. How to learn π_θ(z | x) with Equation 6 is described next.
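Before continuing, a concrete illustration of such "normalized" likelihood weights may be helpful; policy_prob is an assumed callable returning π_θ(z | x), and this sketch is not the adaptive-weight computation of Algorithm 3 itself:

    # Sketch: normalized likelihood weights w(z) over a buffer of programs for a
    # fixed context x, in the spirit of the credit weights discussed above.
    def normalized_weights(programs, context, policy_prob):
        probs = [policy_prob(z, context) for z in programs]
        total = sum(probs)
        if total == 0.0:
            return [1.0 / len(programs)] * len(programs)   # fall back to uniform credit
        return [p / total for p in probs]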

[0068] Further details for the learned prior distribution p(z) are provided in the algorithm (Algorithm 4) illustrated in Figure 9, according to some embodiments.

[0069] General Gradient Estimation: While D_KL(π_θ(z | x) ‖ p(z)) is the typical divergence measure widely used in variational inference, natural language processing, and reinforcement learning, it often leads to model collapse because of its mode-seeking property. Therefore, directly optimizing Equation 6 often gives a suboptimal model π_θ(z | x). Alternative divergence measures may therefore be considered. In some embodiments, the systems and methods of the present disclosure approach this problem by minimizing the general f-divergence between p(z) and π_θ(z | x), denoted D_f(p(z) ‖ π_θ(z | x)) and defined in Equation 12, where f: ℝ+ → ℝ is any twice-differentiable convex function. The f-divergence includes a large spectrum of divergences (e.g., KL and reverse KL divergence) and has been shown to be powerful in various settings. It can be shown by Jensen's inequality that D_f(p ‖ q) ≥ 0 for any p and q. Further, if f(t) is strictly convex at t = 1, then D_f(p(z) ‖ π_θ(z | x)) = 0 implies p(z) = π_θ(z | x). In some embodiments, stochastic optimization can be used to minimize Equation 12; the gradient of Equation 12 is given by Lemma 2.

[0070] Lemma 2. Assume f is a differentiable convex function and log π_θ(z | x) is differentiable with respect to θ. For the f-divergence defined in Equation 12, the gradient with respect to θ is given by Equation 13.

[0071] Equation 13 shows that the gradient of the f-divergence between π_θ(z | x) and p(z) can be specified through ρ_f or f. Note that here z can be either a successful or an unsuccessful sample (e.g., program), taking advantage of the fact that the search space of programs is enumerable and deterministic.
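For concreteness, a generic weighted score-function surrogate of this kind can be sketched as follows; the credit_fn callable stands in for ρ_f (or for the adaptive weights of Algorithm 3), and the choice of density-ratio convention is left to that callable, since it is an assumption of the sketch rather than a statement of Equation 13:

    import torch

    # Generic weighted score-function surrogate: each sampled program's
    # log-likelihood gradient is scaled by a per-sample credit (e.g., rho_f applied
    # to a density ratio between the prior p(z) and the policy pi_theta(z | x)).
    # The credit is treated as a constant so only the score function carries gradient.
    def weighted_score_function_surrogate(policy, samples, prior_prob, credit_fn):
        terms = []
        for context, program in samples:
            log_pi = policy.log_prob(program, context)      # differentiable scalar
            pi_z = float(log_pi.exp().detach())             # pi_theta(z | x), no gradient
            credit = credit_fn(prior_prob(program), pi_z)   # e.g., rho_f of a density ratio
            terms.append(-credit * log_pi)                  # negative sign: optimizers minimize
        return torch.stack(terms).mean()

    # Example credit used purely for illustration: a simple prior-to-policy ratio.
    def example_credit(prior_z, pi_z):
        return prior_z / max(pi_z, 1e-8)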

[0072] Algorithm or Approach for Gradient Estimation: According to embodiments of the present disclosure, samples of both successful and unsuccessful programs (from which the learned prior distribution can be generated or calculated) can be used to estimate this gradient, thus providing an approach for efficient off-policy credit assignment for reinforcement learning. In some embodiments, the approach optimizes the prior distribution to guide the approximate inference.

[0073] Proposition 2. An unbiased and low-variance estimation of Equation 13 is given by Equation 14.

[0074] This generates the gradient estimation according to embodiments of the present disclosure, as further explained with reference to Figure 4. Figure 4 is a simplified diagram of a method 400 for gradient estimation. In some embodiments, method 400 can be performed or implemented by gradient estimation module 250 (Figure 2).

[0075] At a process 402, using samples of successful programs (e.g., obtained or retrieved from the high reward programs buffer 220, or buffer B), module 250 computes a high-reward program credit, and at a process 404, module 250 generates a high-reward score function gradient.

[0076] At a process 406, using samples of unsuccessful programs (e.g., obtained or retrieved from the zero reward programs buffer 230, or buffer C), module 250 computes a zero-reward program credit, and at a process 408, module 250 generates a zero-reward score function gradient.

[0077] At a process 410, module 250 generates and applies clipped stratified sampling weights w to the high-reward score function gradient and the zero-reward score function gradient. The weighted function gradients are added together at a process 412 to give a gradient estimation with efficient off-policy credit assignment. This gradient estimation uses successful trajectories, so π_θ(z | x) will not forget them. The gradient estimation also uses unsuccessful trajectories from the past, which improves sample efficiency and leads to a better approximation.
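A schematic combination of the two score-function surrogates, with a clipped stratified weight, is sketched below; the high_credit and zero_credit callables stand in for the adaptive credits of Algorithm 3, and the (w, 1 - w) split is an illustrative assumption rather than the exact weighting of Equation 14:

    import torch

    # Sketch of processes 402-412: weighted sum of surrogates over buffer B
    # (successful programs) and buffer C (unsuccessful programs).
    def combined_surrogate(policy, buffer_b, buffer_c, high_credit, zero_credit, w_b):
        def surrogate(buffer, credit_fn):
            if not buffer:
                return torch.tensor(0.0)
            terms = [-credit_fn(x, z) * policy.log_prob(z, x) for x, z, _ in buffer]
            return torch.stack(terms).mean()
        w_b = min(max(w_b, 0.0), 1.0)        # clipped stratified sampling weight
        return w_b * surrogate(buffer_b, high_credit) + (1.0 - w_b) * surrogate(buffer_c, zero_credit)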

[0078] Further details for gradient estimation, for example, as implemented by gradient estimation module 250 of the framework 200, are provided in Figures 6-8. Figure 6 illustrates the algorithm (Algorithm 1) for efficient credit assignment with learned prior and adaptive weights, according to some embodiments. Figure 7 illustrates an algorithm (Algorithm 2) which may be employed in Algorithm 1 for systematic exploration to enforce a policy to take actions that lead to unexplored sequences. Figure 8 illustrates an algorithm (Algorithm 3) for generating or deriving the adaptive weights related to off-policy samples.

[0079] The gradient estimation generated by gradient estimation module 250 may be used for updating the policy π of the neural network or model. Figure 5 is a simplified diagram of a method 500 for updating policy, according to some embodiments. In some embodiments, method 500 can be performed or implemented by learner module 260 (Figure 2).

[0080] At a process 502, learner module 260 receives or obtains the gradient estimation from gradient estimation module 250. At a process 504, learner module 260 implements aspects of a central learner, as described in more detail in Espeholt et al., 2018, which is incorporated by reference herein. At processes 506, 508, and 510, the learner module 260 updates one or more policies (e.g., policies #1, #2, ... #N) for the neural network model.

[0081] According to some embodiments, related to gradient estimation, the systems and methods of the present disclosure minimize the general f-divergence. In choosing the f-divergence such that it achieves a good exploration and exploitation trade-off, some embodiments follow the approach described in Wang et al., "Variational inference with tail-adaptive f-divergence," Advances in Neural Information Processing Systems, pp. 5742-5752 (2018), which is incorporated herein by reference. Specifically, let {z_i} be drawn from buffers B and C and w_i = π_θ(z_i | x) / p(z_i). Substituting ρ_f(π_θ(z | x) / p(z)) with the inverse of the approximate tail probability Pr(w > t), the corresponding algorithm for adaptive weights is summarized in Algorithm 3, as shown in Figure 8, which incorporates the learned prior of Algorithm 4, as shown in Figure 9. In some examples, to overcome the cold-start problem in sparse-reward policy gradient, the systems and methods clip w to a given range (for example, as described in more detail in Liang et al., "Learning dependency-based compositional semantics," Computational Linguistics, 39(2):389-446 (2013), incorporated by reference herein) such that w = max(w, w_l) and w = min(w, w_u), where w_l ≤ w_u.
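The weight clipping described above can be sketched in one line; w_l and w_u are the lower and upper clipping bounds:

    # Sketch of the clipping w = max(w, w_l), w = min(w, w_u) used to overcome the
    # cold-start problem under sparse rewards.
    def clip_weight(w, w_l, w_u):
        return min(max(w, w_l), w_u)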

[0082] Equation 14 generalizes previous work in semantic parsing, including MML, MAPO, RAML, and IML, where different methods correspond to different choices for the way of credit assignment.

Training and Experiments

[0083] In some examples, the neural networks or models of the present disclosure can be trained or evaluated with datasets, such as the weakly-supervised semantic parsing benchmarks WIKITABLEQUESTIONS and WIKISQL. The WIKITABLEQUESTIONS dataset (which is described in more detail in Pasupat et al., "Compositional semantic parsing on semi-structured tables," In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1470-1480, Association for Computational Linguistics (2015), doi: 10.3115/v1/P15-1142, incorporated by reference herein) contains 2,108 tables and 18,496 question-answer pairs built from tables extracted from Wikipedia. WIKISQL, which is described in more detail in Zhong et al., 2017, incorporated by reference herein, is a recent large-scale dataset on learning natural language interfaces for databases. It contains 24,241 tables extracted from Wikipedia and 80,654 question-program pairs. It is annotated with programs (SQL). In both datasets, question-answer pairs are split into train, evaluation, and test sets. In some examples, the question-answer pairs of the datasets are used for weakly supervised training.

[0084] In some embodiments, the construction (as described in Pasupat et al. (2015), referenced above) is followed for converting a table into a directed graph that can be queried. The rows and cells of the table are converted to graph nodes, while column names become labeled directed edges. In some embodiments, at the beginning of training, a policy with random initialization will assign small probabilities to the successful programs. This causes them to be ignored during the gradient estimation performed by gradient estimation module 250. To overcome the cold-start problem in sparse-reward policy gradient, the probabilities of programs in buffer B are clipped as discussed above. In some embodiments, the systems and methods of the present disclosure use systematic exploration, for example, according to Algorithm 2 as shown in Figure 7, to enforce a policy to take actions that lead to unexplored sequences.
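As an illustration of the table-to-graph conversion described above, the following sketch emits (row node, column label, cell node) edges; the node-naming scheme is an assumption of the sketch, not the construction of the cited work:

    # Sketch: convert a table into a directed graph. Rows and cells become nodes;
    # column names become labeled directed edges from row nodes to cell nodes.
    def table_to_graph(column_names, rows):
        edges = []                                    # (source node, edge label, target node)
        for i, row in enumerate(rows):
            row_node = f"row_{i}"
            for column, cell in zip(column_names, row):
                edges.append((row_node, column, f"cell_{i}_{column}:{cell}"))
        return edges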

[0085] In some embodiments, the Adam Optimizer (as described in more detail in Kingma et al., "A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014), which is incorporated by reference herein) is used for experiments and training. Memory weight clipping is 0.1. In some embodiments, for training of the models, hyper-parameter sweeps can be performed via random search, for example, over the interval (10^-4, 10^-2) for the learning rate and a similar interval for entropy regularization. All the hyperparameters may be tuned on the evaluation set.
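A random-search sweep of the kind mentioned above can be sketched as follows; the interval endpoints are illustrative placeholders, not the values used in the experiments:

    import math
    import random

    # Sketch: draw a hyperparameter log-uniformly from an interval such as (1e-4, 1e-2).
    def sample_log_uniform(low, high):
        return 10.0 ** random.uniform(math.log10(low), math.log10(high))

    learning_rate = sample_log_uniform(1e-4, 1e-2)       # placeholder interval
    entropy_weight = sample_log_uniform(1e-4, 1e-2)      # placeholder interval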

Results

[0086] Results on the systems and methods employing efficient off-policy credit assignment (ECA) for reinforcement learning are presented, and may be compared against other methods or approaches for weakly-supervised semantic parsing, such as REINFORCE, MML, IML, MAPO, and RAML, as seen in the tables of Figure 11.

[0087] Table 1 of Figure 11 shows results on both the WIKITABLEQUESTIONS and WIKISQL benchmarks for the various approaches. It can be seen in Figure 11 that ECA noticeably improves upon previous methods in terms of both sample efficiency and asymptotic performance by performing better credit assignment. The comparison shows that both the adaptive weights and the learned prior (LP) lead to significant improvement.

[0088] Table 2 and Table 3 present the results on weakly-supervised semantic parsing. ECA outperforms previous approaches or methods for weakly supervised semantic parsing. Table 2 and Table 3 show that the improvement is significant, as the results are averaged across 5 trials. These results demonstrate the efficacy of ECA compared to previous credit assignment methods.

[0089] Figure 12 illustrates examples of programs generated from natural language queries in WIKITABLEQUESTIONS using models trained with ECA or MAPO. Figure 12 shows that, in some examples, ECA is capable of generating correct programs that capture the meaning of the natural language queries, while MAPO generates either wrong-answer programs or spurious programs.

[0090] This description and the accompanying drawings that illustrate inventive aspects, embodiments, implementations, or applications should not be taken as limiting. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the embodiments of this disclosure. Like numbers in two or more figures represent the same or similar elements.

[0091] In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.

[0092] Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Thus, the scope of the invention should be limited only by the following claims, and it is appropriate that the claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.