


Title:
TRAINING AND/OR UTILIZING MACHINE LEARNING MODEL(S) FOR USE IN NATURAL LANGUAGE BASED ROBOTIC CONTROL
Document Type and Number:
WIPO Patent Application WO/2021/231895
Kind Code:
A1
Abstract:
Techniques are disclosed that enable training a goal-conditioned policy based on multiple data sets, where each of the data sets describes a robot task in a different way. For example, the multiple data sets can include: a goal image data set, where the task is captured in the goal image; a natural language instruction data set, where the task is described in the natural language instruction; a task ID data set, where the task is described by the task ID, etc. In various implementations, each of the multiple data sets has a corresponding encoder, where the encoders are trained to generate a shared latent space representation of the corresponding task description. Additional or alternative techniques are disclosed that enable control of a robot using a goal-conditioned policy network. For example, the robot can be controlled, using the goal-conditioned policy network, based on free-form natural language input describing robot task(s).

Inventors:
SERMANET PIERRE (US)
LYNCH COREY (US)
Application Number:
PCT/US2021/032499
Publication Date:
November 18, 2021
Filing Date:
May 14, 2021
Assignee:
GOOGLE LLC (US)
International Classes:
B25J9/16
Other References:
HUANG HAOSHUO ET AL: "Transferable Representation Learning in Vision-and-Language Navigation", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 27 October 2019 (2019-10-27), pages 7403 - 7412, XP033723701, DOI: 10.1109/ICCV.2019.00750
COREY LYNCH ET AL: "Learning Latent Plans from Play", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 5 March 2019 (2019-03-05), XP081562846
CHAI JOYCE Y. ET AL: "Language to Action: Towards Interactive Task Learning with Physical Agents", PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 1 July 2018 (2018-07-01), California, pages 2 - 9, XP055832406, ISBN: 978-0-9992411-2-7, Retrieved from the Internet [retrieved on 20210816], DOI: 10.24963/ijcai.2018/1
AARON VAN DEN OORD ET AL: "Representation Learning with Contrastive Predictive Coding", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 July 2018 (2018-07-10), XP081246283
SIMON STEPPUTTIS ET AL: "Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 26 November 2019 (2019-11-26), XP081540071
PRATYUSHA SHARMA ET AL: "Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 October 2018 (2018-10-16), XP080924375
WANG XIN ET AL: "Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation", 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 15 June 2019 (2019-06-15), pages 6622 - 6631, XP033686699, DOI: 10.1109/CVPR.2019.00679
Attorney, Agent or Firm:
HIGDON, Scott et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method implemented by one or more processors, the method comprising: receiving a free-form natural language instruction describing a task for a robot, the free-form natural language instruction generated based on user interface input provided by a user via one or more user interface input devices; processing the free-form natural language instruction using a natural language instruction encoder, to generate a latent goal representation of the free-form natural language instruction; receiving an instance of vision data, the instance of vision data generated by at least one vision component of the robot, and the instance of vision data capturing at least part of an environment of the robot; generating output based on processing, using a goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction, wherein the goal-conditioned policy network is trained based on at least (i) a goal image set of training instances, in which training tasks are described using goal images, and (ii) a natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions; and controlling one or more actuators of the robot based on the generated output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one action indicated by the generated output.

2. The method of claim 1, further comprising: receiving an additional free-form natural language instruction describing an additional task for the robot, the additional free-form natural language instruction generated based on additional user interface input provided by the user via the one or more user interface input devices; processing the additional free-form natural language instruction using the natural language instruction encoder, to generate an additional latent goal representation of the additional free-form natural language instruction; receiving an additional instance of vision data generated by the at least one vision component of the robot; generating, using the goal-conditioned policy network, additional output based on processing at least (a) the additional instance of vision data and (b) the additional latent goal representation of the additional free-form natural language instruction; and controlling the one or more actuators of the robot based on the generated additional output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one additional action indicated by the generated additional output.

3. The method of claim 2, wherein the additional task for the robot is distinct from the task for the robot.

4. The method of any preceding claim, wherein each training instance in the goal image set of training instances, in which training tasks are described using goal images, comprises an imitation trajectory provided by a human and a goal image describing the training task performed by the robot in the imitation trajectory.

5. The method of claim 4, wherein generating each training instance in the goal image set of training instances comprises: receiving a data stream, capturing the state of the robot and corresponding actions of the robot, while the human is controlling the robot to interact with the environment; for each training instance in the goal image set of training instances: selecting a sequence of image frames from the data stream; selecting the last image frame in the sequence of image frames as a training goal image describing the training task performed in the sequence of image frames; and generating the training instance by storing, as the training instance, the selected sequence of image frames as the imitation trajectory portion of the training instance, and the training goal image as the goal image portion of the training instance.

6. The method of any preceding claim, wherein each training instance, in the natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions, comprises an imitation trajectory provided by a human and a free-form natural language instruction describing the training task performed by the robot in the imitation trajectory.

7. The method of claim 6, wherein generating each training instance in the natural language instruction set of training instances comprises: receiving a data stream, capturing the state of the robot and corresponding actions of the robot, while the human is controlling the robot to interact with the environment; for each training instance in the natural language instruction set of training instances: selecting a sequence of image frames from the data stream; providing the sequence of image frames to a human reviewer; receiving a training free-form natural language instruction describing a training task performed by the robot in the sequence of image frames; and generating the training instance by storing, as the training instance, the selected sequence of image frames as the imitation trajectory portion of the training instance, and the training free-form natural language instruction as the free-form natural language instruction portion of the training instance.

8. The method of claims 5 and 7, wherein training the goal-conditioned policy network, based on at least (i) the goal image set of training instances, in which training tasks are described using goal images, and (ii) the natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions, comprises: selecting a first training instance from the goal image set of training instances, wherein the first training instance includes a first imitation trajectory and a first goal image describing the first imitation trajectory; generating a latent space representation of the first goal image by processing, using a goal image encoder, the first goal image portion of the first training instance; processing, using the goal-conditioned policy network, at least (1) the initial image frame in the first imitation trajectory and (2) the latent space representation of the first goal image portion of the first training instance, to generate first candidate output; determining a goal image loss based on the first candidate output and one or more portions of the first imitation trajectory; selecting a second training instance from the natural language instruction set of training instances, wherein the second training instance includes a second imitation trajectory and a second free-form natural language instruction describing the second imitation trajectory; generating a latent space representation of the second free-form natural language instruction by processing, using the natural language encoder, the second free-form natural language instruction portion of the second training instance, wherein the latent space representation of the first goal image and the latent space representation of the second free-form natural language instruction are represented in a shared latent space; processing, using the goal-conditioned policy network, at least (1) the initial image frame in the second imitation trajectory and (2) the latent space representation of the second free-form natural language instruction portion of the second training instance, to generate second candidate output; determining a natural language instruction loss based on the second candidate output and one or more portions of the second imitation trajectory; determining a goal-conditioned loss based on the goal image loss and the natural language instruction loss; and updating one or more portions of the goal image encoder, the natural language instruction encoder, and/or the goal-conditioned policy network, based on the determined goal-conditioned loss.

9. The method of any preceding claim, wherein the goal-conditioned policy network is trained, based on a first quantity of training instances of the goal image set of training instances, and a second quantity of training instances of the natural language instruction set of training instances, wherein the second quantity is less than fifty percent of the first quantity.

10. The method of claim 9, wherein the second quantity is less than ten percent of the first quantity, less than five percent of the first quantity, or less than one percent of the first quantity.

11. The method of any preceding claim, wherein the generated output comprises a probability distribution over an action space of the robot, and wherein controlling the one or more actuators based on the generated output comprises selecting the at least one action based on the at least one action with the highest probability in the probability distribution.

12. The method of any of claims 1-10, wherein the generating output based on processing, using the goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction further comprises generating output based on processing, using the goal-conditioned policy network, (c) the at least one action, and wherein controlling the one or more actuators based on the generated output comprises selecting the at least one action based on the at least one action satisfying a threshold probability.

13. A method implemented by one or more processors, the method comprising: receiving a free-form natural language instruction describing a task for a robot, the free-form natural language instruction generated based on user interface input provided by a user via one or more user interface input devices; processing the free-form natural language instruction using a natural language instruction encoder, to generate a latent goal representation of the free-form natural language instruction; receiving an instance of vision data, the instance of vision data generated by at least one vision component of the robot, and the instance of vision data capturing at least part of an environment of the robot; generating output based on processing, using a goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction; controlling one or more actuators of the robot based on the generated output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one action indicated by the generated output; receiving a goal image instruction describing an additional task for the robot, the goal image instruction provided by the user via the one or more user interface input devices; processing the goal image instruction using a goal image encoder, to generate a latent goal representation of the goal image instruction; receiving an additional instance of vision data, the additional instance of vision data generated by the at least one vision component of the robot, and the additional instance of vision data capturing at least part of the environment of the robot; generating additional output based on processing, using the goal-conditioned policy network, at least (a) the additional instance of vision data and (b) the latent goal representation of the goal image instruction; and controlling the one or more actuators of the robot based on the generated additional output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one additional action indicated by the generated additional output.

14. A method implemented by one or more processors, the method comprising: selecting a first training instance from the goal image set of training instances, wherein the first training instance includes a first imitation trajectory and a first goal image describing the first imitation trajectory; generating a latent space representation of the first goal image by processing, using a goal image encoder, the first goal image portion of the first training instance; processing, using a goal-conditioned policy network, at least (1) the initial image frame in the first imitation trajectory and (2) the latent space representation of the first goal image portion of the first training instance, to generate first candidate output; determining a goal image loss based on the first candidate output and one or more portions of the first imitation trajectory; selecting a second training instance from the natural language instruction set of training instances, wherein the second training instance includes a second imitation trajectory and a second free-form natural language instruction describing the second imitation trajectory; generating a latent space representation of the second free-form natural language instruction by processing, using the natural language encoder, the second free-form natural language instruction portion of the second training instance, wherein the latent space representation of the first goal image and the latent space representation of the second free-form natural language instruction are represented in a shared latent space; processing, using the goal-conditioned policy network, at least (1) the initial image frame in the second imitation trajectory and (2) the latent space representation of the second free-form natural language instruction portion of the second training instance, to generate second candidate output; determining a natural language instruction loss based on the second candidate output and one or more portions of the second imitation trajectory; determining a goal-conditioned loss based on the goal image loss and the natural language instruction loss; and updating one or more portions of the goal image encoder, the natural language instruction encoder, and/or the goal-conditioned policy network, based on the determined goal-conditioned loss.

15. A computer program comprising instructions that, when executed by one or more processors of a computing system, cause the computing system to perform the method of any preceding claim.

16. A computing system configured to perform the method of any one of claims 1 to 14.

17. A computer-readable storage medium storing instructions executable by one or more processors of a computing system to perform the method of any one of claims 1 to 14.

Description:
TRAINING AND/OR UTILIZING MACHINE LEARNING MODEL(S) FOR USE IN NATURAL LANGUAGE BASED ROBOTIC CONTROL

Background

[0001] Many robots are programmed to perform certain tasks. For example, a robot on an assembly line can be programmed to recognize certain objects, and perform particular manipulations to those certain objects.

[0002] Further, some robots can perform certain tasks in response to explicit user interface input that corresponds to the certain task. For example, a vacuuming robot can perform a general vacuuming task in response to a spoken utterance of "robot, clean". However, typically, user interface inputs that cause a robot to perform a certain task must be mapped explicitly to the task. Accordingly, a robot can be unable to perform certain tasks in response to various free-form natural language inputs of a user attempting to control the robot. For example, a robot may be unable to navigate to a goal location based on free-form natural language input provided by a user. For instance, a robot can be unable to navigate to a particular location in response to a user request of "go out the door, turn left, and go through the door at the end of the hallway."

Summary

[0003] Techniques disclosed herein are directed towards training a goal-conditioned policy network based on multiple datasets, where training tasks are described in a different way in each of the datasets. For example, a robot task can be described using a goal image, using natural language text, using a task ID, using natural language speech, and/or using additional or alternative task description(s). For instance, a robot can be trained to perform the task of putting a ball into a cup. A goal image description of the example task may be a picture of the ball inside the cup, a natural language text description of the task may be a natural language instruction of "put the ball into the mug", and a task ID description of the task may be "task id = 4", where 4 is the ID associated with the task of placing the ball into the cup. In some implementations, multiple encoders can be trained (i.e., one encoder per dataset) such that each encoder can generate a shared latent goal space representation of a task by processing the task description. In other words, different ways of describing the same task (e.g., an image of the ball in a cup, natural language instructions of "put the ball into the mug", and/or "task id = 4") can map to the same latent goal representation based on processing the task descriptions with the corresponding encoder. While techniques described herein are directed towards training a robot to perform tasks, this is not meant to be limiting. Additional and/or alternative networks can be trained based on multiple datasets, each of the datasets having a different context, in accordance with techniques described herein.

[0004] Additional or alternative implementations are directed towards controlling a robot based on output generated using a goal-conditioned policy network. In some implementations, the robot can be trained using multiple datasets (e.g., trained using a goal image dataset and a natural language instruction dataset), and only one task description type can be used at inference time to describe the task(s) for the robot (e.g., providing the system with only natural language instructions, only goal images, only task IDs, etc. at inference). For example, a system can be trained based on a goal image data set and a natural language instruction dataset, where the system is provided with natural language instructions to describe tasks for the robot at runtime. Additionally or alternatively, in some implementations the system can be provided with multiple instruction description types at runtime (e.g., provided with natural language instructions, goal images, and task IDs at runtime, provided with natural language instructions and task IDs at runtime, etc.). For example, a system can be trained based on a natural language instruction dataset and a goal image dataset, where the system can be provided with natural language instructions and/or goal image instructions at runtime.

[0005] In some implementations, a robot agent may achieve task agnostic control using a goal-conditioned policy network, where a single robot is able to reach any reachable goal state in its environment. In conventional teleoperated multitask demonstrations, the diversity of the collected data may be constrained to an upfront task definition (e.g., human operators are provided with a list of tasks to demonstrate). In contrast, the human operator in teleoperated "play" is not constrained to a set of predefined tasks when generating play data. In some implementations, a goal image dataset can be generated based on teleoperated "play" data. Play data can include continuous logs (e.g., a data stream) of low-level observations and actions collected while a human teleoperates a robot and engages in behavior that satisfies their own curiosity. Collecting play data, unlike collecting expert demonstrations, may not require task segmenting, labeling, or resetting to an initial state, thus enabling play data to be quickly collected in large quantities. Additionally or alternatively, play data may be structured based on human knowledge of object affordances (e.g., if people see a button in a scene, they tend to press it). Human operators may try multiple ways of achieving the same outcome and/or explore new behaviors. In some implementations, play data can be expected to naturally cover an environment's interaction space in a way expert demonstrations may not.

[0006] In some implementations, a goal image dataset may be generated based on teleoperated play data. Segments of the play data stream (e.g., a sequence of image frames) may be selected as an imitation trajectory, where the last image in the selected segment of the data stream is the goal image. In other words, goal images describing the imitation trajectory, in the goal image dataset, can be generated in hindsight, where the goal image is determined based on the sequence of actions, in contrast to generating the sequence of actions based on a goal image. In some implementations, short-horizon goal image training instances can be quickly and/or cheaply generated based on a data stream of teleoperated play data.

[0007] In some implementations, a natural language instruction data set may additionally or alternatively be generated based on teleoperated play data. Segments of the play data stream (e.g., a sequence of image frames) may be selected as an imitation trajectory. One or more humans may then describe the imitation trajectory, thus generating a natural language instruction in hindsight (in contrast to generating an imitation trajectory based on a natural language instruction). In some implementations, the natural language instructions collected may cover functional behavior (e.g., "open the drawer", "press the green button", etc.), general non-task-specific behaviors (e.g., "move your hand slightly to the left", "do nothing", etc.), and/or additional behaviors. In some implementations, the natural language instructions can be free-form natural language, without constraints placed on the natural language instructions a human can provide. In some implementations, multiple humans can describe imitation trajectories using free-form natural language, which may result in different descriptions of the same object(s), behavior(s), etc. For example, an imitation trajectory may capture a robot picking up a wrench. Multiple human describers can provide different free-form natural language instructions for the imitation trajectory such as "grab the tool", "pick up the wrench", "grasp the object", and/or additional free-form natural language instructions. In some implementations, this diversity in free-form natural language instructions can lead to a more robust goal-conditioned policy network, where a wider range of free-form natural language instructions can be implemented by an agent.

[0008] A goal-conditioned policy network and corresponding encoders can be trained based on an image goal dataset and a free-form natural language instruction dataset in a variety of ways. For example, a system can process a goal image portion of a goal image training instance using a goal image encoder to generate a latent goal space representation of the goal image. The latent goal space representation of the goal image and an initial frame of the imitation trajectory portion of the goal image training instance can be processed, using the goal-conditioned policy network, to generate goal image candidate output. A goal image loss can be generated based on the goal image candidate output and the goal image imitation trajectory. Similarly, the system can process a natural language instruction portion of a natural language instruction training instance, using a natural language instruction encoder, to generate a latent space representation of the natural language instruction. The latent space representation of the natural language instruction and the initial frame of the imitation trajectory portion of the natural language instruction training instance can be processed using the goal-conditioned policy network to generate natural language instruction candidate output. A natural language instruction loss can be generated based on the natural language instruction candidate output and the imitation trajectory portion of the natural language instruction training instance. In some implementations, the system can generate a goal-conditioned loss based on the goal image loss and the natural language instruction loss. One or more portions of the goal-conditioned policy network, the goal image encoder, and/or the natural language instruction encoder can be updated based on the goal-conditioned loss. However, this is merely an example of training the goal-conditioned policy network, the goal image encoder, and/or the natural language instruction encoder. Additional and/or alternative training methods can be used.

[0009] In some implementations, the goal-conditioned policy network can be trained using different sized goal image data sets and natural language instruction data sets. For example, the goal-conditioned policy network can be trained based on a first quantity of goal image training instances and a second quantity of natural language instruction training instances, where the second quantity is fifty percent of the first quantity, less than fifty percent of the first quantity, less than ten percent of the first quantity, less than five percent of the first quantity, less than one percent of the first quantity, and/or greater than or less than additional or alternative percentages of the first quantity.
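By way of a non-limiting illustration of the combined training step described in paragraph [0008], the following Python sketch shows one way the goal image loss and the natural language instruction loss could be combined into a single goal-conditioned loss. The module interfaces (e.g., image_encoder, language_encoder, policy.imitation_loss) and the use of the PyTorch library are assumptions made for the example only, and are not limiting.

import torch

def goal_conditioned_training_step(goal_image_batch, language_batch,
                                    image_encoder, language_encoder,
                                    policy, optimizer):
    """Illustrative combined training step over both data sets.

    goal_image_batch: dict with "frames" (B, T, C, H, W) imitation trajectories,
        "actions" (B, T, A), and "goal_image" (B, C, H, W).
    language_batch: dict with "frames", "actions", and "instruction"
        (tokenized free-form natural language, shape (B, L)).
    """
    # Encode each task description into the shared latent goal space.
    z_image = image_encoder(goal_image_batch["goal_image"])    # (B, d)
    z_lang = language_encoder(language_batch["instruction"])   # (B, d)

    # Behavioral-cloning losses: the policy predicts the demonstrated
    # actions given the current frame and the latent goal.
    goal_image_loss = policy.imitation_loss(
        goal_image_batch["frames"], goal_image_batch["actions"], z_image)
    language_loss = policy.imitation_loss(
        language_batch["frames"], language_batch["actions"], z_lang)

    # The goal-conditioned loss averages the two per-dataset losses.
    goal_conditioned_loss = 0.5 * (goal_image_loss + language_loss)

    # Update the encoders and the policy end-to-end.
    optimizer.zero_grad()
    goal_conditioned_loss.backward()
    optimizer.step()
    return goal_conditioned_loss.item()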

[0010] Accordingly, various implementations set forth techniques for learning a shared latent goal space for many task descriptions for use in training a single goal-conditioned policy network. In contrast, conventional techniques train multiple policy networks, one policy network for each task description type. Training a single policy network can allow a wider variety of data to be utilized in training the network. Additionally or alternatively, the policy network can be trained using a larger quantity of training instances of one data type. For example, the goal-conditioned policy network can be trained using hindsight goal image training instances, which can be automatically generated from an imitation learning data stream (e.g., the hindsight goal image training instances are inexpensive to automatically generate when compared to natural language instruction training instances, which may require human provided natural language instructions). By training the goal-conditioned policy network using both a goal image data set and a natural language instruction data set, where the majority of training instances are automatically generated goal image training instances, the resulting goal-conditioned policy network can be robust at generating actions for a robot based on natural language instructions, without requiring the computing resources (e.g., processor cycles, memory, power, etc.) and/or human resources (e.g., the time required for a group of people to provide natural language instructions, etc.) to generate a large natural language instruction data set.

[0011] The above description is provided only as an overview of some implementations disclosed herein. These and other implementations of the technology are disclosed in additional detail below.

[0012] It should be appreciated that all combinations of the foregoing concepts and additional concepts described in greater detail herein are contemplated as being part of the subject matter disclosed herein. For example, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the subject matter disclosed herein.

Brief Description of the Drawings

[0013] FIG. 1 illustrates an example environment in which implementations described herein may be implemented.

[0014] FIG. 2 illustrates an example of generating action output, using a goal-conditioned policy network, in accordance with various implementations described herein.

[0015] FIG. 3 is a flowchart illustrating an example process of controlling a robot based on a natural language instruction in accordance with various implementations disclosed herein.

[0016] FIG. 4 is a flowchart illustrating an example process of generating goal image training instance(s) in accordance with various implementations disclosed herein.

[0017] FIG. 5 is a flowchart illustrating an example process of generating natural language instruction training instance(s) in accordance with various implementations disclosed herein.

[0018] FIG. 6 is a flowchart illustrating an example process of training a goal-conditioned policy network, a natural language instruction encoder, and/or a goal image encoder in accordance with various implementations disclosed herein.

[0019] FIG. 7 schematically depicts an example architecture of a robot.

[0020] FIG. 8 schematically depicts an example architecture of a computer system.

Detailed Description

[0021] Natural language is a versatile and intuitive way for humans to communicate tasks to a robot. Existing approaches provide for learning a wide variety of robotic behaviors from general sensors. However, each task must be specified with a goal image, something that is not practical in open-world environments. Implementations disclosed herein are directed towards a simple and/or scalable way to condition policies on human language instead. Short robot experiences from play can be paired with relevant human language after-the-fact. To make this efficient, some implementations utilize multicontext imitation learning, which can allow for the training of a single agent to follow image or language goals, where just language conditioning is used at test time. This can reduce the cost of language pairing to less than a small percentage (e.g., less than 10%, 5%, or 1%) of collected robot experience, with the majority of control still learned via self-supervised imitation. At test time, a single agent trained in this manner can perform many different robotic manipulation skills in a row in a 3D environment, directly from images, and specified only with natural language (e.g., "open the drawer...now pick up the block...now press the green button..."). Additionally, some implementations use a technique that transfers knowledge from large unlabeled text corpora to robotic learning. Transfer can significantly improve downstream robotic manipulation. It also, for example, can allow the agent to follow thousands of novel instructions at test time in zero shot in multiple different languages.

[0022] A long-term motivation in robotic learning is the idea of a generalist robot: a single agent that can solve many tasks in everyday settings, using only general onboard sensors. A fundamental but less considered aspect, alongside task and observation space generality, is general task specification: the ability for untrained users to direct agent behavior using the most intuitive and flexible mechanisms. Along these lines, it is hard to imagine a truly generalist robot without also imagining a robot that can follow instructions expressed in natural language.

[0023] More broadly, children learn language in the context of a rich, relevant, sensorimotor experience. This motivates the long-standing question in artificial intelligence of embodied language acquisition: how might intelligent agents ground language understanding in their own embodied perception? An ability to relate language to the physical world potentially allows robots and humans to communicate on common ground over shared sensory experience, something that could lead to much more meaningful forms of human-machine interaction.

[0024] Furthermore, language acquisition, at least in humans, can be a highly social process. During their earliest interactions, infants contribute actions while caregivers contribute relevant words. While the actual learning mechanism at play in humans is not fully understood, implementations disclosed herein explore what robots can learn from similar paired data.

[0025] However, even simple instruction following can pose a notoriously difficult learning challenge, subsuming many long-term problems in AI. For example, a robot presented with the command "sweep the block into the drawer" must be able to relate language to low-level perception (what does a block look like? what is a drawer?). It must perform visual reasoning (what does it mean for the block to be in the drawer?). Additionally, it must solve a complex sequential decision problem (what commands do I send to my arm to "sweep"?). It can be noted these questions cover only a single task, whereas the generalist robot setting demands single agents that perform many tasks.

[0026] In some implementations, the setting of open-ended robotic manipulation can be combined with open-ended human language conditioning. Existing techniques typically include restricted observation spaces (e.g., games, 2D gridworlds), simplified actuators (e.g., binary pick-and-place primitives), and synthetic language data. Implementations herein are directed towards the combination of 1) human language instructions, 2) high-dimensional continuous sensory inputs and actuators, and/or 3) complex tasks like long-horizon robotic object manipulation. At test time, a single agent that can perform many tasks in a row can be considered in some implementations, where each task can be specified by a human in natural language. For example, "open the door all the way to the right...now pick up the block...now push the red button...now close the door". Furthermore, the agent should be able to perform any combination of subtasks in any order. This can be referred to as the "ask me anything" scenario, which can test aspects of generality, such as general-purpose control, learning from onboard sensors, and/or general task specification.

[0027] Existing techniques can provide a starting point for learning general-purpose skills from onboard sensors. However, like other methods that combine relabeling with image observations, existing techniques require that tasks be specified using a goal image to reach. While trivial in a simulator, this form of task specification can be impractical in open-world environments.

[0028] In some implementations, the system can extend existing techniques to the natural language setting by:

[0029] (1) Cover the space with teleoperated play. In some implementations, the system can collect a teleoperated "play" dataset. These long temporal state-action logs can (automatically) be relabeled into many short-horizon demonstrations, solving for image goals.

[0030] (2) Pair play with human language. Existing techniques typically pair instructions with optimal behavior. In contrast, in some implementations described herein, behavior from play can be paired after-the-fact with optimal instructions (i.e., Hindsight Instruction Pairing). This can yield a dataset of demonstrations, solving for human language goals.

[0031] (3) Multicontext imitation learning. In some implementations, a single policy can be trained to solve image and/or language goals. Additionally or alternatively, in some implementations, only language conditioning is used at test time. To make this possible, the system can utilize Multicontext Imitation Learning. Multicontext imitation learning can be highly data efficient. It reduces the cost of language pairing, for example, to less than a small percentage (e.g., less than 10%, less than 5%, less than 1%) of collected robot experience to enable language conditioning, with the majority of control still learned via self-supervised imitation.

[0032] (4) Condition on human language at test time. In some implementations, at test time, a single policy trained in this manner can perform many complex robotic manipulation skills in a row, directly from images, and specified entirely with natural language.

[0033] Additionally or alternatively, some implementations include transfer learning from unlabeled text corpora to robotic manipulation. A transfer learning augmentation can be used, which can be applicable to any language conditioned policy. In some implementations, this can improve downstream robotic manipulation. Importantly, this technique can allow the agent to follow novel instructions in zero shot (e.g., follow thousands of novel instructions, and/or follow instructions across multiple languages). Goal conditioned learning can be used to train a single agent to reach any goal. This can be formalized as a goal conditioned policy which outputs a next action a ∈ A, conditioned on current state s ∈ S and a task descriptor g ∈ G. Imitation approaches can learn this mapping using supervised learning over a dataset D of expert state-action trajectories τ = {(s_0, a_0), ...}, solving for a paired task descriptor (such as a one-hot task encoding). A convenient choice for a task descriptor can be some goal state g = s_g ∈ S. This can allow any state visited during collection to be relabeled as a "reached goal state", with the preceding states and actions treated as optimal behavior for reaching that goal. Applied to some original dataset D, this can yield a much larger dataset of relabeled examples providing the inputs to a simple maximum likelihood objective for goal directed control: relabeled goal conditioned behavioral cloning (GCBC):

[0034] $\mathcal{L}_{GCBC} = \mathbb{E}_{(\tau, s_g) \sim D_{\text{relabeled}}}\left[\sum_{t=0}^{|\tau|}\log \pi_\theta\left(a_t \mid s_t, s_g\right)\right]$

[0035] While relabeling can automatically generate a large number of goal-directed demonstrations at training time, it may not account for the diversity of those demonstrations, which may come entirely from the underlying data. The ability to reach any user-provided goal motivates data collection methods, upstream of relabeling, that fully cover state space.

[0036] Human teleoperated "play" collection can directly address the state space coverage problem. In this setting, an operator may no longer be constrained to a set of predefined tasks, but rather can engage in every available object manipulation in a scene. The motivation is to fully cover state space using prior human knowledge of object affordances. During collection, the stream of onboard robot observations and actions is recorded, yielding an unsegmented dataset of unstructured but semantically meaningful behaviors, which can be useful in a relabeled imitation learning context.

[0037] Learning from play can combine relabeled imitation learning with teleoperated play. First, unsegmented play logs are relabeled using Algorithm 2. This can yield a training set holding many diverse, short-horizon examples. In some implementations, these can be fed to a standard maximum likelihood goal conditioned imitation objective:

[0038] $\mathcal{L}_{LfP} = \mathbb{E}_{(\tau, s_g) \sim D_{\text{play}}}\left[\sum_{t=0}^{|\tau|}\log \pi_\theta\left(a_t \mid s_t, s_g\right)\right]$

[0039] A limitation of learning from play, and other approaches that combine relabeling with image state spaces, is that behavior must be conditioned on a goal image s_g at test time. Some implementations described herein can focus on a more flexible mode of conditioning: humans describing tasks in natural language. Succeeding at this may require solving a complicated grounding problem. To address this, Hindsight Instruction Pairing, a method for pairing large amounts of diverse robot sensor data with relevant human language, can be used. In some implementations, to leverage both image goal and language goal datasets, Multicontext Imitation Learning can be used. Additionally or alternatively, language learning from play (LangLfP) can be used, which ties together these components to learn a single policy that follows many human instructions over a long horizon.

[0040] From a statistical machine learning perspective, a candidate for grounding human language in robot sensor data is a large corpus of robot sensor data paired with relevant language. One way to collect this data is to choose an instruction, then collect optimal behavior. Additionally or alternatively, some implementations can sample any robot behavior from play, then collect an optimal instruction, which can be referred to as Hindsight Instruction Pairing (Algorithm 3). Much like how a hindsight goal image is an after-the-fact answer to the question "which goal state makes this trajectory optimal?", a hindsight instruction is an after-the-fact answer to the question "which language instruction makes this trajectory optimal?". In some implementations, these pairs can be obtained by showing humans onboard robot sensor videos, then asking them "what instruction would you give the agent to get from first frame to last frame?"

[0041] The hindsight instruction pairing process can assume access to D_play, which can be obtained using Algorithm 2, and a pool of non-expert human overseers. From D_play, a new dataset can be created, which consists of short-horizon play sequences τ paired with a human-provided hindsight instruction l ∈ L, with no restrictions on vocabulary and/or grammar.

[0042] In some implementations, this process can be scalable because pairing happens after-the-fact, making it straightforward to parallelize (e.g., via crowdsourcing). The language collected may also be naturally rich, as it sits on top of play and is similarly not constrained by an upfront task definition. This can result in instructions for functional behavior (e.g., "open the drawer", "press the green button"), as well as general non-task-specific behavior (e.g., "move your hand slightly to the left" or "do nothing"). In some implementations, it may be unnecessary to pair every experience from play with language to learn to follow instructions. This can be made possible with Multicontext Imitation Learning, described herein.

[0043] So far, a way to create two contextual imitation datasets has been described: D_play holding hindsight goal image examples and D_(play,lang) holding hindsight instruction examples. In some implementations, a single policy can be trained that is agnostic to either task description. This can allow the sharing of statistical strength over multiple datasets during training, and/or can allow the use of just language specification at test time.

[0044] With this motivation, some implementations use multicontext imitation learning (MCIL), a simple and/or widely applicable generalization of contextual imitation to multiple heterogeneous contexts. The main idea is to represent a large set of policies by a single, unified function approximator that can generalize over states, tasks, and/or task descriptions. MCIL can assume access to multiple imitation learning datasets D = {D^0, ..., D^K}, each with a different way of describing tasks. In some implementations, each dataset holds pairs of state-action trajectories τ paired with some context c ∈ C. For example, D^0 might contain demonstrations paired with one-hot task ids (a conventional multitask imitation learning dataset), D^1 might contain image goal demonstrations, and D^2 might contain language goal demonstrations.

[0045] Rather than train one policy per dataset, MCIL instead trains a single latent goal conditioned policy over all datasets simultaneously, learning to map each task description type to the same latent goal space z ∈ R^d. This latent space can be seen as a common abstract goal representation shared across many imitation learning problems. To make this possible, MCIL can assume a set of parameterized encoders, one per dataset, each responsible for mapping task descriptions of a particular type to the common latent goal space, i.e., f_θ^k : C^k → R^d. For instance, these could be a task id embedding lookup, an image encoder, and a language encoder, respectively, one or more additional or alternative encoders, and/or combinations thereof.
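As a non-limiting illustration of per-context encoders mapping into a shared latent goal space, the following Python sketch shows three minimal encoders. The layer sizes, the latent dimension, the averaging of subword embeddings, and the use of the PyTorch library are assumptions made for the example only.

import torch
from torch import nn

LATENT_GOAL_DIM = 32  # d, the shared latent goal space dimension (illustrative)

class TaskIdEncoder(nn.Module):
    """Maps an integer task id to the shared latent goal space."""
    def __init__(self, num_tasks):
        super().__init__()
        self.embedding = nn.Embedding(num_tasks, LATENT_GOAL_DIM)

    def forward(self, task_id):                   # (B,) int64
        return self.embedding(task_id)            # (B, d)

class GoalImageEncoder(nn.Module):
    """Maps a goal image to the shared latent goal space."""
    def __init__(self, channels=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.proj = nn.Linear(64, LATENT_GOAL_DIM)

    def forward(self, goal_image):                # (B, C, H, W)
        return self.proj(self.conv(goal_image))   # (B, d)

class LanguageEncoder(nn.Module):
    """Maps tokenized free-form instructions to the shared latent goal space."""
    def __init__(self, vocab_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, 64)
        self.proj = nn.Linear(64, LATENT_GOAL_DIM)

    def forward(self, token_ids):                 # (B, L) int64
        summary = self.embedding(token_ids).mean(dim=1)  # average subword embeddings
        return self.proj(summary)                 # (B, d)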

[0046] In some implementations, MCIL has a simple training procedure: at each training step, for each dataset D^k in D, sample a minibatch of trajectory-context pairs (τ^k, c^k) ~ D^k, encode the contexts in the latent goal space, then compute a simple maximum likelihood contextual imitation objective:

[0047] $\mathcal{L}_{k} = \mathbb{E}_{(\tau^k, c^k) \sim D^k}\left[\sum_{t=0}^{|\tau^k|}\log \pi_\theta\left(a_t \mid s_t, f_\theta^{k}(c^k)\right)\right]$

[0048] The full MCIL objective can average this per-dataset objective over all datasets at each training step,

[0049] $\mathcal{L}_{MCIL} = \frac{1}{K+1}\sum_{k=0}^{K}\mathcal{L}_{k},$

[0050] and the policy and all goal encoders are trained end to end to maximize $\mathcal{L}_{MCIL}$. See Algorithm 1 for full minibatch training pseudocode.

[0051] In some implementations, multicontext learning has properties that can make it broadly useful beyond learning from play. While the dataset D can be set to D = {D_play, D_(play,lang)} herein, this approach can be used more generally for training over any set of imitation datasets with different descriptions, e.g., task id, language, human video demonstration, speech, etc. Being context-agnostic can enable a highly efficient training scheme: learn the majority of control from the cheapest data source, while learning the most general form of task conditioning from a small number of labeled examples. In this way, multicontext learning can be interpreted as transfer learning through a shared goal space. This can reduce the cost of human oversight to the point where it can be practically applied. Multicontext learning can allow the training of an agent to follow human instructions with a small percentage (e.g., less than 10%, less than 5%, less than 1%, etc.) of collected robot experience requiring paired language, with the majority of control learned instead from relabeled goal image data.

[0052] In some implementations, language conditioned learning from play (LangLfP) is a special case of multicontext imitation learning. At a high level, LangLfP trains a single multicontext policy over datasets D = {D_play, D_(play,lang)}, consisting of hindsight goal image tasks and hindsight instruction tasks. In some implementations, the goal image encoder and the language instruction encoder can be neural network encoders mapping from image goals and instructions, respectively, to the same latent visuo-lingual goal space. LangLfP can learn perception, natural language understanding, and control end-to-end with no auxiliary losses.

[0053] Perception module. In some implementations, τ in each example consists of a sequence of onboard observations O_t and actions. Each observation can contain a high-dimensional image and/or an internal proprioceptive sensor reading. A learned perception module P_θ maps each observation tuple to a low-dimensional embedding that is fed to the rest of the network. This perception module can be shared with g_enc, which defines an additional network on top to map the encoded goal observation s_g to a point in z space.

[0054] Language module. In some implementations, the language goal encoder s_enc tokenizes raw text l into subwords, retrieves subword embeddings from a lookup table, and/or then summarizes the embeddings into a point in z space. Subword embeddings can be randomly initialized at the beginning of training and learned end-to-end by the final imitation loss.

[0055] Control module. Many architectures can be used to implement the multicontext policy. For example, Latent Motor Plans (LMP) can be used. LMP is a goal-directed imitation architecture that uses latent variables to model the large amount of multimodality inherent to freeform imitation datasets. Concretely, it can be a sequence-to-sequence conditional variational autoencoder (seq2seq CVAE) autoencoding contextual demonstrations through a latent "plan" space. The decoder is a goal conditioned policy. As a CVAE, LMP lower bounds maximum likelihood contextual imitation, and can be easily adapted to the multicontext setting.
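As a non-limiting sketch of the perception module P_θ described in paragraph [0053], the following Python example fuses an image and a proprioceptive reading into a low-dimensional state embedding. The network sizes, input shapes, and use of the PyTorch library are assumptions made for the example only.

import torch
from torch import nn

class PerceptionModule(nn.Module):
    """Illustrative P_theta: encodes one onboard observation O_t, consisting of
    an image and a proprioceptive reading, into a low-dimensional embedding."""
    def __init__(self, proprio_dim=8, state_dim=64):
        super().__init__()
        self.image_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(64 + proprio_dim, state_dim)

    def forward(self, image, proprio):
        # image: (B, 3, H, W); proprio: (B, proprio_dim)
        features = torch.cat([self.image_net(image), proprio], dim=-1)
        return self.fuse(features)  # (B, state_dim), fed to the rest of the network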

[0056] LangLfP training. LangLfP training can be compared with existing LfP training. At each training step, a batch of image goal tasks can be sampled from D_play, and a batch of language goal tasks can be sampled from D_(play,lang). Observations are encoded into the state space using the perception module P_θ. Image and language goals can be encoded into the latent goal space z using the encoders g_enc and s_enc. The policy can be used to compute the multicontext imitation objective, averaged over both task descriptions. In some implementations, a combined gradient step can be taken with respect to all modules (perception, language, and control), optimizing the whole architecture end-to-end as a single neural network.

[0057] Following human instructions at test time. At the beginning of a test episode the agent receives as input its onboard observation O_t and a human-specified natural language goal l. The agent encodes l in the latent goal space z using the trained sentence encoder s_enc. The agent then solves for the goal in closed loop, repeatedly feeding the current observation and goal to the learned policy, sampling actions, and executing them in the environment. The human operator can type a new language goal l at any time.
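The following Python sketch is a non-limiting illustration of the closed-loop test-time procedure described in paragraph [0057]. The robot interface (robot.get_observation, robot.apply_action), the tokenizer, and policy.sample_action are hypothetical names assumed for the example only.

import torch

def follow_instruction(instruction_text, tokenizer, language_encoder,
                       perception, policy, robot, max_steps=200):
    """Illustrative closed-loop control from a natural language goal."""
    with torch.no_grad():
        tokens = tokenizer(instruction_text)            # (1, L) token ids
        latent_goal = language_encoder(tokens)          # (1, d) latent goal z
        for _ in range(max_steps):
            image, proprio = robot.get_observation()
            state = perception(image, proprio)          # encode current observation
            action = policy.sample_action(state, latent_goal)
            robot.apply_action(action)                  # execute in the environment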

[0058] Large "in the wild" natural language corpora can reflect substantial human knowledge about the world. Many recent works have successfully transferred this knowledge to downstream tasks in NLP via pretrained embeddings. In some implementations described herein, similar knowledge transfer can be achieved for robotic manipulation.

[0059] There are many benefits to this type of transfer. First, if there is a semantic match between the source corpora and the target environment, more structured inputs may act as a strong prior, shaping grounding or control. Additionally or alternatively, language embeddings have been shown to encode similarity between large numbers of words and sentences. This may allow an agent to follow many novel instructions in zero shot, provided they are sufficiently "close" to ones it has been trained to follow. Note, given the complexity of natural language, it may be likely that robots in open-world environments will need to be able to follow synonym commands outside of a particular training set.

[0060] Algorithm 1 Multicontext imitation learning

[0061] Input: One dataset per context type (e.g. goal image, language instruction, task id), each holding pairs of (demonstration, context).

[0062] Input: One encoder per context type, mapping context to the shared latent goal space, e.g., f_θ^0, ..., f_θ^K.

[0063] Input: Single latent goal conditioned policy.

[0064] Input: Randomly initialized parameters θ.

[0065] while True do

[0066]

[0067] # Loop over datasets.

[0068] for k = 0 ... K do

[0069] # Sample a (demonstration, context) batch from this dataset.

[0070]

[0071] # Encode context in shared latent goal space.

[0072]

[0073] # Accumulate imitation loss.

[0074]

[0075] end for

[0076] # Average gradients over context types.

[0077]

[0078] # Train policy and all encoders end-to-end.

[0079] Update θ by taking a gradient step

[0080] end while
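For illustration only, the following Python sketch restates the loop of Algorithm 1. The dataset iterators, the encoders list, and policy.imitation_loss are hypothetical interfaces assumed for the example; the use of the PyTorch library is likewise an assumption.

import torch

def train_mcil(datasets, encoders, policy, optimizer, num_steps):
    """Illustrative multicontext imitation learning loop: datasets[k] yields
    (trajectories, contexts) minibatches, encoders[k] maps contexts of type k
    to the shared latent goal space, and `policy` is the single latent-goal
    conditioned policy."""
    iterators = [iter(d) for d in datasets]
    for _ in range(num_steps):
        optimizer.zero_grad()
        losses = []
        # Loop over datasets (context types).
        for k, dataset in enumerate(datasets):
            try:
                trajectories, contexts = next(iterators[k])
            except StopIteration:
                iterators[k] = iter(dataset)
                trajectories, contexts = next(iterators[k])
            # Encode context in the shared latent goal space.
            latent_goals = encoders[k](contexts)
            # Accumulate the per-dataset imitation (behavioral cloning) loss.
            losses.append(policy.imitation_loss(trajectories, latent_goals))
        # Average over context types, then train policy and all encoders end-to-end.
        loss = torch.stack(losses).mean()
        loss.backward()
        optimizer.step()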

[0081] Algorithm 2 Creating millions of goal image conditioned imitation examples from teleoperated play.

[0082] Input: the unsegmented stream of observations and actions recorded during play.

[0083] Input:

[0084] Input: w_low, w_high, bounds on hindsight window size.

[0085] while True do

[0086] # Get next play episode from stream.

[0087]

[0088] for w = w_low ... w_high do

[0089] for i = 0 ... (t − w) do

[0090] # Select each w-sized window.

[0091]

[0092] # Treat last observation in window as goal.

[0093] s_g = s_w

[0094]

[0095] end for

[0096] end for

[0097] end while
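For illustration only, the following Python sketch shows one way the relabeling of Algorithm 2 could be implemented. The representation of the play stream as a list of episodes, each a list of (observation, action) pairs, is an assumption made for the example.

def relabel_play_stream(play_episodes, w_low, w_high):
    """Illustrative Algorithm 2: turn an unsegmented play stream into many
    short-horizon goal-image-conditioned imitation examples."""
    d_play = []
    for episode in play_episodes:
        t = len(episode)
        for w in range(w_low, w_high + 1):
            for i in range(0, t - w + 1):
                # Select each w-sized window as an imitation trajectory.
                window = episode[i:i + w]
                # Treat the last observation in the window as the goal image.
                goal_image = window[-1][0]
                d_play.append((window, goal_image))
    return d_play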

[0098] Algorithm 3 Pairing robot sensor data with natural language instructions.

[0099] Input: D_play, a relabeled play dataset holding (τ, s_g) pairs.

[00100] Input:

[00101] Input: get_hindsight_instruction(): human overseer, providing after-the-fact natural language instructions for a given τ.

[00102] Input: K, number of pairs to generate.

[00103] for 0 ... K do

[00104] # Sample random trajectory from play.

[00105]

[00106] # Ask human for instruction making τ optimal.

[00107]

[00108]

[00109] end for
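For illustration only, the following Python sketch shows one way the pairing loop of Algorithm 3 could be implemented. The callable get_hindsight_instruction stands in for the human overseer (e.g., a crowdsourcing request) and is an assumption made for the example.

import random

def pair_play_with_language(d_play, get_hindsight_instruction, num_pairs):
    """Illustrative Algorithm 3: sample random relabeled play trajectories and
    obtain an after-the-fact instruction describing each one."""
    d_play_lang = []
    for _ in range(num_pairs):
        # Sample a random trajectory from play.
        trajectory, _goal_image = random.choice(d_play)
        # Ask a human which instruction makes this trajectory optimal.
        instruction = get_hindsight_instruction(trajectory)
        d_play_lang.append((trajectory, instruction))
    return d_play_lang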

[00110] Turning now to the figures, example robot 100 is illustrated in FIG. 1. Robot 100 is a "robot arm" having multiple degrees of freedom to enable traversal of grasping end effector 102 along any of a plurality of potential paths to position the grasping end effector 102 in a desired location. Robot 100 further controls the two opposed "claws" of its grasping end effector 102 to actuate the claws between at least an open position and a closed position (and/or optionally a plurality of "partially closed" positions).

[00111] Example vision component 106 is also illustrated in FIG. 1. In FIG. 1, vision component 106 is mounted at a fixed pose relative to the base or other stationary reference point of robot 100. Vision component 106 includes one or more sensors that can generate images and/or other vision data related to shape, color, depth, and/or other features of object(s) that are in the line of sight of the sensors. The vision component 106 may be, for example, a monographic camera, a stereographic camera, and/or a 3D laser scanner. A 3D laser scanner may be, for example, a time-of-flight 3D laser scanner or a triangulation based 3D laser scanner and may include a position sensitive detector (PSD) or other optical position sensor.

[00112] The vision component 106 has a field of view of at least a portion of the workspace of the robot 100, such as the portion of the workspace that includes example objects 104. Although resting surface(s) for objects 104 are not illustrated in FIG. 1, those objects may rest on a table, a tray, and/or other surface(s). Objects 104 may include a spatula, a stapler, and a pencil. In other implementations, more objects, fewer objects, additional objects, and/or alternative objects may be provided during all or portions of grasp attempts of robot 100 as described herein.

[00113] Although a particular robot 100 is illustrated in FIG. 1, additional and/or alternative robots may be utilized, including additional robot arms that are similar to robot 100, robots having other robot arm forms, robots having a humanoid form, robots having an animal form, robots that move via one or more wheels (e.g., self-balancing robots), submersible vehicle robots, an unmanned aerial vehicle ("UAV"), and so forth. Also, although particular grasping end effectors are illustrated in FIG. 1, additional and/or alternative end effectors may be utilized, such as alternative impactive grasping end effectors (e.g., those with grasping "plates", those with more or fewer "digits"/"claws"), ingressive grasping end effectors, astrictive grasping end effectors, contigutive grasping end effectors, or non-grasping end effectors. Additionally, although a particular mounting of vision component 106 is illustrated in FIG. 1, additional and/or alternative mountings may be utilized. For example, in some implementations, vision components may be mounted directly to robots, such as on non-actuable components of the robots or on actuable components of the robots (e.g., on the end effector or on a component close to the end effector). Also, for example, in some implementations, a vision component may be mounted on a non-stationary structure that is separate from its associated robot and/or may be mounted in a non-stationary manner on a structure that is separate from its associated robot.

[00114] Data from robot 100 (e.g., vision data captured using vision component 106), along with natural language instruction(s) 130, captured using user interface input device(s) 128, can be utilized by action output engine 108, to generate action output. In some implementations, robot 100 can be controlled (e.g., one or more actuators of robot 100 can be controlled) to perform one or more actions based on the action output. In some implementations, user interface input device(s) 128 may include, for example, a physical keyboard, a touch screen (e.g., implementing a virtual keyboard or other textual input mechanisms), a microphone, and/or a camera. In some implementations, the natural language instruction(s) 130 can be free-form natural language instruction(s).

[00115] In some implementations, latent goal engine 110 can process natural language instruction 130, using natural language instruction encoder 114, to generate a latent goal representation of the natural language instruction. For example, a keyboard user interface input device 128 can capture a natural language instruction of "push the green button". Latent goal engine 110 can process the natural language instruction 130 of "push the green button", using natural language instruction encoder 114, to generate a latent goal representation of "push the green button".
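As one non-limiting illustration of how natural language instruction encoder 114 might be structured, the following PyTorch sketch maps a tokenized instruction such as "push the green button" to a fixed-size latent goal vector. The architecture, vocabulary size, dimensionalities, and example token ids shown are assumptions chosen for brevity, not parameters specified by the disclosure.

```python
import torch
from torch import nn

class InstructionEncoder(nn.Module):
    """Illustrative encoder mapping instruction token ids to a latent goal vector."""

    def __init__(self, vocab_size=2048, embed_dim=64, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.to_latent = nn.Linear(embed_dim, latent_dim)

    def forward(self, token_ids):             # token_ids: (batch, seq_len) int64
        embedded = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, hidden = self.rnn(embedded)        # hidden: (1, batch, embed_dim)
        return self.to_latent(hidden[-1])     # latent goal: (batch, latent_dim)

if __name__ == "__main__":
    # Hypothetical token ids standing in for "push the green button".
    latent_goal = InstructionEncoder()(torch.tensor([[12, 7, 301, 45]]))
    print(latent_goal.shape)                  # torch.Size([1, 32])
```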

[00116] In some implementations, goal image training instance engine 126 can be used to generate goal image training instance(s) 124 based on teleoperated "play" data 122. Teleoperated "play" data 122 can be generated by a human controlling a robot in an environment, where the human controller does not have defined tasks to perform. In some implementations, each goal image training instance 124 can include an imitation trajectory portion and a goal image portion, where the goal image portion describes a robot task. For example, a goal image can be an image of a closed drawer, which can describe robot action(s) of closing the drawer. As another example, a goal image can be an image of an open drawer, which can describe robot action(s) of opening the drawer. In some implementations, goal image training instance engine 126 can select a sequence of image frames from a teleoperated play data stream. Goal image training instance engine 126 can generate one or more goal image training instances by storing the selected sequence of image frames as the imitation trajectory portion of a training instance, and storing the last image frame of the sequence of image frames as the goal image portion of the training instance. In some implementations, goal image training instance(s) 124 can be generated in accordance with process 400 of FIG. 4 described herein.

[00117] In some implementations, natural language instruction training instance engine 120 can be used to generate natural language training instance(s) 118 using teleoperated play data 122. Natural language instruction training instance engine 120 can select a sequence of image frames from a data stream of teleoperated play data 122. In some implementations, a human describer can provide a natural language instruction describing the task being performed by the robot in the selected sequence of image frames. In some implementations, multiple human describers can provide natural language instructions describing the task being performed by the robot in the same selected sequence of image frames. Additionally or alternatively, multiple human describers can provide natural language instructions describing the tasks being performed in distinct sequences of image frames. In some implementations, multiple human describers can provide natural language instructions in parallel. Natural language instruction training instance engine 120 can generate one or more natural language instruction training instances by storing the selected sequence of image frames as an imitation trajectory portion of a training instance, and storing the human provided natural language instruction as the natural language instruction portion of the training instance. In some implementations, natural language training instance(s) 118 can be generated in accordance with process 500 of FIG. 5 described herein.

[00118] In some implementations, training engine 116 can be used to train goal-conditioned policy network 112, natural language instruction encoder 114, and/or goal image encoder 132. In some implementations, goal-conditioned policy network 112, natural language instruction encoder 114, and/or goal image encoder 132 can be trained in accordance with process 600 of FIG. 6 described herein.

[00119] FIG. 2 illustrates an example of generating action output 208 in accordance with a variety of implementations. Example 200 includes receiving natural language instruction input 202 (e.g., receiving natural language instruction input via one or more user interface input devices 128 of FIG. 1). In some implementations, natural language instruction input 202 can be free-form natural language input. In some implementations, natural language instruction input 202 can be text natural language input. Natural language instruction encoder 114 can process natural language instruction input 202 to generate a latent goal space representation of the natural language instruction 204. Goal-conditioned policy network 112 can be used to process latent goal 204 along with a current instance of vision data 206 (e.g., an instance of vision data captured via vision component 106 of FIG. 1), to generate action output 208. In some implementations, action output 208 can describe one or more actions for a robot to perform the task instructed by natural language instruction input 202. In some implementations, one or more actuators of a robot (e.g., robot 100 of FIG. 1) can be controlled based on action output 208 for the robot to perform the task indicated by the natural language instruction input 202.
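The data flow of FIG. 2 can be summarized by the following sketch of a closed-loop control routine. The robot interface methods (capture_vision, apply_action) and the tokenizer are hypothetical placeholders, and the encoder and policy are assumed to be callables such as trained network modules; none of these names are specified by the disclosure.

```python
def run_language_conditioned_episode(robot, instruction_encoder, goal_conditioned_policy,
                                     tokenize, instruction, max_steps=200):
    """Closed-loop control mirroring FIG. 2: encode the instruction once, then act each step."""
    latent_goal = instruction_encoder(tokenize(instruction))          # latent goal 204
    for _ in range(max_steps):
        vision = robot.capture_vision()                               # vision data 206
        action_output = goal_conditioned_policy(vision, latent_goal)  # action output 208
        robot.apply_action(action_output)                             # drive the actuators
```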

[00120] FIG. 3 is a flowchart illustrating a process 300 of generating output, using a goal-conditioned policy network in controlling a robot, based on a natural language instruction, in accordance with implementations disclosed herein. For convenience, the operations of the flowchart are described with reference to a system that performs the operations. This system may include various components of various computer systems, such as one or more components of robot 100, robot 725, and/or computing system 810. Moreover, while operations of process 300 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, and/or added.

[00121] At block 302, the system receives a natural language instruction describing a task for a robot. For example, the system can receive a natural language instruction of "push the red button", "close the door", "pick up the screwdriver", and/or additional or alternative natural language instructions describing a task to be performed by a robot.

[00122] At block 304, the system processes the natural language instruction using a natural language encoder to generate a latent space representation of the natural language instruction.

[00123] At block 306, the system receives an instance of vision data capturing at least part of an environment of a robot.

[00124] At block 308, the system generates output based on processing, using a goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the natural language instruction.

[00125] At block 310, the system controls one or more actuators of the robot based on the generated output.

[00126] Process 300 of FIG. 3 is described with respect to controlling a robot based on natural language instructions. In additional or alternative implementations, the system can control a robot based on goal images, task IDs, speech, etc. in place of the natural language instructions or in addition to the natural language instructions. For example, a system can control a robot based on natural language instructions and goal image instructions, where the natural language instructions are processed using a corresponding natural language instruction encoder and the goal images are processed using a corresponding goal image encoder.
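Because the modality-specific encoders are trained to emit representations in a shared latent goal space, conditioning the policy on a different kind of task description can amount to swapping encoders. The following illustrative dispatch function shows one possible convention for such routing; the type-based check is an assumption of the sketch.

```python
def encode_task_description(task_description, instruction_encoder, goal_image_encoder):
    """Map a task description to the shared latent goal space.

    Free-form text goes through the natural language instruction encoder;
    anything else (e.g., a goal image array) goes through the goal image
    encoder. Additional encoders (task IDs, speech) could be routed similarly.
    """
    if isinstance(task_description, str):
        return instruction_encoder(task_description)
    return goal_image_encoder(task_description)
```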

[00127] FIG. 4 is a flowchart illustrating a process 400 of generating goal image training instance(s) in accordance with implementations disclosed herein. For convenience, the operations of the flowchart are described with reference to a system that performs the operations. This system may include various components of various computer systems, such as one or more components of robot 100, robot 725, and/or computing system 810. Moreover, while operations of process 400 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, and/or added.

[00128] At block 402, the system receives a data stream capturing teleoperated play data.

[00129] At block 404, the system selects a sequence of image frames from the data stream. For example, the system can select a one second sequence of image frames in the data stream, a two second sequence of image frames in the data stream, a ten second sequence of image frames in the data stream, and/or additional or alternative lengths of segments of image frames in the data stream.

[00130] At block 406, the system determines the final image frame in the selected sequence of image frames.

[00131] At block 408, the system stores a training instance including (1) the sequence of image frames as an imitation trajectory portion of the training instance and (2) the final image frame as a goal image portion of the training instance. In other words, the system stores the final image as the goal image describing the task captured in the sequence of image frames.

[00132] At block 410, the system determines whether to generate an additional training instance. In some implementations, the system can determine to generate additional training instances until one or more conditions are satisfied. For example, the system can continue to generate training instances until a threshold number of training instances are generated, until the entire data stream has been processed, and/or until additional or alternative conditions have been satisfied. If the system determines to generate an additional training instance, the system proceeds back to block 404, selects an additional sequence of image frames from the data stream, and performs an additional iteration of blocks 406 and 408 based on the additional sequence of image frames. If not, the process ends.
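For concreteness, the following sketch implements the selection, final-frame determination, and storage steps of process 400 as a simple hindsight relabeling loop over a list of play frames. The window length and the dictionary layout of a training instance are assumptions of the sketch.

```python
import random

def make_goal_image_training_instances(play_frames, num_instances, window_len=32):
    """Generate goal image training instances from teleoperated play frames (a sketch)."""
    instances = []
    for _ in range(num_instances):
        # Select a sequence of image frames from the play data stream.
        start = random.randrange(len(play_frames) - window_len)
        window = play_frames[start:start + window_len]
        # Store the window as the imitation trajectory and its final frame as the goal image.
        instances.append({
            "imitation_trajectory": window,
            "goal_image": window[-1],
        })
    return instances
```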

[00133] FIG. 5 is a flowchart illustrating a process 500 of generating natural language instruction training instance(s) in accordance with implementations disclosed herein. For convenience, the operations of the flowchart are described with reference to a system that performs the operations. This system may include various components of various computer systems, such as one or more components of robot 100, robot 725, and/or computing system 810. Moreover, while operations of process 500 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, and/or added.

[00134] At block 502, the system receives a data stream capturing teleoperated play data.

[00135] At block 504, the system selects a sequence of image frames from the data stream. For example, the system can select a one second sequence of image frames in the data stream, a two second sequence of image frames in the data stream, a ten second sequence of image frames in the data stream, and/or additional or alternative lengths of segments of image frames in the data stream.

[00136] At block 506, the system receives a natural language instruction describing the task in the selected sequence of image frames.

[00137] At block 508, the system stores a training instance including (1) the sequence of image frames as an imitation trajectory portion of the training instance and (2) the received natural language instruction describing the task as the natural language instruction portion of the training instance.

[00138] At block 510, the system determines whether to generate an additional training instance. In some implementations, the system can determine to generate additional training instances until one or more conditions are satisfied. For example, the system can continue to generate training instances until a threshold number of training instances are generated, until the entire data stream has been processed, and/or until additional or alternative conditions have been satisfied. If the system determines to generate an additional training instance, the system proceeds back to block 504, selects an additional sequence of image frames from the data stream, and performs an additional iteration of blocks 506 and 508 based on the additional sequence of image frames. If not, the process ends.

[00139] FIG. 6 is a flowchart illustrating a process 600 of training a goal-conditioned policy network, a natural language instruction encoder, and/or a goal image encoder in accordance with implementations disclosed herein. For convenience, the operations of the flowchart are described with reference to a system that performs the operations. This system may include various components of various computer systems, such as one or more components of robot 100, robot 725, and/or computing system 810. Moreover, while operations of process 600 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, and/or added.

[00140] At block 602, the system selects a goal image training instance including (1) an imitation trajectory and (2) a goal image.

[00141] At block 604, the system processes the goal image using a goal image encoder to generate a latent goal space representation of the goal image.

[00142] At block 606, the system processes at least (1) an initial image frame of the imitation trajectory and (2) the latent space representation of the goal image, using a goal-conditioned policy network, to generate candidate output.

[00143] At block 608, the system determines a goal image loss based on (1) the candidate output and (2) at least a portion of the imitation trajectory.

[00144] At block 610, the system selects a natural language instruction training instance including (1) an additional imitation trajectory and (2) a natural language instruction.

[00145] At block 612, the system processes the natural language instruction portion of the natural language instruction training instance using a natural language encoder to generate a latent space representation of the natural language instruction.

[00146] At block 614, the system processes (1) an initial image frame of the additional imitation trajectory and (2) the latent space representation of the natural language instruction, using the goal-conditioned policy network, to generate additional candidate output.

[00147] At block 616, the system determines a natural language loss based on (1) the additional candidate output and (2) at least a portion of the additional imitation trajectory.

[00148] At block 618, the system generates a goal-conditioned loss based on (1) the goal image loss and (2) the natural language instruction loss.
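A single iteration of process 600 can be sketched as follows, with blocks 602 through 618 computing the two losses and their combination, and the parameter update of block 620 (described below) shown for completeness. The policy, encoders, optimizer, and behavior-cloning loss are assumed to be differentiable modules (e.g., PyTorch callables), and the training-instance layout is illustrative rather than mandated by the disclosure.

```python
def goal_conditioned_training_step(policy, goal_image_encoder, instruction_encoder,
                                   image_instance, language_instance, optimizer,
                                   behavior_cloning_loss):
    """One illustrative training iteration over one instance from each data set.

    `behavior_cloning_loss(candidate_output, imitation_trajectory)` is assumed to
    return a differentiable scalar comparing the policy's candidate output against
    the actions in the imitation trajectory.
    """
    # Goal-image branch (blocks 602-608): encode the goal image, process the initial
    # frame with the policy, and score the candidate output against the trajectory.
    latent_image_goal = goal_image_encoder(image_instance["goal_image"])
    image_candidate = policy(image_instance["imitation_trajectory"][0], latent_image_goal)
    goal_image_loss = behavior_cloning_loss(image_candidate,
                                            image_instance["imitation_trajectory"])

    # Natural language branch (blocks 610-616): same procedure, but the task is
    # described by a free-form instruction encoded into the shared latent space.
    latent_text_goal = instruction_encoder(language_instance["instruction"])
    text_candidate = policy(language_instance["imitation_trajectory"][0], latent_text_goal)
    language_loss = behavior_cloning_loss(text_candidate,
                                          language_instance["imitation_trajectory"])

    # Blocks 618-620: combine the two losses and update the policy and encoders.
    goal_conditioned_loss = goal_image_loss + language_loss
    optimizer.zero_grad()
    goal_conditioned_loss.backward()
    optimizer.step()
    return float(goal_conditioned_loss)
```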

[00149] At block 620, the system updates one or more portions of the goal-conditioned policy network, the goal image encoder, and/or the natural language instruction encoder based on the goal-conditioned loss.

[00150] At block 622, the system determines whether to perform additional training on the goal-conditioned policy network, the goal image encoder, and/or the natural language instruction encoder. In some implementations, the system can determine to perform more training if there are one or more additional unprocessed training instances and/or if other criterion/criteria are not yet satisfied. The other criterion/criteria can include, for example, whether a threshold number of epochs have occurred and/or a threshold duration of training has occurred. Training in process 600 may utilize non-batch learning techniques, batch learning techniques, and/or additional or alternative techniques. If the system determines to perform additional training, the system proceeds back to block 602, selects an additional goal image training instance, performs an additional iteration of blocks 604, 606, and 608 based on the additional goal image training instance, selects an additional natural language instruction training instance at block 610, performs an additional iteration of blocks 612, 614, and 616 based on the additional natural language instruction training instance, and performs an additional iteration of blocks 618 and 620 based on the additional goal image training instance and the additional natural language instruction training instance. If not, the process ends.

[00151] FIG. 7 schematically depicts an example architecture of a robot 725. The robot 725 includes a robot control system 760, one or more operational components 740a-740n, and one or more sensors 742a-742m. The sensors 742a-742m may include, for example, vision components, light sensors, pressure sensors, pressure wave sensors (e.g., microphones), proximity sensors, accelerometers, gyroscopes, thermometers, barometers, and so forth.

While sensors 742a-m are depicted as being integral with robot 725, this is not meant to be limiting. In some implementations, sensors 742a-m may be located external to robot 725, e.g., as standalone units.

[00152] Operational components 740a-740n may include, for example, one or more end effectors and/or one or more servo motors or other actuators to effectuate movement of one or more components of the robot. For example, the robot 725 may have multiple degrees of freedom and each of the actuators may control actuation of the robot 725 within one or more of the degrees of freedom responsive to the control commands. As used herein, the term actuator encompasses a mechanical or electrical device that creates motion (e.g., a motor), in addition to any driver(s) that may be associated with the actuator and that translate received control commands into one or more signals for driving the actuator. Accordingly, providing a control command to an actuator may comprise providing the control command to a driver that translates the control command into appropriate signals for driving an electrical or mechanical device to create desired motion.

[00153] The robot control system 760 may be implemented in one or more processors, such as a CPU, GPU, and/or other controller(s) of the robot 725. In some implementations, the robot 725 may comprise a "brain box" that may include all or aspects of the control system 760. For example, the brain box may provide real time bursts of data to the operational components 740a-n, with each of the real time bursts comprising a set of one or more control commands that dictate, inter alia, the parameters of motion (if any) for each of one or more of the operational components 740a-n. In some implementations, the robot control system 760 may perform one or more aspects of processes 300, 400, 500, 600, and/or other method(s) described herein.

[00154] As described herein, in some implementations all or aspects of the control commands generated by control system 760 in positioning an end effector to grasp an object may be based on end effector commands generated using a goal-conditioned policy network. For example, a vision component of the sensors 742a-m may capture environment state data. This environment state data may be processed, along with robot state data, using the goal-conditioned policy network to generate the one or more end effector control commands for controlling the movement and/or grasping of an end effector of the robot. Although control system 760 is illustrated in FIG. 7 as an integral part of the robot 725, in some implementations, all or aspects of the control system 760 may be implemented in a component that is separate from, but in communication with, robot 725. For example, all or aspects of control system 760 may be implemented on one or more computing devices that are in wired and/or wireless communication with the robot 725, such as computing device 810.

[00155] FIG. 8 is a block diagram of an example computing device 810 that may optionally be utilized to perform one or more aspects of techniques described herein. Computing device 810 typically includes at least one processor 814 which communicates with a number of peripheral devices via bus subsystem 812. These peripheral devices may include a storage subsystem 824, including, for example, a memory subsystem 825 and a file storage subsystem 826, user interface output devices 820, user interface input devices 822, and a network interface subsystem 816. The input and output devices allow user interaction with computing device 810. Network interface subsystem 816 provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices.

[00156] User interface input devices 822 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computing device 810 or onto a communication network.

[00157] User interface output devices 820 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computing device 810 to the user or to another machine or computing device.

[00158] Storage subsystem 824 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 824 may include the logic to perform selected aspects of the processes of FIGS. 3, 4, 5, 6, and/or other methods described herein.

[00159] These software modules are generally executed by processor 814 alone or in combination with other processors. Memory 825 used in the storage subsystem 824 can include a number of memories including a main random access memory (RAM) 830 for storage of instructions and data during program execution and a read only memory (ROM) 832 in which fixed instructions are stored. A file storage subsystem 826 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 826 in the storage subsystem 824, or in other machines accessible by the processor(s) 814.

[00160] Bus subsystem 812 provides a mechanism for letting the various components and subsystems of computing device 810 communicate with each other as intended. Although bus subsystem 812 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.

[00161] Computing device 810 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 810 depicted in FIG. 8 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device 810 are possible having more or fewer components than the computing device depicted in FIG. 8.

[00162] In situations in which the systems described herein collect personal information about users (or as often referred to herein, "participants"), or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. Also, certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed. For example, a user's identity may be treated so that no personal identifiable information can be determined for the user, or a user's geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and/or used.

[00163] In some implementations, a method implemented by one or more processors is provided, the method includes receiving a free-form natural language instruction describing a task for a robot, the free-form natural language instruction generated based on user interface input provided by a user via one or more user interface input devices. In some implementations, the method includes processing the free-form natural language instruction using a natural language instruction encoder, to generate a latent goal representation of the free-form natural language instruction. In some implementations, the method includes receiving an instance of vision data, the instance of vision data generated by at least one vision component of the robot, and the instance of vision data capturing at least part of an environment of the robot. In some implementations, the method includes generating output based on processing, using a goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction, wherein the goal-conditioned policy network is trained based on at least (i) a goal image set of training instances, in which training tasks are described using goal images, and (ii) a natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions. In some implementations, the method includes controlling one or more actuators of the robot based on the generated output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one action indicated by the generated output.

[00164] These and other implementations of the technology disclosed herein can include one or more of the following features.

[00165] In some implementations, the method includes receiving an additional free-form natural language instruction describing an additional task for the robot, the additional free-form natural language instruction generated based on additional user interface input provided by the user via the one or more user interface input devices. In some implementations, the method includes processing the additional free-form natural language instruction using the natural language instruction encoder, to generate an additional latent goal representation of the additional free-form natural language instruction. In some implementations, the method includes receiving an additional instance of vision data generated by the at least one vision component of the robot. In some implementations, the method includes generating, using the goal-conditioned policy network, additional output based on processing at least (a) the additional instance of vision data and (b) the additional latent goal representation of the additional free-form natural language instruction. In some implementations, the method includes controlling the one or more actuators of the robot based on the generated additional output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one additional action indicated by the generated additional output.

[00166] In some implementations, the additional task for the robot is distinct from the task for the robot.

[00167] In some implementations, each training instance in the goal image set of training instances, in which training tasks are described using goal images, includes an imitation trajectory provided by a human and a goal image describing the training task performed by the robot in the imitation trajectory. In some implementations, generating each training instance in the goal image set of training instances includes receiving a data stream, capturing the state of the robot and corresponding actions of the robot, while the human is controlling the robot to interact with the environment. In some implementations, the method includes, for each training instance in the goal image set of training instances, selecting a sequence of image frames from the data stream, selecting the last image frame in the sequence of image frames as a training goal image describing the training task performed in the sequence of image frames, and generating the training instance by storing, as the training instance, the selected sequence of image frames as the imitation trajectory portion of the training instance, and the training goal image as the goal image portion of the training instance.

[00168] In some implementations, each training instance, in the natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions, includes an imitation trajectory provided by a human and a free-form natural language instruction describing the training task performed by the robot in the imitation trajectory. In some implementations, generating each training instance in the natural language instruction set of training instances includes receiving a data stream, capturing the state of the robot and corresponding actions of the robot, while the human is controlling the robot to interact with the environment. In some implementations, the method includes, for each training instance in the natural language instruction set of training instances, selecting a sequence of image frames from the data stream, providing the sequence of image frames to a human reviewer, receiving a training free-form natural language instruction describing a training task performed by the robot in the sequence of image frames, and generating the training instance by storing, as the training instance, the selected sequence of image frames as the imitation trajectory portion of the training instance, and the training free-form natural language instruction as the free-form natural language instruction portion of the training instance.

[00169] In some implementations, training the goal-conditioned policy network, based on at least (i) the goal image set of training instances, in which training tasks are described using goal images, and (ii) the natural language instruction set of training instances, in which training tasks are described using free-form natural language instructions, includes selecting a first training instance from the goal image set of training instances, wherein the first training instance includes a first imitation trajectory and a first goal image describing the first imitation trajectory. In some implementations, the method includes generating a latent space representation of the first goal image by processing, using a goal image encoder, the first goal image portion of the first training instance. In some implementations, the method includes processing, using the goal-conditioned policy network, at least (1) the initial image frame in the first imitation trajectory and (2) the latent space representation of the first goal image portion of the first training instance, to generate first candidate output. In some implementations, the method includes determining a goal image loss based on the first candidate output and one or more portions of the first imitation trajectory. In some implementations, the method includes selecting a second training instance from the natural language instruction set of training instances, wherein the second training instance includes a second imitation trajectory and a second free-form natural language instruction describing the second imitation trajectory. In some implementations, the method includes generating a latent space representation of the second free-form natural language instruction by processing, using the natural language encoder, the second free-form natural language instruction portion of the second training instance, wherein the latent space representation of the first goal image and the latent space representation of the second free-form natural language instruction are represented in a shared latent space. In some implementations, the method includes processing, using the goal-conditioned policy network, at least (1) the initial image frame in the second imitation trajectory and (2) the latent space representation of the second free-form natural language instruction portion of the second training instance, to generate second candidate output. In some implementations, the method includes determining a natural language instruction loss based on the second candidate output and one or more portions of the second imitation trajectory. In some implementations, the method includes determining a goal-conditioned loss based on the goal image loss and the natural language instruction loss. In some implementations, the method includes updating one or more portions of the goal image encoder, the natural language instruction encoder, and/or the goal-conditioned policy network, based on the determined goal-conditioned loss.

[00170] In some implementations, the goal-conditioned policy network is trained, based on a first quantity of training instances of the goal image set of training instances, and a second quantity of training instances of the natural language instruction set of training instances, wherein the second quantity is less than fifty percent of the first quantity. In some implementations, the second quantity is less than ten percent of the first quantity, less than five percent of the first quantity, or less than one percent of the first quantity.
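The asymmetric data mixture described above can be realized, for example, by sampling each training batch mostly from the goal image set and only sparsely from the natural language instruction set, as in the following sketch. The batch size and the one-percent language fraction shown here are illustrative values, not requirements of the disclosure.

```python
import random

def sample_mixed_batch(goal_image_instances, language_instances,
                       batch_size=64, language_fraction=0.01):
    """Draw a batch dominated by goal-image instances with a small language share."""
    num_language = max(1, int(batch_size * language_fraction))
    batch = random.sample(language_instances, num_language)
    batch += random.sample(goal_image_instances, batch_size - num_language)
    random.shuffle(batch)
    return batch
```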

[00171] In some implementations, the generated output includes a probability distribution over an action space of the robot, and wherein controlling the one or more actuators based on the generated output comprises selecting the at least one action based on the at least one action with the highest probability in the probability distribution.
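Selecting the highest-probability action from such a distribution can be as simple as an argmax over a discretized action space, as in the following sketch; the discretized action space is an assumption of the example.

```python
import numpy as np

def select_highest_probability_action(action_probabilities, action_space):
    """Pick the action whose probability is highest in the policy's output distribution."""
    return action_space[int(np.argmax(action_probabilities))]
```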

[00172] In some implementations, the generating output based on processing, using the goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction further includes generating output based on processing, using the goal-conditioned policy network, (c) the at least one action, and wherein controlling the one or more actuators based on the generated output comprises selecting the at least one action based on the at least one action satisfying a threshold probability.

[00173] In some implementations, a method implemented by one or more processors is provided, the method includes receiving a free-form natural language instruction describing a task for the robot, the free-form natural language instruction generated based on user interface input provided by a user via one or more user interface input devices. In some implementations, the method includes processing the free-form natural language instruction using a natural language instruction encoder, to generate a latent goal representation of the free-form natural language instruction. In some implementations, the method includes receiving an instance of vision data, the instance of vision data generated by at least one vision component of the robot, and the instance of vision data capturing at least part of an environment of the robot. In some implementations, the method includes generating output based on processing, using a goal-conditioned policy network, at least (a) the instance of vision data and (b) the latent goal representation of the free-form natural language instruction. In some implementations, the method includes controlling one or more actuators of the robot based on the generated output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one action indicated by the generated output. In some implementations, the method includes receiving a goal image instruction describing an additional task for the robot, the goal image instruction provided by the user via the one or more user interface input devices. In some implementations, the method includes processing the goal image instruction using a goal image encoder, to generate a latent goal representation of the goal image instruction. In some implementations, the method includes receiving an additional instance of vision data, the additional instance of vision data generated by the at least one vision component of the robot, and the additional instance of vision data capturing at least part of the environment of the robot. In some implementations, the method includes generating additional output based on processing, using the goal-conditioned policy network, at least (a) the additional instance of vision data and (b) the latent goal representation of the goal image instruction. In some implementations, the method includes controlling the one or more actuators of the robot based on the generated additional output, wherein controlling the one or more actuators of the robot causes the robot to perform at least one additional action indicated by the generated additional output.

[00174] In some implementations, a method implemented by one or more processors is provided, the method includes selecting a first training instance from the goal image set of training instances, wherein the first training instance includes a first imitation trajectory and a first goal image describing the first imitation trajectory.
In some implementations, the method includes generating a latent space representation of the first goal image by processing, using a goal image encoder, the first goal image portion of the first training instance. In some implementations, the method includes processing, using a goal-conditioned policy network, at least (1) the initial image frame in the first imitation trajectory and (2) the latent space representation of the first goal image portion of the first training instance, to generate first candidate output. In some implementations, the method includes determining a goal image loss based on the first candidate output and one or more portions of the first imitation trajectory. In some implementations, the method includes selecting a second training instance from the natural language instruction set of training instances, wherein the second training instance includes a second imitation trajectory and a second free-form natural language instruction describing the second imitation trajectory. In some implementations, the method includes generating a latent space representation of the second free-form natural language instruction by processing, using the natural language encoder, the second free-form natural language instruction portion of the second training instance, wherein the latent space representation of the first goal image and the latent space representation of the second freeform natural language instruction are represented in a shared latent space. In some implementations, the method includes processing, using the goal-conditioned policy network, at least (1) the initial image frame in the second imitation trajectory and (2) the latent space representation of the second free-form natural language instruction portion of the second training instance, to generate second candidate output. In some implementations, the method includes determining a natural language instruction loss based on the second candidate output and one or more portions of the second imitation trajectory. In some implementations, the method includes determining a goal-conditioned loss based on the goal image loss and the natural language instruction loss. In some implementations, the method includes updating one or more portions of the goal image encoder, the natural language instruction encoder, and/or the goal-conditioned policy network, based on the determined goal-conditioned loss.

[00175] In addition, some implementations include one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) of one or more computing devices, where the one or more processors are operable to execute instructions stored in associated memory, and where the instructions are configured to cause performance of any of the methods described herein. Some implementations also include one or more transitory or non-transitory computer readable storage media storing computer instructions executable by one or more processors to perform any of the methods described herein.