Title:
INCORPORATING UNSTRUCTURED DATA INTO MACHINE LEARNING-BASED PHENOTYPING
Document Type and Number:
WIPO Patent Application WO/2023/215468
Kind Code:
A1
Abstract:
Implementations are described herein for incorporating unstructured data into machine learning-based phenotyping. In various implementations, natural language textual snippet(s) may be obtained. Each natural language textual snippet may describe environmental or managerial features of an agricultural plot that exist during a crop cycle. A sequence-to-sequence machine learning model may be used to encode the natural language snippet(s) into embedding(s) in embedding space. The embedding(s) may semantically represent the environmental or managerial features of the agricultural plot. Using one or more phenotypic machine learning models, phenotypic prediction(s) may be generated about the agricultural plot based on the one or more semantic embeddings and additional structured data about the agricultural plot. Output may be provided at one or more computing devices that is based on one or more of the phenotypic predictions.

Inventors:
YUAN ZHIQIANG (US)
Application Number:
PCT/US2023/020981
Publication Date:
November 09, 2023
Filing Date:
May 04, 2023
Assignee:
MINERAL EARTH SCIENCES LLC (US)
International Classes:
G06N3/0455; G06N3/0442; G06Q10/063; G06Q50/02; G06N3/0464; G06N3/084; G06N3/09
Foreign References:
US10699185B2 (2020-06-30)
Attorney, Agent or Firm:
HIGDON, Scott et al. (US)
Claims:
Claims

What is claimed is:

1. A method implemented using one or more processors and comprising: obtaining one or more natural language textual snippets, each natural language textual snippet describing one or more environmental or managerial features of an agricultural plot that exist during a crop cycle; using a sequence encoder machine learning model, encoding the one or more natural language snippets into one or more embeddings in embedding space, wherein the one or more semantic embeddings semantically represent the one or more environmental or managerial features of the agricultural plot; using one or more phenotypic machine learning models, generating one or more phenotypic predictions about the agricultural plot based on the one or more semantic embeddings and additional structured data about the agricultural plot; and causing output to be provided at one or more computing devices, wherein the output is based on one or more of the phenotypic predictions.

2. The method of claim 1, wherein the sequence encoder machine learning model comprises at least part of a transformer network.

3. The method of claim 1 or 2, wherein one or more of the natural language snippets is obtained from speech recognition output generated using a spoken utterance captured at a microphone.

4. The method of any of claims 1-3, wherein one or more of the natural language snippets is obtained from electronic correspondence exchanged between two or more individuals associated with the agricultural plot.

5. The method of any of claims 1-3, wherein one or more of the phenotypic predictions comprises crop yield.

6. The method of any of claims 1-3, wherein the one or more phenotypic machine learning models comprise a mixture of experts ensemble that includes: a first phenotypic expert model to encode the one or more natural language textual snippets into the one or more embeddings, and a second phenotypic expert model to process the structured data about the agricultural plot.

7. The method of claim 6, wherein the mixture of experts further comprises a third phenotypic expert model that is used to process outputs of the first and second phenotypic expert models.

8. The method of claim 6 or 7, wherein the third phenotypic expert model comprises a gating network.

9. The method of claim 8, wherein the gating network is trained to assign relative weights to outputs of the first and second phenotypic expert models.

10. The method of any of claims 1-3, wherein the structured data comprises sensor data gathered by one or more sensors carried through the agricultural plot by one or more agricultural vehicles.

11. The method of claim 10, wherein the sensor data includes image data captured by one or more vision sensors carried by one or more of the agricultural vehicles, and one or more of the phenotypic predictions are generated by processing the image data using one or more convolutional neural networks.

12. A system comprising one or more processors and memory storing instructions that, in response to execution of the instructions, cause the one or more processors to: obtain one or more natural language textual snippets, each natural language textual snippet describing one or more environmental or managerial features of an agricultural plot that exist during a crop cycle; using a sequence encoder machine learning model, encode the one or more natural language snippets into one or more embeddings in embedding space, wherein the one or more semantic embeddings semantically represent the one or more environmental or managerial features of the agricultural plot; using one or more phenotypic machine learning models, generate one or more phenotypic predictions about the agricultural plot based on the one or more semantic embeddings and additional structured data about the agricultural plot; and cause output to be provided at one or more computing devices, wherein the output is based on one or more of the phenotypic predictions.

13. The system of claim 12, wherein the sequence encoder machine learning model comprises at least part of a transformer network.

14. The system of claim 12 or 13, wherein one or more of the natural language snippets is obtained from speech recognition output generated using a spoken utterance captured at a microphone.

15. The system of any of claims 12-14, wherein one or more of the natural language snippets is obtained from electronic correspondence exchanged between two or more individuals associated with the agricultural plot.

16. The system of any of claims 12-15, wherein one or more of the phenotypic predictions comprises crop yield.

17. The system of any of claims 12-16, wherein the one or more phenotypic machine learning models comprise a mixture of experts ensemble that includes: a first phenotypic expert model to encode the one or more natural language textual snippets into the one or more embeddings, and a second phenotypic expert model to process the structured data about the agricultural plot.

18. The system of claim 17, wherein the mixture of experts further comprises a third phenotypic expert model that is used to process outputs of the first and second phenotypic expert models.

19. The system of claim 17 or 18, wherein the third phenotypic expert model comprises a gating network.

20. A non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by a processor, cause the processor to: obtain one or more natural language textual snippets, each natural language textual snippet describing one or more environmental or managerial features of an agricultural plot that exist during a crop cycle; using a sequence encoder machine learning model, encode the one or more natural language snippets into one or more embeddings in embedding space, wherein the one or more semantic embeddings semantically represent the one or more environmental or managerial features of the agricultural plot; using one or more phenotypic machine learning models, generate one or more phenotypic predictions about the agricultural plot based on the one or more semantic embeddings and additional structured data about the agricultural plot; and cause output to be provided at one or more computing devices, wherein the output is based on one or more of the phenotypic predictions.

Description:
INCORPORATING UNSTRUCTURED DATA INTO MACHINE LEARNING-BASED PHENOTYPING

Background

[0001] Phenotypic traits of agricultural plots are typically predicted or estimated (collectively, "inferred") by processing structured agricultural data of known dimensions that is obtained from various sources using statistical and/or machine learning model(s). For example, genotypic data (e.g., crop strains), climate data, sensor data, data about management practices (e.g., irrigation and fertilization practices, crop rotations, tillage practices), and/or soil features are often stored in organized, consistent, and predictable manners, e.g., akin to one or more database schemas. Similarly, images of crops captured by vision sensors often have known dimensions, or at least can be converted into known dimensions using dimensionality reduction techniques. Consequently, phenotypic machine learning model(s) for predicting phenotypic traits can be designed to process inputs of known/static dimensions.

[0002] Not every grower has the time, resources, or inclination to methodically gather and organize comprehensive environmental and agricultural management practices into a structured form. The sparser the grower's structured data, the less reliable the phenotypic inferences that are drawn from it. It may be possible to extrapolate and/or interpolate some types of missing data. For example, if a particular grower lacks a particular type of climate sensor, replacement climate values can be interpolated and/or extrapolated from nearby climate sensors, or from publicly-available climate databases. However, the usefulness of interpolated/extrapolated data is limited by its availability and similarity between its origin(s) and the agricultural plot. Data about agricultural management practices is even less susceptible to extrapolation and/or interpolation, given its subjective nature. Moreover, even the most dedicated growers may not capture unstructured agricultural data, such as incidental observations discussing issues like the quality at which agricultural management practices are executed, the experience-informed state of crops, and so forth.

Summary

[0003] Implementations are described herein for incorporating unstructured data into machine learning-based pipelines for inferring phenotypic traits of agricultural plots. More particularly, but not exclusively, implementations are described herein for encoding unstructured natural language textual snippets into semantically-rich embeddings in latent space. Those semantically-rich embeddings may then be processed, along with other structured agricultural data, using one or more machine learning models to predict phenotypic traits of agricultural plots, such as crop yield.

[0004] Techniques described herein give rise to various technical advantages. Capturing and using unstructured agricultural data as described herein may provide a less cumbersome and/or more practical alternative to methodically gathering comprehensive structured agricultural data. As an example, techniques described herein provide an alternative way to obtain data points that might not otherwise be measured or recorded (e.g., in a spreadsheet) by a grower. Moreover, regardless of how much or what type of structured agricultural data is available, incorporating unstructured agricultural data into phenotypic machine learning pipelines may bolster phenotypic predictions by accounting for additional types of data that might not otherwise be considered, such as grower expertise.

[0005] In various implementations, a method may be implemented using one or more processors and may include: obtaining one or more natural language textual snippets, each natural language textual snippet describing one or more environmental or managerial features of an agricultural plot that exist during a crop cycle; using a sequence encoder machine learning model, encoding the one or more natural language snippets into one or more embeddings in embedding space, wherein the one or more semantic embeddings semantically represent the one or more environmental or managerial features of the agricultural plot; using one or more phenotypic machine learning models, generating one or more phenotypic predictions about the agricultural plot based on the one or more semantic embeddings and additional structured data about the agricultural plot; and causing output to be provided at one or more computing devices, wherein the output is based on one or more of the phenotypic predictions.

[0006] In various implementations, the sequence encoder machine learning model may include at least part of a transformer network. In various implementations, one or more of the natural language snippets may be obtained from speech recognition output generated using a spoken utterance captured at a microphone. In various implementations, one or more of the natural language snippets may be obtained from electronic correspondence exchanged between two or more individuals associated with the agricultural plot. In various implementations, one or more of the phenotypic predictions may be crop yield.

[0007] In various implementations, the one or more phenotypic machine learning models may be a mixture of experts ensemble that includes: a first phenotypic expert model to encode the one or more natural language textual snippets into the one or more embeddings, and a second phenotypic expert model to process the structured data about the agricultural plot. In various implementations, the mixture of experts may include a third phenotypic expert model that is used to process outputs of the first and second phenotypic expert models. In various implementations, the third phenotypic expert model may be a gating network. In various implementations, the gating network may be trained to assign relative weights to outputs of the first and second phenotypic expert models.

[0008] In various implementations, the structured data may include sensor data gathered by one or more sensors carried through the agricultural plot by one or more agricultural vehicles. In various implementations, the sensor data may include image data captured by one or more vision sensors carried by one or more of the agricultural vehicles, and one or more of the phenotypic predictions may be generated by processing the image data using one or more convolutional neural networks.

[0009] In addition, some implementations include one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) of one or more computing devices, where the one or more processors are operable to execute instructions stored in associated memory, and where the instructions are configured to enable performance of any of the aforementioned methods. Some implementations also include one or more non-transitory computer readable storage media storing computer instructions executable by one or more processors to perform any of the aforementioned methods.

[0010] It should be appreciated that all combinations of the foregoing concepts and additional concepts described in greater detail herein are contemplated as being part of the subject matter disclosed herein. For example, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the subject matter disclosed herein.

Brief Description of the Drawings

[0011] Fig. 1 schematically depicts an example environment in which selected aspects of the present disclosure may be employed in accordance with various implementations.

[0012] Fig. 2 schematically depicts an example of how techniques described herein may be implemented to make phenotypic predictions using both structured and unstructured data, in accordance with various implementations.

[0013] Fig. 3A and Fig. 3B schematically depict different examples of how structured and unstructured data may be processed to make phenotypic predictions, in accordance with various implementations.

[0014] Fig. 4 is a flowchart of an example method in accordance with various implementations described herein.

[0015] Fig. 5 schematically depicts an example architecture of a computer system.

Detailed Description

[0016] Implementations are described herein for incorporating unstructured data into machine learning-based pipelines for inferring phenotypic traits of agricultural plots. More particularly, but not exclusively, implementations are described herein for encoding unstructured natural language textual snippets into semantically-rich embeddings in latent space. Those semantically-rich embeddings may then be processed, along with other structured agricultural data, using one or more machine learning models to predict phenotypic traits of agricultural plots, such as crop yield.

[0017] Techniques described herein give rise to various technical advantages. Capturing and using unstructured agricultural data as described herein may provide a less cumbersome and/or more practical alternative to methodically gathering comprehensive structured agricultural data. As an example, techniques described herein provide an alternative way to obtain data points that might not otherwise be measured or recorded (e.g., in a spreadsheet) by a grower. Moreover, regardless of how much or what type of structured agricultural data is available, incorporating unstructured agricultural data into phenotypic machine learning pipelines may bolster phenotypic predictions by accounting for additional types of data that might not otherwise be considered, such as grower expertise.

[0018] Unstructured agricultural data may include agricultural data that is not in a predictable or consistent form, at least natively. A primary example includes natural language textual snippets that are generated and/or captured from, for instance, spoken utterances, electronic correspondence, contracts, or other sources of natural language that are relevant to an agricultural plot. In various implementations, unstructured agricultural data may be encoded into semantically-rich embeddings using one or more sequence encoder machine learning models. In some cases, these semantically-rich embeddings may have known dimensions. Consequently, they can be processed, along with structured agricultural data about an agricultural plot, using phenotypic machine learning models to make phenotypic predictions.

[0019] In some implementations, the sequence encoder machine learning model may be a sequence-to-sequence model such as an encoder-decoder (sometimes referred to as an "autoencoder"). Once trained, the encoder portion may be used subsequently to generate the semantically-rich embeddings. Some examples of sequence encoder machine learning models include recurrent neural networks, long short-term memory (LSTM) networks, residual neural networks, and/or gated recurrent unit (GRU) networks, to name a few. More recently, large language models such as transformer networks have become increasingly popular for performing natural language processing, and may be used to generate semantic embeddings as described herein.

[0020] Transformer networks were designed in part to mitigate a variety of shortcomings of prior natural language processing models, such as overfitting, the vanishing gradient problem, and exceedingly high computational costs, to name a few. A transformer network may take the form of, for instance, a BERT (Bidirectional Encoder Representations from Transformers) transformer and/or a GPT (Generative Pre-trained Transformer). In various implementations, such a transformer model may be trained (e.g., "conditioned" or "bootstrapped") using one or more corpora of documents and other data that is relevant to the agriculture domain generally (e.g., worldwide), or to subdomains of the agricultural domain (e.g., regions having homogeneous climates). These documents may include, for instance, academic papers, agricultural textbooks, agricultural presentations, scientific studies, historic agricultural narratives, and so forth.
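
By way of non-limiting illustration, the following sketch (in Python) shows one way natural language textual snippets might be encoded into fixed-size semantic embeddings using a generic pretrained transformer encoder. The checkpoint name, the mean-pooling strategy, and the example snippets are assumptions chosen for illustration only; they do not represent the particular sequence encoder machine learning model described herein.

# Minimal sketch: encode natural language textual snippets into semantic
# embeddings with a generic pretrained transformer encoder (hypothetical
# checkpoint; mean pooling is one of several possible strategies).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_snippets(snippets):
    """Returns one embedding per snippet, shape (num_snippets, hidden_size)."""
    batch = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_states = encoder(**batch).last_hidden_state    # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    # Mean-pool over non-padding tokens to obtain one vector per snippet.
    return (token_states * mask).sum(dim=1) / mask.sum(dim=1)

embeddings = encode_snippets([
    "Applied 40 kg of pesticide across the northwest field this morning.",
    "Looks like we might have some culm rot going on.",
])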

[0021] In various implementations, a machine learning-based phenotypic pipeline configured with selected aspects of the present disclosure may be used as follows. Natural language textual snippets generated during a crop cycle of crops grown in an agricultural plot of interest may be obtained from various sources. These sources may include, for instance, spoken utterances of agricultural personnel, electronic correspondence (e.g., emails, text messages, social media posts, direct messages, etc.) to or from agricultural personnel, contracts, invoices, or other documents pertaining to agricultural management practices (e.g., contracts to perform agricultural or ecosystem services), and so forth. In some implementations, snippets may be organized based on, and/or flagged with, the date and/or time (e.g., timestamp) of their creation. These natural language snippets may then be encoded into semantically-rich embeddings using the aforementioned sequence encoder machine learning model.

[0022] In some implementations where there are multiple semantic embeddings (e.g., because there were multiple different natural language textual snippets), the multiple semantic embeddings may be combined into a unified semantic embedding, e.g., via concatenation, averaging, addition, etc. The unified semantic embedding may semantically and collectively represent the unstructured agricultural data contained across each of the natural language textual snippets.

[0023] Additionally or alternatively, in some implementations, the unified embedding may be created using another sequence encoder machine learning model (e.g., various types of RNNs, transformers, etc.). For example, multiple semantically-rich embeddings (each representing an encoding of a different natural language textual snippet) may be iteratively processed using such a model as a sequence of inputs, e.g., in temporal order. Various mechanisms such as internal memory or state, or self-attention, may ensure that all semantic embeddings of the sequence are accounted for in the resulting unified embedding.
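
As a non-limiting illustration of the two combination strategies discussed above, the following sketch shows (a) a simple element-wise average over per-snippet embeddings and (b) a recurrent combiner that processes the embeddings in temporal order and uses its final hidden state as the unified embedding. The embedding dimension and the choice of a GRU are assumptions made for the example.

# Minimal sketch: two possible ways to combine per-snippet semantic embeddings
# into a unified embedding. Dimension sizes and the GRU-based combiner are
# illustrative assumptions, not a prescribed implementation.
import torch
import torch.nn as nn

def average_pool(snippet_embeddings):
    """Element-wise mean over a (num_snippets, dim) tensor of embeddings."""
    return snippet_embeddings.mean(dim=0)

class SequenceCombiner(nn.Module):
    """Processes embeddings in temporal order; the final hidden state serves as
    the unified embedding that accounts for every snippet in the sequence."""
    def __init__(self, dim=768):
        super().__init__()
        self.rnn = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)

    def forward(self, snippet_embeddings):
        # snippet_embeddings: (num_snippets, dim), sorted by timestamp.
        _, last_hidden = self.rnn(snippet_embeddings.unsqueeze(0))
        return last_hidden.squeeze(0).squeeze(0)  # (dim,)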

[0024] In some implementations where the textual snippets are flagged with timestamps, the textual snippets and/or their corresponding embeddings may be organized into temporal chunks or bins, e.g., along with temporally-correspondent structured agricultural data. The number of temporal chunks or bins may depend on factors such as the temporal frequency at which the natural language textual snippets were generated/captured, the granularity of other structured agricultural data, etc. For example, if all or a significant portion of structured agricultural data available for the agricultural plot takes the form of daily time series data, then the textual snippets and/or their corresponding embeddings may be grouped into days. Structured and unstructured agricultural data contained in the same temporal bin may then be processed together, e.g., so that the unstructured agricultural data can provide temporally-relevant context to the structured agricultural data.
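
As a concrete, non-limiting example of the temporal binning discussed above, the following sketch groups timestamped snippet embeddings into daily bins so that each bin can be processed together with that day's structured time-series data. The one-day bin width and the toy embedding values are assumptions chosen for illustration.

# Minimal sketch: group timestamped snippet embeddings into daily bins so they
# can be processed alongside daily time-series structured data (the one-day bin
# width mirrors the example above and is an assumption).
from collections import defaultdict
from datetime import datetime

def bin_by_day(timestamped_embeddings):
    """timestamped_embeddings: iterable of (datetime, embedding) pairs.
    Returns a dict mapping each calendar date to that day's embeddings."""
    bins = defaultdict(list)
    for timestamp, embedding in timestamped_embeddings:
        bins[timestamp.date()].append(embedding)
    return dict(bins)

daily_bins = bin_by_day([
    (datetime(2023, 5, 4, 9, 30), [0.12, -0.03]),  # morning observation
    (datetime(2023, 5, 4, 18, 0), [0.07, 0.11]),   # evening observation
    (datetime(2023, 5, 5, 7, 15), [-0.02, 0.29]),
])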

[0025] In any case, once semantically-rich embedding(s) representing unstructured agricultural data are generated, one or more phenotypic machine learning models of the machine learning-based phenotypic pipeline may be used to process the semantically-rich embedding(s) and additional structured data about the agricultural plot to generate phenotypic prediction(s) about the agricultural plot. While any number of phenotypic predictions are possible, examples described herein will largely refer to predicting crop yield.

[0026] In some implementations, a phenotypic model may be adapted to include — on top of inputs already provided for receiving structured data — additional inputs for receiving semantically-rich embedding(s). Additionally or alternatively, in some implementations, an ensemble of phenotypic machine learning models may be included in the phenotypic pipeline, some to process structured agricultural data and others to process unstructured agricultural data. Models of the ensemble may be trained individually and/or jointly.

[0027] For example, the phenotypic pipeline may include a "mixture of experts" ensemble of "expert" models and "gating" models. Some expert models may be trained to process structured agricultural data. As an example, a convolutional neural network (CNN) may be trained to process images captured by vision sensors carried through agricultural plots by agricultural vehicles and/or personnel, e.g., to annotate those images with inferred phenotypic traits and/or to make phenotypic predictions such as crop yield. Other machine learning models (e.g., neural networks, RNNs, LSTMs, etc.) may be trained to process other types of structured (e.g., time-series) data, such as data scraped from a spreadsheet or database, sensor data captured in situ, etc., to make phenotypic predictions.
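
The sketch below offers a non-limiting illustration of two such expert models: a small convolutional network for plot imagery and a recurrent network for structured time-series data. The layer sizes, output dimensions, and class names are arbitrary example choices rather than the specific expert models contemplated herein.

# Minimal sketch of two illustrative "expert" models: a CNN for crop images and
# an LSTM for structured time-series data. All layer sizes are example values.
import torch
import torch.nn as nn

class ImageExpert(nn.Module):
    """CNN expert: maps crop images to a fixed-size phenotypic feature vector."""
    def __init__(self, out_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, out_dim)

    def forward(self, images):  # images: (B, 3, H, W)
        return self.head(self.features(images).flatten(1))

class TimeSeriesExpert(nn.Module):
    """LSTM expert: maps daily structured data (e.g., irrigation or weather
    time series) to a fixed-size phenotypic feature vector."""
    def __init__(self, in_dim, out_dim=32):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, out_dim, batch_first=True)

    def forward(self, series):  # series: (B, T, in_dim)
        _, (h_n, _) = self.rnn(series)
        return h_n[-1]  # (B, out_dim)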

[0028] Other expert models may be trained to process unstructured agricultural data, such as natural language textual snippets and/or their corresponding semantically-rich embeddings. For example, the aforementioned sequence encoder machine learning model (e.g., RNN, LSTM, transformer) may be trained to encode natural language textual snippets into semantically-rich embeddings of known dimensions. Another sequence encoder machine learning model may be provided to process sequences of semantically-rich embeddings into a unified embedding, as described previously.

[0029] Gating models (sometimes referred to as "gating networks") may be trained to select and/or assign relative weights to outputs generated by the various expert models. For example, a gating model may be trained to process both (a) a semantic embedding (unified or otherwise) generated using a first expert model and (b) output of other expert model(s) that process structured agricultural data. The gating model may determine which expert model(s) should be trusted to generate the most accurate output, how the outputs of the expert models should be combined, how much weight should be assigned to output of each expert model, whether predictions made based on structured agricultural data should be boosted based on corroborative unstructured agricultural data, etc.
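
A non-limiting sketch of such a gating network follows. It assigns relative weights to the outputs of the expert models (including an expert that processes unstructured data) and combines them into a single phenotypic prediction such as crop yield; the architecture and dimensions are assumptions made for illustration.

# Minimal sketch of a gating network that weighs expert outputs and combines
# them into one phenotypic prediction (here a scalar such as crop yield).
import torch
import torch.nn as nn

class GatingNetwork(nn.Module):
    def __init__(self, expert_dim=32, num_experts=2):
        super().__init__()
        self.gate = nn.Linear(expert_dim * num_experts, num_experts)
        self.predict = nn.Linear(expert_dim, 1)

    def forward(self, expert_outputs):
        # expert_outputs: list of (B, expert_dim) tensors, one per expert model.
        stacked = torch.stack(expert_outputs, dim=1)                # (B, E, D)
        weights = torch.softmax(
            self.gate(torch.cat(expert_outputs, dim=-1)), dim=-1)   # (B, E)
        mixed = (weights.unsqueeze(-1) * stacked).sum(dim=1)        # (B, D)
        return self.predict(mixed)                                  # (B, 1)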

[0030] In some implementations, the gating model may be trained to assign more weight to an embedding generated from unambiguous natural language input, such as "I watered the plot for 15 minutes every day last month," or "we applied 40 kg of pesticide across the northwest field this morning." By contrast, the gating model may assign less weight to an embedding generated from ambiguous natural language input, such as "I kept the workers busy watering the field," or "we watered regularly last week." Ambiguity of natural language inputs — or unstructured data more generally — may be determined in various ways, such as via confidence measures generated by the sequence encoder machine learning models, distances of semantic embeddings in latent space to known concepts, presence/absence of numeric values in the natural language inputs, presence, absence, and/or scope of temporal identifiers (e.g., "yesterday" is less vague than "last week"), etc.
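
As a non-limiting illustration of one of the simpler signals mentioned above, the sketch below scores a snippet's ambiguity from the presence of numeric values and the specificity of its temporal identifiers. The word lists and weights are assumptions for illustration; a deployed system might instead rely on encoder confidence measures or embedding distances.

# Minimal heuristic sketch: a rough ambiguity score based on numeric values and
# temporal identifiers in a snippet (word lists and weights are assumptions).
import re

SPECIFIC_TIME_TERMS = ("today", "yesterday", "this morning", "this afternoon")
VAGUE_TIME_TERMS = ("recently", "regularly", "last week", "a while ago")

def ambiguity_score(snippet):
    """Returns a value in [0, 1]; higher values suggest vaguer snippets, which a
    gating model might downweight relative to other expert outputs."""
    text = snippet.lower()
    score = 1.0
    if re.search(r"\d", text):                              # concrete quantities
        score -= 0.4
    if any(term in text for term in SPECIFIC_TIME_TERMS):   # precise timing
        score -= 0.3
    if any(term in text for term in VAGUE_TIME_TERMS):      # vague timing
        score += 0.3
    return max(0.0, min(1.0, score))

ambiguity_score("we applied 40 kg of pesticide this morning")  # low ambiguity
ambiguity_score("we watered regularly last week")              # high ambiguity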

[0031] Additionally or alternatively, in some implementations, the gating model may be trained to provide, in effect, a sliding scale between structured and unstructured data. For example, if structured data for a given agricultural practice (e.g., crop rotation) is available, natural language textual snippets related to that same agricultural practice may be weighted less heavily. On the other hand, if available structured data is sparse, unstructured data (e.g., embeddings generated from natural language textual snippets) may be weighted more heavily to make up for the sparseness of the structured data. If structured and unstructured data contradict each other, in some implementations, the gating model may be trained to favor structured data over unstructured data, or to assign them relative weights according to historical accuracy of their origins.

[0032] Fig. 1 schematically illustrates an environment in which one or more selected aspects of the present disclosure may be implemented, in accordance with various implementations. The example environment includes one or more agricultural plots 112 and various sensors that may be deployed at or near those areas, as well as other components that may be implemented elsewhere, in order to practice selected aspects of the present disclosure. Various components in the environment are in communication with each other over one or more networks 110. Network(s) 110 may take various forms, such as one or more local or wide area networks (e.g., the Internet), one or more personal area networks (“PANs”), one or more mesh networks (e.g., ZigBee, Z-Wave), etc.

[0033] Agricultural plot(s) 112 may be used to grow various types of crops that may produce plant parts of economic and/or nutritional interest. Agricultural plot(s) 112 may have various shapes and/or sizes. In the United States, for instance, it is common to organize a larger field into smaller plots, each with two rows. In some implementations, phenotypic trait estimation models may be applied on a plot-by-plot basis to estimate aggregate trait values for individual plots.

[0034] An individual (which in the current context may also be referred to as a “user” or “grower”) may operate one or more client devices 106-1 to 106-X to interact with other components depicted in Fig. 1. A client device 106 may be, for example, a desktop computing device, a laptop computing device, a tablet computing device, a mobile phone computing device, a computing device of a vehicle of the participant (e.g., an in-vehicle communications system, an in-vehicle entertainment system, an in-vehicle navigation system), a standalone interactive speaker (with or without a display), or a wearable apparatus that includes a computing device, such as a head-mounted display (“HMD”) 106-X that provides an AR or VR immersive computing experience, a “smart” watch, and so forth. Additional and/or alternative client devices may be provided.

[0035] Plant knowledge system 104 is an example of an information system in which the techniques described herein may be implemented. Each of client devices 106 and plant knowledge system 104 may include one or more memories for storage of data and software applications, one or more processors for accessing data and executing applications, and other components that facilitate communication over a network. The operations performed by client device 106 and/or plant knowledge system 104 may be distributed across multiple computer systems.

[0036] Each client device 106 may operate a variety of different applications that may be used to perform various agricultural tasks, such as crop yield prediction. For example, a first client device 106-1 operates agricultural (“AG”) client 107 (e.g., which may be standalone or part of another application, such as part of a web browser). Another client device 106-X may take the form of a HMD that is configured to render 2D and/or 3D data to a wearer as part of a VR immersive computing experience. For example, the wearer of client device 106-X may be presented with 3D point clouds representing various aspects of objects of interest, such as fruits of crops, weeds, crop yield predictions, etc. The wearer may interact with the presented data, e.g., using HMD input techniques such as gaze directions, blinks, etc.

[0037] In some implementations, one or more robots 108-1 to 108-M and/or other agricultural vehicles 109 may be deployed and/or operated to perform various agricultural tasks. These tasks may include, for instance, harvesting, irrigating, fertilizing, chemical application, trimming, pruning, sucker/bud removal, etc. An individual robot 108-1 to 108-M may take various forms, such as an unmanned aerial vehicle 108-1, a robot (not depicted) that is propelled along a wire, track, rail or other similar component that passes over and/or between crops, a wheeled robot 108-M, a rover that straddles a row of plants (e.g., so that the plants pass underneath the rover), or any other form of robot capable of being propelled or propelling itself past crops of interest.

[0038] In some implementations, different robots may have different roles, e.g., depending on their capabilities. For example, in some implementations, one or more of robots 108-1 to 108-M may be designed to capture various types of sensor data (e.g., vision, temperature, moisture, soil characteristics), others may be designed to manipulate plants or perform physical agricultural tasks, and/or others may do both. Robots 108 may include various types of sensors, such as vision sensors (e.g., 2D digital cameras, 3D cameras, 2.5D cameras, infrared cameras), inertial measurement unit (“IMU”) sensors, Global Positioning System (“GPS”) sensors, X-ray sensors, moisture sensors, lasers, barometers (for local weather information), photodiodes (e.g., for sunlight), thermometers, soil sensors, etc. This sensor data may be organized as structured agricultural data, e.g., in database(s) in accordance with known/consistent schemas, in spreadsheets, in organized textual files (e.g., comma-delimited, tab-delimited), etc.

[0039] In addition to or instead of robots, in some implementations, agricultural vehicles 109 such as the tractor depicted in Fig. 1, center pivots, boom sprayers (which may be affixed to tractors or other agricultural vehicles), threshers, etc. may be leveraged to acquire various sensor data. For example, one or more modular computing devices 111 (also referred to as "sensor packages") may be mounted to agricultural vehicle 109 and may be equipped with any number of sensors, such as one or more vision sensors that capture images of crops, or other sensors such as soil sensors, moisture sensors, thermometers, etc. These sensor data may be collected and, as described previously, organized as structured agricultural data.

[0040] In various implementations, plant knowledge system 104 may be implemented across one or more computing systems that may be referred to as the “cloud”. Plant knowledge system 104 may receive sensor data generated by robots 108-1 to 108-M, modular computing devices 111, and/or agricultural personnel and process it using various techniques to perform tasks such as making phenotypic predictions 122, e.g., on a plot-by-plot basis. In various implementations, plant knowledge system 104 may include a structured data module 114, an unstructured data module 116, an inference module 118, and a training module 124. In some implementations one or more of modules 114, 116, 118, 124 may be omitted, combined, and/or implemented in a component that is separate from plant knowledge system 104.

[0041] Structured data module 114 may be configured to obtain structured agricultural data from various sources, such as modular computing device(s) 111, robots 108-1 to 108-M, agricultural vehicle 109, databases of recorded agricultural data (e.g., logs), etc. Structured data module 114 may provide these data to inference module 118. Similarly, unstructured data module 116 may be configured to obtain unstructured agricultural data from various sources, and provide these data to inference module 118. In other implementations, structured data module 114 and/or unstructured data module 116 may be omitted and the functions described herein as being performed by structured data module 114 and/or unstructured data module 116 may be performed by other components of plant knowledge system 104, such as inference module 118.

[0042] Plant knowledge system 104 may also include one or more databases. For example, plant knowledge system 104 may include, in communication with structured data module 114, a structured database 115 for storing structured agricultural data. Structured agricultural data may include any data that is collected and organized, e.g., by structured data module 114, in a consistent and predictable manner. One example is sensor data collected by robots 108-1 to 108-M and/or other agricultural vehicles 109. Another example of structured agricultural data may be data that is input by agricultural personnel into spreadsheets, input forms, etc., such that the data is collected and organized, e.g., by structured data module 114, in a consistent and/or predictable manner. For example, growers may maintain logs of how and/or when various management practices (e.g., irrigation, pesticide application, herbicide application, tillage) were performed. Other examples of structured agricultural data may include, for instance, satellite data, climate data from publicly-available databases, and so forth.

[0043] Similarly, plant knowledge system 104 may include, in communication with unstructured data module 116, an unstructured database 117 for storing unstructured agricultural data. Unstructured agricultural data may include any data that is collected, e.g., by unstructured data module 116, from sources that are not organized in any consistent or predictable manner. These sources may include, for instance, natural language textual snippets obtained from a variety of sources. As one example, AG client 107 may provide an interface for a user 101 to record spoken utterances. These utterances may be stored as audio recordings, transcribed into text via a speech-to-text (STT) process and then stored, and/or encoded into embeddings and then stored. Other potential sources of natural language textual snippets include, but are not limited to, documents such as contracts and invoices, electronic correspondence (e.g., email, text messaging), periodicals such as newspapers (e.g., reporting floods or other weather events that can impact crops), and so forth. Documents may be obtained, e.g., by unstructured data module 116, from sources such as a client device 106.

[0044] Plant knowledge system 104 may also include a machine learning model database 120 that includes one or more machine learning models that are trained as described herein to make phenotypic predictions 122, such as crop yield. In this specification, the terms “database” and “index” will be used broadly to refer to any collection of data. The data of the database and/or the index does not need to be structured in any particular way and it can be stored on storage devices in one or more geographic locations.

[0045] Inference module 118 may be configured to process structured agricultural data obtained by structured data module 114 and unstructured agricultural data obtained by unstructured data module 116 using various machine learning models stored in machine learning model database 120 to generate output indicative of phenotypic predictions 122. These phenotypic predictions may come in various forms, such as estimated aggregate traits of plots, crop yield, recommendations, and so forth. Various types of machine learning models may be trained for use in performing selected aspects of the present disclosure. For example, a sequence encoder machine learning model such as an encoding portion of a transformer language model may be trained to generate semantically-rich embeddings from unstructured agricultural data. Various types of phenotypic machine learning models, or ensembles of phenotypic models, may be trained to make phenotypic predictions based on structured agricultural data and semantically-rich embeddings generated from unstructured agricultural data.

[0046] During one or more training phases, training module 124 may be configured to train any of the aforementioned models (or portions thereof) using ground truth and/or observed phenotypic traits. For example, training module 124 may train the sequence encoder machine learning model initially using a corpus of agricultural documents and data, as described previously. In some implementations, training module 124 may train the sequence encoder machine learning model using similarity and/or metric learning techniques such as regression and/or classification similarity learning, ranking similarity learning, locality sensitive hashing, triplet loss, large margin nearest neighbor, etc.

[0047] In some implementations, training module 124 may also train phenotypic machine learning models to make phenotypic predictions using the semantically-rich embeddings generated using the sequence encoder machine learning model and structured agricultural data. Suppose a particular agricultural plot 112 yields 1,000 units of a plant-trait-of-interest. Images of crops in that particular agricultural plot, captured sometime in the crop cycle prior to harvest, may be processed using a crop yield estimation machine learning model to predict crop yield. The crop yield estimation machine learning model may also be used to process, as additional inputs, one or more semantically-rich embeddings generated by the sequence encoder machine learning model. This predicted crop yield may then be compared, e.g., by training module 124, to the ground truth crop yield to determine an error. Based on this error, training module 124 may train one or more of the machine learning models in database 120, e.g., using techniques such as back propagation and gradient descent.
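
The following non-limiting sketch summarizes the training step described above: a phenotypic model processes images and semantic embeddings, its predicted yield is compared to the ground truth harvest, and the error is back-propagated. The loss function, optimizer interface, and model wiring are assumptions made for illustration.

# Minimal sketch of one training step: compare predicted crop yield against the
# ground truth harvest and back-propagate the error (loss/optimizer choices are
# illustrative assumptions).
import torch
import torch.nn as nn

def train_step(phenotypic_model, optimizer, images, semantic_embeddings,
               ground_truth_yield):
    optimizer.zero_grad()
    predicted_yield = phenotypic_model(images, semantic_embeddings)
    loss = nn.functional.mse_loss(predicted_yield, ground_truth_yield)
    loss.backward()    # back propagation of the yield error
    optimizer.step()   # gradient descent update
    return loss.item()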

[0048] Fig. 2 schematically depicts an example of how machine learning models may be applied to make phenotypic predictions about agricultural plots using both structured and unstructured agricultural data. One source of structured agricultural data in this example is an image 230 of a plant that includes four plant-parts-of-interest (strawberries in this example). Across the top of Fig. 2, image 230 and/or others like it may be processed using one phenotypic machine learning model 232, which in this example is a dense prediction model trained to annotate detected strawberries with bounding boxes. The result is an annotated image 234. Another source of structured agricultural data in this example is a spreadsheet 236 of agricultural data. Spreadsheet 236 may include various structured agricultural data, such as a log of agricultural managerial practices (e.g., fertilizer, irrigation, chemicals, tillage, etc.), soil sample records, record of detected pests or disease, etc. Other sources of structured agricultural data are contemplated.

[0049] Various sources of unstructured agricultural data are depicted in Fig. 2 as well. One example is a contract 238 to perform some agricultural task on the agricultural plot of interest. Such an agricultural task may include, for instance, an ecosystem service (e.g., carbon sequestration, cover crop, erosion prevention, etc.), harvesting of a crop, planting of a crop, tillage of the agricultural plot, treatment of a crop with fertilizer or other chemicals, weed remediation, pest remediation, disease remediation, etc. Contract 238 may include textual snippet(s) that describe aspect(s) of the agricultural task to be performed, conditions for satisfying the contract, etc.

[0050] Another source of unstructured agricultural data includes electronic correspondence, such as email 240. Email 240 (or other electronic correspondence) may convey information between relevant parties, such as between growers and employees, between growers and contractors, between employees of growers, etc. For example, employees may email growers with information such as incidental observations, reports on tasks performed and details of those tasks, requests for materials and/or chemicals, invoices, etc. Electronic correspondence need not be electronic natively; in some cases, paper correspondence may be processed using optical character recognition (OCR) to generate electronic correspondence.

[0051] Another source of unstructured agricultural data includes one or more utterances 242 by a person 244. In various implementations, these utterances 242 may be captured at one or more microphones 246 and processed using a STT component 248 to generate natural language textual snippets. Utterances 242 can vary widely in subject matter, level of detail, etc. Utterances 242 may include, for example, incidental observations about the agricultural plot of interest by agricultural personnel. Suppose an employee makes a statement such as "It looks like we might have some culm rot going on." This utterance may be considered by downstream component(s), e.g., along with other evidence, to make or boost a phenotypic prediction that culm rot is, in fact, present in the agricultural plot of interest.

[0052] The unstructured data collected, e.g., by unstructured data module 116 from various sources 238, 240, 242, may be processed by inference module 118 (not depicted in Fig. 2) using a sequence encoder machine learning model 250 to generate one or more semantically-rich embeddings 252. As mentioned previously, sequence encoder machine learning model 250 may take various forms, such as an encoder portion of a transformer network that is trained on agricultural documents and/or agricultural data. As shown by the arrows in Fig. 2, semantically-rich embedding(s) 252 may then be processed by inference module 118 using another phenotypic machine learning model 254, along with structured agricultural data, such as annotated image(s) 234 and/or spreadsheet 236, to make a phenotypic prediction 122. The phenotypic prediction 122 may take the form of, for instance, a crop yield prediction, a recommendation for which crop to plant next to maximize future yield (or for other purposes), etc.

[0053] Figs. 3A and 3B schematically depict examples of how different phenotypic pipelines may be implemented to practice selected aspects of the present disclosure. In Fig. 3A, unstructured data 356 is processed by a transformer 358 (or at least an encoder portion thereof) that corresponds to sequence encoder machine learning model 250 of Fig. 2 to generate semantically-rich embeddings 366 and 368 (outlined in dashes to indicate that they originate as unstructured agricultural data). Inference module 118 (not depicted in Fig. 3) may then apply various pieces of structured agricultural data 360, 362, 364 across a phenotypic machine learning model 354, along with semantically-rich embeddings 366, 368, to make a phenotypic prediction 122. As indicated by the ellipses, phenotypic machine learning model 354 may include any number of layers 370-1 to 370-N. Notably, in this example, phenotypic machine learning model 354 may be a single model that is adapted to process both structured and unstructured agricultural data as inputs.

[0054] In other examples, by contrast, an ensemble of phenotypic machine learning models may be included in a phenotypic pipeline, some to process structured agricultural data and others to process unstructured agricultural data. Models of the ensemble may be trained individually and/or jointly. An example of this is depicted in Fig. 3B. As was the case in Fig. 3A, unstructured data 356 is processed by a transformer 358 corresponding to sequence encoder machine learning model 250 in Fig. 2 to generate semantically-rich embeddings 366, 368.

[0055] However, the phenotypic pipeline in Fig. 3B features a mixture of experts 371 that includes a first expert machine learning model 372 to process pieces of structured data 360, 362, 364 and a separate, second expert machine learning model 374 to process semantically-rich embeddings 366, 368. The mixture of experts 371 may also include a gating network 376 (which can be a single layer or multiple layers) to process the outputs of models 372, 374. Gating network 376 may be trained to assign relative weights to the outputs of models 372, 374 in order to make the phenotypic prediction. As mentioned previously, gating network 376 may be trained to assign these relative weights based on, for instance, a measure of ambiguity of the unstructured data 356, and/or based on aspects of structured data 360, 362, 364, such as its temporal, spatial, or spectral (in the case of images) resolution.

[0056] Fig. 4 illustrates a flowchart of an example method 400 for practicing selected aspects of the present disclosure during an inference phase. The operations of Fig. 4 can be performed by one or more processors, such as one or more processors of the various computing devices/systems described herein, such as by plant knowledge system 104. For convenience, operations of method 400 will be described as being performed by a system configured with selected aspects of the present disclosure. Other implementations may include additional operations than those illustrated in Fig. 4, may perform step(s) of Fig. 4 in a different order and/or in parallel, and/or may omit one or more of the operations of Fig. 4.

[0057] At block 402, the system, e.g., by way of unstructured data module 116, may obtain one or more natural language textual snippets. Each natural language textual snippet may describe one or more environmental or managerial features of an agricultural plot that exist during a crop cycle. Environmental features of the agricultural plot may include, for instance, incidental observations about conditions of the crops, soil, weather, sunlight, pest infestation, disease, presence or absence of weeds, etc. Managerial features of the agricultural plot may include, for instance, statements or incidental observations about agricultural tasks performed in the field. With irrigation, for instance, an agricultural worker may comment on how much water was applied over the plot, when the water was applied, how frequently water is applied, etc. Workers may make similar comments about application of other substances, such as fertilizers, pesticides, herbicides, etc. Other managerial practices may include, for instance, tillage, cover crops, crop rotation, etc. And as mentioned previously, natural language snippets may come from other sources as well, such as correspondence, contracts, invoices, reports, etc.

[0058] At block 404, the system, e.g., by way of inference module 118, may use a sequence encoder machine learning model to encode the one or more natural language snippets into one or more embeddings in embedding space. In various implementations, the one or more semantic embeddings may semantically represent the one or more environmental or managerial features of the agricultural plot. In some implementations where there are multiple natural language textual snippets (from a single source or from multiple sources), the individual semantic embeddings generated therefrom may be combined into a unified embedding, e.g., using techniques such as concatenation, averaging, a sequence-to-sequence model, etc.

[0059] At block 406, the system, e.g., by way of inference module 118, may use one or more phenotypic machine learning models to generate one or more phenotypic predictions about the agricultural plot based on the one or more semantic embeddings, as well as based on additional structured data about the agricultural plot. Examples of using both unstructured and structured data were depicted in Figs. 2 and 3 A-B.

[0060] At block 408, the system may cause output to be provided at one or more computing devices, such as at AG client 107 of client device 106-1. The output may be based on one or more of the phenotypic predictions. For example, a phenotypic prediction of crop yield may be presented to a user (e.g., 101) at AG client 107, e.g., as natural language output, as part of a larger report, on demand, etc. In some implementations, other phenotypic inferences may be augmented with one or more of the phenotypic predictions. For example, the annotated image 234 in Fig. 2 may be further annotated, e.g., with projected crop yield for the depicted plants or the entire agricultural plot in which the depicted plants grow.

[0061] Fig. 5 is a block diagram of an example computing device 510 that may optionally be utilized to perform one or more aspects of techniques described herein. Computing device 510 typically includes at least one processor 514 which communicates with a number of peripheral devices via bus subsystem 512. These peripheral devices may include a storage subsystem 524, including, for example, a memory subsystem 525 and a file storage subsystem 526, user interface output devices 520, user interface input devices 522, and a network interface subsystem 516. The input and output devices allow user interaction with computing device 510. Network interface subsystem 516 provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices.

[0062] User interface input devices 522 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touch screen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In some implementations in which computing device 510 takes the form of a HMD or smart glasses, a pose of a user’s eyes may be tracked for use, e.g., alone or in combination with other stimuli (e.g., blinking, pressing a button, etc.), as user input. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computing device 510 or onto a communication network.

[0063] User interface output devices 520 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, one or more displays forming part of a HMD, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computing device 510 to the user or to another machine or computing device.

[0064] Storage subsystem 524 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 524 may include the logic to perform selected aspects of method 400 described herein, as well as to implement various components depicted in Figs. 1-3.

[0065] These software modules are generally executed by processor 514 alone or in combination with other processors. Memory 525 used in the storage subsystem 524 can include a number of memories including a main random access memory (RAM) 530 for storage of instructions and data during program execution and a read only memory (ROM) 532 in which fixed instructions are stored. A file storage subsystem 526 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 526 in the storage subsystem 524, or in other machines accessible by the processor(s) 514.

[0066] Bus subsystem 512 provides a mechanism for letting the various components and subsystems of computing device 510 communicate with each other as intended. Although bus subsystem 512 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple buses.

[0067] Computing device 510 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 510 depicted in Fig. 5 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device 510 are possible having more or fewer components than the computing device depicted in Fig. 5.

[0068] While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.