Title:
MOLECULAR EMBEDDING SYSTEMS AND METHODS
Document Type and Number:
WIPO Patent Application WO/2023/170641
Kind Code:
A1
Abstract:
Example molecular embedding systems and methods are described. In one implementation, a system includes a molecular embedder configured to receive structural and chemical information associated with a single-molecule ingredient from a plurality of single-molecule ingredients. The molecular embedder also generates a representation of the single-molecule ingredient. A preparation modeler receives multiple representations of single-molecule ingredients and preparation instructions, and generates a representation of the prepared ingredients. A predictor receives a representation of the prepared ingredients and generates predicted characteristics of the prepared ingredients.

Inventors:
BRONSTEIN ALEX (IL)
SILVER DAVID H (IL)
KNAFO TAL (IL)
HARPAZ ARIEL (IL)
DAHARY OMER (IL)
Application Number:
PCT/IB2023/052286
Publication Date:
September 14, 2023
Filing Date:
March 10, 2023
Assignee:
AKA FOODS LTD (IL)
International Classes:
G16C20/30; G06F30/25; G06F30/27; G06F30/3323; G16C20/50; A23L27/20; G01N21/77; G06F30/20; G06F30/33; G06N20/10
Foreign References:
US11373107B1, 2022-06-28
US6182016B1, 2001-01-30
US10984145B1, 2021-04-20
US20150088803A1, 2015-03-26
Other References:
ZUBATYUK ROMAN, SMITH JUSTIN S., LESZCZYNSKI JERZY, ISAYEV OLEXANDR: "Accurate and transferable multitask prediction of chemical properties with an atoms-in-molecules neural network", SCIENCE ADVANCES, vol. 5, no. 8, 2 August 2019 (2019-08-02), XP093091420, DOI: 10.1126/sciadv.aav6490
Claims:
CLAIMS

1. A system comprising: a molecular embedder configured to receive structural and chemical information associated with a single-molecule ingredient from a plurality of single-molecule ingredients, the molecular embedder further configured to generate a representation of the single-molecule ingredient; a preparation modeler coupled to the molecular embedder and configured to receive a plurality of representations of single-molecule ingredients and preparation instructions, the preparation modeler further configured to generate a representation of the prepared ingredients; and a predictor coupled to the preparation modeler and configured to receive a representation of the prepared ingredients and generate predicted characteristics of the prepared ingredients.

2. The system of claim 1, wherein the predicted characteristics of the prepared ingredients are represented as a vector.

3. The system of claim 2, wherein the vector includes subjective information from sensory evaluation experiments.

4. The system of claim 2, wherein the vector includes objective information based on experimental measurements.

5. The system of claim 1, wherein first structural information and second structural information are associated with the representation of the prepared ingredients.

6. The system of claim 1, wherein first structural information and second structural information are associated with a topological representation of the prepared ingredients.

7. The system of claim 1, wherein first structural information and second structural information are associated with a geometric representation of the prepared ingredients.

8. The system of claim 1, wherein first structural information and second structural information are associated with a three-dimensional surface representation of the prepared ingredients.

9. A system comprising: a first system including a first molecular embedder; a second system including a second molecular embedder; a plurality of decoders, wherein each decoder is coupled to an output of the first molecular embedder and an output of the second molecular embedder, and wherein each decoder is configured to receive a representation of a plurality of single-molecule ingredients and generate a prediction vector containing a set of characteristics; a comparator coupled to the outputs of the first system and the second system, wherein the comparator is configured to generate a comparison vector of predicted characteristics; and a plurality of loss functions, wherein each loss function is coupled to one of the plurality of decoders and configured to receive a prediction vector and generate a number.

10. The system of claim 9, further comprising a plurality of loss functions coupled to the comparator and configured to receive a comparison vector and generate a number.

11. The system of claim 10, further comprising an aggregator coupled to the plurality of loss functions and generating a number.

12. The system of claim 9, further comprising a parameter manager storing current parameters of the system and the plurality of decoders.

13. The system of claim 10, further comprising an optimizer coupled to the plurality of loss functions and a parameter manager, the optimizer configured to receive parameters from the parameter manager and update the received parameters to decrease the value of at least one loss function.

14. A method comprising: receiving, by a molecular embedder, structural and chemical information associated with a single-molecule ingredient from a plurality of single-molecule ingredients; generating, by the molecular embedder, a representation of the single-molecule ingredient; receiving, by a preparation modeler, a plurality of representations of single-molecule ingredients and preparation instructions; generating, by the preparation modeler, a representation of the prepared ingredients; and receiving, by a predictor, a representation of the prepared ingredients and generating predicted characteristics of the prepared ingredients.

15. The method of claim 14, wherein the predicted characteristics of the prepared ingredients are represented as a vector.

16. The method of claim 15, wherein the vector includes subjective information from sensory evaluation experiments.

17. The method of claim 15, wherein the vector includes objective information based on experimental measurements.

18. The method of claim 14, wherein first structural information and second structural information are associated with a topological representation of the prepared ingredients.

19. The method of claim 14, wherein first structural information and second structural information are associated with a geometric representation of the prepared ingredients.

20. The method of claim 14, wherein first structural information and second structural information are associated with a three-dimensional surface representation of the prepared ingredients.

Description:
MOLECULAR EMBEDDING SYSTEMS AND METHODS

RELATED APPLICATIONS

[0001] This application is a Continuation-in-Part of United States Application Serial No. 17/691,662, entitled “Food Processing Systems and Methods,” filed March 10, 2022, the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to systems and methods to create and test food products using a variety of ingredients at the molecular level.

BACKGROUND

[0003] Existing techniques for creating new food products and associated recipes often require significant experimentation and considerable human tasting. Additionally, these existing techniques may require an experienced chef or other food product designer to create new combinations of ingredients that are likely to taste good to a human.

[0004] The techniques that require an experienced chef, significant experimentation, and many human tasting tests can be expensive and time-consuming. Further, those techniques can be limited to the chef's personal experience with different types of recipes and ingredients. The need exists for systems and methods that can create new food products and develop new recipes in a manner that accesses a wider universe of ingredients, is less expensive, and requires less trial-and-error to implement.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] Non-limiting and non-exhaustive embodiments of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various figures unless otherwise specified.

[0006] FIG. 1 is a block diagram illustrating an environment within which an example embodiment may be implemented.

[0007] FIG. 2 is a flow diagram illustrating an embodiment of a process for preparing and testing new preparation instructions.

[0008] FIG. 3 is a block diagram illustrating an embodiment of a process flow for predicting characteristics of preparation instructions.

[0009] FIG. 4 is a block diagram illustrating an embodiment of a process flow for optimizing creation of new preparation instructions.

[0010] FIG. 5 is a block diagram illustrating an embodiment of a molecular embedder.

[0011] FIG. 6 is a flow diagram illustrating an embodiment of a process for predicting taste properties.

[0012] FIG. 7 is a diagram illustrating an embodiment of a process for training a molecular embedding system.

[0013] FIG. 8 is a block diagram illustrating an embodiment of a process for comparing results from multiple encoders.

[0014] FIG. 9 illustrates an example block diagram of a computing device.

DETAILED DESCRIPTION

[0015] The perception of taste is a psychological experience based primarily on the structural and chemical molecular properties of various ingredients and their interactions with taste and smell receptors and with one another.

[0016] In some embodiments, the systems and methods discussed herein identify the objective properties of ingredients and molecules. The objective properties of various ingredients and molecules are translated into a subjective tasting experience that may include, for example, savor, smell, texture, and mouthfeel.

[0017] As discussed herein, the described systems and methods can evaluate actual human tasting results as well as the objective properties using a food processing unit (FPU) and other components or systems to generate reliable distributions of predicted user responses from a small number of actual human tastings. Thus, the systems and methods may provide alternate materials or ingredients that can be mixed and prepared to provide tasting experiences similar to traditional foods with minimal human tasting activities.

[0018] In some embodiments, the described systems and methods may identify new ingredients and preparation instructions for traditional foods that eliminate animal products, eliminate certain food allergens, replace expensive ingredients, replace ingredients that are in short supply, and the like.

[0019] In the following disclosure, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

[0020] Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.

[0021] Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.

[0022] An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a computer network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a computer network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.

[0023] Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter is described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described herein. Rather, the described features and acts are disclosed as example forms of implementing the claims.

[0024] Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, handheld devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a communication network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.

[0025] Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.

[0026] It should be noted that the sensor embodiments discussed herein may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).

[0027] At least some embodiments of the disclosure are directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.

[0028] Various terms are used in this specification to describe systems, methods, ingredients, molecular structures, processing steps, data, and the like. For example, the following terms are briefly described for a particular embodiment. It should be understood that the following descriptions are presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that different descriptions may be provided for these terms without departing from the spirit and scope of the disclosure.

[0029] In some embodiments, vectors are represented as bold, roman, lower case. Scalars may be represented as italics. Matrices may be represented as bold, roman, upper case. T may represent transpose.

[0030] The systems and methods define the fundamental and indivisible constituent of a food product as a simple (mono-molecular) ingredient - a substance containing only one type of molecule. Simple ingredients may be mixed in some proportions, forming composite ingredients. Ingredients may be further subjected to various types of transformations, such as heating or cooling. From this perspective, any food product can be described as a sequence of mixing and transformation operations applied initially to the raw simple ingredients, and then to the intermediate products, until the final product is obtained. We refer to such a sequence as “preparation instructions”, while the list of the initial ingredients and their quantities is referred to as the “formula” of the food product.

[0031] We henceforth refer to the set of characteristics of a food product pertaining to the flavor perception it generates as its “flavor profile”. A flavor profile may contain objective characteristics of the sensory response (for example, the binding affinities of the different molecular constituents of the food product to a set of known taste receptor proteins, or mechanical properties such as elasticity as a function of temperature), as well as subjective characteristics (for example, a verbal description of the food product’s taste and smell and its comparison to other reference products in the sense of some fixed flavor features such as sweetness, bitterness, sourness, texture, and mouthfeel).

[0032] In some embodiments, an ingredient is a natural or synthetic mixture of molecules in some concentrations (e.g., relative amounts). An ingredient can be simple (mono-molecular) or composite (comprising more than one molecule). Concentrations and the constituent molecules of an ingredient can be determined by chemical analytic methods such as liquid chromatography (LC) and mass spectrometry (MS).

[0033] A formula may include a list of ingredients with their quantities, which is different from a chemical formula. A mixture is the result of mixing various ingredients according to a formula. The chemical composition of a mixture may change based on chemical reactions between the constituent molecules.

[0034] A transformation is an operation or process applied to an ingredient, such as baking at 180 degrees Celsius for 5 minutes.

[0035] A preparation instruction is a directed graph starting from a formula and applying a sequence of mixtures and transformations resulting in a single output food (prepared according to the preparation instructions). In some embodiments, a food may also be an ingredient.

[0036] A subjective flavor profile may include a description of how the taste/smell of an ingredient is perceived by a human taster. It may also include one or more keywords that approximate the evoked perception, a vector of scores numerically grading different flavor features (sweetness, bitterness, and the like), or a comparison of the above features to another ingredient (e.g., A is sweeter than B, A is more bitter than C, A is as sour as D, and the like).

[0037] An objective flavor profile may include measurable physical and chemical characteristics such as pH, viscosity, hardness, and the like.

[0038] A flavor profile may be a combination of the subjective flavor profile and the objective flavor profile.

[0039] In some embodiments, the systems and methods described herein may receive an ingredient list and a reference food. Based on the ingredient list and reference food, the systems and methods generate preparation instructions for a particular food item using one or more ingredients that differ from those of the traditional preparation instructions.

[0040] FIG. 1 is a block diagram illustrating an environment 100 within which an example embodiment may be implemented. As shown in FIG. 1, environment 100 includes a food processing unit (FPU) 102 that may implement one or more processes to replicate food characteristics using a mixture of food items, ingredients, and the like. In some embodiments, FPU 102 may replicate food characteristics based on information (e.g., data-driven elements) stored in one or more databases, as discussed herein.

[0041] In some implementations, FPU 102 may contain or access a digitization of one or more food features using various combinations of subjective food tastings, mixture prediction, and molecule taste prediction. FPU 102 then generates new preparation instructions that are similar to the food to be created. A profile of the food to be created may be generated from the subjective food tastings, analytical data (e.g., liquid chromatography mass spectrometry), and other information discussed herein.

[0042] As shown in FIG. 1, FPU 102 includes a molecular embedder 104 capable of performing a molecular embedding process. The molecular embedding process can produce a representation of the chemical and structural information of a mono-molecular tastant substance from which its flavor profile can be predicted. A tastant substance is any substance capable of producing a taste sensation (e.g., eliciting gustatory and/or olfactory excitation).

[0043] In some embodiments, molecular embedder 104 is implemented as a learned model that conceptually follows an auto-encoder architecture. The input to the encoder model is a molecular profile that includes the molecular structure and its chemical and physical properties, which is collectively denoted by the vector m. The output of the encoder model is a latent vector z = E(m). A decoder D is a learned model that receives a latent vector z representing the mono-molecular tastant substance and predicts a property of the mono-molecular tastant substance.

[0044] In some embodiments, multiple decoding heads are used, such as:

[0045] 1. D_auto ≈ E^-1 - A model predicting the molecular profile vector m itself, ensuring that D_auto ∘ E ≈ Id, which makes the latent representation complete with respect to the input molecule.

[0046] 2. D_sens - A model predicting the sensory response of certain gustative and olfactory receptor cells.

[0047] For simplicity of explanation, when describing the systems and methods herein, the explanation may refer to the encoder model as a deterministic one. A specific embodiment may instead represent, in some parametric form, the distribution of E(m) in the latent space.
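By way of illustration only, the following is a minimal PyTorch sketch of the encoder E and the decoding heads D_auto and D_sens described above. All dimensions, layer sizes, and the receptor count are illustrative assumptions rather than values taken from this disclosure.

```python
# Sketch of the auto-encoder-style molecular embedder: an encoder E producing
# a latent vector z = E(m), plus two decoding heads. Sizes are assumptions.
import torch
import torch.nn as nn

MOL_DIM = 256    # assumed length of the molecular profile vector m
LATENT_DIM = 64  # assumed length of the latent vector z

class Encoder(nn.Module):
    """E: maps a molecular profile m to a latent vector z = E(m)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(MOL_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT_DIM))
    def forward(self, m):
        return self.net(m)

class DecoderAuto(nn.Module):
    """D_auto ~ E^-1: reconstructs m from z so that D_auto(E(m)) ~ m."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, MOL_DIM))
    def forward(self, z):
        return self.net(z)

class DecoderSens(nn.Module):
    """D_sens: predicts responses of a set of taste/smell receptor cells."""
    def __init__(self, n_receptors=32):  # receptor count is an assumption
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, n_receptors)
    def forward(self, z):
        return self.net(z)

E, D_auto, D_sens = Encoder(), DecoderAuto(), DecoderSens()
m = torch.randn(8, MOL_DIM)            # a batch of molecular profiles
z = E(m)                               # latent representations
m_hat, sens = D_auto(z), D_sens(z)     # reconstruction and receptor responses
```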

[0048] As illustrated in FIG. 1, FPU 102 also includes a mixture modeler 106 capable of producing a representation of composite tastants that include multiple molecules. In some embodiments, mixture modeler 106 is a learned embedding model that receives an unordered collection Z = {z_1, z_2, ..., z_n} of a fixed arbitrary number n of molecular embeddings. Mixture modeler 106 also receives a vector α = (α_1, α_2, ..., α_n) on the probability simplex representing their relative quantities in the mixture. Mixture modeler 106 produces an output that is another latent vector w = M(Z, α). For simplicity and based on a simple transitivity property, the following discussion assumes n = 2, such that w = M(z_1, z_2, α, 1 - α).

[0049] In some implementations, mixture modeler 106 is built to approximately satisfy homogeneity and additivity under the mixture, such as:

[0050] M(z_1, z_2, α, 1 - α) = α M(z_1) + (1 - α) M(z_2)

[0051] In some embodiments, for purposes of convenience, the coordinate system is defined such that water is represented as zero.

[0052] In particular implementations, using mixture modeler 106 and asserting one of the mixands to be a solvent (e.g., water), the systems and methods can define another decoder head operating on the mixture representation space:

[0053] D_subj - A model predicting the subjective flavor profile. For example, in the case of a molecule m at concentration α in water, D_subj(α M ∘ E(m)) = f predicts the perceived flavor characteristics, such as flavor categories, flavor feature scores, and relations to reference flavors, which are collectively denoted by the (pseudo-)vector f.

[0054] In some embodiments, the described systems and methods may assert the same space suiting both mono-molecular and mixture embeddings. In these implementations, the systems and methods use z and M(z) interchangeably (e.g., referring to both as z), such that the systems and methods may assume M ∘ E in place of E.
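As a minimal sketch under stated assumptions, a mixture modeler M of this form can be made to satisfy the homogeneity and additivity property exactly by construction, with water mapped to the zero vector per paragraph [0051]. The linear map M and the D_subj head below are illustrative placeholders, not the disclosed implementation.

```python
# Sketch of a mixture modeler M satisfying
# M(z1, z2, alpha, 1 - alpha) = alpha*M(z1) + (1 - alpha)*M(z2),
# with water represented as the zero vector. Sizes are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64
M = nn.Linear(LATENT_DIM, LATENT_DIM, bias=False)  # no bias keeps M(0) = 0

def mix(z1, z2, alpha):
    """w = alpha*M(z1) + (1 - alpha)*M(z2): linear, hence homogeneous/additive."""
    return alpha * M(z1) + (1.0 - alpha) * M(z2)

water = torch.zeros(LATENT_DIM)       # convention: water is the origin
D_subj = nn.Linear(LATENT_DIM, 16)    # assumed subjective flavor head

def flavor_of_solution(z_molecule, alpha):
    """f = D_subj(alpha * M(z)): a molecule at concentration alpha in water."""
    return D_subj(mix(z_molecule, water, alpha))

f = flavor_of_solution(torch.randn(LATENT_DIM), alpha=0.3)
```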

[0055] In some embodiments, FPU 102 further includes a preparation process modeler 108 capable of representing the effect of cooking and preparation processes on the latent representation. In some situations, a preparation process model may also be referred to as a precision graph or cooking graph.

[0056] In particular implementations, preparation process modeler 108 models a single step of the preparation process as a transformation of the latent space T(w) = w’. Using these terms, preparation instructions can be thought of as the composition of binary mixture and unary preparation operations. For example:

[0057] T_2(M(T_1(M(z_1, z_2, α)), z_3, α')) = T_2(α' T_1(α z_1 + (1 - α) z_2) + (1 - α') z_3).

[0058] In some embodiments, such a sequence can be represented as a tree with basic mono-molecular ingredients as the leaf nodes and the final food product at the root. The ingredients themselves, Z = (z_1, z_2, ..., z_n), and their relative quantities α = (α_1, α_2, ..., α_n) can be referred to as the formula of the food, which is different from the chemical formula. In some implementations, preparation instructions may be represented using a shorthand notation T(Z, α).
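As an illustrative sketch only, such a preparation tree can be evaluated bottom-up in a few lines of Python; the node classes, the evaluate() recursion, and the placeholder transformations below are hypothetical, and mixing is assumed to act linearly on the latent vectors per the homogeneity property above.

```python
# Sketch: preparation instructions as a tree of binary Mix and unary Transform
# nodes, with mono-molecular ingredient embeddings at the leaves.
from dataclasses import dataclass
from typing import Callable, Union
import torch

Node = Union["Leaf", "Mix", "Transform"]

@dataclass
class Leaf:                 # a mono-molecular ingredient embedding z
    z: torch.Tensor

@dataclass
class Mix:                  # binary mixture with relative quantity alpha
    left: "Node"
    right: "Node"
    alpha: float

@dataclass
class Transform:            # unary preparation step, e.g. heating: T(w) = w'
    child: "Node"
    T: Callable[[torch.Tensor], torch.Tensor]

def evaluate(node) -> torch.Tensor:
    """Fold the tree from the leaves to the root (the final food product)."""
    if isinstance(node, Leaf):
        return node.z
    if isinstance(node, Mix):
        return node.alpha * evaluate(node.left) + (1 - node.alpha) * evaluate(node.right)
    return node.T(evaluate(node.child))

# Example shape: T2(M(T1(M(z1, z2, alpha)), z3, alpha'))
z1, z2, z3 = (torch.randn(64) for _ in range(3))
T1 = T2 = torch.tanh                 # placeholder transformations
tree = Transform(Mix(Transform(Mix(Leaf(z1), Leaf(z2), 0.7), T1),
                     Leaf(z3), 0.5), T2)
food = evaluate(tree)                # latent vector of the final product
```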

[0059] As shown in FIG. 1, FPU 102 also includes a virtual tasting system 110 capable of providing a virtual tasting room for testing new food products, food ingredients, and the like. In some embodiments, virtual tasting system 110 can predict which users may like a particular food product and which users may be the best testers of new food products or of two or more similar food products.

[0060] Virtual tasting system 110 may support food testing and obtaining feedback on new food products using a smaller group of human testers. Instead of testing food products with a large number of random people, virtual tasting system 110 can provide valuable feedback on new food products using a smaller number of human testers. For example, the human testers for a new food product may be selected based on the human testers’ food preferences, previous tasting event results, and the like.

[0061] In some embodiments, a tasting event produces various results that may include data related to taster preferences for one or more food products or compounds. Based on these results, each taster’s profile may be updated based on their tasting preferences, and each food product’s profile may be updated based on the tasting results from multiple tasters.

[0062] In some embodiments, virtual tasting system 110 may implement graph learning methods by, for example, predicting a taster’s response to a substance. Based on sparse data collected from multiple tasters related to multiple substances, a deep neural network may be trained that recreates the geometry of the taster space (e.g., the intra-taster relations) and the geometry of the substance space (e.g., the intra-substance relations). Additionally, the deep neural network may be trained to recreate the interrelation between the tasters’ graph and the substances’ graph. In some embodiments, virtual tasting system 110 also supports the generation of new tasters, based on the required demographic and other background questionnaires, and prediction of the new tasters’ response to a variety of substances.

[0063] In some embodiments, FPU 102 further includes a food model trainer 112 capable of training food models using a multi-task learning approach. In some implementations, individual models (e.g., molecular embedding models, mixture models, and preparation process models) can be pre-trained using individual sets of tasks followed by joint fine-tuning. Example learning tasks may include the following:

[0064] 1. Homogeneity: asserting that given mixtures of ingredients z_1, z_2 in concentrations α, 1 - α, and the flavor profile f of the mixture:

[0065] D_subj(α z_1 + (1 - α) z_2) = f

[0066] 2. Transformed flavor profile: given pairs of flavor profiles (f, f') of ingredients before and after a certain preparation process (e.g., heating to 180 degrees Celsius for 15 minutes), the transformation model can be trained by minimizing the discrepancy of the predicted taste profiles, D_subj ∘ T(f) and D_subj(f').

[0067] 3. Transformed chemistry: given pairs of chemical compositions ((Z, α), (Z', α')) of ingredients before and after a certain preparation process, the transformation model can be trained by minimizing the discrepancy of the predicted molecular profiles, T(α_1 z_1 + ... + α_n z_n) and α'_1 z'_1 + ... + α'_n' z'_n'.
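The following sketch expresses these three tasks as loss functions, assuming mean-squared error as the discrepancy measure (the text does not fix one) and treating the arguments as single, already-embedded training examples.

```python
# Sketch of the three pre-training losses; MSE is an assumed discrepancy.
import torch
import torch.nn.functional as F

def homogeneity_loss(D_subj, z1, z2, alpha, f):
    """Task 1: D_subj(alpha*z1 + (1 - alpha)*z2) should match the measured f."""
    return F.mse_loss(D_subj(alpha * z1 + (1 - alpha) * z2), f)

def transformed_flavor_loss(D_subj, T, z, f_after):
    """Task 2: the flavor predicted after the latent transform T should match
    the flavor profile measured after the real preparation step."""
    return F.mse_loss(D_subj(T(z)), f_after)

def transformed_chemistry_loss(T, Z, a, Z_after, a_after):
    """Task 3: T applied to the mixed pre-preparation embedding should match
    the embedding of the measured post-preparation composition."""
    before = sum(ai * zi for ai, zi in zip(a, Z))
    after = sum(ai * zi for ai, zi in zip(a_after, Z_after))
    return F.mse_loss(T(before), after)
```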

[0068] As shown in FIG. 1, FPU 102 also includes an inverse modeler 114, which takes the approach of designing a food product in an inverse manner. For example, inverse modeler 114 may solve an inverse problem that includes finding a formula having n molecular ingredients M = (m_1, m_2, ..., m_n), their quantities α, and the preparation process T(E(M), α). Inverse modeler 114 attempts to satisfy the following list of constraints. In particular embodiments, some or all of the constraints can be equivalently cast as optimization objectives.

[0069] 1. Number of ingredients

[0070] 2. Similarity to a target flavor profile: D_subj ∘ T(E(M), α) = f_target, where f_target denotes the target flavor profile

[0071] 3. Nutritional values of the molecular ingredients

[0072] 4. Product cost, including the sum of the cost of each m_i weighted by α_i and, in some situations, the cost of all preparation stages comprising T.

[0073] The solution of the inverse problem can be carried out using regular backpropagation techniques.
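One way such a backpropagation-based inversion could look is sketched below, under the simplifying assumption that the candidate ingredient embeddings Z are fixed and only the quantities α are optimized; a softmax over free logits keeps α on the probability simplex. Function names and hyperparameters are hypothetical.

```python
# Sketch: solve the inverse problem for the quantities alpha by gradient descent.
import torch
import torch.nn.functional as F

def invert(Z, D_subj, T, f_target, steps=500, lr=0.05):
    """Find alpha so that D_subj(T(sum_i alpha_i * z_i)) approximates f_target."""
    logits = torch.zeros(len(Z), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        alpha = torch.softmax(logits, dim=0)         # stays on the simplex
        w = T(sum(a * z for a, z in zip(alpha, Z)))  # forward model
        loss = F.mse_loss(D_subj(w), f_target)       # constraint as objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```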

[0074] In the case where the encoder E is stochastic, rather than getting a single solution, the systems and methods produce a posterior distribution from which multiple solution candidates can be sampled.

[0075] In some embodiments, FPU 102 further includes a preparation instruction manager 116 capable of storing and managing various preparation instructions. For example, preparation instruction manager 116 may track various ingredients, mixture ratios, and processing steps for different preparation instructions. Additionally, preparation instruction manager 116 may record tasting results (both subjective and objective) for various preparation instructions so the data can be used for creating different preparation instructions in the future. Preparation instruction manager 116 may also monitor and record visual, mechanical, and chemical properties of the prepared food.

[0076] In some embodiments, environment 100 further includes subjective flavor measurement data 118, objective flavor measurement data 120, ingredient data 122, and preparation instruction data 124. Subjective flavor measurement data 118 may include subjective results associated with an ingredient or preparation instruction by a human user. For example, subjective flavor measurement data 118 may include human user opinions regarding taste, texture, odor, and the like for a particular preparation instruction or ingredient.

[0077] In some embodiments, objective flavor measurement data 120 includes objective results associated with an ingredient or preparation instruction. For example, objective flavor measurement data 120 may include objective flavor profile data that is created or predicted using the systems and methods described herein. The objective flavor profile data may include predicted data regarding taste, texture, odor, and the like for a particular preparation instruction or ingredient.

[0078] Ingredient data 122 may include information associated with particular ingredients, such as an ingredient flavor profile, taste testing results associated with the ingredient, preparation instructions that include the ingredients, and the like. Preparation instruction data 124 may include information associated with various preparation instructions. In some embodiments, preparation instruction data 124 includes preparation instruction ingredients, preparation instruction mixing instructions, preparation instruction process, preparation instruction flavor profiles, preparation instruction taste testing results, and the like.

[0079] In some embodiments, various ingredient data and preparation instruction data may be accessed or received from public databases combined with a measured outcome (e.g., objective or subjective features). In some implementations, the systems and methods described herein may perform pairwise comparisons or absolute taste grades with respect to different features, flavor keywords, and the like. In the case of absolute taste grades, the systems and methods may add heads that predict those characteristics.

[0080] It will be appreciated that the embodiment of FIG. 1 is given by way of example only. Other embodiments may include fewer or additional components without departing from the scope of the disclosure. Additionally, illustrated components may be combined or included within other components without limitation.

[0081] FIG. 2 is a flow diagram illustrating an embodiment of a process 200 for preparing and testing new preparation instructions. Initially, process 200 obtains 202 samples of a target food. In some embodiments, a target food may be a traditional food that is being copied by creating new preparation instructions with different ingredients but a similar flavor profile. For example, the target food may be a traditional food that includes one or more animal-based ingredients. The systems and methods described herein are used to prepare a new version of the traditional food without animal-based ingredients while maintaining the traditional food’s flavor profile.

[0082] Process 200 continues by identifying 204 subjective flavor measurements associated with the target food. For example, the subjective flavor measurements may include taste, texture, smell, and the like. In particular implementations, the subjective flavor measurements are based on responses from human users who tasted the target food.

[0083] Process 200 then identifies 206 objective flavor measurements associated with the target food. For example, the objective flavor measurements may include physical and chemical information that may be used to predict taste, texture, smell, and the like. In some embodiments, the objective flavor measurements may be obtained as predictions from virtual tasting system 110 and other components of FPU 102.

[0084] The process continues by determining 208 a target flavor profile based on the subjective flavor measurements and the objective flavor measurements. This target flavor profile is used to create new preparation instructions with the same, or similar, flavor profiles as the existing food product. Process 200 then proposes 210 one or more candidate preparation instructions with predicted candidate flavor profiles based on the target flavor profile. In some embodiments, the candidate preparation instructions are expected to have predicted candidate profiles that are close to the target flavor profile.

[0085] The process continues by preparing 212 the one or more candidate preparation instructions and measuring the actual flavor profiles of the candidate preparation instructions. The process then compares the actual flavor profiles of the candidate preparation instructions to the target flavor profile. Process 200 continues by determining 214 whether the actual flavor profiles of the candidate preparation instructions are close to the target flavor profile. If the actual flavor profiles of the candidate preparation instructions are close to the target flavor profile, the process ends at 218. In some embodiments, the candidate preparation instructions that are close to the target flavor profile may be tested by one or more human users to determine whether the flavor of the food product created with one or more candidate preparation instructions is a viable replacement for the target food.

[0086] If the actual flavor profiles of the candidate preparation instructions are not close to the target flavor profile, process 200 updates 216 the candidate flavor profile based on the measured actual flavor profiles. The process then returns to 212, where the updated candidate preparation instructions are prepared and their actual flavor profiles are measured. The process further determines whether the actual flavor profiles of the updated candidate preparation instructions are close to the target flavor profile. This process of updating candidate preparation instructions and determining updated actual flavor profiles is repeated until the flavor profile of one or more candidate preparation instructions is close to the target flavor profile.
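A compact sketch of this propose-prepare-compare loop follows. The helpers propose, prepare_and_measure, distance, and update are hypothetical stand-ins for the FPU components described herein, and the tolerance is an arbitrary assumption.

```python
# Sketch of process 200: iterate candidate preparation instructions until one
# yields a measured flavor profile close to the target profile.
def develop_recipe(target_profile, propose, prepare_and_measure, distance,
                   update, tolerance=0.1, max_rounds=20):
    candidates = propose(target_profile)                          # step 210
    for _ in range(max_rounds):
        measured = [prepare_and_measure(c) for c in candidates]   # step 212
        best, best_profile = min(zip(candidates, measured),
                                 key=lambda cm: distance(cm[1], target_profile))
        if distance(best_profile, target_profile) <= tolerance:   # step 214
            return best                                           # step 218
        candidates = [update(c, m, target_profile)                # step 216
                      for c, m in zip(candidates, measured)]
    return None  # no candidate came close enough within the round budget
```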

[0087] FIG. 3 is a block diagram illustrating an embodiment of a process flow 300 for predicting characteristics of a particular preparation instruction. As shown in FIG. 3, process flow 300 receives multiple molecular profiles 302, 304, and 306. Each molecular profile 302-306 defines various properties of a molecule or molecular structure that may be included in the results of a preparation instruction or other mixture. The molecular profiles 302-306 are provided to a molecular embedder 308, 310, 312, respectively. Molecular embedders 308-312 may be similar to molecular embedder 104 shown in FIG. 1 and discussed herein.

[0088] In process flow 300, each molecular embedder 308, 310, 312 generates a representation 314, 316, 318, respectively. Representations 314-318 of each molecule are vectors created via a (trainable) non-linear map of the input data. Each representation 314-318 contains enough dimensions such that the corresponding decoder heads can extract the required information with sufficient precision.

[0089] In some embodiments, the representations 314-318 are provided to a preparation process modeler 320. Preparation process modeler 320 may be similar to preparation process modeler 108 shown in FIG. 1 and discussed herein. Preparation process modeler 320 also receives preparation instructions 322, which may describe how the multiple molecular profiles 302-306 are mixed and processed.

[0090] Preparation process modeler 320 receives the representations of the input ingredients and generates a representation 324 of the prepared ingredient.

[0091] In some embodiments, representation 324 is provided to a predictor 326. Predictor 326 represents decoder heads that extract different objective and subjective characteristics from the representation vector regarding the food product being represented. In some embodiments, predictor 326 generates any number of predicted characteristics 328 related to the food product associated with representation 324. For example, predicted characteristics 328 may include a flavor profile associated with the food product identified in representation 324.

[0092] FIG. 4 is a block diagram illustrating an embodiment of a process flow 400 for optimizing creation of new preparation instructions. In the example of FIG. 4, one or more molecular profiles 404, 406, and 408 are selected from a universe of ingredients 402. Each molecular profile 404-408 defines various properties of a molecule or molecular structure that may be included in a preparation instruction or other mixture. The molecular profiles 404-408 are provided to a system 410 of the type shown in FIG. 3.

[0093] In some embodiments, system 410 receives a list of ingredients 404-408 and instructions about their preparation 412 (e.g., preparation instructions), then predicts a set of characteristics 414 of the final food product. Optimizer 416 may decide how to modify the candidate preparation instructions 412 to better match the objective or constraints. In some implementations, system 410 is the forward model that is inverted in the inverse modeler.

[0094] As shown in FIG. 4, system 410 generates candidate preparation instructions 412. System 410 also communicates predicted characteristics 414 to an optimizer 416. Optimizer 416 may also receive candidate preparation instructions 412. Optimizer 416 also receives objective information 418 and constraints information 420. In some embodiments, objective information 418 and constraints information 420 may be used by optimizer 416 to optimize a particular recipe. In particular implementations, optimizer 416 is part of FPU 102 working in the inverse mode (e.g., proposing new preparation instructions). For example, optimizer 416 may optimize the preparation instructions.

[0095] In the forward mode, given preparation instructions, the systems and methods described herein predict the characteristics of the preparation instructions. In the inverse mode, given particular target characteristics, the systems and methods find preparation instructions that satisfy the target characteristics.

[0096] FIG. 5 is a block diagram illustrating an embodiment of molecular embedder 104. As discussed above, some embodiments of molecular embedder 104 are capable of performing a molecular embedding process. In particular implementations, molecular embedder 104 can produce a representation of the chemical and structural information of a mono-molecular tastant substance from which its flavor profile can be predicted. A tastant substance is any substance capable of producing a taste sensation (e.g., eliciting gustatory and/or olfactory excitation). In some embodiments, a single representation is provided (e.g., a representation of chemical information, a representation of structural information, etc.).

[0097] In particular implementations, molecular embedder 104 is implemented as a learned model that conceptually follows an auto-encoder architecture. The input to the encoder model is a molecular profile that includes the molecular structure and its chemical and physical properties, which is collectively denoted by the vector m. The output of the encoder model is a latent vector z = E(m). A decoder D is a learned model that receives a latent vector z representing the mono-molecular tastant substance and predicts a property of the mono-molecular tastant substance. The main task of molecular embedder 104 is predicting a flavor profile.

[0098] In some embodiments, multiple decoding heads are used, such as:

[0099] 1. D_auto ≈ E^-1 - A model predicting the molecular profile vector m itself, ensuring that D_auto ∘ E ≈ Id, which makes the latent representation complete with respect to the input molecule.

[00100] 2. D_sens - A model predicting the sensory response of certain gustative and olfactory receptor cells.

[00101] For simplicity of explanation, when describing the systems and methods herein, the explanation may refer to the encoder model as a deterministic one. A specific embodiment may instead represent, in some parametric form, the distribution of E(m) in the latent space.

[00102] As shown in the example of FIG. 5, molecular embedder 104 may include an encoder 502, a decoder 504, a vector generator 506, a deep learning model 508, predefined features 510, and a rotation translation invariant model 512. In some embodiments, encoder 502 may receive a molecular profile and may output a vector. In some implementations, decoder 504 may receive a vector and predict a property of a mono-molecular tastant associated with the vector. In some embodiments, vector generator 506 generates a latent vector as discussed herein.

[00103] In particular embodiments, deep learning model 508 may be applied to unordered sets, as discussed herein. In some embodiments, predefined features 510 are based on the presence or absence of certain substructures and fragments. These predefined features 510 may be applied to construct a latent vector, as discussed herein.

[00104] In some implementations, rotation translation invariant model 512 is based on a graph neural network, as discussed herein.

[00105] As shown in FIG. 5, molecular embedder 104 may receive data and other inputs in the form of a string representation 514, a topological representation 516, a geometric representation 518, or a 3D (three-dimensional) surface representation 520. In some embodiments, the input to molecular embedder 104 is a molecular profile that encapsulates the molecular structure, which can be represented as one of the representations 514-520.

[00106] String representation 514 represents molecules by their simplified molecular-input line-entry system (SMILES), which is a one-line notation using strings that compactly describe the two-dimensional molecular graphs.

[00107] Topological representation 516 represents molecules as a graph with atoms represented as (attributed) nodes and bonds represented by (attributed) edges.

[00108] Geometric representation 518 represents molecules using topological representation 516 with geometric information, such as atom coordinates relative to a reference coordinate system.

[00109] 3D surface representation 520 represents molecules based on a three-dimensional molecular surface. The surface portrays the charge distributions generated from the molecule’s electrons and nuclei, and is expressed by a point cloud (i.e., a set of points in R^3), a mesh, or other similar surface representation. The surface can be further endowed with property fields, such as the electrostatic potential measured on the surface.

[00110] In some embodiments, the output of molecular embedder 104 includes both subjective and objective molecular information. For example, the subjective molecular information may be based on reports or data in the form of predefined keywords provided by subjects who participated in sensory evaluation experiments. The objective molecular information may be based on data obtained through experimental measurements and theoretical calculations. In some embodiments, the objective molecular information may be in the form of discrete or continuous vectors. In other embodiments, the objective molecular information may instead be used as additional input to the model concatenated with the structural information described above.

[00111] In particular implementations, the subjective molecular information can be represented as a bag-of-words, such as a discrete set of words taken from a vocabulary describing olfactory and gustatory sensations experienced through tastings (e.g., sweet, fruity, or nutty). To make flavor characteristics amenable to learning, a metric is defined.

[00112] One immediate measure is based on the Jaccard similarity:

[00113] J(F_1, F_2) = |F_1 ∩ F_2| / |F_1 ∪ F_2|

[00114] where F_1, F_2 are two distinct flavor profiles. The Jaccard distance is then defined as d_J(F_1, F_2) = 1 - J(F_1, F_2). In some implementations, the flavor profile is not directly used as the model output. Instead, an embedding is learned and flavor profiles are represented as vectors in some Euclidean space.
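The Jaccard similarity and distance can be computed directly on keyword-based flavor profiles represented as sets, as in the following short sketch (the empty-set convention is an assumption):

```python
# Jaccard similarity J and distance d_J between two keyword flavor profiles.
def jaccard_similarity(f1: set, f2: set) -> float:
    """J(F1, F2) = |F1 & F2| / |F1 | F2|; defined here as 1.0 for two empty sets."""
    if not f1 and not f2:
        return 1.0
    return len(f1 & f2) / len(f1 | f2)

def jaccard_distance(f1: set, f2: set) -> float:
    """d_J(F1, F2) = 1 - J(F1, F2)."""
    return 1.0 - jaccard_similarity(f1, f2)

print(jaccard_distance({"sweet", "fruity"}, {"sweet", "nutty"}))  # 1 - 1/3
```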

[00115] In particular implementations, subjective molecular information is the primary model output. However, to enhance performance, a multitask model may be learned and multiple decoding heads can be used. In these scenarios, objective data is incorporated and may include, for example:

[00116] (1) physicochemical properties, which are the intrinsic physical and chemical characteristics of a molecule (e.g., boiling point or volatility)

[00117] (2) gustative and olfactory receptor response to a substance (e.g., protein-ligand binding affinity).

[00118] FIG. 6 is a flow diagram illustrating an embodiment of a process 600 for predicting taste properties. Initially, a molecular embedder receives 602 a first item of structural information associated with a first molecule. Process 600 continues as the molecular embedder receives 604 a second item of structural information associated with a second molecule. The molecular embedder then receives 606 a first item of chemical information associated with the first molecule. The process continues as the molecular embedder receives 608 a second item of chemical information associated with the second molecule.

[00119] The molecular embedder then predicts 610 taste properties of a compound that contains the first molecule and the second molecule based on at least a portion of the first item of structural information, the second item of structural information, the first item of chemical information, and the second item of chemical information. In some embodiments, the first molecule and the second molecule are single-molecule ingredients.

[00120] In some embodiments, structural information may include a list of atoms and the ways the atoms are connected to each other. This structural information may be represented as a graph with nodes bearing atom type attributes (e.g., carbon, nitrogen, hydrogen, etc.) and edges connecting the nodes bearing the bond type (e.g., single covalent bond, double covalent bond, hydrogen bond, etc.). In some situations, coordinates of the atoms may be specified as well. In these situations, the bond lengths and angles can be inferred. Another way of representing structural information is the SMILES sequence (discussed herein), which captures the topology of the above graph but not the exact geometry. Chemical and physical information associated with a molecule includes molecular weight, charge, melting/boiling point, and the like.

[00121] In some embodiments, the molecular embedding model is based on deep learning models and can be described as consisting of two sub-models: an encoder model, which yields a latent vector that is then provided to a single decoder model or to multiple decoder models. The encoder may be implemented using different architectures and variants thereof, such as:

[00122] 1. A collection of predefined features based on the presence or absence of certain substructures and fragments (e.g., functional groups or rings) is applied to construct a latent vector. This yields a binary vector with length equal to the number of features, which is then used as an input to a fully connected neural network.

[00123] 2. A deep learning model on unordered sets (a sketch of this variant follows this list). This type of model takes as input, for example, a point cloud in a three-dimensional space and is defined by a set function:

[00124] f({x_1, ..., x_n}) = γ(⊕_i h(x_i)), where ⊕ is a symmetric aggregation such as max or sum

[00125] The set function f is learned and is invariant to input permutation. For example, implementations may be based on architectures similar to PointNet, dMaSIF, and variants thereof.

[00126] 3. A rotation and translation invariant model. This model is based on a graph neural network, such that a graph representation is constructed for each molecule. For example, when the input is a point cloud, each point is a node in a graph (with node features) and the connections between neighbor points are edges. A spatial encoding is learned for each point and then all individual points are aggregated into a global fingerprint. For example, possible implementations are based on the SE(3)-Transformers architecture, EGNN (Equivariant Graph Neural Network), Tensor Field Networks (TFN), SchNet, and the like.
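The sketch referenced in variant 2 above follows: a PointNet-style permutation-invariant set encoder with a per-point network h, a symmetric max aggregation, and a post-aggregation network γ. Layer sizes are illustrative assumptions, and this is not the disclosed implementation.

```python
# Sketch of a permutation-invariant set encoder over a 3D point cloud.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """f({x_1, ..., x_n}) = gamma(max_i h(x_i)); invariant to point order."""
    def __init__(self, point_dim=3, latent_dim=64):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(point_dim, 128), nn.ReLU(),
                               nn.Linear(128, 128))
        self.gamma = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                                   nn.Linear(128, latent_dim))
    def forward(self, points):             # points: (n_points, point_dim)
        per_point = self.h(points)         # (n_points, 128)
        pooled, _ = per_point.max(dim=0)   # symmetric aggregation -> (128,)
        return self.gamma(pooled)          # latent fingerprint z

enc = SetEncoder()
cloud = torch.randn(500, 3)                # e.g., a molecular-surface point cloud
z1 = enc(cloud)
z2 = enc(cloud[torch.randperm(500)])       # same set, different order
assert torch.allclose(z1, z2)              # permutation invariance holds
```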

[00127] In some embodiments, for simplicity, the encoder model is referred to as a deterministic model. However, the encoder model may instead represent, in some parametric form, the distribution of E(m) in the latent space.

[00128] In some implementations, the latent representation learned is not only used as input to the downstream decoders, but also serves as a unique molecular fingerprint, which enables molecule comparisons (e.g., the similarity of two molecules measured as the Euclidean distance between their latent representations).

[00129] In some embodiments, the encoder is concatenated with multiple decoding heads, including:

[00130] D_auto - a model predicting the molecular profile vector m itself, ensuring that D_auto ∘ E ≈ Id, which makes the latent representation complete with respect to the input molecule.

[00131] D_flavor - a model predicting the flavor profile. The model can be a classification model that predicts flavor profiles directly, outputting a vector of probabilities, and may also include a threshold detector. In other implementations, the model predicts an embedding vector in some Euclidean space. This embedding vector is the outcome of a dimension-reduction process on the high-dimensional flavor profiles, so in this context the model performs a regression task. The predicted embedding vector can then be transformed back to the original flavor profile representation using, for example, a KNN (k-nearest neighbors) algorithm or a similar approach. Other implementations may include contrastive learning, in which the model learns a representation of flavor profiles in some embedding space such that similar tastants are close to each other and dissimilar ones are far apart.
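As a hypothetical illustration of the regression-plus-KNN variant, the sketch below maps a predicted embedding back to keyword profiles by looking up its nearest neighbors in a table of known flavor-profile embeddings; all data shown are toy assumptions.

```python
# Sketch: decode a predicted flavor embedding via k-nearest neighbors.
import torch

def knn_decode(pred_embedding, known_embeddings, known_profiles, k=3):
    """Return the keyword profiles of the k nearest known embeddings."""
    dists = torch.cdist(pred_embedding[None], known_embeddings)[0]
    idx = torch.topk(dists, k, largest=False).indices
    return [known_profiles[i] for i in idx]

known = torch.randn(100, 8)                         # assumed embedding table
profiles = [{"sweet"}] * 50 + [{"bitter"}] * 50     # toy keyword profiles
print(knn_decode(torch.randn(8), known, profiles))  # profiles of 3 neighbors
```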

[00132] D_pc - a model predicting physicochemical properties of substances. The model predicts a vector concatenating various physicochemical features (e.g., molecular weight or boiling point). These physicochemical features typically take continuous values.

[00133] D_sens - a model predicting the sensory response of certain gustative and olfactory receptors. Tastants interact with taste and odor receptors. For example, they may bind at specific sites and the strength of the binding is characterized in terms of binding affinity (i.e., high affinity means greater attractive forces between the tastant and the receptor).

[00134] FIG. 7 is a diagram illustrating an embodiment of a process 700 for training a molecular embedding system. As shown in FIG. 7, an inference mode of operation is shown by a broken line 702. Within inference mode 702 is an encoder 704 that receives various inputs of the types described herein (structural information, chemical information, etc.) and generates a latent vector. The latent vector is provided to multiple decoders 706, 708, and 710. Each decoder 706, 708, 710 sends an output to a corresponding loss function 712, 714, and 716, respectively. Each loss function 712, 714, 716 generates an output W that is provided to a generalized loss function 718. The output of generalized loss function 718 is provided to an optimizer 720, which updates parameters (θ) of the encoder 704. As shown in FIG. 7, the updated parameters (θ) are also provided to decoders 706, 708, and 710. In some embodiments, each decoder 706, 708, and 710 may use different sets (or subsets) of parameters in θ.

[00135] The multiple decoders 706, 708, and 710 may generate different types of outputs, such as chemical properties, structural properties, physical properties, and the like. For example, the multiple decoders 706, 708, and 710 may generate outputs with different sets of properties that are provided to corresponding loss functions 712, 714, and 716. Each loss function 712, 714, and 716 indicates how well a particular property was predicted (e.g., as compared to the ground truth data associated with particular molecules). The generalized loss function 718 may be a weighted sum of the outputs of loss functions 712, 714, and 716. The optimizer 720 then tunes encoder 704 and the weights of decoders 706, 708, and 710 to generate outputs that are close to the ground truth information. The ground truth information may also be referred to as a training target.
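A minimal sketch of one training step of this arrangement follows, assuming mean-squared-error per-head losses and a fixed weight per decoding head; the structure of the heads and the weights are illustrative assumptions.

```python
# Sketch of FIG. 7: per-head losses combined into a weighted generalized loss,
# with one optimizer updating all parameters theta.
import torch
import torch.nn.functional as F

def train_step(encoder, decoders, weights, optimizer, m, targets):
    """decoders, weights, and targets are parallel lists, one entry per head."""
    z = encoder(m)                                   # latent representation
    losses = [F.mse_loss(D(z), t) for D, t in zip(decoders, targets)]
    generalized = sum(w * l for w, l in zip(weights, losses))  # weighted sum
    optimizer.zero_grad()
    generalized.backward()          # gradients flow to encoder and all heads
    optimizer.step()                # update the shared parameter set theta
    return generalized.item()
```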

[00136] In the example of FIG. 7, encoder 704 may receive chemical and structural information associated with a mono-molecular tastant substance. Encoder 704 may then produce multiple corresponding representations based on the received chemical and structural information. Decoders 706, 708, 710 receive a representation of a mono-molecular tastant substance and output features relating to subjective and objective molecular information. The features may include, for example, a flavor profile, physicochemical properties, and gustative and olfactory receptor responses.

[00137] In some embodiments, when operating in a training mode, a system may include encoder 704 receiving chemical and structural information of a mono-molecular tastant substance and producing multiple corresponding representations. Multiple decoders 706, 708, 710 receive a representation of a mono-molecular tastant substance and output features relating to subjective and objective molecular information.

[00138] Loss functions 712, 714, 716 receive training molecular structural information definitions and corresponding subjective or objective properties. The individual loss functions 712, 714, 716 are combined according to predefined weights to yield generalized loss function 718, which produces a scalar value. Optimizer 720 sets multiple parameters (θ) of the system to minimize the value of the generalized loss function.

[00139] FIG. 8 is a block diagram illustrating an embodiment of a process 800 for comparing results from multiple encoders. As shown in FIG. 8, encoders 802 and 804 receive molecular information M. Each encoder 802, 804 produces a representation R that is provided to a comparator 806. In some embodiments, encoders 802 and 804 share parameters (θ). Comparator 806 compares the multiple representations R received from encoders 802, 804. Based on the comparison, comparator 806 generates a comparison result CR.

[00140] FIG. 8 is a variant of the system shown in FIG. 7 and may be useful when trying to predict a property for which it is difficult to provide ground truth data. For example, if a system is trying to predict the sweetness of a molecule, it is difficult to quantify sweetness on a numerical scale. However, when comparing two molecules, the system of FIG. 8 can determine whether a first molecule is sweeter than a second molecule, or determine that the two molecules have approximately the same sweetness. In the example of FIG. 8, the two encoders 802 and 804 share the same parameters but receive two different molecules M1 and M2. Encoders 802 and 804 produce two different representations R1 and R2. Comparator 806 then produces a comparison result based on the two representations R1 and R2. Thus, the system of FIG. 8 is not trying to predict a specific property. Instead, the system of FIG. 8 is predicting a property of the pair (e.g., which molecule in the pair is sweeter, or whether both molecules in the pair have about the same sweetness).
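A minimal sketch of this pairwise arrangement follows; the class itself and the three-way classification head are illustrative assumptions about how the comparison result CR might be realized, with a single shared encoder standing in for encoders 802 and 804.

```python
import torch
import torch.nn as nn

# Illustrative sketch of FIG. 8: the same encoder (shared parameters)
# embeds molecules M1 and M2, and a comparator head classifies the pair
# as "first sweeter", "second sweeter", or "about the same". The
# three-way head is an assumption about how CR might be realized.
class PairwiseComparator(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int):
        super().__init__()
        self.encoder = encoder                      # shared by both branches
        self.head = nn.Linear(2 * latent_dim, 3)    # three comparison outcomes

    def forward(self, m1: torch.Tensor, m2: torch.Tensor) -> torch.Tensor:
        r1, r2 = self.encoder(m1), self.encoder(m2)     # representations R1, R2
        return self.head(torch.cat([r1, r2], dim=-1))   # comparison result CR (logits)
```

Because a single encoder module embeds both molecules, the two branches share parameters exactly as described for encoders 802 and 804, and pairwise labels (sweeter, less sweet, about equal) are typically easier to collect than absolute sweetness scores.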

[00141] In some embodiments, both the system of FIG. 7 and the system of FIG. 8 may be used at the same time to predict or model various molecular properties and molecular representations.

[00142] FIG. 9 illustrates an example block diagram of a computing device 900 suitable for implementing the systems and methods described herein. In some embodiments, a cluster of computing devices interconnected by a communication network may be used to implement any one or more components of the systems discussed herein.

[00143] Computing device 900 may be used to perform various procedures, such as those discussed herein. Computing device 900 can function as a server, a client, or any other computing entity. Computing device 900 can perform various functions as discussed herein, and can execute one or more application programs, such as the application programs described herein. Computing device 900 can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, a tablet computer, and the like.

[00144] Computing device 900 includes one or more processor(s) 902, one or more memory device(s) 904, one or more interface(s) 906, one or more mass storage device(s) 908, one or more Input/Output (I/O) device(s) 910, and a display device 930, all of which are coupled to a bus 912. Processor(s) 902 include one or more processors or controllers that execute instructions stored in memory device(s) 904 and/or mass storage device(s) 908. Processor(s) 902 may also include various types of computer-readable media, such as cache memory.

[00145] Memory device(s) 904 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 914) and/or nonvolatile memory (e.g., read-only memory (ROM) 916). Memory device(s) 904 may also include rewritable ROM, such as Flash memory.

[00146] Mass storage device(s) 908 include various computer-readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in FIG. 9, a particular mass storage device is a hard disk drive 924. Various drives may also be included in mass storage device(s) 908 to enable reading from and/or writing to the various computer-readable media. Mass storage device(s) 908 include removable media 926 and/or non-removable media.

[00147] I/O device(s) 910 include various devices that allow data and/or other information to be input to or retrieved from computing device 900. Example I/O device(s) 910 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.

[00148] Display device 930 includes any type of device capable of displaying information to one or more users of computing device 900. Examples of display device 930 include a monitor, display terminal, video projection device, and the like.

[00149] Interface(s) 906 include various interfaces that allow computing device 900 to interact with other systems, devices, or computing environments. Example interface(s) 906 include any number of different network interfaces 920, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 918 and peripheral device interface 922. The interface(s) 906 may also include one or more peripheral interfaces, such as interfaces for printers, pointing devices (mice, track pads, etc.), keyboards, and the like.

[00150] Bus 912 allows processor(s) 902, memory device(s) 904, interface(s) 906, mass storage device(s) 908, and I/O device(s) 910 to communicate with one another, as well as other devices or components coupled to bus 912. Bus 912 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.

[00151] For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 900, and are executed by processor(s) 902. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.

[00152] While various embodiments of the present disclosure are described herein, it should be understood that they are presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The description herein is presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the disclosed teaching. Further, it should be noted that any or all of the alternate implementations discussed herein may be used in any combination desired to form additional hybrid implementations of the disclosure.