


Title:
AUTOMATED KNOWLEDGE EXTRACTION AND REPRESENTATION FOR COMPLEX ENGINEERING SYSTEMS
Document Type and Number:
WIPO Patent Application WO/2021/247831
Kind Code:
A1
Abstract:
Engineering design is a complex and time-consuming process that can be characterized by a series of decisions. An engineering computing system that includes a design application can train or otherwise help engineers in place of an expert. The systems described herein can improve design cycle times, among other technical improvements. In particular, schemas for systems states are dynamically generated and manifolds defining the designs are adaptively learned through online refinement of state embedding models. Patterns in design decisions can be extracted to automate design decisions and predict next decisions that an engineer may take. Such predictions can be achieved by generating feature-based vectorization of designs and processes, which makes the gathered knowledge utilizable in imitation learning. Furthermore, the learning process can be contextualized by encoding the requirements associated with the engineering process, thereby enabling the generation of real-time in-product contextual decision recommendations.

Inventors:
RAMAMURTHY ARUN (US)
SRIVASTAVA SANJEEV (US)
MIRABELLA LUCIA (US)
GRUENEWALD THOMAS (US)
JIN HYUNJEE (US)
SUNG WOONGJE (US)
PINON-FISCHER OLIVIA (US)
Application Number:
PCT/US2021/035655
Publication Date:
December 09, 2021
Filing Date:
June 03, 2021
Assignee:
SIEMENS CORP (US)
GEORGIA TECH RES INST (US)
International Classes:
G06N5/02; G06N3/04; G06N3/08; G06N7/00
Domestic Patent References:
WO2018183275A12018-10-04
Other References:
RAMAMURTHY ARUN: "A REINFORCEMENT LEARNING FRAMEWORK FOR THE AUTOMATION OF ENGINEERING DECISIONS IN COMPLEX SYSTEMS COPYRIGHT © 2019 BY ARUN RAMAMURTHY", 15 January 2019 (2019-01-15), XP055855480, Retrieved from the Internet [retrieved on 20211027]
CHHABRA JASKANWAL P ET AL: "A method for model selection using reinforcement learning when viewing design as a sequential decision process", STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, SPRINGER BERLIN HEIDELBERG, BERLIN/HEIDELBERG, vol. 59, no. 5, 15 December 2018 (2018-12-15), pages 1521 - 1542, XP036757724, ISSN: 1615-147X, [retrieved on 20181215], DOI: 10.1007/S00158-018-2145-6
AHMED HUSSEIN ET AL: "Imitation Learning", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, US, vol. 50, no. 2, 6 April 2017 (2017-04-06), pages 1 - 35, XP058327577, ISSN: 0360-0300, DOI: 10.1145/3054912
Attorney, Agent or Firm:
BRAUN, Mark E. (US)
Claims:
CLAIMS

What is claimed is:

1. An engineering computing system comprising: a memory having a plurality of application modules stored thereon; and a processor for executing the application modules, the application modules comprising: a knowledge refiner configured to monitor an engineering design application and extract data from the engineering design application, the data indicative of a plurality of states of the engineering design application and a plurality of actions associated with the plurality of states; a representation learner configured to, based on the data extracted from the engineering design application, generate a vectorized representation of the plurality of states and actions of the engineering design application; and a knowledge utilization module configured to, based on the vectorized representation, predict an action for a user of the engineering design application to take so as to define a recommended action associated with a design, wherein the engineering design application can display the recommended action to the user.

2. The engineering computing system as recited in claim 1, the engineering computing system further comprising: an adaptive schema learning module configured to generate a schema for each state of the plurality of states that is unique, so as to generate a plurality of schemas.

3. The engineering computing system as recited in claim 2, wherein the representation learner is further configured to generate the vectorized representation from the plurality of schemas.

4. The engineering computing system as recited in claim 3, wherein the knowledge utilization module is further configured to train a decision behavior learning model based on the vectorized representation, such that the decision behavior learning model learns mappings between the plurality of states and the plurality of actions.

5. The engineering computing system as recited in claim 1, wherein the knowledge utilization module is further configured to update the decision behavior learning model each time that data is extracted from the engineering design application.

6. The engineering computing system as recited in claim 5, the knowledge utilization module further configured to: receive the recommended action as a first input; responsive to the user of the engineering design application executing an action so as to define an executed action, receive the executed action as a second input; and based on the first input and second input, generate an update to the decision behavior learning model.

7. The engineering computing system as recited in claim 1, wherein the knowledge refiner is specific to the engineering design application.

8. The engineering computing system as recited in claim 1, wherein the knowledge utilization module comprises a user preference learner configured to: learn one or more preferences of the user, the one or more preferences defining parameters associated with the design without affecting a performance of the design.

9. The engineering computing system as recited in claim 8, wherein the knowledge utilization module is further configured to send the engineering design application the one or more preferences of the user, such that the engineering design application applies the one or more preferences to the design.

10. The engineering computing system as recited in claim 1, wherein the knowledge utilization module is configured to generate the recommended action for the user responsive to a request from the user.

11. The engineering computing system as recited in claim 1, wherein the knowledge refiner is further configured to append context information to the data such that the context information is also appended to the vectorized representation, the context information associated with one or more requirements of the design.

12. A method performed by an engineering computing system, the method comprising: monitoring an engineering design application; extracting data from the engineering design application, the data indicative of a plurality of states of the engineering design application and a plurality of actions associated with the plurality of states; based on the data extracted from the engineering design application, generating a vectorized representation of the plurality of states and actions of the engineering design application; and based on the vectorized representation, predicting an action for a user of the engineering design application to take, so as to define a recommended action associated with a design.

13. The method as recited in claim 12, the method further comprising: displaying the recommended action to the user.

14. The method as recited in claim 12, the method further comprising: generating a schema for each state of the plurality of states that is unique, so as to generate a plurality of schemas.

15. The method as recited in claim 14, the method further comprising: generating the vectorized representation from the plurality of schemas.

16. The method as recited in claim 15, the method further comprising: training a decision behavior learning model based on the vectorized representation, such that the decision behavior learning model learns mappings between the plurality of states and the plurality of actions.

17. The method as recited in claim 12, the method further comprising: learning one or more preferences of the user, the one or more preferences defining parameters associated with the design without affecting a performance of the design.

18. The method as recited in claim 12, the method further comprising: generating the recommended action for the user responsive to a request from the user.

19. The method as recited in claim 12, the method further comprising: appending context information to the data such that the context information is also appended to the vectorized representation, the context information associated with one or more requirements of the design.

20. A non-transitory computer-readable storage medium including instructions that, when processed by a computing system, cause the computing system to perform the method according to any one of claims 12 to 19.

Description:
AUTOMATED KNOWLEDGE EXTRACTION AND REPRESENTATION FOR COMPLEX ENGINEERING SYSTEMS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application Serial No. 62/033,867 filed on June 3, 2020, the disclosure of which is incorporated herein by reference in its entirety.

BACKGROUND

[0002] Engineering design can be generally characterized as a series of decisions that lead to a final prototype. In some cases, throughout the design process, engineers make decisions concerning the type of model(s) to be used, the appropriate parameter settings, the system architecture, etc. These decisions, which are made at different levels of abstraction, are often undertaken with a desired goal. For example, the decisions can define an exploratory search to identify a feasible and viable concept architecture, or a detailed analysis for the purpose of estimating performance metrics.

[0003] One of the primary challenges experienced by design engineers, before the mastery of a design application, is that the expertise gained by one engineer is often not easily transferrable to another. While engineers can collaboratively gain experience (e.g., an expert engineer trains a novice in the use of a design system in the context of the problem), continued access to sufficient expert engineers is often limited, such that design processes are often inefficient or result in suboptimal designs.

SUMMARY

[0004] Methods and systems are disclosed for a design system that can computationally represent engineering expertise. In accordance with various embodiments described herein, an engineering computing system that includes a design application can train or otherwise help engineers in place of an expert. The engineering systems described herein can improve design cycle times, among other technical improvements.

[0005] In an example aspect, an engineering computing system includes one or more processors and a memory having a plurality of application modules stored thereon. The modules can include a knowledge refiner configured to monitor an engineering design application. The knowledge refiner can be further configured to extract data from the engineering design application. The data can indicate a plurality of states of the engineering design application and a plurality of actions associated with the plurality of states. The modules can further include a representation learner configured to, based on the data extracted from the engineering design application, generate a vectorized representation of the plurality of states and actions of the design application. The engineering computing system can also define a knowledge utilization module configured to, based on the vectorized representation, predict an action for a user of the design application to take so as to define a recommended action associated with a design. The recommended action can be provided to the engineering design application, such that the engineering design application can display the recommended action to the user. In an example aspect, the engineering computing system further includes an adaptive schema learning module configured to generate a schema for each state of the plurality of states that is unique, so as to generate a plurality of schemas. The representation learner can be further configured to generate the vectorized representation from the plurality of schemas. Further, the knowledge utilization module can be further configured to train a decision behavior learning model based on the vectorized representation, such that the decision behavior learning model learns mappings between the plurality of states and the plurality of actions.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0007] FIG. 1 is a block diagram of an example engineering computing system according to an example embodiment.

[0008] FIG. 2 depicts a requirements-based contextualization of an example design, in accordance with an example embodiment.

[0009] FIG. 3 illustrates an example hierarchical decision behavior learning model that is included as part of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.

[0010] FIG. 4 is a call flow that depicts example operations that can be performed by a knowledge extraction and representation module of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.

[0011] FIG. 5 is a call flow that depicts example operations that can be performed by a knowledge utilization module of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.

[0012] FIG. 6 shows an example of a computing environment within which embodiments of this disclosure may be implemented.

DETAILED DESCRIPTION

[0013] As an initial matter, it is recognized herein that various engineering designs can be characterized as a series of decisions or sequence of actions. It is further recognized herein that the sequence of actions can be triggered by observations of a current design state or can result from instantaneous behaviors that are derived from design requirements or domain expertise. In a modern digital setting, the design process is often carried out through the use of design tools that are typically software applications, such that design at any instant of time can be perceived or defined through the abstract framework of product design, modeling, and simulation. In various embodiments described herein, the underlying logic behind the decisions mentioned above can be identified, captured, and abstracted into an application or system, such that the application or system can intelligently support and guide an engineer’s decision making.

[0014] Given a design system, the task of decision making can be mathematically represented by a modification to the Markov Decision Process (MDP) as (S, A, P_a, R_a, c). Here, the term S corresponds to the set of states that a design system can take at any instant of time, A corresponds to the set of actions that can be performed, P_a represents the state transition probability matrix, R_a is the reward that is perceived by the engineer resulting from the transition, and the new term c represents the context (e.g., design requirements) associated with the design process. For a design system, the states of the MDP and the actions can be extracted through intelligent automation. It is recognized herein that reward can be a measure of the conformance of a designed part (or portion of a design system) to the specified requirements, but it can be hard to measure as it is often implicitly assessed by the engineer.
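
As a concrete illustration of the tuple above, the following sketch holds the contextual MDP (S, A, P_a, R_a, c) as a plain Python data structure. The class name and field layout are assumptions made only for this example and are not part of the disclosed implementation.

```python
# Hypothetical sketch of the contextual MDP (S, A, P_a, R_a, c) described above,
# held as a plain data structure. Names and field types are illustrative only.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple


@dataclass
class ContextualMDP:
    states: List[Any]                                     # S: states the design system can take
    actions: List[Any]                                    # A: actions that can be performed
    transition_probs: Dict[Tuple[Any, Any, Any], float]   # P_a: keyed by (state, action, next_state)
    reward: Callable[[Any, Any, Any], float]              # R_a: reward perceived by the engineer
    context: Dict[str, Any] = field(default_factory=dict) # c: design requirements / context
```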

[0015] It is further recognized herein that a technical challenge in predicting design actions and states is determining the set of possible states a priori, as states can be perceived during run time from a set of past designs. In some cases, however, the set of states perceived from a set of past designs might not capture all possible variations of the design. Furthermore, the set of possible actions associated with a design tool or application may be enumerable during the design of the system. Further still, changes to the system (e.g., version updates) can imply a reconfiguration of the actions and thus, an enumeration of the possible set of actions might not result in an adaptive solution.

[0016] Thus, various technical problems are associated with generating contextual recommendations for design actions. Example technical problems include, among others: extracting or perceiving the state of a design system in such a manner that it can be consumed by machine learning algorithms; evaluating context associated with a design; identifying actions carried out by a user; and generating state-to-state transition modules that can generate appropriate recommendations.

[0017] Referring initially to FIG. 1, in accordance with various embodiments, an engineering computing system 100 can be configured to extract and represent knowledge from an engineering design system, such that an autonomous agent can learn decision behavior of one or more designers and, based on its learning, generate contextual recommendations that are integrated within the engineering design system. The engineering computing system 100 can include one or more processors and memory having stored thereon applications, agents, and computer program modules including, for example, an engineering design system or application 102 and an autonomous agent module 104. Similarly, the autonomous agent module 104 can include one or more processors and memory having stored thereon applications, agents, and computer program modules including, for example, a knowledge extraction and representation module 106 and a knowledge utilization module 108.

[0018] It will be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 1 are merely illustrative and not exhaustive, and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 1 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 1 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program modules depicted in FIG. 1 may be implemented, at least partially, in hardware and/or firmware across any number of devices.

[0019] Still referring to FIG. 1, in various examples, the autonomous agent module 104 can be adapted to any black-box or open-source design system or application 102, which can be extended with custom knowledge refinement plugins. For example and without limitation, the design system 102 can define various design applications such as Simcenter 3D, HEEDS, NX CAM, Amesim, Solidworks, CATIA, ModelCenter, iSight, ANSYS Mechanical, ANSYS FLUENT, FreeCAD, OpenSCAD, MeshLAB, Slic3r, or the like. As described further herein, the autonomous agent module 104 can use reinforcement learning to learn decision behavior of a human designer. Further, in an example, the autonomous agent module 104 can interface with external databases, such as a knowledge database 110, to abstract learned decision policies across multiple designers.

[0020] The knowledge extraction and representation module 106 can be configured to extract representations of the engineering design system 102. In particular, the knowledge extraction and representation module 106 can extract representations of application states and user actions associated with the design system 102. The knowledge extraction and representation module 106 can also convert the representations into a form suitable for machine learning. In some cases, to extract the representations, the knowledge extraction and representation module 106 can monitor the design application 102 for user actions and associated changes to the state of the design application 102, if any. As used herein, unless otherwise specified, design system 102 and design application 102 can be used interchangeably, without limitation. During the extraction, the knowledge extraction and representation module 106 can compile the state of the system 102 by inspecting the instantaneous state of the system 100 in the form specific to the application under consideration. For example, if the design system 102 defines a computer-aided design (CAD) application, the knowledge extraction and representation module 106 can inspect a design representation tree of the CAD application. Furthermore, during the extraction, the knowledge extraction and representation module 106 can retrieve information associated with the action performed by a user (or an automated agent) that results in the system state. The knowledge extraction and representation module 106 can retrieve such information by inspecting the design application 102, for instance using a signal event from the design application 102, or by inspecting an application log file 101 from the design application 102. In some cases, autonomous agent 104 can define a plugin of the design application 102, and the signal event can be triggered by the design application 102. For example, the design application 102 can inform the autonomous agent 104 when there is any change to the state of the system. In some examples, a change to the state occurs when the user performs an operation that changes any parameter within the design application 102.

[0021] The application log file 101 can define an application-specific representation of the associated action. By way of example, if the design system 102 defines a CAD application, the application log file 101 can define a hash-table representation of the associated action. Thus, in some cases, the knowledge extraction and representation module 106 can extract a state-action pair that can be inserted into a process graph that represents transitions that have been made by a user in reaching the current state of the system 102. In various examples, the knowledge extraction and representation module 106 can include a knowledge refiner 112 configured to generate the process graphs. The knowledge refiner 112 can insert state-action pairs into a process graph so as to represent the transitions made by a user in reaching the current state of the system 102.
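
The following is a minimal sketch of such a process graph: nodes hold observed design states and edges record the action that led from one state to the next. The class, method names, and the impeller identifiers are assumptions made for illustration, not the patent's actual implementation.

```python
# Minimal sketch of a process graph: nodes are observed design states, edges record the
# action that led from one state to the next. Names are illustrative assumptions.
class ProcessGraph:
    def __init__(self):
        self.nodes = {}   # state_id -> state representation (e.g., serialized feature tree)
        self.edges = []   # (prev_state_id, action, next_state_id)

    def add_transition(self, prev_state_id, prev_state, action, next_state_id, next_state):
        """Insert a state-action pair representing one transition made by the user."""
        self.nodes.setdefault(prev_state_id, prev_state)
        self.nodes.setdefault(next_state_id, next_state)
        self.edges.append((prev_state_id, action, next_state_id))


# Example: the impeller scenario from the description, with hypothetical identifiers.
graph = ProcessGraph()
graph.add_transition(
    "state_0", {"feature_tree": ["base", "blade_sketch"]},
    {"type": "extrude", "params": {"depth": 12.0}},
    "state_1", {"feature_tree": ["base", "blade_sketch", "extrude"]},
)
```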

[0022] To further illustrate by way of an example, suppose a designer or engineer is in the process of creating a geometry (e.g., an impeller blade) in a CAD tool. The designer has created the impeller base and added a sketch of the impeller blade, thereby defining the current state of the system. The designer then performs an action of extruding the sketch of the blade. In this example, the feature tree associated with the incomplete impeller blade design defines the current state of the system, and the extrude operation and its parameters define the action. Continuing with the example, the modified feature tree that incorporates the extrude operation defines the resultant (next) state of the system.

[0023] In various examples, the knowledge refiner 112 is specific to the design application (e.g., design application 102) under consideration, while also being configured to be applicable to any design process within the application. Thus, in accordance with the example described above in which the design application 102 defines a CAD application, the knowledge refiner 112 can generate the design representation tree that represents the feature tree of the CAD model, the hash-table representation, and the process graph. In some cases, the process graph that is generated can be modeled as a Markov Decision process in which the nodes represent the states encountered by the design application 102 and the edges capture the associated actions that result in the given system state. Thus, in some cases, extracting data from the design system 102 begins when the knowledge refiner 112 parses the state of the system in any supported format, for example, a state tree. The state extracted by the knowledge refiner 112 can be handled by a data management service on the agent module 104 to create an entry in a long-term storage, for instance the knowledge database 110, which can store the raw large file representation of the state.

[0024] In some cases, to each state-action pair in the process graph, the knowledge refiner 112 can append context information. Example context information can define design requirements in natural language. To illustrate by way of example, context information can indicate that a UAV has to cover a distance of 100 meters in two minutes, or that an impeller must have 10 blades and an outer diameter of 120 mm. In some cases, design requirements can be pre-specified and fixed for a given design process.

[0025] Thus, the knowledge from a design process can be represented in various state, action, and process representations, as described above. The knowledge extraction and representation module 106 can also generate representations that can be used by machine learning algorithms, for instance machine learning performed by the knowledge utilization module 108. In particular, the knowledge extraction and representation module 106 can further include an adaptive schema learning module 114 and a manifold or representation learner 116 coupled with the adaptive schema learning module 114. The adaptive schema learning module 114 and the representation learner 116 can be configured to learn and generate vectorized representations of the knowledge extracted from the design application 102.

[0026] In some cases, differences between the various states of the design system 102 coupled with the design context (e.g., design requirements) drive the accuracy of the relationships that are learned. In particular, for example, learning of the relationships between the state of the design application 102 and the actions performed by an engineer can be affected by the various states of the design application 102 and the design requirements. Given sufficient states of design applications, an encoding model can converge to a static manifold that is unique to each design application, for instance the design application 102. To illustrate further by way of example, consider a requirement to create a beam in which the final shape and size of the beam are dictated by the requirements that are imposed. Here, the final state of the system, for instance the output of a CAD model, can be coupled with the requirements that are imposed on the design to understand why certain design decisions were taken. For example, in the case of the beam, the requirements may indicate why certain parameter values were set to 30 (as an example) instead of 50.

[0027] In some cases, the adaptive schema learning module 114 can learn an adaptive schema such that the minimum amount of information required to uniquely describe the design application state is retained for each unique state of the system 102. Because the states of the system 102 can be parametrized by a varying number of parameters, the schema learning module 114 can be adapted to learn different representations. In some examples, even when the same action set is utilized, the resultant states can be different, such that an extracted state’s schema representation may need to be updated when new information about the design process is gathered. In an example, the result of the schema learning performed by the adaptive schema learning module 114 is that the agent 104 learns a representation for the design application 102, such that each state of the design application 102 can be uniquely described with the minimum amount of information. By way of example, given a tree-like state representation of the state of the design application 102, the adaptive schema learning module 114 can implement a real-time tree differencing algorithm to identify the schema coupled with an automated abstraction to generate the minimum schema for the states. In parallel, the representation learner 116 can perform a manifold learning that utilizes instantiations of the states stored in the process graph (generated by the knowledge refiner 112) and the corresponding schemas (generated by the adaptive schema learning module 114) such that a vectorized representation can be realized for these states. As new knowledge is gathered that updates previously learned states, the vectorization can also adapt. Further, any updates to states, and thus to the vectorized representation of the states, can be provided to the knowledge utilization module 108, as the knowledge utilization module 108 can operate in parallel with the knowledge extraction routine performed by the knowledge extraction and representation module 106. The vectorization of the states can be performed using various implementations.
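
A hedged sketch of the schema-differencing idea follows: given a baseline state and a new state (both nested dicts standing in for the tree representation), only the fields that differ are retained, so the schema keeps the minimum information needed to describe the new state relative to the baseline. A real tree-differencing algorithm would also handle reordered and isomorphic branches; this simplification is only for illustration.

```python
# Simplified tree-differencing sketch: keep only the fields of the new state that differ
# from a baseline state, approximating the "minimum information" schema described above.
def minimal_schema(baseline, new_state):
    if not isinstance(baseline, dict) or not isinstance(new_state, dict):
        return new_state if new_state != baseline else None
    diff = {}
    for key, value in new_state.items():
        sub = minimal_schema(baseline.get(key), value)
        if sub is not None:
            diff[key] = sub
    return diff or None


baseline = {"extrude": {"depth": 10.0, "angle": 0.0}}
new_state = {"extrude": {"depth": 12.0, "angle": 0.0}, "fillet": {"radius": 1.5}}
print(minimal_schema(baseline, new_state))  # {'extrude': {'depth': 12.0}, 'fillet': {'radius': 1.5}}
```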

[0028] Thus, the data management service of the agent module 104 can extract a schema out of the generated state representation using the adaptive schema learning module 114. The extracted schema can be stored on a schema storage, for instance within the knowledge database 110, wherein the data associated with the states can be serialized. A threaded assessment of the state's vectorized representation can be carried out based on the extracted schema for each datapoint using the representation learner 116. For example, the representation learner 116 can adapt the manifold of the state in an online fashion, such that the observed design states can be encoded in the common latent manifold. As described herein, the knowledge extraction and representation module 106 can identify actions carried out by an engineer, such that state and action pairs can also be added to the semantic storage, where the respective design process graph can be updated with a new node and edge. In some examples, the above-described process can repeat when an engineer executes an action in the design system 102 with additional data populating the appropriate storage, for instance the knowledge database 110. In some cases, context information associated with a given design is predetermined or prespecified in terms of a vector of requirements associated with the design. The vector can be stored with each node of a given design, as illustrated in FIG. 2. Referring to FIG. 2, requirements 202 can be inserted into a design process 200. In particular, different states 204 can be reached during the design process 200, and different actions and parameters 206 can be implemented so as to reach the different states 204, and ultimately a final state 204a of the system.

[0029] In one example, the representation learner 116 can generate a natural language representation from the schema that is generated by the adaptive schema learning module 114. The natural language representation can represent the state of the system 102 as a document. For example, the tree representation of the state can be transformed to a natural language representation for the purpose of vectorization. The document can then be vectorized using natural language processing. Due to the domain specific nature of the document, in some cases, pre-trained models are fine-tuned to learn appropriate manifold embeddings for the specific design application. Thus, with each embedding step, the representation learner 116 can perform a fine-tuning process such that the manifold learning algorithm can find embeddings that are valid and relevant to the design application under consideration.
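
An illustrative sketch of this state-to-document idea follows. It flattens a tree-like state into a text "document" and vectorizes it with TF-IDF; the description refers to fine-tuned pre-trained embedding models, and TF-IDF is substituted here only to keep the example self-contained.

```python
# Illustrative only: flatten a nested state into a token document and vectorize it.
from sklearn.feature_extraction.text import TfidfVectorizer


def state_to_document(state, prefix=""):
    """Linearize a nested dict state into a space-separated token document."""
    tokens = []
    for key, value in state.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            tokens.extend(state_to_document(value, prefix=f"{path}_").split())
        else:
            tokens.append(f"{path}_{value}")
    return " ".join(tokens)


states = [
    {"extrude": {"depth": 12.0}},
    {"extrude": {"depth": 12.0}, "fillet": {"radius": 1.5}},
]
documents = [state_to_document(s) for s in states]
vectors = TfidfVectorizer().fit_transform(documents)  # one row per design state
print(vectors.shape)
```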

[0030] In another example, given the tree-like representation of the state of the system 102, the representation learner 116 can utilize information propagation using a tree-LSTM encoder. Similar to the natural language encoder, the tree-LSTM can be trained in parallel to the encoding discovery in order to account for the discovery of new states and the potential of change in the design application manifold.

[0031] In various examples, each knowledge state that is extracted can be fed to the adaptive schema learning module 114 and the representation (manifold) learner 116 of the module 106. When a new state is extracted, the representation learner 116 can train a manifold learning algorithm online for a fixed number of iterations (e.g., 10). Thus, the representation learner 116 can generate a vectorized representation that is associated with each state in the associated process graph, such that a machine learning system or algorithm, for instance a machine learning algorithm performed by the knowledge utilization module 108, can retrieve the information during knowledge utilization.

[0032] With respect to the actions, in some cases, an action defines an action type and a set of action parameters. By way of example, the action type can be indexed using an integer representation according to the sequence of observation. Each integer index can be further attributed with the set of action parameters, which can be adaptively learned based on the operations performed. By way of example, in a CAD system, if the extrude operation is performed without any draft angle, the design application 102 might not log any information about draft angles. Consequently, the agent 104 can be unaware of the presence of the draft angle as a possible parameter that can be specified by the designer. As with the adaptation of the state schema, each action type's schema can be adapted based on real-time knowledge gathered by the knowledge refiner 112, with updates made to any previously observed instances of the action type under consideration. Default values such as, for example, NaN or infeasible values, can be utilized for such updates. As with the vectorized state representation, each vectorized action can also be associated with the corresponding entry in the process graph to be used later by the knowledge utilization module 108.
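
The sketch below illustrates the action-schema adaptation just described, using the draft-angle example: action types are indexed in order of observation, and when a new parameter first appears, previously observed instances of that action type are back-filled with a NaN default. Variable and function names are assumptions for this example.

```python
# Illustrative action-schema adaptation: back-fill newly discovered parameters with NaN.
import math

action_type_index = {}   # action type name -> integer index, in observation order
action_instances = {}    # action type name -> list of parameter dicts


def observe_action(action_type, params):
    if action_type not in action_type_index:
        action_type_index[action_type] = len(action_type_index)
        action_instances[action_type] = []
    known = set(params).union(*(set(p) for p in action_instances[action_type]))
    # Back-fill newly discovered parameters on earlier instances with NaN defaults.
    for previous in action_instances[action_type]:
        for key in known - set(previous):
            previous[key] = math.nan
    action_instances[action_type].append({key: params.get(key, math.nan) for key in known})


observe_action("extrude", {"depth": 12.0})
observe_action("extrude", {"depth": 8.0, "draft_angle": 2.5})  # draft angle newly observed
print(action_instances["extrude"])
```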

[0033] As described above, the knowledge extraction and representation module 106 can also extract and consider design context. In an example, the design context can be represented by requirements that are pre-specified by the user. In some cases, the requirements are in natural language. In such cases, the knowledge extraction and representation module 106 applies natural language encoding (e.g., using models such as LSTM, BERT or a TF-IDF embedder) to generate the vectorized representation associated with the specified requirements. These vectorized representations can be appended to each state of the design process in the process chain, so as to incorporate information regarding the context of the design.

[0034] With continuing reference to FIG. 1, the knowledge utilization module 108 can leverage the extracted and vectorized knowledge from the knowledge extraction and representation module 106, so as to train a mapping between the states (which can be augmented with design context) of the design application 102 and the actions performed by the design engineer. When the mapping is learned, the knowledge utilization module 108 can provide real time contextual recommendations to the design engineer. The real-time context recommendations can be provided by request, or in response to certain time or event triggers. By way of example, the autonomous agent 104 detecting that the designer is deviating from a traditional design process may define an event trigger. Additionally, the real-time context recommendations can be provided in the design application 102, so as to define in-product recommendations.

[0035] It is recognized herein that design processes are often inherently Markovian or semi-Markovian in nature, such that decisions affect the state of a given designed product. Thus, the knowledge utilization module 108 can perform or utilize reinforcement learning, imitation learning, and the like to learn contextual relationships between actions and states so as to provide accurate recommendations to a design engineer, thereby enabling the transfer of knowledge from one person to another in an automated and streamlined manner. In some cases, the knowledge transfer is realized through the recommendations that are provided by the autonomous agent 104.

[0036] For example, referring also to FIG. 3, hierarchical decision behavior learning models, for instance a hierarchical model 300, can be trained to learn mappings between design application states and recommended actions. In particular, the knowledge utilization module 108 can include a decision behavior learner 120 configured to learn and generate recommendations, based on the vectorized representations for the states and actions provided by the knowledge extraction and representation module 106. The example hierarchical model 300 is an example conceptual illustration of a probabilistic model that can be used. In some cases, the hierarchical model 300 defines two levels of hierarchy, although it will be understood that models can be generated so as to define an alternative number of levels, and all such models are contemplated as being within the scope of this disclosure. In an example, the hierarchical model 300 can define a first level 302 that can predict action types performed by the user. Based on the predicted action types, the decision behavior learner 120 can identify a second level 304 of the model 300 that predicts one or more parameters associated with the action to be performed.
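
A conceptual sketch of this two-level hierarchy follows, using generic scikit-learn estimators as stand-ins: a first-level classifier predicts the action type from the contextual state vector, and a per-type second-level regressor predicts the action parameters. The disclosure describes imitation learning for the first level; a plain classifier and dummy data are substituted here only to keep the example runnable.

```python
# Conceptual two-level hierarchy: action-type classifier + per-type parameter regressor.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # vectorized contextual states (dummy data)
action_types = rng.integers(0, 3, size=200)   # observed action-type labels
params = rng.normal(size=(200, 2))            # observed action parameters

level_one = LogisticRegression(max_iter=500).fit(X, action_types)
level_two = {t: Ridge().fit(X[action_types == t], params[action_types == t]) for t in range(3)}

state = X[:1]
predicted_type = int(level_one.predict(state)[0])
predicted_params = level_two[predicted_type].predict(state)[0]
print(predicted_type, predicted_params)
```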

[0037] In some cases, parameter prediction models, such as the example model 300, are trained using supervised learning in which the contextual state represents the input parameters, and the action parameters represent the output parameters. Thus, the decision behavior learner 120 can define an action type prediction model that is trained using imitation learning in which the contextual state-action pair is extracted from the process graph that is constructed via the knowledge extraction process performed by the knowledge extraction and representation module 106. In various examples, the models are updated in an online manner such that each time new information is gathered, an updated action policy and parameter mapping is learned.

[0038] Thus, with each update to the semantic storage, for instance in the knowledge database 110, the decision behavior learner 120 can update a context-sensitive model. The model can be built specific to a particular type of design component. To illustrate by way of example, and without limitation, a model can learn all of the bolts within a given system, so as to generate recommendations specific to the design of bolts. In various examples, the decision behavior learner 120 generates recommendations to users. For example, a model can learn design components across any users, for instance all users or a subset of users, that publish data to the database 110.

[0039] Referring again to FIG. 1, the knowledge utilization module 108 can further include a user preference learner 122 configured to learn and generate user preferences, based on the vectorized representations for the states and actions provided by the knowledge extraction and representation module 106. The user preference learner 122 can be trained in parallel with the decision behavior learner 120. The user preference learner 122 can update the model based on the recommendation provided to the user and the corresponding action taken by the user. In various examples, the user preference learner 122 defines a Bayesian user preference learning model that is trained based on the design system states to capture the unobserved and unquantifiable preferences of design engineers. Example user preferences include, without limitation, colors of various objects, positioning of various elements, and the like. In some examples, the user preference learner automatically applies user preferences to improve the user experience in utilization of the design application 102. A sequential Bayesian learning model that represents user preferences as a latent factor can be trained based on a set of non-essential parameters in the design application 102. The non-essential parameters, as indicated above, can refer to those parameters that do not affect the performance of the design.
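
As a minimal sketch (an assumption, not the disclosed model), one non-essential parameter can be treated as a latent user preference with a Gaussian prior that is updated sequentially each time the user sets that parameter; the posterior mean can then be applied automatically as a default.

```python
# Sequential conjugate Gaussian update for one non-essential parameter treated as a
# latent preference. The noise variance and prior are illustrative assumptions.
def update_preference(prior_mean, prior_var, observation, noise_var=1.0):
    """Conjugate Gaussian update: returns the posterior mean and variance."""
    posterior_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    posterior_mean = posterior_var * (prior_mean / prior_var + observation / noise_var)
    return posterior_mean, posterior_var


mean, var = 0.0, 10.0                      # weak prior over, e.g., a preferred offset value
for observed_setting in [4.8, 5.1, 5.0]:   # values the user keeps choosing for the parameter
    mean, var = update_preference(mean, var, observed_setting)
print(round(mean, 2), round(var, 2))       # posterior concentrates near the observed settings
```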

[0040] In some cases, the knowledge utilization module 108 is integrated as a recommendation application within the design application 102, such that the utilization module 108 can request the extraction of the instantaneous state of the design application 102 to generate a vectorized representation. The vectorized representation can then be used by the trained machine learning models in order to generate recommendations of the best possible actions that yield the desired outcome as dictated by the requirements specified. In an example, recommendations are triggered when a user requests a recommendation during a design, for instance during an interactive mode of execution. Based on a request, the agent 104 can automatically generate designs in batch or can automate executions.

[0041] Referring now to FIG. 4, example operations 400 that can be performed by the knowledge extraction and representation module 106 are shown. At 402, a user is tailed or monitored for updates to the state of the design system, for instance the design application 102. Thus, when the user performs an interaction with the design application, the interaction can be monitored, and information associated with the interaction can be extracted. Further, at 404, design requirements can be parsed or otherwise obtained from the file system of the design application 102, so as to identify the context associated with the particular design process. In some examples, an infinite tail can launch the design system 102 with configurations for the data management service that can be used to communicate with a server that defines the autonomous agent module 104. By way of example, when the design system 102 defines a CAD application, the infinite tailing thread can poll the design system 102 for its state periodically, for instance every second, so that the knowledge refiner 112 can create a representation of the state of the system 102 that is processed by the knowledge extraction and representation module 106. By way of another example, when the design system 102 defines product lifecycle management (PLM) software (e.g., Siemens NX), the design system 102 can update its log file each time the engineer performs an action. In either example, an interaction can occur on a client machine that includes the design application 102, and the associated output can include a serialized state of the system with related design context. At 406, the serialized state can be stored in the long-term storage, for instance the knowledge database 110, for retrieval of state and object information. At 408, the serialized state can be passed to the adaptive schema learning module 114 and representation learner 116.
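
A hypothetical sketch of the "infinite tail" polling described above follows: the design application is polled for its state on a fixed interval (every second in the example), and any change is forwarded to the extraction pipeline. The functions read_state and handle_new_state are placeholders for application-specific integrations.

```python
# Hypothetical polling loop: tail the design application state on a fixed interval.
import time


def tail_design_application(read_state, handle_new_state, interval_seconds=1.0, max_polls=None):
    last_state = None
    polls = 0
    while max_polls is None or polls < max_polls:
        state = read_state()                 # e.g., serialize the CAD feature tree
        if state != last_state:
            handle_new_state(state)          # pass to the knowledge refiner / schema learner
            last_state = state
        time.sleep(interval_seconds)
        polls += 1
```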

[0042] With continuing reference to FIG. 4, the adaptive schema learning module 114 can perform a schema extraction routine that leverages a threaded implementation of a tree-isomorphic state difference calculation, so as to compute incremental differences between each object type within the new state and existing instances in the database 130. Thus, at 410, the schema can be updated with these differences so as to define an updated schema of the constituent objects that can be stored in the schema (meta-data) storage database, for instance the knowledge database 110. At 412, these object schemas can be assembled along the tree to generate the schema for the entire state, which can also be stored in the same database (e.g., database 110) under a different collection.

[0043] Having updated the schema, at 414, the representation learner 116 can initiate, in a separate thread, the update to the vectorization of the state by loading pre-saved models or by creating new encoding models for each object type (e.g., origin, parameters) associated with a state of the system 102. In some cases, the representation learner 116 can execute the update to the vectorized representation based on a single batch and single epoch over a prioritized set of samples.

[0044] When a new state is encountered, the representation learner 116 can update the design process graph with a new node and edge, at 416, so as to complete the vectorization of a state. When the vectorization of the state is completed, the node can be populated with the vector representation of the state along with the related contextual information. At 418, having completed the state vectorization process, the representation learner 116 can generate an embedded representation of the action by computing the difference between two adjacent states. In some cases, this is carried out for each triple and a clustering algorithm is initiated. A sequential clustering algorithm, for example, can be executed so as to identify a number of clusters. The edges of the graph can be updated with the action indices. At 420, the knowledge extraction and representation module 106 can execute an action schema extraction in which the parameters of the actions are identified by determining a list of parameters that vary for the cluster of actions, which can be stored in the schema storage database (e.g., knowledge database 110).

[0045] Referring now to FIG. 5, example operations 500 that can be performed by the knowledge utilization module 108 are shown. At 502, the user learner 122 can generate reward metrics for each edge in the design process graph so as to construct a dataset for the decision learner 120. At 504, the action parameters can be obtained from the knowledge extraction and representation module 106, for instance via a context memory 124 of the agent module 104, to train individual action models. The decision learner 120 can perform a behavior learning algorithm that relies on prioritized sampling of the stored data to train an imitation learning model in an incremental and continuous manner. For each action type, the decision learner 120 can also train a regression model in parallel.

[0046] Still referring to FIG. 5, at 506, the knowledge utilization module 108 can receive a recommendation request from a user, via the design application 102. Based on the recommendation request, at 508, the decision learner 120 can retrieve one or more models from the knowledge database 110. At 510, the decision learner 120 can compute or generate a recommendation, for instance based on one or more action parameter models. In particular, based on the predicted action type, the associated parameters can be predicted and returned to the design system 102 as a recommendation. At 512, the knowledge utilization module 108 can provide the recommendation to the design application 102. At 514, the design application 102 can display the recommendation to a user, for instance a design engineer. At 516, based on the decision the user makes in response to the recommendation, the user learner 122 can receive the user preference, compute sample preferences, and update a behavior model. Thus, user learner 122 can take as input the recommendations generated and the action taken by the user. Based on those two parameters, for example, an active update to the models is applied. At 518, the updated model can be stored in the knowledge database 110 for future use.

[0047] As described above, to illustrate further, the knowledge extraction and representation module 106 can identify a schema for the state of the system such that a vectorized representation can be generated for each state. For example, a tree-based representation of the design state can be generated, but alternate state representations are possible due to the extensible nature of the framework, and all such representations are contemplated as being within the scope of this disclosure. States can thus be represented by a tree that stores information about CAD features, which can mathematically be given as: S = ({A}, {T_oi}), where {A} forms the set of attributes associated with the state of the system, and {T_oi} represents a set of tree-structured features associated with the state, forming the first-level branches of the state tree. Each tree-structured feature can further be defined as: T_oi = ({A}, {T_oij}), with {A} and {T_oij} being defined as before.

[0048] With respect to schema extraction, for each state that is stored in the database, a schema can be automatically composed by extracting schemas for the branches of the state tree. A recursive branch processing algorithm can be employed until all the branches of the state tree are processed, employing a minimum edit formulation at each level. Mathematically, given two states, S_i and S_j, the minimum edit formulation can dictate that the schema of either state, i or j, can be written as: s = S_i - S_j, where s represents the schema of the state of the system and the difference between the states is computed as a tree-isomorphic difference. In some cases, the use of the minimum edit identifies the bare minimum information that has to be stored as part of the schema to reconstruct the state uniquely over any given baseline state. Thus, it can minimize the amount of information that has to be stored, enabling the development of a scalable solution. The schema for each branch can be constructed by computing a tree difference between branches of similar type on the other state. A branch hashing algorithm can be performed in order to determine the isomorphic equality of the branch, thereby enabling rapid computation of the differences.
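
An illustrative sketch of branch hashing for such isomorphism checks follows: each branch of the state tree is reduced to an order-insensitive hash, so structurally identical branches on two states can be matched quickly before the tree difference is computed. The hashing scheme is a simplification chosen only for the example.

```python
# Order-insensitive branch hashing used to test isomorphic equality of branches quickly.
import hashlib
import json


def branch_hash(branch):
    """Hash a nested dict branch so that key order does not affect the result."""
    canonical = json.dumps(branch, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


left = {"extrude": {"depth": 12.0, "angle": 0.0}}
right = {"extrude": {"angle": 0.0, "depth": 12.0}}      # same branch, different key order
print(branch_hash(left) == branch_hash(right))           # True: branches match, no diff needed
```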

[0049] Continuing with the above-described tree-based representation example, having generated a schema for each branch and thus, the entire state, an encoded representation of the state can be generated by composing the encoded representation of the branches. In some cases, variational autoencoders are trained incrementally and online to generate the encoded representations of the branches. Single-batch updates on the VAEs can be executed to ensure availability of real-time encodings. In some cases, without being bound by theory, when exposed to a sufficient number of state branches of a given type, the states attain a stable encoding when trained incrementally, provided there is sufficient diversity among the samples. In an example, a weighted sampling routine is performed to ensure diversity in the branches. The weighting can be dictated by the reconstruction error akin to that of prioritized experience replay. The generated encoding of the branches can be accumulated to generate the parent branch's encoded representation. In some examples, as a final step in the vectorization of the state, the generated encoding for the state can be appended to the vectorized representation of the requirement.
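
The following is a hedged sketch of the reconstruction-error-weighted sampling mentioned above (akin to prioritized experience replay). The encoder itself is omitted; the reconstruction_errors values are placeholders standing in for errors from the most recent VAE pass.

```python
# Prioritized sampling sketch: branches the encoder reconstructs poorly are sampled
# more often for the next single-batch update. Values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
reconstruction_errors = np.array([0.02, 0.45, 0.10, 0.80, 0.05])  # one entry per stored branch
weights = reconstruction_errors / reconstruction_errors.sum()

batch_indices = rng.choice(len(weights), size=3, replace=False, p=weights)
print(batch_indices)
```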

[0050] Turning now to action vectorization, an action discovery routine can be implemented so as to automatically extract actions performed by a user from a sequence of states. In some cases, action discovery can be characterized by encoding generation; cluster identification; and cluster schema extraction. The action encoding can be realized in a similar manner as the state encoding. An action can be the cause for the transition of the design system from one state to another. Thus, by computing the difference between any two sequential states, the action can be identified. Thus, given two sequential states, S_i and S_i+1, the action a_(S_i, S_i+1) is given as: a_(S_i, S_i+1) = S_i+1 - S_i. In order to generate the embedded representation of the action, the encodings can be accumulated along the difference tree to generate the action encoding.

[0051] Having generated an embedded representation, a k-means clustering algorithm can be executed to identify the number of clusters, so as to identify the number of actions, thereby automatically discovering the actions performed by the user. In some examples, the Davies-Bouldin index (DBI) can identify the best number of clusters in the encoded space to determine the number of actions stored in the database. In an example, if the number of triples stored in the graph is |N|, then the number of actions can be N_A = |N|. Thus, a sequential clustering can be executed with different numbers of clusters to compute the associated DBI score for each, picking the cluster count with the lowest score as the number of actions.
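
A sketch of this action-discovery step follows: action encodings are taken as differences between consecutive state encodings, k-means is run for a range of candidate cluster counts, and the Davies-Bouldin index selects the count with the lowest score. The synthetic encodings below are placeholders.

```python
# Action discovery sketch: difference encodings + k-means swept over cluster counts,
# with the Davies-Bouldin index choosing the number of action types.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
state_encodings = rng.normal(size=(40, 6))                 # sequential state embeddings
action_encodings = np.diff(state_encodings, axis=0)        # a_(S_i, S_i+1) = S_i+1 - S_i

scores = {}
for n_clusters in range(2, 10):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(action_encodings)
    scores[n_clusters] = davies_bouldin_score(action_encodings, labels)

n_actions = min(scores, key=scores.get)                    # lowest DBI -> number of action types
print(n_actions, round(scores[n_actions], 3))
```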

[0052] Continuing with the example, once the number of actions is determined, the actions belonging to the cluster can be gathered to extract the schema for the action. Actions can include an action type and the parameters of the action, which can differentiate each of the action types. For example, an extrusion operation can be parameterized by the length of the extrusion, the sweep angle, and the resultant body operation (union, intersection, etc.). Thus, the recommendation module (e.g., knowledge utilization module 108) can predict not only the type of action that is to be performed, but also its parameters. The attributes of the actions within a cluster that change in value across the different instances of the action can be evaluated, so as to result in a flat list of attributes that are used as the parameters of the action.

[0053] With respect to training the knowledge utilization module 108, in particular a decision behavior learning model (e.g., decision learner 120), the model can define a first level that predicts the type of action to be performed, and a second level that predicts the parameters of the action. The number of levels of the hierarchy and the complexity of the model can change with different model extensions, and all such models are contemplated as being within the scope of this disclosure. In an example, the action type predictor can utilize an imitation learning portion of the Deep Q-Learning from Demonstration algorithm. For each action type, a feed forward regression model that predicts the parameters of the actions can be trained. The mean-squared error loss metric can be used to train each parameter predictor. In the case of training the imitation learning agent, the reward can be formulated by processing the structure of the stored design process graph. As an example, and without limitation, transitions that result in a backtrack of state (the state of the system is reverted to a previous setting) can be penalized with a reward of -1, and others are given a reward of +1. Such rewards can minimize the number of actions that are to be taken to reach the final state. As with the generation of the state embedding, both the imitation learning agent and the parameter prediction model can be trained online and incrementally as data is populated in the database. Prioritized sampling can be used to prevent bias in the trained models.
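
A small sketch of the reward shaping just described follows: a transition that reverts the design to a previously visited state (a backtrack) is penalized with -1, while every other transition earns +1. The state identifiers are illustrative.

```python
# Reward shaping sketch: backtracking transitions get -1, all other transitions get +1.
def assign_rewards(visited_state_sequence):
    rewards = []
    seen = set()
    for previous, current in zip(visited_state_sequence, visited_state_sequence[1:]):
        seen.add(previous)
        rewards.append(-1.0 if current in seen else +1.0)
    return rewards


print(assign_rewards(["s0", "s1", "s2", "s1", "s3"]))  # [1.0, 1.0, -1.0, 1.0]
```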

[0054] As described above, the user learner 122 can provide an active learning perspective to the trained models. When a recommendation is provided to the engineer, the associated decision made by the engineer is tracked. If the designer performs an operation that is different from the generated recommendation, an immediate update to the model can be enforced. Active learning loss coupled with reinforced sampling can be leveraged to guide the model toward the engineer's most recent choice. The sampling weights can be updated automatically due to the higher reconstruction error for the new sample. This active learning routine can be applied to the imitation learning agent and the parameter prediction models of the decision learner 120, and thus to the knowledge utilization module 108.
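
An illustrative sketch of this active-learning trigger follows: when the action the engineer actually takes differs from the recommendation, the new sample is given an elevated sampling weight so that the next incremental update is pulled toward the engineer's most recent choice. The weight values and the update hook are assumptions made only for this example.

```python
# Active-learning trigger sketch: deviations from the recommendation are prioritized.
def record_feedback(recommended_action, executed_action, sample_buffer, boost=5.0):
    deviated = executed_action != recommended_action
    sample_buffer.append({
        "action": executed_action,
        "weight": boost if deviated else 1.0,   # deviations are weighted up in the next update
    })
    return deviated


buffer = []
if record_feedback({"type": "extrude"}, {"type": "revolve"}, buffer):
    pass  # an immediate single-batch model update would be enforced here
print(buffer)
```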

[0055] FIG. 6 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented. A computing environment 600 includes a computer system 610 that may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610. The computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information.

[0056] The processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication therebetween. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

[0057] The system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610. The system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.

[0058] Continuing with reference to FIG. 6, the computer system 610 may also include a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620. The system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random access memory (RAM) 632. The RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620. A basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631. RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620. System memory 630 may additionally include, for example, operating system 634, application modules 635, and other program modules 636. Application modules 635 may include the aforementioned modules described with respect to FIG. 1 and may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.

[0059] The operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640. The operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.

[0060] The computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 641, 642 may be external to the computer system 610.

[0061] The computer system 610 may include a user input interface or graphical user interface (GUI) 661, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620.

[0062] The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642. The magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security. The processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0063] As stated above, the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642. Non-limiting examples of volatile media include dynamic memory, such as system memory 630. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0064] Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.

[0065] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions.

[0066] The computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680. The network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671. Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610. When used in a networking environment, computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.

[0067] Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680). The network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.

[0068] It should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 6 as being stored in the system memory 630 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 610, the remote device 680, and/or hosted on other computing device(s) accessible via one or more of the network(s) 671, may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 6 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 6 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program modules depicted in FIG. 6 may be implemented, at least partially, in hardware and/or firmware across any number of devices.

[0069] It should further be appreciated that the computer system 610 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 610 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 630, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.

[0070] Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”

[0071] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.