Title:
ML-BASED DISCREPANCY SURROGATE FOR DIGITAL TWIN SIMULATION
Document Type and Number:
WIPO Patent Application WO/2022/212313
Kind Code:
A1
Abstract:
A computer-implemented method and system establishes a digital twin simulation for a physical system. The method includes running a simulation on coarse low-fidelity input data to simulate the physical system, in a machine learning network, predicting a correction field representative of an error associated with a solution of the coarse low-fidelity simulation, and summing the correction field and the solution of the coarse low-fidelity simulation to produce a predicted solution of a high-fidelity simulation of the physical system. The machine learning network may be trained over a pre-determined number of timesteps and may be trained online while a simulation is running. Training the machine learning network may be performed by running a fine high-fidelity simulation of the physical system, running the coarse low-fidelity simulation, computing a difference between the fine high-fidelity simulation and the coarse low-fidelity simulation, and providing the computed difference to the machine learning network as training data.

Inventors:
MIRABELLA LUCIA (US)
CHEN WEI (US)
CHI HENG (US)
XU HUIJUAN (US)
ARVANITIS ELENA (US)
BROWN ALISTAIR (GB)
Application Number:
PCT/US2022/022260
Publication Date:
October 06, 2022
Filing Date:
March 29, 2022
Assignee:
SIEMENS CORP (US)
International Classes:
G06F30/27
Other References:
HANNA BOTROS N ET AL: "Coarse-Grid Computational Fluid Dynamic (CG-CFD) Error Prediction using Machine Learning", JOURNAL OF FLUIDS ENGINEERING, 25 October 2017 (2017-10-25), XP055935248, Retrieved from the Internet [retrieved on 20220624]
HAN SEONGJI ET AL: "A DNN-based data-driven modeling employing coarse sample data for real-time flexible multibody dynamics simulations", COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, NORTH-HOLLAND, AMSTERDAM, NL, vol. 373, 21 October 2020 (2020-10-21), XP086401415, ISSN: 0045-7825, [retrieved on 20201021], DOI: 10.1016/J.CMA.2020.113480
VIANA FELIPE A C ET AL: "Estimating model inadequacy in ordinary differential equations with physics-informed neural networks", COMPUTERS AND STRUCTURES, PERGAMON PRESS, GB, vol. 245, 15 December 2020 (2020-12-15), XP086472199, ISSN: 0045-7949, [retrieved on 20201215], DOI: 10.1016/J.COMPSTRUC.2020.106458
KONTAXOGLOU ANASTASIOS ET AL: "Towards a Digital Twin Enabled Multi-Fidelity Framework for Small Satellites", PROCEEDINGS OF THE EUROPEAN CONFERENCE OF THE PHM SOCIETY 2021, 29 June 2021 (2021-06-29), XP055936476, Retrieved from the Internet
Attorney, Agent or Firm:
BRINK JR., John D. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for establishing a digital twin simulation for a physical system comprising: running a simulation on coarse low-fidelity input data to simulate the physical system; in a machine learning network, predicting a correction field representative of an error associated with a solution of the coarse low-fidelity simulation; and summing the correction field and the solution of the coarse low-fidelity simulation to produce a predicted solution of a high-fidelity simulation of the physical system.

2. The computer-implemented method of Claim 1, further comprising: training the machine learning network over a pre-determined number of timesteps.

3. The computer-implemented method of Claim 2, wherein the machine learning network is trained online while a simulation is running.

4. The computer-implemented method of Claim 2, wherein the training of the machine learning network comprises: running a fine high-fidelity simulation of the physical system; running the coarse low-fidelity simulation; computing a difference between the fine high-fidelity simulation and the coarse low-fidelity simulation; and providing the computed difference to the machine learning network as training data.

5. The computer-implemented method of Claim 1, wherein the machine learning network is a recurrent neural network.

6. The computer-implemented method of Claim 5, wherein input to train the recurrent neural network comprises: a vector containing system data values for a plurality of locations in the physical system; and a time value corresponding to the vector of system data values.

7. The computer-implemented method of Claim 1, further comprising: generating the coarse low-fidelity input data from sensor data received from the physical system.

8. The computer-implemented method of Claim 7, wherein the coarse low-fidelity input data is spatially sparse and temporally dense data.

9. The computer-implemented method of Claim 1, further comprising: training the machine learning network using historical system data from the physical system.

10. The computer-implemented method of Claim 1, further comprising: during operation of the physical system, periodically updating the machine learning network using current system data from the system to update training of the machine learning network.

11. The computer-implemented method of Claim 2, further comprising: continuously updating training of the machine learning network during operation of the physical system using newly collected data from the physical system.

12. A system for simulating a complex physical system comprising: a computer processor in communication with a memory, the memory having stored thereon instructions that when executed by the computer processor cause the processor to: run a simulation on coarse low-fidelity input data to simulate the physical system; in a machine learning network, predict a correction field representative of an error associated with a solution of the coarse low-fidelity simulation; and sum the correction field and the solution of the coarse low-fidelity simulation to produce a predicted solution of a high-fidelity simulation of the physical system.

13. The system of Claim 12, further comprising instructions that when executed by the computer processor cause the computer processor to: train the machine learning network over a pre-determined number of timesteps.

14. The system of Claim 12, further comprising instructions that when executed by the computer processor cause the computer processor to: train the machine learning network online while a simulation is running.

15. The system of Claim 12, further comprising instructions that when executed by the computer processor cause the computer processor to: run a fine high-fidelity simulation of the physical system; run the coarse low-fidelity simulation; compute a difference between the fine high-fidelity simulation and the coarse low-fidelity simulation; and provide the computed difference to the machine learning network as training data.

16. The system of Claim 12, wherein the machine learning network is a recurrent neural network.

17. The system of Claim 16, further comprising instructions that when executed by the computer processor cause the computer processor to: input a vector containing system data values for a plurality of locations in the physical system to the machine learning network; and input to the machine learning network, a time value corresponding to the vector of system data values.

18. The system of Claim 12, further comprising instructions that when executed by the computer processor cause the computer processor to: generate the coarse low-fidelity input data from sensor data received from the physical system.

19. The system of Claim 12, wherein the coarse low-fidelity input data is spatially sparse and temporally dense data.

20. The system of Claim 12, further comprising instructions that when executed by the computer processor cause the computer processor to: train the machine learning network using historical system data from the physical system.

Description:
ML-BASED DISCREPANCY SURROGATE FOR DIGITAL TWIN SIMULATION

TECHNICAL FIELD

[0001] This application relates to simulation of systems. More particularly, this application relates to the design and implementation of digital twins for real world systems.

BACKGROUND

[0002] Establishing digital twins (DTs) of physical systems is an important consideration for modern engineering systems. For example, having a digital twin for a building allows engineers to better monitor the building’s operational conditions as well as better predict and prevent catastrophic damages during extreme hazards.

[0003] However, construction of a digital twin is a significant challenge due to computational requirements based on several factors. Establishing a useful digital twin requires intensive computational resources. For this reason, designing a digital twin creates a tradeoff between accuracy and efficiency. This tradeoff must be considered carefully and requires deep domain-specific knowledge to establish a balance between accuracy and computational economy. In one respect, sophisticated computational models may be configured to accurately model physical systems, but this accuracy comes at high computational costs, which prevents highly accurate models from running in real time. Further, although some very fast simulation methodologies exist that approach real-time processing, this speed is achieved at the expense of accuracy. Typically, these high-speed methods sacrifice the level of physics represented through computation. As a result, these methods may not be reliable for monitoring and predicting states of their physical counterparts. Other simulation techniques augmented with machine learning (ML) models are not well suited for real-world systems, which typically provide sparse data for training ML models.

[0004] There are primarily three directions that address the problem of reducing the computational time for high-fidelity simulations: reduced-order modeling (ROM), physics-informed neural networks (PINNs), and data-driven approaches. ROM reduces the computational cost of high-fidelity simulation by trading off the accuracy of the solution. PINNs obtain the solution by training a neural network to approximate the primal field that may minimize the residual of the governing equations. However, these results are highly dependent on the hyperparameter setting of the neural network and the optimization process. Also, based on observations, although PINNs demonstrate promising results on academic toy problems, PINNs do not perform well on complex real-world use cases. Data-driven approaches, either with or without incorporating physics, usually require a considerable amount of prior data which must be collected in advance. These approaches cannot address the case where the goal is to accelerate simulation when no prior data are available. Additional potential approaches to solve the abovementioned problem include resorting to hardware-accelerated solutions (such as employing GPUs) or devising specific numerical schemes. These two approaches contribute some improvement but do not provide a stand-alone solution. Solutions providing alternatives and improvements to existing methodologies are desired.

SUMMARY

[0005] Embodiments described in this disclosure include a computer-implemented method for establishing a digital twin simulation for a physical system. The method includes running a simulation on coarse low-fidelity input data to simulate the physical system, in a machine learning network, predicting a correction field representative of an error associated with a solution of the coarse low-fidelity simulation, and summing the correction field and the solution of the coarse low-fidelity simulation to produce a predicted solution of a high-fidelity simulation of the physical system. The machine learning network may be trained over a pre-determined number of timesteps and may be trained online while a simulation is running. Training the machine learning network may be performed by running a fine high-fidelity simulation of the physical system, running the coarse low-fidelity simulation, computing a difference between the fine high-fidelity simulation and the coarse low-fidelity simulation, and providing the computed difference to the machine learning network as training data. According to embodiments, the machine learning network is a recurrent neural network. Inputs to the machine learning network may include a vector containing system data values for a plurality of locations in the physical system and a time value corresponding to the vector of system data values. The coarse low-fidelity input data may be acquired from sensor data received from the physical system, which may be spatially sparse and temporally dense data. According to some embodiments, the machine learning network may use historical system data from the physical system as training data. During operation of the physical system, the machine learning network may be periodically updated using current system data from the system to update training of the machine learning network. Accordingly, the machine learning network may be continuously updated during operation of the physical system using newly collected data from the physical system.

[0006] A system for simulating a complex physical system includes a computer processor in communication with a memory storing instructions that when executed by the computer processor cause the processor to run a simulation on coarse low-fidelity input data to simulate the physical system, in a machine learning network, predict a correction field representative of an error associated with a solution of the coarse low-fidelity simulation, and sum the correction field and the solution of the coarse low-fidelity simulation to produce a predicted solution of a high-fidelity simulation of the physical system. The memory may include instructions that when executed by the computer processor cause the computer processor to train the machine learning network over a pre-determined number of timesteps. The instructions may cause the computer processor to train the machine learning network online while a simulation is running. To train the machine learning network, the computer processor may be configured to run a fine high-fidelity simulation of the physical system, run the coarse low-fidelity simulation, and compute a difference between the fine high-fidelity simulation and the coarse low-fidelity simulation. The computed difference is provided to the machine learning network as training data.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0008] FIG. 1 is a block diagram for utilizing a hybrid simulation model according to aspects of embodiments described in this disclosure.

[0009] FIG. 2 is a timeline diagram for establishing a digital twin based on a hybrid simulation model according to aspects of embodiments described in this disclosure.

[0010] FIG. 3 is a timeline showing the application of a hybrid simulation model according to aspects of embodiments described in this disclosure.

[0011] FIG. 4 is an illustration of a neural network that may be used to implement hybrid simulation models according to aspects of embodiments described in this disclosure.

[0012] FIG. 5 is a process flow diagram for a method of training and using a hybrid simulation model according to aspects of embodiments described in this disclosure.

[0013] FIG. 6 is a block diagram of a computer-based system that may be used to implement aspects of embodiments of a hybrid simulation model described in this disclosure.

DETAILED DESCRIPTION

[0014] An efficient computational approach for establishing a digital twin using a hybrid approach that combines conventional simulation techniques and machine learning will now be described. A multi-fidelity model is presented that treats the solution of a coarse-scale simulation as low-fidelity data and physical data from a physical system as high-fidelity data. To bridge the fidelity gap, a machine learning (ML) model is trained to learn a corrector field that upgrades the low-fidelity data to produce a predicted fine-fidelity solution. A general conceptual framework will be described that is adaptable to many real-world applications including, but not limited to, a digital twin for building infrastructure, a digital twin for mechanical components, and a digital twin of biological systems. Conventional solutions for bridging the gap between low-fidelity and high-fidelity data have concentrated on using highly accurate simulations as the high-fidelity model rather than data from the physical system via sensor measurements. These types of data are intrinsically different in that high-fidelity simulation data is typically dense in space but sparse in time, making running a high-fidelity simulation computationally expensive. In contrast, physical system data measurements are typically dense in time but sparse in space, as measurements cannot be observed at some physical locations in the system.

[0015] The problem of high computational cost/time of transient 3D simulations is mitigated by leveraging the advantages provided by Artificial Intelligence (AI) and data-driven approaches and focusing on the following aspects:

[0016] 1) Developing an algorithm that combines numerical simulation-based techniques and a Machine Learning (ML) model to accelerate a transient numerical simulation. This is accomplished by reducing the number of time-steps where the full solution of the numerical model is needed and further utilizing hybrid simulation and ML-based surrogate models in the prediction;

[0017] 2) Limiting or eliminating the need for prior data acquisition for the execution of the developed algorithm, leveraging “online” training while the simulation progresses;

[0018] 3) Certain embodiments were conceived and tested on a 3D aerodynamic case, but the methods may be adapted to any type of transient simulation.

[0019] FIG. 1 is a block diagram depicting an overall framework 100 for performing simulations using a hybrid simulation model according to aspects of embodiments described in this disclosure. Initially, a training process involving the first N timesteps is implemented 101. It is determined whether the training period is in progress or, alternatively, whether the hybrid model may be applied 103. If still training, the decision to apply a hybrid model 103 is "NO" 105 and a full simulation 107 and a simplified model 109 (e.g., a machine learning surrogate model or a coarse-grained simulation) are solved in parallel for the first N timesteps of the transient process. The difference in fidelity between the full model 107 and the simplified model 109 produces a difference in the outputs of the two models, reflecting the difference in fidelity of the inputs to the two models. The difference is used to train a machine learning model 120 to learn from those two simulation results (across a number of timesteps, N) how to correct the error of the simplified model 109, and the timestep is advanced 111. As the ML model serves to correct the error of the simplified model 109, this machine learning model will be referred to as the correction model 125. It is then determined whether all the timesteps in the present simulation have been performed 113. If so, the process ends 170. If not 115, it is again determined whether the next timestep is a training step or whether training is complete. When training is completed, the next timestep will apply the hybrid model 121. In the hybrid model simulation process 120 over the next M timesteps, the simplified model 109 is continually used to produce a low-fidelity solution. The trained correction model 125 is combined with the low-fidelity solution from the simplified model 109 to reduce the error produced by the low-fidelity data available to the simplified model 109. By using this correction model 125 and the simplified model 109, running the expensive full simulation is avoided for the M timesteps following the training steps. After each hybrid simulation step 120, the timestep is advanced 111 and it is determined whether the end of the simulation has been reached 113. Once all simulation timesteps are complete, the process ends 170.
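
By way of illustration only, the following Python sketch outlines the alternating training and hybrid phases of the framework 100 of FIG. 1. The helper callables run_full_step and run_simplified_step and the correction_model object (with fit and predict methods) are hypothetical placeholders, not part of this disclosure, standing in for the full simulation 107, the simplified model 109, and the correction model 125.

def run_hybrid_simulation(n_train_steps, n_total_steps,
                          run_full_step, run_simplified_step, correction_model):
    # Hypothetical sketch of the FIG. 1 loop: train on the first N timesteps,
    # then run only the simplified model plus the learned correction.
    # Solutions are assumed to be numpy arrays of equal shape.
    training_pairs = []      # (coarse solution, fine - coarse error) samples
    predictions = []
    for step in range(n_total_steps):
        coarse = run_simplified_step(step)            # simplified model 109
        if step < n_train_steps:                      # training phase (first N steps)
            fine = run_full_step(step)                # expensive full simulation 107
            training_pairs.append((coarse, fine - coarse))
            predictions.append(fine)
        else:                                         # hybrid phase 120
            if step == n_train_steps:
                correction_model.fit(training_pairs)  # train correction model 125
            correction = correction_model.predict(coarse, step)
            predictions.append(coarse + correction)   # corrected low-fidelity solution
    return predictions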

[0020] The purpose of using a simplified model 109 is to reduce the computational cost of the full simulation 107 while retaining a connection with the physics-based behavior of the system or other process. Some options for implementing the simplified model in this workflow may include: 1) physics-informed neural networks (PINNs), 2) coarse-grained (using a coarse mesh) simulation via the STAR-CCM+ simulation solver 140 available from Siemens Industry Software of Plano, Texas, or another physics-based simulation solver, 3) ML-based interpolation from the coarse-grained simulation onto a fine mesh, and 4) a nearest neighbor interpolation approach from the coarse-grained simulation, as available via the STAR-CCM+ software 140. In one embodiment, the STAR-CCM+ nearest neighbor interpolation from the coarse-grained simulation provides good performance in terms of prediction accuracy. Specifically, this step may be achieved by running STAR-CCM+ 140 on a coarse mesh obtained using Adaptive Mesh Refinement (AMR) and interpolating this solution onto the fine mesh using STAR-CCM+.
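
As one possible concrete realization of option 4) above, and only as a sketch, nearest neighbor interpolation from a coarse mesh onto a fine mesh can be written with a k-d tree. The example below uses scipy rather than the STAR-CCM+ interpolator referenced in the disclosure, and the array layout is an assumption.

from scipy.spatial import cKDTree

def nearest_neighbor_interpolate(coarse_nodes, coarse_values, fine_nodes):
    # coarse_nodes:  (n_coarse, 3) coordinates of the coarse mesh nodes
    # coarse_values: (n_coarse,)   solution values on the coarse mesh
    # fine_nodes:    (n_fine, 3)   coordinates of the fine mesh nodes
    tree = cKDTree(coarse_nodes)
    _, nearest = tree.query(fine_nodes)      # closest coarse node for each fine node
    return coarse_values[nearest]            # coarse solution mapped onto the fine mesh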

[0021] However, experimental data show that by itself the simplified model cannot achieve sufficient accuracy (relative errors as high as 10%~70%). Therefore, a correction model 125 is needed to bridge the gap between the accuracy of the simplified solution and the fine simulation’s solution.

[0022] When addressing a transient simulation problem, in some embodiments the preferred machine learning model selected to predict such a solution is a recurrent neural network (RNN) 150. The RNN 150 predicts the solution at the current timestep based on solutions of previous timesteps. Therefore, an RNN captures both the long-term and short-term behavior of the solution over the time domain. With respect to the correction model 125, instead of predicting the fine solution itself, the RNN 150 is used to predict the error (or the difference) between the fine and the coarse solutions.

[0023] FIG. 2 is a timeline showing the architecture utilizing the RNN 150. The input data to the RNN 150a-150e is the difference between the fine solution 210 and the coarse solution 220 at timesteps t to t+N-1, where N is the number of timesteps used for training 250. In a particular embodiment, the coarse solution 220 refers to the STAR-CCM+ interpolation on the coarse-grained simulation. The RNN 150 predicts the difference 211 between the fine solution 210 and the coarse solution 220 at timestep t+N (i.e., following the training phase 250 of the RNN 150). During the training phase 250, timesteps t=1000 to t=1004 (251-259) occur. At timesteps 251-259, both the fine simulation solution 210 and the coarse simulation solution 220 are computed. At each timestep 251-259 of the training phase 250, the difference 211 is determined and provided to the RNN 150 as training data at each timestep 150a-150e.

[0024] When the training phase 250 is complete, only the coarse simulation 220 is performed to produce a coarse data simulation solution at t=1005 (221). Based on the trained RNN 150, an estimate of the correction error at t=1005 (239) is produced by the RNN 150. The predicted correction 239 and the solution from the coarse data simulation 221 are summed 240. The resulting sum is the final prediction for the fine data simulation solution 241 at t=1005. While there exist different ways of using the RNN 150 to predict the fine data simulation solution, this technique is preferred, having shown improved performance compared to other techniques. Nevertheless, other techniques using the RNN 150 to produce a predicted fine data simulation solution may be used and will fall within the scope of embodiments described in this disclosure.
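
A minimal PyTorch sketch of this use of the RNN 150 is given below, assuming the fine-minus-coarse difference fields have been flattened into vectors. The class and function names and the training hyperparameters are illustrative assumptions, not prescribed by the disclosure.

import torch
import torch.nn as nn

class CorrectionRNN(nn.Module):
    # Predicts the fine-minus-coarse correction at the next timestep from the
    # corrections observed at previous timesteps (cf. RNN 150 in FIG. 2).
    def __init__(self, field_size, hidden_size=128):
        super().__init__()
        self.rnn = nn.GRU(field_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, field_size)

    def forward(self, diff_history):            # (batch, n_steps, field_size)
        out, _ = self.rnn(diff_history)
        return self.head(out[:, -1, :])         # correction at the following timestep

def train_and_correct(diff_history, coarse_next, epochs=200):
    # diff_history: (1, N, field_size) tensor of differences 211 from the
    # training phase 250; coarse_next: (1, field_size) coarse solution 221.
    model = CorrectionRNN(field_size=diff_history.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    inputs, target = diff_history[:, :-1, :], diff_history[:, -1, :]
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), target)
        loss.backward()
        opt.step()
    with torch.no_grad():
        predicted_correction = model(diff_history)   # estimated correction 239
    return coarse_next + predicted_correction        # predicted fine solution 241 (sum 240)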

[0025] Embodiments of hybrid simulation models described herein may assume no prior data (e.g., from other full simulation runs) are available. Embodiments herein may use online data to provide the training data for the RNN 150. This represents an improvement over most data-driven approaches. As a machine learning model is used to correct the simplified model simulation, embodiments of this disclosure lead to simulations with higher accuracy compared to approaches like ROM or PINNs.

[0026] FIG. 3 is a timeline diagram illustrating the overall concept of a proposed hybrid simulation system and method. Sparse sensory data 330 are observed from the physical system 310 throughout the system life cycle 311. Those data 330 are fed to the digital twin 320. The digital twin 320 comprises a fast simulation solver 323 and a spatial-temporal corrector field 321 running over the DT’s life cycle 325. The fast simulation solver 323 will be extremely efficient, achieving close-to-real-time performance, but this speed comes at the cost of the solver’s accuracy. The spatial-temporal corrector field 321, on the other hand, will be learned from the sensory data 330 obtained from the physical system 310 to correct the results of the fast simulation solver 323 to better match the actual performance of the physical system 310.

[0027] Typically, the knowledge of the physical system is in the form of data coming from sensors at selected locations of the physical system. Those data reflect local states of aspects of the physical system u(x_i, t_j), e.g., temperature, strain, or acceleration fields at selected locations x_i, i = 1, ..., N_x, and selected times t_j, j = 1, ..., N_t. Those data may be dense in time (e.g., real-time monitoring), but they often are sparse in space and do not reveal the complete spatial response of the physical system.
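
For illustration only, such spatially sparse but temporally dense observations might be held in simple arrays as sketched below; the sensor count, time span, and sampled field are arbitrary assumptions.

import numpy as np

n_sensors, n_times = 5, 10_000                     # sparse in space, dense in time
sensor_locations = np.random.rand(n_sensors, 3)    # x_i, i = 1, ..., N_x
sample_times = np.linspace(0.0, 3600.0, n_times)   # t_j, j = 1, ..., N_t
readings = np.zeros((n_times, n_sensors))          # u(x_i, t_j), e.g., temperature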

[0028] Classical computational simulation methods can capture most physics quite accurately but require high computational cost, which makes them unsuitable for DTs of evolving physical systems. Fast simulators, for example, the ones used in the gaming industry, can achieve close-to-real-time speed with state-of-the-art computing hardware, but cannot capture the physics fully. As a result, the prediction of behavior from the fast simulators, denoted as û(x, t), may deviate from the true behavior significantly and is unreliable when taken alone.

[0029] Making use of the physical observation data, machine learning may be used to learn a corrector field that corrects the fast simulation solution to better match the true behavior of the physical system. Mathematically, the learned corrector field, denoted as d(x, t), is combined with the fast simulation solution û(x, t) to better express the digital twin prediction of the physical response according to: u(x, t) = û(x, t) + d(x, t).
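
As a purely illustrative numerical example (the values here are assumed, not taken from the disclosure): if at a given location and time the fast simulation solver predicts û(x, t) = 20.0 °C and the learned corrector field gives d(x, t) = 1.5 °C, the digital twin prediction of the physical response is u(x, t) = 20.0 + 1.5 = 21.5 °C.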

[0030] The question arises of how to obtain the corrector field d(x, t). In some embodiments, developments in deep learning may be used to allow a deep neural network to learn this corrector field from physical data. Leveraging the universal function approximation capability of the deep neural network, the following deep learning architecture may be used according to some embodiments of this disclosure.

[0031] FIG. 4 is an illustration of an exemplary deep neural network 400 for predicting an error field 221 from an input location 401 and time 403. The input to the deep neural network 400 is the location vector x 401 and the time variable t 403, and the output of the deep neural network is the corrector field 221 at the given location x and time t. To train this neural network, the difference between the observed behavior and the fast simulation-predicted response of the physical system, u(x_i, t_j) - û(x_i, t_j), at location x_i and time t_j, is provided to the deep neural network 400 as training data. Bias units 405, 407 provide each layer of the neural network with a bias term that enables the weights associated with the constant portion of a linear transformation to be learned by the neural network.
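
A minimal PyTorch sketch of such a corrector network, under assumed layer widths and with an (x, y, z, t) input in the spirit of FIG. 4, is shown below. The names CorrectorField and train_corrector are illustrative assumptions, and the bias terms of each linear layer play the role of the bias units 405, 407.

import torch
import torch.nn as nn

class CorrectorField(nn.Module):
    # Maps a location x = (x, y, z) and time t to the corrector value d(x, t).
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(            # bias=True by default (cf. bias units 405, 407)
            nn.Linear(4, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def train_corrector(sensor_xyz, sensor_t, observed, fast_sim, epochs=500):
    # sensor_xyz: (n, 3); sensor_t, observed, fast_sim: (n, 1) tensors.
    # The training target is the discrepancy u(x_i, t_j) - û(x_i, t_j).
    model = CorrectorField()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    target = observed - fast_sim
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(sensor_xyz, sensor_t), target)
        loss.backward()
        opt.step()
    return model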

[0032] Notice that, typically, the observation data are dense in time but sparse in location. Thus, to be able to train an accurate predictor field, the positions x where the sensors are placed can be optimized to maximize the prediction capability of the deep neural network.

[0033] Referring now to FIG. 5, a process flow diagram for a method of constructing and using a hybrid simulation model digital twin is shown. To start 501, a training phase is defined. The training phase includes a pre-determined number of timesteps. During training, a fine high-fidelity data simulation is run along with a coarse, low-fidelity data simulation. The fine data simulation is run 503 along with a coarse data simulation 505. The fine data simulation 503 and the coarse data simulation 505 may be run simultaneously. The fine data simulation is computationally expensive but will provide a solution prediction that is truer to the system being simulated than the coarse data simulation 505, which executes faster but is not as accurate. During the training phase, the difference between the fine data simulation solution and the coarse data simulation solution is computed 507 and the computed difference is provided to a machine learning network as training data 509. The machine learning network is trained to predict an error field representative of the inaccuracy of the coarse data simulation solution.

[0034] If the training phase is complete 513, that is, the pre-determined number of training timesteps have elapsed 521, a coarse data simulation is run 505 but the fine data simulation is not. Along with the coarse data simulation 505, a predicted error field representative of the inaccuracy of the coarse simulation is calculated 523. The predicted error field is summed with the solution from the coarse data simulation to produce a predicted solution for a fine data simulation, and the process ends 527.

[0035] To establish the online DT, the deep neural network must be initially trained before it may be used to make a prediction. Further, the generalization capability of the trained neural network is another consideration. To address those challenges, the history data of the physical system may be used exclusively to train the deep neural network. In this way, the constructed digital twin will be specific to that particular physical system. More specifically, consider a desire to establish a digital twin to predict the entire life cycle performance of a building. Sensor data from the building, as well as the fast simulation data, obtained from, say, the first 1% of its lifetime, are taken as initial training data to train a deep neural network which may predict the building’s behavior for the remaining 99% of its lifetime. Additionally, new information generated during the operation stage of that building may be incorporated into the hybrid simulation model by continuously updating (with a certain frequency) the corrector field during the prediction stage with newly collected data. The update frequency (i.e., time interval) may be determined such that the resulting digital twin achieves a good balance between real-time performance and prediction accuracy. In other words, the time interval should not be so long that the corrector field becomes inaccurate due to missing information, nor so short that the training takes too long and impacts the efficiency of the digital twin model.

[0036] The embodiments described in this disclosure present a balance between accuracy and speed. They are faster than conventional high-fidelity simulation approaches and are more accurate than conventional fast simulation methods. Unlike many existing approaches, which utilize machine learning merely to construct the DTs, the described embodiments introduce a hybrid approach to construct the DTs that incorporates the key physics while requiring less data. The hybridization between fast simulation and machine learning further addresses the challenge associated with having only spatially sparse data available. With the incorporation of fast simulation results, the ML method becomes much less prone to overfitting.
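
Referring back to the periodic update of the corrector field described in paragraph [0035], the sketch below illustrates one hypothetical way to fine-tune an already-trained CorrectorField (from the previous sketch) on a window of newly collected data. The update interval, number of update epochs, and learning rate are assumptions to be tuned for the accuracy/efficiency balance discussed above.

import torch
import torch.nn as nn

def update_corrector_periodically(model, data_stream, update_interval=1000,
                                  update_epochs=10, lr=1e-4):
    # data_stream yields tuples (x, t, observed, fast_sim) of tensors as the
    # physical system operates; model is a previously trained CorrectorField.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    buffer = []
    for sample in data_stream:
        buffer.append(sample)
        if len(buffer) >= update_interval:              # time to refresh the corrector
            x, t, obs, sim = (torch.stack(c) for c in zip(*buffer))
            for _ in range(update_epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(x, t), obs - sim)
                loss.backward()
                opt.step()
            buffer.clear()
    return model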

[0037] The concept of initial training and online updates practiced according to embodiments of this disclosure addresses the generalizability issue commonly associated with machine learning methods. Most existing work has focused on using highly accurate simulation results as the high-fidelity data. This is computationally expensive. In contrast, the embodiments herein utilize spatially sparse measurements from physical systems as high-fidelity data.

[0038] FIG. 6 illustrates an exemplary computing environment 600 within which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 610 and computing environment 600, are known to those of skill in the art and thus are described briefly here.

[0039] As shown in FIG. 6, the computer system 610 may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610. The computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information.

[0040] The processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting, or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller, or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general-purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication therebetween. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

[0041] Continuing with reference to FIG. 6, the computer system 610 also includes a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620. The system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random-access memory (RAM) 632. The RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620. A basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631. RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620. System memory 630 may additionally include, for example, operating system 634, application programs 635, other program modules 636 and program data 637.

[0042] The computer system 610 also includes a disk controller 640 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive). Storage devices may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).

[0043] The computer system 610 may also include a display controller 665 coupled to the system bus 621 to control a display or monitor 666, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 660 and one or more input devices, such as a keyboard 662 and a pointing device 661, for interacting with a computer user and providing information to the processors 620. The pointing device 661, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processors 620 and for controlling cursor movement on the display 666. The display 666 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 661. In some embodiments, an augmented reality device 667 that is wearable by a user may provide input/output functionality allowing a user to interact with both a physical and virtual world. The augmented reality device 667 is in communication with the display controller 665 and the user input interface 660, allowing a user to interact with virtual items generated in the augmented reality device 667 by the display controller 665. The user may also provide gestures that are detected by the augmented reality device 667 and transmitted to the user input interface 660 as input signals.

[0044] The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium, such as a magnetic hard disk 641 or a removable media drive 642. The magnetic hard disk 641 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0045] As stated above, the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642. Non-limiting examples of volatile media include dynamic memory, such as system memory 630. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0046] The computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680. Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to computer system 610. When used in a networking environment, computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.

[0047] Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680). The network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.

An executable application, as used herein, comprises code or machine-readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine-readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.

[0048] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.

[0049] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.

[0050] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers, and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for.”