Title:
TREATMENT PARAMETER ESTIMATION
Document Type and Number:
WIPO Patent Application WO/2023/164007
Kind Code:
A1
Abstract:
Aspects of the disclosed technology provide solutions for simulating patient treatment outcomes and, in particular, for estimating treatment parameters that can be optimally modified to achieve a desired treatment outcome. In some aspects, a process of the disclosed technology includes steps for receiving patient data, receiving a plurality of treatment parameters, and generating an output state of the patient based on the initial state of the patient and the treatment goal for the patient. In some aspects, the process can further include steps for calculating an objective function based on the output state of the patient and the treatment goal of the patient, and updating one or more of the treatment parameters based on the objective function. Systems and machine-readable media are also provided.

Inventors:
LOBSIGER JANIK (CH)
THOMASZEWSKI BERNHARD STEFFEN (CH)
PETER DANIEL MICHEL (CH)
HUBER NIKO BENJAMIN (CH)
Application Number:
PCT/US2023/013642
Publication Date:
August 31, 2023
Filing Date:
February 22, 2023
Assignee:
ALIGN TECHNOLOGY INC (US)
International Classes:
G16H20/40; G16H50/20; G16H50/50; A61C7/36
Foreign References:
CN113408174A (2021-09-17)
US20100054411A1 (2010-03-04)
US20210339049A1 (2021-11-04)
Other References:
DORDA D. ET AL: "Differentiable Simulation for Outcome-Driven Orthognathic Surgery Planning", COMPUTER GRAPHICS FORUM : JOURNAL OF THE EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS, vol. 41, no. 8, 1 December 2022 (2022-12-01), Oxford, pages 53 - 61, XP093052558, ISSN: 0167-7055, Retrieved from the Internet DOI: 10.1111/cgf.14623
KADLECEK PETR ET AL: "Building Accurate Physics-based Face Models from Data", PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, vol. 2, no. 2, 26 July 2019 (2019-07-26), pages 1 - 16, XP093052497, Retrieved from the Internet DOI: 10.1145/3340256
Attorney, Agent or Firm:
KIMES, Benjamin A. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method comprising: receiving patient data, wherein the patient data comprises information representing an initial state of a patient; receiving a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generating an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculating an objective function based on the output state of the patient and the treatment goal of the patient; and updating one or more of the treatment parameters based on the objective function.

2. The computer-implemented method of claim 1, wherein updating the one or more treatment parameters further comprises: identifying the one or more treatment parameters based on a derivative of the objective function.

3. The computer-implemented method of claim 1, wherein generating the output state of the patient further comprises: providing the patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model.

4. The computer-implemented method of claim 3, wherein the numerical model comprises a biomechanical model.

5. The computer-implemented method of claim 1, wherein the output state of the patient comprises a three-dimensional (3D) mesh.

6. The computer-implemented method of claim 1, wherein the patient data comprises an intra-oral scan.

7. The computer-implemented method of claim 1, wherein the treatment goal for the patient comprises a three-dimensional (3D) mesh.

8. A system, comprising: at least one memory; and at least one processor coupled to the at least one memory, the at least one processor configured to: receive patient data, wherein the patient data comprises information representing an initial state of a patient; receive a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generate an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculate an objective function based on the output state of the patient and the treatment goal of the patient; and update one or more of the treatment parameters based on the objective function.

9. The system of claim 8, wherein to update the one or more treatment parameters the at least one processor is further configured to: identify the one or more treatment parameters based on a derivative of the objective function.

10. The system of claim 8, wherein to generate the output state of the patient the at least one processor is further configured to: provide the patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model.

11. The system of claim 10, wherein the numerical model comprises a bio-mechanical model.

12. The system of claim 8, wherein to generate the output state of the patient the at least one processor is further configured to: provide the patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a machine-learning (ML) model.

13. The system of claim 8, wherein the patient data comprises an intra-oral scan.

14. The system of claim 8, wherein the treatment goal for the patient comprises a three-dimensional mesh.

15. A non-transitory computer-readable storage medium comprising at least one instruction for causing a computer or processor to: receive patient data, wherein the patient data comprises information representing an initial state of a patient; receive a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generate an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculate an objective function based on the output state of the patient and the treatment goal of the patient; and update one or more of the treatment parameters based on the objective function.

16. The non-transitory computer-readable storage medium of claim 15, wherein to update the one or more treatment parameters the at least one instruction is further configured to cause the processor to: identify the one or more treatment parameters based on a derivative of the objective function.

17. The non-transitory computer-readable storage medium of claim 15, wherein to generate the output state of the patient the at least one instruction is further configured to cause the processor to: provide the patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model.

18. The non-transitory computer-readable storage medium of claim 17, wherein the numerical model comprises a bio-mechanical model.

19. The non-transitory computer-readable storage medium of claim 15, wherein to generate the output state of the patient the at least one processor is further configured to: provide the patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a machine-learning (ML) model.

20. The non-transitory computer-readable storage medium of claim 15, wherein the patient data comprises an intra-oral scan.

21. A computer-implemented method comprising: receiving, by a machine-learning (ML) model, patient data associated with a patient; receiving, by the ML model, a treatment goal associated with the patient; predicting, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; providing the one or more estimated treatment parameters to a simulation model; generating, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculating an objective function based on the output state of the patient and the treatment goal of the patient.

22. The computer-implemented method of claim 21, further comprising: updating one or more of the treatment parameters based on the objective function.

23. The computer-implemented method of claim 22, wherein updating the one or more treatment parameters further comprises: identifying the one or more treatment parameters based on a derivative of the objective function.

24. The computer-implemented method of claim 21, wherein the simulation model comprises a numerical model.

25. The computer-implemented method of claim 24, wherein the numerical model comprises a biomechanical model.

26. The computer-implemented method of claim 21, wherein the treatment goal comprises a three-dimensional (3D) model.

27. The computer-implemented method of claim 21, wherein the patient data comprises an intra-oral scan.

28. A system, comprising: at least one memory; and at least one processor coupled to the at least one memory, the at least one processor configured to: receive, by a machine-learning (ML) model, patient data associated with a patient; receive, by the ML model, a treatment goal associated with the patient; predict, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; provide the one or more estimated treatment parameters to a simulation model; generate, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculate an objective function based on the output state of the patient and the treatment goal of the patient.

29. The system of claim 28, wherein the at least one processor is further configured to: update one or more of the treatment parameters based on the objective function.

30. The system of claim 29, wherein to update the one or more treatment parameters, the at least one processor is further configured to: identify the one or more treatment parameters based on a derivative of the objective function.

31. The system of claim 28, wherein the simulation model comprises a numerical model.

32. The system of claim 31, wherein the numerical model comprises a bio-mechanical model.

33. The system of claim 28, wherein the treatment goal comprises a three-dimensional (3D) model.

34. The system of claim 28, wherein the patient data comprises an intra-oral scan.

35. A non-transitory computer-readable storage medium comprising at least one instruction for causing a computer or processor to: receive, by a machine-learning (ML) model, patient data associated with a patient; receive, by the ML model, a treatment goal associated with the patient; predict, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; provide the one or more estimated treatment parameters to a simulation model; generate, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculate an objective function based on the output state of the patient and the treatment goal of the patient.

36. The non-transitory computer-readable storage medium of claim 35, wherein the at least one instruction is further configured to cause the computer or processor to: update one or more of the treatment parameters based on the objective function.

37. The non-transitory computer-readable storage medium of claim 36, wherein to update the one or more treatment parameters, the at least one instruction is further configured to cause the computer or processor to: identify the one or more treatment parameters based on a derivative of the objective function.

38. The non-transitory computer-readable storage medium of claim 35, wherein the simulation model comprises a numerical model.

39. The non-transitory computer-readable storage medium of claim 38, wherein the numerical model comprises a bio-mechanical model.

40. The non-transitory computer-readable storage medium of claim 35, wherein the treatment goal comprises a three-dimensional (3D) model.

41. A computer-implemented method comprising: receiving orthognathic patient data, wherein the patient data comprises information representing an initial state of a patient; receiving a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generating an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculating an objective function based on the output state of the patient and the treatment goal of the patient; and updating one or more of the treatment parameters based on the objective function.

42. The computer-implemented method of claim 41, wherein the orthognathic patient data comprises one or more surface images of the patient's face.

43. The computer-implemented method of claim 41, wherein the orthognathic patient data comprises one or more three-dimensional models of the patient's face.

44. The computer-implemented method of claim 41, wherein the orthognathic patient data comprises one or more three-dimensional models of the patient's jaw bone.

45. The computer-implemented method of claim 41, wherein the treatment parameters comprise one or more facial landmarks associated with the patient.

46. The computer-implemented method of claim 41, wherein updating the one or more treatment parameters further comprises: identifying the one or more treatment parameters based on a derivative of the objective function.

47. The computer-implemented method of claim 41, wherein generating the output state of the patient further comprises: providing the orthognathic patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model.

48. The computer-implemented method of claim 47, wherein the numerical model comprises a biomechanical model.

49. The computer-implemented method of claim 41, wherein the output state of the patient comprises a three-dimensional (3D) mesh.

50. A system comprising: a memory; and a processor operatively coupled with the memory, the processor to: receive orthognathic patient data comprising information representing an initial state of a patient; receive a plurality of treatment parameters that are based on a treatment goal for the patient; generate an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculate an objective function based on the output state of the patient and the treatment goal of the patient; and update one or more of the treatment parameters based on the objective function.

51. The system of claim 50, wherein the orthognathic patient data comprises one or more surface images of the patient's face.

52. The system of claim 50, wherein the orthognathic patient data comprises one or more three-dimensional models of the patient's face.

53. The system of claim 50, wherein the orthognathic patient data comprises one or more three-dimensional models of the patient's jaw bone.

54. The system of claim 50, wherein the treatment parameters comprise one or more facial landmarks associated with the patient.

55. The system of claim 50, wherein the processor is further to: identify the one or more treatment parameters based on a derivative of the objective function.

56. The system of claim 50, wherein the processor is further to: provide the orthognathic patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model.

57. The system of claim 56, wherein the numerical model comprises a bio-mechanical model.

58. The system of claim 50, wherein the output state of the patient comprises a three-dimensional (3D) mesh.

59. A computer readable media comprising instructions that, when executed by a processor, cause the processor to perform operations comprising: receiving orthognathic patient data comprising information representing an initial state of a patient; receiving a plurality of treatment parameters that are based on a treatment goal for the patient; generating an output state of the patient based on the initial state of the patient and the plurality of treatment parameters; calculating an objective function based on the output state of the patient and the treatment goal of the patient; and updating one or more of the treatment parameters based on the objective function.

60. The computer readable media of claim 59, wherein generating the output state of the patient comprises: providing the orthognathic patient data and the plurality of treatment parameters to a simulation model, wherein the simulation model comprises a numerical model that generates the output state of the patient.

Description:
TREATMENT PARAMETER ESTIMATION

TECHNICAL FIELD

[0001] The present technology pertains to patient treatment modeling and, in particular, to the use of treatment simulation models for predicting treatment outcomes. Additionally, the disclosed technology encompasses solutions for modeling how changes to specific treatment parameters can affect the ability of a given treatment plan to achieve patient goals.

BACKGROUND

[0002] A common objective of clinical interventions is to modify various structures of a patient's body, for example, to achieve improved performance and/or aesthetic appearance. In such instances, the goal of the clinician (e.g., doctor) is to take the patient from their current condition (initial state/condition) to a final condition (treatment outcome/goal). Depending on the type of procedure, there may be many ways to achieve the goal, e.g., through different implementations of a treatment plan. However, as potential treatment plans become increasingly complex, it becomes difficult to determine which treatment parameters should be modified, and by how much, to achieve the desired treatment outcome.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0004] FIG. 1 illustrates a conceptual block diagram of an example differentiable simulator system that is configured to estimate treatment parameters based on a clinical goal, in accordance with some examples.

[0005] FIG. 2 is a flow diagram of an example process for performing treatment parameter estimation and tuning, in accordance with some examples.

[0006] FIG. 3 illustrates a conceptual block diagram of an example differentiable simulator system in which treatment parameter initialization is performed using a machine-learning (ML) model, in accordance with some examples.

[0007] FIG. 4 is a flow diagram of an example process for training a machine-learning (ML) model to estimate treatment parameters in accordance with some examples.

[0008] FIG. 5 illustrates an example of a CT scan of patient teeth, in accordance with some examples.

[0009] FIG. 6 illustrates an example of a deep learning neural network that can be implemented to identify initial treatment parameters used by a differentiable simulator, in accordance with some examples.

[0010] FIG. 7 illustrates an example computing system, in accordance with some examples.

DESCRIPTION OF EXAMPLE EMBODIMENTS

[0011] Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be references to the same embodiment or any embodiment; and, such references mean at least one of the embodiments.

[0012] Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.

[0013] The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.

[0014] Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.

[0015] Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.

Overview

[0016] In some aspects, a computer-implemented method of the disclosed technology can include receiving patient data that includes information representing an initial state of a patient; receiving a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generating an output state of the patient based on the initial state of the patient and the treatment goal for the patient; calculating a loss function (or objective function) based on the output state of the patient and the treatment goal of the patient; and updating one or more of the treatment parameters based on the objective function.

[0017] A system of the disclosed technology can include at least one memory, and at least one processor coupled to the at least one memory, the at least one processor configured to: receive patient data, wherein the patient data includes information representing an initial state of a patient; receive a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generate an output state of the patient based on the initial state of the patient and the treatment goal for the patient; calculate an objective function (or loss function) based on the output state of the patient and the treatment goal of the patient; and update one or more of the treatment parameters based on the objective function.

[0018] A non-transitory computer-readable storage medium can include at least one instruction for causing a computer or processor to receive patient data, wherein the patient data includes information representing an initial state of a patient; receive a plurality of treatment parameters, wherein the plurality of treatment parameters are based on a treatment goal for the patient; generate an output state of the patient based on the initial state of the patient and the treatment goal for the patient; calculate an objective function based on the output state of the patient and the treatment goal of the patient; and update one or more of the treatment parameters based on the objective function.

[0019] In some aspects, a computer-implemented method of the disclosed technology can include steps for receiving, by a machine-learning (ML) model, patient data associated with a patient; receiving, by the ML model, a treatment goal associated with the patient; predicting, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; providing the one or more estimated treatment parameters to a simulation model; generating, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculating an objective function based on the output state of the patient and the treatment goal of the patient.

[0020] A system of the disclosed technology can include at least one memory, and at least one processor coupled to the at least one memory, the at least one processor configured to: receive, by a machine-learning (ML) model, patient data associated with a patient; receive, by the ML model, a treatment goal associated with the patient; predict, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; provide the one or more estimated treatment parameters to a simulation model; generate, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculate an objective function based on the output state of the patient and the treatment goal of the patient.

[0021] A non-transitory computer-readable storage medium (e.g., a program product) of the disclosed technology can include at least one instruction for causing a computer or processor to: receive, by a machine-learning (ML) model, patient data associated with a patient; receive, by the ML model, a treatment goal associated with the patient; predict, by the ML model, one or more estimated treatment parameters based on the patient data and the treatment goal; provide the one or more estimated treatment parameters to a simulation model; generate, by the simulation model, an output state of the patient based on the one or more estimated treatment parameters; and calculate an objective function based on the output state of the patient and the treatment goal of the patient.

Description

[0022] As discussed previously, patient treatment plans can be used to achieve desired treatment outcomes or patient treatment goals. Depending on the treatment plan, different treatment parameters can be applied, and modified during the course of treatment, to affect patient outcomes. Treatment parameters can include virtually any condition, variable, and/or procedure that can affect treatment outcomes. By way of example, treatment parameters can include material parameters, including the composition, size, and/or shape of implements used for administering treatment. In some approaches, treatment parameters may identify specific loci or regions (landmarks) of patient anatomy that are to be altered (targeted) by the applied treatment. Treatment parameters may also describe treatment techniques or procedures, such as the timing and/or order of applied procedures, etc. By way of example, orthodontic procedures can involve repositioning a patient's teeth to a desired arrangement in order to correct malocclusions and/or improve aesthetics. To achieve these objectives, orthodontic appliances such as braces, shell aligners, and the like can be applied to the patient's teeth by an orthodontic practitioner. The appliance can be configured to exert force on one or more teeth in order to effect desired tooth movements according to a treatment plan. Treatment parameters in this context can include the type of appliances used (e.g., braces or shell aligners), material properties of the selected appliance (which affect forces exerted on each tooth), and/or treatment protocols, such as when the appliance is introduced into treatment, and/or a duration of use, etc. Treatment parameters are therefore crucial to achieving the intended treatment goals.
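
As a purely illustrative aside (the class name, fields, and default values below are our own assumptions and do not appear in the disclosure), the kinds of treatment parameters described above for an aligner-based plan could be collected in a simple data structure along these lines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlignerTreatmentParameters:
    """Hypothetical container for the parameter types described in the text."""
    appliance_type: str = "shell_aligner"          # type of appliance, e.g. "braces" or "shell_aligner"
    material_stiffness_mpa: float = 500.0          # material property affecting forces exerted on each tooth
    tray_thickness_mm: float = 0.75                # appliance geometry
    stage_durations_days: List[int] = field(default_factory=lambda: [14] * 20)  # protocol/timing per stage
    per_tooth_forces_n: List[float] = field(default_factory=list)               # forces imparted on individual teeth
```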

[0023] With the growing use of three-dimensional (3D) computer graphics and software in clinical applications, it would be useful to use computational approaches to model the impact of various treatment parameters on clinical outcomes. In a similar manner, it would be useful to reverse-model treatment outcomes, for example, to ascertain what treatment parameters may be implemented to achieve a desired goal. However, due to the large number of potential treatment parameters attendant in even routine procedures, modeling approaches (e.g., that utilize naive optimization strategies) are computationally intractable in most real-world scenarios.

[0024] Aspects of the disclosed technology provide solutions for modeling treatment outcomes and, in particular, for identifying and modifying specific treatment parameters for the purpose of achieving a clinical objective. Treatment outcomes can be predicted (modeled) using a differentiable simulator configured to receive input data representing a current patient state and treatment plan parameters, and to model the corresponding treatment outcome/s. In some aspects, the patient data can be representative of physical patient attributes. For example, patient data can include bio-mechanical models that represent the geometry of specific anatomical structures. In the orthodontic context, such models may include but are not limited to image data (e.g., photographs), cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc.

[0025] The modeled patient outcomes can be used to compute an error function (or objective function) representing the difference between the desired goal and the modeled patient outcome. As such, in some approaches, the best patient treatment approaches (e.g., the optimal combinations of treatment parameters) are those that minimize variations between the modeled patient outcome and the treatment goal, e.g., by minimizing the error function. As discussed in further detail below, differentiable simulation approaches (e.g., using the first derivative of the error function) can be used to identify treatment parameters that can be tuned/updated to most effectively minimize the error function.
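
One way to make the preceding paragraph concrete, using notation of our own choosing rather than anything defined in the disclosure: if x_0 denotes the initial patient state, θ the treatment parameters, S a differentiable simulator, and g the treatment goal, the error (objective) function and a gradient-based parameter update could be written as:

```latex
% Illustrative formulation only; the symbols are assumptions, not the disclosure's notation.
E(\theta) = \left\lVert S(x_0, \theta) - g \right\rVert^2,
\qquad
\theta \leftarrow \theta - \eta \,\frac{\partial E}{\partial \theta}
```

Here η is a step size; minimizing E(θ) corresponds to driving the modeled outcome toward the treatment goal.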

[0026] In some implementations, machine-learning approaches may be used to identify/initialize treatment parameters, e.g., for the differentiable simulator. In such approaches, a machine-learning (ML) model trained to identify target treatment parameters (e.g., based on an intended goal) can be used to select/initialize treatment parameters for the differentiable simulator. As discussed in further detail below, parameter update techniques in which parameters are further tuned based on a derivative (e.g., a first derivative, or second derivative, etc.) of the error function (or objective function) can be used to further tune/update the ML-selected treatment parameters.

[0027] Several of the provided examples relate to clinical interventions in the context of orthodontic treatment; however, it is understood that the concepts disclosed herein may be applied in other clinical applications without departing from the scope of the disclosed technology. For example, aspects of the disclosed technology may be applicable to various patient treatments, including surgical interventions that are used to modify various aspects of a patient's anatomy. By way of example, the disclosed approaches may be used in the context of orthognathic surgical interventions that are designed to alter patient jaw and/or face shapes, and the like.

[0028] FIG. 1 illustrates a conceptual block diagram of an example differentiable simulator system 100 that is configured to estimate treatment parameters based on a clinical goal, in accordance with some examples. Simulator system 100 includes a treatment plan 102 that includes treatment parameters 103, and treatment goals 105. As discussed above, treatment parameters 103 can include any metric or variable that can describe any aspect or characteristic of a patient-administered clinical intervention. Treatment goals 105 can include data describing an intended (or target) patient outcome. By way of example, with respect to an orthodontic treatment context, treatment parameters 103 can include parameters describing the material composition and/or shape of aligner trays configured to reposition a patient's teeth in a predetermined manner. Treatment parameters 103 may also describe forces incident on different portions or locations of patient anatomy, such as forces imparted on individual teeth, or groups of teeth, as a result of tray configuration properties. In turn, treatment goals 105 can include data indicating the target dentition of the patient. By way of example, treatment goals 105 may include photographs and/or three-dimensional (3D) meshes (or other 3D model representations) of an intended or desired arrangement of the patient's teeth, and/or a desired aesthetic outcome.

[0029] In practice, treatment parameters 103 are provided to a simulator 108, in conjunction with patient data 106. The patient data 106 received by the simulator 108 can include any data representing a current or recent condition of the patient. For example, patient data can include measurements or images representing a portion of the patient anatomy that is to be altered by the treatment intervention. In some aspects, the patient data 106 may be represented as a bio-mechanical model of the patient's anatomy. In the orthodontic context, for example, patient data can include images of teeth and/or facial features, including but not limited to digital images, intra-oral scans, cone beam computed tomography (CBCT) scans, x-ray images, and the like.

[0030] Using the received treatment parameters 103 and patient data 106, the simulator 108 can model expected patient outcomes (or output states), e.g., as represented by output data 110. In some aspects, the simulator 108 can be (or can utilize) a physics-based model, such as a bio-mechanical model that is configured to predict the outcome of treatment parameters on a provided patient state, as indicated by the patient data 106. By way of example, the simulator 108 can be configured to utilize a numerical method (e.g., such as the Finite Element Method (FEM)) to model the structural mechanics of a patient's anatomy when treated using interventions reflected by treatment parameters 103. Numerical modeling may be used to simulate the behavior of a patient's anatomy when treated using interventions reflected by treatment parameters 103. Examples of numerical methods that may be used in such modeling include the finite element method (FEM), boundary element method (BEM), finite difference method (FDM), and discrete element method (DEM). Embodiments are discussed with reference to application of the finite element method (FEM), to finite element models, and to finite element analyses. However, it should be understood that any of the other possible types of numerical analysis and modeling may be performed in embodiments instead of FEM. Using the orthodontic example, the simulator 108 can be configured to model (predict) the movement of a patient's teeth in response to the application of an orthodontic aligner. In this example, the treatment parameters 103 may include data reflecting the size, shape, and/or material characteristics of the aligner, whereas patient data 106 may include data, such as images or 3D mesh models, reflecting the current positions and orientations of the patient's teeth and other oral structures.
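
The following sketch is only a toy stand-in for the simulator described above: it replaces the FEM/biomechanical machinery with a linear response so the interface and data flow are easy to see. The function name, arguments, and the linear-response assumption are ours, not the disclosure's.

```python
import numpy as np

def simulate_outcome(initial_state: np.ndarray,
                     treatment_params: np.ndarray,
                     response_matrix: np.ndarray) -> np.ndarray:
    """Toy stand-in for simulator 108: predict an output state from an initial state
    and treatment parameters. A real implementation would run a numerical model
    (e.g., FEM); here the response to the parameters is simply linear."""
    # Each column of response_matrix encodes how one treatment parameter displaces the state.
    return initial_state + response_matrix @ treatment_params
```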

[0031] The predicted impact of the applied treatment parameters 103 is reflected in output data 110 (or output state) generated by the simulator 108. Depending on the desired implementation, output data 110 can be (or can include) simulated post-treatment data in the form of a 3D mesh, or image data, for example, that reflects structural and/or aesthetic changes to the patient after intervention using the treatment parameters 103. The output data 110 can then be compared to the treatment goals 105 to determine/calculate an objective function 112 that represents a similarity between the post-treatment outcome (e.g., as output data 110) and the clinical goal of the patient (e.g., goals 105).

[0032] In some aspects, the objective function 112 can be used to identify specific treatment parameters (e.g., from among all the treatment parameters 103) that most significantly impact the post-treatment outcome. For example, a first derivative of the objective function 112 can be used to identify one or more treatment parameters that maximally contribute to a reduction in a magnitude associated with the objective function 112. Treatment parameters identified from the first derivative of the objective function (or loss function) 112 can be selected for modification/update, e.g., using a parameter update operation 114. In such instances, the previously applied treatment parameters 103 used by the simulator 108 can be modified based on the parameter selection performed using the objective function 112. Updated or tuned parameters can then be propagated back into the treatment plan 102, e.g., as an update to the treatment parameters 103 that are consumed by the simulator 108. The process of forward modeling the patient outcome (e.g., as reflected in the output data 110), and tuning the treatment parameters 103 consumed by the simulator 108, can be iterated until a satisfactory patient outcome is reached, i.e., until the treatment goal 105 is achieved.
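
A minimal sketch of the forward-model/update loop just described, assuming the toy simulate_outcome function from the earlier sketch and a simple quadratic objective (all names and the gradient-descent update rule are our own illustration, not the disclosure's implementation):

```python
import numpy as np

def objective(output_state: np.ndarray, goal_state: np.ndarray) -> float:
    """Objective (loss): squared mismatch between the simulated outcome and the treatment goal."""
    return float(np.sum((output_state - goal_state) ** 2))

def tune_parameters(initial_state, goal_state, params, response_matrix,
                    learning_rate=0.05, iterations=200, tolerance=1e-6):
    """Iterate: simulate, evaluate the objective, update parameters from its derivative."""
    for _ in range(iterations):
        output_state = simulate_outcome(initial_state, params, response_matrix)
        loss = objective(output_state, goal_state)
        if loss < tolerance:                                   # treatment goal (approximately) reached
            break
        # Analytic derivative of the quadratic objective for the toy linear simulator.
        grad = 2.0 * response_matrix.T @ (output_state - goal_state)
        params = params - learning_rate * grad                 # parameter update (cf. operation 114)
    return params
```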

[0033] In the context of an orthodontic intervention, for example, the simulator 108 can be used to predict the resulting geometry of a patient's teeth (e.g., as reflected in the output data 110), subsequent to an intervention utilizing treatment parameters (103), and based on the patient's current dentition (e.g., as reflected in the patient data 106). The output data 110 can then be used to determine (or compute) an objective function 112, to which a differential analysis can be applied. By way of example, differential analysis of the resulting objective function 112 may indicate that certain treatment parameters, such as aligner composition, may be more material to the achievement of the treatment goal 105, as compared with other/different treatment parameters, such as aligner geometry. Additionally, the derivatives can indicate, for each treatment parameter, a magnitude and a direction of update, e.g., to maximally reduce the loss (error). As such, by utilizing differential analysis of the objective (error) function 112, the treatment parameters 103 may be more efficiently tuned/updated, thereby improving the patient treatment outcome.

[0034] FIG. 2 is a flow diagram of an example process 200 for performing treatment parameter estimation and tuning, in accordance with some examples. At step 202, patient data can be received, e.g., by a differentiable simulator, such as simulator 108, discussed above with respect to FIG. 1. The received patient data can represent an initial (or current) state of a patient, such as information describing a portion of the patient's anatomy. As discussed above, patient data can include bio-mechanical models that represent the geometry of specific anatomical structures. In the orthodontic context, such models may include but are not limited to image data (e.g., photographs), cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc.

[0035] At step 204, a plurality of treatment parameters can be received, e.g., by the differentiable simulator. As discussed above, treatment parameters can include material parameters, including the composition, size and/or shape of implements used for administering treatment. Treatment parameters may also describe treatment techniques or procedures, such as the timing and/or order of applied procedures, etc. By way of example, orthodontic procedures can involve repositioning a patient's teeth to a desired arrangement in order to correct malocclusions and/or improve aesthetics. To achieve these objectives, orthodontic appliances such as braces, shell aligners, and the like can be applied to the patient's teeth by an orthodontic practitioner. The appliance can be configured to exert force on one or more teeth in order to effect desired tooth movements according to a treatment plan. Treatment parameters in this context can include the type of appliances used (e.g., braces or shell aligners), material properties of the selected appliance (which affect forces exerted on each tooth), and/or treatment protocols, such as when the appliance is introduced into treatment, and/or a duration of use, etc.

[0036] At step 206, the process 200 includes generating an output state of the patient based on the initial state of the patient (e.g., based on the patient data) and the received treatment parameters. As discussed above with respect to FIG. 1, the output state can represent a simulation of the post-treatment outcome, e.g., that results from application of the treatment parameters. As such, the output state (or output data) can be (or can include) simulated post-treatment data in the form of a 3D mesh, or image data, for example, that reflects structural and/or aesthetic changes to the patient after applied intervention using the treatment parameters.

[0037] As discussed above, the output state can be an output generated by a simulation model, such as a differentiable simulator that is (or that includes) a physics-based model. By way of example, the simulator (or differentiable simulator) can be configured to utilize a finite element method (FEM) to model the structural mechanics of a patient's anatomy when treated using interventions reflected by the received treatment parameters.

[0038] At step 208, the process 200 includes calculating (or determining) an objective function based on the output state of the patient and the treatment goal. The objective function can represent a degree of similarity between the post-treatment outcome (e.g., as indicated by the output state/data) and the clinical goal for the patient. As discussed above, the loss function can be used to identify specific treatment parameters that most meaningfully impact the treatment outcome. For example, a first derivative of the objective function can be used to identify treatment parameters for modification/update, e.g., to better attain the patient's treatment goal.

[0039] At step 210, the process 200 includes updating one or more of the treatment parameters based on the objective function. In such approaches, certain (selected) previously applied treatment parameters can be modified based on the determined contribution to the objective function. For example, treatment parameters that contribute more meaningfully to the overall loss (or error) may be preferentially selected for modification/update, i.e., to improve outcomes of the administered treatment. The updated/tuned treatment parameters can then be propagated as inputs to the simulator.
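
When a simulator does not expose analytic derivatives, the derivative of the objective with respect to each treatment parameter can still be estimated numerically. The sketch below (a finite-difference approximation; the helper name and the two-argument simulate(initial_state, params) interface are assumptions of ours) illustrates how such per-parameter sensitivities might be obtained and then used to rank parameters for update:

```python
import numpy as np

def numerical_gradient(simulate, objective, initial_state, goal_state, params, eps=1e-4):
    """Finite-difference estimate of d(objective)/d(parameter), one entry per parameter.
    Larger-magnitude entries indicate parameters that contribute more to the loss."""
    base_loss = objective(simulate(initial_state, params), goal_state)
    grad = np.zeros_like(params, dtype=float)
    for i in range(params.size):
        bumped = params.copy()
        bumped[i] += eps                                   # perturb one treatment parameter
        bumped_loss = objective(simulate(initial_state, bumped), goal_state)
        grad[i] = (bumped_loss - base_loss) / eps
    return grad
```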

[0040] In some aspects, treatment plan parameters may be initialized using machine-learning (ML) approaches. For example, treatment parameters may be provided as an output by an ML model trained to estimate/predict optimal treatment parameters given paired treatment goal and patient data inputs. Further details regarding the use of ML approaches to perform parameter estimation/initialization are provided with respect to FIGs. 3 and 4, below.

[0041] FIG. 3 illustrates a conceptual block diagram of an example differentiable simulator system 300 in which treatment parameter initialization is performed using a machine-learning (ML) model 302, in accordance with some examples. In the example of FIG. 3, ML model 302 is configured to predict/estimate treatment parameters 303, e.g., based on a paired input of treatment goals 305, and patient data 306. Similar to the examples discussed above with respect to FIG. 2, treatment goals can include any information representing a desired or intended patient outcome. By way of example, treatment goals may include representations of a target post-treatment outcome, including but not limited to images, and/or 3D mesh data, etc. Additionally, treatment goal data 305 may include other types of imaging data, including but not limited to cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc. Similarly, the patient data 306 can include data representing a current (or most recent) condition of the patient, such as one or more regions of the patient's anatomy to be altered by the treatment plan. As such, the patient data may also include image data (e.g., photographs), cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc.

[0042] Based on the received patient goal data 305 and the patient data 306, the ML model can predict/estimate initial treatment parameters 303 that are then provided to a simulator (e.g., a differentiable simulator 308). Similar to the process discussed above with respect to FIG. 1, the simulator 308 can use the received treatment parameters to model the impact of the applied treatment plan, and to output the predicted post-treatment results as output data 310. Depending on the desired implementation, the simulator 308 can be (or can utilize) a physics-based model, such as a biomechanical model configured to predict the outcome of treatment parameters on a provided patient state, as indicated by the patient data 306.

[0043] In some aspects, the output data 310 can be used to evaluate the simulated post-treatment result based on a similarity to the treatment objective, i.e., as defined by the treatment goals 305. As discussed above, this evaluation can be performed using an objective (error) function 312 that defines a disparity between the objective treatment goal 305, and the post-treatment patient result reflected in the output data 310. In some implementations, the objective error function 312 can be used to identify and update/tune selected parameters, as discussed above with respect to FIGs. 1 and 2. For example, using a differential analysis (e.g., by calculating a first derivative of the objective error function 312), treatment parameters that most impact the output data 310 can be identified. By updating/tuning these select parameters (e.g., in parameter update step 314), the input parameters 303 to the simulator 308 can be modified to better achieve the intended clinical outcome.
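
A compact sketch of the FIG. 3 flow, assuming a hypothetical ml_model.predict(...) interface and reusing the toy simulate_outcome function from the earlier sketch (none of these names come from the disclosure):

```python
import numpy as np

def plan_treatment(ml_model, patient_data, goal_state, initial_state,
                   response_matrix, refine_steps=50, learning_rate=0.05):
    """ML model proposes initial treatment parameters (cf. parameters 303); a
    differentiable simulator then refines them against the objective (cf. update 314)."""
    params = ml_model.predict(patient_data, goal_state)        # ML-based initialization
    for _ in range(refine_steps):
        output_state = simulate_outcome(initial_state, params, response_matrix)
        grad = 2.0 * response_matrix.T @ (output_state - goal_state)
        params = params - learning_rate * grad                 # derivative-based refinement
    return params
```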

[0044] FIG. 4 is a flow diagram of an example process 400 for training a machine-learning (ML) model to estimate treatment parameters, in accordance with some examples. The process 400 begins with step 402 in which patient data is received. The patient data can be representative of physical patient attributes. For example, patient data can include bio-mechanical models representing specific anatomical structures of the patient. By way of example, patient data may include, but is not limited to, image data (e.g., photographs), cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc.

[0045] At step 404, the process 400 includes receiving a treatment goal associated with the patient. As discussed above, the treatment goal can include data specifying a desired/target outcome for the treatment plan. Similar to the patient data, the treatment goal data can include, but is not limited to, image data (e.g., photographs), cone beam computed tomography (CBCT) scan data, x-ray images, magnetic resonance imaging (MRI) data, intra-oral scans, and/or face scans, etc.

[0046] At step 406, the process 400 can include predicting, using an ML model, one or more (estimated) treatment parameters, e.g., based on the patient data and the treatment goal. As discussed above, the ML model can be configured to predict optimal treatment parameters to achieve a desired patient objective.

[0047] At step 408, the process 400 can include providing the estimated treatment parameters to a simulation model, for example, that is configured to simulate post-treatment data for the patient, based on the applied treatment parameters. In some aspects, the simulation model can be (or can utilize) a physics-based model, such as a bio-mechanical model configured to predict the outcome of treatment parameters on a provided patient state, as indicated by the patient data received at step 402. By way of example, the simulation model can be configured to utilize a finite element method (FEM) to model the structural mechanics of a patient's anatomy when treated using interventions reflected by the estimated treatment parameters.

[0048] Using an orthodontic example, the simulation model can be configured to model (predict) the movement of a patient's teeth in response to the application of an orthodontic aligner. In this example, the treatment parameters can include the size, shape, and/or material characteristics of the aligner, whereas patient data may include image data and/or 3D mesh models reflecting the current positions and orientations of the patient's teeth and/or soft tissue structures.

[0049] In step 410, the process 400 includes generating an output state of the patient based on the initial state of the patient (e.g., based on the patient data) and the received treatment parameters. As discussed above, the output state can represent a simulation of the post-treatment outcome, e.g., that results from application of the treatment parameters. As such, the output state (or output data) can include simulated post-treatment data in the form of a 3D mesh, or image data, for example, that reflects structural and/or aesthetic changes to the patient after applied intervention using the treatment parameters.

[0050] At step 412, the process 400 includes calculating an objective function (also: loss function) based on the output state of the patient and the treatment goal of the patient. The objective function can be used to determine a success (or failure) of the applied treatment, for example, by representing a disparity between the post-treatment patient state and the treatment goal. In some aspects, the objective loss can be used to update one or more weights of the machine-learning model in order to perform training. In some aspects, a derivative of the objective function (loss function) may be used to determine a magnitude and direction (sign) of the ML model weight updates. Through training, the computed objective function can be used to improve the ML model predictions about future treatment parameters. Once trained and deployed into production, in some aspects, the loss function may be used to further update treatment parameters that are provided to the simulation model, e.g., in instances where additional simulation iterations are desired. By way of example, differential analysis techniques may be used to identify/select specific treatment parameters for updating/tuning, which can then be used to perform additional treatment simulation.
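
To illustrate how the objective computed at step 412 could drive training, the sketch below treats the ML model as a simple linear map from patient features to treatment parameters and uses finite differences for the weight gradient. Every name, the linear-model assumption, and the finite-difference scheme are our own simplifications, not the training procedure of the disclosure:

```python
import numpy as np

def train_parameter_estimator(examples, weights, simulate, objective,
                              learning_rate=1e-3, epochs=10, eps=1e-4):
    """Each example is (patient_features, initial_state, goal_state). The simulated
    outcome is scored against the goal, and that loss updates the model weights;
    the sign and magnitude of each update come from the derivative of the loss."""
    for _ in range(epochs):
        for features, initial_state, goal_state in examples:
            params = weights @ features                        # ML prediction of treatment parameters
            base_loss = objective(simulate(initial_state, params), goal_state)
            grad = np.zeros_like(weights)
            for r in range(weights.shape[0]):                  # finite-difference gradient w.r.t. weights
                for c in range(weights.shape[1]):
                    bumped = weights.copy()
                    bumped[r, c] += eps
                    bumped_loss = objective(simulate(initial_state, bumped @ features), goal_state)
                    grad[r, c] = (bumped_loss - base_loss) / eps
            weights = weights - learning_rate * grad           # weight update from the loss derivative
    return weights
```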

[0051] FIG. 5 shows an exemplary diagram 500 of a CT scan of teeth. In this embodiment, the roots are derived directly from a high-resolution CBCT scan of the patient. Scanned roots can then be applied to crowns derived from an impression, or used with the existing crowns extracted from Cone Beam Computed Tomography (CBCT) data. A CBCT single scan gives 3D data and multiple forms of X-ray-like data. PVS impressions are avoided.

[0052] In one embodiment, a cone beam x-ray source and a 2D area detector scan the patient's dental anatomy, preferably over a 360 degree angular range and along its entire length, by any one of various methods wherein the position of the area detector is fixed relative to the source, and relative rotational and translational movement between the source and object provides the scanning (irradiation of the object by radiation energy). As a result of the relative movement of the cone beam source to a plurality of source positions (i.e., “views”) along the scan path, the detector acquires a corresponding plurality of sequential sets of cone beam projection data (also referred to herein as cone beam data or projection data), each set of cone beam data being representative of x-ray attenuation caused by the object at a respective one of the source positions.

[0053] The disclosure now turns to a further discussion of models that can be used through the environments and techniques described herein. Specifically, FIG. 6 is an illustrative example of a deep learning neural network 600 that can be implemented to perform treatment parameter estimation. As discussed above, an input layer 620 can be configured to receive patient data and/or data relating to target treatment goals. The neural network 600 includes multiple hidden layers 622a, 622b, through 622n. The hidden layers 622a, 622b, through 622n include “n” number of hidden layers, where “n” is an integer greater than or equal to one. The number of hidden layers can be made to include as many layers as needed for the given application. The neural network 600 further includes an output layer 621 that provides an output resulting from the processing performed by the hidden layers 622a, 622b, through 622n. In one illustrative example, the output layer 621 can provide estimated treatment parameters (e.g., estimated parameters 303) that can be used/ingested by a differentiable simulator to estimate a patient treatment outcome.

[0054] The neural network 600 is a multi-layer neural network of interconnected nodes. Each node can represent a piece of information. Information associated with the nodes is shared among the different layers and each layer retains information as information is processed. In some cases, the neural network 600 can include a feed-forward network, in which case there are no feedback connections where outputs of the network are fed back into itself. In some cases, the neural network 600 can include a recurrent neural network, which can have loops that allow information to be carried across nodes while reading in input.

[0055] Information can be exchanged between nodes through node-to-node interconnections between the various layers. Nodes of the input layer 620 can activate a set of nodes in the first hidden layer 622a. For example, as shown, each of the input nodes of the input layer 620 is connected to each of the nodes of the first hidden layer 622a. The nodes of the first hidden layer 622a can transform the information of each input node by applying activation functions to the input node information. The information derived from the transformation can then be passed to and can activate the nodes of the next hidden layer 622b, which can perform their own designated functions. Example functions include convolutional, up-sampling, data transformation, and/or any other suitable functions. The output of the hidden layer 622b can then activate nodes of the next hidden layer, and so on. The output of the last hidden layer 622n can activate one or more nodes of the output layer 621, at which an output is provided. In some cases, while nodes (e.g., node 626) in the neural network 600 are shown as having multiple output lines, a node can have a single output and all lines shown as being output from a node represent the same output value.
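
A minimal sketch of the layer-to-layer activation just described, with a ReLU nonlinearity chosen purely for illustration (the disclosure does not specify activation functions or layer shapes):

```python
import numpy as np

def forward_pass(x, layers):
    """layers is a list of (W, b) pairs; every node of one layer feeds every node of
    the next, with a nonlinearity applied between hidden layers."""
    activation = x
    for W, b in layers[:-1]:
        activation = np.maximum(0.0, W @ activation + b)   # hidden layer (ReLU as an example)
    W_out, b_out = layers[-1]
    return W_out @ activation + b_out                      # output layer, e.g. estimated treatment parameters
```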

[0056] In some cases, each node or interconnection between nodes can have a weight that is a set of parameters derived from the training of the neural network 600. Once the neural network 600 is trained, it can be referred to as a trained neural network, which can be used to classify one or more activities. For example, an interconnection between nodes can represent a piece of information learned about the interconnected nodes. The interconnection can have a tunable numeric weight that can be tuned (e.g., based on a training dataset), allowing the neural network 600 to be adaptive to inputs and able to learn as more and more data is processed.

[0057] The neural network 600 is pre-trained to process the features from the data in the input layer 620 using the different hidden layers 622a, 622b, through 622n in order to provide the output through the output layer 621.

[0058] In some cases, the neural network 600 can adjust the weights of the nodes using a training process called backpropagation. As noted above, a backpropagation process can include a forward pass, a loss function, a backward pass, and a weight update. The forward pass, loss function, backward pass, and parameter update are performed for one training iteration. The process can be repeated for a certain number of iterations for each set of training data until the neural network 600 is trained well enough so that the weights of the layers are accurately tuned.
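The following sketch walks through the four stages named above (forward pass, loss, backward pass, weight update) for a single-hidden-layer network trained with gradient descent. The architecture, synthetic data, and learning rate are assumptions made purely for illustration and are not the disclosed model.

```python
import numpy as np

# A minimal sketch of repeated training iterations, each consisting of a
# forward pass, a loss evaluation, a backward pass, and a weight update.

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))          # 16 synthetic training examples, 4 features
target = rng.standard_normal((16, 2))     # 2 target values per example

W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((8, 2)) * 0.1, np.zeros(2)
lr = 0.01                                 # learning rate

for step in range(100):
    # Forward pass
    h = np.maximum(x @ W1 + b1, 0.0)      # hidden layer with ReLU
    output = h @ W2 + b2

    # Loss: mean (per example) squared error between output and target
    loss = 0.5 * np.sum((target - output) ** 2) / x.shape[0]

    # Backward pass: gradients of the loss with respect to each weight
    d_out = (output - target) / x.shape[0]
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (h > 0)
    dW1, db1 = x.T @ d_h, d_h.sum(axis=0)

    # Weight update: step in the direction opposite the gradient
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```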

[0059] As noted above, for a first training iteration for the neural network 600, the output will likely include values that do not give preference to any particular class due to the weights being randomly selected at initialization. For example, if the output is a vector with probabilities that the object includes different classes, the probability value for each of the different classes may be equal or at least very similar (e.g., for ten possible classes, each class may have a probability value of 0.1). With the initial weights, the neural network 600 is unable to determine low level features and thus cannot make an accurate determination of what the classification of the object might be. A loss function can be used to analyze error in the output. Any suitable loss function definition can be used, such as a Cross-Entropy loss. Another example of a loss function is the mean squared error (MSE), defined as E_total = Σ ½(target − output)². The loss can be set to be equal to the value of E_total.
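As an illustration of the loss values discussed above, the snippet below evaluates a cross-entropy loss and the squared-error term E_total for the ten-class example with uniform initial predictions (probability 0.1 per class). The one-hot label is invented for demonstration.

```python
import numpy as np

# Loss values for the ten-class example above: uniform predictions of 0.1 per
# class against a hypothetical one-hot label. Cross-entropy evaluates to about
# 2.30 (i.e., -log(0.1)), reflecting an uninformative initial prediction.

probs = np.full(10, 0.1)                        # uniform predictions over 10 classes
target = np.zeros(10)
target[3] = 1.0                                 # hypothetical one-hot label for class 3

cross_entropy = -np.sum(target * np.log(probs))
e_total = np.sum(0.5 * (target - probs) ** 2)   # squared-error loss E_total

print(cross_entropy, e_total)
```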

[0060] The loss (or error) will be high for the first training examples since the actual values will be much different than the predicted output. The goal of training is to minimize the amount of loss so that the predicted output is the same as the training label. The neural network 600 can perform a backward pass by determining which inputs (weights) most contributed to the loss of the network, and can adjust the weights so that the loss decreases and is eventually minimized. A derivative of the loss with respect to the weights (denoted as dL/dW, where W are the weights at a particular layer) can be computed to determine the weights that contributed most to the loss of the network. After the derivative is computed, a weight update can be performed by updating all the weights of the filters. For example, the weights can be updated so that they change in the opposite direction of the gradient. The weight update can be denoted as w = w_i − η·(dL/dW), where w denotes a weight, w_i denotes the initial weight, and η denotes the learning rate. The learning rate can be set to any suitable value, with a higher learning rate yielding larger weight updates and a lower value yielding smaller weight updates.
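A small example of the update rule w = w_i − η·(dL/dW) is sketched below, showing how a larger learning rate produces a larger weight change. The weight and gradient values are arbitrary.

```python
import numpy as np

# Illustration of the update rule w = w_i - eta * dL/dW for two learning
# rates: the larger eta produces the larger weight change.

w_initial = np.array([0.50, -0.20, 0.10])
grad = np.array([0.40, -0.10, 0.25])          # dL/dW at the current weights

for eta in (0.01, 0.5):
    w_updated = w_initial - eta * grad        # step opposite the gradient
    print(f"eta={eta}: max weight change {np.abs(w_updated - w_initial).max():.3f}")
```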

[0061] The neural network 600 can include any suitable deep network. One example includes a convolutional neural network (CNN), which includes an input layer and an output layer, with multiple hidden layers between the input and output layers. The hidden layers of a CNN include a series of convolutional, nonlinear, pooling (for downsampling), and fully connected layers. The neural network 600 can include any other deep network other than a CNN, such as an autoencoder, deep belief networks (DBNs), or recurrent neural networks (RNNs), among others.
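The sketch below assembles the layer types listed above (convolution, nonlinearity, pooling, fully connected) into a small CNN. The choice of PyTorch, the layer sizes, and the 32x32 single-channel input are assumptions for illustration only, not part of the disclosure.

```python
import torch
from torch import nn

# A small CNN with the layer types listed above. All sizes are illustrative.

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer
    nn.ReLU(),                                   # nonlinearity
    nn.MaxPool2d(2),                             # pooling (downsampling)
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 10),                  # fully connected output layer
)

# Hypothetical single-channel 32x32 input (e.g., a 2-D slice of imaging data).
x = torch.randn(1, 1, 32, 32)
logits = cnn(x)
print(logits.shape)                              # torch.Size([1, 10])
```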

[0062] As understood by those of skill in the art, machine-learning based classification techniques can vary depending on the desired implementation. For example, machine-learning classification schemes can utilize one or more of the following, alone or in combination: hidden Markov models; recurrent neural networks; convolutional neural networks (CNNs); deep learning; Bayesian symbolic methods; generative adversarial networks (GANs); support vector machines; image registration methods; and/or applicable rule-based systems. Where regression algorithms are used, they may include but are not limited to: a Stochastic Gradient Descent Regressor, and/or a Passive Aggressive Regressor, etc.
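As one possible illustration of the regression algorithms named above, the snippet below fits a Stochastic Gradient Descent Regressor and a Passive Aggressive Regressor on synthetic data. The use of scikit-learn, the data, and the hyperparameters are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor, SGDRegressor

# Fitting the two regressors named above on synthetic data.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                         # 200 samples, 5 features
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0.0, 0.1, 200)

for model in (SGDRegressor(max_iter=1000), PassiveAggressiveRegressor(max_iter=1000)):
    model.fit(X, y)
    print(type(model).__name__, round(model.score(X, y), 3))
```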

[0063] Machine learning classification models can also be based on clustering algorithms (e.g., a Mini-batch K-means clustering algorithm), a recommendation algorithm (e.g., a Miniwise Hashing algorithm, or Euclidean Locality-Sensitive Hashing (LSH) algorithm), and/or an anomaly detection algorithm, such as a Local Outlier Factor algorithm. Additionally, machine-learning models can employ a dimensionality reduction approach, such as one or more of: a Mini-batch Dictionary Learning algorithm, an Incremental Principal Component Analysis (PCA) algorithm, a Latent Dirichlet Allocation algorithm, and/or a Mini-batch K-means algorithm, etc.
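A short sketch combining the algorithm families listed above (dimensionality reduction, clustering, and anomaly detection) follows; the use of scikit-learn, the synthetic data, and all parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import IncrementalPCA
from sklearn.neighbors import LocalOutlierFactor

# Dimensionality reduction, clustering, and anomaly detection drawn from the
# algorithm families listed above. Data and parameter choices are illustrative.

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))

X_reduced = IncrementalPCA(n_components=5).fit_transform(X)           # reduce to 5 dims
labels = MiniBatchKMeans(n_clusters=3).fit_predict(X_reduced)         # cluster assignments
outliers = LocalOutlierFactor(n_neighbors=20).fit_predict(X_reduced)  # -1 marks anomalies

print(labels[:10], outliers[:10])
```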

[0064] The disclosure now turns to FIG. 7 which illustrates an example of a processor-based computing system 700 wherein the components of the system are in electrical communication with each other using a bus 705. The computing system 700 can include a processing unit (CPU or processor) 710 and a system bus 705 that may couple various system components including the system memory 715, such as read only memory (ROM) 720 and random-access memory (RAM) 725, to the processor 710. The computing system 700 can include a cache 712 of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 710. The computing system 700 can copy data from the memory 715, ROM 720, RAM 725, and/or storage device 730 to the cache 712 for quick access by the processor 710. In this way, the cache 712 can provide a performance boost that avoids processor delays while waiting for data. These and other modules can control the processor 710 to perform various actions. Other system memory 715 may be available for use as well. The memory 715 can include multiple different types of memory with different performance characteristics. The processor 710 can include any general-purpose processor and a hardware module or software module, such as module 1 732, module 2 734, and module 3 736 stored in the storage device 730, configured to control the processor 710 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 710 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multicore processor may be symmetric or asymmetric.

[0065] To enable user interaction with the computing system 700, an input device 745 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 735 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing system 700. The communications interface 740 can govern and manage the user input and system output. There may be no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

[0066] The storage device 730 can be a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memory, read only memory, and hybrids thereof.

[0067] As discussed above, the storage device 730 can include the software modules 732, 734, 736 for controlling the processor 710. Other hardware or software modules are contemplated. The storage device 730 can be connected to the system bus 705. In some embodiments, a hardware module that performs a particular function can include a software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 710, bus 705, output device 735, and so forth, to carry out the function. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.

[0068] In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

[0069] Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.

[0070] Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.

[0071] The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.

[0072] Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.

[0073] Claim language reciting "at least one of" refers to at least one of a set and indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting "at least one of A and B" means A, B, or A and B.