

Title:
SYSTEMS AND METHODS FOR GENERATING VIRTUAL GINGIVA
Document Type and Number:
WIPO Patent Application WO/2023/158331
Kind Code:
A1
Abstract:
Systems and methods for generating a virtual gingiva. The method includes receiving, by one or more processors, a first three-dimensional (3D) representation of a dentition. The 3D representation includes a plurality of tooth representations and a gingiva representation. The method includes identifying, by the one or more processors, a shape of the gingiva representation. The method includes generating, by the one or more processors, virtual gingiva having a surface contour based on the shape of the gingiva representation. The method includes generating, by the one or more processors, a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

Inventors:
KUPCHISHIN ALEXANDER BORISOVICH (RU)
BOGATYREV MAXIM ALEXANDROVICH (RU)
KALININ ANTON OLEGOVICH (US)
NIKOLSKIY SERGEY (US)
GORBOVSKOY EVGENY SERGEEVICH (RU)
Application Number:
PCT/RU2022/000043
Publication Date:
August 24, 2023
Filing Date:
February 17, 2022
Assignee:
SMILEDIRECTCLUB LLC (US)
SDC U.S. SMILEPAY SPV (US)
International Classes:
A61C7/00; A61C7/08; A61C13/00; G16H20/40; G16H50/50
Domestic Patent References:
WO2017205484A1 (2017-11-30)
WO2000019928A1 (2000-04-13)
Foreign References:
US20090208897A1 (2009-08-20)
US20020177108A1 (2002-11-28)
US201862660141P (2018-04-19)
US201816130762A (2018-09-13)
US201762522847P (2017-06-21)
US201816047694A (2018-07-27)
US10315353B1 (2019-06-11)
Attorney, Agent or Firm:
SOJUZPATENT (RU)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising: receiving, by one or more processors, a first three-dimensional (3D) representation of a dentition, the 3D representation including a plurality of tooth representations and a gingiva representation; identifying, by the one or more processors, a shape of the gingiva representation; generating, by the one or more processors, virtual gingiva having a surface contour based on the shape of the gingiva representation; and generating, by the one or more processors, a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

2. The method of claim 1, wherein the plurality of tooth representations comprise a plurality of morphed tooth representations having respective crown and root representations, the plurality of morphed tooth representations based on initial scan data depicting teeth portions of the dentition.

3. The method of claim 1, wherein identifying the shape of the gingiva representation comprises: determining, by the one or more processors, a curvature of a growth line of the dentition; and determining, by the one or more processors, a cross-sectional geometry of the gingiva representation at a plurality of points along the growth line of the dentition.

4. The method of claim 3, wherein determining the cross-sectional geometry of the gingiva representation comprises: identifying, by the one or more processors, for a first tooth representation, a marginal line defining a boundary where the gingiva representation contacts the first tooth representation; generating, by the one or more processors, a plane perpendicular to the growth line of the dentition, wherein the plane intersects the first tooth representation and the gingiva representation; identifying, by the one or more processors, at least two intersection points between the marginal line and the plane; identifying, by the one or more processors, a plurality of control points of the gingiva representation that intersect the plane; and generating, by the one or more processors, an estimated cross-sectional geometry of the virtual gingiva based on the plurality of control points and the at least two intersection points.

5. The method of claim 4, wherein the plurality of control points comprise an ordered array of points, wherein a first control point is adjacent to a second control point and a third control point, wherein the second control point and the third control point are substantially equidistant from the first control point.

6. The method of claim 1, further comprising: determining, by the one or more processors, a curvature of the dentition; generating, by the one or more processors, a bottom of a base of the virtual gingiva by extending the curvature of the dentition in a buccal direction and a lingual direction from a tooth growth line, wherein the bottom of the base comprises a buccal side and a lingual side; connecting, by the one or more processors, the buccal side of the base with the lingual side of the base via a series of control points at a plurality of locations along the growth line of the dentition, the series of control points representing a shape of at least a portion of the gingiva of the 3D digital model.

7. The method of claim 1, wherein identifying the shape of the gingiva representation comprises: generating, by the one or more processors, a plurality of planes; disposing, by the one or more processors, the plurality of planes along a growth line of the dentition, wherein the plurality of planes are perpendicular to the growth line; and generating, by the one or more processors, a series of control points within each of the plurality of planes, a portion of the series of control points representative of the shape of the gingiva representation.

8. The method of claim 1, wherein the virtual gingiva includes a portion adjacent to a marginal line that substantially matches a corresponding portion of the gingiva representation.

9. The method of claim 1, further comprising: moving, by the one or more processors, a first tooth representation from a first position to a second position; and modifying, by the one or more processors, the surface contour of a portion of the virtual gingiva adjacent to a first portion of the virtual gingiva that is associated with the first tooth representation and a second portion of the virtual gingiva that is associated with a second tooth representation based on the first tooth representation being moved from the first position to the second position.

10. The method of claim 9, wherein modifying the surface contour of the portion of the virtual gingiva comprises applying a smoothing function to the surface contour of the virtual gingiva.

11. The method of claim 1, further comprising: manufacturing, by the one or more processors, one or more dental aligners based on the second 3D representation, the one or more dental aligners configured to reposition teeth of a patient from an initial position to a final position.

12. A system comprising: one or more processors configured to: receive a first three-dimensional (3D) representation of a dentition, the 3D representation including a plurality of tooth representations and a gingiva representation; identify a shape of the gingiva representation; generate virtual gingiva having a surface contour based on the shape of the gingiva representation; and generate a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

13. The system of claim 12, wherein the plurality of tooth representations comprise a plurality of morphed tooth representations having respective crown and root representations, the plurality of morphed tooth representations based on initial scan data depicting teeth portions of the dentition.

14. The system of claim 12, wherein to identify the shape of the gingiva representation, the one or more processors are configured to: determine a curvature of a growth line of the dentition; and determine a cross-sectional geometry of the gingiva representation at a plurality of points along the growth line of the dentition.

15. The system of claim 14, wherein to determine the cross-sectional geometry of the gingiva representation, the one or more processors are configured to: identify, for a first tooth representation, a marginal line defining a boundary where the gingiva representation contacts the first tooth representation; generate a plane perpendicular to the growth line of the dentition, wherein the plane intersects the first tooth representation and the gingiva representation; identify at least two intersection points between the marginal line and the plane; identify a plurality of control points of the gingiva representation that intersect the plane; and generate an estimated cross-sectional geometry of the virtual gingiva based on the plurality of control points and the at least two intersection points.

16. The system of claim 12, wherein the one or more processors are configured to: determine a curvature of the dentition; generate a bottom of a base of the virtual gingiva by extending the curvature of the dentition in a buccal direction and a lingual direction from a tooth growth line, wherein the bottom of the base comprises a buccal side and a lingual side; connect the buccal side of the base with the lingual side of the base via a series of control points at a plurality of locations along the growth line of the dentition, the series of control points representing a shape of at least a portion of the gingiva of the 3D digital model.

17. The system of claim 12, wherein to identify the shape of the gingiva representation, the one or more processors are configured to: generate a plurality of planes; dispose the plurality of planes along a growth line of the dentition, wherein the plurality of planes are perpendicular to the growth line; and generate a series of points within each of the plurality of planes, a portion of the series of points representative of the shape of the gingiva representation.

18. The system of claim 12, wherein the one or more processors are further configured to: move a first tooth representation from a first position to a second position; and modify the surface contour of a portion of the virtual gingiva adjacent to a first portion of the virtual gingiva that is associated with the first tooth representation and a second portion of the virtual gingiva that is associated with a second tooth representation responsive to the first tooth representation being moved from the first position to the second position.

19. The system of claim 18, wherein to modify the surface contour of the portion of the virtual gingiva, the one or more processors are configured to apply a smoothing function to the surface contour of the virtual gingiva.

20. A non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to: receive a first three-dimensional (3D) representation of a dentition, the 3D representation including a plurality of tooth representations and a gingiva representation; identify a shape of the gingiva representation; generate virtual gingiva having a surface contour based on the shape of the gingiva representation; and generate a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

Description:
SYSTEMS AND METHODS FOR GENERATING VIRTUAL GINGIVA

TECHNICAL FIELD

[0001] The present disclosure relates generally to the field of dental imaging and treatment, and more specifically, to systems and methods for generating virtual gingiva.

BACKGROUND

[0002] Dental impressions and associated physical or digital reproductions of a patient’s teeth can be used by dentists or orthodontists to diagnose or treat an oral condition, such as the misalignment of the patient’s teeth. Typically, to receive treatment for a misalignment, a patient visits a dentist who specializes in such treatment and either has an intraoral scan or dental impressions administered. Some patients may receive treatment via dental aligners that fit over the patient’s teeth and reposition them. Oftentimes, dental aligners do not fit comfortably because they do not accurately reflect, and therefore do not conform properly to, the patient’s gingiva.

SUMMARY

[0003] In one aspect, this disclosure is directed to a method. The method includes receiving, by one or more processors, a first three-dimensional (3D) representation of a dentition. The 3D representation includes a plurality of tooth representations and a gingiva representation. The method includes identifying, by the one or more processors, a shape of the gingiva representation. The method includes generating, by the one or more processors, virtual gingiva having a surface contour based on the shape of the gingiva representation. The method includes generating, by the one or more processors, a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

[0004] In another aspect, this disclosure is directed to a system. The system includes one or more processors. The one or more processors is configured to receive a first three-dimensional (3D) representation of a dentition. The 3D representation includes a plurality of tooth representations and a gingiva representation. The one or more processors is configured to identify a shape of the gingiva representation. The one or more processors is configured to generate virtual gingiva having a surface contour based on the shape of the gingiva representation. The one or more processors is configured to generate a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

[0005] In yet another aspect, this disclosure is directed to a non-transitory computer readable medium that stores instructions. The instructions, when executed by one or more processors, cause the one or more processors to receive a first three-dimensional (3D) representation of a dentition. The 3D representation includes a plurality of tooth representations and a gingiva representation. The instructions further cause the one or more processors to identify a shape of the gingiva representation. The instructions further cause the one or more processors to generate virtual gingiva having a surface contour based on the shape of the gingiva representation. The instructions further cause the one or more processors to generate a second 3D representation comprising the virtual gingiva and the plurality of tooth representations.

[0006] Various other embodiments and aspects of the disclosure will become apparent based on the drawings and detailed description of the following disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] FIG. 1 shows a system for orthodontic treatment, according to an illustrative embodiment.

[0008] FIG. 2 shows a process flow of generating a treatment plan, according to an illustrative embodiment.

[0009] FIG. 3 shows a top-down simplified view of a model of a dentition, according to an illustrative embodiment.

[0010] FIG. 4 shows a perspective view of a three-dimensional model of the dentition of FIG. 3, according to an illustrative embodiment.

[0011] FIG. 5 shows a trace of a gingiva-tooth interface on the model shown in FIG. 3, according to an illustrative embodiment.

[0012] FIG. 6 shows a selection of teeth in a tooth model generated from the model shown in FIG. 5, according to an illustrative embodiment.

[0013] FIG. 7 shows a segmented tooth model of an initial position of the dentition shown in FIG. 3, according to an illustrative embodiment.

[0014] FIG. 8 shows an exploded view of a model of a dentition, according to an illustrative embodiment.

[0015] FIG. 9 shows the model of the dentition shown in FIG. 8, according to an illustrative embodiment.

[0016] FIG. 10 shows the model of the dentition shown in FIG. 8, according to an illustrative embodiment.

[0017] FIG. 11 shows marginal lines of the model of the dentition shown in FIG. 8, according to an illustrative embodiment.

[0018] FIG. 12 shows a plurality of intersection points between a plane and the model of the dentition shown in FIG. 8, according to an illustrative embodiment.

[0019] FIG. 13 shows an enlarged portion of the model of the dentition shown in FIG. 8, according to an illustrative embodiment.

[0020] FIG. 14 shows a virtual gingiva, according to an illustrative embodiment.

[0021] FIG. 15 shows a perspective view of a 3D model of a dentition with the virtual gingiva shown in FIG. 14, according to an illustrative embodiment.

[0022] FIG. 16 shows a target final position of the dentition from the initial position of the dentition shown in FIG. 7, according to an illustrative embodiment.

[0023] FIG. 17 shows a movement of a tooth representation from the 3D model of the dentition shown in FIG. 15.

[0024] FIG. 18 shows a series of stages of the dentition from the initial position shown in FIG. 7 to the target final position shown in FIG. 16, according to an illustrative embodiment.

[0025] FIG. 19 shows a diagram of a method of generating a 3D model of a dentition, according to an illustrative embodiment.

[0026] FIG. 20 shows a diagram of a method of modifying a surface contour of a virtual gingiva, according to an illustrative embodiment.

DETAILED DESCRIPTION

[0027] The present disclosure is directed to systems and methods for generating a virtual representation of gingiva for purposes of planning orthodontic treatment. For example, a scan of a patient’s mouth, a scan of a dental impression taken of a patient’s teeth and gingiva, or an image of a patient’s mouth can provide a visual of the patient’s gingiva and teeth, but the scan or image may include various defects. For example, the scan or image may have incomplete information for some areas that are difficult to detect or may have produced excess features that are not actually there. For example, the scans or images of the gingiva may include tunnels, inverted triangles, gaps, uneven triangulation, etc.
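
The kinds of scan defects mentioned above (gaps, tunnels, holes) can often be located in a triangle mesh by checking edge usage: in a watertight mesh every edge is shared by exactly two triangles, so an edge that appears only once lies on the boundary of a hole. The following is a minimal illustrative sketch, not the method disclosed here; the function name `boundary_edges` and the index-triple triangle representation are assumptions for the example.

```python
from collections import Counter

def boundary_edges(triangles):
    """Return edges used by only one triangle. In a watertight mesh every
    edge is shared by two triangles, so lone edges indicate holes or gaps."""
    counts = Counter()
    for a, b, c in triangles:
        # Count each undirected edge of the triangle.
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return [e for e, n in counts.items() if n == 1]
```

A pre-processing step could flag a scanned gingiva mesh for repair whenever this list is non-empty.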

[0028] The present disclosure is directed to systems and methods of reducing or eliminating the defects in the gingiva portion of a 3D dentition model created from such a scan or image. The virtual representation of a gingiva generated by the systems and methods disclosed herein also allows the gingiva to be manipulated in conjunction with movements and adjustments made to the teeth. For example, as a tooth of a 3D model is moved from a first position to a second position, certain parts of the virtual gingiva will also transform. This provides a realistic digital representation of how the teeth and gingiva of a patient will work together during a treatment process and enables a manufacturer of dental aligners for repositioning the patient’s teeth to design and manufacture dental aligners that accurately conform to the patient’s teeth and gingiva.

[0029] The present disclosure is also directed to systems and methods for generating and deforming a virtual representation of gingiva. For example, the systems and methods disclosed herein are directed to generating realistic movements of the virtual gingiva based on movements of the teeth. For example, when a tooth is moved, portions of the virtual gingiva may move accordingly and may be modified to create a smooth surface contour that resembles how a patient’s gingiva actually responds to a moving tooth. The disclosed systems and methods use a linking mechanism that links control points within the virtual gingiva such that movement of a tooth may cause movement of a first control point of the virtual gingiva, which may cause movement of any number of other control points of the virtual gingiva, based on the movement of the tooth.
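
A control-point linking mechanism of the kind described above could be sketched as a breadth-first propagation over linked control points, with the tooth-driven displacement attenuated at each hop. This is only a hedged illustration of the idea, not the disclosed algorithm; the geometric falloff, the link dictionary, and all names are invented for the example.

```python
def propagate_tooth_move(control_points, links, moved_index, delta, falloff=0.5):
    """Apply displacement `delta` (an (x, y, z) tuple) to one control point
    and propagate an attenuated copy to linked neighbors, hop by hop."""
    displaced = {moved_index: delta}
    frontier = [moved_index]
    scale = falloff
    while frontier and scale > 1e-3:
        next_frontier = []
        for i in frontier:
            for j in links.get(i, []):
                if j not in displaced:
                    # Each hop away from the moved tooth gets a smaller shift.
                    displaced[j] = tuple(scale * d for d in delta)
                    next_frontier.append(j)
        frontier = next_frontier
        scale *= falloff
    return [
        tuple(c + displaced.get(i, (0.0, 0.0, 0.0))[k] for k, c in enumerate(pt))
        for i, pt in enumerate(control_points)
    ]
```

With three collinear control points linked in a chain, moving the first one upward shifts its neighbors by geometrically decreasing amounts, which is the qualitative behavior the paragraph describes.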

[0030] The systems and methods described herein have many benefits over existing computing systems. For example, by generating a virtual gingiva based on a portion of the points from a scan or image of the patient’s teeth or dental impressions, the systems and methods described herein may produce more accurate representations of the patient’s gingiva, thereby resulting in more comfortably fitting dental aligners. Additionally, computing time is reduced because the mathematical computations that would otherwise be performed to determine respective movements are eliminated by using a 3D model, or mesh, that more accurately represents the patient’s gingiva. For example, creating a direct relationship between vertices on the mesh and control points of the virtual gingiva allows the positions of the corresponding vertices to change during the movement of teeth without rebuilding the mesh. This makes deformation work almost instantaneously because the computationally intensive operations are removed. Various other technical benefits and advantages are described in greater detail below.
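
One simple way to realize the direct vertex-to-control-point relationship described above is to precompute, once, which control point each mesh vertex follows, and then translate vertices by their control point’s displacement at deformation time, with no mesh rebuild. The sketch below assumes nearest-point binding, which is an illustrative choice rather than the binding the disclosure necessarily uses.

```python
def bind_vertices(vertices, control_points):
    """Precompute, for each mesh vertex, the index of its nearest control
    point. Done once, so later deformation needs no mesh rebuild."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(control_points)), key=lambda i: d2(v, control_points[i]))
            for v in vertices]

def deform(vertices, binding, control_deltas):
    """Move each vertex by the displacement of its bound control point."""
    return [tuple(c + control_deltas[binding[i]][k] for k, c in enumerate(v))
            for i, v in enumerate(vertices)]
```

Because `deform` is a per-vertex table lookup plus an addition, the expensive work (the binding) happens once up front, matching the near-instantaneous deformation the paragraph claims.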

[0031] Referring to FIG. 1, a system 100 for orthodontic treatment is shown, according to an illustrative embodiment. As shown in FIG. 1, the system 100 includes a treatment planning computing system 102 communicably coupled to an intake computing system 104, a fabrication computing system 106, and one or more treatment planning terminals 108. In some embodiments, the treatment planning computing system 102 may be or include one or more servers which are communicably coupled to a plurality of computing devices. In some embodiments, the treatment planning computing system 102 may include a plurality of servers, which may be located at a common location (e.g., a server bank) or may be distributed across a plurality of locations. The treatment planning computing system 102 may be communicably coupled to the intake computing system 104, fabrication computing system 106, and/or treatment planning terminals 108 via a communications link or network 110 (which may be or include various network connections configured to communicate, transmit, receive, or otherwise exchange data between addresses corresponding to the computing systems 102, 104, 106). The network 110 may be a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), an Internet Area Network (IAN) or cloud-based network, etc. The network 110 may facilitate communication between the respective components of the system 100, as described in greater detail below.

[0032] The computing systems 102, 104, 106 include one or more processing circuits, which may include processor(s) 112 and memory 114. The processor(s) 112 may be a general purpose or specific purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a group of processing components, or other suitable processing components. The processor(s) 112 may be configured to execute computer code or instructions stored in memory 114 or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.) to perform one or more of the processes described herein. The memory 114 may include one or more data storage devices (e.g., memory units, memory devices, computer-readable storage media, etc.) configured to store data, computer code, executable instructions, or other forms of computer-readable information. The memory 114 may include random access memory (RAM), read-only memory (ROM), hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects and/or computer instructions. The memory 114 may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. The memory 114 may be communicably connected to the processor 112 via the processing circuit, and may include computer code for executing (e.g., by processor(s) 112) one or more of the processes described herein. [0033] The treatment planning computing system 102 is shown to include a communications interface 116. The communications interface 116 can be or include components configured to transmit and/or receive data from one or more remote sources (such as the computing devices, components, systems, and/or terminals described herein). 
In some embodiments, each of the servers, systems, terminals, and/or computing devices may include a respective communications interface 116 which permit exchange of data between the respective components of the system 100. As such, each of the respective communications interfaces 116 may permit or otherwise enable data to be exchanged between the respective computing systems 102, 104, 106. In some implementations, communications device(s) may access the network 110 to exchange data with various other communications device(s) via cellular access, a modem, broadband, Wi-Fi, satellite access, etc. via the communications interfaces 116.

[0034] Referring now to FIG. 1 and FIG. 2, the treatment planning computing system 102 is shown to include one or more treatment planning engines 118. Specifically, FIG. 2 shows a treatment planning process flow 200 which may be implemented by the system 100 shown in FIG. 1, according to an illustrative embodiment. The treatment planning engine(s) 118 may be any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to receive inputs for and/or automatically generate a treatment plan from an initial three-dimensional (3D) model of a dentition. In some embodiments, the treatment planning engine(s) 118 may be instructions stored in memory 114 which are executable by the processor(s) 112. In some embodiments, the treatment planning engine(s) 118 may be stored at the treatment planning computing system 102 and accessible via a respective treatment planning terminal 108. As shown in FIG. 2, the treatment planning computing system 102 may include a scan pre-processing engine 202, a gingival line processing engine 204, a segmentation processing engine 206, a geometry processing engine 208, a final position processing engine 210, and a staging processing engine 212. While these engines 202-212 are shown in FIG. 2, it is noted that the system 100 may include any number of treatment planning engines 118, including additional engines which may be incorporated into, supplement, or replace one or more of the engines shown in FIG. 2.

[0035] Referring to FIG. 2 - FIG. 4, the intake computing system 104 may be configured to generate a 3D model of a dentition. Specifically, FIG. 3 and FIG. 4 show a simplified top-down view and a side perspective view of a 3D model of a dentition, respectively, according to illustrative embodiments. In some embodiments, the intake computing system 104 may be communicably coupled to or otherwise include one or more scanning devices 214.
The intake computing system 104 may be communicably coupled to the scanning devices 214 via a wired or wireless connection. The scanning devices 214 may be or include any device, component, or hardware designed or implemented to generate, capture, or otherwise produce a 3D model 300 of an object, such as a dentition or dental arch. In some embodiments, the scanning devices 214 may include intraoral scanners configured to generate a 3D model of a dentition of a patient as the intraoral scanner passes over the dentition of the patient. For example, the intraoral scanner may be used during an intraoral scanning appointment, such as the intraoral scanning appointments described in U.S. Provisional Patent Appl. No. 62/660,141, titled “Arrangements for Intraoral Scanning,” filed April 19, 2018, and U.S. Patent Appl. No. 16/130,762, titled “Arrangements for Intraoral Scanning,” filed September 13, 2018, the contents of each of which are incorporated herein by reference in their entirety. In some embodiments, the scanning devices 214 may include 3D scanners configured to scan a dental impression. The dental impression may be captured or administered by a patient using a dental impression kit similar to the dental impression kits described in U.S. Provisional Patent Appl. No. 62/522,847, titled “Dental Impression Kit and Methods Therefor,” filed June 21, 2017, and U.S. Patent Appl. No. 16/047,694, titled “Dental Impression Kit and Methods Therefor,” filed July 27, 2018, the contents of each of which are incorporated herein by reference in their entirety. In these and other embodiments, the scanning devices 214 may generally be configured to generate a 3D digital model of a dentition of a patient. The scanning device(s) 214 may be configured to generate a 3D digital model of the upper (i.e., maxillary) dentition and/or the lower (i.e., mandibular) dentition of the patient. The 3D digital model may include a digital representation of the patient’s teeth 302 and gingiva 304. 
The scanning device(s) 214 may be configured to generate 3D digital models of the patient’s dentition prior to treatment (i.e., with their teeth in an initial position). In some embodiments, the scanning device(s) 214 may be configured to generate the 3D digital models of the patient’s dentition in real-time (e.g., as the dentition/impression is scanned). In some embodiments, the scanning device(s) 214 may be configured to export, transmit, send, or otherwise provide data obtained during the scan to an external source which generates the 3D digital model, and transmits the 3D digital model to the intake computing system 104. In some embodiments, the intake computing system 104 is configured to generate the 3D digital model from one or more 2D images of the patient’s dentition. For example, the patient themselves or someone else can capture one or more images of the patient’s dentition using a digital camera, such as a camera system on a mobile phone or tablet, and then transmit or upload the one or more images to the intake computing system 104 for processing into the 3D digital model. The images captured by the patient, or someone assisting the patient, can be 2D photographs, videos, or a 3D photograph.
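
The treatment planning engines described earlier (scan pre-processing through staging) process the scan data in sequence, each engine consuming the previous engine’s output. A minimal sketch of such chaining, with hypothetical engine callables standing in for engines 202-212 (the function name and signature are invented for illustration):

```python
def run_engines(scan_data, engines):
    """Pass a dentition model through each treatment planning engine in
    order; each engine takes a model and returns a processed model."""
    model = scan_data
    for engine in engines:
        model = engine(model)
    return model
```

In practice each engine would be a class or service rather than a bare function, but the composition pattern is the same.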

[0036] The intake computing system 104 may be configured to transmit, send, or otherwise provide the 3D digital model to the treatment planning computing system 102. In some embodiments, the intake computing system 104 may be configured to provide the 3D digital model of the patient’s dentition to the treatment planning computing system 102 by uploading the 3D digital model to a patient file for the patient. The intake computing system 104 may be configured to provide the 3D digital model of the patient’s upper and/or lower dentition at their initial (i.e., pre-treatment) position. The 3D digital model of the patient’s upper and/or lower dentition may together form initial scan data which represents an initial position of the patient’s teeth prior to treatment.

[0037] The treatment planning computing system 102 may be configured to receive the initial scan data from the intake computing system 104 (e.g., from the scanning device(s) 214 directly, indirectly via an external source following the scanning device(s) 214 providing data captured during the scan to the external source, etc.). As described in greater detail below, the treatment planning computing system 102 may include one or more treatment planning engines 118 configured or designed to generate a treatment plan based on or using the initial scan data.

[0038] Referring to FIG. 2, the treatment planning computing system 102 is shown to include a scan pre-processing engine 202. The scan pre-processing engine 202 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to modify, correct, adjust, or otherwise process initial scan data received from the intake computing system 104 prior to generating a treatment plan. The scan pre-processing engine 202 may be configured to process the initial scan data by applying one or more surface smoothing algorithms to the 3D digital models. The scan pre-processing engine 202 may be configured to fill one or more holes or gaps in the 3D digital models. In some embodiments, the scan pre-processing engine 202 may be configured to receive inputs from a treatment planning terminal 108 to process the initial scan data. For example, the scan pre-processing engine 202 may be configured to receive inputs to smooth, refine, adjust, or otherwise process the initial scan data.
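
The disclosure does not specify which surface smoothing algorithms the scan pre-processing engine applies; a common choice for dental meshes is simple Laplacian smoothing, where each vertex moves a fraction of the way toward the centroid of its neighbors. The sketch below is illustrative only; the neighbor-list representation and the `alpha` step size are assumptions.

```python
def laplacian_smooth(vertices, neighbors, iterations=1, alpha=0.5):
    """Simple Laplacian smoothing: each vertex moves a fraction `alpha`
    toward the centroid of its neighbors; fixed vertices have no neighbors."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v[:])  # no neighbors: leave the vertex in place
                continue
            centroid = [sum(verts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([v[k] + alpha * (centroid[k] - v[k]) for k in range(3)])
        verts = new
    return [tuple(v) for v in verts]
```

Repeated iterations flatten noise such as uneven triangulation while anchored boundary vertices (empty neighbor lists here) stay put.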

[0039] The inputs may include a selection of a smoothing processing tool presented on a user interface of the treatment planning terminal 108 showing the 3D digital model(s). As a user of the treatment planning terminal 108 selects various portions of the 3D digital model(s) using the smoothing processing tool, the scan pre-processing engine 202 may correspondingly smooth the 3D digital model at (and/or around) the selected portion. Similarly, the scan pre-processing engine 202 may be configured to receive a selection of a gap filling processing tool presented on the user interface of the treatment planning terminal 108 to fill gaps in the 3D digital model(s).

[0040] In some embodiments, the scan pre-processing engine 202 may be configured to receive inputs for removing a portion of the gingiva represented in the 3D digital model of the dentition. For example, the scan pre-processing engine 202 may be configured to receive a selection (on a user interface of the treatment planning terminal 108) of a gingiva trimming tool which selectively removes gingiva from the 3D digital model of the dentition. A user of the treatment planning terminal 108 may select a portion of the gingiva to remove using the gingiva trimming tool. The portion may be a lower portion of the gingiva represented in the digital model opposite the teeth. For example, where the 3D digital model shows a mandibular dentition, the portion of the gingiva removed from the 3D digital model may be the lower portion of the gingiva closest to the lower jaw. Similarly, where the 3D digital model shows a maxillary dentition, the portion of the gingiva removed from the 3D digital model may be the upper portion of the gingiva closest to the upper jaw.

[0041] Referring now to FIG. 2 and FIG. 5, the treatment planning computing system 102 is shown to include a gingival line processing engine 204. Specifically, FIG. 5 shows a trace of a gingiva-tooth interface on the model 300 shown in FIG. 3 and FIG. 4. The gingival line processing engine 204 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise define a gingival line of the 3D digital models. The gingival line may be or include the interface between the gingiva and teeth represented in the 3D digital models. In some embodiments, the gingival line processing engine 204 may be configured to receive inputs from the treatment planning terminal 108 for defining the gingival line. The treatment planning terminal 108 may show a gingival line defining tool on a user interface which includes the 3D digital models.

[0042] The gingival line defining tool may be used for defining or otherwise determining the gingival line for the 3D digital models. As one example, the gingival line defining tool may be used to trace a rough gingival line 500. For example, a user of the treatment planning terminal 108 may select the gingival line defining tool on the user interface, and drag the gingival line defining tool along an approximate gingival line of the 3D digital model. As another example, the gingival line defining tool may be used to select (e.g., on the user interface shown on the treatment planning terminal 108) lowest points 502 at the teeth-gingiva interface for each of the teeth in the 3D digital model.

[0043] The gingival line processing engine 204 may be configured to receive the inputs provided by the user via the gingival line defining tool on the user interface of the treatment planning terminal 108 for generating or otherwise defining the gingival line. In some embodiments, the gingival line processing engine 204 may be configured to use the inputs to identify a surface transition on or near the selected inputs. For example, where the input selects a lowest point 502 (or a portion of the gingival line 500 near the lowest point 502) on a respective tooth, the gingival line processing engine 204 may identify a surface transition or seam at or near the lowest point 502 which is at the gingival margin. The gingival line processing engine 204 may define the transition or seam as the gingival line. The gingival line processing engine 204 may define the gingival line for each of the teeth included in the 3D digital model 300. The gingival line processing engine 204 may be configured to generate a tooth model using the gingival line of the teeth in the 3D digital model 300. The gingival line processing engine 204 may be configured to generate the tooth model by separating the 3D digital model along the gingival line. The tooth model may be the portion of the 3D digital model which is separated along the gingival line and includes digital representations of the patient’s teeth.
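As an illustrative sketch only, the identification of a surface transition or seam near a selected point can be approximated in one dimension by locating the sharpest bend in a height profile near the user's selection. The function name, the second-difference criterion, and the window parameter below are assumptions for illustration; the disclosure does not limit the transition detection to any particular method.

```python
def find_surface_transition(profile, near_index, window=3):
    """Locate the sharpest bend (largest second difference) in a 1D
    height profile, searching within `window` samples of a
    user-selected index (e.g., a selected lowest point)."""
    lo = max(1, near_index - window)
    hi = min(len(profile) - 2, near_index + window)
    best_i, best_bend = lo, -1.0
    for i in range(lo, hi + 1):
        # discrete second difference approximates local curvature
        bend = abs(profile[i - 1] - 2 * profile[i] + profile[i + 1])
        if bend > best_bend:
            best_i, best_bend = i, bend
    return best_i
```

In the full 3D setting, the same idea would be applied along surface profiles crossing the gingival margin, with the detected bend points joined into the gingival line.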

[0044] Referring now to FIG. 2 and FIG. 6, the treatment planning computing system 102 is shown to include a segmentation processing engine 206. Specifically, FIG. 6 shows a view of the tooth model 600 generated by the gingival line processing engine 204. The segmentation processing engine 206 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise segment individual teeth from the tooth model. In some embodiments, the segmentation processing engine 206 may be configured to receive inputs (e.g., via a user interface shown on the treatment planning terminal 108) which select the teeth (e.g., points 602 on the teeth) in the tooth model 600. For example, the user interface may include a segmentation tool which, when selected, allows a user to select points 602 on each of the individual teeth in the tooth model 600. In some embodiments, the selection of each tooth may also assign a label to the tooth. The label may include tooth numbers (e.g., according to FDI World Dental Federation notation, the universal numbering system, Palmer notation, etc.) for each of the teeth in the tooth model 600. As shown in FIG. 6, the user may select individual teeth in the tooth model 600 to assign labels to the teeth.

[0045] Referring now to FIG. 7, depicted is a segmented tooth model 700 generated from the tooth model 600 shown in FIG. 6. The segmentation processing engine 206 may be configured to receive the selection of the teeth from the user via the user interface of the treatment planning terminal 108. The segmentation processing engine 206 may be configured to separate each of the teeth selected by the user on the user interface. For example, the segmentation processing engine 206 may be configured to identify or determine a gap between two adjacent points 602. The segmentation processing engine 206 may be configured to use the gap as a boundary defining or separating two teeth. The segmentation processing engine 206 may be configured to define boundaries for each of the teeth in the tooth model 600. The segmentation processing engine 206 may be configured to generate the segmented tooth model 700 including segmented teeth 702 using the defined boundaries generated from the selection of the points 602 on the teeth in the tooth model 600.
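As a simplified illustration of this segmentation, assigning each surface point to the nearest user-selected seed point places the implied tooth boundary midway between two adjacent seeds (i.e., in the gap between them). The following Python sketch is hypothetical; real segmentation would operate on mesh vertices and geodesic distances rather than 2D points.

```python
def segment_by_seeds(points, seeds):
    """Assign each point the index of its nearest seed point
    (one seed per tooth, e.g., the selected points 602). The implied
    boundary between two teeth falls midway between adjacent seeds."""
    labels = []
    for p in points:
        d2 = [(p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 for s in seeds]
        labels.append(d2.index(min(d2)))
    return labels
```

Points sharing a label form one segmented tooth; the label can also carry the tooth number assigned during selection.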

[0046] The treatment planning computing system 102 is shown to include a geometry processing engine 208. The geometry processing engine 208 may be or may include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise generate whole tooth models for each of the teeth in the 3D digital model. Once the segmentation processing engine 206 generates the segmented tooth model 700, the geometry processing engine 208 may be configured to use the segmented teeth to generate a whole tooth model for each of the segmented teeth. Since the teeth have been separated along the gingival line by the gingival line processing engine 204 (as described above with reference to FIG. 6), the segmented teeth may only include crowns (e.g., the segmented teeth may not include any roots). The geometry processing engine 208 may be configured to generate a whole tooth model including both crown and roots using the segmented teeth. In some embodiments, the geometry processing engine 208 may be configured to generate the whole tooth models using the labels assigned to each of the teeth in the segmented tooth model 700. For example, the geometry processing engine 208 may be configured to access a tooth library 216. The tooth library 216 may include a library or database having a plurality of whole tooth models. The plurality of whole tooth models may include tooth models for each of the types of teeth in a dentition. The plurality of whole tooth models may be labeled or grouped according to tooth numbers.

[0047] The geometry processing engine 208 may be configured to generate the whole tooth model for a segmented tooth by performing a look-up function in the tooth library 216 using the label assigned to the segmented tooth to identify a corresponding whole tooth model. The geometry processing engine 208 may be configured to morph the whole tooth model identified in the tooth library 216 to correspond to the shape (e.g., surface contours) of the segmented tooth. In some embodiments, the geometry processing engine 208 may be configured to generate the whole tooth model by stitching the morphed whole tooth model from the tooth library 216 to the segmented tooth, such that the whole tooth model includes a portion (e.g., a root portion) from the tooth library 216 and a portion (e.g., a crown portion) from the segmented tooth. In some embodiments, the geometry processing engine 208 may be configured to generate the whole tooth model by replacing the segmented tooth with the morphed tooth model from the tooth library. In these and other embodiments, the geometry processing engine 208 may be configured to generate whole tooth models, including both crown and roots, for each of the teeth in a 3D digital model. The whole tooth models of each of the teeth in the 3D digital model may depict, show, or otherwise represent an initial position of the patient’s dentition.
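A heavily simplified sketch of the look-up and morphing step follows. Here the library is modeled as a dictionary keyed by tooth number, and "morphing" is reduced to a uniform scale matching the crown width; the actual morphing of surface contours would be far richer. All names and the scaling heuristic are illustrative assumptions.

```python
def fit_library_tooth(library, tooth_number, crown_points):
    """Look up a template whole-tooth model by tooth number, then
    uniformly scale it so its width matches the segmented crown
    (a stand-in for the morphing step)."""
    template = library[tooth_number]  # list of (x, y, z) points

    def width(pts):
        xs = [p[0] for p in pts]
        return max(xs) - min(xs)

    s = width(crown_points) / width(template)
    return [(p[0] * s, p[1] * s, p[2] * s) for p in template]
```

The scaled template supplies the root geometry that the crown-only segmented tooth lacks; stitching or replacement would then follow as described above.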

[0048] Referring now to FIG. 2 and FIGS. 8-15, the geometry processing engine 208 may be or may include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise generate a virtual gingiva from the 3D digital model. For example, once the geometry processing engine 208 generates the whole tooth models, the geometry processing engine 208 may be configured to generate the virtual gingiva based, at least partially, on the whole tooth models. For example, the geometry processing engine 208 may be configured to identify a shape of the gingiva 304 of the 3D digital model. Based, at least partially, on the shape of the gingiva 304, the geometry processing engine 208 may be configured to generate a virtual gingiva having a surface contour, a part of which may be similar to that of the gingiva 304. With the virtual gingiva, the geometry processing engine 208 may be configured to generate a new 3D digital model including the virtual gingiva and the plurality of whole tooth models.

[0049] Referring now to FIG. 8, a 3D representation 800 of a dentition is shown, according to an exemplary embodiment. The 3D representation 800 may include a plurality of tooth representations 802 and a gingiva representation 804. The tooth representations 802 may include whole teeth models that include a crown 806 and a root 808. The whole teeth models may be a plurality of morphed tooth representations based on initial scan data depicting teeth portions of the dentition. The geometry processing engine 208 may be configured to identify a border, shown as marginal line 810, that defines a boundary where the gingiva representation 804 meets the crown 806 of a tooth representation 802. The geometry processing engine 208 may be configured to identify a marginal line 810 for each of the tooth representations 802. For example, the geometry processing engine 208 may be configured to identify a point at which there is a change in slope. The point at which there is a change in slope may indicate a location at which a surface of the gingiva representation 804 meets a surface of the crown 806 of the tooth representation 802. In some embodiments, the geometry processing engine 208 may be configured to identify a plurality of points at which there is a change in slope. The plurality of points may define a perimeter around the crown 806 of the tooth representation 802. The geometry processing engine 208 may be configured to identify the marginal line 810 based on the plurality of points. With the marginal lines 810 identified, the geometry processing engine 208 may be configured to apply a smoothing function to the marginal lines 810. For example, the geometry processing engine 208 can smooth the marginal lines 810 such that they become continuous lines that ignore any potential artifacts or irregularities of the 3D representation.

[0050] Referring now to FIG. 9, a top view of the 3D representation 800 is shown, according to an exemplary embodiment. The geometry processing engine 208 may be configured to identify a growth line 902 of the dentition of the 3D representation 800. The growth line 902 may be a line configured to pass through each of the tooth representations 802 to provide a general shape of the dentition. For example, the growth line 902 may be a smooth curved line that passes through the tooth representations 802 at a point proximate a center of the tooth representations 802. Depending on the alignment of the tooth representations within a given dental arch, the growth line 902 may not pass through a point proximate the center of some of the tooth representations. The geometry processing engine 208 may be configured to determine or otherwise generate the growth line 902 by computing a 2nd, 3rd, 4th, or Nth order curve using the center of the tooth representations 802 as points for computing the curve. The geometry processing engine 208 may be configured to generate the growth line 902 as a best-fit curve to the center of the tooth representations 802.
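The best-fit curve described above can be sketched as a polynomial least-squares fit through the tooth centers in the occlusal (top-down) view. This Python sketch uses NumPy's polynomial fitting as one plausible implementation; the function name, the choice of a y = f(x) parameterization, and the default order are assumptions.

```python
import numpy as np

def growth_line(tooth_centers, order=4, samples=100):
    """Fit an Nth-order polynomial y = f(x) through tooth centers
    (viewed from above) and sample it as a smooth curve approximating
    the arch form (the growth line)."""
    xs = np.array([c[0] for c in tooth_centers], dtype=float)
    ys = np.array([c[1] for c in tooth_centers], dtype=float)
    coeffs = np.polyfit(xs, ys, order)          # least-squares fit
    sx = np.linspace(xs.min(), xs.max(), samples)
    return np.stack([sx, np.polyval(coeffs, sx)], axis=1)
```

For a real arch, a single-valued y = f(x) parameterization works because a dental arch viewed from above is roughly parabolic; a parametric or spline fit would handle more extreme curvatures.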

[0051] The geometry processing engine 208 may be configured to determine a curvature of the growth line 902. For example, the geometry processing engine 208 may be configured to identify an ellipse that closely resembles the curvature of the growth line 902. Based on the curvature of the growth line 902, the geometry processing engine 208 may be configured to generate a bottom of a base 904 for the virtual gingiva. For example, the growth line 902 may extend in a buccal direction to form a buccal surface 906 of the base 904. The growth line 902 may also extend in a lingual direction to form a lingual surface 908 of the base 904. Both the buccal surface 906 and the lingual surface 908 may extend beyond the most extreme (e.g., rear-most) tooth representations 802. For example, a portion of the base 904 may extend beyond the rear-most tooth representations 802. The portion of the buccal surface 906 and lingual surface 908 that extends beyond the most extreme tooth representations 802 may follow the same curvature as the rest of the buccal surface 906 and lingual surface 908. In some embodiments, the portion that extends beyond the most extreme tooth representations 802 may extend at a different curvature or angle. For example, as shown in FIG. 9, at a point proximate a center point of the most extreme tooth representations 802, the direction of the buccal surface 906 and the lingual surface 908 changes from the original curvature.

[0052] Referring now to FIG. 10, the tooth representations 802 of the 3D representation 800 are shown, according to an exemplary embodiment. The geometry processing engine 208 may be configured to generate a plurality of planes 1002. The plurality of planes 1002 may be arranged to generally follow the curvature of the growth line 902. The plurality of planes 1002 may be oriented perpendicular to the growth line 902. Each of the plurality of planes 1002 disposed along the growth line 902 may be spaced substantially equidistant from each other. In some embodiments, the plurality of planes 1002 may be spaced at different intervals along the growth line 902. In some embodiments, all of the plurality of planes 1002 may intersect a tooth representation 802 of the 3D representation 800. In some embodiments, a subset of the plurality of planes 1002 may not intersect a tooth representation 802. For example, there may be a space between a first tooth representation 802 and a second tooth representation 802 such that a plane 1002 is disposed between the first and second tooth representations 802 and does not intersect with either tooth representation 802.
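The placement of equidistant planes perpendicular to the growth line can be illustrated in 2D as follows: sample the curve at equal arc-length intervals and take the local tangent as each plane's normal (so the plane is perpendicular to the curve). This is an illustrative sketch; the names and the linear interpolation between samples are assumptions.

```python
import math

def planes_along_curve(curve, n_planes):
    """Place n_planes at equal arc-length intervals along a 2D
    polyline. Each plane is returned as (point, unit_normal), with
    the normal equal to the local tangent of the curve."""
    # cumulative arc length along the polyline
    cum = [0.0]
    for a, b in zip(curve, curve[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    planes = []
    for k in range(n_planes):
        target = total * k / (n_planes - 1)
        # segment containing this arc length
        i = max(j for j in range(len(cum)) if cum[j] <= target)
        i = min(i, len(curve) - 2)
        t = 0.0 if cum[i + 1] == cum[i] else (target - cum[i]) / (cum[i + 1] - cum[i])
        px = curve[i][0] + t * (curve[i + 1][0] - curve[i][0])
        py = curve[i][1] + t * (curve[i + 1][1] - curve[i][1])
        tx = curve[i + 1][0] - curve[i][0]
        ty = curve[i + 1][1] - curve[i][1]
        norm = math.hypot(tx, ty)
        planes.append(((px, py), (tx / norm, ty / norm)))
    return planes
```

Uneven spacing (e.g., denser planes near interdental gaps) would simply replace the uniform `target` schedule.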

[0053] Referring now to FIG. 11, marginal lines 810 from the 3D representation 800 of FIG. 8 and the planes 1002 from FIG. 10 are shown, according to an exemplary embodiment. In some embodiments, a plane 1002 may intersect a marginal line 810. For example, a plane 1002 may pass through a marginal line 810 such that the plane 1002 intersects the marginal line 810 at a first intersection point 1102 and at a second intersection point 1102. The geometry processing engine 208 may be configured to identify the first and second intersection points 1102. The geometry processing engine 208 may be configured to apply the first and second intersection points 1102 to determine a cross-sectional geometry of the gingiva representation 804.

[0054] Referring now to FIG. 12, a plurality of intersection points between a plane 1002 and the 3D representation 800 are shown, according to an exemplary embodiment. The geometry processing engine 208 may be configured to identify a plurality of points disposed within a single plane 1002. For example, the geometry processing engine 208 may be configured to identify a first marginal line intersection point 1102 and a second marginal line intersection point 1102. The first and second marginal line intersection points 1102 may be where the plane 1002 intersects a marginal line 810. The geometry processing engine 208 may also be configured to identify a first base intersection point 1202 and a second base intersection point 1202. The first base intersection point 1202 may be where the plane 1002 intersects the bottom of the buccal surface 906 and the second base intersection point 1202 may be where the plane 1002 intersects the bottom of the lingual surface 908. With the identified points, the geometry processing engine 208 may be configured to generate a series of control points, shown as spline 1204, that intersects each of the points disposed within the plane 1002. The spline 1204 may connect the buccal side 906 of the bottom of the base 904 with the lingual side 908 of the bottom of the base. The spline 1204 may be a series of control points within the plane 1002. For example, the spline 1204 may be a line that connects each of the intersection points disposed within the plane 1002. For example, the spline 1204 may begin at the first base intersection point 1202, extend to the first marginal line intersection point 1102 (e.g., the marginal intersection point 1102 on the same side of the dentition as the first base intersection point 1202), extend to the second marginal line intersection point 1102, then extend and end at the second base intersection point 1202.
The areas between the marginal intersection points 1102 and the base intersection points 1202 may represent a portion of an exterior surface of the gingiva representation 804. For example, a portion of the series of points represents a shape of the gingiva representation 804. The area between the first marginal line intersection point 1102 and the second marginal line intersection point 1102 may represent an area under a tooth representation 802 (e.g., where a crown 806 of the tooth representation 802 generally resides). In some embodiments, instead of the spline 1204 following a smooth curve, the spline 1204 may connect the first marginal line intersection point 1102 to the second marginal line intersection point 1102 via the shortest geodesic path (e.g., a straight line).
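A spline that passes exactly through an ordered set of in-plane control points (base point, marginal points, base point) can be sketched with a Catmull-Rom interpolating spline, one common choice for curves that must hit every control point. The function names and the choice of Catmull-Rom are illustrative assumptions; the disclosure does not limit the spline to a particular basis.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment at t in [0, 1]; the curve
    passes through p1 (t = 0) and p2 (t = 1)."""
    def blend(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t * t * t)
    return tuple(blend(p0[i], p1[i], p2[i], p3[i]) for i in range(len(p1)))

def spline_through(points, samples_per_seg=8):
    """Interpolating spline through an ordered list of control points
    (e.g., base -> marginal -> marginal -> base within one plane)."""
    pts = [points[0]] + list(points) + [points[-1]]  # clamp the ends
    out = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_seg):
            out.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2],
                                   s / samples_per_seg))
    out.append(points[-1])
    return out
```

Because the curve interpolates (rather than merely approximates) its control points, the cross-section is guaranteed to pass through the marginal line and base intersection points exactly.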

[0055] Referring now to FIG. 13, an enlarged portion of the model of the dentition shown in FIG. 8 is shown, according to an illustrative embodiment. The geometry processing engine 208 may be configured to identify a plurality of control points, shown as gingival intersection points 1302. The gingival intersection points 1302 may define where a plane 1002 intersects an external surface (e.g., a perimeter) of the gingiva representation 804. For example, a plurality of gingival intersection points 1302 may follow the perimeter of the external surface of the gingiva representation 804. A first set of gingival intersection points 1302 may extend along a buccal side 1304 of the gingiva representation 804. The first set may follow the perimeter of the external surface of the buccal side 1304 of the gingiva representation for a predetermined distance along the external surface. For example, the first set of gingival intersection points 1302 may follow a 3mm portion of the gingiva representation 804, starting from the marginal intersection point 1102 on the buccal side 1304 of the gingiva representation 804 and extending 3mm along the buccal side 1304. A second set of gingival intersection points 1302 may follow a similarly sized portion of the gingiva representation 804 on the opposite side (e.g., lingual side 1306) of the gingiva representation 804. In some embodiments, the first set of gingival intersection points 1302 may follow a different sized portion than the one followed by the second set of gingival intersection points 1302. The plurality of gingival intersection points 1302 may be disposed substantially equidistant from each other. For example, a first gingival intersection point 1302 may be adjacent to a second gingival intersection point 1302 and a third gingival intersection point 1302. The second and third gingival intersection points 1302 may each be substantially equidistant from the first gingival intersection point 1302. In some embodiments, the spacing between the gingival intersection points 1302 may be random, or may be based, at least partially, on the shape of the exterior surface of the gingiva representation 804. For example, when the exterior surface is relatively smooth, fewer gingival intersection points 1302 may be needed. Using control points (e.g., the gingival intersection points 1302) to determine the geometry of the surface of the gingiva representation 804 may allow the geometry processing engine 208 to generate a virtual gingiva without topological or geometric defects (e.g., tunnels, inverted triangles, gaps, uneven triangulation, etc.) present in the original gingiva representation 804.
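The shape-dependent spacing just described, fewer points where the surface is smooth and more where it bends, is the idea behind classic polyline simplification. The sketch below uses a Ramer-Douglas-Peucker style recursion as one plausible realization; the function name and tolerance parameter are illustrative assumptions.

```python
import math

def simplify_profile(points, tol):
    """Ramer-Douglas-Peucker style thinning of a 2D profile: keep only
    control points that deviate from a straight chord by more than
    tol, so smooth stretches of gingiva need fewer points."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # perpendicular distance of the interior point from the chord
        d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / chord
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tol:
        return [points[0], points[-1]]
    left = simplify_profile(points[:best_i + 1], tol)
    right = simplify_profile(points[best_i:], tol)
    return left[:-1] + right
```

A nearly flat profile collapses to its endpoints, while sharp features (such as the gingival margin near a crown) retain their control points.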

[0056] The geometry processing engine 208 may be configured to adjust the spline 1204 or generate a new spline 1204 to include the gingival intersection points 1302. For example, the spline 1204 can connect the points in the following order: first base intersection point 1202, first set of gingival intersection points 1302, first marginal line intersection point 1102, second marginal line intersection point 1102, second set of gingival intersection points 1302, second base intersection point 1202. The spline 1204 may indicate a cross-sectional geometry of the gingiva representation 804 of the 3D representation 800.

[0057] In some embodiments, the plane 1002 may not intersect a marginal line 810. In such embodiments, the geometry processing engine 208 may generate a spline 1204 comprising no marginal line intersection points 1102. For example, the spline 1204 may include the first base intersection point 1202, the second base intersection point 1202, and a plurality of gingival intersection points 1302. In such an embodiment, the plurality of gingival intersection points 1302 may extend across a top surface of the gingival representation 804 in addition to the predetermined distance along the external surface.

[0058] Referring now to FIG. 14, a virtual gingiva 1400 is shown, according to an exemplary embodiment. The geometry processing engine 208 may be configured to identify the points described above (e.g., marginal line intersection points 1102, base intersection points 1202, gingival intersection points 1302) at a plurality of locations along the growth line 902 of the dentition. For example, the geometry processing engine 208 may be configured to generate a plurality of planes 1002 that extend along the growth line 902. In some embodiments, the plurality of planes 1002 may extend beyond the ends of the growth line 902. The geometry processing engine 208 may be configured to identify the points disposed in each of the plurality of planes 1002 and generate a plurality of splines 1204 accordingly. For example, the plurality of splines 1204 may indicate a cross-sectional area of the gingiva representation 804 at locations where the tooth representations 802 are located, at locations between the tooth representations 802, and at locations beyond the most extreme tooth representations 802. The geometry processing engine 208 may be configured to apply the plurality of splines 1204 to generate the virtual gingiva 1400.

[0059] The virtual gingiva 1400 may include a gingiva portion 1402, at least one tooth indentation 1404, and at least one gingiva extension 1406. The gingiva extension 1406 may be a portion of the virtual gingiva 1400 configured to be a back end of the base 904 of the virtual gingiva 1400. For example, the geometry processing engine 208 may extend the bottom of the base 904 of the buccal surface 906 in a quarter circle to connect with the bottom of the base 904 of the lingual surface 908 that may also extend in a quarter circle. Planes 1002 may be generated along each of the bottom of the base 904 of the buccal surface 906 and the lingual surface 908. A spline 1204 may be generated within each plane 1002 with a given curvature parameter and/or plane angle to generate a segment of a circle to create the gingiva extension 1406. In some embodiments, the gingiva extension 1406 may be generated by a single half circle extension or other methods of extending the base 904 of the virtual gingiva 1400. The gingiva extension 1406 may begin beyond the end of the most extreme tooth representation 802. In some embodiments, the gingiva extension 1406 may begin before the end of the most extreme tooth representation 802. For example, the gingiva extension 1406 may begin proximate a midpoint of the most extreme tooth representation 802.

[0060] The tooth indentation 1404 may be a representation of an area under a crown 806 of a tooth representation 802. For example, the tooth indentation 1404 may indicate where a whole tooth portion may be positioned in a new 3D representation of the dentition. The tooth indentation 1404 may be a void, hole, or space in which a root portion of a tooth representation 802 resides when the tooth representation 802 is located in the virtual gingiva 1400.

[0061] The gingiva portion 1402 may include the gingiva geometries determined via the intersection points 1102, 1202, 1302, and the splines 1204. The gingiva portion 1402 may include gingiva disposed in line with tooth indentations 1404 (e.g., gingiva geometries generated via splines that include marginal line intersection points 1102). The gingiva portion 1402 may include gingiva disposed offset from tooth indentations 1404 (e.g., spaces between the tooth indentations 1404, space behind the most extreme tooth indentation 1404, etc.). The gingiva portion 1402 may be coupled with the gingiva extension 1406.

[0062] The gingiva portion 1402 may also include a matching portion 1408 and the base 904. The base 904, as described above, may be formed via a spline 1204 that connects base intersection points 1202 with marginal intersection points 1102. The base 904 may not mimic a shape (e.g., surface contour) of the gingiva representation 804. The matching portion 1408 may be a part of the gingiva portion 1402 that substantially matches the shape of a corresponding portion of the gingiva representation 804 of the 3D representation 800. For example, the matching portion 1408 may be a portion adjacent to the marginal lines 810 (e.g., a part of the gingiva portion 1402 that is closest to the crowns 806 of the tooth representations 802). The portion adjacent to the marginal lines 810 may match a corresponding portion of the gingiva representation 804. For example, the matching portion 1408 of the virtual gingiva 1400 may be a 3mm portion of the gingiva portion 1402 adjacent to the marginal lines 810. The matching portion 1408 may match a 3mm portion of the gingiva representation 804 adjacent to the marginal lines 810.

[0063] Referring now to FIG. 15, a new 3D representation 1500 of the dentition is shown, according to an exemplary embodiment. The geometry processing engine 208 may be configured to generate the new 3D representation 1500 of the dentition. The new 3D representation 1500 may include the tooth representations 802 from the initial 3D representation 800 (e.g., the whole tooth models) and the generated virtual gingiva 1400. The tooth representations 802 may be disposed at a corresponding tooth indentation 1404 of the virtual gingiva 1400. The crowns 806 of the tooth representations 802 may be disposed outside of the virtual gingiva 1400 and the roots 808 may be disposed behind or within the virtual gingiva 1400 such that they are not visible. For example, the crowns 806 of the tooth representations 802 that were visible in the original 3D representation 800 may be similarly visible in the virtual gingiva 1400.

[0064] Referring now to FIG. 2 and FIG. 16, the treatment planning computing system 102 is shown to include a final position processing engine 210. Specifically, FIG. 16 shows a target final position of the dentition from the initial position of the dentition shown in FIG. 7. The final position processing engine 210 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise generate a final position of the patient’s teeth. The final position processing engine 210 may be configured to generate the treatment plan by manipulating individual 3D teeth models of teeth within the 3D model (e.g., shown in FIG. 7). In some embodiments, the final position processing engine 210 may be configured to receive inputs for generating the final position of the patient’s teeth. The final position may be a target position of the teeth post-orthodontic treatment or at a last stage of realignment. A user of the treatment planning terminal 108 may provide one or more inputs for each tooth or a subset of the teeth in the initial 3D model to move the teeth from their initial position to their final position (shown in dot-dash). For example, the treatment planning terminal 108 may be configured to receive inputs to drag individual teeth to their final position, incrementally shift the teeth to their final position, etc. The movements may include lateral/longitudinal movements, rotational movements, etc. In various embodiments, the manipulation of the 3D model may show a final (or target) position of the teeth of the patient following orthodontic treatment or at a last stage of realignment via dental aligners. In some embodiments, the final position processing engine 210 may be configured to apply one or more movement thresholds (e.g., a maximum lateral and/or rotational movement for treatment) to each of the individual 3D teeth models for generating the final position.
As such, the final position may be generated in accordance with the movement thresholds.

[0065] Referring now to FIG. 2 and FIG. 17, the final position processing engine 210 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise modify the virtual gingiva 1400 based, at least in part, on the movements of the 3D teeth models. For example, the final position processing engine 210 may be configured to modify a surface contour of a portion of the virtual gingiva 1400 based on a movement of a corresponding 3D tooth model. For example, the final position processing engine 210, based on the plurality of control points (e.g., marginal line intersection points 1102, base intersection points 1202, gingiva intersection points 1302) used to generate the virtual gingiva 1400, or a subset of the plurality of control points, may be configured to link various control points such that movement of one control point may dictate movement of another control point (e.g., an adjacent control point). For example, the final position processing engine 210 may generate a mesh creating direct relationships between vertices of the mesh and the control points of the virtual gingiva 1400. Therefore, as a position of a first control point changes, a corresponding first vertex of the mesh may move accordingly. As the first vertex moves, a second vertex may also move accordingly. As such, a second control point associated with the second vertex may move accordingly. The movements of the first control point and the second control point may be similar movements (e.g., the first control point moves a first distance and the second control point moves substantially the same distance) or may be scaled movements (e.g., the first control point moves a first distance and the second control point moves half of the first distance). The distance, rotation, and the direction of the movements may be scaled.
The scale may be based on various factors, including but not limited to, position with respect to the first control point, location within the virtual gingiva 1400, geometry of the 3D tooth model associated with the portion of the virtual gingiva 1400, etc. For example, a control point disposed in a plane directly adjacent to a first plane comprising the first control point may move more than a control point disposed in a plane further away from the first plane. Regardless of the scale of the movement, the movements may be made without having to rebuild the mesh. Therefore, deformations of the virtual gingiva 1400 may be made almost instantaneously, since the computationally intensive operations are removed. Furthermore, this reduces the amount of data stored, since only series of points in significant sections are stored rather than the entire mesh.
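The linked control-point behavior described above can be sketched as a simple displacement propagation in which the scale falls off with distance from the moved point. The list-based linkage, the `falloff` parameter, and the function name below are illustrative assumptions, not the application's actual data structures:

```python
def propagate_displacement(control_points, moved_index, displacement, falloff=0.5):
    """Propagate a control-point displacement to linked neighbors.

    Each control point is a (x, y, z) tuple; points adjacent in the
    list are treated as linked, and the displacement is scaled down by
    `falloff` per step away from the moved point, so nearby points
    move most and distant points barely move.
    """
    result = []
    for i, (x, y, z) in enumerate(control_points):
        scale = falloff ** abs(i - moved_index)  # 1.0 at the moved point
        dx, dy, dz = (d * scale for d in displacement)
        result.append((x + dx, y + dy, z + dz))
    return result

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
moved = propagate_displacement(points, moved_index=0, displacement=(0.0, 2.0, 0.0))
```

With the default falloff, the moved point shifts the full distance and each neighbor shifts half of its predecessor's shift, matching the "moves half of the first distance" example above.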

[0066] In FIG. 17, a displacement of a tooth representation 802 of the new 3D representation 1500 is shown, according to an exemplary embodiment. When the tooth representation 802 moves from a first position to a second position, the final position processing engine 210 may be configured to modify a surface contour of at least a portion of the virtual gingiva 1400 based on the movement of the tooth representation 802. For example, as shown in FIG. 17, when the tooth representation 802 is moved in a lingual direction, a marginal line 810 associated with the tooth representation 802 moves with the tooth representation 802. Accordingly, a spline 1204 comprising marginal line intersection points 1102 associated with the marginal line 810 is deformed such that the spline 1204 remains in the same plane 1002 and maintains the marginal line intersection points 1102. However, to maintain the marginal line intersection points 1102, the final position processing engine 210 may determine a displacement 1702 of the marginal line intersection points 1102. The final position processing engine 210 may deform the spline 1204 based, at least partially, on the displacement 1702 of the marginal line intersection points 1102. For example, based on the displacement 1702, the final position processing engine 210 may determine a corresponding displacement 1704 for each of the control points (e.g., gingiva intersection points 1302) along the spline 1204. The final position processing engine 210 may also consider the smoothness of the spline 1204, among other factors, when determining the corresponding displacement 1704 of the control points. Some of these calculations may be predetermined and executed by the final position processing engine 210 via the linking of the control points described above. For example, a plurality of planes 1002 may intersect the tooth representation 802 that is moved.
Each spline 1204 associated with each of the plurality of planes 1002 that intersect the tooth representation 802 may be modified based on the marginal line intersection points 1102 being moved. Additionally, other control points on the splines 1204 associated with each of the plurality of planes may move accordingly. Furthermore, control points of a spline 1204 disposed within an outermost plane 1002 that intersects the tooth representation 802 may also be linked to control points of a spline 1204 that is not within a plane 1002 that intersects the tooth representation 802. For example, a first plane 1002 may intersect a first tooth representation 802 and a second plane 1002 may intersect a second tooth representation 802. The control points of the first plane 1002, or a subset thereof, may be linked to the control points of the second plane 1002. Therefore, a portion of the virtual gingiva 1400 that is not associated with the moved tooth representation 802 may still be modified. A portion of the virtual gingiva 1400 being associated with a tooth representation 802 may mean that the portion of the virtual gingiva 1400 is aligned with the tooth representation 802 such that a plane through the portion of the virtual gingiva would intersect the tooth representation 802. This can create a smooth transition between portions of the gingiva model, and prevent segmentation between the portions.

[0067] In another embodiment, a portion of the virtual gingiva 1400 that is not associated with any tooth representation 802 may be modified based on a movement of a tooth representation 802. For example, a first portion of the virtual gingiva 1400 may be associated with a first tooth representation 802 and a second portion of the virtual gingiva 1400 may be associated with a second tooth representation 802.
A third portion of the virtual gingiva 1400 may be disposed between the first portion and the second portion, and not be associated with any tooth representation 802. The final position processing engine 210 may be configured to modify a surface contour of the third portion based on movement of the first tooth representation 802. For example, a control point of a plane 1002 that intersects the first tooth representation 802 may be linked with a control point of a plane 1002 that intersects only the third portion of the virtual gingiva 1400. The final position processing engine 210 may be configured to apply a smoothing function to the surface contour of the virtual gingiva such that segmentation between the portions of the virtual gingiva is not visible. This allows the final position processing engine 210 to generate an accurate representation of how the movements of the tooth representations 802 affect the virtual gingiva 1400, allowing the system to provide more predictable results with the treatment plans.
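As a rough sketch of such a smoothing function, a neighbor-averaging pass over a one-dimensional row of control-point displacements removes abrupt steps between portions. The list representation and the fixed-endpoint handling are assumptions for illustration, not the application's actual smoothing function:

```python
def smooth_displacements(displacements, passes=1):
    """Average each interior displacement with its two neighbors,
    holding the endpoints fixed, so an abrupt step between a moved
    and an unmoved gingiva portion becomes a gradual ramp."""
    d = list(displacements)
    for _ in range(passes):
        d = ([d[0]]
             + [(d[i - 1] + d[i] + d[i + 1]) / 3.0 for i in range(1, len(d) - 1)]
             + [d[-1]])
    return d

# A 3 mm move adjacent to an unmoved portion smooths into a ramp.
smoothed = smooth_displacements([3.0, 3.0, 0.0, 0.0])
```

Additional passes would produce a progressively smoother transition across more control points.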

[0068] Referring now to FIG. 2 and FIG. 18, the treatment planning computing system 102 is shown to include a staging processing engine 212. Specifically, FIG. 18 shows a series of stages of the dentition from the initial position shown in FIG. 7 to the target final position shown in FIG. 16, according to an illustrative embodiment. The staging processing engine 212 may be or include any device(s), component(s), circuit(s), or other combination of hardware components designed or implemented to determine, identify, or otherwise generate stages of treatment from the initial position to the final position of the patient’s teeth. In some embodiments, the staging processing engine 212 may be configured to receive inputs (e.g., via a user interface of the treatment planning terminal 108) for generating the stages. In some embodiments, the staging processing engine 212 may be configured to automatically compute or determine the stages based on the movements from the initial to the final position. The staging processing engine 212 may be configured to apply one or more movement thresholds (e.g., a maximum lateral and/or rotational movement for a respective stage) to each stage of the treatment plan. The staging processing engine 212 may be configured to generate the stages as 3D digital models of the patient’s teeth as they progress from their initial position to their final position. For example, and as shown in FIG. 18, the stages may include an initial stage 1800 including a 3D digital model of the patient’s teeth at their initial position, one or more intermediate stages 1802 including 3D digital model(s) of the patient’s teeth at one or more intermediate positions, and a final stage 1804 including a 3D digital model of the patient’s teeth at the final position.

[0069] In some embodiments, the staging processing engine 212 may be configured to generate at least one intermediate stage for each tooth based on a difference between the initial position of the tooth and the final position of the tooth. For instance, where the staging processing engine 212 generates one intermediate stage, the intermediate stage may be a halfway point between the initial position of the tooth and the final position of the tooth. Each stage comprising a different tooth position may also include a modified gingiva portion, based on the movements of the teeth. Each of the stages may together form a treatment plan for the patient, and may include a series or set of 3D digital models.
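The halfway-point example above amounts to linear interpolation between the initial and final positions. A minimal sketch for a single tooth position, treated here as a 3-tuple of translation coordinates (an assumed simplification; real staging also involves rotations and movement thresholds), is:

```python
def stage_positions(initial, final, n_intermediate):
    """Return the tooth position at each stage, from the initial stage
    through n_intermediate evenly spaced intermediate stages to the
    final stage, by linear interpolation."""
    total = n_intermediate + 1
    return [tuple(a + (b - a) * k / total for a, b in zip(initial, final))
            for k in range(total + 1)]

# One intermediate stage places the tooth at the halfway point.
stages = stage_positions((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), n_intermediate=1)
```

With one intermediate stage, the intermediate position is the halfway point between the initial and final positions, as described above.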

[0070] Following generating the stages, the treatment planning computing system 102 may be configured to transmit, send, or otherwise provide the staged 3D digital models to the fabrication computing system 106. In some embodiments, the treatment planning computing system 102 may be configured to provide the staged 3D digital models to the fabrication computing system 106 by uploading the staged 3D digital models to a patient file which is accessible via the fabrication computing system 106. In some embodiments, the treatment planning computing system 102 may be configured to provide the staged 3D digital models to the fabrication system 106 by sending the staged 3D digital models to an address (e.g., an email address, IP address, etc.) for the fabrication computing system 106.

[0071] The fabrication computing system 106 can include a fabrication computing device and fabrication equipment 218 configured to produce, manufacture, or otherwise fabricate dental aligners. The fabrication computing system 106 may be configured to receive a plurality of staged 3D digital models corresponding to the treatment plan for the patient. As stated above, each 3D digital model may be representative of a particular stage of the treatment plan (e.g., a first 3D model corresponding to an initial stage of the treatment plan, one or more intermediate 3D models corresponding to intermediate stages of the treatment plan, and a final 3D model corresponding to a final stage of the treatment plan).

[0072] The fabrication computing system 106 may be configured to send the staged 3D models to fabrication equipment 218 for generating, constructing, building, or otherwise producing dental aligners 220. In some embodiments, the fabrication equipment 218 may include a 3D printing system. The 3D printing system may be used to 3D print physical models corresponding to the 3D models of the treatment plan. As such, the 3D printing system may be configured to fabricate physical models which represent each stage of the treatment plan. In some implementations, the fabrication equipment 218 may include casting equipment configured to cast, etch, or otherwise generate physical models based on the 3D models of the treatment plan. Where the 3D printing system generates physical models, the fabrication equipment 218 may also include a thermoforming system. The thermoforming system may be configured to thermoform a polymeric material to the physical models, and cut, trim, or otherwise remove excess polymeric material from the physical models to fabricate a dental aligner. In some embodiments, the 3D printing system may be configured to directly fabricate dental aligners 220 (e.g., by 3D printing the dental aligners 220 directly based on the 3D models of the treatment plan). Additional details corresponding to fabricating dental aligners 220 are described in U.S. Provisional Patent Appl. No. 62/522,847, titled “Dental Impression Kit and Methods Therefor,” filed June 21, 2017, and U.S. Patent Appl. No. 16/047,694, titled “Dental Impression Kit and Methods Therefor,” filed July 27, 2018, and U.S. Patent No. 10,315,353, titled “Systems and Methods for Thermoforming Dental Aligners,” filed November 13, 2018, the contents of each of which are incorporated herein by reference in their entirety.

[0073] The fabrication equipment 218 may be configured to generate or otherwise fabricate dental aligners 220 for each stage of the treatment plan. In some instances, each stage may include a plurality of dental aligners 220 (e.g., a plurality of dental aligners 220 for the first stage of the treatment plan, a plurality of dental aligners 220 for the intermediate stage(s) of the treatment plan, a plurality of dental aligners 220 for the final stage of the treatment plan, etc.). Each of the dental aligners 220 may be worn by the patient in a particular sequence for a predetermined duration (e.g., two weeks for a first dental aligner 220 of the first stage, one week for a second dental aligner 220 of the first stage, etc.). The dental aligners 220 are configured to reposition teeth of a patient from an initial position to a final position. For example, the dental aligners 220, collectively, can be configured to move the teeth of the patient from the initial stage 1800, to one or more intermediate stages 1802, ultimately to a final stage 1804, as shown in FIG. 18. The dental aligners 220 are configured to move a tooth by exerting a force upon the tooth (e.g., a pushing force, a rotation force, etc.). As part of moving a tooth, repositioning teeth, or to enhance fit or comfort, the dental aligners 220 may also interface with the gingiva of the patient. For example, the dental aligners 220 may be cut or formed flush with the tooth-gingival interface (e.g., marginal line 810), the dental aligners 220 may be cut or formed a threshold distance away from the tooth-gingival interface such that a portion of one or more teeth is not covered entirely by the dental aligners 220, or the dental aligners 220 may be cut or formed a threshold distance away from the tooth-gingival interface such that a portion of the dental aligners 220 overlaps a portion of the patient’s gingiva (e.g., overlaps the gingiva by 1.0mm, 0.5mm, 0.25mm, 0.1mm, or within the range of 0.01mm to 2.0mm).

[0074] Referring now to FIG. 19, a method 1900 of generating a three-dimensional (3D) representation of a dentition is shown, according to an exemplary embodiment. The method 1900 may include receiving a first 3D representation of a dentition (step 1902), identifying a shape of a gingiva representation of the first 3D representation (step 1904), generating a virtual gingiva (step 1906), and generating a second 3D representation of the dentition (step 1908). At step 1902, one or more processors may receive a first three-dimensional (3D) representation of a dentition. For example, the geometry processing engine 208 may receive the 3D representation of the dentition. The 3D representation may include a plurality of tooth representations 802 and a gingiva representation 804. The 3D representation of the dentition may be obtained via an intake computing system 104. The intake computing system 104 may be a part of the treatment planning computing system 102, or may be separate from the treatment planning computing system 102. In some embodiments, a scanning device 214 of the intake computing system 104 may detect or generate data corresponding to a dentition. The data can be used to create the first 3D representation of the dentition. In some embodiments, the first 3D representation of the dentition includes a plurality of morphed tooth representations having respective crown and root representations. The plurality of morphed tooth representations may be based on initial scan data depicting teeth portions of the dentition.

[0075] At step 1904, one or more processors may identify a shape of the gingiva representation of the first 3D representation. To identify the shape of the gingiva representation, the one or more processors may determine a curvature of a growth line of the dentition. The growth line 902 may be a line configured to pass through each of the tooth representations 802 to provide a general shape or curvature of the dentition. For example, the growth line 902 may be a smooth curved line that attempts to pass through each tooth representation 802 at a point proximate the center of the tooth representation 802. Depending on the alignment of the tooth representations, the growth line 902 may not pass through a point proximate the center of some of the tooth representations. In one embodiment, the one or more processors may apply an equation to the growth line to determine the curvature. In another embodiment, the one or more processors may identify an ellipse that closely resembles the curvature of the growth line 902.
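One way to estimate such a curvature is a least-squares fit through the tooth-center points. The sketch below fits a parabola rather than the ellipse mentioned above, purely to keep the illustration short; the 2-D point representation and the function name are assumptions:

```python
def fit_parabola(points):
    """Least-squares fit of y = a*x^2 + b*x + c through 2-D points
    (e.g., the centers of the tooth representations), returning
    [a, b, c]. Solves the 3x3 normal equations with Cramer's rule."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sx2 = sum(x ** 2 for x, _ in points)
    sx3 = sum(x ** 3 for x, _ in points)
    sx4 = sum(x ** 4 for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[sx4, sx3, sx2], [sx3, sx2, sx], [sx2, sx, n]]
    B = [sx2y, sxy, sy]
    D = det3(A)
    coeffs = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = B[i]
        coeffs.append(det3(m) / D)
    return coeffs

# Points lying on y = x^2 recover a = 1, b = 0, c = 0.
coeffs = fit_parabola([(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0), (2.0, 4.0)])
```

An ellipse fit, as the embodiment above describes, would follow the same least-squares pattern with a conic equation in place of the parabola.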

[0076] Based on the curvature of the growth line 902, the one or more processors may generate a bottom of a base 904 of a virtual gingiva. For example, the one or more processors may extend the curvature of the growth line 902 in a buccal direction from the tooth growth line 902 to generate a buccal side 906 of the base 904 and in a lingual direction from the tooth growth line to generate a lingual side 908 of the base 904.
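Sweeping the growth-line curvature sideways to form the buccal and lingual sides can be sketched as a normal-offset of a polyline (2-D here for brevity). The sign of the distance selecting the side is an illustrative assumption, not the application's construction:

```python
import math

def offset_polyline(points, distance):
    """Offset a 2-D polyline by `distance` along its left-hand normal.
    A positive distance offsets to one side (e.g., buccal), a negative
    distance to the other (e.g., lingual)."""
    out = []
    for i, (x, y) in enumerate(points):
        # Estimate the tangent from neighboring points (one-sided at the ends).
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        tx, ty = x1 - x0, y1 - y0
        norm = math.hypot(tx, ty)
        nx, ny = -ty / norm, tx / norm  # left-hand unit normal
        out.append((x + distance * nx, y + distance * ny))
    return out

# Offsetting a straight growth line by 1.0 shifts every point sideways.
buccal = offset_polyline([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], 1.0)
```

Offsetting the same polyline with a negative distance would yield the opposite side of the base.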

[0077] To identify the shape of the gingiva representation, the one or more processors may determine a cross-sectional geometry of the gingiva representation of the 3D representation at a plurality of points along the growth line of the dentition. To determine the cross-sectional geometry at a first point along the growth line 902, the one or more processors may identify a marginal line 810 for a first tooth representation 802. The marginal line 810 may define a boundary where the gingiva representation 804 contacts the first tooth representation 802. The one or more processors may generate a plane 1002 perpendicular to the growth line 902. The plane 1002 may be disposed at the first point along the growth line 902. The plane 1002 may intersect the first tooth representation 802 and a portion of the gingiva representation 804 that is associated with the first tooth representation 802. The one or more processors may identify at least two marginal line intersection points 1102 where the plane 1002 intersects the marginal line 810. The one or more processors may identify a plurality of control points (e.g., gingiva intersection points 1302) where the plane 1002 intersects the gingiva representation 804. The plane 1002 may also intersect the bottom of the base 904 of the virtual gingiva. The one or more processors may identify at least two base intersection points 1202 where the plane 1002 intersects the base 904.
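Locating the intersection points can be sketched as clipping each segment of a polyline (e.g., a discretized marginal line or gingiva contour) against the plane. The point-and-normal plane representation and the polyline input are illustrative assumptions:

```python
def plane_polyline_intersections(plane_point, plane_normal, polyline):
    """Return the points where a polyline (a list of (x, y, z) vertices)
    crosses a plane given by a point on the plane and its normal."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    hits = []
    for p, q in zip(polyline, polyline[1:]):
        # Signed distances of the segment endpoints from the plane.
        dp = dot(plane_normal, tuple(a - b for a, b in zip(p, plane_point)))
        dq = dot(plane_normal, tuple(a - b for a, b in zip(q, plane_point)))
        if dp == dq:
            continue  # segment parallel to (or lying in) the plane
        t = dp / (dp - dq)
        if 0.0 <= t <= 1.0:
            hits.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return hits

# A plane at x = 1 with normal (1, 0, 0) cuts this open marginal-line
# polyline at two points, analogous to the two marginal line
# intersection points 1102 described above.
hits = plane_polyline_intersections(
    (1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
    [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 2.0, 0.0), (0.0, 2.0, 0.0)])
```

The same routine applied to the gingiva surface contour and the base would yield the gingiva intersection points 1302 and base intersection points 1202, respectively.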

[0078] The one or more processors may generate the estimated cross-sectional geometry of the gingiva representation 804 at the first point along the growth line 902 based on at least a subset of the plurality of intersection points 1102, 1202, 1302 identified by the one or more processors. For example, the one or more processors may generate a series of control points (e.g., spline 1204) comprising the plurality of gingiva intersection points 1302 and the marginal line intersection points 1102. The portion of the series of control points comprising the gingiva intersection points 1302 may represent a shape of the gingiva representation 804. In some embodiments, the portion of the series of control points representing the shape of the gingiva representation 804 may not include topological or geometric defects. In some embodiments, the plurality of control points comprise an ordered array of points. In some embodiments, the ordered array of points may comprise substantially equal steps between each of the points. For example, a first control point is adjacent to a second control point and a third control point. The second and third control points may be substantially equidistant from the first control point. In some embodiments, the one or more processors may generate the estimated cross-sectional geometry of the gingiva representation 804 by connecting the buccal side 906 of the base 904 with the lingual side 908 of the base 904 via the series of control points. For example, the series of control points may pass through the control points in the following order: first base intersection point 1202, first set of gingival intersection points 1302, first marginal line intersection point 1102, second marginal line intersection point 1102, second set of gingival intersection points 1302, second base intersection point 1202.
In some embodiments, the one or more processors may connect the first marginal line intersection point 1102 and the second marginal line intersection point 1102 via the shortest geodesic path (e.g., a straight line).

[0079] In some embodiments, the plane 1002 disposed at the first point along the growth line 902 does not intersect a tooth representation 802, and therefore does not intersect a marginal line 810. In such embodiments, the series of control points may pass through the control points in the following order: first base intersection point 1202, first set of gingival intersection points 1302, additional gingiva intersection points 1302, second set of gingiva intersection points 1302, and second base intersection point 1202. All the gingiva intersection points 1302 may be considered a single set of gingiva intersection points 1302.
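The ordered array of control points with substantially equal steps described above can be obtained by resampling a polyline by arc length. A 2-D sketch, with the point representation assumed for illustration:

```python
import math

def resample_equidistant(points, n):
    """Resample a 2-D polyline to n points with (approximately) equal
    arc-length steps, preserving the start and end points."""
    # Cumulative arc length along the polyline.
    dists = [0.0]
    for p, q in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(p, q))
    total = dists[-1]

    out = []
    for k in range(n):
        target = total * k / (n - 1)
        # Find the segment containing the target arc length.
        i = 0
        while i < len(dists) - 2 and dists[i + 1] < target:
            i += 1
        seg = dists[i + 1] - dists[i]
        t = 0.0 if seg == 0 else (target - dists[i]) / seg
        p, q = points[i], points[i + 1]
        out.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return out

# A 4-unit segment resampled to 5 points yields unit steps.
resampled = resample_equidistant([(0.0, 0.0), (4.0, 0.0)], 5)
```

Resampling this way makes each control point substantially equidistant from its two neighbors, as the embodiment above describes.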

[0080] In some embodiments, the one or more processors identify the shape of the gingiva representation 804 at a plurality of locations along the growth line 902 of the dentition. For example, the process described above may be performed at each of a plurality of points along the growth line 902. For example, the one or more processors generate a plurality of planes 1002. Each of the plurality of planes 1002 is disposed at one of the plurality of points along the growth line 902. The one or more processors may generate a series of control points within each of the plurality of planes 1002.

[0081] At step 1906, the one or more processors may generate a virtual gingiva. The virtual gingiva may have a surface contour based at least partially on the shape of the gingiva representation 804 of the first 3D representation. For example, the one or more processors may generate a gingiva portion 1402 based, at least partially, on the series of control points of the plurality of planes 1002 disposed along the growth line 902. The gingiva portion 1402 may include the base 904 and a matching portion 1408. The base 904 may not mimic a shape (e.g., surface contour) of the gingiva representation 804. The matching portion 1408 may be a part of the gingiva portion 1402 that does substantially match the shape of a corresponding portion of the gingiva representation 804 of the 3D representation. For example, the one or more processors may generate the matching portion 1408 based on the gingiva intersection points 1302 of the series of control points. The one or more processors may receive an indication as to what size to make the matching portion 1408. For example, the one or more processors may receive instructions indicating to make the matching portion 1408 match a first 3mm of the gingiva representation 804 of the first 3D representation. Based on the instructions, the one or more processors may identify gingiva intersection points 1302 along a corresponding 3mm portion of the gingiva representation 804. In some embodiments, the matching portion 1408 is directly adjacent to the marginal lines 810. The one or more processors may reduce or eliminate defects of the gingiva representation 804 of the first 3D representation by identifying the control points rather than tracing every detail of the gingiva representation 804.
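Choosing the gingiva intersection points along the first 3 mm of the gingiva representation can be sketched as an arc-length walk from the marginal-line end of the control-point series. The point ordering and units below are assumptions for illustration:

```python
import math

def matching_points(series, max_depth=3.0):
    """Walk a series of control points starting at the marginal-line
    end and keep those within `max_depth` (e.g., 3 mm) of cumulative
    distance, a sketch of selecting the points that define the
    matching portion 1408."""
    kept = [series[0]]
    travelled = 0.0
    for p, q in zip(series, series[1:]):
        travelled += math.dist(p, q)
        if travelled > max_depth:
            break
        kept.append(q)
    return kept

# Points spaced 1 unit apart: the first 3 units are kept.
kept = matching_points([(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0), (0.0, 4.0)])
```

A larger or smaller `max_depth` corresponds to the received indication of what size to make the matching portion 1408.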

[0082] By generating the gingiva portion 1402 based on the series of control points, the one or more processors may also generate at least one tooth indentation 1404. The one or more processors may generate a tooth indentation 1404 based on the portions of the series of control points that connect the two marginal intersection points 1102. The tooth indentation 1404 may indicate where a whole tooth portion may be positioned in a new 3D representation of the dentition. For example, the tooth indentation 1404 may be a void, hole, or space in which a root portion of a tooth representation 802 resides when the tooth representation 802 is located in the virtual gingiva 1400.

[0083] Generating the virtual gingiva at step 1906 may also include the one or more processors generating at least one gingiva extension 1406. For example, the one or more processors may generate an additional portion to couple with the gingiva portion 1402 that extends beyond the most extreme tooth representations 802. The gingiva extension 1406 may act as a back or end to the virtual gingiva. To generate the gingiva extension 1406, the one or more processors may extend the bottom of the base 904 of the buccal side 906 in a quarter circle to connect with the bottom of the base 904 of the lingual side 908 that may also extend in a quarter circle. The one or more processors may generate a plurality of planes 1002 to be disposed along each of the bottoms of the base 904. The one or more processors may generate a series of control points within each plane 1002 with a given curvature parameter and/or plane angle to generate a segment of a circle to create the gingiva extension 1406. In some embodiments, the one or more processors may generate a single half circle extension, or other geometry for extending the base 904 of the virtual gingiva 1400. The one or more processors may initiate the gingiva extension 1406 beyond an end of the most extreme tooth representation 802. In some embodiments, one or more processors may initiate the gingiva extension 1406 before the end of the most extreme tooth representation 802. For example, the gingiva extension 1406 may initiate proximate a midpoint of the most extreme tooth representation 802.

[0084] At step 1908, the one or more processors may generate a second 3D representation. The second representation may include the virtual gingiva and a plurality of tooth representations. Generating the second 3D representation may include combining the virtual gingiva 1400 with a plurality of morphed tooth representations.
The one or more processors may dispose each of the plurality of morphed tooth representations at a location of a corresponding tooth indentation 1404 of the virtual gingiva 1400. Generating the second 3D representation may also include linking the virtual gingiva with the plurality of tooth representations such that movement of one produces movement of the other. For example, when the one or more processors moves a tooth representation from a first position to a second position, the virtual gingiva will also move.

[0085] Referring now to FIG. 20, a method 2000 of modifying a virtual gingiva is shown, according to an exemplary embodiment. Modifying a virtual gingiva may include one or more processors generating a 3D representation of a dentition (step 2002), moving a tooth representation of the 3D representation (step 2004), and modifying a surface contour of the virtual gingiva (step 2006). At step 2002, one or more processors generate a 3D representation of a dentition. In some embodiments, the 3D representation may be generated according to steps 1902-1908. In such embodiments, the 3D representation may include the virtual gingiva and a plurality of morphed tooth representations. The 3D representation may represent the plurality of morphed tooth representations in an initial position.

[0086] At step 2004, the one or more processors may move one of the plurality of tooth representations of the 3D representation. For example, the one or more processors may move a tooth representation 802 from a first position to a second position. The second position may indicate a final (target) position of a patient’s tooth after completing treatment. In some embodiments, the second position may indicate an intermediate position of a patient’s tooth. The intermediate position may be a position of the tooth representation between the first position and a final position. Moving the tooth representation 802 may include receiving an input regarding the movement of the tooth representation 802. For example, the input may indicate how many intermediate stages to have, which direction to move the tooth representation, a desired displacement of the tooth representation 802, etc. Moving the tooth representation 802 may also include the one or more processors applying one or more movement thresholds to the tooth representation 802. As such, the one or more processors may move the plurality of tooth representations 802 according to such input.
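Applying a lateral movement threshold can be sketched as clamping the magnitude of a per-stage translation vector. The vector representation and uniform down-scaling are assumptions; a rotational threshold would be clamped analogously on the rotation angle:

```python
import math

def clamp_movement(displacement, max_translation):
    """Clamp a translation vector's magnitude to a movement threshold,
    scaling it down uniformly if it exceeds the threshold."""
    mag = math.sqrt(sum(c * c for c in displacement))
    if mag <= max_translation or mag == 0.0:
        return tuple(displacement)
    s = max_translation / mag
    return tuple(c * s for c in displacement)

# A 5-unit requested move clamped to a 2.5-unit per-stage threshold.
clamped = clamp_movement((3.0, 4.0, 0.0), max_translation=2.5)
```

A requested movement exceeding the threshold would then be spread over additional intermediate stages rather than applied at once.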

[0087] At step 2006, the one or more processors may modify the virtual gingiva. Modifying the virtual gingiva may include modifying a surface contour of a portion of the virtual gingiva. The modifying of the surface contour may be based, at least partially, on the movement of a tooth representation 802 from a first position to a second position. Modifying the surface contour of a portion of the virtual gingiva may include the one or more processors linking a plurality of control points within the virtual gingiva with each other such that movement of a first control point dictates movement of a second control point. For example, linking the plurality of control points within the virtual gingiva with each other may comprise generating a mesh. Generating the mesh may include the one or more processors generating a plurality of mesh points that are linked to each other via the mesh. Each of the plurality of mesh points may have a corresponding control point within the virtual gingiva. The virtual gingiva may be in an initial position such that each of the plurality of control points is in an initial position. The one or more processors may align each of the plurality of mesh points with its corresponding control point. As such, when a control point is moved due to a tooth representation 802 being moved (as described with reference to FIG. 17), the one or more processors may move other control points accordingly.

[0088] Moving the various control points may modify the surface contour of a portion of the virtual gingiva. For example, a first portion of the virtual gingiva 1400 may be associated with a first tooth representation 802 and a second portion of the virtual gingiva 1400 may be associated with a second tooth representation 802. A third portion of the virtual gingiva 1400 may be disposed between the first portion and the second portion, and may not be associated with any tooth representation 802. The one or more processors may modify a surface contour of the third portion based on movement of the first tooth representation 802. For example, the first tooth representation 802 associated with the first portion may be translated 3mm in a lingual direction. The second tooth representation 802 associated with the second portion may not move at all. Instead of an immediate transition from 3mm at the first portion to 0mm at the second and third portions, the one or more processors may generate a more gradual transition between the first portion and the second portion. For example, the mesh may cause a first control point within the third portion to move 2mm and a second control point within the third portion to move 1mm. The mesh may remove abrupt transitions between portions of the virtual gingiva by creating more gradual transitions.
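The gradual 3mm, 2mm, 1mm, 0mm progression in the example can be produced by linearly blending displacement magnitudes across the in-between control points. This is a one-dimensional sketch; as described above, the actual deformation acts on 3D control points through the mesh:

```python
def blend_transition(moved_mm, unmoved_mm, n_between):
    """Displacement magnitudes for n_between control points lying
    between a portion moved by moved_mm and a portion moved by
    unmoved_mm, stepping linearly from one to the other."""
    steps = n_between + 1
    return [moved_mm + (unmoved_mm - moved_mm) * k / steps
            for k in range(1, steps)]

# Two in-between control points bridging a 3 mm move and a 0 mm move.
between = blend_transition(3.0, 0.0, n_between=2)
```

More in-between control points would produce a correspondingly finer gradient across the third portion.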

[0089] The one or more processors may receive input regarding how to move control points. For example, the input may indicate a relationship between certain control points such that movement of one dictates movement of another. The relationship may indicate how to move (translate, rotate, etc.), where to move (e.g., buccal or lingual direction, etc.), how much to move (e.g., same amount as related control point, a percentage of the amount of the related control point, etc.), etc. The one or more processors may apply the input received to a mesh, such that the intensive computational operations are not needed during the deformation process. The mesh allows the one or more processors to apply a smoothing function to the surface contour of the virtual gingiva. For example, the one or more processors may eliminate segmentation between portions of the virtual gingiva by smoothing a transition between a first portion associated with a first tooth representation 802 and a second portion associated with a second tooth representation 802 (or not associated with a tooth representation).

[0090] As utilized herein, the terms “approximately,” “about,” “substantially,” and similar terms are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. It should be understood by those of skill in the art who review this disclosure that these terms are intended to allow a description of certain features described and claimed without restricting the scope of these features to the precise numerical ranges provided. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the disclosure as recited in the appended claims.

[0091] It should be noted that the term “exemplary” and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples).

[0092] The term “coupled” and variations thereof, as used herein, means the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable). Such joining may be achieved with the two members coupled directly to each other, with the two members coupled to each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled to each other using an intervening member that is integrally formed as a single unitary body with one of the two members. If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic.

[0093] The term “or,” as used herein, is used in its inclusive sense (and not in its exclusive sense) so that when used to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is understood to convey that an element may be either X, Y, Z; X and Y; X and Z; Y and Z; or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated.

[0094] References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the figures. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure.

[0095] The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit or the processor) the one or more processes described herein.

[0096] The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.

[0097] Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. Such variation may depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.

[0098] It is important to note that the construction and arrangement of the systems, apparatuses, and methods shown in the various exemplary embodiments is illustrative only. Additionally, any element disclosed in one embodiment may be incorporated or utilized with any other embodiment disclosed herein. For example, any of the exemplary embodiments described in this application can be incorporated with any of the other exemplary embodiments described in the application. Although only one example of an element from one embodiment that can be incorporated or utilized in another embodiment has been described above, it should be appreciated that other elements of the various embodiments may be incorporated or utilized with any of the other embodiments disclosed herein.