Title:
DEEP LEARNING FOR AUTOMATED SMILE DESIGN
Document Type and Number:
WIPO Patent Application WO/2023/017390
Kind Code:
A1
Abstract:
A method for displaying teeth after planned orthodontic treatment in order to show persons how their smiles will look after the treatment. The method includes receiving a digital 3D model of teeth or rendered images of teeth, and an image of a person such as a digital photo. The method uses a generator network to produce a generated image of the person showing the person's teeth, and thus the person's smile, after the planned orthodontic treatment. The method uses a discriminator network processing input images, generated images, and real images to train the generator network through deep learning models to produce a photo-realistic image of the person after the planned treatment.

Inventors:
FABBRI CAMERON M (US)
DONG WENBO (US)
GRAHAM JAMES L (US)
OLSON CODY J (US)
Application Number:
PCT/IB2022/057323
Publication Date:
February 16, 2023
Filing Date:
August 05, 2022
Assignee:
3M INNOVATIVE PROPERTIES COMPANY (US)
International Classes:
A61C7/00; A61C9/00; G06N3/04; G06N3/08; G06T17/00; G16H50/50
Domestic Patent References:
WO2019213129A1, 2019-11-07
WO2021108016A1, 2021-06-03
Foreign References:
US20190350680A1, 2019-11-21
KR20200046843A, 2020-05-07
Other References:
KIM MINGYU, KIM SUNGCHUL, KIM MINJEE, BAE HYUN-JIN, PARK JAE-WOO, KIM NAMKUG: "Realistic high-resolution lateral cephalometric radiography generated by progressive growing generative adversarial network and quality evaluations", SCIENTIFIC REPORTS, vol. 11, no. 1, 15 June 2021 (2021-06-15), pages 12563 - 12563-10, XP093034189, DOI: 10.1038/s41598-021-91965-y
Attorney, Agent or Firm:
SRINIVASAN, Sriram et al. (US)
Claims:
The invention claimed is:

1. A method for displaying teeth after planned orthodontic treatment, comprising steps, executed by a processor, of: receiving a digital 3D model of teeth or rendered images of teeth, and an image of a person; using a generator network to produce a generated image of the person showing teeth of the person after the planned orthodontic treatment of the teeth; and using a discriminator network processing input images, generated images, and real images to train the generator network.

2. The method of claim 1, further comprising receiving a final alignment or stage of the teeth after the planned orthodontic treatment.

3. The method of claim 2, further comprising blocking out teeth of the person in the received image.

4. The method of claim 3, wherein the using the generator network step comprises filling in the blocked out teeth in the received image with the final alignment or stage of the teeth.

5. The method of claim 1, further comprising using a feature extracting network to extract features from the digital 3D model of teeth or rendered images of teeth for the generator network.

6. The method of claim 1, further comprising using a feature extracting network to extract features from the digital 3D model of teeth or rendered images of teeth for the discriminator network.

7. The method of claim 1, wherein the image comprises a digital photo.

8. The method of claim 1, wherein when the discriminator network is provided with real images, the discriminator network classifies the input images as real in order to train the generator network.

9. The method of claim 1, wherein when the discriminator network is provided with generated images, the discriminator network classifies the input images as fake in order to train the generator network.

10. The method of claim 1, wherein the planned orthodontic treatment comprises a final stage or setup.

11. The method of claim 1, wherein the planned orthodontic treatment comprises an intermediate stage or setup.

12. A system for displaying teeth after planned orthodontic treatment, comprising a processor configured to execute any of the methods of claims 1-11.

Description:
DEEP LEARNING FOR AUTOMATED SMILE DESIGN

BACKGROUND

Orthodontic clear tray aligners allow patients to receive high quality, customizable treatment options. A potential patient may browse past clinical cases if they are considering getting treatment. A high-level overview of one pipeline is as follows: a potential patient will arrive at the doctor's office; the doctor will take a scan of their teeth, extracting a three-dimensional (3D) mesh; and this 3D mesh is processed by an algorithm to produce a mesh of the patient's teeth in their final alignment.

A common question from patients is, “what would my new smile look like?” While they have the ability to view previous clinical trials and even the ability to view the 3D mesh of their newly aligned teeth, neither option provides the patient with a true feel of what their teeth and smile may look like after orthodontic treatment. Because of this, potential patients may not be fully committed to receiving treatment.

SUMMARY

A method for displaying teeth after planned orthodontic treatment includes receiving a digital 3D model of teeth or rendered images of teeth, and an image of a person. The method uses a generator network to produce a generated image of the person showing teeth of the person after the planned orthodontic treatment. The method uses a discriminator network processing input images, generated images, and real images to train the generator network.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of a system for generating an image of a person’s smile showing post-treatment results.

FIG. 2 is a diagram of a generator network for the system.

FIG. 3 is a diagram of a discriminator network for the system.

FIG. 4 shows some results after training the system.

DETAILED DESCRIPTION

Embodiments include an automated system to generate an image of a person’s smile showing potential post-treatment aligner results, before treatment has begun. The system utilizes data including a person’s image as well as their corresponding 3D scan in order to learn how to generate a photorealistic image. Though the system is trained to generate a person’s smile from their pre-treatment scan, the scan can be swapped out with a post-treatment scan in order to give the person the ability to view potential post-treatment results. Alternatively or in addition, the system can be used to show persons their appearance after each stage, treatment, or selected stages of treatment.

The ability for a person to view a post-treatment photo of themselves smiling, before any treatment has begun, may give them confidence moving forward with the treatment process, as well as help convince those who may be uncertain. Additionally, the person would be able to provide feedback to the doctor or practitioner if any aesthetic changes are requested, and the doctor or practitioner can modify the alignment of the mesh to meet the person’s needs.

FIG. 1 is a diagram of a system 10 for generating an image of a person’s smile showing post-treatment results (21). System 10 includes a processor 20 receiving a digital 3D model (mesh) or rendered images of teeth and an image (e.g., digital photo) of the corresponding person (12). The digital 3D model can be generated from, for example, intra-oral 3D scans or scans of impressions of teeth. System 10 can also include an electronic display device 16, such as a liquid crystal display (LCD) device, and an input device 18 for receiving user commands or other information. Systems to generate digital 3D images or models based upon image sets from multiple views are disclosed in U.S. Patent Nos. 7,956,862 and 7,605,817, both of which are incorporated herein by reference as if fully set forth. These systems can use an intra-oral scanner to obtain digital images from multiple views of teeth or other intra-oral structures, and those digital images are processed to generate a digital 3D model representing the scanned teeth and gingiva. System 10 can be implemented with, for example, a desktop, notebook, or tablet computer.

The system is built upon generative machine or deep learning models known as Generative Adversarial Networks (GANs). This class of algorithms contains a pair of differentiable functions, often deep neural networks, whose goal is to learn an unknown data distribution. The first function, known as the generator, produces a data sample given some input (e.g., random noise, conditional class label, or others). The generator, together with the feature extracting network described below, also receives the pixel-wise difference between the generated and ground-truth images in the form of a loss function. The second function, known as the discriminator, attempts to distinguish the “fake” data generated by the generator from the “real” data coming from the true data distribution. As the generator continuously tries to fool the discriminator into classifying data as “real,” the generated data becomes more realistic.
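The following is a minimal sketch, in PyTorch, of how such an adversarial training step with an added pixel-wise loss could look. It is illustrative only: the module names, the L1 pixel loss, the BCE adversarial loss, and the `lambda_pix` weighting are assumptions made for exposition, not the specific implementation of this disclosure.

```python
# Minimal sketch of one adversarial training step with an added pixel-wise
# loss. "condition" stands for the features extracted from the rendered scan;
# generator/discriminator are torch.nn.Module instances. All names and the
# lambda_pix weighting are illustrative assumptions.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()  # real/fake classification loss
pix_loss = nn.L1Loss()             # pixel-wise difference to the ground-truth photo

def train_step(generator, discriminator, g_opt, d_opt,
               condition, masked_photo, real_photo, lambda_pix=100.0):
    # Discriminator step: real triplet -> "real", generated triplet -> "fake".
    fake_photo = generator(condition, masked_photo)
    d_real = discriminator(condition, masked_photo, real_photo)
    d_fake = discriminator(condition, masked_photo, fake_photo.detach())
    d_loss = (adv_loss(d_real, torch.ones_like(d_real))
              + adv_loss(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator while staying close to
    # the ground-truth photo (the pixel-wise term mentioned above).
    d_fake = discriminator(condition, masked_photo, fake_photo)
    g_loss = (adv_loss(d_fake, torch.ones_like(d_fake))
              + lambda_pix * pix_loss(fake_photo, real_photo))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```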

The system uses a conditional GAN (cGAN), where the generator is conditioned on either a two-dimensional (2D) rendered image of the person’s scanned teeth, or the 3D mesh model of their teeth, along with an image of the person smiling with their teeth blocked out. The generator (see FIG. 2), represented as a Convolutional Neural Network (CNN), aims to use the conditional information given from their scan in order to inpaint the missing data for their smile. As shown in FIG. 2, a generator network 30 receives a 3D mesh or rendered images of teeth (22) via a feature extracting network 26 providing features 28. Feature extracting network 26 can be implemented with, for example, an inference engine to provide the features in a less noisy output mesh or rendered image. Generator network 30 also receives an image (e.g., digital photo) of the person’s face (24). Generator network 30 then generates an image of that person smiling (32) with the respective teeth model that it was given, allowing persons to view their smile with any number of different styles of teeth. Specifically, the system can target viewing the post-treatment smile as a use case.
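A hedged sketch of a generator of the kind FIG. 2 describes follows: a small feature extracting CNN processes the rendered teeth, and an encoder-decoder CNN inpaints the blocked-out mouth conditioned on those features. The layer sizes, channel counts, and encoder-decoder layout are illustrative assumptions, not the architecture of this disclosure.

```python
# Illustrative generator sketch: teeth features + masked face photo -> smile.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Extracts a feature map from the rendered teeth image (items 26/28 in FIG. 2)."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, teeth_render):
        return self.net(teeth_render)

class Generator(nn.Module):
    """Fills in the blocked-out mouth, conditioned on the teeth features."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + feat_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, teeth_features, masked_face):
        x = torch.cat([masked_face, teeth_features], dim=1)  # condition on the scan
        return self.decoder(self.encoder(x))

# Usage sketch: extract features from the rendered scan, then generate the smile photo.
extractor, generator = FeatureExtractor(), Generator()
teeth_render = torch.rand(1, 3, 256, 256)  # rendered image of the planned teeth
masked_face = torch.rand(1, 3, 256, 256)   # photo with the mouth blocked out
smile = generator(extractor(teeth_render), masked_face)  # (1, 3, 256, 256)
```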

The discriminator, also represented as a CNN, has two training steps. As shown in FIG. 3, a discriminator network 44 receives a 3D mesh or rendered images of teeth (34) via a feature extracting network 40 providing features 42. Feature extracting network 40 can be implemented with, for example, an inference engine to provide the features in a less noisy output mesh or rendered image. In the first step, discriminator network 44 is given the image (e.g., digital photo) of the person with their mouth blocked (36), a real image of the person without their mouth blocked (46), and features (42) of the person’s scan extracted from another neural network. In this case, discriminator network 44 should classify the triplet of data as “real” (47), as the photo of the person comes from the real dataset. In the second step, discriminator network 44 is again given the image (e.g., digital photo) of the person with their mouth blocked (36) and features (42) of the person’s scan extracted from another neural network, but this time with a generated image (38) from the generator. In this case, discriminator network 44 should classify the triplet of data as “fake” (45). The generator and discriminator are trained simultaneously, improving upon each other.
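A corresponding sketch of a triplet discriminator of the kind FIG. 3 describes is shown below; again, the architecture details are illustrative assumptions. In the first training step the candidate photo is the real photo (target label “real”); in the second it is the generator’s output (target label “fake”), matching the two steps above.

```python
# Illustrative discriminator sketch: (teeth features, blocked-out photo,
# candidate photo) -> a single real/fake logit. Layer sizes are assumptions.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        in_ch = feat_ch + 3 + 3  # teeth features + masked photo + real or generated photo
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),  # logit: "real" triplet vs. "fake" triplet
        )
    def forward(self, teeth_features, masked_photo, candidate_photo):
        x = torch.cat([teeth_features, masked_photo, candidate_photo], dim=1)
        return self.net(x)
```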

The images of the person smiling with their teeth blocked out can be generated by finding features of the smile in the images (e.g., corners of the mouth), extracting the bounds of those features, and whiting out those features.
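For illustration, a helper of this kind could look like the following sketch, assuming mouth landmark coordinates have already been obtained from some face-landmark detector (not specified here); the margin and the rectangular white-out region are assumptions.

```python
# Illustrative helper for producing the "teeth blocked out" training images:
# white out the bounding box around detected mouth landmarks.
import numpy as np

def block_out_mouth(photo: np.ndarray, mouth_landmarks: np.ndarray,
                    margin: int = 10) -> np.ndarray:
    """photo: HxWx3 uint8 image; mouth_landmarks: Nx2 array of (x, y) points."""
    out = photo.copy()
    x_min, y_min = mouth_landmarks.min(axis=0) - margin
    x_max, y_max = mouth_landmarks.max(axis=0) + margin
    x_min, y_min = max(int(x_min), 0), max(int(y_min), 0)
    x_max, y_max = min(int(x_max), photo.shape[1]), min(int(y_max), photo.shape[0])
    out[y_min:y_max, x_min:x_max] = 255  # white out the mouth region
    return out
```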

The generator network, discriminator network, and feature extracting network can be implemented in, for example, software or firmware modules for execution by a processor such as processor 20. The generated images can be displayed on, for example, display device 16.

The dataset for the following experiment consisted of ~5,000 patients. Each patient has a front-facing photo of themselves smiling, as well as a scan of their teeth. For this experiment, we used a 2D render of the scan as the conditional information.

FIG. 4 shows results after training the system. Each column contains a different patient, whereas each row contains the matching patient’s scan (e.g., the patient in column A matches their scan in row A). In practice, the system would work as follows. A potential patient would arrive at the doctor’s office and receive a scan of their teeth. This scan would be processed by a staging algorithm in order to place the teeth in their final alignment or in an intermediate stage of treatment. Staging algorithms to generate intermediate and final stages or setups are described in, for example, US Patent Application Publication No. 2020/0229900 and PCT Application Publication No. 2020/202009. Using the generator network, this staged scan would then be swapped in for the patient’s pre-treatment scan in order to generate a photo of them with post-treatment results. Swapping the scans can include filling in the blocked out mouth of the person from the input image.
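As an illustrative sketch of this inference-time workflow, using the hypothetical extractor and generator modules from the earlier sketches, the render of the staged (post-treatment) setup simply replaces the pre-treatment render as the generator’s condition:

```python
# Inference-time sketch: generate a post-treatment smile preview. All module
# and argument names are illustrative assumptions carried over from the
# earlier sketches, not the disclosure's implementation.
import torch

def preview_smile(extractor, generator, staged_render, masked_face_photo):
    """staged_render, masked_face_photo: (1, 3, H, W) tensors in [0, 1]."""
    with torch.no_grad():                    # networks are already trained
        features = extractor(staged_render)  # features of the staged setup
        return generator(features, masked_face_photo)  # generated smile preview
```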

In order to test the viability of this, we swapped the scans of different patients, then generated their corresponding photo. For example, Column A in FIG. 4 shows generated photos of Patient A with their own scan (Row A), as well as with scans from Patients B and C (Rows B and C). In FIGS. 2-4, line drawings are used to represent the persons and their smiles. An implementation would typically use actual photos, real and generated, as described above.