Title:
DYNAMIC MESH CODING WITH SIMPLIFIED TOPOLOGY
Document Type and Number:
WIPO Patent Application WO/2023/129988
Kind Code:
A2
Abstract:
In some embodiments, a mesh decoder reconstructs geometry information of a dynamic mesh from a coded mesh bitstream of the dynamic mesh. The reconstructed geometry information includes data specifying vertices of the dynamic mesh. The decoder also reconstructs connectivity information of the dynamic mesh, which includes data specifying faces of the dynamic mesh. The decoder refines the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information. The decoder further reconstructs an attribute image of the dynamic mesh from the coded mesh bitstream, which includes image content to be applied to faces of the dynamic mesh. The decoder refines the reconstructed attribute image based on the reconstructed geometry information to generate a refined attribute image. Based on the reconstructed geometry information, the refined connectivity information, and the refined attribute image, the decoder reconstructs the dynamic mesh, which can be rendered for display.

Inventors:
ZAKHARCHENKO VLADYSLAV (US)
YU YUE (US)
YU HAOPING (US)
Application Number:
PCT/US2022/082500
Publication Date:
July 06, 2023
Filing Date:
December 28, 2022
Assignee:
INNOPEAK TECH INC (US)
International Classes:
G06T15/00
Attorney, Agent or Firm:
SHEN, Fei et al. (US)
Claims:
Claims

1. A computer-implemented method for decoding a coded mesh bitstream of a dynamic mesh representing three-dimensional (3D) content, the method comprising: reconstructing geometry information of the dynamic mesh from a geometry component bitstream in the coded mesh bitstream, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

2. The computer-implemented method of claim 1, wherein refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information comprises dividing a face out of the plurality of faces specified by the reconstructed connectivity information into two faces based on a vertex of the plurality of vertices specified in the reconstructed geometry information.

3. The computer-implemented method of claim 1, wherein refining the reconstructed connectivity information based on the reconstructed geometry information comprises: identifying a vertex of the plurality of vertices that is located inside a face of the plurality of faces; determining a projected vertex of the vertex on an edge of the face; and dividing the face into two refined faces, each refined face having the projected vertex as one vertex.

4. The computer-implemented method of claim 3, wherein refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image comprises: determining a transformation based on the vertex and the projected vertex; and applying an inverse of the transformation to a face in the reconstructed attribute image that corresponds to the face.

5. The computer-implemented method of claim 4, wherein the transformation is one of an affine transformation or an interpolation transformation.

6. The computer-implemented method of claim 5, wherein parameters of the interpolation transformation are signaled in the coded mesh bitstream.

7. The computer-implemented method of claim 5, wherein parameters of the affine transformation are determined based on the vertex and the projected vertex.

8. The computer-implemented method of claim 1, further comprising: reconstructing mapping information of the dynamic mesh from a mapping component bitstream in the coded mesh bitstream, the reconstructed mapping information comprising data specifying a list of vertex attribute coordinates in the attribute image; and refining the reconstructed mapping information based on the reconstructed geometry information to generate refined mapping information, wherein reconstructing the dynamic mesh is further based on the refined mapping information.

9. A non-transitory computer-readable medium having program code that is stored thereon, the program code executable by one or more processing devices for performing operations comprising: reconstructing geometry information of a dynamic mesh from a geometry component bitstream in a coded mesh bitstream of the dynamic mesh, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

10. The non-transitory computer-readable medium of claim 9, wherein refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information comprises dividing a face out of the plurality of faces specified by the reconstructed connectivity information into two faces based on a vertex of the plurality of vertices specified in the reconstructed geometry information.

11. The non-transitory computer-readable medium of claim 9, wherein refining the reconstructed connectivity information based on the reconstructed geometry information comprises: identifying a vertex of the plurality of vertices that is located inside a face of the plurality of faces; determining a projected vertex of the vertex on an edge of the face; and dividing the face into two refined faces, each refined face having the projected vertex as one vertex.

12. The non-transitory computer-readable medium of claim 11, wherein refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image comprises: determining a transformation based on the vertex and the projected vertex; and applying an inverse of the transformation to a face in the reconstructed attribute image that corresponds to the face.

13. The non-transitory computer-readable medium of claim 12, wherein the transformation is one of an affine transformation or an interpolation transformation.

14. The non-transitory computer-readable medium of claim 13, wherein parameters of the interpolation transformation are signaled in the coded mesh bitstream.

15. The non-transitory computer-readable medium of claim 13, wherein parameters of the affine transformation are determined based on the vertex and the projected vertex.

16. The non-transitory computer-readable medium of claim 9, wherein the operations further comprise: reconstructing mapping information of the dynamic mesh from a mapping component bitstream in the coded mesh bitstream, the reconstructed mapping information comprising data specifying a list of vertex attribute coordinates in the attribute image; and refining the reconstructed mapping information based on the reconstructed geometry information to generate refined mapping information, wherein reconstructing the dynamic mesh is further based on the refined mapping information.

17. A system comprising: a processing device; and a non-transitory computer-readable medium communicatively coupled to the processing device, wherein the processing device is configured to execute program code stored in the non-transitory computer-readable medium and thereby perform operations comprising: reconstructing geometry information of a dynamic mesh from a geometry component bitstream in a coded mesh bitstream of the dynamic mesh, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

18. The system of claim 17, wherein refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information comprises dividing a face out of the plurality of faces specified by the reconstructed connectivity information into two faces based on a vertex of the plurality of vertices specified in the reconstructed geometry information.

19. The system of claim 17, wherein refining the reconstructed connectivity information based on the reconstructed geometry information comprises: identifying a vertex of the plurality of vertices that is located inside a face of the plurality of faces; determining a projected vertex of the vertex on an edge of the face; and dividing the face into two refined faces, each refined face having the projected vertex as one vertex.

20. The system of claim 19, wherein refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image comprises: determining a transformation based on the vertex and the projected vertex; and applying an inverse of the transformation to a face in the reconstructed attribute image that corresponds to the face.

21. A computer-implemented method for encoding three-dimensional (3D) content represented by a dynamic mesh, the method comprising: converting geometry information from a list of vertices of the dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

22. The computer-implemented method of claim 21, wherein determining the face to be removed from connectivity component images of the dynamic mesh comprises: accessing four vertices of two neighboring faces of the dynamic mesh described by the reconstructed geometry component images; identifying a first vertex of the four vertices based on a distance of the first vertex to an edge formed by a second vertex and a third vertex of the four vertices; determining that a first orientation of a first face of the two neighboring faces is the same as a second orientation of a second face of the two neighboring faces; and in response to determining that the first orientation is the same as the second orientation, determining that the first face and the second face containing the first vertex are to be removed from the connectivity component images.

23. The computer-implemented method of claim 22, wherein updating the connectivity component images of the dynamic mesh further comprises merging the two neighboring faces into one face specified by the second vertex, the third vertex and a fourth vertex of the four vertices.

24. The computer-implemented method of claim 23, wherein updating the mapping component images comprises: identifying a mapping sample of the mapping component images associated with the first vertex; determining a transformation filter based on the first vertex and a vertex in the list of vertices that corresponds to the first vertex; and applying the transformation filter to a first mapping sample and the mapping sample associated with the first vertex.

25. The computer-implemented method of claim 21, wherein: encoding the geometry component images is performed via lossy video encoding.

26. The computer-implemented method of claim 21, wherein: encoding the updated connectivity component images is performed via lossless video encoding.

27. The computer-implemented method of claim 21, wherein: encoding the updated mapping component images is performed via lossless video encoding.

28. A non-transitory computer-readable medium having program code that is stored thereon, the program code executable by one or more processing devices for performing operations comprising: converting geometry information from a list of vertices of a dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

29. The non-transitory computer-readable medium of claim 28, wherein determining the face to be removed from connectivity component images of the dynamic mesh comprises: accessing four vertices of two neighboring faces of the dynamic mesh described by the reconstructed geometry component images; identifying a first vertex of the four vertices based on a distance of the first vertex to an edge formed by a second vertex and a third vertex of the four vertices; determining that a first orientation of a first face of the two neighboring faces is the same as a second orientation of a second face of the two neighboring faces; and in response to determining that the first orientation is the same as the second orientation, determining that the first face and the second face containing the first vertex are to be removed from the connectivity component images.

30. The non-transitory computer-readable medium of claim 29, wherein updating the connectivity component images of the dynamic mesh further comprises merging the two neighboring faces into one face specified by the second vertex, the third vertex and a fourth vertex of the four vertices.

31. The non-transitory computer-readable medium of claim 30, wherein updating the mapping component images comprises: identifying a mapping sample of the mapping component images associated with the first vertex; determining a transformation filter based on the first vertex and a vertex in the list of vertices that corresponds to the first vertex; and applying the transformation filter to a first mapping sample and the mapping sample associated with the first vertex.

32. The non-transitory computer-readable medium of claim 28, wherein: encoding the geometry component images is performed via lossy video encoding.

33. The non-transitory computer-readable medium of claim 28, wherein: encoding the updated connectivity component images is performed via lossless video encoding.

34. The non-transitory computer-readable medium of claim 28, wherein: encoding the updated mapping component images is performed via lossless video encoding.

35. A system comprising: a processing device; and a non-transitory computer-readable medium communicatively coupled to the processing device, wherein the processing device is configured to execute program code stored in the non-transitory computer-readable medium and thereby perform operations comprising: converting geometry information from a list of vertices of a dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

36. The system of claim 35, wherein determining the face to be removed from connectivity component images of the dynamic mesh comprises: accessing four vertices of two neighboring faces of the dynamic mesh described by the reconstructed geometry component images; identifying a first vertex of the four vertices based on a distance of the first vertex to an edge formed by a second vertex and a third vertex of the four vertices; determining that a first orientation of a first face of the two neighboring faces is the same as a second orientation of a second face of the two neighboring faces; and in response to determining that the first orientation is the same as the second orientation, determining that the first face and the second face containing the first vertex are to be removed from the connectivity component images.

37. The system of claim 36, wherein updating the connectivity component images of the dynamic mesh further comprises merging the two neighboring faces into one face specified by the second vertex, the third vertex and a fourth vertex of the four vertices.

38. The system of claim 37, wherein updating the mapping component images comprises: identifying a mapping sample of the mapping component images associated with the first vertex; determining a transformation filter based on the first vertex and a vertex in the list of vertices that corresponds to the first vertex; and applying the transformation filter to a first mapping sample and the mapping sample associated with the first vertex.

39. The system of claim 35, wherein: encoding the geometry component images is performed via lossy video encoding.

40. The system of claim 35, wherein: encoding the updated connectivity component images is performed via lossless video encoding; and encoding the updated mapping component images is performed via lossless video encoding.

Description:
DYNAMIC MESH CODING WITH SIMPLIFIED TOPOLOGY

Cross-Reference to Related Applications

[0001] This application claims priority to U.S. Provisional Application No. 63/266,176, entitled “Topology Coding Method and Apparatus for Dynamic Mesh Coding,” filed on December 29, 2021, U.S. Provisional Application No. 63/295,138, entitled “Attribute Adjustment Method and Apparatus for Dynamic Mesh Coding with Simplified Topology,” filed on December 30, 2021, and U.S. Provisional Application No. 63/266,175, entitled “Attribute Reconstruction Filter for Dynamic Mesh Coding with Simplified Topology,” filed on December 29, 2021, all of which are hereby incorporated in their entireties by this reference.

Technical Field

[0002] This disclosure relates generally to computer-implemented methods and systems for dynamic mesh coding. Specifically, the present disclosure involves dynamic mesh coding with simplified topology.

Background

[0003] 3D graphics technologies are integrated in various applications, such as entertainment applications, engineering applications, manufacturing applications, and architecture applications. In these various applications, 3D graphics may be used to generate 3D models of incredible detail and complexity. Given the detail and complexity of the 3D models, the data sets associated with the 3D models can be extremely large. Furthermore, these extremely large data sets may be transferred, for example, through the Internet. Transfer of large data sets, such as those associated with detailed and complex 3D models, can therefore become a bottleneck in various applications. As illustrated by this example, developments in 3D graphics technologies provide improved utility to various applications but also present technological challenges. Improvements to 3D graphics technologies, therefore, represent improvements to the various technological applications to which 3D graphics technologies are applied. Thus, there is a need for technological improvements to address these and other technological problems related to 3D graphics technologies.

Summary

[0004] Some embodiments involve dynamic mesh coding with simplified topology. In one example, a computer-implemented method for decoding a coded mesh bitstream of a dynamic mesh representing three-dimensional (3D) content includes reconstructing geometry information of the dynamic mesh from a geometry component bitstream in the coded mesh bitstream, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

[0005] In another example, a non-transitory computer-readable medium has program code that is stored thereon and the program code is executable by one or more processing devices for performing operations. The operations include reconstructing geometry information of a dynamic mesh from a geometry component bitstream in a coded mesh bitstream of the dynamic mesh, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

[0006] In another example, a system includes a processing device and a non-transitory computer-readable medium communicatively coupled to the processing device. The processing device is configured to execute program code stored in the non-transitory computer-readable medium and thereby perform operations. The operations include reconstructing geometry information of a dynamic mesh from a geometry component bitstream in a coded mesh bitstream of the dynamic mesh, the reconstructed geometry information comprising data specifying a plurality of vertices of the dynamic mesh; reconstructing connectivity information of the dynamic mesh from a connectivity component bitstream in the coded mesh bitstream, the reconstructed connectivity information comprising data specifying a plurality of faces of the dynamic mesh; refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information; reconstructing an attribute image of the dynamic mesh from an attribute component bitstream in the coded mesh bitstream, the reconstructed attribute image comprising image content to be applied to faces of the dynamic mesh; refining the reconstructed attribute image based on the reconstructed geometry information to generate refined attribute image; reconstructing the dynamic mesh based, at least in part, upon the reconstructed geometry information, the refined connectivity information, and the refined attribute image; and causing the reconstructed dynamic mesh to be rendered for display.

[0007] In another example, a computer-implemented method for encoding three- dimensional (3D) content represented by a dynamic mesh includes converting geometry information from a list of vertices of the dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

[0008] In another example, a non-transitory computer-readable medium has program code that is stored thereon and the program code is executable by one or more processing devices for performing operations. The operations include converting geometry information from a list of vertices of a dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

[0009] In another example, a system includes a processing device and a non-transitory computer-readable medium communicatively coupled to the processing device. The processing device is configured to execute program code stored in the non-transitory computer-readable medium and thereby perform operations. The operations include converting geometry information from a list of vertices of a dynamic mesh to a sequence of geometry component images; encoding the sequence of geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream; decoding the geometry component bitstream to generate reconstructed geometry component images; determining a face to be removed from connectivity component images of the dynamic mesh, the face containing a vertex in the reconstructed geometry component images; updating the connectivity component images of the dynamic mesh by removing the face from the connectivity component images; encoding the updated connectivity component images to generate a connectivity component bitstream; updating, based on updating the connectivity component images, mapping component images of the dynamic mesh; encoding the updated mapping component images to generate a mapping component bitstream; and generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream.

[0010] These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.

Brief Description of the Drawings

[0011] Features, embodiments, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.

[0012] FIG. 1 illustrates an example encoder system for mesh coding, according to various embodiments of the present disclosure.

[0013] FIG. 2 illustrates an example decoder system for mesh decoding, according to various embodiments of the present disclosure.

[0014] FIG. 3 illustrates example mesh frames associated with color-per-vertex approaches, according to various embodiments of the present disclosure.

[0015] FIG. 4 illustrates an example of a mesh frame and its underlying defining data associated with color-per-vertex approaches and a corresponding 3D content, according to various embodiments of the present disclosure.

[0016] FIG. 5 illustrates example mesh frames associated with 3D coding approaches using vertex maps, according to various embodiments of the present disclosure.

[0017] FIG. 6 illustrates an example of data defining a mesh frame, a corresponding 3D content, and a corresponding attribute map associated with 3D coding approaches using attribute mapping, according to various embodiments of the present disclosure.

[0018] FIG. 7 illustrates an example associated with determining face orientation in various 3D coding approaches, according to various embodiments of the present disclosure.

[0019] FIG. 8 illustrates an example of an encoder system for mesh coding with connectivity simplification, according to various embodiments of the present disclosure.

[0020] FIG. 9 shows an example of the degeneration of faces in a mesh caused by lossy encoding, according to various embodiments of the present disclosure.

[0021] FIG. 10 provides an example of connectivity simplification, according to some embodiments of the present disclosure.

[0022] FIG. 11 illustrates an example decoder system for mesh decoding with connectivity refinement, according to various embodiments of the present disclosure.

[0023] FIG. 12 shows an example of the connectivity refinement process, according to some embodiments of the present disclosure.

[0024] FIG. 13 shows another example of the connectivity refinement process, according to some embodiments of the present disclosure.

[0025] FIG. 14 depicts an example of a process for mesh coding with connectivity simplification, according to some embodiments of the present disclosure.

[0026] FIG. 15 depicts an example of a process for decoding a coded mesh bitstream with connectivity refinement, according to some embodiments of the present disclosure.

[0027] FIG. 16 illustrates an example of an encoder system for mesh coding with connectivity simplification and attribute and mapping adjustment, according to various embodiments of the present disclosure.

[0028] FIG. 17 shows examples of the affine transformations, according to various embodiments of the present disclosure.

[0029] FIG. 18 illustrates an example decoder system for mesh decoding with connectivity refinement and mapping refinement, according to various embodiments of the present disclosure.

[0030] FIG. 19 depicts an example of a process for mesh coding with connectivity simplification and attribute and mapping adjustment, according to some embodiments of the present disclosure.

[0031] FIG. 20 depicts an example of a process for decoding a coded mesh bitstream with connectivity refinement and mapping refinement, according to some embodiments of the present disclosure.

[0032] FIG. 21 illustrates an example of an encoder system for mesh coding with connectivity simplification and attribute and mapping adjustment, according to various embodiments of the present disclosure.

[0033] FIG. 22 illustrates an example decoder system for mesh decoding with connectivity refinement, mapping refinement, and attribute refinement, according to various embodiments of the present disclosure.

[0034] FIG. 23 depicts an example of a process for mesh coding with connectivity simplification and mapping adjustment without attribute adjustment, according to some embodiments of the present disclosure.

[0035] FIG. 24 depicts an example of a process for decoding a coded mesh bitstream with connectivity refinement, mapping refinement, and attribute refinement, according to some embodiments of the present disclosure.

[0036] FIG. 25 depicts an example of a computing system that can be used to implement some embodiments of the present disclosure.

Detailed Description

[0037] Various embodiments provide dynamic mesh coding with simplified topology to improve coding efficiency. Due to the lossy nature of mesh encoding, some information may be distorted during the encoding, such as the geometry information which describes the coordinates of the vertices of the mesh. As such, the reconstructed positions of the decoded vertices of the mesh may differ from the original positions of the vertices. As a result, neighboring faces may degenerate into simplified geometry. Such degeneration can be utilized to simplify the topology information contained in the connectivity data without introducing significant additional distortion, thereby reducing the amount of data to be encoded and increasing the coding efficiency. Various embodiments described herein involve the encoding and decoding of a dynamic mesh with simplified connectivity information.

[0038] The following non-limiting examples are provided to introduce some embodiments. In one embodiment, a mesh encoder accesses a dynamic mesh to be encoded. The dynamic mesh may be represented as an uncompressed mesh frame sequence that includes mesh frames. Each mesh frame includes at least one mesh tile or mesh slice, which includes data that describes 3D content (e.g., 3D objects) in a digital representation as a collection of geometry, connectivity, attribute, and attribute mapping information. The encoder can extract an attribute component (containing color information), a geometry component (containing a list of vertex coordinates), a connectivity component (containing a list of faces with corresponding vertex indices and texture indices), and a mapping component (containing a list of projected vertex attribute coordinate information) from the uncompressed mesh frame sequence.
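
For illustration only, the four extracted components can be pictured with the following data layout. This is a minimal Python sketch; the class, field, and accessor names are hypothetical and are not part of any bitstream syntax.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MeshFrameComponents:
    # Geometry component: one (x, y, z) coordinate per vertex.
    geometry: List[Tuple[float, float, float]] = field(default_factory=list)
    # Connectivity component: one (v0, v1, v2) triple of vertex indices per triangular face.
    connectivity: List[Tuple[int, int, int]] = field(default_factory=list)
    # Mapping component: one (u, v) projected attribute coordinate per texture vertex.
    mapping: List[Tuple[float, float]] = field(default_factory=list)
    # Attribute component: a 2D color image sampled through the mapping coordinates.
    attribute_image: object = None

def extract_components(mesh_frame) -> MeshFrameComponents:
    # Hypothetical accessor: pulls the four components out of one mesh frame object.
    return MeshFrameComponents(
        geometry=list(mesh_frame.vertices),
        connectivity=list(mesh_frame.faces),
        mapping=list(mesh_frame.texture_coords),
        attribute_image=mesh_frame.texture,
    )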

[0039] The encoder encodes the geometry component and further decodes the encoded geometry component to generate the reconstructed geometry information. The encoder determines vertices in the reconstructed geometry information that can be removed from the connectivity component due to the degeneration caused by encoding distortions. The encoder further identifies neighboring faces that can be merged if those vertices were removed. The encoder updates the connectivity component of the dynamic mesh to simplify the connectivity information, replacing the original faces with the merged faces. The encoder encodes the updated connectivity component to generate a connectivity component bitstream. The encoder further encodes the other components and generates a coded mesh bitstream by including the encoded components.
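
As an illustration of how such mergeable face pairs might be identified, the following Python sketch finds a shared-edge vertex that has nearly collapsed onto the edge between the two outer vertices of a pair of neighboring triangles and, if the face orientations agree, produces the merged face. The distance threshold and candidate-selection strategy are assumptions made for illustration, not the normative encoder behavior.

import numpy as np

def point_to_edge_distance(p, a, b):
    # Distance from point p to the edge (a, b); p, a, b are 3D numpy arrays.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def face_normal(a, b, c):
    return np.cross(b - a, c - a)

def can_merge(verts, tri_a, tri_b, threshold):
    # tri_a and tri_b are index triples of two faces that share an edge.
    shared = [i for i in tri_a if i in tri_b]          # endpoints of the shared edge
    if len(shared) != 2:
        return None
    outer_a = next(i for i in tri_a if i not in shared)
    outer_b = next(i for i in tri_b if i not in shared)
    # A shared-edge endpoint that (nearly) collapses onto the edge between the two
    # outer vertices makes the two faces degenerate into a single triangle.
    for drop, keep in ((shared[0], shared[1]), (shared[1], shared[0])):
        if point_to_edge_distance(verts[drop], verts[outer_a], verts[outer_b]) > threshold:
            continue
        # Merge only if the two faces have consistent orientation (normals roughly agree).
        n_a = face_normal(verts[tri_a[0]], verts[tri_a[1]], verts[tri_a[2]])
        n_b = face_normal(verts[tri_b[0]], verts[tri_b[1]], verts[tri_b[2]])
        if np.dot(n_a, n_b) > 0:
            return (outer_a, outer_b, keep)            # merged face; 'drop' is removed
    return None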

[0040] By merging faces and removing vertices, the connectivity component contains less data, and thus the encoded connectivity component requires fewer bits to represent than the original connectivity component. As a result, the size of the overall coded mesh bitstream is also smaller than that of a mesh bitstream coded without the connectivity simplification. In addition, because the to-be-removed vertices are selected in a way that leads to small distortions (e.g., a vertex having the smallest distance to a face edge is selected for removal), the visual artifacts introduced through the simplification process are minimized. As such, the coded mesh bitstream can be decoded using a regular mesh decoder without any knowledge that the coded mesh bitstream was generated with simplified connectivity information.

[0041] In another embodiment, the coded mesh bitstream with simplified connectivity can be decoded using a mesh decoder with connectivity refinement to further reduce the distortions and visual artifacts introduced by the connectivity simplification at the encoding side. To perform the connectivity refinement, the mesh decoder can be configured with a connectivity refinement module to recover the faces that have been merged during the connectivity simplification. Based on the reconstructed geometry information, the decoder identifies vertices located within a face specified by the reconstructed connectivity information and divides the face into two or more faces according to the locations of the vertices to generate refined connectivity information. The refined connectivity information is used to reconstruct the dynamic mesh along with other decoded components of the mesh. Because of the refinement process, the connectivity information used to reconstruct the mesh is closer to the connectivity information before the connectivity simplification at the encoding. As a result, the reconstructed mesh has smaller distortions and fewer visual artifacts than the mesh decoded by a regular mesh decoder.
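
The refinement step described above can be sketched as follows. The sketch assumes the candidate vertex is split onto the nearest edge of the face; the exact edge-selection rule and tolerance are implementation choices, not requirements of the disclosure.

import numpy as np

def project_onto_edge(p, a, b):
    # Orthogonal projection of vertex p onto edge (a, b), clamped to the segment.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def refine_face(face, verts, candidate):
    # 'face' is an index triple from the reconstructed connectivity; 'candidate' is the
    # index of a reconstructed vertex that lies inside (or on the boundary of) the face.
    i0, i1, i2 = face
    best = None
    for a, b, c in ((i0, i1, i2), (i1, i2, i0), (i2, i0, i1)):   # each edge with its opposite vertex
        q = project_onto_edge(verts[candidate], verts[a], verts[b])
        d = np.linalg.norm(verts[candidate] - q)
        if best is None or d < best[0]:
            best = (d, a, b, c)
    _, a, b, c = best
    # The projected vertex (represented here by the candidate index) becomes a vertex
    # of both refined faces, preserving the original winding order.
    return [(a, candidate, c), (candidate, b, c)]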

[0042] In another embodiment, in addition to the connectivity simplification, the mesh encoder can further reproject the attribute image in accordance with the distortions introduced by geometry information encoding and reconstruction. The distortions can be described with a transformation applied to a point that belongs to the edge between the two neighboring triangles that degenerate and are merged during the connectivity simplification. As such, the reprojection process can be implemented as the same transformation applied to the face in the attribute image that corresponds to the degenerated triangular face. Likewise, the mapping information can also be updated to be consistent with the connectivity information. Updating the attribute image and the mapping information in this way reduces the visual artifacts caused by the geometry information reconstruction error.
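
Assuming an affine model, the transformation can be estimated from the corners of the affected face before and after the vertex displacement. The following sketch solves for a 2x3 affine matrix from three point correspondences; the affine assumption and the corner-based estimation are illustrative, and the actual reprojection filter is encoder-dependent.

import numpy as np

def estimate_affine_2d(src_pts, dst_pts):
    # Exact 2x3 affine transform mapping three 2D source points (e.g., the UV corners
    # of a face before the vertex displacement) onto three destination points (the
    # corners after the displacement).
    src = np.asarray(src_pts, dtype=float)            # shape (3, 2)
    dst = np.asarray(dst_pts, dtype=float)            # shape (3, 2)
    A = np.hstack([src, np.ones((3, 1))])             # homogeneous source coordinates
    M = np.linalg.solve(A, dst).T                     # solve A @ M.T = dst
    return M                                           # apply as: M @ [u, v, 1]

# Example with invented coordinates: one corner moves from (0.4, 0.1) to (0.5, 0.1).
M = estimate_affine_2d([(0.0, 0.0), (1.0, 0.0), (0.4, 0.1)],
                       [(0.0, 0.0), (1.0, 0.0), (0.5, 0.1)])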

[0043] In a further embodiment, instead of updating the attribute image according to the geometry information reconstruction distortion at the encoder side, the attribute image is refined at the decoder side. A process similar to the previous embodiment can be utilized to estimate the transformation, and an inverse of the estimated transformation can be applied to the decoded attribute image before the attribute image is used to reconstruct the dynamic mesh. Compared with the previous embodiment, where the attribute image is updated at the encoder, this embodiment allows the mesh to be encoded faster with less computational resource consumption. The burden is shifted to the decoder, where timing requirements are less stringent and computing capability is typically greater than at the encoder. As such, this embodiment is more suitable for applications such as live streaming.
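
Continuing the affine assumption from the previous sketch, the decoder-side refinement can be pictured as resampling the decoded attribute patch through an affine map derived from the estimated transformation. Nearest-neighbour resampling and the helper below are illustrative only; a practical decoder would use the interpolation filter implied or signaled by the bitstream.

import numpy as np

def invert_affine(M):
    # Invert a 2x3 affine matrix by extending it to 3x3 homogeneous form.
    H = np.vstack([M, [0.0, 0.0, 1.0]])
    return np.linalg.inv(H)[:2, :]

def warp_patch(patch, out_to_src, out_shape):
    # 'out_to_src' is a 2x3 affine map from refined-patch coordinates to coordinates
    # in the decoded patch; each output sample copies the nearest source sample.
    out = np.zeros(out_shape + patch.shape[2:], dtype=patch.dtype)
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            sx, sy = out_to_src @ np.array([x, y, 1.0])
            sxi, syi = int(round(sx)), int(round(sy))
            if 0 <= syi < patch.shape[0] and 0 <= sxi < patch.shape[1]:
                out[y, x] = patch[syi, sxi]
    return out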

[0044] Descriptions of the various embodiments provided herein may include one or more of the terms listed below. For illustrative purposes and not to limit the disclosure, exemplary descriptions of the terms are provided herein.

Mesh: a collection of vertices, edges, and faces that may define the shape/topology of a polyhedral object. The faces may include triangles (e.g., triangle mesh).

Mesh slice: a collection of vertices, edges, and faces that may define the shape/topology of a polyhedral object. A mesh frame consists of several mesh slices.

Dynamic mesh: a mesh with at least one of various possible components (e.g., connectivity, geometry, mapping, vertex attribute, and attribute map) varying in time.

Animated mesh: a dynamic mesh with constant connectivity.

Connectivity: a set of vertex indices describing how to connect the mesh vertices to create a 3D surface (e.g., geometry and all the attributes may share the same unique connectivity information).

Geometry: a set of vertex 3D (e.g., x, y, z) coordinates describing positions associated with the mesh vertices. The coordinates (e.g., x, y, z) representing the positions may have finite precision and dynamic range.

Mapping: a description of how to map the mesh surface to 2D regions of the plane. Such mapping may be described by a set of UV parametric/texture (e.g., mapping) coordinates associated with the mesh vertices together with the connectivity information.

Vertex attribute: scalar or vector attribute values associated with the mesh vertices.

Attribute map: attributes associated with the mesh surface and stored as 2D images/videos. The mapping between the videos (e.g., parametric space) and the surface may be defined by the mapping information.

Vertex: a position (e.g., in 3D space) along with other information such as color, normal vector, and texture coordinates.

Edge: a connection between two vertices.

Face: a closed set of edges in which a triangle face has three edges defined by three vertices. The orientation of the face may be determined using a "right-hand" coordinate system.

Orientation of a face: defined by the vertex order in the face. Some transpositions of the vertices in the face are allowed, while other transpositions lead to a different orientation.

Surface: a collection of faces that separates the three-dimensional object from the environment.

Connectivity Coding Unit (CCU): a square unit of size N x N connectivity coding samples that carry connectivity information.

Connectivity coding sample: a coding element of the connectivity information calculated as a difference of elements between a current face and a predictor face.

Block: a representation of the mesh segment as a collection of connectivity coding samples represented as three attribute channels. A block may consist of CCUs.

Bits per point (bpp): an amount of information, in terms of bits, which may be required to describe one point in the mesh.

[0045] 3D content, such as 3D graphics, can be represented as a mesh (e.g., 3D mesh content). The mesh can include vertices, edges, and faces that describe the shape or topology of the 3D content. The mesh can be segmented into blocks (e.g., segments, tiles). For each block, the vertex information associated with each face can be arranged in order (e.g., descending order). With the vertex information associated with each face arranged in order, the faces are arranged in order (e.g., ascending order). The sorted faces in each block can be packed into two-dimensional (2D) frames. Sorting the vertex information can guarantee an increasing order of vertex indices, facilitating improved processing of the mesh. Components of the connectivity information in the 3D mesh content can be transformed from one-dimensional (1D) connectivity components (e.g., a face list) to 2D connectivity images (e.g., a connectivity coding sample array). With the connectivity information in the 3D mesh content transformed to 2D connectivity images, video encoding processes can be applied to the 2D connectivity images (e.g., as video connectivity frames). In this way, 3D mesh content can be efficiently compressed and decompressed by leveraging video encoding solutions. 3D mesh content encoded in accordance with these approaches can be efficiently decoded. Connectivity components can be extracted from a coded dynamic mesh bitstream and decoded as a frame (e.g., image). Connectivity coding samples, which correspond to pixels in the frame, are extracted. The 3D mesh content can be reconstructed from the extracted connectivity information.
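
A rough Python sketch of this packing step follows. It sorts the vertex indices within each face, sorts the faces, and writes the resulting triples into a three-channel 2D array; the ordering rules, channel layout, and image width used here are illustrative assumptions rather than the normative packing.

import numpy as np

def faces_to_connectivity_image(faces, width):
    # Sort indices within each face (descending), then sort the faces (ascending),
    # and pack each face as one three-channel sample of a 2D connectivity image.
    ordered = sorted(tuple(sorted(f, reverse=True)) for f in faces)
    height = -(-len(ordered) // width)                  # ceiling division
    img = np.zeros((height, width, 3), dtype=np.uint32)
    for k, (v0, v1, v2) in enumerate(ordered):
        img[k // width, k % width] = (v0, v1, v2)
    return img

# Example: three faces packed into an image two samples wide.
print(faces_to_connectivity_image([(0, 1, 2), (1, 2, 3), (0, 2, 3)], width=2))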

[0046] A coded bitstream for a dynamic mesh is represented as a collection of components, which is composed of a mesh bitstream header and a data payload. The mesh bitstream header can include the sequence parameter set, picture parameter set, adaptation parameters, tile information parameters, supplemental enhancement information, etc. The mesh bitstream payload can include the coded atlas information component (auxiliary information required to convert the local coordinate system of the block to the global coordinate system of the mesh frame), the coded attribute information component, the coded geometry (position) information component, the coded mapping information component, and the coded connectivity information component.
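
Purely as an illustration of this header-plus-payload organization (and not the syntax defined by this disclosure or any standard), a toy multiplexer might concatenate the header with tagged, length-prefixed component payloads:

import struct

def multiplex(header: bytes, components: dict) -> bytes:
    # Toy container: 4-byte big-endian length before the header, then one record per
    # component consisting of a 4-character tag, a 4-byte length, and the payload.
    out = bytearray()
    out += struct.pack(">I", len(header)) + header
    for tag, payload in components.items():
        out += tag.encode("ascii")[:4].ljust(4, b"\0")
        out += struct.pack(">I", len(payload)) + payload
    return bytes(out)

# Example with placeholder payloads for the five components named above.
bitstream = multiplex(b"header", {
    "ATLS": b"...", "ATTR": b"...", "GEOM": b"...", "MAPC": b"...", "CONN": b"...",
})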

[0047] FIG. 1 illustrates an example encoder system 100 for mesh coding, according to various embodiments of the present disclosure. As illustrated in FIG. 1, an uncompressed mesh frame sequence 102 can be input to the encoder system 100, and the example encoder system 100 can generate a coded mesh frame sequence 124 based on the uncompressed mesh frame sequence 102. In general, a mesh frame sequence includes mesh frames. A mesh frame is a data format that describes 3D content (e.g., 3D objects) in a digital representation as a collection of geometry, connectivity, attribute, and attribute mapping information. Each mesh frame is characterized by a presentation time and duration. A mesh frame sequence (e.g., sequence of mesh frames) forms a dynamic mesh video.

[0048] As illustrated in FIG. 1, the encoder system 100 can generate coded mesh sequence/picture header information 106 based on the uncompressed mesh frame sequence 102. The coded mesh sequence/picture header information 106 can include picture header information such as sequence parameter set (SPS), picture parameter set (PPS), slice header (SH) and supplemental enhancement information (SEI). A mesh bitstream header 132 can include the coded mesh sequence/picture header information 106. The uncompressed mesh frame sequence 102 can be input to the mesh segmentation module 104. The mesh segmentation module 104 segments the uncompressed mesh frame sequence 102 into block data and segmented mesh data. A mesh bitstream payload 130 can include the block data and the segmented mesh data. The mesh bitstream header 132 and the mesh bitstream payload 130 can be multiplexed together by the multiplexer 122 to generate the coded mesh frame sequence 124.

[0049] The encoder system 100 can include a block segmentation information module 108 to generate block segmentation information (e.g., atlas information) based on the block data. Based on the segmented mesh data, the encoder system 100 can generate an uncompressed attribute component using an attribute image composition module 110, an uncompressed geometry component using a geometry image composition module 112, an uncompressed connectivity component using a connectivity image composition module 114, and an uncompressed mapping component using a mapping image composition module 116. As illustrated in FIG. 1, the connectivity image composition module 114 and the mapping image composition module 116 can also use the block segmentation information generated by the block segmentation information module 108 when generating the respective components. As an example of the information generated, the block segmentation information can include binary atlas information. The attribute image component can include RGB and YUV component information (e.g., RGB 4:4:4, YUV 4:2:0). The geometry component can include the 3D coordinates of the vertex information in a canonical or local coordinate system (e.g., XYZ 4:4:4, XYZ 4:2:0). The connectivity component can include vertex indices and texture vertex information (e.g., dv0, dv1, dv2 4:4:4). The mapping component can include texture vertex information (e.g., UV 4:4:X). These generated components may be represented as images.

[0050] The block segmentation information can be provided to a binary entropy coder 118 to generate the atlas component. The binary entropy coder 118 may be a lossless coder, which allows the encoded information to be recovered without any distortion. The uncompressed attribute component generated by the attribute image composition module 110 and represented as images can be provided to a video coder 120a to generate the coded attribute component. The video coder 120a may be a lossy coder, where the encoded information may not be fully recovered at the decoder side. Similarly, the geometry component represented as images can be provided to a video coder 120b to generate the coded geometry component. The video coder 120b may also be a lossy encoder. The connectivity component represented as images can be provided to a video coder 120c to generate the coded connectivity component. The video coder 120c may be a lossless encoder. The mapping component represented as images can be provided to a video coder 120d to generate the coded mapping component. The video coder 120d may be a lossless encoder. The video coders 120a-120d may be any video or image encoder that can compress the information in a video sequence or images to reduce the size of the video, such as the H.264 video encoder, H.265 video encoder, H.266 video encoder, JPEG image encoder, and so on. The video coders 120a-120d may use the same type or different types of video encoders. A mesh bitstream payload 130 can include the atlas component, the attribute component, the geometry component, the connectivity component, and the mapping component. The mesh bitstream payload and the mesh bitstream header are multiplexed together by the multiplexer 122 to generate the coded mesh frame sequence 124.
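
The lossy/lossless split described above can be summarized as a per-component coder configuration. The codec names and settings below are examples consistent with this paragraph, not requirements:

# Illustrative coder assignment per component (example values only).
CODER_CONFIG = {
    "atlas":        {"coder": "binary-entropy", "lossless": True},
    "attribute":    {"coder": "H.265",          "lossless": False},  # lossy video coding
    "geometry":     {"coder": "H.265",          "lossless": False},  # lossy video coding
    "connectivity": {"coder": "H.265",          "lossless": True},   # lossless configuration
    "mapping":      {"coder": "H.265",          "lossless": True},   # lossless configuration
}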

[0051] FIG. 2 illustrates an example decoder system 200 for mesh decoding, according to various embodiments of the present disclosure. As illustrated in FIG. 2, a coded mesh frame sequence 224, such as the coded mesh frame sequence 124 generated by the encoder system 100 in FIG. 1, can be input to the decoder system 200. The coded mesh frame sequence 224 may be a bitstream. The example decoder system 200 can generate a reconstructed mesh frame sequence 202 based on the coded mesh frame sequence 224.

[0052] As illustrated in FIG. 2, the decoder system 200 de-multiplexes the coded mesh frame sequence 224 using a de-multiplexer 222 to identify various components of the coded information, including the coded mesh sequence/picture/slice header information 206 and the coded block segmentation information, which can be decoded using an entropy decoder 218. The de-multiplexed information further includes the coded geometry component, the coded connectivity component, the coded mapping component, and the coded attribute component. The various identified coded components can be decoded using video decoders 220a-220d corresponding to the respective video encoders used to encode the information, as indicated in the coded mesh sequence header 106, such as the video coders 120a-120d. Similar to the video coders 120a-120d, the video decoders 220a-220d can be any video decoder or image decoder.

[0053] The video-decoded data can further be processed using the respective processing modules, such as the attribute image decoding module 210, the geometry image decoding module 212, the connectivity image decoding module 214, and the mapping image decoding module 216. These decoding modules convert the decoded video data into the respective formats of the data. For example, for geometry data, the decoded images in the video can be reformatted back into canonical XYZ 3D coordinates to generate the geometry data. Likewise, the decoded connectivity video/images can be reformatted into connectivity coding samples dv0, dv1, dv2 to generate the decoded connectivity data; the decoded mapping video/images can be reformatted into UV coordinates to generate the decoded mapping data; and the decoded attribute video/images can be used to generate the RGB or YUV attribute data of the mesh.

[0054] The geometry reconstruction module 232 reconstructs the geometry information from the decoded 3D coordinates; the connectivity reconstruction module 234 reconstructs the topology (e.g., faces) from the decoded connectivity data; and the mapping reconstruction module 236 reconstructs the attribute mapping from the decoded mapping data. With the reconstructed geometry information, faces, mapping data, attribute data, and the decoded mesh sequence/picture header information 206, a mesh reconstruction module 226 reconstructs the mesh to generate the reconstructed mesh frame sequence 202.

[0055] FIGS. 3-7 illustrate examples associated with coding and decoding information for a mesh, according to various embodiments of the present disclosure. In various approaches to coding 3D content, geometry, attribute, and connectivity information are encoded in mesh frames. For example, in color-per-vertex approaches, attribute information is stored with the geometry information, and connectivity information is stored in mesh frames with associated vertex indices. FIG. 3 illustrates example mesh frames 300 associated with color-per-vertex approaches, according to various embodiments of the present disclosure. As illustrated in FIG. 3, geometry and attribute information 302 can be stored in mesh frames as an ordered list of vertex coordinate information. Each vertex coordinate is stored with corresponding geometry and attribute information. Connectivity information 304 can be stored in mesh frames as an ordered list of face information, with each face including corresponding vertex indices and texture indices.

[0056] FIG. 4 illustrates an example 400 of a mesh frame 402, its underlying defining data 406 associated with color-per-vertex approaches, and a corresponding 3D content 404, according to various embodiments of the present disclosure. As illustrated in mesh frame 402 and defined in the corresponding data 406, geometry coordinates with associated attribute information, as well as connectivity information, are stored in a mesh frame. The geometry and attribute information is stored as an ordered list of vertex geometry coordinates with associated attribute values, and the connectivity information is stored as an ordered list of face information with corresponding vertex indices. The geometry and attribute information illustrated in mesh frame 402 includes four vertices. The positions of the vertices are indicated by X, Y, Z coordinates, and color attributes are indicated by a_1, a_2, a_3 values that represent the R, G, B primary color values. The connectivity information illustrated in mesh frame 402 includes three faces. Each face includes three vertex indices listed in the geometry and attribute information to form a triangle face. By using the vertex indices of each face to point to the geometry and attribute information stored for each vertex, the 3D content 404 (e.g., a 3D triangle) can be decoded based on the mesh frame 402.
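The following Python sketch illustrates this color-per-vertex layout; the coordinate and color values are hypothetical and chosen only to mirror the four-vertex, three-face structure of mesh frame 402:

    # Ordered list of vertices: (X, Y, Z, a_1, a_2, a_3), where a_1..a_3 are the R, G, B values.
    vertices = [
        (0.0, 0.0, 0.0, 255, 0, 0),
        (1.0, 0.0, 0.0, 0, 255, 0),
        (0.0, 1.0, 0.0, 0, 0, 255),
        (0.0, 0.0, 1.0, 255, 255, 0),
    ]
    # Ordered list of faces: each face references three vertex indices.
    faces = [(0, 1, 2), (0, 2, 3), (0, 3, 1)]

    # Decoding: each face index points back to the stored geometry and attribute data.
    for i0, i1, i2 in faces:
        print(vertices[i0], vertices[i1], vertices[i2])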

[0057] FIG. 5 illustrates example uncompressed mesh frames 500 associated with 3D coding approaches using texture maps, according to various embodiments of the present disclosure. As illustrated in FIG. 5, geometry information 502 can be stored in mesh frames as an ordered list of vertex coordinate information. Each vertex coordinate is stored with corresponding geometry information. Attribute information 504 can be stored in mesh frames, separate from the geometry information 502, as an ordered list of projected vertex attribute coordinate information. The projected vertex attribute coordinate information is stored as 2D coordinate information with corresponding attribute information. Connectivity information 506 can be stored in mesh frames as an ordered list of face information, with each face including corresponding vertex indices and texture indices. In some examples, the mesh frames are formatted according to the Wavefront OBJ file format.
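As a non-limiting sketch of such a frame in the Wavefront OBJ convention (the specific values and the helper function write_obj_frame are hypothetical), geometry records (v), projected attribute coordinate records (vt), and face records (f) referencing vertex/texture index pairs can be emitted as follows:

    def write_obj_frame(vertices, tex_coords, faces):
        """vertices: [(x, y, z)]; tex_coords: [(u, v)]; faces: [((vi, ti), (vi, ti), (vi, ti))]."""
        lines = ["v {} {} {}".format(x, y, z) for x, y, z in vertices]
        lines += ["vt {} {}".format(u, v) for u, v in tex_coords]
        for face in faces:
            # OBJ indices are 1-based.
            lines.append("f " + " ".join("{}/{}".format(vi + 1, ti + 1) for vi, ti in face))
        return "\n".join(lines)

    print(write_obj_frame([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                          [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                          [((0, 0), (1, 1), (2, 2))]))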

[0058] FIG. 6 illustrates an example 600 of data 602 defining a mesh frame, a corresponding 3D content 604, and a corresponding attribute map 606 associated with 3D coding approaches using attribute mapping, according to various embodiments of the present disclosure. As illustrated in FIG. 6, geometry information, mapping information (e.g., attribute mapping information), and connectivity information are stored in the mesh frame generated based on information described in data 602. The geometry information contained in the mesh frame includes four vertices. The positions of the vertices are indicated by X, Y, Z coordinates. The mapping information in the mesh frame includes five texture vertices. The positions of the texture vertices are indicated by U, V coordinates. The connectivity information in the mesh frame includes three faces. Each face includes three pairs of vertex indices and texture vertex coordinates. As illustrated in FIG. 6, by using the pairs of vertex indices and texture vertex coordinates for each face, the 3D content 604 (e.g., the object formed by the triangles in the 3D space) and the attribute map 606 can be decoded based on the mesh frame. Attribute information associated with the attribute map 606 can then be applied to the 3D content 604. In this example, the geometry coordinates are normalized on a scale from -1.0 to +1.0 for each axis, and the attribute mapping coordinates are normalized on a scale from 0.0 to +1.0. The coordinates are first converted by the mesh encoder from a floating-point value to a fixed-point representation with a given bit-depth and then compressed by the mesh encoder.
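A minimal Python sketch of this float-to-fixed-point conversion is given below; the function name and the clamping behavior are assumptions made for illustration:

    def to_fixed_point(value, bit_depth, lo=-1.0, hi=1.0):
        """Map a normalized float in [lo, hi] to an unsigned integer with bit_depth bits."""
        levels = (1 << bit_depth) - 1
        clamped = min(max(value, lo), hi)
        return round((clamped - lo) / (hi - lo) * levels)

    # Geometry axes are normalized to [-1.0, +1.0]; attribute mapping axes to [0.0, +1.0].
    print(to_fixed_point(0.25, 10))            # geometry coordinate -> 639
    print(to_fixed_point(0.25, 10, 0.0, 1.0))  # mapping coordinate  -> 256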

[0059] FIG. 7 illustrates an example 700 associated with determining face orientation in various 3D coding approaches, according to various embodiments of the present disclosure. As illustrated in FIG. 7, face orientation can be determined using a right-hand coordinate system. Each face illustrated in the example 700 includes three vertices, forming three edges. Each face is described by the three vertices. In a manifold mesh 702, each edge belongs to at most two different faces. In a non-manifold mesh 704, an edge can belong to more than two different faces. In both cases of the manifold mesh 702 and the non-manifold mesh 704, the right-hand coordinate system can be applied to determine the orientation of a face, which may also be referred to as a normal vector direction or a face normal direction.
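A minimal Python sketch of computing a face normal under the right-hand rule (counter-clockwise vertex order yields the outward normal) is:

    import numpy as np

    def face_normal(v0, v1, v2):
        """Unit normal of triangle (v0, v1, v2); right-hand rule on the vertex order."""
        n = np.cross(np.asarray(v1, float) - np.asarray(v0, float),
                     np.asarray(v2, float) - np.asarray(v0, float))
        norm = np.linalg.norm(n)
        return n / norm if norm > 0 else n

    print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # counter-clockwise -> [0. 0. 1.]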

[0060] FIG. 8 illustrates an example of an encoder system 800 for mesh coding with connectivity simplification, according to various embodiments of the present disclosure. The encoder system 800 shown in FIG. 8 includes various components or modules that are similar to those in FIG. 1. In addition, the encoder system 800 includes a geometry reconstruction module 802, a reconstructed geometry analysis module 804, and a connectivity image simplification module 806 to simplify the connectivity information before encoding to improve the coding efficiency.

[0061] For example, the geometry reconstruction module 802 can decode the encoded geometry component generated by the video coder 120b back to the geometry component images. In examples where the video coder 120b is a lossy encoder, due to prediction and quantization, the reconstructed geometry component images will be different from the geometry component images. As such, the reconstructed positions of the decoded vertices of the mesh may differ from the original positions of the vertices of the mesh. As a result, neighboring faces may degenerate into simplified geometry. FIG. 9 shows an example of the topology degeneration. In FIG. 9, the original mesh 902 includes five triangular faces: A, B, C, D, and E. After the reconstruction, the positions of the reconstructed vertices in the reconstructed mesh 904 are shifted compared to those of the vertices in the original mesh 902. In this example, the geometry primitive in the original mesh 902 degenerates to a planar or nearly planar surface. This planar surface can be represented using two faces: faces 908A (approximately the union of faces A and B) and 908B (approximately the union of faces C, D, and E), or faces 910A (approximately the union of faces A and C) and 910B (approximately the union of faces B, D, and E). The reconstruction error 906 shows the position shifting of the vertices in the reconstructed mesh 904 compared to the original mesh 902.

[0062] Referring back to FIG. 8, based on the decoded/reconstructed geometry component images, the reconstructed geometry analysis module 804 can analyze the geometry information to determine vertices that can be removed from the connectivity component images thereby simplifying the connectivity component images. To determine a vertex to be removed, four vertices of a pair of neighboring faces or triangles of the mesh can be analyzed. In some examples, the two neighboring triangles have the same orientation. The reconstructed geometry analysis module 804 can form a triangle using three of the vertices with the fourth vertex inside the triangle. The distance from the fourth vertex to each of the three edges of the triangle can be determined. The smallest distance is determined among the three distances. If the smallest distance is below a pre-determined threshold, the vertex can be selected as a vertex to be removed. In some examples, the threshold may be adaptively defined by the encoder system 800 or passed as a user input to the encoder system 800. The generated triangle can be used as the triangle to be encoded. In other words, the pair of neighboring triangles can be merged into one triangle. This process can be repeated for other pairs of neighboring triangles.
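A minimal Python sketch of this merge test follows; the helper names are hypothetical, and the perpendicular point-to-edge distance is used here as one of the distance definitions discussed in this disclosure:

    import numpy as np

    def point_to_edge_distance(p, a, b):
        """Perpendicular distance from point p to the edge (segment) ab."""
        p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def can_merge(p, v1, v2, v3, threshold):
        """True if the smallest distance from the inner vertex p to an edge of triangle
        (v1, v2, v3) is below the threshold, so the neighboring faces can be merged."""
        smallest = min(point_to_edge_distance(p, v1, v2),
                       point_to_edge_distance(p, v2, v3),
                       point_to_edge_distance(p, v3, v1))
        return smallest < threshold

    print(can_merge((0.49, 0.02, 0.0), (0, 0, 0), (1, 0, 0), (0, 1, 0), 0.05))  # True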

[0063] The reconstructed geometry analysis module 804 provides the analysis results to the connectivity image simplification module 806 so that the connectivity component images can be simplified. The analysis results may include the vertices that are selected to be removed and, in some examples, the triangles to be merged. The connectivity image simplification module 806 can remove the selected vertices from the connectivity component images and update the connectivity component images to replace the data for the pairs of neighboring triangles with the corresponding merged triangles. The simplified connectivity component images may be provided to the video coder 120c for encoding as discussed above with respect to FIG. 1. The multiplexer 122 may then multiplex the encoded information to generate the coded mesh frame sequence 810 in a way similar to that described with respect to FIG. 1. Because of the connectivity simplification, the coded mesh frame sequence 810 is different from, and has a smaller size than, the coded mesh frame sequence 124 generated using the encoder system 100. While FIG. 8 shows three separate modules involved in the connectivity simplification (i.e., the geometry reconstruction module 802, the reconstructed geometry analysis module 804, and the connectivity image simplification module 806), any number of modules may be employed to implement the connectivity simplification.

[0064] FIG. 10 provides an example of the connectivity simplification, according to some embodiments of the present disclosure. In this example, sub-figure (A) shows the original geometry information, which specifies four vertices P, V1, V2, and V3. Sub-figure (B) shows the original geometry information along with the original connectivity information, which defines two neighboring triangles based on these four vertices: triangle P_V2_V3 and triangle P_V1_V2. These two triangles have the same counterclockwise orientation. Sub-figure (C) shows the reconstructed geometry and connectivity information. The reconstructed geometry information includes vertices P’, V1’, V2’, and V3’, which may have coordinates different from those of the respective original vertices due to lossy compression by the video coder. Among the four reconstructed vertices, V1’, V2’, and V3’ can form a triangle 1008 with the fourth reconstructed vertex P’ inside it.

[0065] The distance from P’ to each of the three edges of the triangle 1008 is calculated. In some examples, the distance is measured as the length of a line from the vertex to the edge that is perpendicular to the edge. For example, the distance from P’ to edge V1’V3’ can be determined as the length of line 1010, which is perpendicular to edge V1’V3’ and connects P’ to that edge. Alternatively, or additionally, the distance can be defined as the length of the line which is generated by extending the line connecting the vertex opposite to the edge and P’ until it intersects with the edge. In this example, the distance from P’ to edge V1’V3’ can be determined as the length of line 1012 (i.e., line P’P’’) generated by extending the line V2’P’ until it intersects the edge V1’V3’ at point P’’. The distance from P’ to the other two edges V1’V2’ and V2’V3’ can be calculated in a similar way. The smallest distance among the three distances can be selected and compared with a pre-determined threshold to determine whether the point P’ can be removed or excluded from encoding. If the smallest distance is smaller than the pre-determined threshold, then the faces associated with the point P’ can be marked as to be removed or excluded from connectivity encoding. In such a case, the two triangles P_V2_V3 and P_V1_V2 can be merged into one triangle V1’V2’V3’, which also preserves the counterclockwise orientation of the original two triangles. The faces associated with point P’ that are inside the merged triangle are excluded from the connectivity component images. With this simplification, the reconstructed connectivity information only includes one triangle, and the reconstructed geometry information includes the three vertices of the triangle and a vertex P’ inside the triangle, as shown in sub-figure (D). In some examples, only triangles with the same orientation can be merged.
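A minimal Python sketch of the second distance definition, i.e., projecting P’ onto an edge by extending the ray from the opposite vertex through P’, is given below; it solves the line intersection in a least-squares sense, which is exact when the points are coplanar, and the function name and values are hypothetical:

    import numpy as np

    def project_through_opposite_vertex(p, v_opp, e0, e1):
        """Intersection point P'' of the ray from v_opp through p with the edge (e0, e1)."""
        p, v_opp, e0, e1 = (np.asarray(x, float) for x in (p, v_opp, e0, e1))
        d_ray = p - v_opp      # ray direction: from the opposite vertex through P'
        d_edge = e1 - e0       # edge direction
        # Solve v_opp + t * d_ray = e0 + s * d_edge for (t, s).
        A = np.stack([d_ray, -d_edge], axis=1)
        t, s = np.linalg.lstsq(A, e0 - v_opp, rcond=None)[0]
        return e0 + s * d_edge

    v1, v2, v3 = (0.0, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 0.0, 0.0)
    p_prime = (0.5, 0.3, 0.0)
    print(project_through_opposite_vertex(p_prime, v2, v1, v3))  # P'' = [0.5, 0., 0.]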

[0066] The coded mesh frame sequence 810 can be stored or transmitted and be decoded using the decoder system 200 as described above with respect to FIG. 2. In other words, the coded mesh frame sequence 810 with connectivity simplification can be decoded by a decoder system without taking into account that the connectivity information has been simplified.

[0067] In another embodiment, the decoder system can be configured to reconstruct the original connectivity information based on the simplified connectivity information. This can be performed by using the vertices decoded from the geometry information which is not simplified at the time of encoding. As such, the decoding process can include the demultiplexer of the decoder extracting corresponding components from the coded mesh bitstream including the geometry component bitstream, attribute component bitstream, connectivity component bitstream, and the mapping component bitstream. Each of the components of the coded mesh bitstream is then processed with a decoder for the corresponding component. For example, the attribute component can be decoded by a video decoder indicated in the mesh bitstream header. Similarly, the other components can also be decoded by the corresponding video decoder indicated in the mesh bitstream header. Based on the extracted components, a reconstruction process is applied to generate the reconstructed mesh frame.

[0068] In some examples, the decoding process includes the following steps: obtaining a coded mesh bitstream from a network or a storage source, demultiplexing components of the coded bitstream, and decoding, in parallel or otherwise independently, the geometry component bitstream, the attribute component bitstream, the connectivity component bitstream, and the mapping component bitstream. The decoding process further includes refining the decoded connectivity information to reconstruct or recover the original connectivity information that has been simplified, by subdividing edges of faces or triangles that contain a vertex inside them. The refined and decoded components can be used to reconstruct the mesh frames. By refining the decoded connectivity information, the distortions and visual artifacts in the decoded mesh frames that were introduced by the connectivity simplification can be reduced.

[0069] FIG. 11 illustrates an example decoder system 1100 for mesh decoding with connectivity refinement, according to various embodiments of the present disclosure. The decoder system 1100 shown in FIG. 11 includes various components or modules that are similar to those in FIG. 2. In addition, the decoder system 1100 includes a connectivity refinement module 1102 to recover the connectivity information that was simplified at the encoder before the connectivity information is sent to the mesh reconstruction module 226 for reconstructing the mesh frame sequence 1104.

[0070] In some examples, the connectivity refinement module 1102 obtains the geometry information decoded by the geometry reconstruction module 232 from the geometry component bitstream. Vertex coordinates (X, Y, Z) are reconstructed using the decoded geometry and atlas components. The connectivity refinement module 1102 further obtains the connectivity information decoded from the connectivity component bitstream by the connectivity reconstruction module 234. The connectivity refinement module 1102 can mark the vertices from the geometry component that belong to the face list in the decoded connectivity information as visited, and mark the vertices that do not belong to the face list as non-visited.

[0071] For each non-visited vertex P’, the connectivity refinement module 1102 finds a face or triangle inside which P’ is located. The connectivity refinement module 1102 projects the vertex P’ onto the nearest edge of the triangle to find a projection vertex P’’. In some examples, the nearest edge is identified through the minimum Euclidean distance between the vertex and the edge. The projection vertex P’’ is defined as the intersection of the ray cast from the vertex of the triangle opposite the nearest edge, through P’, with that edge. The connectivity refinement module 1102 subdivides the triangle into two triangles, both with the same orientation as the original triangle. If there is more than one point inside the triangle, the connectivity refinement module 1102 can iterate this process recursively, starting from the point that has the maximum Euclidean distance to its nearest edge. The updated triangles are then used for further subdivision. The refined connectivity information can be used to reconstruct the mesh frames along with the other decoded components.
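A minimal planar (2D) Python sketch of this refinement loop is given below; the helper names are hypothetical, the projection here uses the perpendicular foot on the nearest edge rather than the ray construction described above, and vertices are processed in index order rather than in order of decreasing distance:

    import numpy as np

    def _foot_on_segment(p, a, b):
        """Perpendicular foot of p on segment ab, clamped to the segment."""
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return a + t * ab

    def _inside(p, a, b, c, eps=1e-9):
        """Containment test for a planar (2D) triangle, independent of winding."""
        def cross(o, u, v):
            return (u[0] - o[0]) * (v[1] - o[1]) - (v[0] - o[0]) * (u[1] - o[1])
        d = [cross(a, b, p), cross(b, c, p), cross(c, a, p)]
        return all(x >= -eps for x in d) or all(x <= eps for x in d)

    def refine_connectivity(vertices, faces):
        """Split every face that contains a non-referenced ('non-visited') vertex into two
        faces sharing that vertex's projection onto the nearest edge, keeping the winding."""
        verts = [np.asarray(v, float) for v in vertices]
        faces = [tuple(f) for f in faces]
        referenced = {i for f in faces for i in f}
        for idx in range(len(vertices)):
            if idx in referenced:
                continue
            p = verts[idx]
            for fi, (i0, i1, i2) in enumerate(faces):
                a, b, c = verts[i0], verts[i1], verts[i2]
                if not _inside(p, a, b, c):
                    continue
                # Nearest edge by minimum Euclidean distance from the inner vertex.
                edges = [((i0, i1), i2), ((i1, i2), i0), ((i2, i0), i1)]
                (ei, ej), k = min(edges, key=lambda e: np.linalg.norm(
                    p - _foot_on_segment(p, verts[e[0][0]], verts[e[0][1]])))
                verts.append(_foot_on_segment(p, verts[ei], verts[ej]))  # projection vertex P''
                pi = len(verts) - 1
                # Replace the face with two faces that keep the original orientation.
                faces[fi:fi + 1] = [(ei, pi, k), (pi, ej, k)]
                break
        return [tuple(v) for v in verts], faces

    # FIG. 12-style planar example: one merged face (0, 1, 2) with vertex 3 inside it.
    new_verts, new_faces = refine_connectivity(
        [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0), (0.5, 0.1)], [(0, 1, 2)])
    print(new_faces)  # [(2, 4, 1), (4, 0, 1)] -- two faces sharing projected vertex 4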

[0072] FIG. 12 shows an example of the decoder-side connectivity refinement process, according to some embodiments of the present disclosure. In FIG. 12, the decoded geometry 1202 includes four points, P’, V1’, V2’, and V3’, as shown in sub-figure (A). Without connectivity simplification, the four points would form two faces. Due to the simplification during encoding, these two faces have been merged, as indicated by sub-figure (B), which shows the decoded geometry and connectivity information. In sub-figure (B), V1’, V2’, and V3’ form a face/triangle with the point P’ inside this face. Because the connectivity information has been simplified at the encoder, the isolated point P’ inside the face indicates that the decoded triangle is a merged triangle formed from at least two triangles. To restore the triangles, the nearest edge for P’ in the triangle is identified, which is the edge V1’V3’ in this example. P’ is projected to this nearest edge to obtain the projection vertex P’’. The projection can be performed by extending the ray cast from the vertex opposite the edge through P’ to find the intersection with the edge, as shown in sub-figure (C) of FIG. 12. The triangle is then divided at P’’ into two triangles, each having the same orientation as the merged triangle. As such, the decoded geometry and refined connectivity information shown in sub-figure (D) includes the two triangles instead of the merged triangle.

[0073] FIG. 13 shows another example of the connectivity refinement process, according to some embodiments of the present disclosure. In FIG. 13, there is more than one point inside the decoded triangle. As shown in sub-figure (A), the decoded geometry information includes five points: V1’, V2’, V3’, P1’, and P2’. As shown in sub-figure (B), the decoded connectivity information indicates that one face, the triangle V1’V2’V3’, is encoded, with P1’ and P2’ inside this face. The nearest edge for P1’ is identified as V1’V3’, and the nearest edge for P2’ is identified as well, as shown in sub-figure (C) of FIG. 13. Between P1’ and P2’, P1’ has the larger distance to its nearest edge, and thus the refinement is performed for P1’ first. As shown in sub-figure (D), the projection vertex P1’’ is found for P1’ as the intersection of the ray cast from the opposite vertex through P1’ with the edge V1’V3’. The triangle is thus divided into two triangles, each having the same counter-clockwise orientation as the original triangle.

[0074] If any of the resulting triangles contains a point inside it, such a triangle will be further divided. In this example, one of the two triangles has P2’ inside it and thus will be further analyzed for division. As shown in sub-figure (E), the projection vertex P2’’ is found for P2’ as the intersection of the ray with P2’’s nearest edge. That triangle can thus be further divided into two triangles, both having the same orientation as the original triangle. Sub-figure (F) shows the reconstructed topology formed by the decoded geometry information with the refined connectivity information, where there are three reconstructed triangles instead of the single simplified triangle before the refinement.

[0075] Referring now to FIG. 14, FIG. 14 depicts an example of a process 1400 for mesh coding with connectivity simplification, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 14 by executing suitable program code. For example, the encoder system 800 in FIG. 8 may implement the operations depicted in FIG. 14 by executing the corresponding program code. For illustrative purposes, the process 1400 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0076] At block 1402, the process 1400 involves accessing a dynamic mesh to be encoded. As discussed above, the dynamic mesh may be represented as an uncompressed mesh frame sequence that includes mesh frames. A mesh frame is a data format that describes 3D content (e.g., 3D objects) in a digital representation as a collection of geometry, connectivity, attribute, and attribute mapping information. Each mesh frame is characterized by a presentation time and duration. A mesh frame sequence (e.g., sequence of mesh frames) forms a dynamic mesh video. The uncompressed mesh frame sequence can be segmented into segmented mesh data. Based on the segmented mesh data, the encoder system 800 can generate attribute component images, geometry component images, connectivity component images, and mapping component images.

[0077] At block 1404, the process 1400 involves encoding the geometry component images using a video encoder to generate a geometry component bitstream as described in detail with respect to FIGS. 1 and 8. At block 1406, the process 1400 involves decoding the encoded geometry component images to generate reconstructed geometry component images. The decoding is performed using a video decoder that corresponds to the video encoder used to encode the geometry component images. As discussed above, in examples where the video encoder (e.g., video encoder 120b) is a lossy encoder, due to prediction and quantization, the reconstructed geometry component images will be different from the geometry component images. As such, the reconstructed position of the decoded vertices of the mesh may differ from the original position of the vertices of the mesh. As a result, neighboring faces may degenerate into simplified geometry.

[0078] At block 1408, the process 1400 involves determining a vertex in the reconstructed geometry component images that belongs to the face(s) to be removed from connectivity component images of the dynamic mesh. To determine the face to be removed, four vertices of a pair of neighboring faces or triangles of the mesh can be analyzed. In some examples, the two neighboring triangles have the same orientation. A triangle can be formed using three of the vertices with the fourth vertex inside the triangle. The distance from the fourth vertex to each of the three edges of the triangle can be determined. The smallest distance is determined among the three distances. If the smallest distance is below a pre-determined threshold, these two neighboring faces can be selected as faces to be removed and replaced with a single face. The generated triangle can be used as the triangle to be encoded in place of the two neighboring triangles (i.e., the two neighboring triangles are merged into the generated triangle). This step can be repeated to identify multiple vertices to be removed and multiple pairs of neighboring triangles to be merged.

[0079] At block 1410, the process 1400 involves updating the connectivity component images of the dynamic mesh to simplify the connectivity information. For example, the simplification can include removing the faces identified in block 1408 from the connectivity component images and updating the connectivity component images to replace the data for the pairs of neighboring triangles with the corresponding merged triangles. At block 1412, the process 1400 involves encoding the updated connectivity component images to generate a connectivity component bitstream. As discussed above in detail with respect to FIGS. 1 and 8, the encoding may involve using a video encoder to generate the connectivity component bitstream. At block 1414, the process 1400 involves generating a coded mesh bitstream by including at least the geometry component bitstream and the connectivity component bitstream. For example, the coded mesh bitstream can be generated by multiplexing the mesh bitstream payload, which includes the geometry component bitstream and the connectivity component bitstream as well as other bitstreams such as the attribute component bitstream, the mapping component bitstream, and so on, with a mesh bitstream header.

[0080] Referring now to FIG. 15, FIG. 15 depicts an example of a process 1500 for decoding a coded mesh bitstream with connectivity refinement, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 15 by executing suitable program code. For example, the decoder system 1100 in FIG. 11 may implement the operations depicted in FIG. 15 by executing the corresponding program code. For illustrative purposes, the process 1500 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0081] At block 1502, the process 1500 involves accessing a coded mesh bitstream of a dynamic mesh for decoding. The coded mesh bitstream is encoded with connectivity simplification described above. The coded mesh bitstream can include a geometry component bitstream, an attribute component bitstream, a connectivity component bitstream, and a mapping component bitstream.

[0082] At block 1504, the process 1500 involves reconstructing geometry information of the dynamic mesh from the geometry component bitstream in the coded mesh bitstream. The reconstructed geometry information includes data specifying vertices of the dynamic mesh. As discussed above in detail with respect to FIGS. 2 and 11, the geometry information of the dynamic mesh can be reconstructed from the geometry component bitstream by applying a video decoder to the geometry component bitstream to generate reconstructed geometry component images, which are then used to reconstruct the ordered list of vertex coordinates in the geometry information.

[0083] At block 1506, the process 1500 involves reconstructing connectivity information of the dynamic mesh from the connectivity component bitstream in the coded mesh bitstream. The reconstructed connectivity information includes data specifying the faces (e.g., triangles) of the dynamic mesh. Similar to the geometry information, the connectivity information of the dynamic mesh can be reconstructed from the connectivity component bitstream by applying a video decoder to the connectivity component bitstream to generate reconstructed connectivity component images. The generated reconstructed connectivity component images are then used to reconstruct the ordered list of face information with the corresponding vertex index and, in some cases, the texture index. Other components of the coded mesh bitstream, such as the attribute component bitstream and the mapping component bitstream, can also be decoded and used to reconstruct the corresponding information.

[0084] At block 1508, the process 1500 involves refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information. As discussed above in detail, the refinement process divides a face specified by the reconstructed connectivity information into two faces based on a vertex specified in the reconstructed geometry information but not listed as a vertex of any faces specified by the reconstructed connectivity information. For example, a vertex that is located inside a face of the reconstructed connectivity information can be identified and its projected vertex on an edge of the face can be determined. The face can then be divided into two refined faces, each refined face having the projected vertex as one vertex and having the same orientation as the initially reconstructed face.

[0085] At block 1510, the process 1500 involves reconstructing the dynamic mesh based on the reconstructed geometry information and the refined connectivity information as well as other information including the attribute information, the mapping information, and so on. At block 1512, the process 1500 involves causing the reconstructed dynamic mesh to be rendered for display. For example, the reconstructed dynamic mesh can be transmitted to a device or a module configured to render the 3D object represented by the reconstructed dynamic mesh to generate rendered images or video for display.

[0086] FIG. 16 illustrates an example of an encoder system 1600 for mesh coding with connectivity simplification and attribute and mapping adjustment, according to various embodiments of the present disclosure. The encoder system 1600 shown in FIG. 16 includes various components or modules that are similar to those in FIG. 8. In addition, the encoder system 1600 includes a mapping topology update module 1602 and an attribute image update module 1604 to update the mapping topology and the attribute images, respectively, in addition to the connectivity simplification to reduce the distortion caused by the reconstruction errors of the geometry information. As shown in FIG. 10, because of the lossy compression of the geometry information, the reconstructed vertices may have different coordinates than the original coordinates. In this example, a vertex P becomes P’ after the encoding and reconstruction. To reduce the visual artifacts caused by this reconstruction error, the attribute information and the mapping information should be adjusted accordingly. For example, the attribute image update module 1604 can update the attribute component images to reflect the distortion caused by geometry encoding and reconstruction. To do so, the attribute image update module 1604 can employ an attribute reprojection process.

[0087] The distortions associated with an attribute image that were introduced by geometry compression error can be described as a transformation such as an affine transformation. The transformation is applied to a point that belongs to the edge between the two neighboring triangles that are degenerated and merged during the connectivity simplification, such as point P in FIG. 10. As such, the reprojection process can be implemented as the same transformation applied to corresponding attribute samples of each degenerated triangular face in the attribute image. The boundaries of the triangular face are defined by the vertex mapping coordinates (U, V) and the connectivity information as shown in FIG. 5. The transformation T() can be represented using a 4x4 matrix M, where

$$M = \begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03} \\ m_{10} & m_{11} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{22} & m_{23} \\ m_{30} & m_{31} & m_{32} & m_{33} \end{bmatrix} \quad (1)$$

The transformation T() can be an affine transformation. The affine transformation T() transforms point (or vector) p to point (or vector) p’. The transformation T() of point p to point p’ is obtained by performing the matrix-vector multiplication Mp:

$$p' = T(p) = Mp \quad (2)$$

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = M \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad (3)$$

One example of matrix M to implement a shear transformation T() is

$$M_{shear} = \begin{bmatrix} 1 & Sh_{xy} & Sh_{xz} & 0 \\ Sh_{yx} & 1 & Sh_{yz} & 0 \\ Sh_{zx} & Sh_{zy} & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (4)$$

For example, the shear transform parameters Sh can be derived from the vectors representing the points P’ and P shown in FIG. 10.
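A minimal numeric Python sketch of applying such a shear matrix to a point in homogeneous coordinates is given below; the parameter names follow the reconstructed form of Eqn. (4) above, and the numeric values are hypothetical:

    import numpy as np

    def shear_matrix(sh_xy=0.0, sh_xz=0.0, sh_yx=0.0, sh_yz=0.0, sh_zx=0.0, sh_zy=0.0):
        """4x4 homogeneous shear matrix in the form of Eqn. (4)."""
        return np.array([[1.0,   sh_xy, sh_xz, 0.0],
                         [sh_yx, 1.0,   sh_yz, 0.0],
                         [sh_zx, sh_zy, 1.0,   0.0],
                         [0.0,   0.0,   0.0,   1.0]])

    def transform_point(M, p):
        """p' = T(p) = M p in homogeneous coordinates, as in Eqns. (2) and (3)."""
        x, y, z, w = M @ np.array([p[0], p[1], p[2], 1.0])
        return np.array([x, y, z]) / w

    M = shear_matrix(sh_xy=0.2)                   # shear X by 0.2 * Y
    print(transform_point(M, (1.0, 1.0, 0.0)))    # -> [1.2 1.  0. ]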

[0088] Another example of matrix M to implement a rotation transformation T() is, for a rotation by an angle θ about each coordinate axis:

For Axis X:

$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

For Axis Y:

$$R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (5)$$

For Axis Z:

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

[0089] FIG. 17 shows examples of the affine transformations. Sub-figure (A) shows an example of the shear transformation for the 2-dimensional case and sub-figure (B) shows an example of the rotation transformation for the 2-dimensional case. The attribute image update module 1604 can select the format of the affine transformation (e.g., the shear transformation, the rotation transformation, or another transformation) and determine the parameters of the transformation based on the distortions associated with the lossy compression applied to the geometry component and the connectivity simplification.

[0090] Alternatively, or additionally, an interpolation transformation filter, such as a bilinear interpolation transformation filter or a Lanczos interpolation transformation filter, can be used to reproject the face of the attribute image corresponding to the degenerated face on the simplified topology. The parameters of the filter, such as the coefficients, positions, and length of the filter, can be signaled in the coded mesh bitstream or be an integral part of the encoder, such as a look-up table. For example, the parameters can be signaled using a supplemental enhancement information message or by indicating a filter index in a look-up table. The updated attribute component image can be provided to the video coder 120a for encoding as discussed above with respect to FIG. 1.
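A minimal Python sketch of one such filter, a bilinear interpolation used to resample the attribute image at a reprojected (U, V) position, is given below; the sampling convention and function name are assumptions made for illustration:

    import numpy as np

    def bilinear_sample(image, u, v):
        """Bilinearly sample an H x W attribute image at continuous pixel coordinates
        (u, v), where u is the column and v is the row."""
        h, w = image.shape[:2]
        u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
        fu, fv = u - u0, v - v0
        top = (1 - fu) * image[v0, u0] + fu * image[v0, u1]
        bot = (1 - fu) * image[v1, u0] + fu * image[v1, u1]
        return (1 - fv) * top + fv * bot

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(bilinear_sample(img, 1.5, 2.5))  # -> 11.5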

[0091] Similarly, the mapping topology update module 1602 can update the mapping component images in accordance with the connectivity simplification. The mapping topology update module 1602 can use the analysis results generated by the reconstructed geometry analysis module 804 which can include the vertices that are selected to be removed and, in some examples, the triangles to be merged. The mapping component coordinates (U,V) can be updated according to the changes of an attribute component image. The mapping coordinate associated with the vertex P in the original attribute image is updated with transformed mapping coordinates associated with vertex P’.

[0092] The updated mapping component images may be provided to the video coder 120d for encoding as discussed above with respect to FIG. 1. The multiplexer 122 may then multiplex the encoded information to generate the coded mesh frame sequence 1610 in a way similar to that described with respect to FIG. 1. Because of the attribute update and mapping update in addition to the connectivity simplification, the coded mesh frame sequence 1610 is different from the coded mesh frame sequence 124 generated using the encoder system 100 and the coded mesh frame sequence 810 generated using the encoder system 800.

[0093] FIG. 18 illustrates an example decoder system 1800 for mesh decoding with connectivity refinement and mapping refinement, according to various embodiments of the present disclosure. The decoder system 1800 shown in FIG. 18 includes various components or modules that are similar to those in FIG. 11. In addition, the decoder system 1800 includes a mapping refinement module 1802 to recover the mapping information that was updated at the encoder before the mapping information is sent to the mesh reconstruction module 226 for reconstructing the mesh frame sequence 1804.

[0094] In some examples, the mapping refinement module 1802 can adjust the position of the vertex P’ in the attribute component image according to equation (1). For example, the mapping refinement module 1802 can estimate the transformation defined in equation (1) with P being the reconstructed point position and P’ being the projected point position. Based on the estimated transformation, the transform defined in equation (6) is applied to the U and V coordinates of the reconstructed mapping component images:

$$\begin{bmatrix} u'' \\ v'' \\ 0 \\ 1 \end{bmatrix} = M \begin{bmatrix} u' \\ v' \\ 0 \\ 1 \end{bmatrix} \quad (6)$$

where (u’, v’) are the reconstructed mapping coordinates and (u’’, v’’) are the refined mapping coordinates. The refined mapping information can be used to reconstruct the mesh frames along with other decoded components.

[0095] Referring now to FIG. 19, FIG. 19 depicts an example of a process 1900 for mesh coding with connectivity simplification and attribute and mapping adjustment, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 19 by executing suitable program code. For example, the encoder system 1600 in FIG. 16 may implement the operations depicted in FIG. 19 by executing the corresponding program code. For illustrative purposes, the process 1900 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0096] At block 1902, the process 1900 involves accessing a dynamic mesh to be encoded. As discussed above, the dynamic mesh may be represented as an uncompressed mesh frame sequence that includes mesh frames. A mesh frame is a data format that describes 3D content (e.g., 3D objects) in a digital representation as a collection of geometry, connectivity, attribute, and attribute mapping information. Each mesh frame is characterized by a presentation time and duration. A mesh frame sequence (e.g., sequence of mesh frames) forms a dynamic mesh video. The uncompressed mesh frame sequence can be segmented into segmented mesh data. Based on the segmented mesh data, the encoder system 1600 can generate attribute component images, geometry component images, connectivity component images, and mapping component images.

[0097] At block 1904, the process 1900 involves encoding the geometry component images using a video encoder to generate a geometry component bitstream as described in detail with respect to FIG. 1. At block 1906, the process 1900 involves decoding the encoded geometry component images to generate reconstructed geometry component images. The decoding is performed using a video decoder that corresponds to the video encoder used to encode the geometry component images. As discussed above, in examples where the video encoder (e.g., video encoder 120b) is a lossy encoder, due to prediction and quantization, the reconstructed geometry component images will be different from the geometry component images. As such, the reconstructed position of the decoded vertices of the mesh may differ from the original position of the vertices of the mesh. As a result, neighboring faces may degenerate into simplified topology.

[0098] At block 1908, the process 1900 involves determining a face containing a vertex in the reconstructed geometry component images to be removed from connectivity component images of the dynamic mesh. At block 1910, the process 1900 involves updating the connectivity component images of the dynamic mesh to simplify the connectivity information. Blocks 1908 and 1910 are similar to block 1408 and block 1410, respectively, described with regard to FIG. 14.

[0099] At block 1912, the process 1900 involves updating the attribute component images and the mapping component images of the dynamic mesh. As discussed above in detail with respect to FIG. 16, the encoder system 1600 can estimate the transformation applied to the vertices in the degenerated geometry information by the lossy compression and apply the same transformation to the corresponding faces in the attribute images. The mapping component coordinates (U,V) can be updated according to the changes of an attribute component image. The mapping coordinate associated with the vertex P in the original attribute image is updated with transformed mapping coordinates associated with vertex P’.

[0100] At block 1914, the process 1900 involves encoding the updated connectivity component images to generate a connectivity component bitstream, encoding the updated attribute component images to generate an attribute component bitstream, and encoding the updated mapping component images to generate a mapping component bitstream. As discussed above in detail with respect to FIG. 1, the encoding may involve using video encoders to generate the respective component bitstreams. At block 1916, the process 1900 involves generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, the attribute component bitstream, and the mapping component bitstream. For example, the coded mesh bitstream can be generated by multiplexing the mesh bitstream payload, which includes the encoded component bitstreams, with a mesh bitstream header.

[0101] Referring now to FIG. 20, FIG. 20 depicts an example of a process 2000 for decoding a coded mesh bitstream with connectivity refinement and mapping refinement, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 20 by executing suitable program code. For example, the decoder system 1800 in FIG. 18 may implement the operations depicted in FIG. 20 by executing the corresponding program code. For illustrative purposes, the process 2000 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0102] At block 2002, the process 2000 involves accessing a coded mesh bitstream of a dynamic mesh for decoding. The coded mesh bitstream is encoded with connectivity simplification and attribute and mapping updates described above with respect to FIG. 19. The coded mesh bitstream can include a geometry component bitstream, an attribute component bitstream, a connectivity component bitstream, and a mapping component bitstream.

[0103] At block 2004, the process 2000 involves reconstructing geometry information of the dynamic mesh from the geometry component bitstream in the coded mesh bitstream. The reconstructed geometry information includes data specifying vertices of the dynamic mesh. As discussed above in detail with respect to FIG. 2, the geometry information of the dynamic mesh can be reconstructed from the geometry component bitstream by applying a video decoder to the geometry component bitstream to generate reconstructed geometry component images, which are then used to reconstruct the ordered list of vertex coordinates in the geometry information.

[0104] At block 2006, the process 2000 involves reconstructing connectivity information of the dynamic mesh from the connectivity component bitstream in the coded mesh bitstream. At block 2008, the process 2000 involves refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information. Blocks 2006 and 2008 are similar to block 1506 and block 1508, respectively, described with regard to FIG. 15.

[0105] At block 2010, the process 2000 involves reconstructing the mapping information from the mapping component bitstream and refining the reconstructed mapping information. As discussed above in detail with respect to FIG. 2, the mapping information of the dynamic mesh can be reconstructed from the mapping component bitstream by applying a video decoder to the mapping component bitstream to generate reconstructed mapping component images which are then used to reconstruct the ordered list of projected vertex attribute coordinate information. The reconstructed mapping information can be refined by the mapping refinement module 1802 via adjusting the position of the vertex P’ in the attribute component images according to equation (1). For example, the mapping refinement module 1802 can update the mapping component images to reflect the distortions caused by the topology simplification. The mapping refinement module 1802 can estimate the transformation defined in equation (1) with P being the reconstructed point position and P’ being the projected point position. Based on the estimated transformation, the transform defined in equation (6) is applied to U and V coordinates of the reconstructed mapping image component.

[0106] At block 2012, the process 2000 involves reconstructing the dynamic mesh based on the reconstructed geometry information, the refined connectivity information, and the refined mapping information as well as other information including the decoded attribute information and so on. At block 2014, the process 2000 involves causing the reconstructed dynamic mesh to be rendered for display. For example, the reconstructed dynamic mesh can be transmitted to a device or a module configured to render the 3D object represented by the reconstructed dynamic mesh to generate rendered images or video for display.

[0107] FIG. 21 illustrates an example of an encoder system 2100 for mesh coding with connectivity simplification and attribute and mapping adjustment, according to various embodiments of the present disclosure. The encoder system 2100 shown in FIG. 21 includes various components or modules that are similar to those in FIG. 16. Compared with FIG. 16, the encoder system 2100 does not include the attribute image update module 1604 for updating the attribute images. In other words, the attribute component images are encoded by the video encoder 120a. The multiplexer 122 can multiplex the encoded information to generate the coded mesh frame sequence 2110 in a way similar to that described with respect to FIG. 1. Because the attribute update is not performed at the encoder along with the connectivity simplification and the mapping update, the coded mesh frame sequence 2110 is different from the coded mesh frame sequence 124 generated using the encoder system 100, the coded mesh frame sequence 810 generated using the encoder system 800, and the coded mesh frame sequence 1610 generated using the encoder system 1600. Not updating the attribute image at the encoder side allows the encoder system 2100 to generate the coded mesh frame sequence 2110 faster than the encoder system 1600 shown in FIG. 16.

[0108] FIG. 22 illustrates an example decoder system 2200 for mesh decoding with connectivity refinement, mapping refinement, and attribute refinement, according to various embodiments of the present disclosure. The decoder system 2200 shown in FIG. 22 includes various components or modules that are similar to those in FIG. 18. In addition, the decoder system 2200 includes an attribute refinement module 2202 to refine and transform the decoded attribute image according to the distortions that occurred to the geometry information before the attribute image is sent to the mesh reconstruction module 226 for reconstructing the mesh frame sequence 2204.

[0109] In some examples, the attribute refinement module 2202 estimates the transformation applied to each degenerated triangular face in a way similar to the estimation of the transformation performed by the attribute image update module 1604. For example, the attribute refinement module 2202 can estimate the affine transformation T() according to Eqns. (1)-(5) with P being the original reconstructed point position and P’ being the projected point position. An inverse of the estimated affine transformation can be applied to the decoded attribute image. For example, if the estimated transformation is a shear transform with the shear transform parameters Sh, the inverse shear transform can be applied to the decoded attribute image according to the following:

$$\begin{bmatrix} x'' \\ y'' \\ z'' \\ 1 \end{bmatrix} = M_{shear}^{-1} \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} \quad (7)$$

Here, [x’ y’ z’] represents the reconstructed mapping component coordinates associated with the reconstructed vertex P’ of the decoded attribute image, and [x’’ y’’ z’’] represents the refined mapping component coordinates associated with the projected vertex P’’ of the refined attribute image.
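A minimal Python sketch of this inverse-shear refinement, reusing the shear matrix form reconstructed in Eqn. (4) above with hypothetical numeric values, is:

    import numpy as np

    def inverse_shear(M_shear, p_prime):
        """Apply the inverse of the estimated shear matrix to a reconstructed coordinate
        [x', y', z'] to obtain the refined coordinate [x'', y'', z''], as in Eqn. (7)."""
        p_h = np.array([p_prime[0], p_prime[1], p_prime[2], 1.0])
        x, y, z, w = np.linalg.inv(M_shear) @ p_h
        return np.array([x, y, z]) / w

    M_shear = np.array([[1.0, 0.2, 0.0, 0.0],
                        [0.0, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 0.0],
                        [0.0, 0.0, 0.0, 1.0]])
    print(inverse_shear(M_shear, (1.2, 1.0, 0.0)))  # recovers [1. 1. 0.]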

[0110] Alternatively, or in addition to the attribute refinement filter in Eqn. (7), a 1- or 2-dimensional interpolation transformation filter, such as a bilinear interpolation transformation filter or a Lanczos interpolation transformation filter, can be used to refine the face of the attribute component image corresponding to the degenerated face on the simplified topology. The parameters of the filter, such as the coefficients, positions, and size of the filter, can be signaled in the coded mesh bitstream. For example, the parameters can be signaled using a supplemental enhancement information message or by indicating a filter index in a look-up table. The refined attribute component image can be used to reconstruct the mesh frames along with other decoded components as discussed above with respect to FIG. 2.

[0111] Referring now to FIG. 23, FIG. 23 depicts an example of a process 2300 for mesh coding with connectivity simplification and mapping adjustment without attribute adjustment, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 23 by executing suitable program code. For example, the encoder system 2100 in FIG. 21 may implement the operations depicted in FIG. 23 by executing the corresponding program code. For illustrative purposes, the process 2300 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0112] At block 2302, the process 2300 involves accessing a dynamic mesh to be encoded. As discussed above, the dynamic mesh may be represented as an uncompressed mesh frame sequence that includes mesh frames. A mesh frame is a data format that describes 3D content (e.g., 3D objects) in a digital representation as a collection of geometry, connectivity, attribute, and attribute mapping information. Each mesh frame is characterized by a presentation time and duration. A mesh frame sequence (e.g., sequence of mesh frames) forms a dynamic mesh video. The uncompressed mesh frame sequence can be segmented into segmented mesh data. Based on the segmented mesh data, the encoder system 2100 can generate attribute component images, geometry component images, connectivity component images, and mapping component images.

[0113] At block 2304, the process 2300 involves encoding the geometry component images using a video encoder to generate a geometry component bitstream as described in detail with respect to FIG. 1. At block 2306, the process 2300 involves decoding the encoded geometry component images to generate reconstructed geometry component images. The decoding is performed using a video decoder that corresponds to the video encoder used to encode the geometry component images. As discussed above, in examples where the video encoder (e.g., video encoder 120b) is a lossy encoder, due to prediction and quantization, the reconstructed geometry component images will be different from the geometry component images. As such, the reconstructed position of the decoded vertices of the mesh may differ from the original position of the vertices of the mesh. As a result, neighboring faces may degenerate into simplified geometry.

[0114] At block 2308, the process 2300 involves determining a face containing a vertex in the reconstructed geometry component images to be removed from connectivity component images of the dynamic mesh. At block 2310, the process 2300 involves updating the connectivity component images of the dynamic mesh to simplify the connectivity information. Blocks 2308 and 2310 are similar to block 1408 and block 1410, respectively, described with regard to FIG. 14.

[0115] At block 2312, the process 2300 involves updating the mapping component images of the dynamic mesh. As discussed above in detail with respect to FIG. 21, the encoder system 2100 can update the mapping component images by adjusting mapping coordinates associated with the vertices for the simplified connectivity.

[0116] At block 2314, the process 2300 involves encoding the updated connectivity component images to generate a connectivity component bitstream and encoding the updated mapping component images to generate a mapping component bitstream. As discussed above in detail with respect to FIG. 1, the encoding may involve using video encoders to generate the respective component bitstreams. At block 2316, the process 2300 involves generating a coded mesh bitstream by including at least the geometry component bitstream, the connectivity component bitstream, and the mapping component bitstream. For example, the coded mesh bitstream can be generated by multiplexing the mesh bitstream payload, which includes the encoded component bitstreams, with a mesh bitstream header.

[0117] Referring now to FIG. 24, FIG. 24 depicts an example of a process 2400 for decoding a coded mesh bitstream with connectivity refinement, mapping refinement, and attribute refinement, according to some embodiments of the present disclosure. One or more computing devices implement operations depicted in FIG. 24 by executing suitable program code. For example, the decoder system 2200 in FIG. 22 may implement the operations depicted in FIG. 24 by executing the corresponding program code. For illustrative purposes, the process 2400 is described with reference to some examples depicted in the figures. Other implementations, however, are possible.

[0118] At block 2402, the process 2400 involves accessing a coded mesh bitstream of a dynamic mesh for decoding. The coded mesh bitstream is encoded with connectivity simplification and mapping updates without attribute updates as described above with respect to FIG. 23. The coded mesh bitstream can include a geometry component bitstream, an attribute component bitstream, a connectivity component bitstream, and a mapping component bitstream.

[0119] At block 2404, the process 2400 involves reconstructing geometry information of the dynamic mesh from the geometry component bitstream in the coded mesh bitstream. The reconstructed geometry information includes data specifying vertices of the dynamic mesh. As discussed above in detail with respect to FIG. 2, the geometry information of the dynamic mesh can be reconstructed from the geometry component bitstream by applying a video decoder to the geometry component bitstream to generate reconstructed geometry component images which are then used to reconstruct the ordered list of vertex coordinates in the geometry information.

[0120] At block 2406, the process 2400 involves reconstructing connectivity information of the dynamic mesh from the connectivity component bitstream in the coded mesh bitstream. At block 2408, the process 2400 involves refining the reconstructed connectivity information based on the reconstructed geometry information to generate refined connectivity information. Blocks 2406 and 2408 are similar to block 1506 and block 1508, respectively, described with regard to FIG. 15.

[0121] At block 2410, the process 2400 involves reconstructing the attribute image from the attribute component bitstream and refining the reconstructed attribute image. As discussed above in detail with respect to FIG. 2, the attribute component image of the dynamic mesh can be reconstructed from the attribute component bitstream by applying a video decoder to the attribute component bitstream to generate a reconstructed attribute image. The reconstructed attribute image can be refined by applying an inverse transformation to the face corresponding to a degenerated face in the connectivity information. The transformation can be estimated based on the decoded vertex in the geometry information and the reconstructed projected vertex in the geometry information.

[0122] At block 2412, the process 2400 involves reconstructing the dynamic mesh based on the reconstructed geometry information, the refined connectivity information, and the refined attribute image as well as other information including the decoded mapping information and so on. At block 2414, the process 2400 involves causing the reconstructed dynamic mesh to be rendered for display. For example, the reconstructed dynamic mesh can be transmitted to a device or a module configured to render the 3D object represented by the reconstructed dynamic mesh to generate rendered images or video for display.

[0123] Computing System Example

[0124] Any suitable computing system can be used for performing the operations described herein. For example, FIG. 25 depicts an example of a computing device 2500 that can implement the mesh encoder 100 of FIG. 1, the mesh decoder 200 of FIG. 2, the mesh encoder 800 of FIG. 8, the mesh decoder 1100 of FIG. 11, the mesh encoder 1600 of FIG. 16, the mesh decoder 1800 of FIG. 18, the mesh encoder 2100 of FIG. 21, the mesh decoder 2200 of FIG. 22. In some embodiments, the computing device 2500 can include a processor 2512 that is communicatively coupled to a memory 2514 and that executes computer-executable program code and/or accesses information stored in the memory 2514. The processor 2512 may comprise a microprocessor, an application-specific integrated circuit (“ASIC”), a state machine, or other processing device. The processor 2512 can include any of a number of processing devices, including one. Such a processor can include or may be in communication with a computer-readable medium storing instructions that, when executed by the processor 2512, cause the processor to perform the operations described herein.

[0125] The memory 2514 can include any suitable non-transitory computer-readable medium. The computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, memory chip, ROM, RAM, an ASIC, a configured processor, optical storage, magnetic tape or other magnetic storage, or any other medium from which a computer processor can read instructions. The instructions may include processor-specific instructions generated by a compiler and/or an interpreter from code written in any suitable computer programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript.

[0126] The computing device 2500 can also include a bus 2516. The bus 2516 can communicatively couple one or more components of the computing device 2500. The computing device 2500 can also include a number of external or internal devices such as input or output devices. For example, the computing device 2500 is shown with an input/output (“I/O”) interface 2518 that can receive input from one or more input devices 2520 or provide output to one or more output devices 2522. The one or more input devices 2520 and one or more output devices 2522 can be communicatively coupled to the I/O interface 2518. The communicative coupling can be implemented via any suitable manner (e.g., a connection via a printed circuit board, connection via a cable, communication via wireless transmissions, etc.). Non-limiting examples of input devices 2520 include a touch screen (e.g., one or more cameras for imaging a touch area or pressure sensors for detecting pressure changes caused by a touch), a mouse, a keyboard, or any other device that can be used to generate input events in response to physical actions by a user of a computing device. Non-limiting examples of output devices 2522 include an LCD screen, an external monitor, a speaker, or any other device that can be used to display or otherwise present outputs generated by a computing device.

[0127] The computing device 2500 can execute program code that configures the processor 2512 to perform one or more of the operations described above with respect to FIGS. 1-24. The program code can include the mesh encoder 100 of FIG. 1, the mesh decoder 200 of FIG. 2, the mesh encoder 800 of FIG. 8, the mesh decoder 1100 of FIG. 11, the mesh encoder 1600 of FIG. 16, the mesh decoder 1800 of FIG. 18, the mesh encoder 2100 of FIG. 21, or the mesh decoder 2200 of FIG. 22. The program code may be resident in the memory 2514 or any suitable computer-readable medium and may be executed by the processor 2512 or any other suitable processor.

[0128] The computing device 2500 can also include at least one network interface device 2524. The network interface device 2524 can include any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks 2528. Non-limiting examples of the network interface device 2524 include an Ethernet network adapter, a modem, and/or the like. The computing device 2500 can transmit messages as electronic or optical signals via the network interface device 2524.

[0129] General Considerations

[0130] Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.

[0131] Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.

[0132] The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.

[0133] Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied — for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Some blocks or processes can be performed in parallel.

[0134] The use of “adapted to” or “configured to” herein is meant as an open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.

[0135] While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.