Title:
REMESHING FOR EFFICIENT COMPRESSION
Document Type and Number:
WIPO Patent Application WO/2023/172457
Kind Code:
A1
Abstract:
A re-meshing pre-processor for re-meshing a 3D textured mesh M(i) to generate a base mesh m(i) and displacement field d(i) for input into a mesh encoder, can include a Mesh Decimation module that includes processing hardware that reduces the number of vertices or faces of input mesh M(i), or a mesh derived therefrom, while substantially preserving the shape of the input mesh M(i), thereby producing a decimated mesh dm(i) and a projected mesh P(i); and a Fitting Subdivision Surface module that includes processing hardware that processes the input mesh M(i), the decimated mesh dm(i) or a mesh derived therefrom, and the projected mesh P(i) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder.

Inventors:
MAMMOU KHALED (US)
TOURAPIS ALEXANDROS (US)
KIM JUNGSUN (US)
Application Number:
PCT/US2023/014516
Publication Date:
September 14, 2023
Filing Date:
March 03, 2023
Assignee:
APPLE INC (US)
International Classes:
G06T9/00; G06T17/20; H04N19/597
Foreign References:
EP3882859A1 (2021-09-22)
US201916586872A (2019-09-27)
USPP63197288P
US198762631972P
Other References:
A. Lee et al., "Displaced subdivision surfaces," Proceedings of SIGGRAPH 2000, New York, NY, USA, July 2000, pp. 85-94, XP059025634, DOI: 10.1145/344779.344829
G. J. Sullivan, "Adaptive Quantization Encoding Technique Using an Equal Expected-value Rule," Joint Video Team, JVT-N011, Hong Kong, January 2005
L. Ibarria and J. Rossignac, "Dynapack: space-time compression of the 3D animations of triangle meshes with fixed connectivity," Eurographics Symposium on Computer Animation, 2003, pp. 126-133
N. Stefanoski and J. Ostermann, "Connectivity-guided predictive compression of dynamic 3D meshes," IEEE International Conference on Image Processing, 2006, pp. 2973-2976, XP031049301
J.-H. Yang, C.-S. Kim, and S.-U. Lee, "Compression of 3-D triangle mesh sequences based on vertex-wise motion vector prediction," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 12, 2002, pp. 1178-1184, XP001141949, DOI: 10.1109/TCSVT.2002.806814
N. Stefanoski, P. Klie, X. Liu, and J. Ostermann, "Scalable linear predictive coding of time-consistent 3D mesh sequences," 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2007, pp. 1-4, XP031158215
N. Stefanoski, X. Liu, P. Klie, and J. Ostermann, "Layered predictive coding of time-consistent dynamic 3D meshes using a non-linear predictor," IEEE International Conference on Image Processing, 2007, pp. 109-112
L. Váša and V. Skala, "CODDYAC: Connectivity driven dynamic mesh compression," 3DTV International Conference: True Vision - Capture, Transmission and Display of 3D Video, Kos Island, Greece, 2007
M. Sattler, R. Sarlette, and R. Klein, "Simple and efficient compression of animation sequences," Eurographics Symposium on Computer Animation, 2005, pp. 209-217
I. Guskov and A. Khodakovsky, "Wavelet compression of parametrically coherent mesh sequences," Eurographics Symposium on Computer Animation, 2004, pp. 183-192, XP058329466, DOI: 10.1145/1028523.1028547
J.-W. Cho, M.-S. Kim, S. Valette, H.-Y. Jung, and R. Prost, "3D dynamic mesh compression using wavelet-based multiresolution analysis," IEEE International Conference on Image Processing, 2006, pp. 529-532, XP031048690
K. Mamou, T. Zaharia, and F. Preteux, "A skinning approach for dynamic 3D mesh compression," Computer Animation and Virtual Worlds, vol. 17, no. 3-4, July 2006, pp. 337-346
Marpe and J. Ostermann, "The new MPEG-4/FAMC standard for animated 3D mesh compression," 3DTV Conference (3DTV-CON 2008), Istanbul, Turkey, May 2008
K. Mamou, T. Zaharia, F. Preteux, A. Kamoun, F. Payan, and M. Antonini, "Two optimizations of the MPEG-4 FAMC standard for enhanced compression of animated 3D meshes," IEEE International Conference on Image Processing, 2008
K. Mammou, J. Kim, A. Tourapis, D. Podborski, and K. Kolarov, "[V-CG] Apple's Dynamic Mesh Coding CfP Response," ISO/IEC JTC1/SC29/WG7, M59281, April 2022
A. Tourapis, J. Kim, D. Podborski, and K. Mammou, "Base mesh data substream format for VDMC," ISO/IEC JTC1/SC29/WG7, M60362, July 2022
Attorney, Agent or Firm:
FLETCHER, Michael G. et al. (US)
Claims:
CLAIMS

1. A re-meshing pre-processor for re-meshing a 3D textured mesh M(i) to generate a base mesh m(i) and displacement field d(i) for input into a mesh encoder, the pre-processor comprising: a Mesh Decimation module comprising processing hardware that reduces the number of vertices or faces of input mesh M(i), or a mesh derived therefrom, while substantially preserving the shape of the input mesh M(i), thereby producing a decimated mesh dm(i) and a projected mesh P(i); and a Fitting Subdivision Surface module comprising processing hardware that processes the input mesh M(i), the decimated mesh dm(i) or a mesh derived therefrom, and the projected mesh P(i) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder.

2. The re-meshing pre-processor of claim 1 further comprising a Duplicated Vertex Removal module comprising processing hardware that removes duplicated vertices from the input mesh M(i) to produce a mesh with unified vertices UM(i) that is input into the Mesh Decimation module in lieu of the input mesh M(i).

3. The re-meshing pre-processor of claim 1 further comprising a Duplicated Triangles Removal module comprising processing hardware that receives as an input decimated mesh dm(i) and processes it to remove triangles that reference the same vertices.

4. The re-meshing pre-processor of claim 1 further comprising a Small Connected Components Removal module comprising processing hardware that detects and removes connected components having a number of vertices, number of triangles, or area below a determined threshold from the decimated mesh dm(i) or a mesh derived therefrom.

5. The re-meshing pre-processor of claim 1 further comprising an Atlas Parameterization module comprising processing hardware that reduces a number of patches of the decimated mesh dm(i) or a mesh derived therefrom.

6. The re-meshing pre-processor of claim 1 further comprising: a Duplicated Vertex Removal module comprising processing hardware that removes duplicated vertices from the input mesh M(i) to produce a mesh with unified vertices UM(i) that is input into the Mesh Decimation module in lieu of the input mesh M(i); a Duplicated Triangles Removal module comprising processing hardware that receives as an input decimated mesh dm(i) and processes it to remove triangles that reference the same vertices; a Small Connected Components Removal module comprising processing hardware that detects and removes connected components having a number of vertices, number of triangles, or area below a determined threshold from the decimated mesh dm(i) or a mesh derived therefrom; and an Atlas Parameterization module comprising processing hardware that reduces a number of patches of the decimated mesh dm(i) or a mesh derived therefrom.

7. The re-meshing pre-processor of claim 1 wherein the Fitting Subdivision Surface module further comprises a mesh subdivision module comprising processing hardware that receives the decimated mesh dm(i) or a mesh derived therefrom and subdivides the polygons thereof to produce a subdivided mesh S(i).

8. The re-meshing pre-processor of claim 7 wherein the Fitting Subdivision Surface module further comprises at least one Mesh Deformation module comprising processing hardware that receives the subdivided mesh S(i) and two or more of a mesh derived from the subdivided mesh, the projected mesh P(i), the input mesh M(i) and produces a deformed mesh by moving vertices of the subdivided mesh S(i) to match a shape of input mesh M(i).

9. The re-meshing pre-processor of claim 8 wherein the Fitting Subdivision Surface module further comprises a Base Mesh Optimization module comprising processing hardware that receives the deformed mesh and the decimated mesh pm(i) and produces base mesh m(i) by updating position of pm(i) to minimize a distance between subdivided version of pm(i) and the deformed mesh.

10. The re-meshing pre-processor of claim 9 wherein the Fitting Subdivision Surface module further comprises a Displacement Computation module that receives as inputs the deformed mesh and the base mesh m(i) and computes displacement field d(i) as the difference between them.

11. The re-meshing pre-processor of claim 10 wherein the at least one Mesh Deformation Module comprises: an Initial Mesh Deformation module comprising processing hardware that, for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finds a nearest point H(v) on the surface of the projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold and moves vertex v to the nearest point H(v) to produce an initial deformed mesh F0(i); and an Iterative Mesh Deformation module comprising processing hardware that, for each vertex v of the deformed mesh with position Pos(v), finds a nearest point H(v) on input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold and moves the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v)

where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v), to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).

12. The re-meshing pre-processor of claim 9 wherein the at least one Mesh Deformation Module comprises: an Initial Mesh Deformation module comprising processing hardware that, for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finds a nearest point H(v) on the surface of the projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold and moves vertex v to the nearest point H(v) to produce an initial deformed mesh F0(i); and an Iterative Mesh Deformation module comprising processing hardware that, for each vertex v of the deformed mesh with position Pos(v), finds a nearest point H(v) on input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold and moves the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v)

where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v), to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).

13. The re-meshing pre-processor of claim 8 wherein the at least one Mesh Deformation Module comprises: an Initial Mesh Deformation module comprising processing hardware that, for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finds a nearest point H(v) on the surface of the projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold and moves vertex v to the nearest point H(v) to produce an initial deformed mesh F0(i); and an Iterative Mesh Deformation module comprising processing hardware that, for each vertex v of the deformed mesh with position Pos(v), finds a nearest point H(v) on input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold and moves the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v)

where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v), to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).

14. A method of decimating an input mesh M(i) or a mesh with unified vertices UM(i) derived therefrom by duplicate vertex removal to produce a decimated mesh dm(i), the method comprising: reducing the number of vertices/faces of the input mesh M(i) while substantially preserving the shape of the original mesh; and tracking a mapping between the input mesh M(i) by projecting removed vertices onto the decimated mesh dm(i).

15. A method of deforming a subdivided mesh S(i), the method comprising: for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finding a nearest point H(v) on the surface of a projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold; moving vertex v to the nearest point H(v) to produce a deformed mesh that is initially initial deformed mesh F0(i); and iteratively, for each vertex v of the deformed mesh with position Pos(v), finding a nearest point H(v) on an input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold; and moving the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v)

where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v), to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).

16. A method of time consistent remeshing, the method comprising reusing a base mesh pm(j) associated with a reference mesh M(j) for a base mesh pm(i) associated with a mesh M(i), wherein pm(i) and pm(j) have the same connectivity.

17. The method of claim 16 further comprising: if input meshes M(i) and M(j) are temporally coherent, applying a Fitting Subdivision Surface module comprising processing hardware that processes the input mesh M(i), the decimated mesh dm(j) or a mesh derived therefrom, and the projected mesh P(j) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder; or if input meshes M(i) and M(j) are not temporally coherent: generating mesh M’(j) that is a deformed version of M(j) having the same shape as M(i); and applying a Fitting Subdivision Surface module comprising processing hardware that processes mesh M’(j), the decimated mesh dm(j) or a mesh derived therefrom, and the projected mesh P(j) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder.

Description:
REMESHING FOR EFFICIENT COMPRESSION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to the following U.S. Provisional Patent Applications, which are hereby incorporated by reference in their entirety: U.S. Provisional Application No. 63/269,211 filed on March 11, 2022 and entitled “Image/Video Based Mesh Compression”; U.S. Provisional Application No. 63/269,213 filed March 11, 2022 and entitled “Remeshing for Efficient Compression”; U.S. Provisional Application No. 63/269,214 filed on March 11, 2022 and entitled “Attribute Transfer for Efficient Dynamic Mesh Coding”; U.S. Provisional Application No. 63/269,217 filed March 11, 2022 and entitled “Motion Compression for Efficient Dynamic Mesh Coding”; U.S. Provisional Application No. 63/269,218 filed March 11, 2022 and entitled “Attribute Transfer for Efficient Dynamic Mesh Coding”; U.S. Provisional Application No. 63/269,219 filed March 11, 2022 and entitled “Adaptive Tessellation for Efficient Dynamic Mesh Encoding, Decoding, Processing, and Rendering”; and U.S. Provisional Application No. 63/368,793 filed on July 19, 2022 and entitled “VDMC support in the V3C framework”.

BACKGROUND

[0002] Video-based solutions, such as V3C, were successfully developed to efficiently compress 3D volumetric data such as point clouds (i.e., V3C/V-PCC) or 3DoF+ content (V3C/MIV). The V3C standard makes it possible to compress 3D data such as static and dynamic point clouds by combining existing video coding technologies and metadata through well-defined syntax structures and processing steps. The video coding technologies are used to compress 3D data projected onto 2D planes, such as geometry and attributes, while the metadata includes information about how to extract and reconstruct the 3D representations from those 2D projections. Figure 1 shows a block diagram of the V-PCC TMC2 encoder.

SUMMARY

[0003] Disclosed herein are methods and apparatuses for image/video-based compression of static and dynamic meshes. Specifically, disclosed herein are pre-processing remeshing techniques that can improve compression efficiency for 3D meshes. A re-meshing pre-processor for remeshing a 3D textured mesh M(i) to generate a base mesh m(i) and displacement field d(i) for input into a mesh encoder, can include a Mesh Decimation module that includes processing hardware that reduces the number of vertices or faces of input mesh M(i), or a mesh derived therefrom, while substantially preserving the shape of the input mesh M(i), thereby producing a decimated mesh dm(i) and a projected mesh P(i); and a Fitting Subdivision Surface module that includes processing hardware that processes the input mesh M(i), the decimated mesh dm(i) or a mesh derived therefrom, and the projected mesh P(i) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder.

[0004] The re-meshing pre-processor can further include a Duplicated Vertex Removal module that includes processing hardware that removes duplicated vertices from the input mesh M(i) to produce a mesh with unified vertices UM(i) that is input into the Mesh Decimation module in lieu of the input mesh M(i). The re-meshing pre-processor can further include a Duplicated Triangles Removal module that includes processing hardware that receives as an input decimated mesh dm(i) and processes it to remove triangles that reference the same vertices. The re-meshing pre-processor can further include a Small Connected Components Removal module that includes processing hardware that detects and removes connected components having a number of vertices, number of triangles, or area below a determined threshold from the decimated mesh dm(i) or a mesh derived therefrom. The re-meshing pre-processor can further include an Atlas Parameterization module that includes processing hardware that reduces a number of patches of the decimated mesh dm(i) or a mesh derived therefrom.

[0005] The Fitting Subdivision Surface module can further include a mesh subdivision module that includes processing hardware that receives the decimated mesh dm(i) or a mesh derived therefrom and subdivides the polygons thereof to produce a subdivided mesh S(i). The Fitting Subdivision Surface module can further include at least one Mesh Deformation module that includes processing hardware that receives the subdivided mesh S(i) and two or more of a mesh derived from the subdivided mesh, the projected mesh P(i), the input mesh M(i) and produces a deformed mesh by moving vertices of the subdivided mesh S(i) to match a shape of input mesh M(i). The Fitting Subdivision Surface module can further include a Base Mesh Optimization module that includes processing hardware that receives the deformed mesh and the decimated mesh pm(i) and produces base mesh m(i) by updating position of pm(i) to minimize a distance between subdivided version of pm(i) and the deformed mesh. The Fitting Subdivision Surface module can further include a Displacement Computation module that receives as inputs the deformed mesh and the base mesh m(i) and computes displacement field d(i) as the difference between them.

[0006] The at least one Mesh Deformation Module can include an Initial Mesh Deformation module comprising processing hardware that, for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finds a nearest point H(v) on the surface of the projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold and moves vertex v to the nearest point H(v) to produce an initial deformed mesh F0(i), and an Iterative Mesh Deformation module comprising processing hardware that, for each vertex v of the deformed mesh with position Pos(v), finds a nearest point H(v) on input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold and moves the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v) where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v) to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).
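For illustration only (the function and variable names below are not part of the application), the vertex update in the preceding paragraph amounts to projecting the offset H(v) - Pos(v) onto the normal N(v) and moving the vertex by that projection:

```python
import numpy as np

def deform_vertex(pos, target, normal):
    """Pos(v) + <H(v) - Pos(v), N(v)> * N(v): move Pos(v) along its normal toward H(v)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                      # assume N(v) is (or is made) unit length
    d = np.asarray(target, dtype=float) - np.asarray(pos, dtype=float)
    return np.asarray(pos, dtype=float) + np.dot(d, n) * n
```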

[0007] A method of decimating an input mesh M(i) or a mesh with unified vertices UM(i) derived therefrom by duplicate vertex removal to produce a decimated mesh dm(i), can include reducing the number of vertices/faces of the input mesh M(i) while substantially preserving the shape of the original mesh; and tracking a mapping between the input mesh M(i) and the decimated mesh dm(i) by projecting removed vertices onto the decimated mesh dm(i).

[0008] A method of deforming a subdivided mesh S(i), can include: for each initial 3D position Pos(v) corresponding to a vertex v of the subdivided mesh S(i), finding a nearest point H(v) on the surface of a projected mesh P(i), such that the angle between the normal N(v) and the normal to H(v) is below a user-defined threshold; moving vertex v to the nearest point H(v) to produce a deformed mesh that initially is the initial deformed mesh F0(i); and iteratively, for each vertex v of the deformed mesh with position Pos(v), finding a nearest point H(v) on an input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold; and moving the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v) where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v) and where N(v) is the normal vector at Pos(v) to produce a new deformed mesh that after a determined number of iterations becomes the final deformed mesh F(i).

[0009] A method of time consistent remeshing, can include reusing a base mesh pm(j) associated with a reference mesh M(j) for a base mesh pm(i) associated with a mesh M(i), wherein pm(i) and pm(j) have the same connectivity. The method can further include, if input meshes M(i) and M(j) are temporally coherent, applying a Fitting Subdivision Surface module including processing hardware that processes the input mesh M(i), the decimated mesh dm(j) or a mesh derived therefrom, and the projected mesh P(j) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder. Alternatively, if input meshes M(i) and M(j) are not temporally coherent, the method can further include generating mesh M’(j) that is a deformed version of M(j) having the same shape as M(i); and applying a Fitting Subdivision Surface module including processing hardware that processes mesh M’(j), the decimated mesh dm(j) or a mesh derived therefrom, and the projected mesh P(j) to produce a base mesh m(i) and the displacement field d(i) for input into a mesh encoder.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.

[0011] Figure 1 illustrates an example V-PCC encoder block diagram.

[0012] Figure 2 illustrates an example of a textured mesh.

[0013] Figure 3 illustrates an example of a textured mesh stored in OBJ format.

[0014] Figure 4 illustrates a high level block diagram of a mesh encoding process.

[0015] Figure 5 illustrates a high level block diagram of a mesh decoding process.

[0016] Figure 6 illustrates a resampling process for a 2D curve.

[0017] Figure 7 illustrates subdivision and displacement of a 2D curve.

[0018] Figure 8 illustrates original vs. decimated vs. deformed meshes.

[0019] Figure 9 illustrates an original (wireframe) mesh vs. a deformed (flat shaded) mesh.

[0020] Figure 10 illustrates an intra frame encoder/encoding process.

[0021] Figure 11 illustrates a mid-point subdivision scheme.

[0022] Figure 12 illustrates a forward lifting transform.

[0023] Figure 13 illustrates an inverse lifting transform.

[0024] Figure 14A illustrates an algorithm for computing a local coordinate system.

[0025] Figure 14B illustrates an algorithm for quantizing wavelet coefficients.

[0026] Figure 15 illustrates an algorithm for packing wavelet coefficients into a 2D image.

[0027] Figure 16 illustrates an algorithm for computing Morton order.

[0028] Figure 17 illustrates an inter frame encoder/encoding process.

[0029] Figure 18 illustrates an intra frame decoder/decoding process.

[0030] Figure 19 illustrates an inter frame decoder/decoding process.

[0031] Figure 20 illustrates a block diagram of a re-meshing system.

[0032] Figure 21 illustrates examples of mesh decimation with tracking.

[0033] Figure 22 illustrates mesh parameterization with a reduced number of patches.

[0034] Figure 23 illustrates an example of attribute transfer after re-meshing.

[0035] Figure 24 illustrates the attribute transfer process.

[0036] Figure 25 illustrates an example implementation of the attribute transfer process.

[0037] Figure 26 illustrates discontinuities on boundary edges.

[0038] Figure 27 illustrates a process for seam edge discontinuity mitigation.

[0039] Figure 28 illustrates an example of attribute padding.

[0040] Figure 29 illustrates a block diagram of a proposed motion compression system.

[0041] Figure 30 illustrates one example of CABAC-based encoding of prediction index and prediction attributes.

[0042] Figure 31 illustrates an example V3C Extended V-mesh bitstream system block diagram.

[0043] Figure 32 illustrates a v-mesh decoder framework block diagram.

[0044] Figure 33 illustrates an example input mesh to the mesh normalization process.

[0045] Figure 34 illustrates an example output of the mesh normalization process.

[0046] Figure 35 illustrates an example subdivision of areas in the mesh based upon information from their corresponding patch.

[0047] Figure 36 illustrates an example of a simple base mesh.

[0048] Figure 37 illustrates an example of an interpolated mesh.

[0049] Figure 38 illustrates a luma plane of a geometry image.

[0050] Figure 39 illustrates an example of a geometry image.

[0051] Figure 40 illustrates an example of vertex indices in the subpart associated with a patch.

[0052] Figure 41 illustrates adjusting global mesh resolution through varying subdivision iteration count.

[0053] Figure 42 illustrates rules for adaptively subdividing a triangle.

[0054] Figure 43 illustrates an embodiment of a system for implementing video dynamic mesh coding (v-DMC).

DETAILED DESCRIPTION

[0055] In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the disclosed concepts. As part of this description, some of this disclosure’s drawings represent structures and devices in block diagram form for the sake of simplicity. In the interest of clarity, not all features of an actual implementation are described in this disclosure. Moreover, the language used in this disclosure has been selected for readability and instructional purposes and has not been selected to delineate or circumscribe the disclosed subject matter. Rather, the appended claims are intended for such purpose.

[0056] Various embodiments of the disclosed concepts are illustrated by way of example and not by way of limitation in the accompanying drawings, in which like references indicate similar elements. For simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the implementations described herein. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant function being described. References to “an,” “one,” or “another” embodiment in this disclosure are not necessarily to the same or different embodiment, and they mean at least one. A given figure may be used to illustrate the features of more than one embodiment, or more than one species of the disclosure, and not all elements in the figure may be required for a given embodiment or species. A reference number, when provided in a given drawing, refers to the same element throughout the several drawings, though it may not be repeated in every drawing. The drawings are not to scale unless otherwise indicated, and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.

Section 1: Image/Video-Based Mesh Compression

[0057] A static/dynamic mesh can be represented as a set of 3D meshes M(0), M(1), M(2), ..., M(n). Each mesh M(i) can be defined by a connectivity C(i), a geometry G(i), texture coordinates T(i), and a texture connectivity CT(i). Each mesh M(i) can be associated with one or more 2D images A(i, 0), A(i, 1), ..., A(i, D-1), also called attribute maps, describing a set of attributes associated with the mesh surface. An example of an attribute would be texture information (see Figs. 2-3). A set of vertex attributes could also be associated with the vertices of the mesh, such as colors, normals, transparency, etc.
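As an illustration only (the container and field names below are assumptions, not part of the application), a frame M(i) and its attribute maps could be held in a structure roughly like this:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class MeshFrame:
    """Illustrative container for one frame M(i) of a static/dynamic mesh."""
    connectivity: np.ndarray      # C(i): (F, 3) vertex indices per triangle
    geometry: np.ndarray          # G(i): (V, 3) vertex positions
    tex_coords: np.ndarray        # T(i): (VT, 2) texture coordinates
    tex_connectivity: np.ndarray  # CT(i): (F, 3) texture-coordinate indices per triangle
    attribute_maps: List[np.ndarray] = field(default_factory=list)  # A(i, 0), ..., A(i, D-1)

# A dynamic mesh is then simply a sequence M(0), M(1), ..., M(n) of such frames.
```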

[0058] While geometry and attribute information could again be mapped to 2D images and efficiently compressed by using video encoding technologies, connectivity information cannot be encoded efficiently by using a similar scheme. Dedicated coding solutions optimized for such information are needed. In the next sections we present an efficient framework for static/dynamic mesh compression.

[0059] Figures 4 and 5 show a high-level block diagram of the proposed encoding process 400 and decoding process 500, respectively. The encoding process includes a pre-processor 403 that receives a static or dynamic mesh M(i) and an attribute map A(i). The pre-processor produces a base mesh m(i) and displacements d(i) that can be provided to encoder 402, which produces a compressed bitstream b(i) therefrom. Encoder 402 may also directly receive the attribute map A(i). Feedback loop 401 makes it possible for the encoder 402 to guide the pre-processor 403 and change its parameters to achieve the best possible compromise when encoding bitstream b(i), according to various criteria including, but not limited to:

• Rate-distortion,

• Encode/decode complexity,

• Random access,

• Reconstruction complexity,

• Terminal capabilities,

• Encode/decode power consumption, and/or

• Network bandwidth and latency.

[0060] On the decoder side (Fig. 5), the compressed bitstream b(i) is received by a decoder 502 that decodes the bitstream to produce METADATA(i) relating to the bitstream and the decoded mesh, a decoded mesh m’(i), decoded displacements d’(i), and a decoded attribute map A’(i). Each of these outputs of decoder 502 can be provided to a post-processor 503 that can perform various post-processing steps, such as adaptive tessellation. Post-processor 503 can produce a post-processed mesh M”(i) and a post-processed attribute map A”(i), which correspond to the input mesh M(i) and input attribute map A(i) provided to the encoder. (As will be understood, the outputs are not identical to the inputs because of the lossy nature of the compression due to quantization and other encoding effects.) An application 501 consuming the content could provide feedback 501a to decoder 502 to guide the decoding process and feedback 501b to post-processor 503. As but one example, based on the position of the dynamic mesh with respect to a camera frustum, the decoder 502 and the post-processor 503 may adaptively adjust the resolution/accuracy of the produced mesh M”(i) and/or its associated attribute maps A”(i).

Pre-Processing

[0061] Figure 6 illustrates an exemplary pre-processing scheme that can be applied by pre-processor 403. The illustrated example uses the case of a 2D curve for simplicity of illustration, but the same concepts can be applied to the input static or dynamic 3D mesh M(i)=(C(i), G(i), T(i), TC(i)) to produce a base mesh m(i) and a displacement field d(i) discussed above with respect to Fig. 4. In Figure 6, the input 2D curve 601 (represented by a 2D polyline), referred to as the “original” curve, is first down-sampled to generate a base curve/polyline 602, referred to as the “decimated” curve. A subdivision scheme, such as those described in Reference [A1] (identified below), can be applied to the decimated polyline 602 to generate a “subdivided” curve 603. As one example, in Figure 6, a subdivision scheme using an iterative interpolation scheme can be applied. This can include inserting at each iteration a new point in the middle of each edge of the polyline. In the example illustrated in Figure 6, two subdivision iterations were applied.
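A minimal sketch of this 2D-curve analogy follows (decimation by uniform sub-sampling, two mid-point subdivision iterations, and a crude nearest-vertex displacement search); the helper names and the brute-force search are assumptions for illustration, not the pre-processor itself:

```python
import numpy as np

def midpoint_subdivide(poly: np.ndarray, iterations: int) -> np.ndarray:
    """Insert a new point at the middle of each segment of an open polyline."""
    for _ in range(iterations):
        mids = 0.5 * (poly[:-1] + poly[1:])
        out = np.empty((2 * len(poly) - 1, poly.shape[1]))
        out[0::2] = poly
        out[1::2] = mids
        poly = out
    return poly

def nearest_point_displacements(subdivided: np.ndarray, original: np.ndarray) -> np.ndarray:
    """Per-vertex displacement toward the closest original sample (brute-force nearest vertex)."""
    diffs = original[None, :, :] - subdivided[:, None, :]
    nearest = original[np.argmin((diffs ** 2).sum(-1), axis=1)]
    return nearest - subdivided

original = np.stack([np.linspace(0, 1, 33), np.sin(np.linspace(0, np.pi, 33))], axis=1)
decimated = original[::8]                         # "decimated" base curve: keep every 8th point
subdivided = midpoint_subdivide(decimated, 2)     # two subdivision iterations
displaced = subdivided + nearest_point_displacements(subdivided, original)
```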

[0062] The proposed scheme can be independent of the chosen subdivision scheme and could be combined with any subdivision scheme such as the ones described in Reference [A1]. The subdivided polyline can then be deformed to get a better approximation of the original curve. More precisely, a displacement vector can be computed for each vertex of the subdivided mesh 603 (illustrated by the arrows in the displaced polyline 604 of Figure 6), so that the shape of the displaced curve is sufficiently close to the shape of the original curve. (See Figure 7.) One advantage of the subdivided curve (mesh) 603 can be that it can have a subdivision structure that allows more efficient compression, while still offering a faithful approximation of the original curve (mesh). Increased compression efficiency may be obtained because of various properties, including, but not necessarily limited to, the following:

• The decimated/base curve can have a low number of vertices and may thus require fewer bits to be encoded/transmitted.

• The subdivided curve can be automatically generated by the decoder once the base/decimated curve is decoded (i.e., there may be no need for any information other than the subdivision scheme type and subdivision iteration count to be encoded/transmitted).

• The displaced curve can be generated by decoding the displacement vectors associated with the subdivided curve vertices. Besides allowing for spatial/quality scalability, the subdivision structure can also enable efficient wavelet decomposition (Reference [A2]), which can offer high compression performance (i.e., rate-distortion performance).

[0063] When applying the same concepts to the input mesh M(i), a mesh decimation technique, such as the one described in Reference [A3], could be used to generate the decimated/base mesh. Subdivision schemes, such as those described in Reference [A4], could be applied to generate the subdivided mesh. The displacement field d(i) could be computed by any method. One example is described below in Section 2. Figure 8 shows an example of re-sampling applied to an original mesh 801 with 40K triangles, which produces a 1K-triangle decimated/base mesh 802 and a 150K-triangle deformed mesh 803. Figure 9 compares the original mesh 901 (in wireframe) to the deformed mesh 902 (flat-shaded).

[0064] The re-sampling process may compute a new parameterization atlas, which may be better suited for compression. In the case of dynamic meshes, this may be achieved through use of a temporally consistent re-meshing process, which may ensure that the same subdivision structure is shared by the current mesh M’(i) and a reference mesh M’(j). One example of such a re-meshing process is described in Section 2, below. Such a coherent temporal re-meshing process makes it possible to skip the encoding of the base mesh m(i) and re-use the base mesh m(j) associated with the reference frame M(j). This could also enable better temporal prediction for both the attribute and geometry information. More precisely, a motion field f(i) describing how to move the vertices of m(j) to match the positions of m(i) can be computed and encoded as described in greater detail below.

Encoding— Intra Encoding

[0065] Figure 10 shows a block diagram of an intra encoding process.

Base Mesh Encoding

[0066] A base mesh m(i) associated with the current frame can be first quantized 1001 (e.g., using uniform quantization) and then encoded by using a static mesh encoder 1002. (Inter encoding using a motion mesh encoder is described below with reference to Fig. 17.) The methods and apparatus herein are agnostic to which mesh codec is used, i.e., any of a wide variety of mesh codecs could be used in conjunction with the techniques described herein. For example, mesh codecs such as those described in References [A5], [A6], [A7], or [A8] could also be used. The mesh codec used could be specified explicitly in the bitstream by encoding a mesh codec ID or could be implicitly defined/fixed by either the specification and/or the application. Because the quantization step and/or the mesh compression module may be lossy, a reconstructed quantized version of m(i), denoted as m’(i), can be computed by a mesh decoder 1003 within the intra frame encoder. If the mesh information is losslessly encoded and the quantization step is skipped (either or both of which may be true in some embodiments), m(i) would exactly match m’(i).
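A sketch of the kind of uniform quantization referred to above (the bit depth and the per-axis bounding-box handling are assumptions for illustration, not prescribed by the text):

```python
import numpy as np

def quantize_positions(positions: np.ndarray, bit_depth: int = 12):
    """Uniformly quantize vertex positions to integers over the mesh bounding box."""
    lo, hi = positions.min(axis=0), positions.max(axis=0)
    scale = ((1 << bit_depth) - 1) / np.maximum(hi - lo, 1e-12)
    q = np.round((positions - lo) * scale).astype(np.int32)
    return q, lo, scale

def dequantize_positions(q: np.ndarray, lo: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Inverse quantization; the result generally differs slightly from the original positions."""
    return q / scale + lo
```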

Displacement Encoding

[0067] Depending on the application and the targeted bitrate/visual quality, the encoder could optionally encode a set of displacement vectors associated with the subdivided mesh vertices, referred to as displacement field d(i). One technique for computing a displacement field d(i) is described in Section 2, below. The reconstructed quantized base mesh m’(i) can then be used by displacement updater 1004 to update the displacement field d(i) to generate an updated displacement field d’(i) that takes into account the differences between the reconstructed base mesh m’(i) and the original base mesh m(i). By exploiting the subdivision surface mesh structure (as described below), a wavelet transform 1005 (as described below) can then be applied to d’(i), generating a set of wavelet coefficients e(i). The wavelet coefficients e(i) can then be quantized 1006 (producing quantized wavelet coefficients e’(i)), packed into a 2D image/video by image packer 1007, and compressed by using an image/video encoder 1008. The encoding of the wavelet coefficients may be lossless or lossy. The reconstructed version of the wavelet coefficients can be obtained by applying image unpacking 1009 and inverse quantization 1010 to the reconstructed wavelet coefficients video generated during the video encoding process. Reconstructed displacements d”(i) can then be computed by applying the inverse wavelet transform 1011 to the reconstructed wavelet coefficients. A reconstructed base mesh m”(i) can be obtained by applying inverse quantization 1012 to the reconstructed quantized base mesh m’(i). The reconstructed deformed mesh DM(i) can be obtained by subdividing m”(i) and applying the reconstructed displacements d”(i) to its vertices by reconstruction block 1013.

Subdivision Scheme

[0068] Various subdivision schemes could be used in conjunction with the techniques herein. Suitable subdivision schemes may include, but are not limited to, those described in Reference [A4]. One possible solution is a mid-point subdivision scheme, which at each subdivision iteration subdivides each triangle into four sub-triangles by bisecting each side of the triangle, as illustrated in Figure 11. For example, beginning with the initial condition s0 having two triangles 1101 and 1102, a first iteration s1 produces four sub-triangles 1101a-1101d for triangle 1101 and four sub-triangles 1102a-1102d for triangle 1102. Each sub-triangle can be further divided in a subsequent iteration s2. New vertices 1103 can be introduced in the middle of each edge in iteration s1, with new vertices 1104 introduced in the middle of each edge in iteration s2, and so on. The subdivision process can be applied independently to the geometry and to the texture coordinates, because the connectivity for the geometry and for the texture coordinates can be different. The subdivision scheme computes the position Pos(v) of a newly introduced vertex at the center of an edge (v1, v2), as follows:

Pos(v) = 0.5 * (Pos(v1) + Pos(v2)),

where Pos(v1) and Pos(v2) are the positions of the vertices v1 and v2. The same process can be used to compute the texture coordinates of the newly created vertex. For normal vectors, an extra normalization step can be applied as follows:

N(v) = (N(v1) + N(v2)) / || N(v1) + N(v2) ||,

where N(v), N(v1), and N(v2) are the normal vectors associated with the vertices v, v1, and v2, respectively, and || x || is the L2 norm of the vector x.
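Written as code (illustrative only), the two update rules for a vertex inserted at the middle of edge (v1, v2) are:

```python
import numpy as np

def midpoint_vertex(pos1, pos2, n1, n2):
    """Position and normal of the vertex inserted at the center of edge (v1, v2)."""
    pos = 0.5 * (np.asarray(pos1, dtype=float) + np.asarray(pos2, dtype=float))  # Pos(v)
    n = np.asarray(n1, dtype=float) + np.asarray(n2, dtype=float)
    n = n / np.linalg.norm(n)                                                    # N(v), normalized
    return pos, n
```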

[0069] The subdivision scheme behavior could be adaptively changed (e.g., to preserve sharp edges) based on implicit and explicit criteria such as:

• Per face/edge/vertex attribute information associated with the base mesh and explicitly encoded as mesh attributes by the mesh codec.

• Analyzing the base mesh or the mesh at the previous iteration to decide how to update the subdivision behavior.

Wavelet Transform

[0070] Various wavelet transforms could be applied, including without limitation those described in Reference [A2]. For example, a low-complexity wavelet transform could be implemented by using the pseudo-code of the lifting scheme illustrated in Figures 12 and 13. These figures illustrate but one example implementation of a low-complexity wavelet transform using a lifting scheme. Other implementations are possible and contemplated. The scheme has two parameters:

• A prediction weight, which controls the prediction step, and

• An update weight, which controls the update step.

One possible choice for the prediction weight is 1/2. The update weight could be chosen as 1/8. Note that the scheme allows skipping the update process by setting a skip-update flag to true.
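For intuition, here is a generic one-level lifting step on a 1D signal split into coarse ("even") and detail ("odd") samples, using the prediction weight 1/2 and update weight 1/8 mentioned above. This is a simplified stand-in for, not a reproduction of, the mesh-based scheme in Figures 12 and 13:

```python
import numpy as np

def forward_lifting(even, odd, predict_w=0.5, update_w=0.125, skip_update=False):
    """One lifting level: predict each odd sample from its even neighbors, then optionally update."""
    even = np.asarray(even, dtype=float)
    detail = np.asarray(odd, dtype=float) - predict_w * (even[:-1] + even[1:])   # prediction step
    coarse = even.copy()
    if not skip_update:
        coarse[:-1] += update_w * detail                                         # update step
        coarse[1:] += update_w * detail
    return coarse, detail

def inverse_lifting(coarse, detail, predict_w=0.5, update_w=0.125, skip_update=False):
    """Exactly undoes forward_lifting."""
    even = np.asarray(coarse, dtype=float).copy()
    if not skip_update:
        even[:-1] -= update_w * detail
        even[1:] -= update_w * detail
    odd = detail + predict_w * (even[:-1] + even[1:])
    return even, odd
```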

Local vs. Canonical Coordinate Systems for Displacements

[0071] Displacement field d(i) can be defined in the same Cartesian coordinate system as the input mesh. In some cases, a possible optimization may be to transform d(i) from this canonical coordinate system to a local coordinate system, which can be defined by the normal to the subdivided mesh at each vertex. The pseudo-code in Figure 14A shows one exemplary way to compute such a local coordinate system. Other implementations and algorithms are possible and contemplated. The normal vectors associated with the subdivided mesh can be computed as follows:

• The normal vectors associated with the base mesh can be either directly decoded or computed based on the quantized geometry.

• The normal vectors associated with the vertices introduced during the subdivision process are computed as described above.

One potential advantage of a local coordinate system for the displacements is the possibility of quantizing the tangential components of the displacements more heavily than the normal component. In many cases, the normal component of the displacement can have a more significant impact on the reconstructed mesh quality than the two tangential components. (A small illustrative sketch of such a local frame appears after the list below.)

[0072] The decision to use the canonical coordinate system vs. the local one could be made at the sequence, frame, patch group, or patch level. The decision could be:

• explicitly specified by encoding an extra attribute associated with the base mesh vertices, edges or faces, or

• implicitly derived by analyzing the base mesh connectivity/geometry/attribute information (e.g., use canonical coordinate system on the mesh boundaries).
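As a small illustration (not the algorithm of Figure 14A), a local frame can be derived from the vertex normal, and a displacement expressed in it, as follows; the helper-vector choice is an assumption:

```python
import numpy as np

def local_frame(normal):
    """Build an orthonormal (tangent, bitangent, normal) frame from a vertex normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(n, helper)
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return t, b, n

def displacement_to_local(displacement, normal):
    """Express a canonical-space displacement as (tangential, tangential, normal) components."""
    t, b, n = local_frame(normal)
    d = np.asarray(displacement, dtype=float)
    return np.array([d @ t, d @ b, d @ n])   # the normal component is typically quantized least
```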

Wavelet Coefficients Quantization

[0073] Various strategies can be used to quantize the displacement wavelet coefficients. One example solution is illustrated in Figure 14B. Other techniques are possible and contemplated. The idea is to use a uniform quantizer with a dead zone and to adjust the quantization step such that high-frequency coefficients are quantized more heavily (see the small sketch after the list below). Instead of directly defining a quantization step, one can use a discrete quantization parameter. More sophisticated adaptive quantization schemes could be applied such as:

• Trellis quantization (as described in Reference [A16]).

• Optimizing the quantization parameters for the three components at once to minimize the distance of the reconstructed mesh to the original.

• The quantization adaptive rounding scheme described in Reference [A17].
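One possible reading of the dead-zone quantizer idea is sketched below (this is not Figure 14B; the rounding offset and the per-level step scaling in the comment are assumptions for illustration):

```python
import numpy as np

def quantize_deadzone(coeffs, step, rounding_offset=1.0 / 3.0):
    """Uniform scalar quantizer; an offset below 0.5 widens the zero bin (the dead zone)."""
    coeffs = np.asarray(coeffs, dtype=float)
    return (np.sign(coeffs) * np.floor(np.abs(coeffs) / step + rounding_offset)).astype(np.int32)

def dequantize_deadzone(levels, step):
    return np.asarray(levels, dtype=float) * step

# To quantize high-frequency coefficients more heavily, the step could, for example,
# grow with the subdivision level l: levels_l = quantize_deadzone(coeffs_l, base_step * 2 ** l)
```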

Packing Wavelet Coefficients

[0074] Various strategies could be employed for packing the wavelet coefficients into a 2D image. Figure 15 illustrates one such strategy, which can proceed as follows:

• First, it traverses the coefficients from low to high frequency.

• Then, for each coefficient, it determines the index of the NxM pixel block (e.g., N=M=16) in which it should be stored, following a raster order for blocks.

• Finally, the position within the NxM pixel block can be computed by using a Morton order (see Reference [A9]) to maximize locality (see Figure 16 for details).

The example of Fig. 15 is but one example implementation, and other packing schemes/strategies are possible and contemplated. In a particular embodiment, the values of N and M could be chosen as a power of 2, which makes it possible to avoid division in the scheme described in Figures 15 and 16. Figure 16 is but one example implementation of a Morton order computation, and other implementations are possible and contemplated.
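A minimal sketch of the block-raster plus Morton-order packing idea follows (the block size, image width, and padding behavior are assumptions; Figures 15 and 16 define the actual scheme):

```python
import numpy as np

def morton_xy(index):
    """De-interleave the bits of 'index' into (x, y) coordinates (Morton / Z-order)."""
    x = y = 0
    for bit in range(16):
        x |= ((index >> (2 * bit)) & 1) << bit
        y |= ((index >> (2 * bit + 1)) & 1) << bit
    return x, y

def pack_coefficients(coeffs, width_blocks, block=16):
    """Pack coefficients, ordered from low to high frequency, into a 2D image block by block."""
    coeffs = np.asarray(coeffs)
    n_blocks = (len(coeffs) + block * block - 1) // (block * block)
    rows_blocks = (n_blocks + width_blocks - 1) // width_blocks
    image = np.zeros((rows_blocks * block, width_blocks * block), dtype=coeffs.dtype)
    for i, c in enumerate(coeffs):
        b, within = divmod(i, block * block)   # raster order for the blocks
        bx, by = b % width_blocks, b // width_blocks
        x, y = morton_xy(within)               # Morton order within the block
        image[by * block + y, bx * block + x] = c
    return image
```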

Attribute Transfer

[0075] The attribute transfer module can compute a new attribute map based on the input mesh M(i) and the input texture map A(i). This new attribute map can be better suited for the reconstructed deformed mesh MD(i). A more detailed description is provided in Section 3 below.

Displacement Video Encoding

[0076] The techniques described herein are agnostic of which video encoder or standard is used, meaning that a wide variety of video codecs are applicable. When coding the displacement wavelet coefficients, a lossless approach may be used because the quantization can be applied in a separate module. Another approach could be to rely on the video encoder to compress the coefficients in a lossy manner and apply a quantization either in the original or transform domain.

Color Space Conversion And Chroma Sub-Sampling

[0077] As is the case with traditional 2D image/video encoding, color space conversion and chroma subsampling could optionally be applied to achieve better rate-distortion performance (e.g., converting RGB 4:4:4 to YUV 4:2:0). When applying such a color space conversion and chroma sub-sampling process, it may be beneficial to take into account the surface discontinuities in the texture domain (e.g., consider only samples belonging to the same patch and potentially exclude empty areas).

Inter Encoding

[0078] Figure 17 shows a block diagram of the inter encoding process, i.e., an encoding process in which the encoding depends on a temporally separate (e.g., prior) version of the mesh. In one non-limiting example, a reconstructed quantized reference base mesh m’(j) can be used to predict the current frame base mesh m(i). The pre-processing module described above could be configured such that m(i) and m(j) share the same number of vertices, connectivity, texture coordinates, and texture connectivity. Thus, only the positions of the vertices differ between m(i) and m(j).

[0079] The motion field f(i) (which corresponds to the displacement of the vertices between m(i) and m(j)) can be computed by motion encoder 1701 considering the quantized 1702 version of m(i) and the reconstructed quantized base mesh m’(j). Because m’(j) may have a different number of vertices than m(j) (e.g., vertices may get merged/removed), the mesh encoder can keep track of the transformation applied to get from m(j) to m’(j). The mesh encoder may then apply the same transformation to m(i) to guarantee a 1-to-1 correspondence between m’(j) and the transformed and quantized version of m(i), denoted m*(i). The motion field f(i) can then be computed by motion encoder 1701 by subtracting the positions p(j, v) of the vertex v of m’(j) from the quantized positions p(i, v) of the vertex v of m*(i):

f(i, v) = p(i, v) - p(j, v)

The motion field can then be further predicted using the connectivity information of m’(j), with the result then being entropy encoded (e.g., using context adaptive binary arithmetic encoding). More details about the motion field compression are provided in Section 4, below.
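A sketch of the per-vertex motion field and of a simple neighbor-based prediction residual is shown below; the averaging predictor is an illustrative stand-in for the scheme detailed in Section 4:

```python
import numpy as np

def motion_field(positions_i, positions_j):
    """f(i, v) = p(i, v) - p(j, v), for meshes sharing vertex count and ordering."""
    return np.asarray(positions_i, dtype=float) - np.asarray(positions_j, dtype=float)

def motion_residuals(motion, neighbors):
    """Predict each vertex motion from already-coded neighbors; the residuals get entropy coded."""
    motion = np.asarray(motion, dtype=float)
    residuals = np.zeros_like(motion)
    for v in range(len(motion)):
        coded = [u for u in neighbors[v] if u < v]        # neighbors already decoded before v
        pred = motion[coded].mean(axis=0) if coded else np.zeros(motion.shape[1])
        residuals[v] = motion[v] - pred
    return residuals
```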

[0080] Because the motion field compression process can be lossy, a reconstructed motion field denoted as f’(i) can be computed by applying the motion decoder module 1703. A reconstructed quantized base mesh m’(i) can then be computed 1704 by adding the motion field to the positions of m’(j). The remainder of the encoding process is similar to the intra frame encoding process described above with reference to Fig. 10, which includes corresponding elements.

Decoding

Intra decoding

[0081] Figure 18 shows a block diagram of the intra decoding process. First, the bitstream b(i) is de-multiplexed 1801 into three or more separate sub-streams: (1) a mesh sub-stream, (2) a displacement sub-stream for positions and potentially additional sub-streams for each vertex attribute, and (3) an attribute map sub-stream for each attribute map. In an alternative embodiment, an atlas sub-stream containing patch information could also be included in the same manner as in V3C/V-PCC.

[0082] The mesh sub-stream can be fed to a static mesh decoder 1802 corresponding to the mesh encoder used to encode the sub-stream to generate the reconstructed quantized base mesh m’(i). The decoded base mesh m”(i) can then be obtained by applying inverse quantization 1803 to m’(i). Any suitable mesh codec can be used in conjunction with the techniques described herein. Mesh codecs such as those described in References [A5], [A6], [A7], or [A8] could be used, for example. The mesh codec used can be specified explicitly in the bitstream or can be implicitly defined/fixed by the specification and/or the application.

[0083] The displacement sub-stream can be decoded by a video/image decoder 1804 corresponding to the video/image encoder used to encode the sub-stream. The generated image/video can then be unpacked 1805, and inverse quantization 1806 can be applied to the wavelet coefficients that result from the unpacking. Any video codec/standard could be used with the techniques described herein. For example, image/video codecs such as HEVC/H.265, AVC/H.264, AV1, AV2, JPEG, JPEG2000, etc. could be leveraged. Use of such video codecs can allow the mesh encoding and decoding techniques described herein to take advantage of well-developed encoding and decoding algorithms that are implemented in hardware on a wide variety of platforms, thus providing high performance and high power efficiency.

[0084] In an alternative embodiment, the displacements could be decoded by a dedicated displacement data decoder. The motion decoder used for decoding mesh motion information, or a dictionary-based decoder such as ZIP, could, for example, be used as a dedicated displacement data decoder. The decoded displacement d”(i) can then be generated by applying the inverse wavelet transform 1807 to the de-quantized wavelet coefficients. The final decoded mesh M”(i) can be generated by applying the reconstruction process 1808 to the decoded base mesh m”(i) and adding the decoded displacement field d”(i).
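Conceptually, the final reconstruction step can be summarized as follows (illustrative; 'subdivide' stands in for whatever subdivision scheme and iteration count the bitstream signals, and the displacements are assumed to already be expressed in the canonical coordinate system):

```python
def reconstruct_mesh(base_positions, base_faces, displacements, iterations, subdivide):
    """M''(i): subdivide the decoded base mesh m''(i), then add the decoded displacements d''(i)."""
    positions, faces = base_positions, base_faces
    for _ in range(iterations):
        positions, faces = subdivide(positions, faces)   # e.g., mid-point subdivision
    return positions + displacements, faces
```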

[0085] The attribute sub-stream can be directly decoded by a video/image decoder 1809 corresponding to the video/image encoder used to encode the sub-stream. The decoded attribute map A”(i) can be generated as the output of this decoder directly and/or with appropriate color format/color space conversion 1810. As with the displacement sub-stream, any video codec/standard could be used with the techniques described herein, including (without limitation) image/video codecs such as HEVC/H.265, AVC/H.264, AV1, AV2, JPEG, JPEG2000. Alternatively, an attribute sub-stream could be decoded by using non-image/video decoders (e.g., using a dictionary-based decoder such as ZIP). Multiple sub-streams, each associated with a different attribute map, could be decoded. In some embodiments, each sub-stream could use a different codec.

Inter decoding

[0086] Figure 19 shows a block diagram of the inter decoding process. First, the bitstream can be de-multiplexed 1901 into three separate sub-streams: (1) a motion sub-stream, (2) a displacement sub-stream, and (3) an attribute sub-stream. In some embodiments, an atlas sub-stream containing patch information could also be included in the same manner as in V3C/V-PCC.

[0087] The motion sub-stream can be decoded by applying a motion decoder 1902 corresponding to the motion encoder used to encode the sub-stream. A variety of motion codecs/standards can be used to decode the motion information as described herein. For instance, any motion decoding scheme described in Section 4, below, could be used. The decoded motion information can then optionally be added to the decoded reference quantized base mesh m’(j) (in reconstruction block 1903) to generate the reconstructed quantized base mesh m’(i). In other words, the already decoded mesh at instance j can be used (in conjunction with the motion information) to predict the mesh at instance i. Afterwards, the decoded base mesh m”(i) can be generated by applying inverse quantization 1904 to m’(i).

[0088] The displacement and attribute sub-streams can be decoded in the same manner as described above with respect to the intra frame decoding process. The decoded mesh M”(i) is also reconstructed in the same manner. The inverse quantization and reconstruction processes are not normative and could be implemented in various ways and/or combined with the rendering process.

Post-Processing

[0089] Additional post-processing modules could also be applied to improve the visual/objective quality of the decoded meshes and attribute maps and/or adapt the resolution/quality of the decoded meshes and attribute maps to the viewpoint or terminal capabilities. Some examples of post-processing are provided below:

• Color format/space conversion;

• Using patch information and occupancy map to guide chroma up-sampling;

• Geometry smoothing (See Reference [A10]);

• Attribute smoothing (See References [A11], [A12]);

• Image/video smoothing/filtering algorithms;

• Adaptive tessellation (See References [A13], [A14], [A15], [A16]).

Other Extensions

[0090] In some embodiments and/or applications, it may be advantageous to subdivide the mesh into a set of patches (i.e., sub-parts) and selectively group patches as a set of patch groups/tiles. In some cases, different parameters (such as subdivision, quantization, wavelet transforms, coordinate systems, etc.) could be used to compress each patch or patch group. In such cases, it may be desirable to encode the patch information as a separate sub-stream (similar to V3C/V-PCC). Such techniques may be advantageous for handling cracks at patch boundaries, providing for:

• Lossless coding for boundary vertices;

• Ensuring that positions/vertex attributes match after displacement;

• Using local coordinate systems; and

• Selectively disabling quantization of wavelet coefficients.

[0091] Encoder/decoder arrangements as described herein could also support scalability at different levels. For example, temporal scalability could be achieved through temporal subsampling and frame re-ordering. Likewise, quality and spatial scalability could be achieved by using different mechanisms for the geometry/vertex attribute data and the attribute map data. As one example, geometry scalability can be obtained by leveraging the subdivision structure, making it possible to change the mesh resolution by going from one level of detail to the next. The displacement information could then be stored as two or more image/video sub-streams, e.g.:

• Base layer / Level of detail 0: A separate video sub-stream for low-frequency coefficients;

• Refinement Layer 0: A separate video sub-stream for the next band of coefficients;

• Refinement Layer N-1: A separate video sub-stream for the highest band of coefficients.

In this example, a level of detail m can be generated by combining level of detail m-1 and refinement level m-1. Also, attribute maps could be encoded in a scalable manner by leveraging scalable video coding techniques such as those used in HEVC/H.265, AVC/H.264, VVC, AV1, or any other approach that supports quality/spatial scalability for 2D images or videos.
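
As a rough, non-normative sketch of this scalable reconstruction, the Python fragment below assumes the caller supplies a subdivision routine and an inverse wavelet transform (both hypothetical placeholders here) and builds each level of detail by refining the previous one with its decoded displacement coefficients.

def reconstruct_levels_of_detail(base_positions, refinement_layers, subdivide, inverse_wavelet):
    """Sketch: level of detail m is obtained from level m-1 plus refinement layer m-1.

    base_positions    : vertex positions decoded from the base layer (level of detail 0)
    refinement_layers : decoded coefficient arrays, one per refinement layer, coarse to fine
    subdivide         : callable producing the next-level vertex positions (placeholder)
    inverse_wavelet   : callable mapping coefficients to per-vertex displacements (placeholder)
    """
    positions = base_positions
    for coefficients in refinement_layers:
        positions = subdivide(positions)                        # level m-1 -> candidate level m
        positions = positions + inverse_wavelet(coefficients)   # apply refinement layer m-1
    return positions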

[0092] Region of interest (ROI) encoding can be provided by configuring the encoding process described above to encode an ROI with higher resolution and/or higher quality for geometry, vertex attribute, and/or attribute map data. Such configurations may be useful in providing higher visual quality content under tight bandwidth and complexity constraints. As one example, when encoding a mesh representing a person, higher quality could be used for the face as opposed to the rest of the body. Priority/importance/spatial/bounding box information could be associated with patches, patch groups, tiles, NAL units, and/or sub-bitstreams in a manner that allows the decoder to adaptively decode a subset of the mesh based on the viewing frustum, the power budget, or the terminal capabilities. Note that any combination of such coding units could be used together to achieve such functionality. For instance, NAL units and sub-bitstreams could be used together.

[0093] Temporal and spatial random access may also be provided. Temporal random access could be achieved by introducing IRAPs (Intra Random Access Points) in the different substreams (e.g., atlas, video, mesh, motion, and displacement sub-streams). Spatial random access could be supported through the definition and usage of tiles, sub-pictures, patch groups, and/or patches or any combination of these coding units. Metadata describing the layout and relationships between the different units could also need to be generated and included in the bitstream to assist the decoder in determining the units that need to be decoded.

[0094] Lossless geometry/vertex attribute coding could be supported by disabling one or more of the following blocks: re-meshing; subdivision (e.g., setting subdivision levels to 0, making the base mesh the same as the input mesh); base mesh quantization; displacement sub-stream computation. Alternatively, a simplified version (e.g., a quantized, low quality version) of the base mesh could be encoded together with a set of displacements to make it possible for the decoder to retrieve a higher quality version, up to and including exactly the original mesh information.

[0095] Lossless attribute map coding could be supported by configuring the video encoder to compress attribute maps in a lossless manner (e.g., using lossless transforms, PCM mode).

[0096] To keep high quality texture coordinates, one option could be to send a separate displacement sub-stream for texture coordinates. A motion sub-stream for texture coordinates could also be employed.

[0097] Per-vertex attributes could also be compressed in the same manner as the geometry information. For example, the mesh codec could be used to encode vertex attributes associated with the base mesh vertices. Wavelet-based encoding could be used for the attributes associated with the high-resolution mesh, which could then be stored/transmitted as a separate vertex attribute sub-stream. Equivalent processes applied on the decoder side could then recover/decompress vertex attribute information.

[0098] Support for polygonal/quad meshes could be achieved by using mesh codecs capable of encoding polygonal/quad meshes and/or by choosing a subdivision scheme, e.g., Catmull-Clark or Doo-Sabin (see Reference [A4]), adapted for non-triangular meshes.

[0099] In the arrangement described above, the texture coordinates for the base mesh are explicitly specified and encoded in the bitstream by the mesh encoder. An alternative approach could be to use implicit texture coordinates derived from positions by means of projection (in the same manner as in V-PCC or MIV) or by considering any other model (e.g., B-spline surfaces, polynomial functions, etc.). In such cases a texture coordinate Tex_coord could be defined by:

Tex_coord = f(position), where the function f could be a projection onto a predefined set of planes as in V-PCC (or any other suitable function).
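
The following Python sketch illustrates one possible choice of f: a per-vertex orthogonal projection onto the axis-aligned plane most orthogonal to the vertex normal, loosely in the spirit of V-PCC projection. The plane-selection rule and function name are assumptions for this example only.

import numpy as np

def implicit_tex_coords(positions, normals):
    """Derive implicit texture coordinates Tex_coord = f(position) by dropping the dominant normal axis."""
    dominant_axis = np.argmax(np.abs(normals), axis=1)   # projection axis per vertex
    uv = np.empty((positions.shape[0], 2))
    for i, axis in enumerate(dominant_axis):
        keep = [j for j in range(3) if j != axis]        # the two remaining coordinates form (u, v)
        uv[i] = positions[i, keep]
    return uv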

[00100] References for the preceding section relating to Image/Video Based Mesh Compression, each of which is incorporated by reference in its entirety:

[A1] https://www.cs.utexas.edu/users/fussell/courses/cs384g-fall2011/lectures/lecture17-Subdivision_curves.pdf

[A2] http://www.mat.unimi.it/users/naldi/lifting.pdf

[A3] https://www.cs.cmu.edu/~garland/Papers/quadrics.pdf

[A4] https://en.wikipedia.org/wiki/Subdivision_surface

[A5] https://github.com/rbsheth/Open3DGC

[A6] https://google.github.io/draco/

[A7] http://mcl.usc.edu/wp-content/uploads/2014/01/200503-Technologies-for-3D-triangular-mesh-compression-a-survey.pdf

[A8] https://perso.liris.cnrs.fr/glavoue/travaux/revue/CSUR2015.pdf

[A9] https://en.wikipedia.org/wiki/Z-order_curve

[A10] https://graphics.stanford.edu/courses/cs468-12-spring/LectureSlides/06_smoothing.pdf

[A11] https://cragl.cs.gmu.edu/seamless/

[A12] https://www.sebastiansylvan.com/post/LeastSquaresTextureSeams/

[A13] https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-7-adaptive-tessellation-subdivision-surfaces

[A14] https://niessnerlab.org/papers/2015/0dynamic/schaefer2015dynamic.pdf

[A15] https://giv.cpsc.ucalgary.ca/publication/c5/

[A16] https://projet.liris.cnrs.fr/imagine/pub/proceedings/ICME-2007/pdfs/0000468.pdf

[A17] G. J. Sullivan: “Adaptive Quantization Encoding Technique Using an Equal Expected-value Rule”, Joint Video Team, JVT-N011, Hong Kong (Jan. 2005); https://www.itu.int/wftp3/av-arch/jvt-site/2005_01_HongKong/JVT-N011.doc

Section 2: Remeshing For Efficient Compression

[00101] As noted above, a static/dynamic mesh can be represented as a set of 3D Meshes M(0), M(1), M(2), ..., M(n). Each mesh M(i) can be defined by a connectivity C(i), a geometry G(i), texture coordinates T(i), and a texture connectivity CT(i). Each mesh M(i) can be associated with one or more 2D images A(i, 0), A(i, 1), ..., A(i, D-1), also called attribute maps, describing a set of attributes associated with the mesh surface. An example of an attribute would be texture information (see Figure 2). A set of vertex attributes could also be associated with the vertices of the mesh, such as colors, normals, transparency, etc.

[00102] While geometry and attribute information could again be mapped to 2D images and efficiently compressed by using video encoding technologies, connectivity information cannot be encoded efficiently by using a similar scheme. Dedicated coding solutions optimized for such information are needed. In the next sections we present an efficient framework for static/dynamic mesh compression.

[00103] Figures 4 and 5, discussed in more detail in Section 1, above, show high-level block diagrams of the proposed encoding and decoding processes, respectively. Note that the feedback loop during the encoding process makes it possible for the encoder to guide the pre-processing step and change its parameters to achieve the best possible compromise according to various criteria, including but not limited to:

• Rate-distortion,

• Encode/decode complexity,

• Random access,

• Reconstruction complexity,

• Terminal capabilities,

• Encode/decode power consumption, and/or

• Network bandwidth and latency.

[00104] On the decoder side, an application consuming the content could provide feedback to guide both the decoding and the post-processing blocks. As but one example, based on the position of the dynamic mesh with respect to a camera frustum, the decoder and the post-processing block may adaptively adjust the resolution/accuracy of the produced mesh and/or its associated attribute maps.

Pre-Processing

[00105] Figure 6, also discussed above, illustrates the proposed pre-processing scheme in the case of a 2D curve. The same concepts can be applied to the input static or dynamic 3D mesh M(i)=(C(i), G(i), T(i), TC(i)) to produce a base mesh m(i) and a displacement field d(i). In Figure 6, the input 2D curve (represented by a 2D polyline), referred to as the “original” curve, is first down-sampled to generate a base curve/polyline, referred to as the “decimated” curve. A subdivision scheme, such as those described in Reference [B1] (identified below), can be applied to the decimated polyline to generate a “subdivided” curve. As one example, in Figure 6, a subdivision scheme using an iterative interpolation scheme can be applied. This can include inserting at each iteration a new point in the middle of each edge of the polyline. In the example illustrated in Figure 6, two subdivision iterations were applied.

[00106] The proposed scheme can be independent of the chosen subdivision scheme and could be combined with any subdivision scheme such as the ones described in Reference [B1]. The subdivided polyline can then be deformed to get a better approximation of the original curve. More precisely, a displacement vector can be computed for each vertex of the subdivided mesh, so that the shape of the displaced curve is sufficiently close to the shape of the original curve. (See Figure 7.) One advantage of the subdivided curve can be that it can have a subdivision structure that allows more efficient compression, while still offering a faithful approximation of the original curve. Increased compression efficiency may be obtained because of various properties, including, but not necessarily limited to, the following:

• The decimated/base curve can have a low number of vertices and may require fewer bits to be encoded/transmitted.

• The subdivided curve can be automatically generated by the decoder once the base/decimated curve is decoded (i.e., there may be no need for any information other than the subdivision scheme type and subdivision iteration count to be encoded/transmitted).

• The displaced curve can be generated by decoding the displacement vectors associated with the subdivided curve vertices. Besides allowing for spatial/quality scalability, the subdivision structure can also enable efficient wavelet decomposition (Reference [B2]), which can offer high compression performance (i.e., Rate-Distortion performance).

[00107] When applying the same concepts to the input mesh M(i), a mesh decimation technique, such as the one described in Reference [B3], could be used to generate the decimated/base mesh. Subdivision schemes, such as those described in Reference [B4], could be applied to generate the subdivided mesh. The displacement field d(i) could be computed by any method. Examples are described in greater detail elsewhere herein. Figure 8, also discussed above, shows an example of re-sampling applied to an original mesh with 40K triangles, which produces a 1K triangle decimated/base mesh and a 150K triangle deformed mesh. Figure 9, also discussed above, compares the original mesh (in wireframe) to the deformed mesh (flat-shaded).

[00108] It should be noted that the re-sampling process may compute a new parameterization atlas, which may be better suited for compression. In the case of dynamic meshes, this may be achieved through use of a temporally consistent re-meshing process, which may produce a subdivision structure that is shared by the current mesh M’(i) and a reference mesh M’(j). As described in greater detail below, such a coherent temporal re-meshing process makes it possible to skip the encoding of the base mesh m(i) and re-use the base mesh m(j) associated with the reference frame M(j). This could also enable better temporal prediction for both the attribute and geometry information. More precisely, a motion field f(i) describing how to move the vertices of m(j) to match the positions of m(i) can be computed and encoded as described in greater detail below.

3D Re-Meshing

[00109] Figure 20 shows a block diagram of the proposed remeshing system. The input mesh M(i) can be an irregular mesh. The output can be a base mesh m(i) with a set of displacements d(i) associated with the subdivided version of m(i). The various blocks of the system are described below. Each of these blocks may be implemented using data processing systems including dedicated hardware or hardware with suitable software and/or firmware, such as CPU hardware, GPU hardware, FPGA hardware, DSP hardware, ASICs, etc.

Duplicated Vertex Removal

[00110] The Duplicated Vertex Removal block 2001 aims to merge duplicated vertices (i.e., vertices with the same position) or vertices with close 3D positions (e.g., vertices with a distance between them that is less than a user-defined threshold). The duplicated vertex removal process can be accelerated by leveraging data structures such as hash-tables, kd-trees, octrees, etc. By removing duplicated vertices, the appearance of cracks during subsequent processing stages (including the mesh decimation stage) may be avoided. Additionally, duplicate vertex removal may also improve coding efficiency and encode/decode complexity by eliminating computations using or based on superfluous data.
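
A minimal hash-table based sketch of such a merge is shown below (Python/NumPy). The grid-quantized key is a simplification: vertices that fall on opposite sides of a grid cell boundary may not merge, and the function and argument names are hypothetical.

import numpy as np

def remove_duplicated_vertices(positions, triangles, tol=1e-6):
    """Merge vertices whose positions coincide up to `tol` and remap triangle indices."""
    key_to_index = {}
    remap = np.empty(len(positions), dtype=np.int64)
    kept_positions = []
    for v, p in enumerate(positions):
        key = tuple(np.round(p / tol).astype(np.int64))  # quantized position used as a hash key
        if key in key_to_index:
            remap[v] = key_to_index[key]                 # duplicate: reuse the earlier vertex
        else:
            key_to_index[key] = len(kept_positions)
            remap[v] = len(kept_positions)
            kept_positions.append(p)
    return np.asarray(kept_positions), remap[np.asarray(triangles)]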

Mesh Decimation

[00111] Mesh Decimation block 2002 can employ techniques such as those described in References [B3] or [B4] to simplify the mesh, for example by reducing the number of vertices/faces while substantially preserving the shape of the original mesh. Figure 21 illustrates an original mesh 2101, a decimated mesh 2102, a projected mesh 2103, and a projected mesh overlaid on top of the decimated mesh 2104. Substantially preserving the shape of the original mesh can include preserving the shape of the input mesh sufficiently to achieve a desired encoder and/or decoder performance while simultaneously achieving a desired level of accuracy or fidelity in the resulting mesh representation. This can vary from one application to another depending on the capabilities of the available encoder and decoder equipment, the capabilities of the display or other output equipment, and/or the requirements of a particular application.

[00112] The illustrated mesh decimation block may apply a mesh decimation algorithm that expands on those described in References [B3] or [B4] (or any other suitable decimation algorithm) by also keeping track of a mapping between the full resolution input mesh and the decimated mesh. More specifically, at each iteration of the decimation process, the mesh decimation block 2002 can project removed vertices onto the decimated version of the mesh. Alternatively, the mesh decimation block can project the removed points to the closest counterpart in the simplified mesh. “Closest” counterpart can mean closest based on shortest L2 distance in 3D space. Figure 21 shows an example of original (2101), decimated (2102) and projected (2103) meshes. Other criteria to define the projection process could be used. For example, rather than L2 distance in 3D space, other distance measures in the 3D space could be used (e.g., L1, Lp, Linf, etc.). Alternatively, distances in a lower dimension space could be used by projection on a 2D local plane. This could employ orthogonal and/or non-orthogonal projections. Other projection processes could also be used as appropriate for a given use case. The simplification algorithm can also be modified to prevent decimation operations that would result in flipped triangles in the decimated and/or the projected meshes. This optional, extra requirement can help produce a better mapping between the decimated and the projected meshes.

Duplicated Triangle Removal

[00113] Duplicated Triangle Removal Module 2003 can detect and remove duplicated triangles in the decimated mesh dm(i) (i.e., triangles that reference the same vertices). This can improve compression efficiency and encode/decode complexity. However, duplicated triangle removal may be optional for some embodiments.

Small Connected Components Removal

[00114] Small Connected Component Removal Module 2004 can detect and remove connected components. In this sense, a connected component means a set of vertices connected to each other but not connected to the rest of the mesh. Connected components targeted for removal may include components with a number of triangles or vertices lower than a user-defined threshold (e.g., 8) and/or an area below a user-defined threshold (e.g., 0.1% of the original mesh area). Such small connected components are expensive to encode and have a limited impact on the final visual quality of the model.
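
As one non-normative way to implement this filter, the Python sketch below groups triangles into connected components with a small union-find structure and keeps only components with at least a minimum number of triangles; the area-based criterion mentioned above is omitted for brevity, and all names are illustrative.

import numpy as np
from collections import defaultdict

def remove_small_components(triangles, min_triangles=8):
    """Drop connected components that have fewer than `min_triangles` faces."""
    parent = {}

    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, c in triangles:              # the three vertices of a triangle belong to one component
        union(a, b)
        union(b, c)

    faces_per_component = defaultdict(int)
    for tri in triangles:
        faces_per_component[find(tri[0])] += 1

    kept = [tri for tri in triangles if faces_per_component[find(tri[0])] >= min_triangles]
    return np.asarray(kept)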

[00115] The connected components removal criteria could be chosen to be fixed for the entire mesh, or adaptive based on local surface properties or user-provided information describing the importance or saliency of subparts of the mesh. For example, for a mesh including a representation of a person, heightened removal criteria (resulting in fewer removed small connected surfaces) could be employed for a region depicting a head, while relaxed removal criteria (resulting in more removed small connected surfaces) could be employed for a region depicting a body. Additionally or alternatively, the small connected component removal thresholds may be tuned based on rate/distortion criteria, complexity criteria, power consumption criteria, resulting bitrate, etc. In at least some embodiments, these thresholds may be provided by or derived from feedback from the encoder module (as illustrated in Fig. 4).

Atlas Parameterization

[00116] The parameterization information associated with the input mesh M(i) could be sub-optimal in that it may define a high number of small patches (see Figure 22), making it hard to decimate, re-mesh, and compress. Instead of trying to preserve the initial parameterization during the simplification process, it can optionally be recomputed by the Atlas Parameterization Module 2005 using techniques such as those described in References [B6] and [B7], applied to the decimated mesh dm(i) or the decimated mesh with duplicated triangles and/or small connected components removed, cm(i). As shown in Fig. 22, the parameterized decimated mesh 2202 has only nine patches, compared to the original mesh 2201, which has more than 100 patches.

Mesh Subdivision

[00117] The remeshing system described herein can employ a Mesh Subdivision Module 2006 implementing various mesh subdivision techniques, such as those described in References [B9] and [B10]. The remeshing techniques described herein can be used with these or any other subdivision technique. For triangular meshes, the mid-edge interpolation, Loop, Butterfly, and Catmull-Clark subdivision techniques are among the most popular. These methods offer various compromises in terms of computational complexity, generality (e.g., applicability to triangular meshes vs. tri/quad or polygonal meshes), and power of approximation and smoothness of the generated surfaces, which may impact the rate distortion performance of the encoder module.

Initial Mesh Deformation

[00118] The Initial Mesh Deformation Module 2007 can move the vertices of the subdivided mesh S(i) so that it has a shape close to the input mesh M(i). The quality of this approximation can directly impact the rate distortion performance of the encoder. One proposed algorithm can proceed as follows: (1) For each vertex v of the subdivided mesh S(i), let Pos(v) indicate its initial 3D position and let N(v) indicate its normal vector. (2) For each initial 3D position Pos(v), find the nearest point H(v) on the surface of the projected mesh P(i), such that the angle between the normal N(v) and the normal at H(v) is below a user-defined threshold. Various distances could be used, including without limitation, L1, L2, Lp, Linf. The threshold could be fixed for the entire mesh, or could be adaptive based on local surface properties and/or user-provided information describing the importance or saliency of subparts of the mesh (e.g., face vs. body). Additionally or alternatively, the threshold could be based on rate distortion criteria or other criteria (e.g., complexity, power consumption, bitrate, etc.) provided as feedback from the encoder module (as shown in Fig. 4).

[00119] H(v) can be identified by an index of the triangle to which it belongs (tindex(v)) and its barycentric coordinates (a, b, c) relative to that triangle. Because the projected mesh P(i) and the mesh UM(i) have a 1-to-1 mapping between their vertices (i.e., they have the same connectivity), we can compute a point H’(v) located on UM(i) by using the barycentric coordinates (a, b, c) relative to the triangle with the index tindex(v) of the mesh UM(i).
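
For illustration only, the short Python/NumPy function below performs this barycentric transfer: given tindex(v) and (a, b, c) found on P(i), it evaluates the corresponding point H’(v) on UM(i), which shares the connectivity of P(i). Argument names are assumptions.

import numpy as np

def map_to_um(tindex, bary, um_positions, um_triangles):
    """Compute H'(v) on UM(i) from the triangle index and barycentric coordinates of H(v).

    tindex       : (N,) triangle index tindex(v) found on the projected mesh P(i)
    bary         : (N, 3) barycentric coordinates (a, b, c) of H(v) within that triangle
    um_positions : (M, 3) vertex positions of UM(i)
    um_triangles : (T, 3) triangle connectivity shared by P(i) and UM(i)
    """
    corners = um_positions[um_triangles[tindex]]       # (N, 3, 3) corner positions per queried triangle
    return np.einsum('nk,nkd->nd', bary, corners)      # a*A + b*B + c*C for each vertex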

Iterative Mesh Deformation

[00120] The Iterative Mesh Deformation Module 2008 can have as an input deformed mesh F0(i) and can generate therefrom a final deformed mesh F(i). The Iterative Mesh Deformation Module 2008 can iteratively apply an algorithm including:

• Recomputing normal vectors associated with the mesh vertices (see Reference [B11]).

• For each vertex v of the deformed mesh with position Pos(v), finding its nearest point H(v) on the input mesh M(i), such that the angle between the normal vectors associated with Pos(v) and H(v) is below a user-defined threshold.

o As noted above, nearest point can mean the point having the smallest distance, with various distances being used such as L1, L2, Lp, Linf, etc.

o Also as noted above, the threshold could be fixed for the entire mesh, adaptive based on local surface properties and/or user-provided information describing the importance or saliency of subparts of the mesh (e.g., face vs. body), and/or based on rate distortion criteria or any other criteria (e.g., complexity, power consumption, bitrate) provided as feedback from the encoder module (see Fig. 3).

• Moving the vertex v to the new position determined by:

Pos(v) + <H(v) - Pos(v), N(v)> * N(v),

where <H(v) - Pos(v), N(v)> is the dot product of the two 3D vectors H(v) - Pos(v) and N(v), and where N(v) is the normal vector at Pos(v).

• Optionally checking that no triangle was flipped (i.e., no normal vector was inverted) by the previous step; otherwise, do not move the vertex v and flag the vertex as a missed vertex. This step can help ensure a better remeshing result.

• Optionally applying mesh smoothing algorithms, such as those described in References [10], [11] to the missed vertices, while considering the updated positions for the other vertices.

• Optionally applying mesh smoothing algorithms, such as those described in References [10], [11] to all vertices and adjusting the parameters to reduce the smoothing intensity depending on the fitting iteration index and other criteria. The smoothing could be applied to the vertex positions and/or to the displacement vectors with respect to the initial mesh.

The number of deformation iterations, i.e., the number of iterations through the algorithm described above, can be a parameter provided by the user or automatically determined based on convergence criteria (e.g., the displacements applied in the last iteration fall below a user-defined threshold).
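
The core vertex update of this loop, Pos(v) + <H(v) - Pos(v), N(v)> * N(v), is easy to express in vectorized form. The sketch below (Python/NumPy) shows only that update; the nearest-point search, the normal-angle test, the triangle-flip check, and the optional smoothing passes are intentionally left out, and unit-length normals are assumed.

import numpy as np

def deform_step(positions, normals, nearest_on_target):
    """One deformation update: move each vertex along its normal toward its matched target point.

    positions, normals, nearest_on_target : (N, 3) arrays; normals are assumed to be unit length.
    """
    offsets = np.einsum('nd,nd->n', nearest_on_target - positions, normals)  # <H(v) - Pos(v), N(v)>
    return positions + offsets[:, None] * normals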

Base Mesh Optimization

[00121] The Base Mesh Optimization Module 2009 can take as inputs the final subdivided deformed mesh F(i) and the decimated mesh pm(i). If iterative mesh deformation is omitted, then the initial deformed mesh F0(i) may be substituted for the final deformed mesh F(i). The Base Mesh Optimization Module 2009 can then update the positions of pm(i) to minimize the distance between the subdivided versions of pm(i) and F(i) (or F0(i)). In some embodiments, this could be achieved by solving a sparse linear system. One possible method to efficiently solve such sparse linear systems is the Conjugate Gradient Method (see, e.g., Reference [B15]). Other techniques could also be used.

Computing Displacements

[00122] The Displacement Computation Module 2010 can compute displacements d(i) by taking the difference between the positions of F(i) (or F0(i)) and the subdivided version of pm(i), to exploit correlations between the two meshes and produce a more compressible representation. The resulting displacement field d(i) can then be fed as input to the encoder module (along with base mesh m(i)) as described above in Section 1.
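
Assuming the final deformed mesh and the subdivided base mesh share connectivity (so vertices correspond index by index), the displacement field reduces to a per-vertex difference, as in the minimal sketch below; the function name is illustrative.

import numpy as np

def compute_displacements(deformed_positions, subdivided_base_positions):
    """d(i): per-vertex difference between F(i) (or F0(i)) and the subdivided version of pm(i)."""
    return np.asarray(deformed_positions) - np.asarray(subdivided_base_positions)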

Time Consistent Re-Meshing

[00123] The remeshing procedure described above handles every frame M(i) independently. While this is optimal for intra coding, time-consistent remeshing may allow better temporal prediction for both mesh and image data. For time-consistent remeshing, one concept is reusing a base mesh pm(j) associated with a reference frame M(j) for a base mesh pm(i) having the same connectivity. By ensuring that a 1-to-l mapping between pm(i) and pm(j) exists, and that pm(i) and pm(j) have the same number of vertices, number of triangles (or polygons), texture coordinates, and texture coordinate triangles (or polygons), pm(i) and pm(j) will differ only by the positions of their vertices. There are thus two distinct cases: (1) the input meshes M(i) and M(j) themselves are temporally coherent or (2) the input Meshes M(i) and M(j) are not temporally coherent.

[00124] In the first case, i.e., if the input meshes M(i) and M(j) are temporally coherent, only the subdivision surface fitting module can be applied. In other words, there need be no simplification or pre-filtering of duplicated vertices and connected components. In that case, the inputs of the Fitting Subdivision Surface module 2011 (made up of components 2006-2010, discussed above) can be the input mesh M(i), the projected mesh P(j) (from the reference frame), and the decimated mesh pm(j) (also from the reference frame) rather than M(i), P(i), pm(i).

[00125] In the second case, i.e., if the input meshes M(i) and M(j) are not temporally coherent, a deformed version of M(j), denoted M’(j), that has the same shape as M(i) may be generated. M’(j) may be generated using techniques such as those described in References [B12], [B13], [B14] (or similar techniques). Then, one can proceed as above, applying only the Fitting Subdivision Surface Module 2011, providing as inputs M’(j), P(j), pm(j) instead of M(i), P(i), and pm(i).

[00126] References for the preceding section relating to Remeshing for Efficient Compression, each of which is incorporated by reference in its entirety:

[B1] https://github.com/rbsheth/Open3DGC

[B2] https://google.github.io/draco/

[B3] https://www.cs.cmu.edu/~garland/Papers/quadrics.pdf

[B4] http://jerrytalton.net/research/t-ssmsa-04/paper.pdf

[B5] https://graphics.stanford.edu/courses/cs468-10-fall/LectureSlides/08_Simplification.pdf

[B6] https://graphics.stanford.edu/courses/cs468-05-fall/Papers/param-survey.pdf

[B7] https://www.semanticscholar.org/paper/Iso-charts%3A-stretch-driven-mesh-parameterization-Zhou-Snyder/27b260713ad9802923aec06963cd5f2a41c4e20a

[B8] https://members.loria.fr/Bruno.Levy/papers/LSCM_SIGGRAPH_2002.pdf

[B9] https://en.wikipedia.org/wiki/Subdivision_surface

[B10] https://graphics.pixar.com/opensubdiv/docs/intro.html

[B11] https://cs.nyu.edu/~perlin/courses/fall2002/meshnormals.html

[B10] https://graphics.stanford.edu/courses/cs468-12-spring/LectureSlides/06_smoothing.pdf

[B11] https://www.medien.ifi.lmu.de/lehre/ws2122/gp/slides/gp-ws2122-3-smooth.pdf

[B12] https://lgg.epfl.ch/publications/2008/sgp2008GCO.pdf

[B13] https://arxiv.org/abs/2004.04322

[B14] https://people.inf.ethz.ch/~sumnerb/research/embdef/Sumner2007EDF.pdf

[B15] http://math.stmarys-ca.edu/wp-content/uploads/2017/07/Mike-Rambo.pdf

Section 3: Attribute Transfer for Efficient Dynamic Mesh Coding

[00127] As noted above, static/dynamic meshes can be represented as a set of 3D Meshes M(0), M(1), M(2), ..., M(n). Each mesh M(i) can be defined by a connectivity C(i), a geometry G(i), texture coordinates T(i), and a texture connectivity CT(i). Each mesh M(i) can be associated with one or more 2D images A(i, 0), A(i, 1), ..., A(i, D-1), also referred to as attribute maps. The attribute maps describe a set of attributes associated with the mesh surface. An example of an attribute would be texture information (see Figures 2, 3, 23). A set of vertex attributes could also be associated with the vertices of the mesh, such as colors, normals, transparency, etc.

[00128] When coding a dynamic mesh, a pre-processing stage, such as described above, may be applied to produce a more compression-friendly version of the input mesh. Such pre-processing can involve re-meshing (i.e., re-sampling) and re-parameterization (i.e., computing a new atlas of parameterization with a fewer number of patches and low parameterization distortion). Once the mesh is re-sampled, an attribute transfer may be performed. The attribute transfer can include computing new attribute maps coherent with the re-meshed and/or re-parametrized mesh. For example, Figure 23 illustrates an example 2300 of attribute transfer after re-meshing. First, an original input mesh 2302 and associated patches 2304 are obtained. The original input mesh 2302 may be re-meshed/re-sampled via the pre-processing stage described herein, resulting in a re-meshed mesh 2306. As illustrated by the updated patches 2308 associated with the re-meshed mesh 2306, a new atlas associated with the re-meshed mesh 2306 may be computed, which may provide a fewer number of patches and low parameterization distortion. Thus, once the original mesh 2302 is re-meshed/resampled, new attribute maps coherent with the re-meshed mesh 2306 may be computed via an attribute transfer from an original texture map 2310 to an updated texture map 2312 associated with the re-meshed mesh 2306.

[00129] A detailed discussion of the attribute transfer process is described below with respect to Figures 24-27. Figure 24 illustrates the attribute transfer process 2400. Figure 25 provides an example implementation 2500 of the attribute transfer process. For clarity, these figures will be discussed together.

[00130] Turning to Figure 24, the process 2400 may be run for each pixel A(i, j) of the attribute map A to be generated. First, the texture coordinate (u, v) for each pixel (i, j) of the attribute map to be generated A(i, j) is computed (block 2401). For example, in Figure 25, the pixel A(i, j) 2502 of the attribute map A to be generated (e.g., the texture map 2312 associated with the re-meshed mesh 2306) is associated with coordinate (u, v) 2504 in the texture domain 2506 that includes the updated patches 2308 associated with the re-meshed mesh 2306.

[00131] Next, a determination is made as to whether the point P(u, v) in the texture space belongs to any triangles of the re-meshed mesh (block 2402). For example, in Figure 25, a determination is made as to whether the point P(u, v) associated with the coordinate (u, v) 2504 is associated with at least one of the updated patches 2308.

[00132] If P(u, v) does not belong to any triangles (block 2403, No), this pixel is marked as an empty pixel (block 2404) that will be filled as described by the process 2700 of Figure 27 and the empty pixel padding process described below.

[00133] Otherwise, if P(u, v) belongs to a triangle (T) defined by the three vertices (A, B, C) (block 2403, Yes), the pixel is marked as filled (block 2405). Barycentric coordinates (alpha, beta, gamma) of the point P(u, v) according to the triangle T in the parametric space are computed (block 2406).

[00134] The 3D point M(x, y, z) associated with the texture coordinate P(u, v) is computed by using the barycentric coordinates (alpha, beta, gamma) and the 3D positions associated with the Triangle T in 3D space (block 2407). For example, in Figure 25, the 3D point M(x, y, z) of the 3D domain of the re-meshed mesh 2306 is identified.

[00135] Next, the nearest 3D point M’(x’, y’, z’) located on the triangle T’ of the original mesh is identified (block 2408). The barycentric coordinates (alpha’, beta’, gamma’) of M’ are computed according to T’ in 3D space (block 2409). For example, as illustrated in Figure 25, the 3D point M’(x’, y’, z’) is identified in the 3D domain of the original mesh 2302 based upon the point M(x, y, z) of the re-meshed mesh 2306.

[00136] The point P’(u’, v’) associated with M’ is computed by using the barycentric coordinates (alpha’, beta’, gamma’) with the 2D parametric coordinates associated with the three vertices of T’ (block 2410). For example, as illustrated in Figure 25, the point P’(u’, v’) is identified based upon the point M’(x, y, z) and a Triangle (T’) of M’.

[00137] The texture coordinates (u’, v’) of the point P’(u’, v’) are computed to sample the original texture map and compute the attribute value A’(i’, j’) of the input attribute map (block 2411). Bilinear interpolation, nearest neighbor interpolation, patch-aware bilinear interpolation, patch-aware nearest neighbor interpolation, and/or other interpolation methods may be used to compute these coordinates. The attribute value A’(i’, j’) may then be assigned to the pixel (i, j) of the attribute map A(i, j), resulting in an attribute transfer to the generated attribute map (block 2412). For example, the generated texture map 2312 illustrates pixels filled with values from the original texture map 2310 upon completion of the process 2400 for each pixel of the generated texture map 2312.
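
To make the last two steps concrete, the Python/NumPy sketch below evaluates P’(u’, v’) from the barycentric coordinates of M’ in triangle T’ and samples the source texture with bilinear interpolation. The earlier steps of the process (triangle lookup in the texture domain, the 3D mapping, and the nearest-point search of block 2408) are assumed to have produced the inputs, and the helper names are hypothetical.

import numpy as np

def bilinear_sample(texture, u, v):
    """Bilinearly sample `texture` (H, W, C) at normalized coordinates (u, v) in [0, 1]."""
    h, w = texture.shape[:2]
    u, v = float(np.clip(u, 0.0, 1.0)), float(np.clip(v, 0.0, 1.0))
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom

def transfer_pixel(bary_prime, tri_uv_source, source_texture):
    """Blocks 2410-2411: P'(u', v') = alpha'*uvA + beta'*uvB + gamma'*uvC, then sample the source map."""
    u_prime, v_prime = bary_prime @ tri_uv_source   # (3,) @ (3, 2) -> (u', v')
    return bilinear_sample(source_texture, u_prime, v_prime)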

[00138] When implemented, the process described in Figures 24 and 25 may generate discontinuities on the parameterization seams, as illustrated in the left side image 2601 of Figure 26 as compared to the right side image 2602. Indeed, edges located on the parameterization seams may correspond to the patch boundaries. Each seam edge in 3D space is mapped to two edges in the texture space due to the cut operation used to flatten the mesh. Because the algorithm described in the previous section computes the color for each pixel in the texture domain independently without considering the seams, inconsistent colors may be produced on the edges. This can be further exacerbated by the bilinear interpolation used during the rendering process. Potential solutions to address such problems are described in [C1] and [C2].

[00139] However, References [C1] and [C2] may be complex solutions that utilize significant processing resources and/or processing time. Accordingly, a relatively lower complexity alternative to [C1] and [C2], which results in the remediation of discontinuities illustrated in the right side of Figure 26, is provided in Figure 27. In the seam edge discontinuity process 2700 of Figure 27, for each empty pixel A(i, j) adjacent to a filled pixel A(k, l) (e.g., as marked by the process of Figure 24), the triangle T used to fill A(k, l) is determined (block 2701).

[00140] Next, the process 2400 described above with respect to Figure 24 is applied (block 2702), while considering the barycentric coordinates computed for A(i, j) with respect to the triangle T. In other words, when a pixel A(i, j) is empty, the process 2400 may use a triangle T used to fill an adjacent pixel in determining an attribute value to transfer for A(i, j).

[00141] This results in an attribute transfer for A(i, j), despite the pixel A(i, j) being empty, resulting in a reduction of seam edge discontinuity. Accordingly, the pixel A(i, j) may be marked as filled and the index of the triangle T may be stored (block 2703).

[00142] The process of Figure 27 may be applied one or multiple times. The number of iterations of the process of Figure 27 may be controlled via a parameter that may be provided by a user and/or computed automatically (e.g., based upon processing resource availability, mesh characteristics, etc.). As may be appreciated, the process of Figure 27, by focusing on empty pixels, does not change the values of pixels computed in the attribute transfer process of Figure 24. Instead, the process of Figure 27 fills the pixels adjacent to those already filled to favor consistent colors on seam edges by leveraging the attribute consistency in the 3D space.

[00143] After applying the attribute transfer process 2400 of Figure 24 and the seam edge mitigation process 2700 of Figure 27, only a subset of the attribute map pixels may be filled. For example, the occupancy map 2801 of Figure 28 indicates whether a pixel is empty or full. A padding algorithm may be used to fill the remaining empty pixels with colors, making the attributes smoother and/or easier to compress. In particular, a push-pull algorithm, such as that described in Reference [C3] may be applied to the initial attribute map 2802 to identify an initial padding solution used to fill the empty pixels. The initial solution can be refined by applying the iterative algorithm described in Applicant’s co-pending U.S. Pat. Appl. No. 16/586,872, entitled “Point Cloud Compression Image Padding”, filed on September 27, 2019, which is incorporated by reference herein. This technique includes filling empty spaces in the image frame with a padding, wherein pixel values for the padding are determined based on neighboring pixels values such that the padding is smoothed in the image frame. The resulting padding can be combined with initial attribute map 2802 to produce padded attribute map 2803.
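
For context, a generic push-pull style fill can be sketched as below (Python/NumPy). This is only a simplified illustration of the idea behind Reference [C3]: it assumes power-of-two image dimensions, uses nearest-neighbor up-sampling, and does not implement the iterative refinement of the co-pending application; all names are illustrative.

import numpy as np

def push_pull_pad(image, occupancy):
    """Fill empty pixels of `image` (H, W, C) from coarser levels; `occupancy` (H, W) marks filled pixels."""
    occ = occupancy.astype(np.float64)[..., None]
    levels = [(image * occ, occ)]
    # Push: accumulate the filled pixels of each 2x2 block into the next coarser level.
    while levels[-1][0].shape[0] > 1 and levels[-1][0].shape[1] > 1:
        img, o = levels[-1]
        h, w = img.shape[0] // 2, img.shape[1] // 2
        img_sum = img.reshape(h, 2, w, 2, -1).sum(axis=(1, 3))
        occ_sum = o.reshape(h, 2, w, 2, -1).sum(axis=(1, 3))
        levels.append((img_sum, occ_sum))
    # Pull: propagate coarse averages back into the empty pixels of each finer level.
    coarse_img, coarse_occ = levels[-1]
    filled = np.divide(coarse_img, coarse_occ,
                       out=np.zeros_like(coarse_img), where=coarse_occ > 0)
    for img, o in reversed(levels[:-1]):
        upsampled = np.repeat(np.repeat(filled, 2, axis=0), 2, axis=1)
        filled = np.divide(img, o, out=upsampled, where=o > 0)
    return filled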

[00144] References for the preceding section relating to Attribute Transfer for Efficient Dynamic Mesh Coding, each of which is incorporated by reference in its entirety:

[C1] https://www.sebastiansylvan.com/post/LeastSquaresTextureSeams/

[C2] https://cragl.cs.gmu.edu/seamless/

[C3] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.219.7566&rep=rep1&type=pdf

Section 4: Motion Compression for Efficient Dynamic Mesh Coding

[00145] As noted above, a static/dynamic mesh can be represented as a set of 3D Meshes M(0), M(1), M(2), ..., M(n). Each mesh M(i) can be defined by a connectivity C(i), a geometry G(i), texture coordinates T(i), and a texture connectivity CT(i). Each mesh M(i) can be associated with one or more 2D images A(i, 0), A(i, 1), ..., A(i, D-1), also called attribute maps, describing a set of attributes associated with the mesh surface. An example of an attribute would be texture information (see Figures 2-3). A set of vertex attributes could also be associated with the vertices of the mesh, such as colors, normals, transparency, etc.

[00146] Dynamic meshes may exhibit high temporal correlation because they can correspond to smooth motion and/or smooth changes in attribute characteristics. When coding a dynamic mesh as described above in Section 1, attribute temporal correlations can be efficiently exploited by video encoders to provide more efficient compression. Disclosed herein are techniques for compressing motion data, i.e., the geometry and vertex attribute changes from one frame to another, associated with such representations. An input mesh (e.g., M(i)) can be subdivided into a set of patches P(i, j), where i is the frame index and j is the patch index. The input data could come with a time consistent structure, which can ensure that at least a subset of patches in a current frame have the same connectivity as corresponding patches in a reference frame. If the input data does not have such a time consistent structure, a pre-processing step that includes applying a time consistent remeshing could be applied as described in Section 2, above.

[00147] In either case, once a time consistent mesh sequence is received, P(i, j) can be a patch j of the current frame i, and P(k, l) can be the corresponding patch l of a reference frame k. Because of the above-described time consistency, P(i, j) and P(k, l) can have the same connectivity (i.e., the same number of vertices and faces). P(i, j) and P(k, l) may thus differ only in terms of their respective positions or vertex attributes. These differing positions and/or vertex attributes can be compressed by applying quantization (Fig. 29, 2901), spatio-temporal prediction (Fig. 29, 2902), and entropy coding (Fig. 29, 2903), as described in greater detail below.

Quantization

[00148] In some embodiments, in the Quantization Module (Fig. 29, 2901), uniform quantization can be applied to vertex positions and/or vertex attributes. Using uniform quantization, the same number of quantization bits (quantization levels) may be applied to all vertices or attributes. Alternatively, in some embodiments, adaptive quantization schemes may be employed. Adaptive quantization schemes can use coarser quantization (fewer bits/levels) for some regions with finer quantization (more bits/levels) for other regions. Such adaptive quantization schemes may adaptively change the quantization step size based on user-provided input (e.g., user identification of a region of interest (ROI), such as a face for a mesh depicting a person). Additionally or alternatively, an adaptive quantization scheme can adaptively change the quantization step size based on analysis of the dynamic mesh, as described in Reference [D1], for example. The analysis can take place either online or offline. In any case, care may need to be taken to avoid introducing cracks at patch boundaries, e.g., because of different quantization levels on either side of the boundary.
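
By way of illustration, the Python sketch below applies uniform quantization with a single step size shared by all vertices, together with the matching dequantization; the bit depth, function names, and per-mesh bounding-box normalization are example choices, not requirements of the scheme.

import numpy as np

def quantize_positions(positions, bit_depth=12):
    """Uniform quantization: every vertex shares the same step size (levels in [0, 2**bit_depth - 1])."""
    lower = positions.min(axis=0)
    extent = float((positions.max(axis=0) - lower).max())
    step = extent / (2 ** bit_depth - 1) if extent > 0 else 1.0
    levels = np.round((positions - lower) / step).astype(np.int64)
    return levels, lower, step

def dequantize_positions(levels, lower, step):
    """Inverse of quantize_positions."""
    return levels.astype(np.float64) * step + lower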

Spatio-Temporal Prediction

[00149] The Prediction Module (Fig. 29, 2902) can leverage either or both of the shared connectivity information (spatial information) and the reference frame P(k, l) (temporal information) to efficiently predict geometry and/or vertex attributes associated with the vertices of the patch P(i, j). A variety of “predictors” may be implemented by the Prediction Module. These predictors may be used individually or in combination as appropriate for a given embodiment. In the following predictor descriptions, these notations are used:

• Pos(i, j, v) is the position of vertex v in the current patch P(i, j);

• Pos(i, j, v0), ..., Pos(i, j, vn-1) are the positions of the neighboring vertices v0, ..., vn-1 (neighbors of vertex v) that have already been encoded or decoded and are available to be used for prediction;

• Pos(k, l, v) is the position of vertex v in the reference patch P(k, l); and

• Pos(k, l, v0), ..., Pos(k, l, vn-1) are the positions of the neighboring vertices v0, ..., vn-1 in the reference patch P(k, l).

[00150] With the above-described notation in mind, the Prediction Module 2902 can implement different predictors as described below. As a few non-limiting examples:

• A delta temporal predictor can use temporal information (but not spatial information) to generate the residual p(i, j, v) (defining the difference between the current frame and the reference frame) as follows:

p(i, j, v) = Pos(i, j, v) - Pos(k, l, v)

• An average spatial predictor can use spatial information (but not temporal information) to generate the residual p(i, j, v) by subtracting from Pos(i, j, v) the average of the already encoded/decoded neighboring positions Pos(i, j, v0), ..., Pos(i, j, vn-1).

• An average predictor can use both temporal and spatial information to generate the residual p(i, j, v).

• A spatial parallelogram predictor can use spatial information to predict residuals based on parallelograms as follows:

p(i, j, v) = Pos(i, j, v) - n(i, j, v), where n(i, j, v) = Pos(i, j, va) + Pos(i, j, vb) - Pos(i, j, vc)

• A spatial-temporal parallelogram predictor can use both spatial and temporal information to predict residuals based on parallelograms.

• Geometry-guided predictors as described in Applicant’s co-pending U.S. Provisional Patent Applications 63/197,288, entitled “Compression of Attribute Values Comprising Unit Vectors,” and 63/197,287, entitled “Attribute Value Compression for a Three- Dimensional Mesh Using Geometry Information to Guide Prediction,” both filed June 4, 2021.

• Other predictors, such as those described in References [D6], [D7], [D8], [D9], [D10], [D11], [D12].

[00151] In some embodiments, the encoder could evaluate multiple different predictors and choose the one that produces the best rate distortion performance, i.e., the best tradeoff between the number of bits used to encode the motion information and the distortion effects of the encoded mesh as compared to the original mesh. For whatever predictor is used, the index of the predictor (i.e., the identification of the predictor used) together with the prediction residuals can be entropy encoded as described below for transmission to a decoder.
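
A toy selection loop in that spirit is sketched below (Python/NumPy): each candidate predictor produces predicted positions, the encoder measures how cheap the resulting residuals look (here with a simple L1 proxy rather than a true rate-distortion cost), and the winning index plus its residuals are handed to entropy coding. All names are illustrative.

import numpy as np

def delta_temporal_residuals(current_positions, reference_positions):
    """Delta temporal predictor: residual(v) = Pos(i, j, v) - Pos(k, l, v)."""
    return current_positions - reference_positions

def choose_predictor(current_positions, predictions):
    """Pick the predictor whose residuals are cheapest under an L1 proxy for the coding cost.

    `predictions` maps a predictor index to the positions it predicts for the current patch.
    """
    costs = {index: np.abs(current_positions - predicted).sum()
             for index, predicted in predictions.items()}
    best_index = min(costs, key=costs.get)
    return best_index, current_positions - predictions[best_index]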

Entropy Coding

[00152] As noted above, the predictor index and prediction residuals can be efficiently coded by applying an entropy encoder (2903, Fig. 29). Examples of suitable entropy encoders can include, but are not limited to, context adaptive binary arithmetic coders (CABAC) (see, e.g., Reference [D2]), Huffman encoders (see, e.g., References [D3] and [D4]) combined with universal codes (see, e.g., Reference [D5]), etc. Figure 30 shows an example employing a CABAC encoder together with Exponential Golomb codes to compress both predictor index and prediction residuals. The example of Fig. 30 is but one possible implementation of such an algorithm, and other implementations and/or other algorithms are possible and contemplated.

Other Extensions

[00153] In at least some embodiments, motion encoding of mesh data may be extended in other ways. As one example, one could use the MPEG FAMC (Frame-based Animated Mesh Compression) standard. See, e.g., References [D15] and [D16]. In at least some embodiments, wavelet-based coding schemes, such as the ones described in References [D13] and [D14], could be used. In at least some embodiments, principal component analysis (PCA) based coding (as described in Reference [D17]) could be used.

[00154] References for the preceding section relating to Motion Compression for Efficient Dynamic Mesh Coding, each of which is incorporated by reference in its entirety:

[D1] https://www.sciencedirect.com/topics/computer-science/adaptive-quantization

[D2] https://en.wikipedia.org/wiki/Context-adaptive_binary_arithm etic_coding

[D3] https://en.wikipedia.org/wiki/Huffman_coding

[D4] https://en.wikipedia.org/wiki/Asymmetric_numeral_systems

[D5] https://en.wikipedia.org/wiki/Universal_code_(data_compression)

[D6] L. Ibarria and J. Rossignac. Dynapack: space-time compression of the 3D animations of triangle meshes with fixed connectivity. In Eurographics Symposium on Computer Animation, pages 126-133, San Diego, United States, 2003.

[D7] N. Stefanoski and J. Ostermann. Connectivity-guided predictive compression of dynamic 3D meshes. In IEEE International Conference on Image Processing, pages 2973-2976, Atlanta, United States, 2006.

[D8] J.-H. Yang, C.-S. Kim, and S.-U. Lee. Compression of 3-D triangle mesh sequences based on vertex-wise motion vector prediction. IEEE Transactions on Circuits and Systems for Video Technology, 12(12):1178-1184, 2002.

[D9] N. Stefanoski, P. Klie, X. Liu, and J. Ostermann. Scalable linear predictive coding of time-consistent 3D mesh sequences. In The True Vision - Capture, Transmission and Display of 3D Video, pages 1-4, Kos Island, Greece, 2007.

[D10] N. Stefanoski, X. Liu, P. Klie, and J. Ostermann. Layered predictive coding of time-consistent dynamic 3D meshes using a non-linear predictor. In IEEE International Conference on Image Processing, pages 109-112, San Antonio, United States, 2007.

[D11] V. Libor and S. Vaclav. Coddyac: Connectivity driven dynamic mesh compression. In 3DTV International Conference: True Vision-Capture, Transmission and Display of 3D Video, Kos Island, Greece, 2007.

[D12] M. Sattler, R. Sarlette, and R. Klein. Simple and efficient compression of animation sequences. In Eurographics Symposium on Computer Animation, pages 209-217, Los Angeles, United States, 2005.

[D13] I. Guskov and A. Khodakovsky. Wavelet compression of parametrically coherent mesh sequences. In Eurographics Symposium on Computer Animation, pages 183-192, Grenoble, France, 2004.

[D14] J.W. Cho, M.S. Kim, S. Valette, H.Y. Jung, and R. Prost. 3D dynamic mesh compression using wavelet-based multiresolution analysis. In IEEE International Conference on Image Processing, pages 529-532, Atlanta, United States, 2006.

[D15] K. Mamou, T. Zaharia, F. Preteux. A skinning approach for dynamic 3D mesh compression. Computer Animation and Virtual Worlds, Vol. 17(3-4), July 2006, pp. 337-346.

[D16] K. Mamou, N. Stefanoski, H. Kirchhoffer, K. Muller, T. Zaharia, F. Preteux, D. Marpe, J. Ostermann. The new MPEG-4/FAMC standard for animated 3D mesh compression. 3DTV Conference (3DTV-CON 2008), Istanbul, Turkey, May 2008.

[D17] K. Mamou, T. Zaharia, F. Preteux, A. Kamoun, F. Payan, M. Antonini. Two optimizations of the MPEG-4 FAMC standard for enhanced compression of animated 3D meshes. IEEE International Conference on Image Processing (2008)

[D18] https://www.researchgate.net/publication/224359352_Two_Optimizations_of_the_MPEG-4_FAMC_standard_for_Enhanced_Compression_of_Animated_3D_Meshes/link/0912f50b3802603f34000000/download

Section 5: V-Mesh Bitstream Structure Including Syntax Elements and Decoding Process with Reconstruction

[00155] To better support Video Dynamic Mesh Coding (V-DMC) in the context of the V3C specification, new syntax elements may be introduced for handling the mesh information. Unlike V-PCC, V-DMC can be seen as a scalable coding solution where an initial representation of the mesh is provided through what is referred to as the base mesh. Additional information is then included through the V3C framework, which enhances that representation. One of the enhancements introduced here is the inclusion of the base mesh information in a new substream, the base mesh data substream. This substream is, similar to the atlas and video coded substreams, a timed series of coded mesh information. For more information about this substream, we refer the reader to Reference [D5].

[00156] Also, in V-DMC, the encoded geometry data are actually transformed and quantized data, and their transformations may be inverted before being used for the reconstruction process. In particular, after decoding the geometry video, the decoded data may also be processed through what is referred to as a “displacement” decoder. This decoder performs a dequantization process followed by an inverse transform process, as specified through instructions in the atlas data substream, which includes information about the quantization as well as the transform method used when encoding the geometry information.

[00157] Similar to the geometry information, additional processing may be performed on the base mesh information after its decoding. More specifically, after decoding the base mesh data, the resulting meshes may be subdivided through a mesh subdivision process. This process requires information, e.g., the subdivision method to be used, among others, which may be indicated/included in the atlas data substream. The subdivided/resampled meshes are then refined by adding the displacements from the geometry displacement decoder. Additional information from the atlas data substream may be used to perform this final process. For example, the subpart id may be used to pair the displacements from the displacement decoder with the vertices in the resampled meshes.

[00158] To assist with the understanding of the concepts introduced in V-DMC we first introduce some essential terms and definitions:

Base meshes are the output of the base mesh substream decoder. Each base mesh can have a smaller number of vertices than the expected output of the V-DMC decoder.

Resampled base meshes are the output of the mesh subdivision process. Each resampled base mesh can have the same number of vertices as the expected output of the V-DMC decoder.

A displacement video is the output of the displacement decoder. The inputs to the process are the decoded geometry video as well as information from the atlas on how to interpret/process this video. The displacement video contains displacement values to be added to the corresponding vertices.

A facegroupId is one of the attribute types assigned to each triangle face of the resampled base meshes. FacegroupId can be compared with the ids of the subparts in a patch to determine the facegroups corresponding to the patch. If facegroupId is not conveyed through the base mesh substream decoder, it is derived from the information in the atlas data substream.

A submeshId is one of the attribute types assigned to each vertex of the resampled base meshes. SubmeshId can be compared with the ids of a segment to determine the vertices corresponding to the segment. If it is not conveyed through the base mesh substream decoder, it is derived from the information in the atlas data substream.

[00159] As illustrated in Figure 31, a 3D textured static and/or dynamic mesh may be encoded by V-mesh encoder 3101 into a V-mesh bitstream 3102 for subsequent decoding by a V-mesh decoder 3103. The V-mesh bitstream structure may be an extension of V3C for efficient processing, as discussed in more detail below. Details pertaining to encoding of the 3D textured static and/or dynamic meshes are discussed elsewhere herein, e.g., Section 1.

[00160] Figure 32 illustrates an embodiment of a V-mesh decoder framework 3200. When a v-mesh bitstream is provided to the decoder, the decoder demultiplexes (3201) the bitstream into V3C parameter sets 3202, Mesh subbitstreams 3203, Geometry subbitstreams 3204, Attribute subbitstreams 3205 and Atlas data subbitstreams 3206, which may be incorporated into the V-mesh bitstream via the encoding process described above in Section 1. The parameter sets 3202 may be decoded via a parameter set decoder 3203 and the atlas data subbitstreams 3206 may be decoded via an atlas data subbitstream decoder 3208.

[00161] With the information provided through Parameter sets 3202 and Atlas data subbitstreams 3206, the other subbitstreams can be converted to proper forms through the normalization processes. For example, the mesh subbitstream 3203 can be decoded by the mesh subbitstream decoder 3209 into a base mesh, which can be normalized via the mesh subdivision/mesh normalization process 3210. The geometry subbitstream 3204 can be decoded by the video decoder 3211 into geometry images. The geometry images can be normalized via the displacement decoder/geometry normalization process 3212, resulting in displacement values. The attribute subbitstream 3205 can be decoded by the video decoder 3213 and the decoded output can be normalized by the attribute normalization process 3214, resulting in attribute images. (Video decoder 3211 can be the same as video decoder 3209 or a different video decoder as selected for a particular implementation.) The output mesh geometry, texture coordinates, and connectivities are calculated by the mesh position refinement process (3215), which combines the output of the mesh normalization process 3210 and the outputs of the geometry normalization process 3212 to derive the resultant meshes (e.g., described by mesh geometry, texture coordinates, and connectivity). In comparison with Figs. 18 and 19 above, the geometry normalization process includes inverse quantization and an inverse wavelet transformation. The attribute normalization process can also include color space conversion. The mesh normalization process can include some form of reconstructing the deformed mesh as described above.

[00162] As mentioned above, compressed base meshes may be signalled in a new substream, named the Base Mesh data substream (e.g., with a unit type V3C_MD). As with other v3c units, the unit type and its associated v3c parameter set id and atlas id are signalled in the v3c_unit_header(). The suggested format of the mesh data substream is discussed further in Reference [D5]. To facilitate signaling of the compressed base mesh, the encoded bitstreams may be encoded in a manner that extends V3C. Accordingly, described below are enumerated syntax element examples that may be used specifically for 3D textured static and/or dynamic mesh decoding, as well as existing syntax elements that may be configured for use with 3D textured static and/or dynamic mesh decoding.

V3C Parameter Set Extension Elements

[00163] Starting first with a discussion of modifications to the V3C parameter set 3202 to support V-DMC, additional parameters and/or modified use of parameters in the V3C parameter set 3202 may be implemented. Below are examples of such additional parameters and/or modified use of existing parameters in the V3C parameter set 3202 to support V-DMC.

[00164] V3C Unit type, V3C_MD - Identifier V3C_MD, tentatively vuh_unit_type=5, is assigned to indicate mesh subbitstreams. With this identifier, v3c_unit_header() and v3c_unit_payload() include processes for mesh subbitstreams as follows:

[00165] vuh_mesh_data_sample_stream_flag indicates that the mesh subbitstream has a format of sample stream as defined herein. When the flag is 0, the mesh subbitstream is fully decoded with external methods.

[00166] vuh mesh data motion field present flag indicates the mesh subbitstream contains data which can be used for the inter-prediction between mesh data in the mesh subbitstream. In some embodiments, vuh_mesh_data_motion_field_present_flag indicates the mesh subbitstream requires more than one decoder engine to decode the contained data.
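For illustration only, the following sketch models the header fields of a V3C_MD unit described above as a simple container. The container type and the tentative constant value are hypothetical stand-ins; the normative bit layout is defined by the v3c_unit_header() syntax, which is not reproduced in this text.

from dataclasses import dataclass

V3C_MD = 5  # tentative vuh_unit_type value for mesh subbitstreams, as noted above

@dataclass
class V3CUnitHeader:
    vuh_unit_type: int
    vuh_v3c_parameter_set_id: int
    vuh_atlas_id: int
    # fields relevant only to V3C_MD units in this sketch
    vuh_mesh_data_sample_stream_flag: int = 0
    vuh_mesh_data_motion_field_present_flag: int = 0

def is_sample_stream_mesh_unit(header: V3CUnitHeader) -> bool:
    # True when the mesh subbitstream uses the sample stream format defined herein;
    # when the flag is 0 the mesh subbitstream is decoded with external methods.
    return header.vuh_unit_type == V3C_MD and header.vuh_mesh_data_sample_stream_flag == 1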

[00167] In some embodiments, an extension may be introduced in the v3c_parameter_set syntax structure to facilitate handling dynamic mesh coding (V-DMC). The following discussion focuses on several new parameters that may be introduced in this extension to handle V-DMC.

[00168] v3c_vmesh_extension in the V3C Parameter set 3202 is an extension that provides several new parameters to the V3C Parameter set 3202 to enable V-DMC. To signal basemesh information in the V3C parameter set 3202, the extension flag vps_extension_present_flag may be set to 1 and (vps_extension_8bits >> N) & 1 may be 1. Here "N" is smaller than 8 and may be decided when the 2nd edition of 23090-5 (Reference [E2]) is finalized. In the following example, N is set as 4. v3c_vmesh_extension can be signaled as follows:
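The normative syntax table is omitted from this text. As a minimal sketch, the extension-flag condition described above can be checked as follows; N = 4 follows the example in the text and is subject to change when the 2nd edition of 23090-5 is finalized.

def vmesh_extension_signalled(vps_extension_present_flag: int,
                              vps_extension_8bits: int,
                              n: int = 4) -> bool:
    # True when the v3c_vmesh_extension is signalled in the V3C parameter set
    return vps_extension_present_flag == 1 and ((vps_extension_8bits >> n) & 1) == 1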

[00169] vps_vmesh_extension_length_minus1 indicates the size of vps_v3c_vmesh_extension.

[00170] In some embodiments, the following parameters may be added via the V3C Parameterset 3202 extension:

[00171] vps_geometry_frame_width and vps_geometry_frame_height indicate the nominal width and height of the geometry video bitstream. vps_disp_frame_width and vps_disp_frame_height overwrite the geometry video bitstreams. vps_frame_width and vps_frame_height correspond only to the width and the height of the non-geometry video bitstream.

[00172] vps_atlas_data_substream_present_flag indicates the presence of the atlas subbitstream 3206 in the bitstream. If the flag is false, the atlas substreams 3206 should not be present in the bitstream. If such bitstreams are present, they should be ignored. In some embodiments, this flag is not signaled but set to 1 always for the v-mesh codec.

[00173] vps_mesh_substream_present_flag indicates the presence of the mesh subbitstream in the bitstream. In some embodiments, this flag is not signaled but set to 1 always for the v-mesh codec.

[00174] basemesh_information may be added in the V3C Parameter set to signal information for the mesh subbitstream; v3c_parameter_set is extended to add this element. The basemesh information may include the following:

[00175] mi_datatype_count indicates the number of different data types in the mesh bitstream. It is set as 1 when vuh_mesh_data_motion_field_present_flag is false or when vuh_mesh_data_sample_stream_flag is false.

[00176] mi_type_id[ atlasID ][ i ] indicates the datatype. For example, it can be coded to indicate mesh data or motion fields.

[00177] mi_codec_id[ atlasID ][ i ] indicates the codec used to decode the associated data type. The current techniques do not limit the format of this element. The value can be a 4cc code or a number explicitly defined in the v-mesh codec. For example, mi_codec_id[atlasID][0]=DRACO indicates the data with data type=0 is decoded by a mesh codec, DRACO. mi_codec_id[atlasID][0]=INTERNAL indicates the data with data type=0 is decoded by a decoder defined in the v-mesh codec.
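As a sketch of how mi_type_id and mi_codec_id could drive decoding, the following dispatch table is illustrative only; the codec identifiers, decoder stubs and lookup structure are hypothetical stand-ins, and the actual values may be 4CC codes or values defined in the v-mesh codec.

# Hypothetical codec identifiers and decoder stubs used only for illustration.
DRACO, INTERNAL = "DRAC", "INTL"

def decode_draco(payload):     # stand-in for an external mesh codec such as Draco
    raise NotImplementedError("external mesh codec not modeled in this sketch")

def decode_internal(payload):  # stand-in for a decoder defined in the v-mesh codec
    raise NotImplementedError("v-mesh internal decoder not modeled in this sketch")

DECODERS = {DRACO: decode_draco, INTERNAL: decode_internal}

def decode_data_unit(atlas_id, data_type, payload, mi_codec_id):
    # Pick the decoder associated with this data type via mi_codec_id[atlasID][type]
    return DECODERS[mi_codec_id[atlas_id][data_type]](payload)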

[00178] mi_basemesh_geometry_3d_bit_depth_minus1 indicates the nominal bit depth of positions of the meshes conveyed in the mesh subbitstream.

[00179] mi_basemesh_geometry_MSB_align_flag indicates how the decoded basemesh positions are converted to samples at the nominal geometry bit depth.

[00180] mi_basemesh_meshattribute_count indicates the number of attributes of the meshes conveyed in the mesh subbitstream, such as color, normal or reflectance.

[00181] mi_basemesh_meshattribute_type_id indicates the attribute type of the meshes conveyed in the mesh subbitstream, such as color, normal, reflectance etc.

[00182] mi_basemesh_meshattribute_bit_depth_minus1 indicates the bit depth of a basemesh attribute.

[00183] mi_basemesh_meshattribute_MSB_align_flag indicates how the decoded basemesh attributes are converted to samples at the nominal bit depth.

[00184] In some embodiments, the following parameters may be added via the V3C Parameterset 3202 extension:

[00185] vps_ext_atlas_data_substream_present_flag indicates the presence of an Atlas Data substream in the bitstream. If the flag is false, the atlas substreams should not be present in the bitstream. If such bitstreams are present, they should be ignored.

[00186] vps_ext_mesh_data_substream_present_flag indicates the presence of a Mesh Data substream in the bitstream. If the flag is false, the base mesh substreams should not be present in the bitstream. If such bitstreams are present, they should be ignored.

[00187] vps_ext_mesh_data_facegroup_id_attribute_present_flag equal to 1 indicates that one of the attribute types present in the base mesh data stream is the facegroup Id.

[00188] vps_ext_mesh_data_submesh_id_attribute_present_flag equal to 1 indicates that one of the attribute types for the base mesh data stream is the submesh Id.

[00189] vps_ext_mesh_data_attribute_count indicates the number of total attributes in the base mesh, including the attributes signalled through the base mesh data substream and the attributes signalled in the video substreams (using ai_attribute_count). When vps_ext_mesh_data_facegroup_id_attribute_present_flag equals 1, it shall be greater than or equal to ai_attribute_count+1. When vps_ext_mesh_data_submesh_id_attribute_present_flag equals 1, it shall be greater than or equal to ai_attribute_count+1. This can be constrained by profile/levels.

[00190] The types of attributes that are signalled through the base mesh substream and not through the video substreams are signaled as vps_ext_mesh_attribute_type data types.

[00191] When vps_ext_mesh_data_facegroup_id_attribute_present_flag equals 1, one of the vps_ext_mesh_attribute_type may be a facegroup_id.

[00192] When vps_ext_mesh_data_submesh_id_attribute_present_flag equals 1, one of the vps_ext_mesh_attribute_type may be a submesh_id.

[00193] vps_ext_mesh_data_substream_codec_id indicates the identifier of the codec used to compress the base mesh data. This codec may be identified through profiles, a component codec mapping SEI message, or through means outside this document.

[00194] vps_ext_attribute_frame_width[i] and vps_ext_attribute_frame_height[i] indicate the corresponding width and height of the video data corresponding to the i-th attribute among the attributes signalled in the video substreams.

Mesh SubBitStream

[00195] As mentioned above, a Mesh data bitstream may be added to an encoded bitstream to facilitate V-DMC. A discussion of this bitstream is provided below.

[00196] Mesh sub-bitstream 3203 contains data to generate base meshes to be fed to the mesh subdivision/mesh normalization process 3210. Mainly, it contains one or more parameter sets and one or more mesh frame layers, each of which consists of a data unit. Each data unit has its data type and size in its header. Based on the data type, a corresponding decoding engine indicated in basemesh_information() is used.

[00197] In some embodiments, the information related to the data type id (mi_type_id) and data codec id (mi_codec_id) can be signaled in the mesh subbitstream as a part of the Mesh sequence parameter set. Further, in some embodiments, the data type and data codec id can be signaled per data unit.

[00198] When vuh_mesh_data_sample_stream_flag is true, the subbitstream has the sample stream format. sample_stream_nal_header() may include two values, ssnh_unit_size_precision_bytes_minus1 and ssnh_reserved_zero_5bits, as defined in Annex D.2.1 in Reference [E1], and sample_stream_mesh_nal_unit() can be the same as defined in Annex D.2.2 in Reference [E1] or as defined in the v-mesh codec. An example of the case is as follows:


[00199] mesh_nal_unit() has a header and rbsp bytes. The header can be the same as 8.3.5.1 nal_unit() in Reference [E1] or as defined in the v-mesh codec. An example of the case is as follows:

[00200] mesh_nal_unit_type indicates the NAL type of the current mesh nal unit. It can be assigned the reserved values of the Nal unit type codes (e.g., Table 4 in Reference [E1]) or can be defined in v-mesh.

[00201] mesh_unit_data_type indicates the data type of the mesh nal unit. For example, mesh_unit_data_type=MESH_MSPS when the data unit is a sequence parameter set. When mesh_nal_unit_type indicates the nalu type of the current mesh is a sequence parameter set, mesh_unit_data_type should be MESH_MSPS; in some embodiments, it is not signaled in that case. mesh_unit_data_type=MESH_BODY when the data unit is coded mesh data which can be decoded with a designated mesh codec such as Draco, and mesh_unit_data_type=MESH_MOTION when the data unit contains motion vectors between two meshes which can be decoded by a designated entropy codec. The data type must be associated with one of the mi_type_id values signaled in basemesh_information. Designated codecs are decided based on mesh_unit_data_type. In some embodiments, the mesh nal unit header can signal only mesh_nal_unit_type. In some embodiments, mesh_unit_data_type can be signaled in mesh_frame_header() instead of mesh_nal_unit_header().

[00202] The Mesh sequence parameter set (MSPS) contains information on the mesh data bitstream. An example of mesh_sequence_parameter_set_rbsp is provided in the following.

[00203] mesh_ref_list_struct is equivalent to the ref_list_struct in Reference [E1].

[00204] In some embodiments, some of the information signaled in basemesh_information(), such as mi_geometry_MSB_align_flag, mi_meshattribute_count, mi_meshattribute_type_id, mi_meshattribute_bit_depth_minus1 and mi_meshattribute_MSB_align_flag, can be signaled in mesh_sequence_parameter_set_rbsp().

[00205] The Mesh frame layer unit RBSP is signaled when mesh_unit_data_type does not indicate the data unit is a mesh_sequence_parameter_set.

[00206] Mesh frame header is signaled per frame.

[00207] mfh_mesh_sequence_parameter_set_id indicates the id of the mesh sequence parameter set used for this mesh data unit.

[00208] mfh_frame_type indicates if the data may require another mesh data to generate the corresponding mesh. For example, mfh_frame_type can be I_FRAME, which indicates it does not require any other meshes to generate a mesh; mfh_frame_type can be P_FRAME or SKIP_FRAME, which indicate it requires other meshes to generate a mesh corresponding to the data unit.

[00209] In some embodiments, mfh_frame_type is not signaled but derived from mesh_unit_data_type. When mesh_unit_data_type is MESH_BODY, mfh_frame_type is set as I_FRAME and when mesh_unit_data_type is MESH_MOTION, mfh_frame_type is set as P_FRAME.

[00210] mfh_mesh_frm_order_cnt_lsb indicates the frame index of the mesh data.

[00211] mfh_num_of_reference_frame indicates the number of reference frames used for this frame.

[00212] In some embodiments, mfh_num_of_reference_frame is not signaled but set as 1.

[00213] mfh_frm_diff indicates the difference between the current frame index and the reference frame index when mfh_frame_type is not I_FRAME.

[00214] In some embodiments, instead of signaling mfh_frm_diff, a reference list structure can be used as described below. The implementation is equivalent to the one in the V3C spec, Reference [E1].

[00215] mesh_frame_data has a chunk of data which can be decoded using a designated codec. For example, when mesh_unit_data_type = MESH_BODY and the codec designated for the data type is Draco, a chunk of data sized ssnu_mesh_nal_unit_size - size of mesh nal unit header - size of mesh frame header is fed to the codec and a mesh is generated. When mesh_unit_data_type = MESH_MOTION and the codec designated for the data type is INTERNAL, the chunk sized ssnu_mesh_nal_unit_size - size of mesh nal unit header - size of mesh frame header is decoded with the decoding process provided by the v-mesh codec.
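A sketch of the payload-size bookkeeping described above is given below. The helper name and the assumption that the mesh NAL unit header and mesh frame header sit at the front of the NAL unit bytes are hypothetical; the sizes themselves come from the parsed headers.

def mesh_frame_payload(nal_unit_bytes, ssnu_mesh_nal_unit_size,
                       mesh_nal_unit_header_size, mesh_frame_header_size):
    # Chunk of mesh_frame_data handed to the designated codec:
    # ssnu_mesh_nal_unit_size minus the mesh NAL unit header and mesh frame header sizes.
    start = mesh_nal_unit_header_size + mesh_frame_header_size
    length = (ssnu_mesh_nal_unit_size
              - mesh_nal_unit_header_size
              - mesh_frame_header_size)
    return nal_unit_bytes[start:start + length]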

[00216] In some embodiments, the mesh subbitstream can be separated into two or more independent subbitstreams. Each subbitstream contains only one mesh_unit_data_type. Subsequently, basemesh_information() is duplicated as many times as the number of subbitstreams, with the information updated accordingly.

[00217] In some embodiments, to signal the attribute indices for submesh_id and facegroup_id, the indices of the elements can be explicitly signalled after the present flags, as illustrated below.

[00218] In some embodiments, vps_ext_mesh_attribute_type = FacegroupId can appear only once, and vps_ext_mesh_attribute_type = SubmeshId can appear only once, among the vps_ext_mesh_data_attribute_count[ j ] attributes.

[00219] In some embodiments, vps_ext_mesh_data_submesh_id_attribute_present_flag is not signalled but the output of the base mesh substream decoder is a sequence of submesh frames instead of a sequence of mesh frames.

Atlas Parameter Set Sequence Extension

[00220] To support V-DMC, an Atlas Data SubBitStream 3206 sequence extension may also be provided. A discussion of some possible parameters of the extension is provided below.

[00221] asps_vmesh_extension is an extension in the atlas sequence parameter set. To signal information related to v-mesh in the atlas sequence parameter set, the extension flag asps_extension_present_flag should be set to 1 and (asps_extension_7bits >> N) & 1 should be 1. Here "N" is smaller than 7 and may be decided when the V3C 2nd edition (Reference [D2]) is finalized. The N-th bit is tentatively named asps_vmesh_extension_present_flag in the following example, and N is set as 6. asps_vmesh_extension can be signaled as follows:

[00222] In some embodiments, the parameters of the Atlas SubBit Stream 3206 sequence extension may include:

[00223] asps_vmc_ext_atlas_width_displacement and asps_vmc_ext_atlas_height_displacement indicate the width and the height of the atlas.

[00224] In some embodiments, asps_vmc_ext_atlas_width_displacement and asps_vmc_ext_atlas_height_displacement are not signaled; asps_frame_width and asps_frame_height are used instead.

[00225] asps_vmc_ext_prevent_geometry_video_conversion_flag prevents the outputs of the geometry video stream from being converted. When the flag is true, the outputs are used as they are, without any conversion process from Annex B in Reference [E1]. When the flag is true, the size of the geometry video shall be the same as the nominal video sizes indicated in the bitstream.

[00226] asps_vmc_ext_prevent_attribute_video_conversion_flag prevents the outputs of attribute video streams from being converted. When the flag is true, the outputs are used as they are, without any conversion process from Annex B in Reference [E1]. When the flag is true, the size of the attribute video shall be the same as the nominal video sizes indicated in the bitstream.

[00227] In some embodiments, asps_vmc_ext_prevent_geometry_video_conversion_flag and asps_vmc_ext_prevent_attribute_video_conversion_flag can be in the V3C Parameter set.

[00228] asps_vmc_ext_geometry_3d_bitdepth indicates the bit depth of positions of the output meshes.

[00229] asps_vmc_ext_coordinate_2d_bitdepth indicates the bit depth of texture coordinates of the output meshes.

[00230] A number of subdivision approaches may be performed. In some embodiments, asps_vmc_ext_subdivision_method and asps_vmc_ext_subdivision_iteration_count_minus1 signal information about the subdivision method.

[00231] asps_vmc_ext_subdivision_method is given to the mesh normalization process and indicates the method to increase the number of vertices in the base meshes. In some embodiments, when this parameter is set to 0 (or other pre-determined value), a midpoint subdivision method is used. In some embodiments, when this parameter is 0, the base meshes are not modified/normalized through the mesh normalization process. In some embodiments, asps_vmc_ext_subdivision_method can indicate any resampling method to be applied to resample the vertices in the basemesh. In this case asps_vmc_ext_subdivision_iteration_count_minus1 might not be signaled.

[00232] asps_vmc_ext_subdivision_iteration_count_minus1 indicates the number of iterations the subdivision method requires. In some embodiments, when asps_vmc_ext_subdivision_method is 0, it is not signaled but set to 0.

[00233] asps_vmc_ext_displacement_coordinate_system indicates the coordinate system applied during the conversion process from geometry images to displacements as described above.

[00234] asps_vmc_ext_transform_index indicates a method used to convert pixel values from the geometry image to displacements. For example, when set to 0, this may indicate it is NONE, and no transform is applied to the pixel values from the output geometry images but the values are directly added to the output of the mesh normalization process. In some embodiments, when set to 1, the transform is set to linear lifting. In such embodiments, the necessary parameters for this method may be signaled as vmc_lifting_transform_parameters.

[00235] In some embodiments, if the method is not wavelet transform described above, related variables can be signaled in a SEI message.

[00236] asps_vmc_ext_segment_mapping_method indicates how to map a segment Id to each vertex. When asps_vmc_ext_segment_mapping_method is set to 0, this may indicate that the decoded base mesh includes an attribute of such an id. When asps_vmc_ext_segment_mapping_method is set to 1, this may indicate that the submesh Id is derived by the patch information in a tile. Each tile in the atlas data substream corresponds to one submesh. Otherwise, the base mesh is segmented by a method as defined by the syntax element asps_vmc_ext_segment_mapping_method.

[00237] asps_vmc_ext_patch_mapping_method indicates how to map a subpart of a base mesh or a submesh to a patch. When asps_vmc_ext_patch_mapping_method is equal to 0, all the triangles in the segment indicated by mdu_segment_id are associated with the current patch. In this case, there is only one patch associated with the segment. asps_vmc_ext_patch_mapping_method cannot be 0 when asps_vmc_ext_segment_mapping_method is equal to 1.

[00238] In some embodiments, all triangles in the segments indicated in the atlas tile header are associated with the current patch. In this case, a tile has only one patch.

[00239] When asps_vmc_ext_patch_mapping_method is equal to 1, the indices of the subparts corresponding to patches are explicitly signalled in the mesh patch data unit.

[00240] In some embodiments, when asps_vmc_ext_patch_mapping_method is equal to 2, the indices of the triangle faces corresponding to a patch are explicitly signalled in the mesh patch data unit.

[00241] In some embodiments, when asps_vmc_ext_patch_mapping_method is equal to 2, the indices of the vertices corresponding to a patch are explicitly signalled in the mesh patch data unit.

[00242] In other cases, the vertices (or triangle faces) in the segment indicated by mdu_segment_id are further segmented into subparts by the method indicated by asps_vmc_ext_patch_mapping_method. In this case, the i-th subpart determined by the corresponding method is mapped to the i-th patch. Each mesh patch corresponds to only one subpart of the base mesh.

[00243] In some embodiments, when asps_vmc_ext_patch_mapping_method > 2, multiple subparts can be mapped to a patch. In this case, the i-th to (i + mdu_num_subparts[patchIndex] - 1)-th subparts correspond to the patchIndex-th patch, where i is the accumulation of mdu_num_subparts up to patchIndex - 1.
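A sketch of this subpart-to-patch mapping: the first subpart index of each patch is the running sum of mdu_num_subparts over the preceding patches. The helper below is illustrative only, not a normative derivation.

def subparts_for_patches(mdu_num_subparts):
    # For each patch, return the list of subpart indices mapped to it.
    # Patch patchIdx gets subparts i .. i + mdu_num_subparts[patchIdx] - 1,
    # where i is the accumulated mdu_num_subparts of the preceding patches.
    mapping, i = [], 0
    for count in mdu_num_subparts:
        mapping.append(list(range(i, i + count)))
        i += count
    return mapping

# e.g. subparts_for_patches([2, 1, 3]) -> [[0, 1], [2], [3, 4, 5]]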

[00244] asps_vmc_ext_t_junction_removing_method indicates the method to remove t-junctions created by different subdivision methods or by a subdivision iteration of two triangles sharing an edge.

[00245] asps_vmc_ext_multilocated_vertex_merge_method indicates the method to merge multiple geometry positions for one vertex caused when the vertex is shared by two different patches.

[00246] asps_vmc_ext_num_attribute indicates the total number of attributes that the corresponding mesh carries. Its value shall be less than or equal to vps_ext_mesh_data_attribute_count.

[00247] asps_vmc_ext_attribute_type is the type of the i-th attribute and it shall be one of ai_attribute_type_ids or vps_ext_mesh_attribute_types.

[00248] asps_vmc_ext_direct_attribute_projection_enabled_flag indicates that the 2d locations where attributes are projected are explicitly signalled in the mesh patch data units. Therefore, the projection id and orientation index used in Reference [D4] can also be signalled.

[00249] In some embodiments, asps_vmc_ext_transform_index is not signaled but always set as the wavelet transform described above.

[00250] To perform transforms, in some embodiments, transform parameters may be provided as follows:

[00251] As illustrated, in the current embodiment, asps_vmc_extension_transform_parameters contains the parameters:

[00252] asps_vmc_ext_lifting_skip_update, asps_vmc_ext_lifting_quantization_parameters_x, asps_vmc_ext_lifting_quantization_parameters_y, asps_vmc_ext_lifting_quantization_parameters_z, asps_vmc_ext_log2_lifting_lod_inverseScale_x, asps_vmc_ext_log2_lifting_lod_inverseScale_y, asps_vmc_ext_log2_lifting_lod_inverseScale_z, asps_vmc_ext_log2_lifting_update_weight and asps_vmc_ext_log2_lifting_prediction_weight, which are conversion-related variables.

[00253] In some embodiments, asps_vmc_extension_transform_parameters can be signaled persistently. In some embodiments, asps_vmc_extension_transform_parameters can be signaled only when asps_vmc_ext_transform_index indicates that the method used for conversion is the wavelet transform described above.

[00254] asps_vmc_ext_num_attribute indicates the number of attributes, apart from the geometry images, to which a transform is applied.

[00255] asps_vmc_ext_attribute_type indicates the attribute type the following transform is applied to.

[00256] asps_vmc_ext_attribute_transform_index indicates the transform which is applied to the attribute type asps_vmc_ext_attribute_type.

[00257] asps_vmc_extension_transform_parameters_present_flag indicates asps_vmc_extension_transform_parameters() is signaled for the attribute type. If the flag is false, the values are copied from the previously signaled attribute type. In some embodiments, the attribute type the asps_vmc_extension_transform_parameters() is copied from can be explicitly signaled.

[00258] In some embodiments, all the syntax elements except asps_vmc_ext_geometry_3d_bitdepth and asps_vmc_ext_coordinate_2d_bitdepth can be signaled in the V3C parameter set or v3c_unit_header. In some embodiments, all the syntax elements except asps_vmc_ext_geometry_3d_bitdepth and asps_vmc_ext_coordinate_2d_bitdepth can be signaled in the atlas frame parameter set.

[00259] Some syntax elements in the atlas sequence parameter set can be overridden by the same syntax elements in the atlas frame parameter set. afps_vmc_extension() is signaled when the first bit of afps_extension_8bits is 1.

[00260] In some embodiments, the parameters of the Atlas SubBit Stream 3206 sequence extension may include:

[00261] The lifting transform parameters may include:

[00262] In some embodiments, asps_vmc_ext_attribute_transform_index, asps_vmc_extension_transform_parameters_present_flag, vmc_lifting_transform_parameter and asps_vmc_ext_direct_attribute_projection_enabled_flag can be signalled only for the attributes signalled through the video streams.

Atlas Parameter Set 3206 Frame Extension

[00263] To support V-DMC, an Atlas Data SubBitStream 3206 frame extension (afps) may also be provided. A discussion of some possible parameters of the extension are provided below.

[00264] afps_vmc_ext_direct_attribute_projection_enabled indicates direct attribute projection can be used.

[00265] afps_vmc_ext_overriden_flag indicates whether any additional information is to be signaled to override the syntax elements in the ASPS.

[00266] afps_vmc_ext_subdivision_enable_flag indicates afps_vmc_ext_subdivision_method and afps_vmc_ext_subdivision_iteration_count_minus1 are used instead of asps_vmc_ext_subdivision_method and asps_vmc_ext_subdivision_iteration_count_minus1.

[00267] afps_vmc_ext_displacement_coordinate_system_enable_flag indicates afps_vmc_ext_displacement_coordinate_system is used instead of asps_vmc_ext_displacement_coordinate_system.

[00268] afps_vmc_ext_transform_index_enable_flag indicates afps_vmc_ext_transform_index is used instead of asps_vmc_ext_transform_index.

[00269] afps_vmc_ext_transform_parameters_enable_flag indicates afps_vmc_extension_transform_parameters() is signaled to be used instead of asps_vmc_extension_transform_parameters().

[00270] afps_vmc_ext_num_attribute_enable_flag indicates afps_vmc_ext_num_attribute attributes use overridden parameters.

[00271] afps_vmc_ext_attribute_type indicates an attribute type.

[00272] afps_vmc_ext_attribute_transform_index_enable_flag indicates afps_vmc_ext_attribute_transform_index is used instead of asps_vmc_ext_attribute_transform_index of the corresponding attribute type.

[00273] afps_vmc_ext_attribute_transform_parameters_enable_flag indicates afps_vmc_extension_transform_parameters() is signaled to be used instead of asps_vmc_extension_transform_parameters() for the corresponding attribute type.

[00274] In some embodiments, all the parameters can be always signaled without enable flags.

[00275] afps_vmc_ext_transform_lifting_quantization_parameters_enable_flag indicates afps_vmc_ext_transform_lifting_quantization_parameters_x, afps_vmc_ext_transform_lifting_quantization_parameters_y and afps_vmc_ext_transform_lifting_quantization_parameters_z are signaled to be used instead of asps_vmc_ext_transform_lifting_quantization_parameters_x, asps_vmc_ext_transform_lifting_quantization_parameters_y and asps_vmc_ext_transform_lifting_quantization_parameters_z, respectively.

[00276] afps_vmc_ext_transform_log2_lifting_lod_inverseScale_enable_flag indicates afps_vmc_ext_transform_log2_lifting_lod_inverseScale_x, afps_vmc_ext_transform_log2_lifting_lod_inverseScale_y and afps_vmc_ext_transform_log2_lifting_lod_inverseScale_z are signaled to be used instead of asps_vmc_ext_transform_log2_lifting_lod_inverseScale_x, asps_vmc_ext_transform_log2_lifting_lod_inverseScale_y and asps_vmc_ext_transform_log2_lifting_lod_inverseScale_z.

[00277] afps_vmc_ext_transform_log2_lifting_update_weight_enable_flag indicates afps_vmc_ext_transform_log2_lifting_update_weight is signaled to be used instead of asps_vmc_ext_transform_log2_lifting_update_weight.

[00278] afps_vmc_ext_transform_log2_lifting_prediction_weight_enable_flag indicates afps_vmc_ext_transform_log2_lifting_prediction_weight is signaled to be used instead of asps_vmc_ext_transform_log2_lifting_prediction_weight.

[00279] afps_vmc_ext_transform_lifting_skip_update is used instead of asps_vmc_ext_transform_lifting_skip_update.

[00280] In some embodiments, all the parameters can be always signaled without enable flags.

[00281] In some embodiments, the Atlas Data SubBitStream 3206 Frame extension may include:

[00282] afps_vmc_ext_single_segment_in_frame_flag indicates there is only one segment for the atlas frame.

[00283] afps_vmc_ext_single_attribute_tile_in_frame_flag indicates there is only one tile for each attribute signalled in the video streams.

[00284] In some embodiments, afps_vmc_ext_single_attribute_tile_in_frame_flag is signalled only when afti_single_tile_in_atlas_frame_flag is not true; afps_vmc_ext_single_attribute_tile_in_frame_flag is inferred as true when afti_single_tile_in_atlas_frame_flag is true.

[00285] In some embodiments, the patch mapping method can be signalled in this afps vmc extension.

[00286] In some embodiments, a patch mapping method override flag is signalled, and only when the flag is true is the patch mapping method signalled. In that case, the patch mapping method is used instead of asps_vmc_ext_patch_mapping_method.

[00287] When afps_vmc_ext_overriden_flag in afps_vmc_extension() is true, the subdivision method, displacement coordinate system, transform index, transform parameters, and attribute transform parameters can be signalled again and the information may override the one signalled in asps_vmc_extension().
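A sketch of the override cascade described above is given below: a value signalled at a lower level (AFPS, patch group, patch) replaces the ASPS value when the corresponding enable flag is set. The dictionary keys are shortened, illustrative stand-ins for the syntax elements discussed in the text, not their actual names.

def resolve_subdivision(asps, afps=None, patchgroup=None, patch=None):
    # Start from the ASPS defaults, then apply overrides level by level.
    method = asps["subdivision_method"]
    iterations = asps["subdivision_iteration_count_minus1"] + 1
    for level in (afps, patchgroup, patch):
        if level and level.get("subdivision_enable_flag"):
            method = level["subdivision_method"]
            iterations = level["subdivision_iteration_count_minus1"] + 1
    return method, iterations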

[00288] In some embodiments, afps_vmc_ext_displacement_coordinate_system_enable_flag is not signalled but afps_vmc_ext_displacement_coordinate_system is always signalled.

[00289] Tile information for attributes signaled through the video substreams may be provided in afps_ext_vmc_attribute_tile_information() as follows:

Patch Data Unit

[00290] As with the V-PCC Patch data units, Mesh patch data units are signalled in the Atlas Data SubBitStream 3206. Mesh Intra patch data unit, Mesh Inter patch data unit, Mesh Merge patch data unit, and Mesh Skip patch data unit can be used. A discussion of these is provided below.

[00291] ath_type = SKIP_PLUS_TILE indicates all the patches are copied from its reference tile except a few patches. In atlas tile data unit, the patch indices which are not copied are explicitly signaled. Also atlas_tile_data_unit specifies if any new patches are added.

[00292] In some embodiments, SKIP_PLUS_TILE copies only patches for mesh information but not patches whose patch type is P_SKIP, P_MERGE, P_INTRA, P_INTER, P_RAW, P_EOM, I_INTRA, I_RAW, or I_EOM.

[00293] In some embodiments, SKIP_PLUS_TILE copies only patches for mesh information but not patches whose information is related to explicit geometry or attribute positions, tentatively named RAW_MESH.

[00294] atdu_patch_data_present_flag indicates there are patch data (patch_information_data()) signalled in the tile.

[00295] atdu_num_deleted_patchgroups indicates the number of patch groups not copied from the reference tile. Each patch has a groupIndex and patches with the same group index are considered as in the same group.

[00296] atdu_deleted_patchgroup_idx indicates patch group indices which are not copied. When atdu_num_deleted_patchgroups is not 0, this list is derived from the patch indices in the patchgroup.

[00297] atdu_num_deleted_patches indicates the number of patches not copied from the reference tile. When atdu_num_deleted_patchgroups is not 0, atdu_num_deleted_patches needs to be derived by counting the number of patches in each patch group.

[00298] atdu_deleted_patch_idx indicates patch indices which are not copied.

[00299] In some embodiments, these indices can be recalculated by reordering the patches. For example, when a MESH_RAW patch is in between two non-MESH_RAW mesh patches, the MESH_RAW patch can be removed for the index calculation.

[00300] Throughout the patches in the reference tile, if the patch index is one of those in atdu_deleted_patchgroup_idx, the patch is copied by skip_patch_data_unit().

[00301] If atdu_patch_data_present_flag is true, which indicates there is more patch information in the tile, patch_information_data() is signalled and the patch index for the coming patches starts at RefAtduTotalNumPatches[ tileID ]-atdu_num_deleted_patches when atdu_num_deleted_patches is not 0.

[00302] atdu_patchgroup_index indicates the patchgroup index of the following atdu group information. PATCHGROUP_END indicates there is no more atdu group information.

[00303] In some embodiments, atdu_patchgroup_index is not signaled but always set as PATCHGROUP_END.

[00304] atdu_patchgroup_information has common information which can be shared by all the patches which have the same patchgroup index.

[00305] atdu_patchgroup_overriden_flag indicates any additional information that is to be signaled to override the syntax elements in the corresponding AFPS and/or in the corresponding ASPS.

[00306] atdu_patchgroup_subdivision_enable_flag indicates atdu_patchgroup_subdivision_method and atdu_patchgroup_subdivision_iteration_count_minus1 are used instead of asps_vmc_ext_subdivision_method and asps_vmc_ext_subdivision_iteration_count_minus1 or afps_vmc_ext_subdivision_method and afps_vmc_ext_subdivision_iteration_count_minus1.

[00307] atdu_patchgroup_displacement_coordinate_system_enable_flag indicates atdu_patchgroup_displacement_coordinate_system is used instead of asps_vmc_ext_displacement_coordinate_system.

[00308] atdu_patchgroup_transform_index_enable_flag indicates atdu_patchgroup_transform_index is used instead of asps_vmc_ext_transform_index or afps_vmc_ext_transform_index.

[00309] atdu_patchgroup_transform_parameters_enable_flag indicates atdu_patchgroup_transform_parameters() is signaled to be used instead of asps_vmc_extension_transform_parameters() or afps_vmc_extension_transform_parameters().

[00310] atdu_patchgroup_num_attribute_enable_flag indicates atdu_patchgroup_num_attribute attributes use overridden parameters.

[00311] atdu_patchgroup_attribute_type indicates an attribute type.

[00312] atdu_patchgroup_attribute_transform_index_enable_flag indicates atdu_patchgroup_attribute_transform_index is used instead of asps_vmc_ext_attribute_transform_index or afps_vmc_ext_attribute_transform_index of the corresponding attribute type.

[00313] atdu_patchgroup_attribute_transform_parameters_enable_flag indicates atdu_transform_parameters() is signaled to be used instead of asps_vmc_extension_transform_parameters() or afps_vmc_extension_transform_parameters() for the corresponding attribute type.

[00314] In some embodiments, all the parameters can be always signaled without enable flags.

[00315] atdu_patchgroup_transform_lifting_quantization_parameters_enable_flag indicates atdu_patchgroup_transform_lifting_quantization_parameters_x, atdu_patchgroup_transform_lifting_quantization_parameters_y and atdu_patchgroup_transform_lifting_quantization_parameters_z are signaled to be used instead of the corresponding syntax elements signalled in the ASPS and/or in the AFPS.

[00316] atdu_patchgroup_transform_log2_lifting_lod_inverseScale_enable_flag indicates atdu_patchgroup_transform_log2_lifting_lod_inverseScale_x, atdu_patchgroup_transform_log2_lifting_lod_inverseScale_y and atdu_patchgroup_transform_log2_lifting_lod_inverseScale_z are signaled to be used instead of the corresponding syntax elements signalled in the ASPS and/or in the AFPS.

[00317] atdu_patchgroup_transform_log2_lifting_update_weight_enable_flag indicates atdu_patchgroup_transform_log2_lifting_update_weight is signaled to be used instead of the corresponding syntax element signalled in the ASPS and/or in the AFPS.

[00318] atdu_patchgroup_transform_log2_lifting_prediction_weight_enable_flag indicates atdu_patchgroup_transform_log2_lifting_prediction_weight is signaled to be used instead of the corresponding syntax element signalled in the ASPS and/or in the AFPS.

[00319] atdu_patchgroup_transform_lifting_skip_update is used instead of asps_vmc_ext_transform_lifting_skip_update or afps_vmc_ext_transform_lifting_skip_update. In some embodiments, all the parameters can be always signaled without enable flags.

[00320] Patch modes P_MESH, M_MESH, RAW_MESH and I_MESH indicate the modes of patches that contain information for v-mesh.

[00321] In one embodiment, an implementation of signalling mesh patch data units may include:

[00322] In some embodiments, an implementation of signalling mesh patch data units may include:

Mesh intra patch data unit

[00323] Mesh intra data unit has information to connect the geometry video, the texture video and basemeshes. These values are given to the mesh normalization process, the geometry normalization process and the attribute normalization process. In one embodiment, the mesh intra patch data unit may be implemented as follows:

[00324] mdu_patchgroup_index indicates the group index of the patch.

[00325] mdu_patch_parameters_enable_flag indicates whether certain parameters are copied from atdu_patchgroup_information or not. In some embodiments, mdu_patch_parameters_enable_flag is not signaled but always set as true.

[00326] mdu_geometry_2d_pos_x and mdu_geometry_2d_pos_y indicate the left top corner of the corresponding area in the geometry video frame.

[00327] mdu_geometry_2d_size_x_minus1 and mdu_geometry_2d_size_y_minus1 indicate the size of the corresponding area in the geometry video frame.

[00328] mdu_attributes_2d_pos_x and mdu_attributes_2d_pos_y indicate the left top corner of the corresponding area in the attribute video frame.

[00329] mdu_attributes_2d_size_x_minus1 and mdu_attributes_2d_size_y_minus1 indicate the size of the corresponding area in the attribute video frame. In some embodiments, mdu_attributes_2d_pos_x, mdu_attributes_2d_pos_y, mdu_attributes_2d_size_x_minus1 and mdu_attributes_2d_size_y_minus1 can be signaled only when vmc_ext_direct_attribute_projection_enabled is true.

[00330] mdu_3d_offset_u, mdu_3d_offset_v and mdu_3d_offset_d indicate the offset of the corresponding 3D space. In some embodiments, these three values can be signaled in a SEI message.

[00331] mdu_3d_range_d specifies the nominal maximum value of the shift expected to be present in the reconstructed bit depth patch geometry samples. In some embodiments, mdu_3d_range_d can be signaled in a SEI message.

[00332] mdu_vertex_count_minus1 indicates the number of vertices corresponding to this patch in the normalized meshes.

[00333] mdu_triangle_count_minus1 indicates the number of triangles corresponding to this patch in the normalized meshes. In some embodiments, mdu_vertex_count_minus1 and/or mdu_triangle_count_minus1 can be the numbers in the base meshes. In some embodiments, mdu_vertex_count_minus1 and/or mdu_triangle_count_minus1 can be signaled in a SEI message. In some embodiments, mdu_vertex_count_minus1 and mdu_triangle_count_minus1 can be derived from mdu_vertex_index_list with or without mdu_subdivision_iteration_count.

[00334] mdu_head_vertex_index indicates the index of the first vertex corresponding to this patch in the normalized mesh. It is the smallest vertex index among the vertices corresponding to this patch.

[00335] mdu_num_sequential_vertex_index indicates the number of vertices whose indices are sequential from the mdu_head_vertex_index.

[00336] mdu_vertex_index_diff indicates the difference between vertex indices in the mdu_vertex_index_list.

[00337] mdu_vertex_index_list lists the vertex indices corresponding to the patch. It can be derived from mdu_head_vertex_index, mdu_num_sequential_vertex_index and mdu_vertex_index_diff. For example, the list can be set as follows:
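The normative derivation table is omitted from this text; the sketch below is a hypothetical reconstruction of mdu_vertex_index_list from the three syntax elements just described, assuming the sequential run starts at (and includes) the head vertex and the remaining entries are coded as differences from the previous entry.

def derive_vertex_index_list(mdu_head_vertex_index,
                             mdu_num_sequential_vertex_index,
                             mdu_vertex_index_diffs):
    # First, the run of sequential indices starting at the head vertex.
    indices = [mdu_head_vertex_index + k
               for k in range(mdu_num_sequential_vertex_index)]
    # Then, each remaining index is coded as a difference from the previous one.
    for diff in mdu_vertex_index_diffs:
        indices.append(indices[-1] + diff)
    return indices

# e.g. derive_vertex_index_list(10, 3, [5, 2]) -> [10, 11, 12, 17, 19]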

[00338] In some embodiments, mdu_head_vertex_index can be the index of the first vertex corresponding to this patch in the base mesh, and mdu_vertex_index_list is a list of vertex indices corresponding to the patch in the base mesh. In this case mdu_vertex_index_list can be increased up to the size of (mdu_vertex_count_minus1 + 1) during the mesh normalization process when mdu_vertex_count_minus1 indicates the number of corresponding vertices in the normalized mesh. In some embodiments, the total size of mdu_vertex_index_list can be derived during the mesh normalization process.

[00339] In some embodiments, a patch whose index is p corresponds with the p-th connected component of the base mesh. mdu_vertex_index_list is derived from the connected component without signaling mdu_head_vertex_index, mdu_num_sequential_vertex_index or mdu_vertex_index_diff.

[00340] In some embodiments, a patch whose index is p corresponds with the mdu_cc_index-th connected component of the base mesh. mdu_cc_index is signaled and mdu_vertex_index_list is derived from the connected component without signaling mdu_head_vertex_index, mdu_num_sequential_vertex_index or mdu_vertex_index_diff.

[00341] In some embodiments, the index (p) used to find a corresponding connected component of the base mesh can be derived by not including non-mesh related patches.

[00342] In some embodiments, all the parameters used to find connected components of a mesh can be delivered in the v-mesh extension in the atlas sequence parameter set.

[00343] In some embodiments, a patch whose index is p corresponds with the p-th connected component of the normalized mesh. mdu_vertex_index_list is derived from the connected component without signaling mdu_head_vertex_index, mdu_num_sequential_vertex_index or mdu_vertex_index_diff.

[00344] mdu_projection_id indicates the values of the projection mode and of the index of the normal to the projection plane for the patch, similar to Reference [E1].

[00345] mdu_orientation_index indicates the patch orientation index, similar to Reference [E1].

[00346] mdu_lod_enabled_flag indicates that the LOD parameters are present for the current patch p.

[00347] mdu_lod_scale_x_minus1 and mdu_lod_scale_y_idc indicate scaling factors for the x and y coordinates, similar to Reference [E1].

[00348] In some embodiments, mdu_projection_id, mdu_orientation_index, mdu_lod_enabled_flag, mdu_lod_scale_x_minus1 and mdu_lod_scale_y_idc can be signalled in atdu_patchgroup_information and overridden in this mesh intra data unit.

[00349] mdu_subdivision_enable_flag indicates mdu_subdivision_method and mdu_subdivision_iteration_count_minus1 are used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00350] mdu_displacement_coordinate_system_enable_flag indicates mdu_displacement_coordinate_system is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00351] mdu_transform_index_enable_flag indicates mdu_transform_index is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00352] mdu_transform_parameters_enable_flag indicates mdu_transform_parameters() is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00353] mdu_num_attribute_enable_flag indicates mdu_num_attribute attributes use overridden parameters.

[00354] mdu_attribute_type indicates an attribute type.

[00355] mdu_attribute_transform_index_enable_flag indicates mdu_attribute_transform_index is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00356] mdu_attribute_transform_parameters_enable_flag indicates mdu_transform_parameters() is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00357] In some embodiments, all the parameters can be always signaled without enable flags. Below is an embodiment of an implementation of mdu_transform_parameters().

[00358] mdu_transform_lifting_quantization_parameters_enable_flag indicates mdu_transform_lifting_quantization_parameters_x, mdu_transform_lifting_quantization_parameters_y and mdu_transform_lifting_quantization_parameters_z are signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00359] mdu_transform_log2_lifting_lod_inverseScale_enable_flag indicates mdu_transform_log2_lifting_lod_inverseScale_x, mdu_transform_log2_lifting_lod_inverseScale_y and mdu_transform_log2_lifting_lod_inverseScale_z are signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00360] mdu_transform_log2_lifting_update_weight_enable_flag indicates mdu_transform_log2_lifting_update_weight is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00361] mdu_transform_log2_lifting_prediction_weight_enable_flag indicates mdu_transform_log2_lifting_prediction_weight is signaled to be used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00362] mdu_transform_lifting_skip_update is used instead of the corresponding syntax elements signaled in the ASPS, in the AFPS and/or atdu_patchgroup_information.

[00363] In some embodiments, all the parameters can be always signaled without enable flags.

[00364] An embodiment of an implementation of the mesh intra patch data unit is provided below.

[00365] In the current embodiment, mdu_segment_id indicates the segment ID associated with the current patch.

[00366] When asps_vmc_ext_segment_mapping_method is equal to 0, the associated segment (a set of connected vertices) is the union of the vertices whose submesh Id is equal to mdu_segment_id and the associated information (connectivities and/or attributes). In this case, vps_ext_mesh_data_submesh_id_attribute_present_flag shall be true and one of the vps_ext_mesh_attribute_type values is submeshId.

[00367] When asps_vmc_ext_segment_mapping_method is equal to 1, the associated segment is derived from the patch information. The segment is the union of the vertices mapped to the patches in one tile.

[00368] Otherwise, the associated segment is the mdu_segment_id-th segment determined by asps_vmc_ext_segment_mapping_method.

[00369] In some embodiments, one output of the base mesh substream decoder is mapped with one mdu_segment_id when the output of the base mesh substream decoder is a sequence of submesh frames.

[00370] mdu_vertex_count_minus1 and mdu_triangle_count_minus1 indicate the number of vertices and triangles associated with the current patch.

[00371] When asps_vmc_ext_patch_mapping_method is equal to 0, all the triangles in the segment indicated by mdu_segment_id are associated with the current patch. In this case, there is only one patch associated with the segment. asps_vmc_ext_patch_mapping_method cannot be equal to 0 when asps_vmc_ext_segment_mapping_method is equal to 1.

[00372] In some embodiments, all the triangles in the one or more segments indicated in the atlas tile header are associated with the current patch. In this case, a tile has only one patch.

[00373] When asps_vmc_ext_patch_mapping_method is equal to 1, the syntax elements mdu_num_subparts and mdu_subpart_id are signalled, and the associated triangle faces are the union of the triangle faces whose facegroup id is equal to mdu_subpart_id.

[00374] In some cases, the associated triangle faces are the union of the triangle faces in mdu_subpart_id-th subpart determined by asps_vmc_ext_patch_mapping_method.

[00375] When mdu_patch_parameters_enable_flag is true, the subdivision method, displacement coordinate system, transform index, transform parameters, and attribute transform parameters can be signalled again and the information overrides the corresponding information signalled in asps_vmc_extension().

[00376] In some embodiments, mdu_displacement_coordinate_system_enable_flag is not signalled but mdu_displacement_coordinate_system is always signalled.

[00377] In some embodiments, a segment id can be signalled in the atlas tile header and the segment id for all the patches in the atlas tile is the same as that one. An example of this implementation is provided below.

[00378] In some embodiments, a tile can be associated with more than one mesh segment. In this case, each patch belonging to this tile has its own segment id.

[00379] In some embodiments, in the mesh patch data unit, mdu_segment_id can indicate the order of appearance instead of the segment id itself.

[00380] In some embodiments, patch data units may not contain 2d_pos_x, 2d_pos_y, 2d_size_x_minus1 or 2d_size_y_minus1. The 2D position and the 2D size are signalled once in the atlas tile header. In this case, all the patches in the atlas tile have the same segment id.

Inter Patch Data Unit

[00381] Turning now to a discussion of the inter patch data unit, an embodiment of an implementation of the Mesh Inter Patch Data Unit is provided below.

[00382] Patch mode P_MESH indicates a patch is predicted from the reference tile. The syntax elements not signaled are copied from the reference patch.

[00383] midu_ref_index indicates the reference index for the reference tile.

[00384] midu_patch_index indicates the reference index for the reference patch.

[00385] In some embodiments, midu_patch_index indicates a recalculated patch index which is derived without non-mesh related patches and/or RAW_MESH patches.

[00386] midu_geometry_2d_pos_x and midu_geometry_2d_pos_y indicate the left top corner of the corresponding area in the geometry video frame.

[00387] midu_geometry_2d_delta_size_x and midu_geometry_2d_delta_size_y indicate the size difference between the corresponding area and the area corresponding to the reference patch in the geometry video frame.

[00388] midu_attributes_2d_pos_x and midu_attributes_2d_pos_y indicate the left top corner of the corresponding area in the attribute video frame.

[00389] midu_attributes_2d_size_x_minus1 and midu_attributes_2d_size_y_minus1 indicate the size difference between the corresponding area and the area corresponding to the reference patch in the attribute video frame.

[00390] In some embodiments, midu_attributes_2d_pos_x, midu_attributes_2d_pos_y, midu_attributes_2d_delta_size_x and midu_attributes_2d_delta_size_y can be signaled only when vmc_ext_direct_attribute_projection_enabled is true.

[00391] midu_3d_offset_u, midu_3d_offset_v and midu_3d_offset_d indicate the offset of the corresponding 3D space.

[00392] In some embodiments, these three values can be signaled in a SEI message.

[00393] midu_3d_range_d specifies the nominal maximum value of the shift expected to be present in the reconstructed bit depth patch geometry samples.

[00394] In some embodiments, midu_3d_range_d can be signaled in a SEI message.

[00395] Provided below is another embodiment of an implementation of the Mesh Inter Data Unit.

Mesh Merge Patch Data Unit

[00396] Patch mode M_MESH indicates a patch is copied from the reference frame but some of the information is overwritten. An example of the Mesh Merge Data Unit is provided below.

Mesh Skip

[00397] The mesh skip patch mode indicates that the data unit should be skipped. One embodiment of an implementation of this mode is provided below.

Raw Mesh Data Unit

[00398] The raw mesh patch mode indicates the data unit contains explicit information about the positions and the attributes. An embodiment of the Mesh Skip/ Mesh Raw Patch Data Unit implementation is provided below.

[00399] rmdu_patch_in_auxiliary_video_flag indicates whether the geometry and attribute data associated with the patch are encoded in an auxiliary video sub-bitstream.

[00400] rmdu_geometry_2d_pos_x and rmdu_geometry_2d_pos_y indicate the left top corner of the corresponding area in the geometry video frame.

[00401] rmdu_geometry_2d_size_x_minus1 and rmdu_geometry_2d_size_y_minus1 indicate the size of the corresponding area in the geometry video frame.

[00402] rmdu_attributes_2d_pos_x and rmdu_attributes_2d_pos_y indicate the left top corner of the corresponding area in the attribute video frame.

[00403] rmdu_attributes_2d_size_x_minus1 and rmdu_attributes_2d_size_y_minus1 indicate the size of the corresponding area in the attribute video frame.

[00404] In some embodiments, rmdu_attributes_2d_pos_x, rmdu_attributes_2d_pos_y, rmdu_attributes_2d_size_x_minus1 and rmdu_attributes_2d_size_y_minus1 can be signaled only when vmc_ext_direct_attribute_projection_enabled is true.

[00405] rmdu_3d_offset_u, rmdu_3d_offset_v and rmdu_3d_offset_d indicate the offset of the corresponding 3D space.

[00406] In some embodiments, these three values can be signaled in a SEI message.

[00407] rmdu_head_vertex_index indicates the index of the first vertex corresponding to this patch. In some embodiments, this value is not signaled but the outputs of this patch are appended to the end of the position list of the corresponding mesh.

[00408] rmdu_vertex_count_minus1 indicates the number of vertices corresponding to this patch.

[00409] rmdu_triangle_count_minus1 indicates the number of triangles corresponding to this patch.

[00410] In some embodiments, mesh_raw_patch_data_unit can be separated into two patch data unit mesh_raw_geometry_patch_data_unit and mesh_raw_attribute_patch_data_unit.

[00411] Another embodiment of an implementation of the raw mesh patch mode using “mrdu” in lieu of “rmdu” is provided below.

Normalization

[00412] Having discussed the de-multiplexed subbitstreams and bitstream syntax elements of the encoded bitstreams representative of 3D textured static and/or dynamic meshes, the discussion now turns to the decoder normalization processes. As mentioned above, decoder normalization processes include a mesh normalization, a geometry normalization, and an attribute normalization.

[00413] Starting first with the mesh normalization, as mentioned above, the base mesh decoded from the Mesh subbitstream 3203 de-multiplexed from the encoded bitstream may be normalized via a mesh normalization process 3210. In the mesh normalization process 3210, the outputs of the mesh subbitstreams 3203 are processed to be added to the outputs of the geometry normalization process 3212. The inputs of this process are the output meshes of the mesh subbitstreams 3203, vmc_ext_subdivision_method and vmc_ext_subdivision_iteration_count from the vmesh extension in the atlas sequence parameters, and patch information. The outputs are meshes, and the total number of vertices in these meshes is the same as the total number of displacements generated from the geometry normalization process 3212 unless the v-mesh codec specifies otherwise.

[00414] When vmc_ext_subdivision_method is 0 or (mdu_patch_subdivisionmethod_enable_flag is true and mdu_subdivision_method_index is 0), no additional process is applied to the corresponding area of the base mesh. The corresponding area of the normalized mesh is the same as the area of the base mesh.

[00415] When vmc_ext_subdivision_method is not 0 or (mdu_patch_subdivisionmethod_enable_flag is true and mdu_subdivision_method_index is not 0), the corresponding area of the mesh is populated with vertices by the method indicated by vmc_ext_subdivision_method or mdu_subdivision_method_index, in a manner as described above.

[00416] Figures 33 and 34 show an example of an input and its output of this normalization process. Figure 33 illustrates an input mesh 3300 that is provided as input to the mesh normalization process 3210.

[00417] Figure 34 illustrates an output 3400 of the mesh normalization process 3210. As depicted in Figure 34, the number of vertices is increased via the mesh normalization process. This results in a more refined 3D mesh, as illustrated in output 3400.

[00418] Based on patch information associated with areas in the mesh, different subdivision methods may be applied. For example, Figure 35 illustrates an example 3500 where the left part 3501, right part 3502, and head part 3503 each correspond to different patches (e.g., patch0, patch1, and patch2, respectively). Each part can be subdivided differently (e.g., as the corresponding patch information indicates), resulting in a different number of vertices and, thus, refinement. For example, in example 3500, which only changes subdivision iteration counts for each of the three parts (e.g., left part 3501, right part 3502, and head part 3503), different numbers of vertices and levels of refinement for different portions of the 3D mesh are provided. As illustrated, patch2 may dictate that fewer subdivision iterations are desired than with patch0 and/or patch1, resulting in a less-refined head part 3503 with fewer vertices. Further, patch1 may indicate that more detailed refinement is warranted, resulting in a relatively higher number of subdivision iterations. This results in more vertices in the right part 3502, resulting in relatively more refinement in this portion of the 3D mesh.

[00419] Having discussed the inputs and outputs of the mesh normalization process 3210, the discussion turns to an example of the subdivision process described herein. Figures 36 and 37 provide a simplified example of the current process. Starting first with Figure 36, assume v0, v1, and v3 are connected (cc0), v0, v1, and v2 are connected (cc1), and v2, v1, and v4 are connected (cc2). The connected components correspond to a patch, patch[0], patch[1] and patch[2], respectively.

[00420] In this example, vmc_ext_subdivision_method = 1 and vmc_ext_subdivision_ iteration count = 2. Further:

1) For patch[0], mdu_patch_subdivisionmethod_enable_flag is true and mdu subdivision method index is 1 and mdu subdivision iteration count is 1. Then, as illustrated in patch [0] of Figure 37, the area corresponding to the patch, the triangle constructed by vO, vl and v3, is populated with vertices by a method whose index is 1 and the iteration count will be 1. mdu_vertex_index_list is set as { vO, vl, v3, v5, v6, v7 }. The order of vertex indices can be determined by subdivision method and the order of the corresponding displacement is aligned with this order.

2) For patch[1], mdu_patch_subdivisionmethod_enable_flag can be false. Therefore, the subdivision method is set as vmc_ext_subdivision_method and mdu_subdivision_iteration_count is set as vmc_ext_subdivision_iteration_count. As illustrated in Figure 37, the area corresponding to the patch, the triangle constructed by v0, v1, and v2, is populated with vertices accordingly. mdu_vertex_index_list is set as { v0, v1, v2, v5, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17 }. The order of vertex indices can be determined by the subdivision method, and the order of the corresponding displacements is aligned with this order.

3) And for patch[2], mdu_patch_subdivisionmethod_enable_flag is true and mdu_subdivision_method_index is 0. Therefore, as illustrated in Figure 37, the area corresponding to the patch, the triangle constructed by v2, v1, and v4, remains the same as the input. mdu_vertex_index_list is set as { v2, v1, v4 }. The order of vertex indices can be determined by the subdivision method, and the order of the corresponding displacements is aligned with this order. (A simplified sketch of this per-patch subdivision and vertex ordering appears after this list.)
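
To make the vertex-ordering behavior concrete, the following is a minimal Python sketch of a mid-point subdivision of a single triangular patch. It is illustrative only: the function name subdivide_patch, the local index allocation via next_index, and the face-splitting order are assumptions, not the normative ordering defined by the subdivision method index. With one iteration on the triangle (v0, v1, v3) and next_index starting at 5, the sketch yields a six-entry list analogous to { v0, v1, v3, v5, v6, v7 }; with two iterations on (v0, v1, v2) it yields fifteen entries, analogous to patch[1] above.

def subdivide_patch(triangle, iteration_count, next_index):
    # triangle: tuple of three vertex indices for this patch's face.
    # Mid-point subdivision: each iteration splits every face into four by
    # inserting one new vertex per unique edge. next_index supplies the index
    # assigned to each newly created vertex.
    faces = [tuple(triangle)]
    vertex_index_list = list(triangle)      # original vertices come first
    for _ in range(iteration_count):
        midpoints = {}                      # edge -> new vertex index
        new_faces = []
        for (a, b, c) in faces:
            mids = []
            for edge in ((a, b), (b, c), (c, a)):
                key = tuple(sorted(edge))
                if key not in midpoints:
                    midpoints[key] = next_index
                    vertex_index_list.append(next_index)
                    next_index += 1
                mids.append(midpoints[key])
            ab, bc, ca = mids
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    return faces, vertex_index_list, next_index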

[00421] Thus, the mesh normalization process 3210 may result in different numbers of subdivisions and therefore a dynamically adjustable number of vertices for different patches within a mesh. In this manner, the mesh normalization process may provide significant flexibility in determining a level of refinement associated with each patch (or subsets of patches) of a 3D mesh.

[00422] Having discussed the mesh normalization process 3210, the discussion now turns to the geometry normalization process 3212. As mentioned above, the geometry images decoded from the Geometry subbitstream 3204, de-multiplexed from the encoded bitstream, may be normalized via a geometry normalization process 3212. In the geometry normalization process 3212, the outputs of the geometry video subbitstreams 3204 are processed so that they can be added to the outputs of the mesh normalization process 3210. The inputs to this process 3212 are the output geometry images of the geometry video subbitstreams 3204, the vmesh extension in the atlas data subbitstreams 3206, and patch information (e.g., from the parameter set 3202). The pixel values in the corresponding area in the geometry image can be converted by the methods described in Section 1, above. The converted values, namely displacement values, are added to corresponding vertices as indicated in the patches. If mdu_transform_index is NONE, the pixel value of the n-th pixel in the area is added to the position of the vertex with the n-th index in the mdu_vertex_index_list without any conversion. Otherwise, the displacement generated from the n-th pixel value is added to the position of the vertex whose index is the n-th index in the mdu_vertex_index_list.
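
The following Python sketch illustrates this per-patch displacement application. The helper convert_pixel_to_displacement stands in for the conversion described in Section 1 and, together with the container names, is a hypothetical illustration rather than normative syntax.

def apply_geometry_normalization(vertex_positions, patch, geometry_pixels,
                                 convert_pixel_to_displacement):
    # geometry_pixels: pixel values of the area of the geometry image that
    # corresponds to this patch, in the n-th-pixel order described above.
    for n, vertex_index in enumerate(patch.mdu_vertex_index_list):
        pixel = geometry_pixels[n]
        if patch.mdu_transform_index == "NONE":
            # No conversion: the raw pixel value is added directly.
            displacement = pixel
        else:
            # Otherwise the pixel is first converted to a displacement
            # (e.g., per the methods described in Section 1, above).
            displacement = convert_pixel_to_displacement(pixel, patch)
        vertex_positions[vertex_index] = tuple(
            p + d for p, d in zip(vertex_positions[vertex_index], displacement))
    return vertex_positions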

[00423] Figure 38 illustrates a luma plane 3800 of a geometry image and Figure 39 illustrates an example of the geometry image. In this example, the size of the image is 256x48. The figure only shows one of the 3 planes. The first few pixel triplets of the plane are depicted herein. Even in this case, the attribute image can be 2048x2048.

[00424] The pixel values in the geometry image corresponding to each patch (e.g., patch 0 in Figure 38) are converted to displacements and added to vertices in the corresponding area in the base mesh (in this example, patch 0 in Figure 35).

[00425] Assuming, for patch[0], (mdu_geometry_2d_pos_x, mdu_geometry_2d_pos_y) is (X0, Y0) and (mdu_geometry_2d_size_x_minus1, mdu_geometry_2d_size_y_minus1) is (sizeX0-1, sizeY0-1):

1. For patch[0], I(X0,Y0), the pixel value at (X0, Y0), is converted to D(X0,Y0) as described in Section 1, above. D(X0,Y0) is added to the position of vertex[mdu_vertex_index_list[0]], i.e., vertex[v0]. And I(X0+1,Y0), the pixel value at (X0+1, Y0), is converted to D(X0+1,Y0). D(X0+1,Y0) is added to the position of vertex[v1].

2. I(x,y) indicates a 3-tuple value at (x,y). It can indicate 3 numbers from 3 different planes. In some embodiments, the 3 values of the 3-tuple can be spread over a single plane. For example, I(x,y)[0] = pixel_value_y(x,y), I(x,y)[1] = pixel_value_y(x+m, y+n), I(x,y)[2] = pixel_value_y(x+k, y+l), where pixel_value_y indicates a pixel value of a certain plane such as the luma plane. The generalized procedure for each patch can be as follows:
i. pos(i) indicates the position of the i-th vertex in the output mesh of the v-mesh bitstream.
ii. mdu_vertex_count_minus1 + 1 indicates the total number of vertices corresponding to the current patch in the normalized mesh.
iii. geometryVideoBlockSize indicates 1 << asps_log2_patch_packing_block_size.
iv. patchWidthInBlocks is set as (mdu_geometry_2d_size_x_minus1 + 1) / geometryVideoBlockSize. The processes of Convert(I(xv,yv)) and apply(D, pos) are described in Section 1, above.

5. If a patch is RAW_MESH, the pixel values in the area corresponding to the patch in the geometry image are directly interpreted as positions of the mesh. The process can be described as follows: a. patchWidthInBlocks is set as (rmdu_geometry_2d_size_x_minus1 + 1) / geometryVideoBlockSize; b. and other values are set as described above.

6. In some embodiments, the process can be described as follows:

[00426] If a patch is RAW_MESH, the pixel values in the area corresponding to the patch in the attribute image are directly interpreted as positions of the mesh. The process can be described as follows: patchWidthInBlocks is set as (rmdu_attribute_2d_size_x_minus1 + 1) / geometryVideoBlockSize, and other values are set as described above.

1) In other embodiments, the process can be described as follows:

Decoding

[00427] The positions of the mesh are reconstructed by adding the i-th displacement in the area corresponding to the current patch data unit in the displacement video to the i-th vertex in the subpart associated with the current patch data unit in the resampled base mesh.

[00428] The location of displacement(i) is counted from geometry_2d_pos_x and geometry_2d_pos_y (the top-left corner of the corresponding area) of the current patch.

[00429] The list of vertices is created from the triangle faces (with the same facegroup id) associated with the current patch. The non-overlapping vertex indices are saved into the list based on the order of their appearance.

[00430] To illustrate this, Figure 40 provides an example of vertex indices in a subpart associated with a patch. Looking at the example of Figure 40, if a patch (mesh_intra_patch_data_unit[0]) has subpart_id 0, then the triangle faces with fi (facegroupId) 0 are associated with this patch, which are f 1/2/4, f 2/4/5, and f 0/1/2. As illustrated in the example of Figure 40, in some embodiments, this correlation between triangle faces and facegroupId's may be indicated via a correlated ordering between a listing of the triangle faces and a list of associated facegroupId's. For example, in Figure 40, the ordered list of triangle faces (e.g., f 1/2/4, f 2/4/5, f 2/5/3, and f 0/1/2) and the corresponding ordered list of facegroupId's (e.g., fi 0, fi 0, fi 1, and fi 0) indicate that each of f 1/2/4, f 2/4/5, and f 0/1/2 is associated with facegroupId 0 and that triangle face f 2/5/3 is associated with facegroupId 1.

[00431] Then, the associated vertices are ordered as 1, 2, 4, 5, 0. Therefore, the first displacement is added to vertex1 (x1, y1, z1) and the last displacement is added to vertex0 (x0, y0, z0). For patch[1], the associated vertices are ordered as 2, 5, 3.

[00432] In some embodiments, the non-overlapping vertex indices are saved into the list in order of their size. In the same example above, the associated vertices are ordered as 0, 1, 2, 4, 5. The first displacement is added to vertex0 (x0, y0, z0) and the last displacement is added to vertex5 (x5, y5, z5). For patch[1], the associated vertices are ordered as 2, 5, 3.
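
As an illustration of these two orderings, the following Python sketch builds the per-patch vertex index list from the triangle faces sharing the patch's facegroup id. The function name and the by_size flag are hypothetical labels introduced here for the two embodiments described above.

def build_patch_vertex_list(faces, facegroup_ids, patch_facegroup_id, by_size=False):
    # faces: ordered list of triangle faces, e.g. [(1,2,4), (2,4,5), (2,5,3), (0,1,2)]
    # facegroup_ids: facegroup id associated with each face, e.g. [0, 0, 1, 0]
    vertex_list = []
    for face, fid in zip(faces, facegroup_ids):
        if fid != patch_facegroup_id:
            continue
        for v in face:
            if v not in vertex_list:        # keep only the first appearance
                vertex_list.append(v)
    if by_size:
        vertex_list.sort()                  # alternative embodiment: order by index value
    return vertex_list

With the Figure 40 example, build_patch_vertex_list(faces, ids, 0) gives [1, 2, 4, 5, 0] (order of appearance), while build_patch_vertex_list(faces, ids, 0, by_size=True) gives [0, 1, 2, 4, 5].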

[00433] When the vertices shared by two patches (v2 and v5 in the example) result in different geometry positions, they can be merged by the method indicated by asps_vmc_ext_multilocated_vertex_merge_method.

[00434] In some embodiments, the displacements for the vertices shared by multiple patches are signalled only once in the geometry image. For example, the displacements corresponding to vertex2 and vertex5 in patch[1] do not exist in the displacement image; therefore, nothing is added to vertex2 and vertex5. The number of displacements in the area corresponding to patch0 is 5 and the number for patch1 is 1.

[00435] In some embodiments, the displacements for the vertices shared by multiple patches are ignored after their first appearance. For example, the displacements corresponding to vertex2 and vertex5 in patch[1] still exist in the displacement image, but they are not added to the vertices. The number of displacements in the area corresponding to patch0 is 5 and the number for patch1 is 3.
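
The difference between these two embodiments can be sketched as follows in Python; the mode strings "signal_once" and "ignore_repeat" are hypothetical labels introduced here for illustration.

def apply_patch_displacements(pos, vertex_list, displacements, already_displaced,
                              shared_mode):
    # shared_mode: "signal_once" -> shared vertices have no displacement in this
    # patch's area; "ignore_repeat" -> their displacements are present but skipped.
    d = 0
    for v in vertex_list:
        if v in already_displaced:
            if shared_mode == "ignore_repeat":
                d += 1                      # sample exists but is not applied
            continue                        # "signal_once": no sample to consume
        pos[v] = tuple(p + delta for p, delta in zip(pos[v], displacements[d]))
        already_displaced.add(v)
        d += 1
    return d                                # displacements consumed for this patch

Applied to the example above, both modes add 5 displacements for patch0, while patch1 consumes 1 displacement in the "signal_once" mode and 3 in the "ignore_repeat" mode.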

[00436] In some embodiments, the displacement that is added to the i-th vertex in the segment indicated by the current patch data unit is determined by the geometry positions in the tile associated with the current patch data unit. The location of displacement(i) is counted from ath_geometry_2d_pos_x and ath_geometry_2d_pos_y in the atlas tile header.

[00437] The list of vertices is created from the triangle faces (with the same facegroup id) associated with the current patch. The non-overlapping vertex indices are saved into the list based on the order of their appearance. In the example of Figure 40, the associated vertices are ordered as 1, 2, 4, 5, 0, 3. Then, the first displacement (at ath_geometry_2d_pos_x, ath_geometry_2d_pos_y) is added to vertex1 (x1, y1, z1) and the last displacement is added to vertex3 (x3, y3, z3). In some embodiments, the non-overlapping vertex indices are saved into the list in order of their size.

[00438] References for the preceding section relating to V-Mesh Bitstream Structure Including Syntax Elements and Decoding Process with Reconstruction, each of which is incorporated by reference in its entirety:

[E1] ISO/IEC 23090-5, Information technology — Coded Representation of Immersive Media — Part 5: Visual Volumetric Video-based Coding (V3C) and Video-based Point Cloud Compression (V-PCC).

[E2] K. Mammou, J. Kim, A. Tourapis, D. Podborski, K. Kolarov, "[V-CG] Apple's Dynamic Mesh Coding CfP Response," ISO/IEC JTC1/SC29/WG7/m59281, Online, April 2022.

[E3] A. Tourapis, J. Kim, D. Podborski, K. Mammou, "Base mesh data substream format for VDMC," ISO/IEC JTC1/SC29/WG7/m60362, Online, July 2022.

Section 6: Adaptive Tessellation for Efficient Dynamic Mesh Encoding, Decoding, Processing, and Rendering

[00439] As described above, a static/dynamic mesh can be represented as a set of 3D meshes M(0), M(1), M(2), ..., M(n). Each mesh M(i) can be defined by a connectivity C(i), a geometry G(i), texture coordinates T(i), and a texture connectivity CT(i). Each mesh M(i) can be associated with one or more 2D images A(i, 0), A(i, 1), ..., A(i, D-1), also called attribute maps, describing a set of attributes associated with the mesh surface. An example of an attribute would be texture information (see Figures 2-3). A set of vertex attributes could also be associated with the vertices of the mesh, such as colors, normals, transparency, etc.

[00440] While geometry and attribute information could again be mapped to 2D images and efficiently compressed by using video encoding technologies, connectivity information cannot be encoded efficiently by using a similar scheme. Dedicated coding solutions optimized for such information are needed. In the next sections we present an efficient framework for static/dynamic mesh compression.

[00441] Figures 4 and 5 show a high-level block diagram of the proposed encoding and decoding processes, respectively. Note that the feedback loop during the encoding process makes it possible for the encoder to guide the pre-processing step and change its parameters to achieve the best possible compromise according to various criteria, including but not limited to:

• Rate-distortion,

• Encode/decode complexity,

• Random access,

• Reconstruction complexity,

• Terminal capabilities,

• Encode/decode power consumption, and/or

• Network bandwidth and latency.

[00442] On the decoder side, an application consuming the content could provide feedback to guide both the decoding and the post-processing blocks. As but one example, based on the position of the dynamic mesh with respect to a camera frustum, the decoder and the post-processing block may adaptively adjust the resolution/accuracy of the produced mesh and/or its associated attribute maps.

Post-Processing

[00443] Additional post-processing modules could also be applied to improve the visual/objective quality of the decoded meshes and attribute maps and/or adapt the resolution/quality of the decoded meshes and attribute maps to the viewing point or terminal capabilities. One example of such post-processing includes adaptive tessellation, as described in References [FA], [FB], [FC], and [FD].

[00444] The dynamic mesh compression scheme described in Section 1 teaches, among other things, a subdivision structure to achieve high rate-distortion compression performance. While optimization and control of compression performance can help enable a wide variety of applications (e.g., augmented reality/virtual reality (AR/VR), 3D mapping, autonomous driving, etc.), other functionalities, such as scalable decoding and rendering, can also be useful to allow for a wide deployment through various networks (e.g., with different bandwidth and latency properties and constraints) as well as on various platforms (e.g., with different processing/rendering capabilities and power constraints). Described below is an adaptive tessellation scheme that can adapt the resolution of a dynamic mesh (e.g., number of vertices/faces, resolution of the attribute maps, etc.) to network conditions and/or the capabilities and constraints of a consuming device/platform.

Adaptive Tessellation As Post-Processing

[00445] Figure 5, discussed above, shows the interactions between: (1) the adaptive tessellation post-processor module 503, (2) the decoder 502, and (3) application modules 501. More specifically, the adaptive tessellation module 503 can take as inputs:

• Metadata metadata(i) describing various information about the mesh structure. For example, this could include patch/patch group information, subdivision scheme, subdivision iteration count, bounding box, tiles, etc.;

• A decoded base mesh m'(i), which may (but need not) have per vertex/face/edge attributes describing saliency and importance/priority information;

• A set of displacements d'(i) associated with the subdivided mesh vertices; and

• Optionally, one or multiple attribute maps A'(i) describing information associated with the mesh surface.

These inputs may be computed or otherwise determined as described above.

[00446] The application module 501 can provide control parameters to guide both the decoding module 502 and the adaptive tessellation module 503. Such control parameters could include:

• Current and/or future (potentially predicted) 3D camera position and viewing frustum;

• Available processing and rendering capabilities, such as capabilities of the Application and/or the device on which it runs;

• Power consumption constraints of the Application and/or the device on which it runs; and

• Region of Interest (ROI) information that identifies one or more portions of the mesh as regions of interest where more detail may be desired as compared to other regions of the mesh.

[00447] The tessellation module 503 can take advantage of the subdivision structure described above, together with information provided by the decoder 502 and/or the application 501, to generate the mesh M''(i) to be used for rendering or for processing by the application 501. (One example of processing by the application 501 could include collision detection, although any of a variety of operations on the mesh M''(i) are contemplated.) Exemplary strategies to take advantage of the subdivision structure can include adjusting the global mesh resolution by varying the subdivision iteration count (see Figure 41). For example, the adaptive tessellation module 503 may produce the base mesh 4101 if the model is far away from the camera or if the terminal has limited rendering capability. The adaptive tessellation module 503 could then progressively switch to higher resolution meshes such as 4102, 4103, and 4104 as the object approaches the camera. These higher resolution meshes 4102-4104 correspond to subdivision iterations performed on the base mesh. As an example, mesh 4102 can correspond to one subdivision of base mesh 4101. Mesh 4103 can correspond to a further subdivision of mesh 4102, i.e., two subdivisions of base mesh 4101. Mesh 4104 can correspond to a further subdivision of mesh 4103, i.e., three subdivisions of base mesh 4101.
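
By way of a concrete, non-normative illustration, the following Python sketch shows how such a distance-based policy might choose the number of subdivision iterations. The threshold values and the helper name select_level_of_detail are assumptions introduced here, not part of the described scheme.

def select_level_of_detail(distance_to_camera, max_subdivision_iterations,
                           limited_rendering_capability=False):
    # Constrained terminal: fall back to the base mesh (0 subdivision iterations).
    if limited_rendering_capability:
        return 0
    # Closer objects get progressively more subdivision iterations, capped at
    # the iteration count signalled in the bitstream.
    if distance_to_camera > 20.0:
        level = 0
    elif distance_to_camera > 10.0:
        level = 1
    elif distance_to_camera > 5.0:
        level = 2
    else:
        level = 3
    return min(level, max_subdivision_iterations)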

[00448] An alternative strategy to take advantage of the subdivision structure can include locally adjusting the mesh resolution in certain areas based on various criteria. Various approaches to localized mesh adjustment are possible, including those described in References [F1], [F2], [F3], [F4], [F5], [F6], [F7], [F8], and [F9]. An example of a simple and efficient localized mesh resolution adjustment could proceed as follows:

• Analyze the local properties of the mesh, such as:

o Displacements associated with vertices of the mesh;

o Explicitly encoded attributes associated with the base mesh or the subdivided mesh describing saliency and importance/priority information;

o Implicitly derived saliency and importance/priority information obtained by analyzing the mesh and attribute information. Examples could include surface curvature, gradient of vertex attributes or attribute maps, edge length, etc.;

• Based on the analyzed local properties, determine for each edge of the mesh whether that edge should be subdivided or not (a sketch of this edge-level decision appears after this list). For example, if the displacements associated with the vertices of an edge are lower than a user-defined or automatically-derived threshold, one might decide not to subdivide the edge. Otherwise (if there are relatively larger displacements associated with the edge/vertices), one might decide to subdivide it;

• For each triangle (or other polygon), based on the number of edges to be subdivided, apply a subdivision scheme. An exemplary subdivision scheme (for triangles) is illustrated in Fig. 42.

• Repeat the above steps N times, with N being the subdivision iteration count, to generate the final output mesh.
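
As an illustration of the edge-level decision, the following Python sketch flags an edge for subdivision when the displacement magnitude of either endpoint vertex exceeds a threshold. The function name and the specific thresholding rule are assumptions made for illustration only.

def edges_to_subdivide(edges, displacements, threshold):
    # edges: list of (vertex_a, vertex_b) index pairs.
    # displacements: per-vertex displacement vectors (e.g., the decoded d'(i)).
    def magnitude(vec):
        return sum(c * c for c in vec) ** 0.5
    marked = set()
    for a, b in edges:
        # Subdivide only where the decoded displacements are significant;
        # small displacements mean subdivision would add little detail.
        if max(magnitude(displacements[a]), magnitude(displacements[b])) > threshold:
            marked.add((a, b))
    return marked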

[00449] Figure 42 (cf. Fig. 11) illustrates a technique for subdividing a triangle based on the number of edges determined to be subdivided according to the algorithm described above. In (a), each edge is to be subdivided, resulting in the triangle being split into four triangles as shown. In (b), two edges of the triangle are to be subdivided (the edges other than the base), resulting in the triangle being split into three triangles as shown. In (c), only one edge is to be subdivided, resulting in the triangle being split into two triangles as shown. In (d), no edges are to be subdivided, meaning the original triangle is preserved. This is just one possible approach, and other subdivision approaches could be applied, for example in the case of higher-order polygons.
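
A minimal Python sketch of this per-triangle splitting, keyed on how many of the triangle's edges are marked for subdivision, might look as follows. The midpoint helper and the specific output triangle ordering are illustrative assumptions rather than the exact patterns of Figure 42.

def split_triangle(a, b, c, marked_edges, midpoint):
    # a, b, c: vertex indices; marked_edges: set of index pairs (in either order)
    # selected for subdivision; midpoint(u, v) returns (and caches) the index of
    # the vertex inserted on edge (u, v).
    def is_marked(u, v):
        return (u, v) in marked_edges or (v, u) in marked_edges
    flags = [is_marked(a, b), is_marked(b, c), is_marked(c, a)]
    count = sum(flags)
    if count == 3:                                    # case (a): 1 -> 4 triangles
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    if count == 2:                                    # case (b): 1 -> 3 triangles
        while not (flags[0] and flags[1]):            # rotate so (c, a) is unmarked
            a, b, c = b, c, a
            flags = flags[1:] + flags[:1]
        ab, bc = midpoint(a, b), midpoint(b, c)
        return [(a, ab, c), (ab, bc, c), (ab, b, bc)]
    if count == 1:                                    # case (c): 1 -> 2 triangles
        while not flags[0]:                           # rotate so (a, b) is marked
            a, b, c = b, c, a
            flags = flags[1:] + flags[:1]
        ab = midpoint(a, b)
        return [(a, ab, c), (ab, b, c)]
    return [(a, b, c)]                                # case (d): keep the original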

Adaptive Tessellation For Progressive/Scalable Encoding

[00450] Adaptive tessellation could also be achieved during the pre-processing and encoding stages by adaptively adjusting the subdivision scheme based on various criteria. Such criteria could include:

• Available processing on both the encoder and decoder side. In general, if more processing is available, more detailed meshes (e.g., meshes with more vertices, edges, higher resolution attribute maps, etc.) may be provided and vice versa.

• Rendering capabilities of the consuming terminal(s). Relatedly, if the consuming terminal(s) have limited rendering capabilities, less detailed meshes (e.g., meshes with fewer vertices, edges, lower resolution attribute maps, etc.) may be provided and vice versa. In some cases, the scalable nature of the encoding process may allow for different layers of mesh information to be provided, with more capable terminals consuming multiple layers to provide higher levels of detail and less capable terminals consuming fewer layers, or even just a base layer, to provide lower levels of detail.

• Power consumption constraints on the encoder and decoder sides. Like processing capability, power consumption limits (such as on battery-powered mobile devices) may serve to limit the ability of a consuming device to process, render, and/or display higher resolution meshes, even if the computational resources would otherwise be available. In such cases, the tessellation may be tailored to the power consumption constraints, which may be thought of as acting as a constraint on the computational limits of a consuming device or devices.

• Region of Interest (ROI) information. As described above, for some applications certain regions of a mesh may be more important than others. As one example, the facial region of a mesh representing a person may be of more interest than a body region. In such cases, the region of interest may be taken into account to guide the subdivision on the pre-processor/encoder side. The region of interest may either be given explicitly by the consuming application or may be inferred implicitly from information about the meshes.

• Saliency and importance/priority information provided by the user or obtained by analyzing the mesh and attribute data (such as surface curvature, gradient of the vertex attributes or attribute maps, edge length). Like ROI information, other forms of saliency and importance/priority information may be used to inform the encoder/pre-processor side tessellation process.

[00451] For any combination of the foregoing, behavior of the subdivision scheme could be adjusted in the same manner as described above with respect to decoder/post-processor side tessellation. In the encoder/pre-processor side case, displacement and vertex attribute information can be encoded based on the adaptively subdivided mesh.

[00452] In some embodiments, the decimation stage described in Section 2 could be updated to consider the criteria described above while generating the base mesh. For instance, a higher resolution could be generated in a ROI provided by the user or by analyzing the attribute map information associated with the region.

[00453] References for the preceding section relating to Adaptive Tessellation for Efficient Dynamic Mesh Encoding, Decoding, Processing, and Rendering, each of which is incorporated by reference in its entirety:

[FA] https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-7-adaptive-tessellation-subdivision-surfaces

[FB] https://niessnerlab.org/papers/2015/0dynamic/schaefer2015dynamic.pdf

[FC] https://giv.cpsc.ucalgary.ca/publication/c5/

[FD] https://projet.liris.cnrs.fr/imagine/pub/proceedings/ICME-2007/pdfs/0000468.pdf

[F1] https://www.researchgate.net/publication/221434740_Incremental_Adaptive_Loop_Subdivision

[F2] https://www.researchgate.net/publication/2554610_Adaptive_Subdivision_Schemes_for_Triangular_Meshes/link/546e58c30cf2b5fc176074c3/download

[F3] http://diglib.eg.org/bitstream/handle/10.2312/osg20031418Z05settgast.pdf

[F4] http://www.graphics.stanford.edu/~niessner/brainerd2016efficient.html

[F5] https://www.researchgate.net/publication/220954613_Near-Optimum_Adaptive_Tessellation_of_General_Catmull-Clark_Subdivision_Surfaces/link/00b7d53ae32d0c726a000000/download

[F6] https://www.cs.cmu.edu/afs/cs/academic/class/15869-f11/www/readings/fisher09_diagsplit.pdf

[F6] http://www.graphics.stanford.edu/~niessner/papers/2015/0dynamic/schaefer2015dynamic.pdf

[F7] https://www.cise.ufl.edu/research/SurfLab/papers/05adapsub.pdf

[F8] https://anjulpatney.com/docs/papers/2009_Patney_PVT.pdf

[F9] http://research.michael-schwarz.com/publ/files/cudatess-eg09.pdf

CONCLUSION

[00454] The foregoing describes exemplary embodiments of mesh encoders and decoders employing video/image encoders/decoders for displacements and attributes. Although numerous specific features and various embodiments have been described, it is to be understood that, unless otherwise noted as being mutually exclusive, the various features and embodiments may be combined in various permutations in a particular implementation. Thus, the various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. Various modifications and changes can be made to the principles and embodiments herein without departing from the scope of the disclosure and without departing from the scope of the claims.

[00455] With the preceding in mind and to help illustrate machines that may be used to implement the processes described herein, an electronic device 4300 including an electronic display 4302 is shown in Figure 43. As is described in more detail below, the electronic device 4300 may be any suitable electronic device, such as a computer, a mobile phone, a portable media device, a tablet, a television, a virtual-reality headset, a vehicle dashboard, and the like. Thus, it should be noted that Figure 43 is merely one example of a particular implementation and is intended to illustrate the types of components that may be present in an electronic device 4300.

[00456] The electronic device 4300 includes the electronic display 4302, one or more input devices 4304, one or more input/output (I/O) ports 4306, a processor core complex 4308 having one or more processing circuitry(s) or processing circuitry cores, local memory 4310, a main memory storage device 4312, a network interface 4314, and a power source 4316 (e.g., power supply). The various components described in Figure 43 may include hardware elements (e.g., circuitry), software elements (e.g., a tangible, non-transitory computer-readable medium storing executable instructions), or a combination of both hardware and software elements. It should be noted that the various depicted components may be combined into fewer components or separated into additional components. For example, the local memory 4310 and the main memory storage device 4312 may be included in a single component.

[00457] The processor core complex 4308 is operably coupled with local memory 4310 and the main memory storage device 4312. Thus, the processor core complex 4308 may execute instructions stored in local memory 4310 or the main memory storage device 4312 to perform operations, such as generating or transmitting image data to display on the electronic display 4302. As such, the processor core complex 4308 may include one or more general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable logic arrays (FPGAs), or any combination thereof.

[00458] In addition to program instructions, the local memory 4310 or the main memory storage device 4312 may store data to be processed by the processor core complex 4308. Thus, the local memory 4310 and/or the main memory storage device 4312 may include one or more tangible, non-transitory, computer-readable media. For example, the local memory 4310 may include random access memory (RAM) and the main memory storage device 4312 may include read-only memory (ROM), rewritable non-volatile memory such as flash memory, hard drives, optical discs, or the like.

[00459] The network interface 4314 may communicate data with another electronic device or a network. For example, the network interface 4314 (e.g., a radio frequency system) may enable the electronic device 4300 to communicatively couple to a personal area network (PAN), such as a Bluetooth network, a local area network (LAN), such as an 802.11x Wi-Fi network, or a wide area network (WAN), such as a 4G, Long-Term Evolution (LTE), or 5G cellular network. The power source 4316 may provide electrical power to one or more components in the electronic device 4300, such as the processor core complex 4308 or the electronic display 4302. Thus, the power source 4316 may include any suitable source of energy, such as a rechargeable lithium polymer (Li-poly) battery or an alternating current (AC) power converter. The I/O ports 4306 may enable the electronic device 4300 to interface with other electronic devices. For example, when a portable storage device is connected, the I/O port 4306 may enable the processor core complex 4308 to communicate data with the portable storage device.

[00460] The input devices 4304 may enable user interaction with the electronic device 4300, for example, by receiving user inputs via a button, a keyboard, a mouse, a trackpad, or the like. The input device 4304 may include touch-sensing components in the electronic display 4302. The touch sensing components may receive user inputs by detecting occurrence or position of an object touching the surface of the electronic display 4302.

[00461] In some embodiments, pixel or image data may be generated by an image source, such as the processor core complex 4308, a graphics processing unit (GPU), or an image sensor. Additionally, in some embodiments, image data may be received from another electronic device 4300, for example, via the network interface 4314 and/or an I/O port 4306. Similarly, the electronic display 4302 may display frames based on pixel or image data generated by the processor core complex 4308, or the electronic display 4302 may display frames based on pixel or image data received via the network interface 4314, an input device, or an I/O port 4306.

[00462] Entities implementing the present technology should take care to ensure that, to the extent any sensitive information is used in particular implementations, well-established privacy policies and/or privacy practices are complied with. In particular, such entities would be expected to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. Implementers should inform users where personally identifiable information is expected to be transmitted, and allow users to "opt in" or "opt out" of participation.

[00463] Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy. Robust encryption may also be utilized to reduce the likelihood that communication between inductively coupled devices is spoofed.