
Title:
A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR STREAMING VOLUMETRIC VIDEO
Document Type and Number:
WIPO Patent Application WO/2023/175234
Kind Code:
A1
Abstract:
The embodiments relate to a method and technical equipment for encoding and decoding. The method for encoding comprises receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information; demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and providing to the client information allowing the client to identify an RTP session containing the atlas components.

Inventors:
KONDRAD LUKASZ (DE)
ILOLA LAURI ALEKSI (FI)
Application Number:
PCT/FI2023/050128
Publication Date:
September 21, 2023
Filing Date:
March 08, 2023
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04N19/597; H04N19/70
Foreign References:
US20210021664A12021-01-21
Other References:
ILOLA L KONDRAD L: "RTP Payload Format for Visual Volumetric Video-based Coding (V3C) draft-ilola-avtcore-rtp-v3c-00; draft-ilola-avtcore-rtp-v3c-00.txt", 20 January 2022 (2022-01-20), pages 1 - 44, XP015149757, Retrieved from the Internet [retrieved on 20220120]
"Text of ISO/IEC FDIS 23090-10 Carriage of Visual Volumetric Video-based Coding Data", no. n20303, 31 August 2021 (2021-08-31), XP030297806, Retrieved from the Internet [retrieved on 20210831]
"Text of ISO/IEC DIS 23090-5 Visual Volumetric Video-based Coding and Video-based Point Cloud Compression 2nd Edition", no. n20761, 23 July 2021 (2021-07-23), XP030296513, Retrieved from the Internet [retrieved on 20210723]
Attorney, Agent or Firm:
BERGGREN OY (FI)
Claims:

1. An apparatus for encoding, comprising:

- means for receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information;

- means for demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component;

- means for encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format;

- means for sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and

- means for providing to the client information allowing the client to identify an RTP session containing the atlas components.

2. The apparatus according to claim 1, wherein the atlas information comprises a common atlas component and at least one atlas component.

3. The apparatus according to claim 1 or 2, further comprising means for creating a session specific file format describing each RTP session and providing information allowing the client to identify which of the RTP sessions contains the atlas components.

4. The apparatus according to claim 3, further comprising means for providing information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new payload format parameter.

5. The apparatus according to claim 3, further comprising means for providing information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new dedicated V3C level attribute.

6. The apparatus according to claim 4 or 5, further comprising means for recording, in an RTP packet payload, information indicating atlas identifiers for each encapsulated NAL unit.

7. The apparatus according to claim 4 or 5, further comprising means for signaling that an RTP packet header is extended to contain atlas identifiers.

8. The apparatus according to claim 7, further comprising means for extending an RTP packet header and storing information indicating atlas identifiers in the extension.

9. An apparatus for decoding, comprising:

- means for receiving a plurality of sub-bitstreams over a plurality of RTP sessions;

- means for receiving information by which an RTP session containing atlas components is identified from the plurality of RTP sessions;

- means for decapsulating sub-bitstreams representing atlas components from the identified RTP session; and

- means for reconstructing a bitstream representing a volumetric video from the sub-bitstreams.

10. A method for encoding, comprising

- receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information;

- demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component;

- encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format;

- sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and

- providing to the client information allowing the client to identify an RTP session containing the atlas components.

11. The method according to claim 10, wherein the atlas information comprises a common atlas component and at least one atlas component.

12. The method according to claim 10 or 11, further comprising creating a session specific file format describing each RTP session and providing information allowing the client to identify which of the RTP sessions contains the atlas components.

13. The method according to claim 12, further comprising providing information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new payload format parameter.

14. The method according to claim 12, further comprising providing information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new dedicated V3C level attribute.

15. The method according to claim 13 or 14, further comprising recording, in an RTP packet payload, information indicating atlas identifiers for each encapsulated NAL unit.

16. The method according to claim 13 or 14, further comprising signaling that an RTP packet header is extended to contain atlas identifiers.

17. The method according to claim 16, further comprising extending the RTP packet header and storing information indicating atlas identifiers in the extension.

18. A method for decoding, comprising:

- receiving a plurality of sub-bitstreams over a plurality of RTP sessions;

- receiving information by which an RTP session containing atlas components is identified from the plurality of RTP sessions;

- decapsulating sub-bitstreams representing atlas components from the identified RTP session; and

- reconstructing a bitstream representing a volumetric video from the sub-bitstreams.

19. An apparatus for encoding, comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least

- receive a bitstream representing coded volumetric video, the bitstream comprising atlas information;

- demultiplex the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component;

- encapsulate sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format;

- send the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and

- provide to the client information allowing the client to identify an RTP session containing the atlas components.

20. The apparatus according to claim 19, wherein the atlas information comprises a common atlas component and at least one atlas component.

21. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to create a session specific file format describing each RTP session and provide information allowing the client to identify which of the RTP sessions contains the atlas components.

22. The apparatus according to claim 21, further comprising computer program code to cause the apparatus to provide information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new payload format parameter.

23. The apparatus according to claim 21, further comprising computer program code to cause the apparatus to provide information allowing the client to identify which of the RTP sessions contains the atlas components by defining a new dedicated V3C level attribute.

24. The apparatus according to claim 22 or 23, further comprising computer program code to cause the apparatus to record, in an RTP packet payload, information indicating atlas identifiers for each encapsulated NAL unit.

25. The apparatus according to claim 22 or 23, further comprising computer program code to cause the apparatus to signal that an RTP packet header is extended to contain atlas identifiers.

26. The apparatus according to claim 25, further comprising computer program code to cause the apparatus to extend the RTP packet header and store information indicating atlas identifiers in the extension.

27. An apparatus for decoding, comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least

- receive a plurality of sub-bitstreams over a plurality of RTP sessions;

- receive information by which an RTP session containing atlas components is identified from the plurality of RTP sessions;

- decapsulate sub-bitstreams representing atlas components from the identified RTP session; and

- reconstruct a bitstream representing a volumetric video from the sub-bitstreams.

28. A computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to

- receive a bitstream representing coded volumetric video, the bitstream comprising atlas information;

- demultiplex the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component;

- encapsulate sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format;

- send the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and

- provide to the client information allowing the client to identify an RTP session containing the atlas components.

29. A computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to

- receive a plurality of sub-bitstreams over a plurality of RTP sessions;

- receive information by which an RTP session containing atlas components is identified from the plurality of RTP sessions;

- decapsulate sub-bitstreams representing atlas components from the identified RTP session; and

- reconstruct a bitstream representing a volumetric video from the sub-bitstreams.

Description:
A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR STREAMING VOLUMETRIC VIDEO

Technical Field

The present solution generally relates to streaming of volumetric video.

Background

Volumetric video data represents a three-dimensional (3D) scene or object, and can be used as input for AR (Augmented Reality), VR (Virtual Reality), and MR (Mixed Reality) applications. Such data describes geometry (Shape, size, position in 3D space) and respective attributes (e.g., color, opacity, reflectance, ...), and any possible temporal transformations of the geometry and attributes at given time instances (like frames in 2D video). Volumetric video can be generated from 3D models, also referred to as volumetric visual objects, i.e., CGI (Computer Generated Imagery), or captured from real-world scenes using a variety of capture solutions, e.g., multi-camera, laser scan, combination of video and dedicated depth sensors, and more. Also, a combination of CGI and real-world data is possible. Examples of representation formats for volumetric data comprise triangle meshes, point clouds, or voxels. Temporal information about the scene can be included in the form of individual capture instances, i.e., “frames” in 2D video, or other means, e.g., position of an object as a function of time.

Summary

The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.

Various aspects include a method, an apparatus and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims. According to a first aspect, there is provided an apparatus comprising means for receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information; means for demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; means for encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and means for providing to the client information allowing the client to identify an RTP session containing the atlas components.

According to a second aspect, there is provided an apparatus for decoding, comprising means for receiving a plurality of sub-bitstreams over a plurality of RTP sessions; means for receiving information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; means for decapsulating sub-bitstreams representing atlas components from the identified RTP session; and means for reconstructing a bitstream representing a volumetric video from the sub-bitstreams.

According to a third aspect, there is provided a method for encoding, comprising receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information; demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and providing to the client information allowing the client to identify an RTP session containing the atlas components.

According to a fourth aspect, there is provided a method for decoding comprising receiving a plurality of sub-bitstreams over a plurality of RTP sessions; receiving information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; decapsulating sub-bitstreams representing atlas components from the identified RTP session; and reconstructing a bitstream representing a volumetric video from the sub-bitstreams.

According to a fifth aspect, there is provided an apparatus for encoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a bitstream representing coded volumetric video, the bitstream comprising atlas information; demultiplex the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; encapsulate sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; send the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and provide to the client information allowing the client to identify an RTP session containing the atlas components.

According to a sixth aspect, there is provided an apparatus for decoding comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a plurality of sub-bitstreams over a plurality of RTP sessions; receive information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; decapsulate sub-bitstreams representing atlas components from the identified RTP session; and reconstruct a bitstream representing a volumetric video from the sub-bitstreams.

According to a seventh aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a bitstream representing coded volumetric video, the bitstream comprising atlas information; demultiplex the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; encapsulate sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; send the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and provide to the client information allowing the client to identify an RTP session containing the atlas components.

According to an eighth aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a plurality of sub-bitstreams over a plurality of RTP sessions; receive information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; decapsulate sub-bitstreams representing atlas components from the identified RTP session; and reconstruct a bitstream representing a volumetric video from the sub-bitstreams.

According to an embodiment, the atlas information comprises a common atlas component and at least one atlas component.

According to an embodiment, a session specific file format is created to describe each RTP session and provide information allowing the client to identify which of the RTP sessions contains the atlas components.

According to an embodiment, information is provided to allow the client to identify which of the RTP sessions contains the atlas components by defining a new payload format parameter.

According to an embodiment, information is provided to allow the client to identify which of the RTP sessions contains the atlas components by defining a new dedicated V3C level attribute.

According to an embodiment, information is recorded in the RTP packet payload to indicate atlas identifiers for each encapsulated NAL unit.

According to an embodiment, it is signaled that an RTP packet header is extended to contain atlas identifiers.

According to an embodiment, the RTP packet header is extended and information indicating atlas identifiers is stored in the extension.

According to an embodiment, the computer program product is embodied on a non-transitory computer readable medium.

Description of the Drawings

In the following, various embodiments will be described in more detail with reference to the appended drawings, in which

Fig. 1 shows an example of a compression process of a volumetric video;

Fig. 2 shows an example of a de-compression process of a volumetric video;

Fig. 3 shows an example of a V3C bitstream originated from ISO/IEC 23090-5;

Fig. 4 shows an example of an extension header;

Fig. 5 shows an example architecture of a V3C bitstream delivery over multiple RTP sessions with a client reconstructing the V3C bitstream;

Fig. 6 shows an example architecture of a V3C bitstream delivery over a number of RTP sessions with a client sending V3C sub-bitstream with associated V3C unit header;

Fig. 7 shows an example architecture of a V3C bitstream delivery over one RTP session with a client reconstructing V3C bitstream;

Fig. 8 shows an example architecture of a V3C bitstream delivery over one RTP session with a client sending V3C sub-bitstream with associated V3C unit header;

Fig. 9 shows an example architecture of a V3C bitstream delivery where one RTP session contains all common atlases and atlases of V3C bitstream and a client reconstructing V3C bitstream;

Fig. 10 shows an example architecture of a V3C bitstream delivery where one RTP session contains all common atlases and atlases of V3C bitstream and a client sending V3C sub-bitstream with associated V3C unit header;

Fig. 11 is a flowchart illustrating a method according to an embodiment;

Fig. 12 is a flowchart illustrating a method according to another embodiment; and

Fig. 13 shows an example of an apparatus.

Embodiments

The following description and drawings are illustrative and are not to be construed as unnecessarily limiting. The specific details are provided for a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to “one embodiment” or “an embodiment” in the present disclosure can be, but are not necessarily, references to the same embodiment, and such references mean at least one of the embodiments.

Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.

Figure 1 illustrates an overview of an example of a compression process of a volumetric video. Such process may be applied for example in MPEG Video-based Point Cloud Coding (V-PCC). The process starts with an input point cloud frame 101 that is provided for patch generation 102, geometry image generation 104 and texture image generation 105.

The patch generation 102 process aims at decomposing the point cloud into a minimum number of patches with smooth boundaries, while also minimizing the reconstruction error. For patch generation, the normal at every point can be estimated. An initial clustering of the point cloud can then be obtained by associating each point with one of the following six oriented planes, defined by their normals:

- (1.0, 0.0, 0.0),

- (0.0, 1.0, 0.0),

- (0.0, 0.0, 1.0),

- (-1.0, 0.0, 0.0),

- (0.0, -1.0, 0.0), and

- (0.0, 0.0, -1.0)

More precisely, each point may be associated with the plane that has the closest normal (i.e., maximizes the dot product of the point normal and the plane normal).

The initial clustering may then be refined by iteratively updating the cluster index associated with each point based on its normal and the cluster indices of its nearest neighbors. The final step may comprise extracting patches by applying a connected component extraction procedure.

Patch info determined at patch generation 102 for the input point cloud frame 101 is delivered to packing process 103, to geometry image generation 104 and to texture image generation 105. The packing process 103 aims at mapping the extracted patches onto a 2D plane, while trying to minimize the unused space, and guaranteeing that every TxT (e.g., 16x16) block of the grid is associated with a unique patch. It should be noticed that T may be a user-defined parameter. Parameter T may be encoded in the bitstream and sent to the decoder.

The simple packing strategy used iteratively tries to insert patches into a WxH grid. W and H may be user-defined parameters, which correspond to the resolution of the geometry/texture images that will be encoded. The patch location is determined through an exhaustive search that is performed in raster scan order. The first location that can guarantee an overlapping-free insertion of the patch is selected and the grid cells covered by the patch are marked as used. If no empty space in the current resolution image can fit a patch, then the height H of the grid may be temporarily doubled, and the search is applied again. At the end of the process, H is clipped so as to fit the used grid cells.
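Purely as an illustration of the packing strategy described above, the following Python sketch (an approximation with invented names, not code from any reference encoder) places patches on a grid of TxT blocks in raster-scan order and temporarily doubles the grid height whenever a patch does not fit:

import numpy as np

def pack_patches(patch_sizes, W, H, T=16):
    """Greedily place patches, given as (width, height) in pixels, on a WxH grid.

    Returns the (u0, v0) pixel position of each patch and the final, clipped
    grid height, mirroring the simple packing strategy described above.
    """
    grid_w, grid_h = W // T, H // T
    occupied = np.zeros((grid_h, grid_w), dtype=bool)
    positions = []
    for w, h in patch_sizes:
        bw, bh = (w + T - 1) // T, (h + T - 1) // T  # patch size in TxT blocks
        if bw > grid_w:
            raise ValueError("patch wider than the grid")
        placed = False
        while not placed:
            # Exhaustive search in raster-scan order for an overlap-free spot.
            for v in range(occupied.shape[0] - bh + 1):
                for u in range(grid_w - bw + 1):
                    if not occupied[v:v + bh, u:u + bw].any():
                        occupied[v:v + bh, u:u + bw] = True
                        positions.append((u * T, v * T))
                        placed = True
                        break
                if placed:
                    break
            if not placed:
                # No room at the current resolution: temporarily double the height.
                occupied = np.vstack([occupied, np.zeros_like(occupied)])
    # Clip the height so that it just covers the used grid cells.
    used_rows = np.where(occupied.any(axis=1))[0]
    final_h = int(used_rows.max() + 1) * T if used_rows.size else 0
    return positions, final_h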

The geometry image generation 104 and the texture image generation 105 are configured to generate geometry images and texture images respectively. The image generation process may exploit the 3D to 2D mapping computed during the packing process to store the geometry and texture of the point cloud as images. In order to better handle the case of multiple points being projected to the same pixel, each patch may be projected onto two images, referred to as layers. For example, let H(u, v) be the set of points of the current patch that get projected to the same pixel (u, v). The first layer, also called the near layer, stores the point of H(u, v) with the lowest depth D0. The second layer, referred to as the far layer, captures the point of H(u, v) with the highest depth within the interval [D0, D0+Δ], where Δ is a user-defined parameter that describes the surface thickness. The generated videos may have the following characteristics:

• Geometry: WxH YUV420-8bit,

• Texture: WxH YUV420-8bit.

It is to be noticed that the geometry video is monochromatic. In addition, the texture generation procedure exploits the reconstructed/smoothed geometry in order to compute the colors to be associated with the re-sampled points.

The geometry images and the texture images may be provided to image padding 107. The image padding 107 may also receive as an input an occupancy map (OM) 106 to be used with the geometry images and texture images. The occupancy map 106 may comprise a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. In other words, the occupancy map (OM) may be a binary image of binary values where the occupied pixels and non-occupied pixels are distinguished and depicted respectively. The occupancy map may alternatively comprise a non-binary image allowing additional information to be stored in it. Therefore, the representative values of the DOM (Deep Occupancy Map) may comprise binary values or other values, for example integer values. It should be noticed that one cell of the 2D grid may produce a pixel during the image generation process. Such an occupancy map may be derived from the packing process 103.

The padding process 107 aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression. For example, in a simple padding strategy, each block of TxT (e.g., 16x16) pixels is compressed independently. If the block is empty (i.e., unoccupied, i.e., all its pixels belong to empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., occupied, i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels (i.e., edge block), then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
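As an illustrative Python approximation of this padding strategy (array layout and names are assumptions of this sketch, not from any specification), the empty, full and edge blocks described above can be handled as follows:

import numpy as np

def pad_image(image, occupancy, T=16):
    """Fill the empty space between patches, per the simple padding strategy above.

    image     : 2D array with one component of the geometry or texture image.
    occupancy : 2D boolean array, True where a pixel belongs to a patch.
    """
    img = image.astype(np.float32)
    H, W = img.shape
    for by in range(0, H, T):
        for bx in range(0, W, T):
            occ = occupancy[by:by + T, bx:bx + T]
            blk = img[by:by + T, bx:bx + T]
            if occ.all():
                continue  # full block: nothing to do
            if not occ.any():
                # Empty block: copy the last column (or row) of the previous block.
                if bx > 0:
                    blk[:] = img[by:by + T, bx - 1:bx]
                elif by > 0:
                    blk[:] = img[by - 1:by, bx:bx + T]
                # An empty block at the very top-left corner is left unchanged here.
                continue
            # Edge block: iteratively fill empty pixels with the average of
            # their already-filled 4-neighbours inside the block.
            filled = occ.copy()
            while not filled.all():
                for y, x in zip(*np.where(~filled)):
                    vals = [blk[ny, nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < blk.shape[0] and 0 <= nx < blk.shape[1]
                            and filled[ny, nx]]
                    if vals:
                        blk[y, x] = sum(vals) / len(vals)
                        filled[y, x] = True
    return img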

The padded geometry images and padded texture images may be provided for video compression 108. The generated images/layers may be stored as video frames and compressed using for example the HM16.16 video codec according to the HM configurations provided as parameters. The video compression 108 also generates reconstructed geometry images to be provided for smoothing 109, wherein a smoothed geometry is determined based on the reconstructed geometry images and patch info from the patch generation 102. The smoothed geometry may be provided to texture image generation 105 to adapt the texture images.

The patch may be associated with auxiliary information being encoded/decoded for each patch as metadata. The auxiliary information may comprise the index of the projection plane, the 2D bounding box, and the 3D location of the patch.

For example, the following metadata may be encoded/decoded for every patch:

- index of the projection plane: o Index 0 for the planes (1.0, 0.0, 0.0) and (-1.0, 0.0, 0.0) o Index 1 for the planes (0.0, 1.0, 0.0) and (0.0, -1.0, 0.0) o Index 2 for the planes (0.0, 0.0, 1.0) and (0.0, 0.0, -1.0)

- 2D bounding box (u0, v0, u1, v1)

- 3D location (x0, y0, z0) of the patch represented in terms of depth δ0, tangential shift s0 and bitangential shift r0. According to the chosen projection planes, (δ0, s0, r0) may be calculated as follows: o Index 0: δ0 = x0, s0 = z0 and r0 = y0 o Index 1: δ0 = y0, s0 = z0 and r0 = x0 o Index 2: δ0 = z0, s0 = x0 and r0 = y0

Also, mapping information providing for each TxT block its associated patch index may be encoded as follows:

- For each TxT block, let L be the ordered list of the indexes of the patches such that their 2D bounding box contains that block. The order in the list is the same as the order used to encode the 2D bounding boxes. L is called the list of candidate patches.

- The empty space between patches is considered as a patch and is assigned the special index 0, which is added to the candidate patches list of all the blocks.

- Let I be the index of the patch to which the current TxT block belongs, and let J be the position of I in L. Instead of explicitly coding the index I, its position J is arithmetically encoded, which leads to better compression efficiency.

The occupancy map consists of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. One cell of the 2D grid produces a pixel during the image generation process.

The occupancy map compression 110 leverages the auxiliary information described in the previous section in order to detect the empty TxT blocks (i.e., blocks with patch index 0). The remaining blocks may be encoded as follows: The occupancy map can be encoded with a precision of B0xB0 blocks, where B0 is a configurable parameter. In order to achieve lossless encoding, B0 may be set to 1. In practice, B0=2 or B0=4 results in visually acceptable results, while significantly reducing the number of bits required to encode the occupancy map.

The compression process may comprise one or more of the following example operations:

• Binary values may be associated with B0xB0 sub-blocks belonging to the same TxT block. A value 1 is associated with a sub-block if it contains at least one non-padded pixel, and 0 otherwise. If a sub-block has a value of 1, it is said to be full, otherwise it is an empty sub-block.

• If all the sub-blocks of a TxT block are full (i.e., have value 1), the block is said to be full. Otherwise, the block is said to be non-full.

• Binary information may be encoded for each TxT block to indicate whether it is full or not.

• If the block is non-full, extra information indicating the location of the full/empty sub-blocks may be encoded as follows (a code sketch of the run-length coding follows this list): o Different traversal orders may be defined for the sub-blocks, for example horizontally, vertically, or diagonally starting from the top right or top left corner. o The encoder chooses one of the traversal orders and may explicitly signal its index in the bitstream. o The binary values associated with the sub-blocks may be encoded by using a run-length encoding strategy.

■ The binary value of the initial sub-block is encoded.

■ Continuous runs of 0s and 1s are detected, while following the traversal order selected by the encoder.

■ The number of detected runs is encoded.

■ The length of each run, except for the last one, is also encoded.
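The sub-block derivation and the run-length coding outlined in the list above can be sketched in Python as follows; this is an illustrative simplification (plain integer lists instead of arithmetic coding, only two traversal orders), and all names are invented for this sketch:

import numpy as np

def encode_block_occupancy(block, B0=4, traversal="horizontal"):
    """Encode the occupancy of one TxT block at B0xB0 sub-block precision.

    block is a TxT boolean array (True for occupied, i.e., non-padded pixels);
    T is assumed to be a multiple of B0. Returns ("full",) for a full block, or
    ("non-full", first_value, runs) where runs omits the implicit last run.
    """
    T = block.shape[0]
    n = T // B0
    # A sub-block has value 1 if it contains at least one occupied pixel.
    sub = np.array([[block[y * B0:(y + 1) * B0, x * B0:(x + 1) * B0].any()
                     for x in range(n)] for y in range(n)], dtype=np.uint8)
    if sub.all():
        return ("full",)
    # Choose a traversal order (the encoder would signal its index).
    order = sub.flatten() if traversal == "horizontal" else sub.T.flatten()
    first = int(order[0])
    runs, count = [], 1
    for prev, cur in zip(order[:-1], order[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return ("non-full", first, runs[:-1])  # the length of the last run is implied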

Figure 2 illustrates an overview of a de-compression process for MPEG Point Cloud Coding (PCC). A de-multiplexer 201 receives a compressed bitstream, and after de-multiplexing, provides compressed texture video and compressed geometry video to video decompression 202. In addition, the de-multiplexer 201 transmits a compressed occupancy map to occupancy map decompression 203. It may also transmit compressed auxiliary patch information to auxiliary patch-info decompression 204. Decompressed geometry video from the video decompression 202 is delivered to geometry reconstruction 205, as are the decompressed occupancy map and decompressed auxiliary patch information. The point cloud geometry reconstruction 205 process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels may be computed by leveraging the auxiliary patch information and the geometry images.

The reconstructed geometry image may be provided for smoothing 206, which aims at alleviating potential discontinuities that may arise at the patch boundaries due to compression artifacts. The implemented approach moves boundary points to the centroid of their nearest neighbors. The smoothed geometry may be transmitted to texture reconstruction 207, which also receives a decompressed texture video from video decompression 202. The texture reconstruction 207 outputs a reconstructed point cloud. The texture values for the texture reconstruction are directly read from the texture images. The point cloud geometry reconstruction process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels are computed by leveraging the auxiliary patch information and the geometry images. More precisely, let P be the point associated with the pixel (u, v) and let (δ0, s0, r0) be the 3D location of the patch to which it belongs and (u0, v0, u1, v1) its 2D bounding box. P can be expressed in terms of depth δ(u, v), tangential shift s(u, v) and bi-tangential shift r(u, v) as follows:

δ(u, v) = δ0 + g(u, v)

s(u, v) = s0 - u0 + u

r(u, v) = r0 - v0 + v

where g(u, v) is the luma component of the geometry image.

For the texture reconstruction, the texture values can be directly read from the texture images. The result of the decoding process is a 3D point cloud reconstruction.
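The reconstruction equations above can be illustrated with a small Python sketch; the dictionary keys and the inverse axis mapping below are assumptions of this illustration, derived from the patch metadata description earlier, not code from the standard:

def reconstruct_point(u, v, g_uv, patch):
    """Reconstruct the 3D position of the point projected to pixel (u, v).

    patch is assumed to carry the decoded auxiliary information: the projection
    plane index, the offsets delta0, s0, r0 and the 2D bounding box origin
    (u0, v0); g_uv is the luma value read from the geometry image at (u, v).
    """
    delta = patch["delta0"] + g_uv      # depth:        d(u, v) = d0 + g(u, v)
    s = patch["s0"] - patch["u0"] + u   # tangential:   s(u, v) = s0 - u0 + u
    r = patch["r0"] - patch["v0"] + v   # bitangential: r(u, v) = r0 - v0 + v
    # Invert the encoder-side axis mapping according to the projection index.
    axis = patch["projection_index"]    # 0 -> x, 1 -> y, 2 -> z is the depth axis
    if axis == 0:
        return (delta, r, s)            # x from depth, y from r, z from s
    if axis == 1:
        return (r, delta, s)            # y from depth, x from r, z from s
    return (s, r, delta)                # z from depth, x from s, y from r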

There are alternatives to capture and represent a volumetric frame. The format used to capture and represent the volumetric frame depends on the process to be performed on it, and the target application using the volumetric frame. As a first example, a volumetric frame can be represented as a point cloud. A point cloud is a set of unstructured points in 3D space, where each point is characterized by its position in a 3D coordinate system (e.g., Euclidean), and some corresponding attributes (e.g., color information provided as RGBA value, or normal vectors). As a second example, a volumetric frame can be represented as images, with or without depth, captured from multiple viewpoints in 3D space. In other words, the volumetric video can be represented by one or more view frames (where a view is a projection of a volumetric scene onto a plane (the camera plane) using a real or virtual camera with known/computed extrinsics and intrinsics). Each view may be represented by a number of components (e.g., geometry, color, transparency, and occupancy picture), which may be part of the geometry picture or represented separately. As a third example, a volumetric frame can be represented as a mesh. A mesh is a collection of points, called vertices, and connectivity information between vertices, called edges. Vertices along with edges form faces. The combination of vertices, edges and faces can uniquely approximate shapes of objects. Depending on the capture, a volumetric frame can provide viewers the ability to navigate a scene with six degrees of freedom, i.e., both translational and rotational movement of their viewing pose (which includes yaw, pitch, and roll). The data to be coded for a volumetric frame can also be significant, as a volumetric frame can contain a large number of objects, and the positioning and movement of these objects in the scene can result in many dis-occluded regions. Furthermore, the interaction of the light and materials in objects and surfaces in a volumetric frame can generate complex light fields that can produce texture variations for even a slight change of pose.

A sequence of volumetric frames is a volumetric video. Due to the large amount of information, storage and transmission of a volumetric video requires compression. A way to compress a volumetric frame can be to project the 3D geometry and related attributes into a collection of 2D images along with additional associated metadata. The projected 2D images can then be coded using 2D video and image coding technologies, for example ISO/IEC 14496-10 (H.264/AVC) and ISO/IEC 23008-2 (H.265/HEVC). The metadata can be coded with technologies specified in specifications such as ISO/IEC 23090-5. The coded images and the associated metadata can be stored or transmitted to a client that can decode and render the 3D volumetric frame.

In the following, a short reference to ISO/IEC 23090-5 Visual Volumetric Video-based Coding (V3C) and Video-based Point Cloud Compression (V-PCC) 2nd Edition is given. ISO/IEC 23090-5 specifies the syntax, semantics, and process for coding volumetric video. The specified syntax is designed to be generic, so that it can be reused for a variety of applications. Point clouds, immersive video with depth, and mesh representations can all use the ISO/IEC 23090-5 standard with extensions that deal with the specific nature of the final representation. The purpose of the specification is to define how to decode and interpret the associated data (for example atlas data in ISO/IEC 23090-5), which tells a renderer how to interpret 2D frames to reconstruct a volumetric frame.

Two applications of V3C (ISO/IEC 23090-5) have been defined: V-PCC (ISO/IEC 23090-5) and MIV (ISO/IEC 23090-12). MIV and V-PCC use a number of V3C syntax elements with slightly modified semantics. An example of how a generic syntax element can be interpreted differently by each application is pdu_projection_id.

In case of V-PCC, the syntax element pdu_projection_id specifies the index of the projection plane for the patch. There can be 6 or 18 projection planes in V-PCC, and they are implicit, i.e., pre-determined. In case of MIV, pdu_projection_id corresponds to a view ID, i.e., identifies which view the patch originated from. View IDs and their related information are explicitly provided in the MIV view parameters list and may be tailored for each content.

The MPEG 3DG (ISO SC29 WG7) group has started work on a third application of V3C: mesh compression. It is envisaged that mesh coding will reuse the V3C syntax as much as possible and may also slightly modify the semantics.

To differentiate between applications of a V3C bitstream and to allow a client to properly interpret the decoded data, V3C uses the ptl_profile_toolset_idc parameter.

A V3C bitstream is a sequence of bits that forms the representation of coded volumetric frames and the associated data, making up one or more coded V3C sequences (CVS). A V3C bitstream is composed of V3C units that contain V3C video sub-bitstreams, V3C atlas sub-bitstreams, or a V3C Parameter Set (VPS). Figure 3 illustrates an example of a V3C bitstream. Video sub-bitstreams and atlas sub-bitstreams can be referred to as V3C sub-bitstreams. Each V3C unit has a V3C unit header and a V3C unit payload. A V3C unit header, in conjunction with VPS information, identifies which V3C sub-bitstream a V3C unit contains and how to interpret it. An example of this is shown herein below:

V3C bitstream can be stored according to Annex C of ISO/IEC 23090-5, which specifies syntax and semantics of a sample stream format to be used by applications that deliver some or all of the V3C unit stream as an ordered stream of bytes or bits within which the locations of V3C unit boundaries need to be identifiable from patterns in the data.

A CVS starts with a VPS (V3C Parameter Set), which allows interpreting each V3C unit whose vuh_v3c_parameter_set_id specifies the value of vps_v3c_parameter_set_id for the active V3C VPS. The VPS provides, among others, the following information about the V3C bitstream:

• Profile, tier, and level to which the bitstream is conformant

• Number of atlases that constitute the V3C bitstream

• Number of occupancy, geometry, and attribute video sub-bitstreams

• Number of maps for each geometry and attribute video component

• Mapping information from attribute index to attribute type

In contrast to a fixed number of camera views and only one atlas in V-PCC, in the MIV specification the number of cameras and the camera extrinsic and intrinsic information are not fixed and may change during the V3C bitstream. In addition, the camera information may be shared among all atlases within the V3C bitstream. (ISO/IEC 23090-12 is under preparation; stage at time of publication: ISO/IEC CD 23090-12:2020.) In order to support such flexibility, the ISO/IEC 23090-5 2nd edition introduces a concept of common atlas data. Common atlas data is carried in a dedicated V3C unit type equal to V3C_CAD, which contains a number of non-ACL NAL unit types, such as NAL_CASPS that carries the common atlas sequence parameter set syntax structure, and NAL_CAF_IDR and NAL_CAF_TRAIL that contain common atlas frames.

The Real-time Transfer Protocol (RTP) is intended for end-to-end, real-time transfer of streaming media and provides facilities for jitter compensation and detection of packet loss and out-of-order delivery. RTP allows data transfer to multiple destinations through IP multicast or to a specific destination through IP unicast. The majority of RTP implementations are built on the User Datagram Protocol (UDP). Other transport protocols may also be utilized. RTP is used together with other protocols such as H.323 and the Real Time Streaming Protocol (RTSP).

The RTP specification describes two protocols: RTP and RTCP. RTP is used for the transfer of multimedia data, and the RTCP is used to periodically send control information and QoS parameters.

RTP sessions may be initiated between client and server using a signalling protocol, such as H.323, the Session Initiation Protocol (SIP), or RTSP. These protocols may use the Session Description Protocol (RFC 8866) to specify the parameters for the sessions.

RTP is designed to carry a multitude of multimedia formats, which permits the development of new formats without revising the RTP standard. To this end, the information required by a specific application of the protocol is not included in the generic RTP header. For a class of applications (e.g., audio, video), an RTP profile may be defined. For a media format (e.g., a specific video coding format), an associated RTP payload format may be defined. Every instantiation of RTP in a particular application may require a profile and a payload format specification. The profile defines the codecs used to encode the payload data and their mapping to payload format codes in the protocol field Payload Type (PT) of the RTP header.

For example, the RTP profile for audio and video conferences with minimal control is defined in RFC 3551. The profile defines a set of static payload type assignments, and a dynamic mechanism for mapping between a payload format and a PT value using the Session Description Protocol (SDP). The latter mechanism is used for newer video codecs such as the RTP payload format for H.264 Video defined in RFC 6184 or the RTP Payload Format for High Efficiency Video Coding (HEVC) defined in RFC 7798.

An RTP session is established for each multimedia stream. Audio and video streams may use separate RTP sessions, enabling a receiver to selectively receive components of a particular stream. The RTP specification recommends an even port number for RTP and the next odd port number for the associated RTCP session. A single port can be used for RTP and RTCP in applications that multiplex the protocols.

Each RTP stream consists of RTP packets, which in turn consist of RTP header and payload parts.

RTP packets are created at the application layer and handed to the transport layer for delivery. Each unit of RTP media data created by an application begins with the RTP packet header.

The RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present. This is followed by the RTP payload, the format of which is determined by the particular class of application. The fields in the header are as follows:

• Version: (2 bits) Indicates the version of the protocol.

• P (Padding): (1 bit) Used to indicate if there are extra padding bytes at the end of the RTP packet.

• X (Extension): (1 bit) Indicates the presence of an extension header between the header and payload data. The extension header is application or profile specific.

• CC (CSRC count): (4 bits) Contains the number of CSRC identifiers that follow the SSRC.

• M (Marker): (1 bit) Signalling used at the application level in a profile-specific manner. If it is set, it means that the current data has some special relevance for the application.

• PT (Payload type): (7 bits) Indicates the format of the payload and thus determines its interpretation by the application.

• Sequence number: (16 bits) The sequence number is incremented for each RTP data packet sent and is to be used by the receiver to detect packet loss and to accommodate out-of-order delivery.

• Timestamp: (32 bits) Used by the receiver to play back the received samples at appropriate time and interval. When several media streams are present, the timestamps may be independent in each stream. The granularity of the timing is application specific. For example, video stream may use a 90 kHz clock. The clock granularity is one of the details that is specified in the RTP profile for an application.

• SSRC: (32 bits) Synchronization source identifier uniquely identifies the source of the stream. The synchronization sources within the same RTP session will be unique.

• CSRC: (32 bits each) Contributing source IDs enumerate contributing sources to a stream which has been generated from multiple sources.

• Header extension: (optional, presence indicated by Extension field) The first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension in 32-bit units, excluding the 32 bits of the extension header. The extension header data is shown in Figure 4.
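As an illustration of the fixed header layout listed above, the following Python sketch parses the 12-byte RTP header and, when present, the length of the extension header; it is a simplified example, not a complete RTP implementation:

import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header and locate the payload."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    hdr = {
        "version": b0 >> 6,             # 2 bits
        "padding": (b0 >> 5) & 0x1,     # 1 bit
        "extension": (b0 >> 4) & 0x1,   # 1 bit
        "csrc_count": b0 & 0x0F,        # 4 bits
        "marker": b1 >> 7,              # 1 bit
        "payload_type": b1 & 0x7F,      # 7 bits
        "sequence_number": seq,         # 16 bits
        "timestamp": timestamp,         # 32 bits
        "ssrc": ssrc,                   # 32 bits
    }
    offset = 12 + 4 * hdr["csrc_count"]  # skip the CSRC identifiers, if any
    if hdr["extension"]:
        profile_id, ext_words = struct.unpack("!HH", packet[offset:offset + 4])
        hdr["extension_profile"] = profile_id
        hdr["extension_words"] = ext_words       # length in 32-bit units
        offset += 4 + 4 * ext_words              # skip the extension data
    hdr["payload_offset"] = offset
    return hdr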

In this disclosure, the Session Description Protocol (SDP) is used as an example of a session specific file format. SDP is a format for describing multimedia communication sessions for the purposes of announcement and invitation. Its predominant use is in support of conversational and streaming media applications. SDP does not deliver any media streams itself, but is used between endpoints for negotiation of network metrics, media types, and other associated properties. The set of properties and parameters is called a session profile. SDP is extensible for the support of new media types and formats. The Session Description Protocol describes a session as a group of fields in a text-based format, one field per line. The form of each field is as follows:

<character>=<value><CR><LF>

where <character> is a single case-sensitive character and <value> is structured text in a format that depends on the character. Values may be UTF-8 encoded. Whitespace is not allowed immediately to either side of the equal sign.

Session descriptions consist of three sections: session, timing, and media descriptions. Each description may contain multiple timing and media descriptions. Names are only unique within the associated syntactic construct.

Fields appear in the order shown below; optional fields are marked with an asterisk:

v=  (protocol version number, currently only 0)
o=  (originator and session identifier: username, id, version number, network address)
s=  (session name: mandatory with at least one UTF-8-encoded character)
i=* (session title or short information)
u=* (URI of description)
e=* (zero or more email addresses with optional name of contacts)
p=* (zero or more phone numbers with optional name of contacts)
c=* (connection information -- not required if included in all media)
b=* (zero or more bandwidth information lines)
One or more time descriptions ("t=" and "r=" lines; see below)
z=* (time zone adjustments)
k=* (encryption key)
a=* (zero or more session attribute lines)
Zero or more media descriptions (each one starting with an "m=" line; see below)

Time description (mandatory):
t=  (time the session is active)
r=* (zero or more repeat times)

Media description (optional):
m=  (media name and transport address)
i=* (media title or information field)
c=* (connection information -- optional if included at session level)
b=* (zero or more bandwidth information lines)
k=* (encryption key)
a=* (zero or more media attribute lines -- overriding the session attribute lines)

Below is a sample session description from RFC 4566. This session is originated by the user “jdoe” at IPv4 address 10.47.16.5. Its name is “SDP Seminar” and extended session information (“A Seminar on the session description protocol”) is included along with a link for additional information and an email address to contact the responsible party, Jane Doe. This session is specified to last two hours using NTP timestamps, with a connection address (which indicates the address clients must connect to or - when a multicast address is provided, as it is here - subscribe to) specified as IPv4 224.2.17.12 with a TTL of 127. Recipients of this session description are instructed to only receive media. Two media descriptions are provided, both using RTP Audio Video Profile. The first is an audio stream on port 49170 using RTP/AVP payload type 0 (defined by RFC 3551 as PCMU), and the second is a video stream on port 51372 using RTP/AVP payload type 99 (defined as “dynamic”). Finally, an attribute is included which maps RTP/AVP payload type 99 to format h263-1998 with a 90 kHz clock rate. RTCP ports for the audio and video streams of 49171 and 51373, respectively, are implied.

v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
i=A Seminar on the session description protocol
u=http://www.example.com/seminars/sdp.pdf
e=j.doe@example.com (Jane Doe)
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000

SDP uses attributes to extend the core protocol. Attributes can appear within the Session or Media sections and are scoped accordingly as session-level or media-level. New attributes can be added to the standard through registration with IANA. A media description may contain any number of “a=” lines (attribute fields) that are media description specific. Session-level attributes convey additional information that applies to the session as a whole rather than to individual media descriptions.

Attributes are either properties or values:

a=<attribute-name>
a=<attribute-name>:<attribute-value>

Examples of attributes defined in RFC8866 are “rtpmap” and “fmtp”.

The “rtpmap” attribute maps from an RTP payload type number (as used in an "m=" line) to an encoding name denoting the payload format to be used. It also provides information on the clock rate and encoding parameters. Up to one "a=rtpmap:" attribute can be defined for each media format specified. For example:

m=audio 49230 RTP/AVP 96 97 98
a=rtpmap:96 L8/8000
a=rtpmap:97 L16/8000
a=rtpmap:98 L16/11025/2

In the example above, the media types are “audio/L8” and “audio/L16”.

Parameters added to an "a=rtpmap:" attribute may only be those required for a session directory to make the choice of appropriate media to participate in a session. Codec-specific parameters may be added in other attributes, for example, "fmtp".

"fmtp" attribute allows parameters that are specific to a particular format to be conveyed in a way that SDP does not have to understand them. The format can be one of the formats specified for the media. Format-specific parameters, semicolon separated, may be any set of parameters required to be conveyed by SDP and given unchanged to the media tool that will use this format. At most one instance of this attribute is allowed for each format. An example is: a=fmtp : 96 prof ile-level-id=42e016 ; max-mbps=l 08000 ; max- fs=3600

For example, RFC7798 defines the following parameters: sprop-vps, sprop-sps, sprop-pps, profile-space, profile-id, tier-flag, level-id, interop-constraints, profile-compatibility-indicator, sprop-sub-layer-id, recv-sub-layer-id, max-recv-level-id, tx-mode, max-lsr, max-lps, max-cpb, max-dpb, max-br, max-tr, max-tc, max-fps, sprop-max-don-diff, sprop-depack-buf-nalus, sprop-depack-buf-bytes, depack-buf-cap, sprop-segmentation-id, sprop-spatial-segmentation-idc, dec-parallel-cap, and include-dph.

The “group” and “mid” attributes defined in RFC 5888 allow grouping “m” lines in SDP for different purposes. Examples are lip synchronization and receiving a media flow consisting of several media streams on different transport addresses.

As an example, in a given session description, each “m” line is identified by a token, which is carried in a “mid” attribute below the “m” line. The session description carries session-level “group” attributes that group different “m” lines (identified by their tokens) using different group semantics. The semantics of a group describe the purpose for which the “m” lines are grouped. In the example below, the “group” line indicates that the “m” lines identified by tokens 1 and 2 (the audio and the video “m” lines, respectively) are grouped for the purpose of lip synchronization (LS).

v=0
o=Laura 289083124 289083124 IN IP4 one.example.com
c=IN IP4 192.0.2.1
t=0 0
a=group:LS 1 2
m=audio 30000 RTP/AVP 0
a=mid:1
m=video 30002 RTP/AVP 31
a=mid:2

RFC5888 defines two group semantics: Lip Synchronization (LS), as used in the example above, and Flow Identification (FID). RFC5583 defines another type, Decoding Dependency (DDP). RFC8843 defines another grouping type, BUNDLE, which among others is utilized when multiple types of media are sent in a single RTP session as described in RFC8860.

The “depend” attribute defined in RFC5583 allows signaling two types of decoding dependencies: layered and multi-description.

The following dependency-type values are defined in RFC5583:

• lay: Layered decoding dependency identifies the described media stream as one or more Media Partitions of a layered Media Bitstream. When “lay” is used, all media streams required for decoding the Operation Point must be identified by identification-tag and fmt-dependency following the “lay” string.

• mdc: Multi-descriptive decoding dependency signals that the described media stream is part of a set of an MDC Media Bitstream. By definition, at least N-out-of-M media streams of the group need to be available to form an Operation Point. The values of N and M depend on the properties of the Media Bitstream and are not signaled within this context. When “mdc” is used, all required media streams for the Operation Point must be identified by identification-tag and fmt-dependency following the “mdc” string.

The example below shows a session description with three media descriptions, all of type video and with layered decoding dependency (“lay”). Each of the media descriptions includes two possible media format descriptions with different encoding parameters, e.g., “packetization-mode” (not shown in the example), for the media subtypes “H264” and “H264-SVC” given by the “a=rtpmap:” line.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=LAYERED VIDEO SIGNALING Seminar
t=0 0
c=IN IP4 192.0.2.1/127
a=group:DDP L1 L2 L3
m=video 40000 RTP/AVP 96 97
b=AS:90
a=framerate:15
a=rtpmap:96 H264/90000
a=rtpmap:97 H264/90000
a=mid:L1
m=video 40002 RTP/AVP 98 99
b=AS:64
a=framerate:15
a=rtpmap:98 H264-SVC/90000
a=rtpmap:99 H264-SVC/90000
a=mid:L2
a=depend:98 lay L1:96,97; 99 lay L1:97
m=video 40004 RTP/AVP 100 101
b=AS:128
a=framerate:30
a=rtpmap:100 H264-SVC/90000
a=rtpmap:101 H264-SVC/90000
a=mid:L3
a=depend:100 lay L1:96,97; 101 lay L1:97 L2:99

As defined in RFC3550 and RFC3551, RTP was designed to support multimedia sessions, containing multiple types of media sent simultaneously, by using multiple transport-layer flows, i.e., RTP sessions. This approach, however, is not always beneficial and can

• increase delay to establish a complete session;

• increase state and resource consumption in the middleboxes;

• increase risk that a subset of the transport-layer flows will fail to be established.

Therefore, in some cases using fewer RTP sessions can reduce the risk of communication failure and can lead to improved reliability and performance. It might seem appropriate for RTP-based applications to send all their RTP streams bundled into one RTP session, running over a single transport-layer flow. However, this was initially prohibited by the RTP specifications RFC3550 and RFC3551, because the design of RTP makes certain assumptions that can be incompatible with sending multiple media types in a single RTP session.

RFC8860 updates RFC3550 and RFC3551 to allow sending an RTP session containing RTP streams with media from multiple media types such as audio, video, and text.

From a signalling perspective, it shall be

• ensured that any participant in the RTP session is aware that this is an RTP session with multiple media types;

• ensured that the payload types in use in the RTP session are using unique values, with no overlap between the media types;

• ensured that RTP session-level parameters, for example, the RTCP RR and RS bandwidth modifiers, RTP/AVPF trr-int parameter, transport protocol, RTCP extensions in use, and any security parameters, are consistent across the session; and

• ensured that RTP and RTCP functions that can be bound to a particular media type are reused where possible, rather than configuring multiple code points for the same thing.

When using SDP signalling, the BUNDLE extension of RFC8843 is used to signal RTP sessions containing multiple media types. The RTP and RTCP packets are then demultiplexed into the different RTP streams based on their SSRC, while the RTP payload type is used to select the correct media-decoding pathway for each RTP stream. In case not enough payload type values are available, the urn:ietf:params:rtp-hdrext:sdes:mid RTP header extension from RFC7941 can be used to associate RTP streams multiplexed on the same transport flow with their respective SDP media descriptions, by providing a media description identifier that matches the value of the SDP a=mid attribute defined in RFC5888.
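For illustration, the following is a minimal, non-normative Python sketch of the demultiplexing described above for a bundled RTP session: packets are separated into RTP streams by their SSRC, and the payload type then selects the media-decoding pathway. The RtpPacket type and the payload-type mapping are hypothetical and would in practice be derived from the SDP.

# Non-normative sketch: demultiplexing RTP streams of a bundled session.
# RtpPacket and payload_type_to_media are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RtpPacket:
    ssrc: int
    payload_type: int
    payload: bytes

# Derived from the SDP of the bundled session; payload type values must be
# unique across media types, as required above.
payload_type_to_media = {0: "audio", 31: "video"}

streams = {}  # ssrc -> list of payloads belonging to one RTP stream

def route(packet: RtpPacket) -> None:
    # Packets are first separated into RTP streams by SSRC ...
    streams.setdefault(packet.ssrc, []).append(packet.payload)
    # ... and the payload type selects the media-decoding pathway.
    media = payload_type_to_media.get(packet.payload_type, "unknown")
    print(f"SSRC {packet.ssrc:#010x}: {len(packet.payload)} bytes of {media}")

route(RtpPacket(ssrc=0x1234ABCD, payload_type=31, payload=b"\x00" * 10))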

The offer/answer model provides a mechanism by which two entities, i.e., a server and a client, can utilize SDP to negotiate a common understanding and setup of a multimedia session between them. In such a scenario one entity (e.g., a server) offers the other a description of the desired session (or possible options for a session) from its perspective, and the other participant answers with the desired session from its perspective. This offer/answer model is described in RFC3264 and can be used by other protocols, for example the Session Initiation Protocol (SIP) of RFC 3261.

The maximum dimensions of the 2D frame representation of a V3C component depend on the used video codec, as commercially deployed decoders may be constrained in terms of video resolution and frame rate. To circumvent these limitations, V3C allows splitting the projected patches into multiple 2D frame representations and corresponding associated metadata, thus creating multiple atlases. To avoid duplicating data, such as projection parameters, between multiple atlases, V3C defines a common atlas structure, which contains information that applies for all atlases of the presentation.

The prior technology introduces the possibility of streaming multiple V3C components in one RTP session, or of streaming each component separately in its own RTP session.

The drawback of the prior technology is the overhead of each session, especially in the case of streaming atlas and common atlas as separate sessions. The size of access units for atlas data (especially for common atlas data) is relatively small compared to the overhead of the syntax structures required for RTP streaming. The requirement to synchronize separate RTP sessions further complicates the streaming design as the number of RTP sessions increases, thus making the design more error prone.

Another drawback of the prior technology is that the V3C video components do not re-use existing RTP payload formats (e.g., for HEVC, for VVC, for AV1) that are tailored for them and do not utilize the whole potential of the existing infrastructure.

A balanced method to minimize overhead and exploit existing real-time video streaming specifications by encapsulating and signalling all V3C atlas components in one RTP session and the rest of V3C components in their own RTP sessions is missing.

Figures 5, 6, 7, and 8 illustrate examples of the prior technology for two types of V3C bitstream transmission over the RTP protocol.

In the architecture of Figure 5, a V3C bitstream is provided to a server. The server demultiplexes the V3C bitstream into a number of V3C sub-bitstreams, each representing one V3C component. Each V3C sub-bitstream is encapsulated into its own RTP payload format and sent over a dedicated RTP session to a client. Alongside the RTP sessions, the server creates an SDP file that describes each RTP session as well as provides the novel information that allows a client to identify each RTP session and map it to the appropriate V3C sub-bitstream. Using the information provided in the SDP and over the RTP sessions, a client is able to reconstruct the V3C bitstream and provide it to a V3C decoder/renderer.

The SDP can be provided in a declarative manner, where a client does not have any decision capability. Alternatively, in case a server has the capability to re-encode V3C sub-bitstreams, or the V3C sub-bitstreams are provided in a number of alternatives, the SDP may be used in offer/answer mode to allow the client to choose the most appropriate codecs.

The architecture presented in Figure 6 is similar to the architecture presented in Figure 5, but the client does not reconstruct the V3C bitstream; instead it sends separate V3C sub-bitstreams to the V3C decoder/renderer and signals at initialization the V3C unit header associated with a given V3C sub-bitstream. The client can also pass to the V3C decoder a V3C parameter set syntax element that does not have to be encapsulated in a V3C unit.

The architectures presented in Figure 7 and Figure 8 are similar to the architectures presented in Figure 5 and Figure 6, respectively, with the difference that the server creates only one RTP session that multiplexes all V3C sub-bitstreams, i.e., all V3C components. The client reconstructs the V3C bitstream (Figure 7), or passes the V3C sub-bitstreams together with the associated V3C unit headers (Figure 8) to the V3C decoder/renderer.

In such a prior solution, V3C specific parameters have been defined, for example:

v3c-unit-header=<value>
v3c-atlas-id=<value>

“v3c-unit-header” provides the V3C unit header bytes defined in ISO/IEC 23090-5. <value> contains the base16 [RFC 4648] (hexadecimal) representation of the 4 bytes of the V3C unit header.

Alternative encoding schemes may be provided for the <value> such as ASCII, decimal or base64 encoded strings.

“v3c-atlas-id” provides the value corresponding to vuh_atlas_id defined in ISO/IEC 23090-5. <value> contains the vuh_atlas_id value.

Also, an RTP header extension is defined that can carry bytes describing the V3C unit header defined in ISO/IEC 23090-5. A new identifier is defined to indicate that an RTP stream contains the header extension, as well as to describe how the header extension should be parsed. The new identifier is used with an extmap attribute at media level of the SDP:

urn:ietf:params:rtp-hdrext:v3c:vuh

The 8-bit ID is the local identifier, and the length field is as defined in RFC 5285. The 4 bytes of the RTP header extension contain the v3c_unit_header() structure as defined in ISO/IEC 23090-5.
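As an illustration, a non-normative Python sketch of how a receiver might interpret the base16 <value> of “v3c-unit-header” is given below. It only extracts the first fields of v3c_unit_header(); the assumed bit positions (a 5-bit vuh_unit_type, followed for atlas and video data units by a 4-bit vuh_v3c_parameter_set_id and a 6-bit vuh_atlas_id) should be verified against ISO/IEC 23090-5, and the function name is hypothetical.

# Non-normative sketch: interpreting the base16 <value> of "v3c-unit-header".
# The exact field layout is defined in ISO/IEC 23090-5; only the first fields
# are extracted here, and the bit positions are assumptions for illustration.

def parse_v3c_unit_header(hex_value: str) -> dict:
    raw = bytes.fromhex(hex_value)          # 4 bytes of v3c_unit_header()
    bits = int.from_bytes(raw, "big")
    vuh_unit_type = bits >> 27              # first 5 bits
    # Assumed next fields for atlas and video data units:
    # vuh_v3c_parameter_set_id (4 bits), vuh_atlas_id (6 bits).
    vuh_v3c_parameter_set_id = (bits >> 23) & 0x0F
    vuh_atlas_id = (bits >> 17) & 0x3F
    return {
        "vuh_unit_type": vuh_unit_type,
        "vuh_v3c_parameter_set_id": vuh_v3c_parameter_set_id,
        "vuh_atlas_id": vuh_atlas_id,
    }

print(parse_v3c_unit_header("10000000"))    # a hypothetical header value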

The present embodiments relate to a solution for multiplexing multiple atlases in one Real-time Transfer Protocol (RTP) session. In the solution, a Visual Volumetric Video-based Coding (V3C) bitstream with one common atlas component and at least one atlas component is obtained. The one common atlas component and the at least one atlas component are streamed in a single RTP session. Also, it is signaled to the receiver that the session contains the one common atlas component and the at least one atlas component. Information is signaled to the receiver by means of which the receiver is able to correctly extract the one common atlas component and the at least one atlas component from the RTP packets of the session, and to reconstruct the V3C bitstream comprising these components. In various embodiments, the multiplexing signaling information can be provided as a dedicated media level attribute or as a payload format parameter. The V3C unit header signaling is extended to carry multiple V3C unit headers.

Figure 9 illustrates an architecture where a V3C bitstream consisting of a common atlas and multiple atlases is obtained by a server 910. The server 910 demultiplexes the V3C bitstream into a number of V3C sub-bitstreams, each representing one V3C component. All V3C sub-bitstreams representing the common atlas and atlas components are multiplexed into a dedicated RTP payload format and sent over one RTP session to a client 920. As well as constructing the RTP sessions, the server 910 creates an SDP file that describes each RTP session. The SDP file provides the information that allows the client 920 to identify which of the RTP sessions contains RTP packets with the multiplexed common atlas and atlas V3C components. Using the information provided by the SDP file and in the RTP packets of the RTP session, the client 920 is able to reconstruct the V3C bitstream and provide it to a V3C decoder/renderer 930, as described in Figure 9.

Alternatively, the client 920 does not reconstruct the V3C bitstream but sends separate V3C sub-bitstreams to the V3C decoder/renderer 930, and signals at initialization the V3C unit header associated with a given V3C sub-bitstream. The client 920 can also pass to the V3C decoder 930 a V3C parameter set syntax element that does not have to be encapsulated in a V3C unit, as shown in Figure 10.

According to an embodiment, a new media level V3C specific attribute “v3c-atlas-mux” is defined as follows:

a=v3c-atlas-mux:<mux-type>,<atlas-id-bytes>,<atlas-size-bytes>,<nal-unit-size-bytes>,<common-atlas-id>

“v3c-atlas-mux” provides an indication that an RTP session consists of RTP packets that can contain one or more NAL units, and that may as well contain NAL units belonging to more than one atlas.

Parameter <mux-type> provides information on the type of multiplexing.

• When the value of <mux-type> is 0, it means that NAL units from the common atlas and one or more atlases are multiplexed in one RTP session and can also be multiplexed in one RTP packet payload, i.e., one packet contains data for the common atlas and one or more atlases.

• When the value of <mux-type> is 1, it means that NAL units from the common atlas and one or more atlases are multiplexed in one RTP session, but they are not multiplexed in one RTP packet payload.

• When the value of <mux-type> is 2, it means that NAL units from one common atlas and one atlas are multiplexed in one RTP session and can be multiplexed in one RTP packet payload.

• When the value of <mux-type> is 3, it means that NAL units from one common atlas and one atlas are multiplexed in one RTP session, but they are not multiplexed in one RTP packet payload.

Parameter <atlas-id-bytes> provides information on how many bytes are used to provide the atlas-id information in the multiplexed RTP packet payload.

Parameter <atlas-size-bytes> provides information on how many bytes are used to provide the size of the atlas data, i.e., the size of all NAL units and the bytes describing the NAL unit sizes, in the multiplexed RTP packet payload.

Parameter <nal-unit-size-bytes> provides information on how many bytes are used to provide the size of one NAL unit in the multiplexed RTP packet payload.

Parameter <common-atlas-id> provides information on what identifier will be used for the common atlas, as the common atlas does not have an atlas ID.

Alternative schemes may be provided for the <mux-type>, <atlas-id-bytes>, <atlas-size-bytes>, <nal-unit-size-bytes>, and <common-atlas-id> values, such as ASCII, decimal or base64 encoded strings.
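As an illustration, a receiving client might parse the attribute into a configuration as sketched below in Python. The attribute syntax follows the definition above; the type and helper names are hypothetical.

# Non-normative sketch: parsing the "v3c-atlas-mux" media level attribute.
# Field names mirror the parameters described above; helper names are hypothetical.

from dataclasses import dataclass

@dataclass
class V3cAtlasMuxConfig:
    mux_type: int             # 0..3, type of multiplexing
    atlas_id_bytes: int       # bytes used for the atlas-id field in the payload
    atlas_size_bytes: int     # bytes used for the size of the atlas data
    nal_unit_size_bytes: int  # bytes used for the size of one NAL unit
    common_atlas_id: int      # identifier used for the common atlas

def parse_v3c_atlas_mux(attribute_line: str) -> V3cAtlasMuxConfig:
    # e.g. "a=v3c-atlas-mux:2,2,2,2,0xFF"
    value = attribute_line.split(":", 1)[1]
    mux_type, aid, asize, nsize, common = [v.strip() for v in value.split(",")]
    return V3cAtlasMuxConfig(int(mux_type), int(aid), int(asize),
                             int(nsize), int(common, 0))

print(parse_v3c_atlas_mux("a=v3c-atlas-mux:2,2,2,2,0xFF"))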

In an alternative embodiment, parameter <atlas-size-bytes> can be replaced by <atlas-number-nal-units-bytes>, which provides the size of the field in the RTP payload that indicates the number of NAL units for the given atlas, where the atlas is indicated by the field in the RTP payload signaled through <atlas-id-bytes>.

In an alternative embodiment, parameter <atlas-size-bytes> can be removed from the parameters of v3c-atlas-mux, and the field in the RTP payload indicated by <atlas-id-bytes> can be used instead to signal the atlas-id for every NAL unit separately.

In an alternative embodiment, v3c-atlas-mux may be defined as a new V3C RTP payload format parameter that can be exposed through the related attribute accordingly:

v3c-atlas-mux=<mux-type>,<atlas-id-bytes>,<atlas-size-bytes>,<nal-unit-size-bytes>,<common-atlas-id>

The semantics of the v3c-atlas-mux components mux-type, atlas-id-bytes, atlas-size-bytes, nal-unit-size-bytes, and common-atlas-id are described in the previous embodiment.

According to an alternative embodiment, the v3c-atlas-mux components mux-type, atlas-id-bytes, atlas-size-bytes, nal-unit-size-bytes, and common-atlas-id may be provided individually as separate V3C RTP payload format parameters. The semantics of the v3c-atlas-mux components have been described earlier.

In an embodiment, the “v3c-unit-header” V3C specific payload format parameter is extended to enable storage of more than one V3C unit header:

v3c-unit-header=<value>,<value>,<value>

“v3c-unit-header” provides the V3C unit headers’ bytes defined in ISO/IEC 23090-5. <value> contains the base16 (hexadecimal) representation of the 4 bytes of a V3C unit header. Each value is comma-separated.

Alternative encoding schemes may be provided for the <value> such as ASCII, decimal or base64 encoded strings.

According to an alternative embodiment, the V3C specific attribute “v3c-unit-header” is extended to provide more than one V3C unit header:

a=v3c-unit-header=<value>,<value>,<value>

“v3c-unit-header” provides the V3C unit headers’ bytes defined in ISO/IEC 23090-5. <value> contains the base16 (hexadecimal) representation of the 4 bytes of a V3C unit header. Each value is comma-separated.

Alternative encoding schemes may be provided for the <value> such as ASCII, decimal or base64 encoded strings.
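As an illustration, the comma-separated <value> list of the extended parameter or attribute could be split into individual 4-byte unit headers as sketched below in Python; the helper name is hypothetical.

# Non-normative sketch: handling multiple comma-separated V3C unit headers
# carried by the extended "v3c-unit-header" parameter or attribute.

def split_v3c_unit_headers(value: str) -> list[bytes]:
    # e.g. "08000000, 30000000" -> two 4-byte unit headers
    return [bytes.fromhex(v.strip()) for v in value.split(",")]

headers = split_v3c_unit_headers("08000000, 30000000")
print([h.hex() for h in headers])   # one 4-byte header per atlas component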

An example SDP according to an embodiment is shown below. A common atlas and an atlas are interleaved in one RTP session (type 2), and 2 bytes may be used for signalling the atlas-id, the size of the atlas data and the size of each NAL unit, while the common atlas identifier may be set to 0xFF.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.1/127
a=group:V3C 1 2 3 4
v3c-parameter-set=AF6F00939921878;
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 v3c-unit-header=10000000; // occupancy
a=mid:1
m=video 40002 RTP/AVP 97
a=rtpmap:97 H264/90000
a=fmtp:96 v3c-unit-header=18000000; // geometry
a=mid:2
m=video 40004 RTP/AVP 98
a=rtpmap:98 H264/90000
a=fmtp:96 v3c-unit-header=14180000; // attribute texture
a=mid:3
a=rtpmap:100 ATLAS/90000
a=v3cmap:100 v3c-unit-header=08000000, 30000000
a=v3c-atlas-mux 2, 2, 2, 2, 0xFF
a=mid:4

An example of an RTP packet payload is shown below. In this example, 2 bytes are used to signal the atlas-id, the size of the atlas data, as well as the size of each NAL unit.

[Bit-layout diagram of the RTP packet payload: the RTP payload header (NUT=56) is followed by the v3c-atlas-mux payload fields.]
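As an illustration, assuming the 2-byte field sizes of the example above, the payload body following the RTP payload header could be assembled and parsed roughly as sketched below in Python. This is a sketch under those assumptions, not a normative payload definition; the RTP payload header itself is not handled, and the function names are hypothetical.

# Non-normative sketch: multiplexed RTP packet payload body for atlas data,
# assuming 2-byte fields for atlas-id, atlas data size, and NAL unit size,
# as in the example above. The RTP payload header (NUT=56) is not shown.

import struct

def pack_atlas_payload(atlas_units: dict[int, list[bytes]]) -> bytes:
    out = bytearray()
    for atlas_id, nal_units in atlas_units.items():
        body = bytearray()
        for nal in nal_units:
            body += struct.pack(">H", len(nal)) + nal   # nal-unit-size + NAL unit
        # atlas-id, then size of all NAL units including their size fields
        out += struct.pack(">HH", atlas_id, len(body)) + body
    return bytes(out)

def unpack_atlas_payload(payload: bytes) -> dict[int, list[bytes]]:
    units, offset = {}, 0
    while offset < len(payload):
        atlas_id, atlas_size = struct.unpack_from(">HH", payload, offset)
        offset += 4
        end, nal_units = offset + atlas_size, []
        while offset < end:
            (nal_size,) = struct.unpack_from(">H", payload, offset)
            offset += 2
            nal_units.append(payload[offset:offset + nal_size])
            offset += nal_size
        units[atlas_id] = nal_units
        offset = end
    return units

# The common atlas uses the identifier 0xFF, as signalled by <common-atlas-id>.
packed = pack_atlas_payload({0xFF: [b"\x01\x02"], 0: [b"\x03\x04\x05"]})
print(unpack_atlas_payload(packed))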

In the above, a method for multiplexing multiple atlases in one RTP session has been discussed. The method according to an embodiment comprises

• obtaining a V3C bitstream with one common atlas and at least one atlas component,

• streaming the one common atlas component and the at least one atlas component in a single RTP session,

• signaling to a receiver that the session contains the one common atlas component and the at least one atlas component,

• signaling to the receiver information allowing the receiver to correctly extract the one common atlas component and the at least one atlas component from the RTP packets of the session, and

• reconstructing the V3C bitstream consisting of the one common atlas component and the at least one atlas component from the RTP packets of the session.

The method for encoding according to an embodiment is shown in Figure 11. The method generally comprises receiving 1105 a bitstream representing coded volumetric video, the bitstream comprising atlas information; demultiplexing 1110 the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; encapsulating 1115 sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; sending 1120 the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and providing 1125 to the client information allowing the client to identify an RTP session containing the atlas components. Each of the steps can be implemented by a respective module of a computer system.
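As an illustration of the grouping performed in steps 1110 and 1120, a minimal, non-normative Python sketch is given below; the component names and the dictionary-based session representation are hypothetical stand-ins for a real V3C demultiplexer and RTP stack.

# Non-normative sketch: grouping V3C sub-bitstreams into RTP sessions so that
# all atlas components (including the common atlas) share one session and every
# other component gets its own session. Names are illustrative assumptions.

def plan_rtp_sessions(components: dict[str, bytes]) -> list[dict]:
    atlas_names = {"common_atlas", "atlas_0", "atlas_1"}
    sessions = [{
        "mid": 1,
        "contents": {n: components[n] for n in components if n in atlas_names},
        "is_atlas_session": True,   # identified to the client, e.g. via the SDP
    }]
    others = sorted(n for n in components if n not in atlas_names)
    for mid, name in enumerate(others, 2):
        sessions.append({"mid": mid, "contents": {name: components[name]},
                         "is_atlas_session": False})
    return sessions

demo = {"common_atlas": b"CAD", "atlas_0": b"AD0", "occupancy": b"OVD",
        "geometry": b"GVD", "attribute": b"AVD"}
for s in plan_rtp_sessions(demo):
    print(s["mid"], list(s["contents"]), "atlas session" if s["is_atlas_session"] else "")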

An apparatus according to an embodiment comprises means for receiving a bitstream representing coded volumetric video, the bitstream comprising atlas information; means for demultiplexing the bitstream into a number of sub-bitstreams, each sub-bitstream representing one volumetric video component; means for encapsulating sub-bitstreams representing atlas components to a Real-time Transfer Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams for atlas components over one RTP session to a client; and means for providing to the client information allowing the client to identify an RTP session containing the atlas components. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 11 according to various embodiments.

The method for decoding according to an embodiment is shown in Figure 12. The method generally comprises receiving 1205 a plurality of sub-bitstreams over a plurality of RTP sessions; receiving 1210 information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; decapsulating 1215 sub-bitstreams representing atlas components from the identified RTP session; and reconstructing 1220 a bitstream representing a volumetric video from the sub-bitstreams. Each of the steps can be implemented by a respective module of a computer system.

An apparatus according to an embodiment comprises means for receiving a plurality of sub-bitstreams over a plurality of RTP sessions; means for receiving information by which an RTP session containing atlas components is identified from the plurality of RTP sessions; means for decapsulating sub-bitstreams representing atlas components from the identified RTP session; and means for reconstructing a bitstream representing a volumetric video from the sub-bitstreams. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 12 according to various embodiments.

An example of an apparatus is disclosed with reference to Figure 13. Figure 13 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an electronic device 50, which may incorporate a codec. In some embodiments the electronic device may comprise an encoder or a decoder. The electronic device 50 may for example be a mobile terminal or a user equipment of a wireless communication system or a camera device. The electronic device 50 may be also comprised at a local or a remote server or a graphics processing unit of a computer. The device may be also comprised as part of a head-mounted display device. The apparatus 50 may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera 42 capable of recording or capturing images and/or video. The camera 42 may be a multi-lens camera system having at least two camera sensors. The camera is capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video and/or image data for processing from another device prior to transmission and/or storage.

The apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50. The apparatus or the controller 56 may comprise one or more processors or processor circuitry and be connected to memory 58 which may store data in the form of image, video and/or audio data, and/or may also store instructions for implementation on the controller 56 or to be executed by the processors or the processor circuitry. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of image, video and/or audio data or assisting in coding and decoding carried out by the controller.

The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC (Universal Integrated Circuit Card) and UICC reader, for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network. The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals, for example for communication with a cellular communications network, a wireless communications system, or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es). The apparatus may comprise one or more wired interfaces configured to transmit and/or receive data over a wired connection, for example an electrical cable or an optical fiber connection.

The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with others. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined. Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.