
Title:
DYNAMIC RE-LIGHTING OF VOLUMETRIC VIDEO
Document Type and Number:
WIPO Patent Application WO/2022/189702
Kind Code:
A2
Abstract:
An apparatus comprising means for obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; means for extracting lighting information from the obtained scene; means for processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and means for encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

Inventors:
ILOLA LAURI ALEKSI (DE)
KONDRAD LUKASZ (DE)
SCHWARZ SEBASTIAN (DE)
BACHHUBER CHRISTOPH (DE)
Application Number:
PCT/FI2022/050143
Publication Date:
September 15, 2022
Filing Date:
March 08, 2022
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
G06T15/50; H04N13/111; H04N13/122; H04N13/161; H04N13/178
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:
CLAIMS

What is claimed is:

1. An apparatus comprising: means for obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; means for extracting lighting information from the obtained scene; means for processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and means for encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

2. The apparatus of claim 1, wherein the lighting information is in the form of explicit light sources comprising point lights or ambient light, or is provided as image based lighting.

3. The apparatus of any one of claims 1 to 2, further comprising: means for extracting geometry data and attribute data from the obtained scene, wherein the geometry data is three-dimensional information, and the attribute data is used to describe rendering details of the geometry data; means for processing the extracted geometry data and attribute data based on visual volumetric video-based compression or another format for compression of volumetric video information; means for encoding the scene with the processed geometry data and attribute data from the obtained scene; and means for storing the geometry data and attribute data along with the pre-processed lighting information in a file format or visual volumetric video-based coding bitstream.

4. The apparatus of any one of claims 1 to 3, wherein the at least one pre-processed lighting map is an environment map that captures the scene with lighting information from a center of the scene.

5. The apparatus of claim 4, further comprising: means for mapping the environment map to a plurality of patches, the plurality of patches respectively representing a cube face; and means for transmitting the plurality of patches as a separate lighting video component or as a lighting attribute identified using the signaled lighting information.

6. The apparatus of claim 5, wherein a patch type describes the lighting video component, and the patch type provides mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components.

7. The apparatus of any one of claims 1 to 6, wherein the at least one pre-processed lighting map is calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map.

8. The apparatus of any one of claims 1 to 7, further comprising: means for transmitting the at least one pre-processed lighting map as one or more patches together with attribute texture.

9. The apparatus of any one of claims 1 to 8, wherein the at least one pre-processed lighting map represents at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component.

10. The apparatus of claim 9, further comprising: means for providing sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre-processed lighting map or from a plurality of patches in real-time during rendering.

11. The apparatus of any one of claims 1 to 10, wherein the at least one lighting parameter provides information concerning how the at least one pre-processed lighting map is generated and used by a renderer.

12. The apparatus of any one of claims 1 to 11, wherein the at least one lighting parameter comprises at least one of a lighting source type, position, color/strength, or orientation.

13. The apparatus of any one of claims 1 to 12, wherein the at least one lighting parameter is signaled using either a supplemental enhancement information message, or as a video usability information ambient message.

14. The apparatus of any one of claims 1 to 13, wherein the at least one lighting parameter is encoded as a sample of a metadata track using a sample entry.

15. The apparatus of any one of claims 1 to 14, further comprising: means for encoding the at least one pre-processed lighting map as a video sequence; means for transmitting the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; and wherein the at least one pre-processed lighting map is interpreted using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set.

16. The apparatus of any one of claims 1 to 15, further comprising: means for signaling information related to the at least one pre-processed lighting map using an extension to a patch data unit.

17. The apparatus of any one of claims 1 to 16, further comprising: means for signaling information related to the at least one pre-processed lighting map using an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled using either attribute texture patch data, or using one or more lighting patches.

18. The apparatus of any one of claims 1 to 17, wherein: the at least one pre-processed lighting map is encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter is encapsulated with the file format in a scheme information box of a restricted scheme information box.

19. The apparatus of any one of claims 1 to 18, further comprising: means for providing a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type.

20. The apparatus of any one of claims 1 to 19, wherein the lighting information is extracted using at least one visual volumetric video-based coding construct or at least one file format level method.

21. The apparatus of any one of claims 1 to 20, further comprising: means for determining at least one region of the scene of three-dimensional information, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; means for coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for signaling non-lambertian characteristics of the scene, the signaling comprising the at least one pre-processed lighting map.

22. The apparatus of claim 21, wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

23. The apparatus of claim 22, wherein the at least one non-lambertian surface comprises a specular surface.

24. The apparatus of any one of claims 22 to 23, wherein the at least one non-lambertian surface comprises a transparent object.

25. The apparatus of any one of claims 21 to 24, further comprising: means for coding or decoding at least one heterogeneous object-specific parameter at a bitstream level.

26. The apparatus of claim 25, wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

27. The apparatus of any one of claims 25 to 26, wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

28. The apparatus of any one of claims 21 to 27, further comprising: means for capturing the at least one region of the scene of three-dimensional information, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

29. The apparatus of claim 28, wherein the means for capturing comprises at least one camera.

30. The apparatus of any one of claims 21 to 29, where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

31. An apparatus comprising: means for receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map or/and at least one lighting parameter associated with the scene; and means for rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

32. The apparatus of claim 31, wherein the at least one pre-processed lighting map or/and the at least one lighting parameter associated with the scene is signaled using at least one visual volumetric video-based coding construct or at least one file format level method.

33. The apparatus of any one of claims 31 to 32, wherein the at least one pre-processed lighting map or/and the at least one lighting parameter associated with the scene is utilized to render the scene.

34. The apparatus of any one of claims 31 to 33, wherein the at least one pre-processed lighting map is an environment map that captures the scene with lighting information from a center of the scene.

35. The apparatus of claim 34, further comprising: means for receiving a plurality of patches as a separate lighting video component or as a lighting attribute identified with the signaled lighting information; wherein the environment map has been mapped to the plurality of patches, the plurality of patches respectively representing a cube face.

36. The apparatus of claim 35, wherein a patch type describes the lighting video component, and the patch type provides mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components.

37. The apparatus of any one of claims 31 to 36, wherein the at least one pre-processed lighting map is calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map.

38. The apparatus of any one of claims 31 to 37, further comprising: means for receiving the at least one pre-processed lighting map as one or more patches together with attribute texture.

39. The apparatus of any one of claims 31 to 38, wherein the at least one pre-processed lighting map represents at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component.

40. The apparatus of claim 39, further comprising: means for receiving sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre-processed lighting map or from a plurality of patches in real-time during rendering.

41. The apparatus of any one of claims 31 to 40, wherein the at least one lighting parameter provides information concerning how the at least one pre-processed lighting map is generated and used by a renderer.

42. The apparatus of any one of claims 31 to 41, wherein the at least one lighting parameter comprises at least one of a lighting source type, position, color/strength, or orientation.

43. The apparatus of any one of claims 31 to 42, wherein the at least one lighting parameter is signaled using either a supplemental enhancement information message, or as a video usability information ambient message.

44. The apparatus of any one of claims 31 to 43, wherein the at least one lighting parameter is encoded as a sample of a metadata track using a sample entry.

45. The apparatus of any one of claims 31 to 44, further comprising: means for receiving the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; wherein the at least one pre-processed lighting map is encoded as a video sequence; and means for interpreting the at least one pre-processed lighting map using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set.

46. The apparatus of any one of claims 41 to 45, further comprising: means for receiving information related to the at least one pre-processed lighting map signaled using an extension to a patch data unit.

47. The apparatus of any one of claims 31 to 46, further comprising: means for receiving information related to the at least one pre-processed lighting map through an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled with either attribute texture patch data, or with one or more lighting patches.

48. The apparatus of any one of claims 31 to 47, wherein: the at least one pre-processed lighting map is encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter is encapsulated with the file format in a scheme information box of a restricted scheme information box.

49. The apparatus of any one of claims 31 to 48, further comprising: means for decoding a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type.

50. The apparatus of any one of claims 31 to 49, wherein the scene is a three-dimensional scene, and the geometry is three- dimensional information and attribute information is used to describe rendering details of the geometry.

51. The apparatus of any one of claims 31 to 50, wherein the lighting information is in the form of explicit light sources comprising point lights or ambient light, or is provided as image based lighting.

52. The apparatus of any one of claims 31 to 51, further comprising: means for decoding at least one region of the encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; means for decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one pre-processed lighting map.

53. The apparatus of claim 52, wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

54. The apparatus of claim 53, wherein the at least one non-lambertian surface comprises a specular surface.

55. The apparatus of any one of claims 53 to 54, wherein the at least one non-lambertian surface comprises a transparent object.

56. The apparatus of any one of claims 52 to 55, further comprising: means for decoding at least one heterogeneous object-specific parameter at a bitstream level.

57. The apparatus of claim 56, wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

58. The apparatus of any one of claims 56 to 57, wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

59. The apparatus of any one of claims 52 to 58, further comprising: means for capturing the at least one region of the scene, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

60. The apparatus of claim 59, wherein the means for capturing comprises at least one camera.

61. The apparatus of any one of claims 52 to 60, where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

62. An apparatus comprising: means for determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; means for coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

63. The apparatus of claim 62, wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

64. The apparatus of claim 63, wherein the at least one non-lambertian surface comprises a specular surface.

65. The apparatus of any one of claims 63 to 64, wherein the at least one non-lambertian surface comprises a transparent object.

66. The apparatus of any one of claims 62 to 65, further comprising: means for coding or decoding at least one heterogeneous object-specific parameter at a bitstream level.

67. The apparatus of claim 66, wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

68. The apparatus of any one of claims 66 to 67, wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

69. The apparatus of any one of claims 62 to 68, further comprising: means for capturing the at least one region of the scene of three-dimensional content, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

70. The apparatus of claim 69, wherein the means for capturing comprises at least one camera.

71. The apparatus of any one of claims 62 to 70, where the lighting map provides overall lighting information for a plurality of objects within the scene.

72. An apparatus comprising: means for decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; means for decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

73. The apparatus of claim 72, wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

74. The apparatus of claim 73, wherein the at least one non-lambertian surface comprises a specular surface.

75. The apparatus of any one of claims 73 to 74, wherein the at least one non-lambertian surface comprises a transparent object.

76. The apparatus of any one of claims 72 to 75, further comprising: means for decoding at least one heterogeneous object-specific parameter at a bitstream level.

77. The apparatus of claim 76, wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

78. The apparatus of any one of claims 76 to 77, wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

79. The apparatus of any one of claims 72 to 78, further comprising: means for capturing the at least one region of the scene, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

80. The apparatus of claim 79, wherein the means for capturing comprises at least one camera.

81. The apparatus of any one of claims 72 to 80, where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

82. A method comprising: obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; extracting lighting information from the obtained scene; processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

83. A method comprising: receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map or/and at least one lighting parameter associated with the scene; and rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

84. A method comprising: determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

85. A method comprising: decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

Description:
Dynamic Re-Lighting Of Volumetric Video

TECHNICAL FIELD

[0001] The examples and non-limiting embodiments relate generally to volumetric video coding, and more particularly, to dynamic re-lighting of volumetric video.

BACKGROUND

[0002] It is known to perform video coding and decoding.

SUMMARY

[0003] In accordance with an aspect, an apparatus includes means for obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; means for extracting lighting information from the obtained scene; means for processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and means for encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[0004] In accordance with an aspect, an apparatus includes means for receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map or/and at least one lighting parameter associated with the scene; and means for rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[0005] In accordance with an aspect, an apparatus includes means for determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; means for coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

[0006] In accordance with an aspect, an apparatus includes means for decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; means for decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

[0007] In accordance with an aspect, a method includes obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; extracting lighting information from the obtained scene; processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[0008] In accordance with an aspect, a method includes receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map or/and at least one lighting parameter associated with the scene; and rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[0009] In accordance with an aspect, a method includes determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

[0010] In accordance with an aspect, a method includes decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:

[0012] FIG. 1 is an encoder side apparatus configured to implement the examples described herein.

[0013] FIG. 2 is a decoder side apparatus configured to implement the examples described herein.

[0014] FIG. 3 shows six patches in a frame, each representing one cube map face.

[0015] FIG. 4 shows nine patches in a frame, six patches representing common information and three patches representing a specific object.

[0016] FIG. 5 shows six patches containing interleaved texture and lighting information.

[0017] FIG. 6 is an example diagram of how lighting signaling can be utilized on the rendering side.

[0018] FIG. 7 is an ISOBMFF structure containing a V3C bitstream with lighting information as a new video component.

[0019] FIG. 8 is an example apparatus to implement dynamic re-lighting of volumetric video, based on the examples described herein.

[0020] FIG. 9 is an example method to implement dynamic re-lighting of volumetric video, based on the examples described herein.

[0021] FIG. 10 is another example method to implement dynamic re-lighting of volumetric video, based on the examples described herein.

[0022] FIG. 11 is an example method to code a scene with non-Lambertian characteristics.

[0023] FIG. 12 is an example method to decode a scene with non-Lambertian characteristics.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0024] The examples described herein relate to defining and signaling constructs for enabling dynamic re-lighting of volumetric data. In particular, support and signaling in ISO/IEC 23090-5 Visual Volumetric Video-based Coding (V3C) and related systems specification ISO/IEC 23090-10 Carriage of Visual Volumetric Video-based Coding Data (Carriage of V3C) are described.

[0025] Volumetric video may be considered as information which represents three-dimensional information over a period of time. Visual volumetric video-based coding (V3C) provides a mechanism for encoding volumetric video. Visual volumetric frames are coded by converting the three-dimensional information into a collection of 2D images and associated data, by projecting a three-dimensional volume onto a two-dimensional view. The converted 2D images are coded using widely available video and image coding specifications, and the associated data, i.e., atlas data, is coded according to ISO/IEC FDIS 23090-5.

[0026] In general, the compression of volumetric video is achieved by converting the 3D volumetric information into a collection of 2D frames, for which traditional 2D video coding technologies may be applied, and associated data (so called atlas data). The 3D scene is segmented into regions according to heuristics based on, for example, spatial proximity and/or similarity of the data in the region. The segmented regions are projected into 2D patches, where each patch may contain depth, occupancy, texture or other attribute channels. The depth channel contains information based on which the 3D position and shape of the surface voxels can be determined. The patches are further packed into video frames that can be compressed and streamed as a regular 2D video.

[0027] The associated metadata, i.e., atlas data, contains information about the patch projection (in 3D) and position of patches in video frames (2D). Client or server-side view synthesis is utilized to reconstruct novel 2D views from patches and associated atlas data. Video encoded frames describing the visual and geometric information of the compressed 3D scene may be streamed over a network using conventional video distribution technologies such as DASH. Atlas data may be streamed as an additional timed data track.
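
[0027a] As a rough illustration of how a renderer can use such atlas data, the following Python sketch lifts the pixels of one depth patch back into 3D from simplified patch metadata. The field names (u0, v0, x0, y0, z0, width, height) and the orthographic inverse projection along a single fixed axis are illustrative assumptions for this sketch; actual V3C patch data units carry additional projection, orientation and level-of-detail parameters.

def reconstruct_patch_points(depth_frame, patch):
    # Sketch only: lift the pixels of one patch back to 3D using simplified
    # patch metadata. The dictionary keys below are illustrative; real V3C
    # patch data units carry more projection and orientation parameters.
    points = []
    for dv in range(patch["height"]):
        for du in range(patch["width"]):
            depth = depth_frame[patch["v0"] + dv][patch["u0"] + du]
            if depth is None:  # unoccupied pixel, skipped via the occupancy map
                continue
            # Orthographic inverse projection, here assumed along the z axis.
            points.append((patch["x0"] + du, patch["y0"] + dv, patch["z0"] + depth))
    return points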

[0028] V3C (Visual Volumetric Video-based Coding)

[0029] A V3C bitstream consists of one or more CVSs. A CVS starts with a VPS, included in at least one V3C unit or provided through external means, and contains one or more V3C units that can be factored into V3C composition units.

[0030] A CVS consists of multiple V3C sub-bitstreams, with each V3C sub-bitstream associated with a V3C component. A V3C component is an atlas, occupancy, geometry, or attribute of a particular type that is associated with a V3C volumetric content representation.

[0031] At the highest level, V3C data is carried in V3C units, which consist of header and payload pairs. The unit header identifies the type of payload, whereas the payloads carry 2D video bitstreams or atlas data bitstreams depending on the type of payload.
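
[0031a] As an illustration of this header/payload split, the sketch below separates a single V3C unit into its 4-byte v3c_unit_header() and its payload, assuming the unit size is already known from the enclosing sample stream or file format and that the first five header bits carry vuh_unit_type; the mapping of type values to names is abbreviated and should be checked against ISO/IEC 23090-5.

# Abbreviated mapping of vuh_unit_type values to component names (see ISO/IEC 23090-5).
V3C_UNIT_TYPES = {0: "V3C_VPS", 1: "V3C_AD", 2: "V3C_OVD", 3: "V3C_GVD", 4: "V3C_AVD"}

def split_v3c_unit(unit: bytes):
    # The v3c_unit_header() is 32 bits; the remainder of the unit is the payload,
    # i.e. a 2D video sub-bitstream or an atlas sub-bitstream depending on the type.
    header = unit[:4]
    vuh_unit_type = header[0] >> 3  # first 5 bits of the header
    payload = unit[4:]
    return V3C_UNIT_TYPES.get(vuh_unit_type, "reserved/other"), payload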

[0032] V3C: Atlas data

[0033] Atlas data is contained in atlas_sub_bitstream(), which may contain a sequence of NAL units including header and payload data. nal_unit_header() is used to define how to process the payload data. NumBytesInNalUnit specifies the size of the NAL unit in bytes. This value is required for decoding of the NAL unit. Some form of demarcation of NAL unit boundaries is necessary to enable inference of NumBytesInNalUnit. One such demarcation method is specified in Annex D of ISO/IEC 23090-5 for the sample stream format.
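
[0033a] A minimal sketch of this sample stream demarcation is given below: each NAL unit is assumed to be preceded by a big-endian size field whose width in bytes is signaled in the sample stream header (the exact precision field is not reproduced here), which allows NumBytesInNalUnit to be inferred.

def read_sample_stream_nal_units(data: bytes, size_precision_bytes: int):
    # Sketch of Annex D style demarcation: every NAL unit is preceded by a
    # size field of size_precision_bytes bytes, from which NumBytesInNalUnit
    # is inferred before handing the payload to the NAL unit parser.
    pos = 0
    while pos + size_precision_bytes <= len(data):
        num_bytes_in_nal_unit = int.from_bytes(data[pos:pos + size_precision_bytes], "big")
        pos += size_precision_bytes
        yield data[pos:pos + num_bytes_in_nal_unit]
        pos += num_bytes_in_nal_unit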

[0034] An atlas coding layer (ACL) is specified to efficiently represent the content of the atlas data. The NAL is specified to format that data and provide header information in a manner appropriate for conveyance on a variety of communication channels or storage media. All data are contained in NAL units, each of which contains an integer number of bytes. A NAL unit specifies a generic format for use in both packet-oriented and bitstream systems. The format of NAL units for both packet-oriented transport and sample streams is identical except that in the sample stream format specified in Annex D of ISO/IEC 23090-5 each NAL unit can be preceded by an additional element that specifies the size of the NAL unit.

[0035] In the nal_unit_header() syntax nal_unit_type specifies the type of the RBSP data structure contained in the NAL unit as specified in Table 4 of ISO/IEC 23090-5. nal_layer_id specifies the identifier of the layer to which an ACL NAL unit belongs or the identifier of a layer to which a non-ACL NAL unit applies. Decoders conforming to a profile specified in Annex A of the current version of ISO/IEC 23090-5 are to ignore (i.e., remove from the bitstream and discard) all NAL units with values of nal_layer_id not equal to 0.

[0036] ISO/IEC 23090-5 subclauses 8.3.8/8.4.8 specify that such an SEI message consists of the variables specifying the type payloadType and size payloadSize of the SEI message payload. SEI message payloads are specified in Annex F of ISO/IEC 23090-5. The derived SEI message payload size payloadSize is specified in bytes and is to be equal to the number of bytes in the SEI message payload.
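
[0036a] The byte-wise derivation of payloadType and payloadSize follows the familiar pattern in which 0xFF bytes extend the value by 255 each; the sketch below assumes that convention, which is how the sei_message() syntax of clause 8.3.8 is commonly read.

def parse_sei_message(buf: bytes, pos: int = 0):
    # Sketch: accumulate payloadType and payloadSize byte by byte, where each
    # 0xFF byte adds 255 and the first non-0xFF byte terminates the value.
    payload_type = 0
    while buf[pos] == 0xFF:
        payload_type += 255
        pos += 1
    payload_type += buf[pos]
    pos += 1

    payload_size = 0
    while buf[pos] == 0xFF:
        payload_size += 255
        pos += 1
    payload_size += buf[pos]
    pos += 1

    payload = buf[pos:pos + payload_size]  # payloadSize is expressed in bytes
    return payload_type, payload_size, payload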

[0037] Table 1 shows the SEI message syntax as specified in clause 8.3.8 of ISO/IEC 23090-5.

Table 1

[0038] Non-essential SEI messages are not required by the decoding process. Conforming decoders are not required to process this information for output order conformance.

[0039] Specification for presence of non-essential SEI messages is also satisfied when those messages (or some subset of them) are conveyed to decoders (or to the hypothetical reference decoder (HRD)) by other means not specified in ISO/IEC 23090-5. When present in the bitstream, non-essential SEI messages obey the syntax and semantics as specified in Annex F of ISO/IEC 23090-5. When the content of a non-essential SEI message is conveyed for the application by some means other than presence within the bitstream, the representation of the content of the SEI message is not required to use the same syntax specified in Annex F of ISO/IEC 23090-5. For the purpose of counting bits, only the appropriate bits that are actually present in the bitstream are counted.

[0040] Essential SEI messages are an integral part of the V3C bitstream and should not be removed from the bitstream. The essential SEI messages are categorized into two types, Type-A essential SEI message and Type-B essential SEI messages.

[0041] Type-A essential SEI messages contain information required to check bitstream conformance and for output timing decoder conformance. Every V3C decoder conforming to point A should not discard any relevant Type-A essential SEI messages and is to consider them for bitstream conformance and for output timing decoder conformance.

[0042] Regarding Type-B essential SEI messages, V3C decoders that wish to conform to a particular reconstruction profile should not discard any relevant Type-B essential SEI messages and are to consider them for 3D point cloud reconstruction and conformance purposes.

[0043] V3C: Attributes Video Data

[0044] 2D video data is contained in video_sub_bitstream(). One of the types of the video data is attribute video data. An attribute is carried in a V3C unit with vuh_unit_type equal to V3C_AVD. The attribute video data V3C unit header also specifies the index of the attribute, which allows identification of the attribute type based on VPS information, and the partition index, which enables an attribute that consists of multiple components to be segmented into smaller component partition units. Such segmentation allows such attribute types to be coded using legacy coding specifications that may be limited in terms of the number of components that they can support. An attribute is a scalar or vector property, optionally associated with each point in a volumetric frame. Such an attribute can be, e.g., color, reflectance, surface normal, transparency, material ID, etc. The attribute type is identified in ISO/IEC 23090-5 by the ai_attribute_type_id syntax element. So far, five attribute types have been specified, as shown in Table 2.

[0045] Table 2 shows the V3C attribute types in ISO/IEC 23090-5.

Table 2

[0046] Mesh

[0047] Increasing computational resources and advances in 3D data acquisition devices have enabled reconstruction of highly detailed volumetric video representations of natural scenes. Infrared, lasers, time-of-flight and structured light are examples of devices that can be used to construct 3D video data. Representation of the 3D data depends on how the 3D data is used. Dense voxel arrays have been used to represent volumetric medical data. In 3D graphics, polygonal meshes are extensively used.

[0048] A polygon mesh is a collection of vertices, edges and faces that define the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles (triangle mesh), quadrilaterals (quads), or other simple convex polygons (n-gons), since this simplifies rendering, but may also be more generally composed of concave polygons, or even polygons with holes.

[0049] Objects created with polygon meshes are represented by different types of elements. These include vertices, edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored.

[0050] A vertex defines a position, i.e. a point, in a 3D space defined as (x, y, z) along with other information such as color (r, g, b), normal vector and texture coordinates. An edge is a connection between two vertices, wherein the two vertices are endpoints of the edge. A face is a closed set of edges, in which a triangle face has three edges, and a quad face has four edges. A polygon is a coplanar set of faces. In systems that support multi-sided faces, polygons and faces are equivalent. Surfaces, i.e. smoothing groups, may be used to form a discrete representation of the faces. Smoothing groups are useful, but it is not required to group smooth regions.

[0051] Some mesh formats contain groups, which define separate elements of the mesh, and are useful for determining separate sub-objects for skeletal animation or separate actors for non-skeletal animation. Materials are defined to allow different portions of the mesh to use different shaders when rendered.

[0052] Most mesh formats also support some form of UV coordinates ("U" and "V" denoting axes of a 2D texture) which are a separate 2D representation of the mesh "unfolded" to show what portion of a 2-dimensional texture map applies to different polygons of the mesh. It is also possible for meshes to contain other such vertex attribute information such as color, tangent vectors, weight maps to control animation, etc. (sometimes also called channels).
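
[0052a] For concreteness, a minimal polygon-mesh representation along the lines described above might look as follows in Python; the class and field names are illustrative only.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vertex:
    position: Tuple[float, float, float]                    # (x, y, z)
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)     # (r, g, b)
    normal: Tuple[float, float, float] = (0.0, 0.0, 1.0)    # normal vector
    uv: Tuple[float, float] = (0.0, 0.0)                    # 2D texture coordinates

@dataclass
class Mesh:
    vertices: List[Vertex] = field(default_factory=list)
    faces: List[Tuple[int, int, int]] = field(default_factory=list)  # triangle vertex indices

    def edges(self):
        # Each triangle face contributes three edges; duplicates indicate shared edges.
        for a, b, c in self.faces:
            yield (a, b)
            yield (b, c)
            yield (c, a)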

[0053] Lighting

[0054] In computer graphics, a number of techniques are used to simulate light in 3D scenes. Each technique offers flexibility in the level of detail and functionality available, but also operates at different levels of computational demand and complexity. Graphics artists can choose from a variety of shading techniques and effects to suit the needs of each application.

[0055] Light sources allow for different ways to introduce light into 3D scenes. Point sources emit light from a single point in all directions, with the intensity of the light decreasing with distance. A directional source uniformly lights a scene from one direction. A spotlight produces a directed cone of light. The light becomes more intense closer to the spotlight source and to the center of the light cone. Ambient light sources illuminate objects even when no other light source is present. The intensity of ambient light is independent of direction, distance, and other objects, meaning the effect is completely uniform throughout the scene. An area light models an emissive surface which can be oriented by an artist.

[0056] Lighting models are used to replicate lighting effects in rendered environments where light is approximated based on the simplified physics of light. Without lighting models, replicating lighting effects as they occur in the natural world, e.g. using ray tracing, requires processing power that is unavailable in the vast majority of devices. A lighting model's purpose is to compute the color of every pixel or the amount of light reflected for different surfaces in the scene. There are two main classes of illumination models: object-oriented lighting models, e.g. the Phong model, which consider each object individually, and global illumination models, e.g. ray tracing, which consider how light rays interact with the entire scene.
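
[0056a] A toy example of an object-oriented lighting model in the Phong style is sketched below; it evaluates ambient, diffuse and specular terms for a single surface point and a single light, which is a simplification for illustration and not the method of the examples described herein.

def phong_shade(normal, light_dir, view_dir, light_rgb,
                ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    # Per-point Phong-style shading: ambient + diffuse + specular.
    # All direction vectors are assumed to be normalized 3-tuples pointing
    # away from the surface point.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_dot_l = dot(normal, light_dir)
    diffuse = max(n_dot_l, 0.0)
    if n_dot_l > 0.0:
        # Reflect the light direction about the normal for the specular lobe.
        reflect = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
        specular = max(dot(reflect, view_dir), 0.0) ** shininess
    else:
        specular = 0.0
    intensity = ka + kd * diffuse + ks * specular
    return tuple(min(intensity * c, 1.0) for c in light_rgb)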

[0057] Modern lighting models revolve around approximation of how light behaves on an object surface, and this is commonly referred to as Physically Based Rendering (PBR). PBR has largely replaced Phong shading in modern real-time graphics applications. Ray tracing and global illumination remain more predominant in less real-time-intensive applications, such as the special effects industry.

[0058] PBR topics that deal with surfaces often rely on a simplified model of the bidirectional reflectance distribution function (BRDF), that approximates optical properties of the material using only a handful of intuitive parameters, and that is quick to compute. glTF separates BRDF shading approximations into two categories, namely metallic-roughness and specular-glossiness, which are mutually exclusive. For metallic-roughness, different maps for the surface are provided for albedo (base-color), roughness factor and metalness factor. Specular-glossiness models surfaces with a diffuse factor, specular factor and glossiness factor. Additional maps for both PBR shading models like normals, occlusion maps or an emissive factor may be provided.

[0059] Lately modern graphics have adopted hybrid shading models where ray-tracing and PBR shading are combined. Hybrid rendering pipelines combine rasterization, compute, and ray tracing shaders to work together to enable real-time visuals approaching the quality of offline path tracing. These rendering pipelines started to emerge in the late 2010s along with first ray-tracing oriented hardware platforms.

[0060] The math for calculating the color for surfaces is explained here: https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#appendix-b-brdf-implementation (last accessed February 8, 2021). Information about the surface geometry (mesh position), material description of the surface (PBR parameters), scene lighting information (light sources, ambient, environment map, etc.) and the position of the viewer are required for calculating the value of a pixel residing on the surface. Filtering may be applied to further smooth the effects.

[0061] Light sources

[0062] As explained above, realistic global illumination is extremely complicated and depends on many parameters. Traditional computer graphics rely on simplified modeling of lighting information that is easier to understand and process. Parameterized light sources along with the surface materials are used in combination to achieve a realistic visual appearance of a 3D modeled object. Traditional light sources may contain, for example, the following properties, as defined in the FOI structure below.

[0063] orientation describes the three-dimensional orientation of the field of view constraint.

[0064] opening_angle specifies the opening angle of the field of view in the two dimensions perpendicular to orientation.

[0065] Table 3 shows FOI types.

Table 3

[0066] foi_type defines the type of the field of view, as specified in Table 3 - FOI Types.

[0067] position describes the three-dimensional position of the light source in the scene.

[0068] ambient specifies the three color components (RGB) of ambient light emitted by the light source.

[0069] diffuse specifies the three color components of diffuse light emitted by the light source.

[0070] specular specifies the three color components of specular light emitted by the light source.

[0071] foi defines the field of view of the light source. If not present, foi is inferred to be unconstrained. Each foi defined in Table 3 - FOI Types has a tip, which coincides with the position of the light source.

[0072] Table 4 shows light source types.

Table 4

[0073] light_source_type indicates the kind of light source as specified in Table 4 - Light source types.

[0074] constant describes the constant part of light attenuation, or the part that is independent of the distance of the light source to the viewer or object.

[0075] linear describes the linear gradient with which luminosity fades as the distance of the light source to the viewer or object increases.

[0076] quadratic is a scalar factor for the quadratic reduction of light source luminosity relative to distance of the light source to the viewer or object.
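
[0076a] The constant, linear and quadratic terms are conventionally combined into a single distance-based attenuation factor of the form 1 / (constant + linear·d + quadratic·d²); the short sketch below assumes that interpretation, and the coefficient values are illustrative defaults rather than values defined here.

def light_attenuation(distance: float, constant: float = 1.0,
                      linear: float = 0.09, quadratic: float = 0.032) -> float:
    # Conventional distance attenuation: 1 / (constant + linear*d + quadratic*d^2).
    # The default coefficients are illustrative only.
    return 1.0 / (constant + linear * distance + quadratic * distance * distance)

# Example: light_attenuation(10.0) is roughly 0.196, i.e. a point light retains
# about 20 % of its luminosity at a distance of 10 units with these coefficients.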

[0077] State-of-the-art rendering of lighting is moving to a physically based approach (PBR), which is a much closer approximation of the observed real-world light physics (but still an approximation). It has two principles: first, conserve energy, i.e., never reflect more light than is coming in; non-metallic surfaces absorb some portion of the incoming light. Second, integrate all incoming light over the hemisphere of a point on a surface. The result is scaled by the light's incident angle and by the bidirectional reflectance distribution function (BRDF). In addition to viewing direction, incident angle, and surface normal, BRDFs model microfacets (a model for surfaces) by a roughness parameter. This roughness influences, first, how well aligned the microfacets are, and thus how specular the texture is. Second, roughness influences how much self-shadowing takes place on the texture: given a shallow incident angle of light, in a rough surface, parts of the surface cast shadows on other parts of the texture at the microscopic level. A BRDF also models how reflective a surface is depending on the viewing angle: the shallower the angle at which a surface is viewed, the more reflective it behaves.

[0078] For a surface to be rendered using PBR, it needs to provide the below syntax for each point on the surface (texture element, texel). A texture can be implemented either as struct of arrays or array of structs (which is what is described below).

[0079] albedo defines color, or, for metallic surfaces, the base reflectivity of the texel.

[0080] normal is a normal map that defines in which direction the normal points per texel. This enables rendering of a bumpy surface, even when the underlying polygon is flat.

[0081] metallic defines if the texel is metallic or not. metallic can be binary or defined in more weighting levels.

[0082] roughness stores the roughness parameter defined previously, influencing how blurry reflections are, and to which amount the texture shadows itself.

[0083] ao, or ambient occlusion, contains shadows cast by macroscopic structures in the surface and possibly by neighboring objects.
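
[0083a] Put together, the per-texel record of an array-of-structs texture could be sketched as below; the field names mirror the parameters above and are illustrative.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class PBRTexel:
    albedo: Tuple[float, float, float]   # base color, or base reflectivity for metals
    normal: Tuple[float, float, float]   # per-texel normal from the normal map
    metallic: float                      # 0.0 = dielectric, 1.0 = metal (may be binary)
    roughness: float                     # microfacet roughness; blurs reflections
    ao: float                            # ambient occlusion term

# An "array of structs" texture is then simply a 2D grid of PBRTexel records:
# texture = [[PBRTexel(...) for _ in range(width)] for _ in range(height)]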

[0084] Image Based Lighting

[0085] IBL is a collection of techniques that calculate illumination effects on objects by treating the surrounding environment as one big light source, where the surrounding environment is represented by an image. The image is called an environment map. An environment map typically represents a cube textured on the inside or an equirectangular projection. The textures of the environment map (i.e. sky and horizon or walls, buildings) are reflected on the object's surface. A lighting equation can assume that each texel of an environment map is an emitter and can be used for calculation.

[0086] During the lighting calculation in IBL, the environment maps can be further processed to separate indirect diffuse and specular components of the lighting. An irradiance map is used to represent the diffuse portion and is computed by convolving the environment map with a filter that models which point light sources on the cubemap reach an object in the scene, effectively acting as a low-pass filter. This irradiance map is then used for lighting, possibly with high dynamic range (HDR).

[0087] The specular component of lighting may be represented by a pre-filtered environment map that is a pre-computed environment convolution map that takes roughness into account. For each pre-determined roughness level, a pre-filtered specular map is created. The pre-filtered environment map together with a BRDF integration map may be used to calculate the specular lighting component.
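
[0087a] One common way to combine the pre-filtered environment map with the BRDF integration map is the so-called split-sum approximation; the sketch below assumes that approach, with prefiltered_lookup and brdf_lut_lookup standing in for texture fetches from the two maps.

def specular_ibl(reflect_dir, roughness, n_dot_v, f0,
                 prefiltered_lookup, brdf_lut_lookup):
    # Split-sum style specular IBL: fetch the pre-filtered environment map for
    # the reflection direction at the given roughness, fetch (scale, bias) from
    # the BRDF integration map, and combine with the base reflectivity f0.
    prefiltered = prefiltered_lookup(reflect_dir, roughness)   # (r, g, b) radiance
    scale, bias = brdf_lut_lookup(n_dot_v, roughness)          # 2D lookup table fetch
    return tuple(p * (f * scale + bias) for p, f in zip(prefiltered, f0))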

[0088] When a number of dynamic objects are present in a scene, an environment mapping may require a dynamically rendered cubemap for each object for each frame to be able to reflect other objects in the scene in addition to the static cubemap. For one object, the scene is rendered for all 6 angles (inside faces of the cube) from the object, including rendering of other objects. The resulting cubemap is then used to simulate dynamic reflections on this one object.

[0089] It is computationally intensive to sample the environment's lighting from every possible direction, as the number of possible directions is theoretically infinite. It is possible to approximate the result by taking a finite number of directions or samples, spaced uniformly or taken randomly from within the hemisphere, to get a fairly accurate approximation of the irradiance, effectively solving the irradiance map or pre-filtered environment map.
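
[0089a] A Monte-Carlo flavoured sketch of this finite-sample approximation is given below: directions are drawn uniformly over the hemisphere around the surface normal, the environment radiance is weighted by the cosine of the incident angle, and the estimate is scaled accordingly. env_lookup is a stand-in for sampling the environment map.

import math, random

def approximate_irradiance(normal, env_lookup, num_samples=256):
    # Monte-Carlo estimate of diffuse irradiance over the hemisphere around `normal`.
    # `env_lookup(direction)` stands in for sampling the environment map and must
    # return an (r, g, b) radiance tuple; uniform hemisphere sampling with a cosine
    # weight is one simple choice, not the only one.
    total = [0.0, 0.0, 0.0]
    for _ in range(num_samples):
        # Draw a random direction on the unit sphere, then flip it into the hemisphere.
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        d = [x / norm for x in d]
        cos_theta = sum(n * x for n, x in zip(normal, d))
        if cos_theta < 0.0:
            d, cos_theta = [-x for x in d], -cos_theta
        radiance = env_lookup(tuple(d))
        for i in range(3):
            total[i] += radiance[i] * cos_theta
    # Uniform hemisphere pdf = 1 / (2*pi), so the estimator is (2*pi / N) * sum(L * cos).
    return tuple(t * (2.0 * math.pi / num_samples) for t in total)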

[0090] Box-structured file formats

[0091] Box-structured and hierarchical file format concepts have been widely used for media storage and sharing. The most well-known file formats in this regard are the ISO Base Media File Format (ISOBMFF) and its variants such as MP4 and 3GPP file formats.

[0092] ISOBMFF allows storage of timely captured audio/visual media streams, called media tracks. The metadata which describes the track is separated from the encoded bitstream itself. The format provides mechanisms to access media data in a codec-agnostic fashion from a file parser perspective.

[0093] In files conforming to ISOBMFF, the media data may be provided in one or more instances of MediaDataBox 'mdat', and the MovieBox 'moov' may be used to enclose the metadata for timed media. In some cases, for a file to be operable, both of the 'mdat' and 'moov' boxes may be required to be present. The 'moov' box may include one or more tracks, and each track may reside in one corresponding TrackBox 'trak'. Each track is associated with a handler, identified by a four-character code, specifying the track type. Video, audio, and image sequence tracks can be collectively called media tracks, and they contain an elementary media stream. Other track types comprise hint tracks and timed metadata tracks.

[0094] Tracks comprise samples, such as audio or video frames. For video tracks, a media sample may correspond to a coded picture or an access unit. A media track refers to samples (which may also be referred to as media samples) formatted according to a media compression format (and its encapsulation to the ISO base media file format). A hint track refers to hint samples, containing cookbook instructions for constructing packets for transmission over an indicated communication protocol. A timed metadata track may refer to samples describing referred media and/or hint samples.
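
[0094a] At the lowest level, an ISOBMFF parser simply walks boxes that each start with a 32-bit size and a four-character type; the sketch below illustrates this for top-level boxes such as 'moov' and 'mdat', including the 64-bit largesize case.

import struct

def iter_top_level_boxes(data: bytes):
    # Sketch only: yield (four_cc, payload) for each top-level ISOBMFF box.
    # Handles the common 32-bit size and the 64-bit largesize (size == 1) cases;
    # size == 0 ("box extends to end of file") is treated as the remaining bytes.
    pos = 0
    while pos + 8 <= len(data):
        size, four_cc = struct.unpack(">I4s", data[pos:pos + 8])
        header_len = 8
        if size == 1:                     # 64-bit largesize follows the type field
            size = struct.unpack(">Q", data[pos + 8:pos + 16])[0]
            header_len = 16
        elif size == 0:                   # box runs to the end of the file
            size = len(data) - pos
        if size < header_len:             # malformed box; stop rather than loop
            break
        yield four_cc.decode("ascii", "replace"), data[pos + header_len:pos + size]
        pos += size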

[0095] SampleDescriptionBox

[0096] The 'trak' box includes in its hierarchy of boxes the SampleTableBox (also known as the sample table or the sample table box). The SampleTableBox contains the SampleDescriptionBox, which gives detailed information about the coding type used, and any initialization information needed for that coding. The SampleDescriptionBox contains an entry_count and as many sample entries as the entry_count indicates. The format of sample entries is track-type specific but derived from generic classes (e.g., VisualSampleEntry, AudioSampleEntry). The type of sample entry form used for derivation of the track-type specific sample entry format is determined by the media handler of the track.

aligned(8) abstract class SampleEntry (unsigned int(32) format)
    extends Box(format){
    const unsigned int(8)[6] reserved = 0;
    unsigned int(16) data_reference_index;
}

aligned(8) class SampleDescriptionBox (unsigned int(32) handler_type)
    extends FullBox('stsd', version, 0){
    int i;
    unsigned int(32) entry_count;
    for (i = 1; i <= entry_count; i++){
        SampleEntry(); // an instance of a class derived from SampleEntry
    }
}

[0097] Specifications deriving Sample Entry classes are defined in ISO/IEC 14496-12. SampleEntry boxes may contain "extra boxes" not explicitly defined in the box syntax of ISO/IEC 14496-12. When present, such boxes are to follow all defined fields and should follow any defined contained boxes. Decoders are to presume a sample entry box could contain extra boxes and are to continue parsing as though they are present until the containing box length is exhausted.

[0098] Sync Samples in ISOBMFF

[0099] Several types of stream access points (SAPs) have been specified. SAP Type 1 corresponds to what is known in some coding schemes as a "Closed group of pictures (GOP) random access point" (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps) and in addition the first picture in decoding order is also the first picture in presentation order. SAP Type 2 corresponds to what is known in some coding schemes as a "Closed GOP random access point" (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps), for which the first picture in decoding order may not be the first picture in presentation order. SAP Type 3 corresponds to what is known in some coding schemes as an "Open GOP random access point", in which there may be some pictures in decoding order that cannot be correctly decoded and have presentation times less than an intra-coded picture associated with the SAP.

[00100] A stream access point (SAP) sample group as specified in ISOBMFF identifies samples as being of the indicated SAP type.

[00101] A sync sample may be defined as a sample corresponding to SAP type 1 or 2. A sync sample can be regarded as a media sample that starts a new independent sequence of samples; if decoding starts at the sync sample, it and succeeding samples in decoding order can all be correctly decoded, and the resulting set of decoded samples forms the correct presentation of the media starting at the decoded sample that has the earliest composition time. Sync samples can be indicated with the SyncSampleBox (for those samples whose metadata is present in a TrackBox) or within sample flags indicated or inferred for track fragment runs.

[00102] Items in ISOBMFF

[00103] Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a MetaBox 'meta'. While the name of the meta box refers to metadata, items can generally contain metadata or media data. The meta box may reside at the top level of the file, within a MovieBox 'moov', and within a TrackBox 'trak', but at most one meta box may occur at each of the file level, movie level, or track level. The meta box may be required to contain a HandlerReferenceBox 'hdlr' indicating the structure or format of the MetaBox 'meta' contents. The MetaBox may list and characterize any number of items that can be referred to, and each one of them can be associated with a file name and can be uniquely identified within the file by an item identifier (item_id), which is an integer value. The metadata items may be, for example, stored in the ItemDataBox 'idat' of the MetaBox or in an 'mdat' box, or reside in a separate file. If the metadata is located external to the file, then its location may be declared by the DataInformationBox 'dinf'. In the specific case that the metadata is formatted using eXtensible Markup Language (XML) syntax and is required to be stored directly in the MetaBox, the metadata may be encapsulated into either the XMLBox 'xml' or the BinaryXMLBox 'bxml'. An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g., to enable interleaving. An extent is a contiguous subset of the bytes of the resource, and the resource can be formed by concatenating the extents.

[00104] High Efficiency Image File Format (HEIF) is a standard developed by the Moving Picture Experts Group (MPEG) for storage of images and image sequences. Among other things, the standard facilitates file encapsulation of data coded according to the High Efficiency Video Coding (HEVC) standard. HEIF includes features building on top of the ISO Base Media File Format (ISOBMFF). [00105] The ISOBMFF structures and features are used to a large extent in the design of HEIF. The basic design for HEIF comprises that still images are stored as items and image sequences are stored as tracks.

[00106] In the context of HEIF, the following boxes may be contained within the root-level 'meta' box and may be used as described hereinafter. In HEIF, the handler value of the Handler box of the 'meta' box is 'pict'. The resource (whether within the same file, or in an external file identified by a uniform resource identifier) containing the coded media data is resolved through the DataInformationBox 'dinf', whereas the ItemLocationBox 'iloc' stores the position and sizes of every item within the referenced file. The ItemReferenceBox 'iref' documents relationships between items using typed referencing. If there is an item among a collection of items that is in some way to be considered the most important compared to others, then this item is signaled by the PrimaryItemBox 'pitm'. Apart from the boxes mentioned here, the 'meta' box is also flexible enough to include other boxes that may be necessary to describe items.

[00107] Any number of image items can be included in the same file. Given a collection of images stored by using the 'meta' box approach, certain relationships may be qualified between images. Examples of such relationships include indicating a cover image for a collection, providing thumbnail images for some or all of the images in the collection, and associating some or all of the images in a collection with an auxiliary image such as an alpha plane. A cover image among the collection of images is indicated using the 'pitm' box. A thumbnail image or an auxiliary image is linked to the primary image item using an item reference of type 'thmb' or 'auxl', respectively. [00108] The ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties are small data records. The ItemPropertiesBox consists of two parts: an ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties. An item property is formatted as a box.

[00109] A descriptive item property may be defined as an item property that describes rather than transforms the associated item. A transformative item property may be defined as an item property that transforms the reconstructed representation of the image item content.

[00110] Typical V3C content consists of geometry and attributes (e.g. color, normal, transparency), where view-dependent lighting is baked into the texture data. The texture is essentially painted on a surface, making it appear static in relation to the viewing orientation. In addition to pure texture data, V3C enables carriage of attribute information, which may be used to provide further information about the surface such as roughness, normals, or metallicity. This information is meaningful when the client application intends to render view-dependent effects like reflections, refractions, or sub-surface scattering on the surface.

[00111] An overlooked aspect of V3C is the lighting information, without which it is impossible to synthesize view-dependent effects on a given surface. Currently V3C does not contain any mechanisms for providing this information. Thus, it is not possible to render viewing direction dependent lighting effects on surfaces as the artist has intended. [00112] Signaling ambient lighting information for traditional 2D video may be useful, to accommodate color correction at the client side to match the editing environment. However, it is also useful to consider lighting concepts related to rendering of volumetric video. Usage and generation of temporal environment maps (video instead of a static image) may also be useful. One of the proposed embodiments described herein is related to signaling temporal environment maps along with the rest of the data, for which novel signaling is also provided. Without the signaling part, the environment maps could not be carried as part of the volumetric video bitstream. Content may be adaptively delivered to a client using deferred rendering techniques, where part of the processing may take place on the cloud and part on the device. Utilization of light maps and shadow maps may be implemented, while considering temporal and related video compression aspects of said information. It may also be useful to implement foveated rendering of shadows, where pre-calculated shadow maps (in different resolutions) are utilized to render shadows at higher quality near the region of interest and at lower quality further away from the center of attention. The embodiments described herein also provide signaling of pre-calculated shadow maps.

[00113] V3C enables carriage of visually compressed volumetric information. It defines carriage for different attribute maps (specularity, normals, etc.), which are useful for rendering view-dependent lighting effects. However, V3C does not provide mechanisms for carriage of light sources, which are an essential part of calculating view-dependent lighting effects. Utilization of attribute maps therefore assumes an external source of lighting information. It would be beneficial to define carriage of lighting information inside V3C to enable view-dependent rendering of content as intended by an artist. This disclosure introduces novel ways of signaling lighting information specific to visually compressed volumetric content. New methods, along with signaling, for generating environment maps from patch data as a basis for synthesizing view-dependent lighting effects are introduced.

[00114] A main embodiment includes signaling of lighting information and rendering procedures for volumetric video to enable rendering of view-dependent lighting effects. There are several additional embodiments, including utilization of image based dynamic lighting, lighting signaling in V3C, signaling directly in an ISOBMFF box, and signaling a V3C lighting video component in ISOBMFF.

[00115] Utilization of image based dynamic lighting may include creating an environment map for each object, approximating the number of directions by taking a finite number of directions or samples, spaced uniformly or taken randomly from within the hemisphere, transmitting only the sampled information instead of full environment maps, and pre-processing the environment map to calculate the irradiance map and pre-filtered environment map to reduce computational complexity during rendering.

[00116] Lighting signaling in V3C may include several features. Pre-processed light maps may be used as an additional video component that decouples the lighting information from the scene geometry and surface attribute information. Pre-processed light maps may provide higher dynamic range (precision), but lower pixel density, than attribute and geometry. Lighting parameters as parameter sets (VPS, CASPS, ASPS) may be an extension mechanism to close the gap in the existing standard. Pre-processed light maps as lighting patches may utilize a new patch type to allow decoupling the scene geometry and surface attribute information packing from light information packing. The lighting map may be updated as needed, for example not all faces of a cubemap need to be transmitted and a number of cube maps could be packed into one frame, and the pre-processed light maps may be linked to an object.

[00117] Pre-processed light maps may be attribute texture patches. Image based lighting may use environment maps (dynamic and separate). Optimization for dynamic lightmaps may be provided, with rendering at lower quality behind the viewport. Pre-processed light maps may include reuse of previously rendered lightmaps when possible, and the pre-processed light maps may be signaled per patch, indicating which texture patches may be used directly as lighting sources. Texture patches may be constructed so that the patches can be directly used as IBL, e.g. as sides of a cube map.

[00118] Regarding lighting parameters in VUI, when the lighting information is static during the sequence (e.g. spot light), the information can be part of VUI. Regarding lighting parameters in SEI, when the lighting information is dynamic during the sequence, the information could be provided as part of atlas NAL units, e.g. as a SEI message.

[00119] Signaling directly in an ISOBMFF box may include a restricted video track carrying pre-processed light maps, dedicated items carrying pre-processed light maps, a LightingVideoBox in the SchemeInformationBox, and dynamic lighting parameters as a metadata track. Signaling the V3C lighting video component in ISOBMFF may include a dedicated track reference. [00120] There are several benefits and technical effects of the examples described herein. These include enabling delivery of lighting information required to synthesize view-dependent effects on surfaces and to reduce complexity of the calculation on the client side, enabling shadow calculation in a streaming scenario when not all V3C content is downloaded (a low resolution environment map is downloaded at low cost), and addressing a gap in the standard.

[00121] FIG. 1 shows an encoder side apparatus 100 configured to implement dynamic re-lighting of volumetric video. FIG. 2 shows a decoder side apparatus 200 configured to implement dynamic re-lighting of volumetric video.

[00122] With reference to FIG. 1 and FIG. 2, signaling of lighting information (109, 110) including pre-processed lighting maps (105, 203) and lighting parameters (106, 204) for volumetric video in or along a bitstream (107, 108) enables rendering of view-dependent lighting effects by rendering engine (205). Signaling may be achieved using V3C level constructs (107, 110) or file format level methods (108, 109).

[00123] A client receives an encoded 3D scene (201) encapsulated in a file format (108) or as V3C bitstream (107). The renderer (205) reconstructs a 3D scene (such as 3D scene 101) utilizing the decoded 3D geometry and attributes information (202) to render view-dependent lighting effects on a surface for a given viewer position (206) utilizing pre-processed lighting maps (203, 105) and lighting parameters (204, 106).

[00124] As further shown in FIG. 1, the apparatus 100 encodes the 3D scene 101. The apparatus 100 may use at least 3D geometry and attributes information 102 to encode the 3D scene 101. The scene may include 3D content consisting of point clouds (V3C/V-PCC input), 3D meshes, and/or 2D source views projected from the real world (TMIV input).

[00125] Utilization of image based dynamic lighting

[00126] In one embodiment, lighting source information (103) is utilized to pre-render a scene and create Pre-Processed Lighting Maps (105). In one embodiment, Pre-Processed Lighting Maps (105) would be an environment map capturing the scene with lighting information from the center of the scene. The environment map represented by a cube map could be mapped to patches 1-6, as shown in FIG. 3 where each patch of frame 300 represents a cube face. The patches may be transmitted as a separate video component, i.e. a lighting video component, or as a separate attribute type, i.e. a lighting attribute identified by Lighting Signaling (110).

[00127] In another embodiment, demonstrated with FIG. 4, a Pre-Processed Lighting Map (105), e.g. an environment map, is calculated for each object in the scene or for one or more pre-defined positions in a scene, where the one or more pre-defined positions in the scene are signaled in Lighting Parameters (106). Utilizing the patch nature of V3C, each environment map is not stored as a whole; instead, common parts of the environment maps are identified and the amount of data is reduced. Accordingly, FIG. 4 shows nine patches in a frame 400, where six patches represent common information (namely patches 1, 2, 3, 4, 5, and 6) and three patches represent information for a specific object (namely patches 7, 8, and 9).

[00128] In another embodiment, Pre-Processed Lighting Maps (105), e.g. an environment map, can be transmitted as patches together with the attribute texture, as presented in FIG. 5 with frame 500.

[00129] In another embodiment, environment maps could be further pre-processed so that Pre-Processed Lighting Maps (105) could represent i) an irradiance map, which contains the sum of all indirect diffuse light hitting a surface from a given direction and could be utilized to calculate the diffuse lighting for an object in a scene; or ii) a BRDF integration map and a pre-filtered environment map that could be utilized to calculate the specular lighting component.

[00130] In another embodiment, Lighting Parameters (106, 204) would provide information on how the environment maps were processed and how the pre-processed lighting maps could be utilized by the renderer (205).

[00131] In one embodiment sampling data for the irradiance map may be provided as additional metadata in Lighting Signaling/Parameters (109, 110, 204) helping to generate the irradiance map or pre-filtered environment map from the environment map in real-time during rendering. As described herein, it is difficult to solve the entire irradiance of a scene in real-time considering that there is an unlimited number of directions to sample from. The most important directions may be provided along with the environment map, helping to generate the irradiance map in real-time. Also, there are several methods that enable sampling of irradiance maps in real-time.

[00132] FIG. 6 provides an example decoder workflow 600 for generating irradiance maps utilizing regular texture attribute patches and information from Lighting Signaling (110). [00133] First, patches 2, 3 and 5 from the atlas 602 are identified as patches containing lighting information (collectively 605) and are passed to rendering 606. Parser 604 identifies the patches containing lighting information, for example by extracting lighting related information 603 from Lighting Signaling (109, 110). Using the patch data and atlas related information, rendering engine 606 can generate an environment map, where for example the patches may directly contain cube faces for it. Second, the patches are converted into an environment map using the metadata from the V3C bitstream, which may contain information on patch projections, the type of patches, which object the environment map applies to, or where in the scene the environment map should be placed. Conversion into an environment map is for illustration purposes only; in practice reconstruction of the environment map is not necessary, as the irradiance may be calculated from the patches directly. Third, the irradiance map 608 and pre-filtered environment map 610 are sampled from the environment map, or directly from the patches, by using pre-defined sampling directions as described by Lighting Parameters (204) that are provided by Lighting Signaling (109, 110).

[00134] In one embodiment, additional metadata in Lighting Parameters (106, 204) can be provided that includes the individual lighting source type, position, color/strength, and orientation. Individual lighting information may be used in combination with pre-computed or real-time generated lighting maps to enable more efficient light contribution from lights that are relatively close to the viewing position. This information could be described by syntax structures defined herein (struct LightSource()).
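The LightSource() structure is defined elsewhere in this description; purely for illustration, and with every field name and layout assumed rather than taken from that definition, such a structure might carry the parameters listed above as follows:

struct LightSource() {
    ls_light_type        // e.g. point, spot, directional or ambient; enumeration assumed
    ls_position_x        // light position in scene coordinates (field names assumed)
    ls_position_y
    ls_position_z
    ls_orientation_x     // normalized direction, relevant e.g. for spot or directional lights
    ls_orientation_y
    ls_orientation_z
    ls_color_r           // light color; strength may be folded into the color or carried separately
    ls_color_g
    ls_color_b
    ls_strength
}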

[00135] Lighting Signaling in V3C (110)

[00136] Pre-Processed Lighting Maps (105) as additional video component

[00137] Pre-Processed Lighting Maps (105) can change per frame and can be encoded as a video sequence. They can be transmitted as part of the V3C bitstream in V3C units with a new identifier V3C_LVD. Table 5 provides a description of vuh_unit_types.

[00138] Table 5 is a description of vuh_unit_types.

Table 5

[00139] A V3C unit header with vuh_unit_type equal to the new identifier V3C_LVD would also enable identification of the lighting index vuh_lighting_index.

[00140] vuh_lighting_common equal to 1 indicates that the lighting data carried in the Lighting Video Data unit is applicable to all atlases in the V3C sequence. vuh_lighting_common equal to 0 indicates that the lighting data carried in the Lighting Video Data unit is applicable to an atlas with atlas ID equal to vuh_atlas_id.

[00141] vuh_lighting_index indicates the index of the lighting data carried in the Lighting Video Data unit. The value of vuh_lighting_index should be in the range of 0 to ( vle_lighting_count [ vuh_atlas_id ] - 1 ), inclusive.
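For illustration, and with field order, descriptors and reserved bits assumed (modeled on the ISO/IEC 23090-5 style rather than defined by this description), the V3C unit header branch for the new unit type might look as follows:

v3c_unit_header() {
    vuh_unit_type
    ...
    if( vuh_unit_type == V3C_LVD ) {
        vuh_v3c_parameter_set_id
        vuh_atlas_id
        vuh_lighting_common
        vuh_lighting_index
        vuh_reserved_zero_bits   // padding to the fixed header size, as for the other unit types
    }
    ...
}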

[00142] Note the comparison vuh_unit_type == V3C_LVD within the v3c_unit_payload( ) syntax structure.
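A sketch of how that comparison might route lighting video data to the video sub-bitstream parser, modeled on the ISO/IEC 23090-5 style and assuming V3C_LVD payloads are video coded like the other video components, is:

v3c_unit_payload( numBytesInV3CPayload ) {
    if( vuh_unit_type == V3C_VPS )
        v3c_parameter_set( numBytesInV3CPayload )
    else if( vuh_unit_type == V3C_AD )
        atlas_sub_bitstream( numBytesInV3CPayload )
    else if( vuh_unit_type == V3C_OVD || vuh_unit_type == V3C_GVD ||
             vuh_unit_type == V3C_AVD || vuh_unit_type == V3C_LVD )
        video_sub_bitstream( numBytesInV3CPayload )   // lighting video data handled like other video components
}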

[00143] Lighting Parameters (106) as part of VPS, CASPS and ASPS.

[00144] A Pre-Processed Lighting Map (105) transmitted by a V3C unit identified by vuh_unit_type equal to new identifier V3C_LVD can be interpreted by information provided in a V3C Parameter Set (VPS), Common Atlas Sequence Parameter Set (CASPS) and an Atlas Sequence Parameter Set (ASPS).

[00145] vps_lighting_extension_present_flag equal to 1 specifies that the vps_lighting_extension( ) syntax structure is present in the v3c_parameter_set( ) syntax structure. vps_lighting_extension_present_flag equal to 0 specifies that this syntax structure is not present. When not present, the value of vps_lighting_extension_present_flag is inferred to be equal to 0.

[00146] vps_miv_extension is under preparation (ISO/IEC CD 23090-12:2020).

[00147] vle_common_present_flag equal to 0 indicates that the V3C sequence does not have lighting video data that is common for all atlases. vle_common_present_flag equal to 1 indicates that the V3C sequence does have lighting video data that is common for all atlases.

[00148] vle_common_lighting_count indicates the number of lighting videos. vle_common_lighting_count is to be in the range of 0 to 15, inclusive. [00149] vle_common_lighting_type_id[ i ] indicates the lighting type of the Lighting Video Data unit with index i for the common atlas. Table 6 - V3C lighting types describes the list of supported lighting types and their relationship with vle_common_lighting_type_id[ i ]. [00150] vle_common_lighting_codec_id[ i ] indicates the identifier of the codec used to compress the lighting video data with index i for the common atlas. vle_common_lighting_codec_id[ i ] is to be in the range of 0 to 255, inclusive. This codec may be identified through the profiles defined in Annex A of ISO/IEC 23090-5, a component codec mapping SEI message, or through means outside this description.

[00151] vle_atlas_count indicates the total number of atlases in the current bitstream that have associated lighting video data. The value of vle_atlas_count is to be in the range of 0 to 63, inclusive.

[00152] vle_atlas_id[ a ] specifies the ID of the atlas with index a. The value of vle_atlas_id[ a ] is to be in the range of 0 to 63, inclusive. It is a requirement of bitstream conformance to this version of this description that the value of vle_atlas_id[ k ] is not to be equal to vle_atlas_id[ j ] for all j != k.

[00153] vle_lighting_count[ atlasID ] indicates the number of lighting video data units associated with an atlas with atlas ID equal to atlasID. vle_lighting_count[ atlasID ] is to be in the range of 0 to 15, inclusive.

[00154] vle_lighting_type_id[ atlasID ][ i ] indicates the lighting type of the Lighting Video Data unit with index i for the atlas with atlas ID equal to atlasID. Table 6 - V3C lighting types describes the list of supported lighting types and their relationship with vle_lighting_type_id[ atlasID ][ i ].

[00155] vle_lighting_codec_id[ atlasID ][ i ] indicates the identifier of the codec used to compress the lighting video data with index i for the atlas with atlas ID equal to atlasID. vle_lighting_codec_id[ atlasID ][ i ] is to be in the range of 0 to 255, inclusive. This codec may be identified through the profiles defined in Annex A of ISO/IEC 23090-5, a component codec mapping SEI message, or through means outside this description.
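For illustration, a vps_lighting_extension( ) consistent with the semantics of paragraphs [00147] to [00155] might be structured as follows; the descriptors, loop bounds and field ordering are assumptions rather than definitions from this description:

vps_lighting_extension() {
    vle_common_present_flag
    if( vle_common_present_flag ) {
        vle_common_lighting_count
        for( i = 0; i < vle_common_lighting_count; i++ ) {
            vle_common_lighting_type_id[ i ]
            vle_common_lighting_codec_id[ i ]
        }
    }
    vle_atlas_count
    for( a = 0; a < vle_atlas_count; a++ ) {
        vle_atlas_id[ a ]
        vle_lighting_count[ vle_atlas_id[ a ] ]
        for( i = 0; i < vle_lighting_count[ vle_atlas_id[ a ] ]; i++ ) {
            vle_lighting_type_id[ vle_atlas_id[ a ] ][ i ]
            vle_lighting_codec_id[ vle_atlas_id[ a ] ][ i ]
        }
    }
}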

[00156] Table 6 shows V3C lighting types.

Table 6

[00157] Below is an example raw byte sequence payload common atlas sequence parameter set structure implementation.

[00158] casps_lighting_extension_present_flag equal to 1 specifies that the casps_lighting_extension( ) syntax structure is present in the common_atlas_sequence_parameter_set_rbsp ( ) syntax structure. casps_lighting_extension_present_flag equal to 0 specifies that this syntax structure is not present. When not present, the value of casps_lighting_extension_present_flag is inferred to be equal to 0. [00159] Below is an example implementation of a common atlas sequence parameter set lighting extension.

[00160] cle_explicit_lights_present_flag equal to 0 indicates that the V3C sequence does not have explicit light sources. cle_explicit_lights_present_flag equal to 1 indicates that the V3C sequence does have explicit lights present. In some examples it is inferred whether the V3C sequence has or does not have explicit light sources.

[00161] cle_num_explicit_lights describes how many explicit lights are present. [00162] light_source defines information for an explicit light source. The syntax is defined herein as structure LightSource().

[00163] cle_irradiance_map_sampling_directions_present_flag equal to 0 indicates that the V3C sequence does not have irradiance map sampling directions. cle_irradiance_map_sampling_directions_present_flag equal to 1 indicates that the V3C sequence does have sampling vectors for the irradiance map.

[00164] cle_num_sampling_directions provides the number of sampling directions for the irradiance map.

[00165] cle_sampling_vector_x, cle_sampling_vector_y, and cle_sampling_vector_z define components for normalized sampling vectors for generating the irradiance map.
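Purely for illustration, a casps_lighting_extension( ) consistent with the semantics of paragraphs [00160] to [00165] might take the following form; the descriptors and loop bounds are assumptions rather than definitions from this description:

casps_lighting_extension() {
    cle_explicit_lights_present_flag
    if( cle_explicit_lights_present_flag ) {
        cle_num_explicit_lights
        for( i = 0; i < cle_num_explicit_lights; i++ )
            light_source[ i ]                          // a LightSource() structure
    }
    cle_irradiance_map_sampling_directions_present_flag
    if( cle_irradiance_map_sampling_directions_present_flag ) {
        cle_num_sampling_directions
        for( i = 0; i < cle_num_sampling_directions; i++ ) {
            cle_sampling_vector_x[ i ]
            cle_sampling_vector_y[ i ]
            cle_sampling_vector_z[ i ]
        }
    }
}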

[00166] Below is an adaptation of an atlas sequence parameter set raw byte sequence payload to implement lighting as described herein.

[00167] asps_lighting_extension_present_flag equal to 1 specifies that the asps_lighting_extension( ) syntax structure is present in the atlas_sequence_parameter_set_rbsp( ) syntax structure. asps_lighting_extension_present_flag equal to 0 specifies that this syntax structure is not present. When not present, the value of asps_lighting_extension_present_flag is inferred to be equal to 0.

[00168] Below is an example implementation of an atlas sequence parameter set lighting extension.

[00169] ale_environment_map_in_attribute_present_flag equal to 1 specifies that patch data units may contain lighting information.
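As a sketch only, and without implying any further fields, an asps_lighting_extension( ) consistent with the semantics above could be as simple as:

asps_lighting_extension() {
    ale_environment_map_in_attribute_present_flag   // descriptor assumed, e.g. a single flag
}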

[00170] Pre-Processed Lighting Map (105) as lighting patches.

[00171] A Pre-Processed Lighting Map (105) can be transmitted by File Format (108) as traditional cubemaps, or, when ingested to the V3C encoder (104), it can be mapped to patches of V3C. A new patch type can be defined to describe the lighting video components as well as to provide mapping information between the pre-processed map patches and scene objects. The mapping can be done based on an object ID provided by a SEI message or an entity ID if present in patches describing the attribute, geometry, and occupancy video components.

[00172] Below is an implementation of a patch information data structure.

[00173] Table 7 shows patch modes for I_TILE type atlas tiles, including identifier I_LIGHT that provides a lighting patch mode with atdu_patch_mode[ tileID ][ p ] = 3.

Table 7
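For illustration only, a patch information data dispatch that honours an I_LIGHT patch mode might read as follows; this is modeled on the ISO/IEC 23090-5 style and is not taken from Table 7:

patch_information_data( tileID, patchIdx, patchMode ) {
    if( ath_type == I_TILE ) {
        if( patchMode == I_INTRA )
            patch_data_unit( tileID, patchIdx )
        else if( patchMode == I_RAW )
            raw_patch_data_unit( tileID, patchIdx )
        else if( patchMode == I_EOM )
            eom_patch_data_unit( tileID, patchIdx )
        else if( patchMode == I_LIGHT )
            lighting_patch_data_unit( tileID, patchIdx )   // new lighting patch mode
    }
}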

[00174] Table 8 shows patch types, including a LIGHT patch.

Table 8

[00175] Below is an example implementation of a lighting patch data unit.

[00176] lpdu_2d_pos_x[ tileID ][ p ] specifies the x-coordinate of the top-left corner of the patch bounding box for patch p in the current atlas tile, with tile ID equal to tileID, expressed as a multiple of PatchPackingBlockSize.

[00177] lpdu_2d_pos_y[ tileID ][ p ] specifies the y-coordinate of the top-left corner of the patch bounding box for patch p in the current atlas tile, with tile ID equal to tileID, expressed as a multiple of PatchPackingBlockSize. [00178] lpdu_2d_size_x_minus1[ tileID ][ p ] plus 1 specifies the quantized width value of the patch with index p in the current atlas tile, with tile ID equal to tileID.

[00179] lpdu_2d_size_y_minus1[ tileID ][ p ] plus 1 specifies the quantized height value of the patch with index p in the current atlas tile, with tile ID equal to tileID.

[00180] lpdu_3d_offset_u[ tileID ][ p ] specifies the shift to be applied to the reconstructed patch points in the patch with index p of the current atlas tile, with tile ID equal to tileID, along the tangent axis. [00181] lpdu_3d_offset_v[ tileID ][ p ] specifies the shift to be applied to the reconstructed patch points in the patch with index p of the current atlas tile, with tile ID equal to tileID, along the bi-tangent axis. [00182] lpdu_cubemap_face_id[ tileID ][ p ] specifies the value of the cubemap face id for the patch with index p of the current atlas tile, with tile ID equal to tileID.

[00183] Table 9 shows a cube map face id mapping.

[00184] lpdu_orientation_index[ tileID ][ p ] specifies the patch orientation index for the patch with index p of the current atlas tile, with tile ID equal to tileID.

[00185] lpdu_object_count_minus1[ tileID ][ p ] plus 1 indicates the total number of objects and associated lighting information within the patch with index p of the current atlas tile, with tile ID equal to tileID.

[00186] lpdu_object_id[ tileID ][ p ][ i ] specifies the object ID to which the lighting of the patch with index equal to p, in a tile with ID equal to tileID, applies. The object ID can be mapped to an object ID provided by a scene object information SEI message or to an entity ID provided in the MIV extension.

[00187] lpdu_pre_filtered_map[ tileID ][ p ] equal to 1 indicates that the lpdu_pre_filtered_map_roughness[ tileID ][ p ] syntax element is present in the lighting_patch_data_unit() syntax structure.

[00188] lpdu_pre_filtered_map_roughness[ tileID ][ p ] indicates the roughness value for which the pre-filtered environment map was calculated.
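Bringing the lpdu_ syntax elements of paragraphs [00176] to [00188] together, an illustrative lighting_patch_data_unit( ) might be structured as follows; the ordering and descriptors are assumptions, while the field names and the presence condition follow the semantics above:

lighting_patch_data_unit( tileID, p ) {
    lpdu_2d_pos_x[ tileID ][ p ]
    lpdu_2d_pos_y[ tileID ][ p ]
    lpdu_2d_size_x_minus1[ tileID ][ p ]
    lpdu_2d_size_y_minus1[ tileID ][ p ]
    lpdu_3d_offset_u[ tileID ][ p ]
    lpdu_3d_offset_v[ tileID ][ p ]
    lpdu_cubemap_face_id[ tileID ][ p ]
    lpdu_orientation_index[ tileID ][ p ]
    lpdu_object_count_minus1[ tileID ][ p ]
    for( i = 0; i <= lpdu_object_count_minus1[ tileID ][ p ]; i++ )
        lpdu_object_id[ tileID ][ p ][ i ]
    lpdu_pre_filtered_map[ tileID ][ p ]
    if( lpdu_pre_filtered_map[ tileID ][ p ] )
        lpdu_pre_filtered_map_roughness[ tileID ][ p ]
}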

[00189] Pre-Processed Lighting Maps (105) as attribute texture patch data

[00190] In one embodiment, information related to pre-processed lighting maps may be stored inside patches. This requires signaling of patch types inside the V3C patch_data_unit(). An extension of the patch data unit may be defined as follows:

[00191] pdu_lighting_type_id[ tileID ][ p ] defines the type of the patch as defined in Table 6 - V3C lighting types. An unspecified value is to be used to express that the patch is a normal patch and does not contain pre-processed lighting information.

[00192] pdu_pre_filtered_map_roughness[ tileID ][ p ] indicates the roughness value for which the pre-filtered environment map was calculated. [00193] pdu_object_count_minus1[ tileID ][ p ] plus 1 indicates the total number of objects associated with lighting information within the patch with index p of the current atlas tile, with tile ID equal to tileID.

[00194] pdu_object_id[ tileID ][ p ][ i ] specifies the object ID to which the lighting of the patch with index equal to p, in a tile with ID equal to tileID, applies. The object ID can be mapped to the object ID provided by the scene object information SEI message or to the entity ID provided in the MIV extension.
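Similarly, for illustration only, the patch data unit extension of paragraphs [00190] to [00194] might append fields along the following lines; the structure name and the condition guarding the remaining fields are assumptions, not definitions from this description:

pdu_lighting_extension( tileID, p ) {                    // name assumed
    pdu_lighting_type_id[ tileID ][ p ]
    if( pdu_lighting_type_id[ tileID ][ p ] != UNSPECIFIED ) {   // i.e. the patch carries lighting information
        pdu_pre_filtered_map_roughness[ tileID ][ p ]    // meaningful when the patch carries a pre-filtered environment map
        pdu_object_count_minus1[ tileID ][ p ]
        for( i = 0; i <= pdu_object_count_minus1[ tileID ][ p ]; i++ )
            pdu_object_id[ tileID ][ p ][ i ]
    }
}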

[00195] Pre-Processed Lighting Maps (105) as attribute lighting patch data [00196] In one embodiment, information related to pre-processed lighting maps may be stored as a new attribute type. The attribute type could have a different layout of the patches in relation to occupancy, geometry and attributes with a type different from ATTR_LIGHT. The different layout of patches may be signaled as described in U.S. publication number US 2020-0294271 A1 by the Applicant/assignee of this disclosure, having the same first named inventor.

[00197] Table 10 shows V3C attribute types.

[00198] The patches related to an attribute of ATTR_LIGHT could be signaled as patches with a lighting extension as described in "Pre-Processed Lighting Maps (105) as attribute texture patch data" or as patches with mode I_LIGHT described in "Pre-Processed Lighting Map (105) as lighting patches."

[00199] Lighting Parameters (106) as SEI message

[00200] SEI messages may contain information as defined in the casps_lighting_information() syntax structure. [00201] Lighting Parameters (106) as VUI ambient

[00202] SEI messages (including VUI ambient messages) may contain information as defined in the casps_lighting_information() syntax structure.

[00203] Signaling directly in ISOBMFF

[00204] The Pre-Processed Lighting Map (105) can be encoded directly and encapsulated by the File Format (108) as tracks. Such tracks could be represented in the file as restricted video and identified as, e.g., 'lght' in the scheme_type field of the SchemeTypeBox of the RestrictedSchemeInfoBox of their restricted video sample entries.

[00205] A static Pre-Processed Lighting Map (105) can be encoded directly and encapsulated by the File Format (108) as an item identified by an item type 4CC code, e.g. 'lght'.

[00206] Lighting Parameters (106) can be encoded directly and encapsulated by the File Format (108) in a LightingVideoBox in the SchemeInformationBox of the RestrictedSchemeInfoBox. An example is shown below. [00207] When Lighting Parameters (106) are dynamic, they could be encoded as samples of a metadata track. A sample entry is defined as shown below that allows identification of a lighting parameters metadata track containing lighting information samples.

Sample Entry Type: 'lght'
Container: Sample Description Box ('stsd')
Mandatory: No
Quantity: 0 or 1

aligned(8) class LightingMetadataSampleEntry() extends MetadataSampleEntry('lght') {
}

[00208] Signaling V3C lighting video component in ISOBMFF

[00209] The track referencing mechanism between the V3C atlas track 704 and the V3C video component track 706 containing lighting information described in FIG. 7 could be provided. In such a case, a single track reference type, which may be called 'v3vl' (refer to item 702), may be used from/to V3C atlas track 704 to/from V3C video component track 706 that describes samples with lighting information originated from V3C units with vuh_unit_type equal to V3C_LVD as described in Table 5. Refer to samples 708-1, 708-2, 708-3, 708-4, and 708-5 described by the V3C video component track 706.

[00210] As further shown in FIG. 7, the V3C video component track 706 having the lighting bitstream is comprised of a restricted video sample entry 710, where the restricted video sample entry 710 is comprised of a video configuration 712 and a V3C unit header 714. The video configuration 712 includes parameter sets, SEI, etc. The V3C video component tracks having the geometry, attribute, and occupancy bitstreams (respectively 716, 718, 720) are configured similar to the V3C video component track 706 having the lighting bitstream.

[00211] As further shown in FIG. 7, a track reference type called 'v3vo' (refer to item 722) may be used from/to V3C atlas track 704 to/from V3C video component track 720 that describes samples with occupancy information originated from V3C units with vuh_unit_type equal to V3C_OVD as described in Table 5. A track reference type called 'v3va' (refer to item 724) may be used from/to V3C atlas track 704 to/from V3C video component track 718 that describes samples with attribute information originated from V3C units with vuh_unit_type equal to V3C_AVD as described in Table 5. A track reference type called 'v3vg' (refer to item 726) may be used from/to V3C atlas track 704 to/from V3C video component track 716 that describes samples with geometry information originated from V3C units with vuh_unit_type equal to V3C_GVD as described in Table 5.

[00212] As further shown in FIG. 7, V3C atlas track 704 includes a sample entry 728 having a V3C configuration 730 and a V3C unit header 732, where the V3C configuration 730 includes parameter sets, SEI, etc. Similar to the V3C video component track 706, each of the V3C atlas track 704, the V3C video component track 716, the V3C video component track 718, and the V3C video component track 720 reference samples 708.

[00213] The examples described herein support coding of camera captured natural scenes with non-Lambertian (or non-lambertian) characteristics. When natural scenes with non-Lambertian surfaces are captured, e.g. specular surfaces, transparent objects, etc., the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed. The coding of the scene with non-Lambertian surfaces may include first determining those regions of the scene that express such characteristics and coding of additional metadata information that helps a renderer in the client device to represent the scene in a photorealistic manner, regardless of the rendering technology used.

[00214] The examples described herein enable coding a dynamic volumetric scene that contains non-Lambertian surfaces. The examples described herein enable handling of heterogeneous object-specific parameters (e.g. temporal sampling, duration, atlas sizes, and non-Lambertian characteristics) at the MIV bitstream level. A kind of lighting map may be used to signal these non-Lambertian characteristics. MIV is to build on the V3C framework. For a scene description, lighting maps are relevant to providing the overall lighting information for the objects in the scene.

[00215] FIG. 8 is an apparatus 800 which may be implemented in hardware, configured to implement dynamic re-lighting of volumetric video, based on any of the examples described herein. The apparatus comprises a processor 802, at least one memory 804 including computer program code 805, wherein the at least one memory 804 and the computer program code 805 are configured to, with the at least one processor 802, cause the apparatus to implement circuitry, a process, component, module, function, coding, and/or decoding (collectively 806) to implement dynamic re-lighting of volumetric video, based on the examples described herein. The apparatus 800 optionally includes a display and/or I/O interface 808 that may be used to display an output (e.g., an image or volumetric video) of a result of the component 806. The display and/or I/O interface 808 may also be configured to receive input such as user input (e.g. with a keypad). The apparatus 800 also includes one or more network (NW) interfaces (I/F(s)) 810. The NW I/F(s) 810 may be wired and/or wireless and communicate over a channel or the Internet/other network(s) via any communication technique. The NW I/F(s) 810 may comprise one or more transmitters and one or more receivers. The N/W I/F(s) 810 may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitry(ies) and one or more antennas. In some examples, the processor 802 is configured to implement item 806 without use of memory 804.

[00216] The apparatus 800 may be a remote, virtual or cloud apparatus. The apparatus 800 may be either a writer or a reader (e.g. parser), or both a writer and a reader (e.g. parser). The apparatus 800 may be either a coder or a decoder, or both a coder and a decoder. The apparatus 800 may be a user equipment (UE), a head mounted display (HMD), or any other fixed or mobile device.

[00217] The memory 804 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The memory 804 may comprise a database for storing data. The memory 804 may be non-transitory, transitory, volatile or non-volatile memory.

[00218] Interface 812 enables data communication between the various items of apparatus 800, as shown in FIG. 8. Interface 812 may be one or more buses, or interface 812 may be one or more software interfaces configured to pass data between the items of apparatus 800. For example, the interface 812 may be one or more buses such as address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, and the like. The apparatus 800 need not comprise each of the features mentioned, or may comprise other features as well. The apparatus may be an embodiment of apparatus 100 or apparatus 200, for example having the features shown in the apparatuses of FIG. 1 and/or FIG. 2.

[00219] FIG. 9 is a method 900 to implement dynamic re-lighting of volumetric video, based on the examples described herein. At 902, the method includes obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content. At 904, the method includes extracting lighting information from the obtained scene. At 906, the method includes processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map. At 908, the method includes encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream. Method 900 may be implemented with apparatus 100 or with apparatus 800.

[00220] FIG. 10 is a method 1000 to implement dynamic re-lighting of volumetric video, based on the examples described herein. At 1002, the method includes receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene. At 1004, the method includes wherein the lighting information comprises at least one pre-processed lighting map or/and at least one lighting parameter associated with the scene. At 1006, the method includes rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information. Method 1000 may be implemented with apparatus 200 or with apparatus 800.

[00221] FIG. 11 is an example method to code a scene with non-Lambertian characteristics. At 1102, the method includes determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed. At 1104, the method includes coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene. At 1106, the method includes signaling non-lambertian characteristics of the scene, the signaling comprising a lighting map. Method 1100 may be implemented with apparatus 100 or with apparatus 800.

[00222] FIG. 12 is an example method to decode a scene with non-Lambertian characteristics. At 1202, the method includes decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content. At 1204, the method includes decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene. At 1206, the method includes receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map. Method 1200 may be implemented with apparatus 200 or with apparatus 800. [00223] References to a 'computer', 'processor', etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device such as instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device, etc.

[00224] As used herein, the term 'circuitry' may refer to any of the following: (a) hardware circuit implementations, such as implementations in analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. As a further example, as used herein, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device. Circuitry may also be used to mean a function or a process, such as one implemented by an encoder or decoder, or a codec.

[00225] The ideas described herein may be contributed to standardization in MPEG-I: ISO/IEC 23090-5 - Visual Volumetric Video-based Coding and Video-based Point Cloud Compression; and/or ISO/IEC 23090-10 Carriage of Visual Volumetric Video- based Coding Data. Further, the examples described herein may be included in 23090-12 ed2, and subsequently in 23090-5 and -10.

[00226] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: obtain a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; extract lighting information from the obtained scene; process the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and encode the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[00227] Other aspects of the apparatus may include the following. The lighting information may be in the form of explicit light sources comprising point lights or ambient light, or may be provided as image based lighting. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: extract geometry data and attribute data from the obtained scene; wherein the geometry data is three-dimensional information, and the attribute data is used to describe rendering details of the geometry data; process the extracted geometry data and attribute data based on volumetric visual video-based compression or another format for compression of volumetric video information; encode the scene with the processed geometry data and attributes data from the obtained scene; and store the geometry data and attribute data along with the pre-processed lighting information in a file format or visual volumetric video-based coding bitstream. The at least one pre-processed lighting map may be an environment map that captures the scene with lighting information from a center of the scene. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: map the environment map to a plurality of patches, the plurality of patches respectively representing a cube face; and transmit the plurality of patches as a separate lighting video component or as a lighting attribute identified using the signaled lighting information. A patch type may describe the lighting video component, and the patch type may provide mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components. The at least one pre-processed lighting map may be calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: transmit the at least one pre-processed lighting map as one or more patches together with attribute texture. The at least one pre-processed lighting map may represent at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: provide sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre- processed lighting map or from a plurality of patches in real time during rendering. 
The at least one lighting parameter may provide information concerning how the at least one pre- processed lighting map is generated and used by a renderer. The at least one lighting parameter may comprise at least one of a lighting source type, position, color/strength, or orientation. The at least one lighting parameter may be signaled using either a supplemental enhancement information message, or as a video usability information ambient message. The at least one lighting parameter may be encoded as a sample of a metadata track using a sample entry. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: encode the at least one pre-processed lighting map as a video sequence; transmit the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; and wherein the at least one pre-processed lighting map is interpreted using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: signal information related to the at least one pre-processed lighting map using an extension to a patch data unit. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: signal information related to the at least one pre-processed lighting map using an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled using either attribute texture patch data, or using one or more lighting patches. The at least one pre-processed lighting map may be encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter may be encapsulated with the file format in a scheme information box of a restricted scheme information box. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: provide a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type. The lighting information may be extracted using at least one visual volumetric video-based coding construct or at least one file format level method.

[00228] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to: determine at least one region of the scene of three-dimensional information, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; code metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and signal non-lambertian characteristics of the scene, the signaling comprising the at least one pre-processed lighting map.

[00229] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: receive an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map and/or at least one lighting parameter associated with the scene; and render a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[00230] Other aspects of the apparatus may include the following. The at least one pre-processed lighting map and/or the at least one lighting parameter associated with the scene may be signaled using at least one visual volumetric video-based coding construct or at least one file format level method. The at least one pre-processed lighting map and/or the at least one lighting parameter associated with the scene may be utilized to render the scene. The at least one pre-processed lighting map may be an environment map that captures the scene with lighting information from a center of the scene. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive a plurality of patches as a separate lighting video component or as a lighting attribute identified with the signaled lighting information; wherein the environment map has been mapped to the plurality of patches, the plurality of patches respectively representing a cube face. A patch type may describe the lighting video component, and the patch type may provide mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components. The at least one pre-processed lighting map may be calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive the at least one pre-processed lighting map as one or more patches together with attribute texture. The at least one pre-processed lighting map may represent at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre-processed lighting map or from a plurality of patches in real time during rendering. The at least one lighting parameter may provide information concerning how the at least one pre-processed lighting map is generated and used by a renderer. The at least one lighting parameter may comprise at least one of a lighting source type, position, color/strength, or orientation. The at least one lighting parameter may be signaled using either a supplemental enhancement information message, or as a video usability information ambient message. The at least one lighting parameter may be encoded as a sample of a metadata track using a sample entry. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; wherein the at least one pre-processed lighting map is encoded as a video sequence; and interpret the at least one pre-processed lighting map using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive information related to the at least one pre-processed lighting map signaled using an extension to a patch data unit. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: receive information related to the at least one pre-processed lighting map through an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled with either attribute texture patch data, or with one or more lighting patches. The at least one pre-processed lighting map may be encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter may be encapsulated with the file format in a scheme information box of a restricted scheme information box. The at least one memory and the computer program code may be further configured to, with the at least one processor, cause the apparatus at least to: decode a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type. The scene may be a three-dimensional scene, and the geometry may be three-dimensional information and attribute information is used to describe rendering details of the geometry. The lighting information may be in the form of explicit light sources comprising point lights or ambient light, or may be provided as image based lighting.

[00231] An example apparatus includes means for obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; means for extracting lighting information from the obtained scene; means for processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and means for encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[00232] The apparatus may further include wherein the lighting information is in the form of explicit light sources comprising point lights or ambient light, or is provided as image based lighting.

[00233] The apparatus may further include means for extracting geometry data and attribute data from the obtained scene; wherein the geometry data is three-dimensional information, and the attribute data is used to describe rendering details of the geometry data; means for processing the extracted geometry data and attribute data based on volumetric visual video-based compression or another format for compression of volumetric video information; means for encoding the scene with the processed geometry data and attributes data from the obtained scene; and means for storing the geometry data and attribute data along with the pre-processed lighting information in a file format or visual volumetric video-based coding bitstream.
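
For illustration only, the following Python sketch outlines one possible encoder-side flow matching the description above: lighting information is separated from geometry and attribute data, processed into explicit parameters and/or pre-processed maps, and bundled for carriage. The Scene, LightSource, and encode_scene names are hypothetical and not defined by this specification.

```python
# Illustrative sketch only; the class and function names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LightSource:
    kind: str                           # e.g. "point", "ambient", "image_based"
    position: tuple = (0.0, 0.0, 0.0)
    color: tuple = (1.0, 1.0, 1.0)
    strength: float = 1.0

@dataclass
class Scene:
    geometry: Dict                      # e.g. point clouds or meshes
    attributes: Dict                    # e.g. texture, normals
    lights: List[LightSource] = field(default_factory=list)
    environment_map: object = None      # optional image-based lighting

def encode_scene(scene: Scene) -> Dict:
    """Separate lighting from geometry/attributes, pre-process it, and bundle
    everything for storage in a file format or volumetric video bitstream."""
    lighting_params = [vars(light) for light in scene.lights]   # explicit parameters
    lighting_maps = []
    if scene.environment_map is not None:
        lighting_maps.append(scene.environment_map)             # pre-processed map
    return {
        "geometry": scene.geometry,       # would be video-coded in a real pipeline
        "attributes": scene.attributes,
        "lighting_parameters": lighting_params,
        "lighting_maps": lighting_maps,
    }

if __name__ == "__main__":
    scene = Scene(geometry={"points": []}, attributes={"color": []},
                  lights=[LightSource("point", (1.0, 2.0, 3.0))])
    print(encode_scene(scene)["lighting_parameters"])
```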

[00234] The apparatus may further include wherein the at least one pre-processed lighting map is an environment map that captures the scene with lighting information from a center of the scene.

[00235] The apparatus may further include means for mapping the environment map to a plurality of patches, the plurality of patches respectively representing a cube face; and means for transmitting the plurality of patches as a separate lighting video component or as a lighting attribute identified using the signaled lighting information.
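
The following non-normative Python sketch shows one way an environment map could be cut into the six cube-face patches mentioned above prior to transmission. The equirectangular input layout and the helper names (sample_equirect, environment_to_cube_patches) are assumptions made for illustration only.

```python
# Non-normative sketch: cut an equirectangular environment map into six cube-face patches.
import math

def sample_equirect(env, u, v):
    """env is a 2D list of RGB tuples indexed [row][col]; u, v in [0, 1]."""
    h, w = len(env), len(env[0])
    return env[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def direction_to_equirect_uv(d):
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# Per-face basis vectors: (forward, right, up) for each cube face.
CUBE_FACES = {
    "+X": ((1, 0, 0), (0, 0, -1), (0, 1, 0)),
    "-X": ((-1, 0, 0), (0, 0, 1), (0, 1, 0)),
    "+Y": ((0, 1, 0), (1, 0, 0), (0, 0, -1)),
    "-Y": ((0, -1, 0), (1, 0, 0), (0, 0, 1)),
    "+Z": ((0, 0, 1), (1, 0, 0), (0, 1, 0)),
    "-Z": ((0, 0, -1), (-1, 0, 0), (0, 1, 0)),
}

def environment_to_cube_patches(env, size=64):
    """Return a dict of six size x size patches, one per cube face."""
    patches = {}
    for name, (f, r, u) in CUBE_FACES.items():
        patch = []
        for j in range(size):
            row = []
            for i in range(size):
                a = 2 * (i + 0.5) / size - 1          # [-1, 1] across the face
                b = 1 - 2 * (j + 0.5) / size
                d = tuple(f[k] + a * r[k] + b * u[k] for k in range(3))
                n = math.sqrt(sum(c * c for c in d))
                row.append(sample_equirect(
                    env, *direction_to_equirect_uv(tuple(c / n for c in d))))
            patch.append(row)
        patches[name] = patch
    return patches

if __name__ == "__main__":
    gray = [[(0.5, 0.5, 0.5)] * 8 for _ in range(4)]  # tiny dummy environment map
    print(len(environment_to_cube_patches(gray, size=4)))  # -> 6 cube-face patches
```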

[00236] The apparatus may further include wherein a patch type describes the lighting video component, and the patch type provides mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components.
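
As a hypothetical illustration of such a mapping, the short sketch below groups lighting patches by a shared object identifier so a renderer can fetch the lighting data relevant to each scene object; the LightingPatch fields are placeholders rather than defined syntax elements.

```python
# Hypothetical sketch: associate lighting patches with scene objects via a shared identifier.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LightingPatch:
    patch_id: int
    cube_face: str      # which cube face this patch represents
    object_id: int      # identifier shared with attribute/geometry/occupancy patches

def patches_by_object(patches: List[LightingPatch]) -> Dict[int, List[LightingPatch]]:
    """Group lighting patches by the scene object they illuminate."""
    mapping: Dict[int, List[LightingPatch]] = {}
    for p in patches:
        mapping.setdefault(p.object_id, []).append(p)
    return mapping

if __name__ == "__main__":
    print(patches_by_object([LightingPatch(0, "+X", 7), LightingPatch(1, "-X", 7)]))
```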

[00237] The apparatus may further include wherein the at least one pre-processed lighting map is calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map.

[00238] The apparatus may further include means for transmitting the at least one pre-processed lighting map as one or more patches together with attribute texture.

[00239] The apparatus may further include wherein the at least one pre-processed lighting map represents at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component.
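
The sketch below illustrates, in non-normative Python, how a renderer might combine an irradiance map (diffuse term) with a pre-filtered environment map and a bidirectional reflective distribution function integration map (specular term). The sampler callables are assumed to exist, and the split-sum style formulation follows common image-based lighting practice rather than any syntax defined here.

```python
# Non-normative shading sketch combining the two kinds of pre-processed lighting maps.
def shade_pixel(normal, view, albedo, roughness, f0,
                sample_irradiance, sample_prefiltered_env, sample_brdf_lut):
    """Vectors are 3-tuples; the sample_* arguments are callables supplied by the renderer."""
    # Diffuse term: the irradiance map already integrates indirect diffuse light
    # arriving over the hemisphere around the surface normal.
    irradiance = sample_irradiance(normal)
    diffuse = tuple(a * i for a, i in zip(albedo, irradiance))

    # Specular term: pre-filtered environment map (reflection direction + roughness)
    # scaled by the BRDF integration map (N.V + roughness).
    n_dot_v = max(0.0, sum(n * v for n, v in zip(normal, view)))
    reflect = tuple(2 * n_dot_v * n - v for n, v in zip(normal, view))
    prefiltered = sample_prefiltered_env(reflect, roughness)
    scale, bias = sample_brdf_lut(n_dot_v, roughness)
    specular = tuple(p * (f * scale + bias) for p, f in zip(prefiltered, f0))

    return tuple(d + s for d, s in zip(diffuse, specular))

if __name__ == "__main__":
    white = lambda *args: (1.0, 1.0, 1.0)   # dummy maps for a quick check
    lut = lambda nv, r: (1.0, 0.0)
    print(shade_pixel((0, 0, 1), (0, 0, 1), (0.8, 0.2, 0.2), 0.5, (0.04,) * 3,
                      white, white, lut))
```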

[00240] The apparatus may further include means for providing sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre-processed lighting map or from a plurality of patches in real time during rendering.
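
As a non-normative example of such real-time generation, the sketch below convolves a received environment map into an irradiance value for a given normal, driven by a sampling step that the lighting parameter metadata is assumed to carry; the function names are hypothetical.

```python
# Illustrative irradiance convolution driven by signaled sampling metadata (sample_delta).
import math

def irradiance_for_normal(normal, sample_env, sample_delta=0.25):
    """Cosine-weighted hemisphere convolution around 'normal' (a unit 3-tuple).
    sample_env(direction) returns the environment radiance for a direction."""
    nx, ny, nz = normal
    # Build a tangent frame around the normal.
    up = (0.0, 0.0, 1.0) if abs(nz) < 0.999 else (1.0, 0.0, 0.0)
    tx = (up[1] * nz - up[2] * ny, up[2] * nx - up[0] * nz, up[0] * ny - up[1] * nx)
    tlen = math.sqrt(sum(c * c for c in tx))
    tx = tuple(c / tlen for c in tx)
    ty = (ny * tx[2] - nz * tx[1], nz * tx[0] - nx * tx[2], nx * tx[1] - ny * tx[0])

    total, count, phi = [0.0, 0.0, 0.0], 0, 0.0
    while phi < 2 * math.pi:
        theta = 0.0
        while theta < 0.5 * math.pi:
            # Direction in tangent space, rotated into world space.
            s = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            d = tuple(s[0] * tx[k] + s[1] * ty[k] + s[2] * normal[k] for k in range(3))
            w = math.cos(theta) * math.sin(theta)   # cosine and solid-angle weighting
            r = sample_env(d)
            for k in range(3):
                total[k] += r[k] * w
            count += 1
            theta += sample_delta
        phi += sample_delta
    return tuple(math.pi * t / count for t in total)

if __name__ == "__main__":
    sky = lambda d: (0.6, 0.7, 0.9)                 # constant test environment
    print(irradiance_for_normal((0.0, 0.0, 1.0), sky))
```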

[00241] The apparatus may further include wherein the at least one lighting parameter provides information concerning how the at least one pre-processed lighting map is generated and used by a renderer.

[00242] The apparatus may further include wherein the at least one lighting parameter comprises at least one of a lighting source type, position, color/strength, or orientation.

[00243] The apparatus may further include wherein the at least one lighting parameter is signaled using either a supplemental enhancement information message, or as a video usability information ambient message.
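
Purely as an illustration of how the lighting parameters listed above could be serialized for carriage in a message, the sketch below packs one lighting parameter record (type, position, color, strength, orientation) into bytes and back. The layout is hypothetical and does not reproduce any actual supplemental enhancement information or video usability information syntax.

```python
# Hypothetical payload layout for one lighting parameter record (not normative syntax).
import struct

LIGHT_TYPES = {"ambient": 0, "point": 1, "directional": 2, "image_based": 3}

def pack_lighting_parameter(kind, position, color, strength, orientation):
    """Serialize one lighting parameter record to bytes (little-endian)."""
    return struct.pack("<B10f", LIGHT_TYPES[kind],
                       *position,        # x, y, z
                       *color,           # r, g, b
                       strength,
                       *orientation)     # e.g. a direction vector

def unpack_lighting_parameter(payload):
    kind, px, py, pz, r, g, b, strength, ox, oy, oz = struct.unpack("<B10f", payload)
    return {"type": {v: k for k, v in LIGHT_TYPES.items()}[kind],
            "position": (px, py, pz), "color": (r, g, b),
            "strength": strength, "orientation": (ox, oy, oz)}

if __name__ == "__main__":
    blob = pack_lighting_parameter("point", (1.0, 2.0, 3.0), (1.0, 0.9, 0.8), 5.0,
                                   (0.0, -1.0, 0.0))
    print(unpack_lighting_parameter(blob))
```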

[00244] The apparatus may further include wherein the at least one lighting parameter is encoded as a sample of a metadata track using a sample entry.

[00245] The apparatus may further include means for encoding the at least one pre-processed lighting map as a video sequence; means for transmitting the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; and wherein the at least one pre-processed lighting map is interpreted using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set.
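
The toy demultiplexer below illustrates the idea of routing a dedicated lighting video component by its unit type so that it can be video-decoded and then interpreted with the parameter sets. The string labels, including the lighting video data label, are hypothetical placeholders and do not reproduce the published visual volumetric video-based coding unit syntax.

```python
# Toy demultiplexer for illustration; unit labels are hypothetical placeholders.
from typing import Dict, Iterable, List, Tuple

V3C_VPS, V3C_AD, V3C_OVD, V3C_GVD, V3C_AVD = "VPS", "AD", "OVD", "GVD", "AVD"
V3C_LVD = "LVD"   # proposed lighting video data unit type (hypothetical label)

def demux_units(units: Iterable[Tuple[str, bytes]]) -> Dict[str, List[bytes]]:
    """Group labeled units by type so the lighting video component can be handed
    to a video decoder and interpreted with the signaled parameter sets."""
    streams: Dict[str, List[bytes]] = {}
    for unit_type, payload in units:
        streams.setdefault(unit_type, []).append(payload)
    return streams

if __name__ == "__main__":
    units = [(V3C_VPS, b"..."), (V3C_GVD, b"..."), (V3C_AVD, b"..."), (V3C_LVD, b"...")]
    streams = demux_units(units)
    if V3C_LVD in streams:
        # Decode the lighting video sequence, then interpret it using the
        # information carried in the VPS / CASPS / ASPS.
        print(len(streams[V3C_LVD]), "lighting video unit(s) found")
```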

[00246] The apparatus may further include means for signaling information related to the at least one pre-processed lighting map using an extension to a patch data unit.

[00247] The apparatus may further include means for signaling information related to the at least one pre-processed lighting map using an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled using either attribute texture patch data, or using one or more lighting patches.

[00248] The apparatus may further include wherein: the at least one pre-processed lighting map is encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter is encapsulated with the file format in a scheme information box of a restricted scheme information box.
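
For illustration, the sketch below writes minimal ISO base media file format style boxes (32-bit size, four-character code, payload). The 'vvlp' code is a hypothetical placeholder for carrying the lighting parameters inside a scheme information box, not a registered code, and the nesting is simplified relative to a real restricted scheme information structure.

```python
# Minimal ISOBMFF-style box writer; 'vvlp' is a hypothetical placeholder 4CC.
import struct

def make_box(fourcc: str, payload: bytes) -> bytes:
    """ISOBMFF box = 32-bit size (including the 8-byte header) + 4CC + payload."""
    assert len(fourcc) == 4
    return struct.pack(">I", 8 + len(payload)) + fourcc.encode("ascii") + payload

if __name__ == "__main__":
    lighting_params = b"\x01\x00"                     # opaque, encoder-defined payload
    schi = make_box("schi", make_box("vvlp", lighting_params))
    rinf = make_box("rinf", schi)                     # simplified nesting for illustration
    print(len(rinf), "bytes")
```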

[00249] The apparatus may further include means for providing a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type.

[00250] The apparatus may further include wherein the lighting information is extracted using at least one visual volumetric video-based coding construct or at least one file format level method.

[00251] The apparatus may further include means for determining at least one region of the scene of three-dimensional information, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; means for coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for signaling non-lambertian characteristics of the scene, the signaling comprising the at least one pre-processed lighting map.

[00252] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00253] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00254] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00255] The apparatus may further include means for coding or decoding at least one heterogeneous object-specific parameter at a bitstream level.

[00256] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00257] The apparatus may further include wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

[00258] The apparatus may further include means for capturing the at least one region of the scene of three-dimensional information, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00259] The apparatus may further include wherein the means for capturing comprises at least one camera.

[00260] The apparatus may further include where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

[00261] An example apparatus includes means for receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map and/or at least one lighting parameter associated with the scene; and means for rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[00262] The apparatus may further include wherein the at least one pre-processed lighting map and/or the at least one lighting parameter associated with the scene is signaled using at least one visual volumetric video-based coding construct or at least one file format level method.

[00263] The apparatus may further include wherein the at least one pre-processed lighting map and/or the at least one lighting parameter associated with the scene is utilized to render the scene.

[00264] The apparatus may further include wherein the at least one pre-processed lighting map is an environment map that captures the scene with lighting information from a center of the scene.

[00265] The apparatus may further include means for receiving a plurality of patches as a separate lighting video component or as a lighting attribute identified with the signaled lighting information; wherein the environment map has been mapped to the plurality of patches, the plurality of patches respectively representing a cube face.

[00266] The apparatus may further include wherein a patch type describes the lighting video component, and the patch type provides mapping information between the plurality of patches and one or more scene objects, wherein the mapping information is based on an object identifier provided as a supplemental enhancement information message or an entity identifier present in the plurality of patches describing attribute, geometry, and occupancy video components.

[00267] The apparatus may further include wherein the at least one pre-processed lighting map is calculated for one or more objects in the scene, or for one or more pre-defined positions in the scene, the one or more pre-defined positions being signaled using the at least one lighting parameter, to identify common parts of the at least one pre-processed lighting map.

[00268] The apparatus may further include means for receiving the at least one pre-processed lighting map as one or more patches together with attribute texture.

[00269] The apparatus may further include wherein the at least one pre-processed lighting map represents at least one of: an irradiance map comprising a sum of indirect diffuse light hitting a surface from a given direction used to calculate diffuse lighting for an object in the scene; or a bidirectional reflective distribution function integration map and pre-filtered environment map used to calculate a specular lighting component.

[00270] The apparatus may further include means for receiving sampling data for the irradiance map or pre-filtered environment map as additional metadata in the at least one lighting parameter so that the irradiance map or pre-filtered environment map is generated from the at least one pre-processed lighting map or from a plurality of patches in real time during rendering.

[00271] The apparatus may further include wherein the at least one lighting parameter provides information concerning how the at least one pre-processed lighting map is generated and used by a renderer.

[00272] The apparatus may further include wherein the at least one lighting parameter comprises at least one of a lighting source type, position, color/strength, or orientation.

[00273] The apparatus may further include wherein the at least one lighting parameter is signaled using either a supplemental enhancement information message, or as a video usability information ambient message.

[00274] The apparatus may further include wherein the at least one lighting parameter is encoded as a sample of a metadata track using a sample entry.

[00275] The apparatus may further include means for receiving the at least one pre-processed lighting map as a visual volumetric video-based coding bitstream with a lighting video data identifier; wherein the at least one pre-processed lighting map is encoded as a video sequence; and means for interpreting the at least one pre-processed lighting map using information provided in at least one of a visual volumetric video-based coding parameter set, a common atlas sequence parameter set, or an atlas sequence parameter set.

[00276] The apparatus may further include means for receiving information related to the at least one pre-processed lighting map signaled using an extension to a patch data unit.

[00277] The apparatus may further include means for receiving information related to the at least one pre-processed lighting map through an attribute type; wherein the attribute type comprises a layout of a plurality of patches of the at least one pre-processed lighting map in relation to occupancy, geometry, and attributes with a type different from the attribute type; and wherein the plurality of patches are signaled with either attribute texture patch data, or with one or more lighting patches.

[00278] The apparatus may further include wherein: the at least one pre-processed lighting map is encapsulated with the file format as one or more tracks, and identified with a four character code; and the at least one lighting parameter is encapsulated with the file format in a scheme information box of a restricted scheme information box.

[00279] The apparatus may further include means for decoding a track reference type used from/to a visual volumetric video-based coding atlas track to/from a visual volumetric video-based coding video component track; wherein the track reference type describes one or more samples with lighting information originated from visual volumetric video-based coding units having a lighting video data type.

[00280] The apparatus may further include wherein the scene is a three-dimensional scene, and the geometry is three-dimensional information and attribute information is used to describe rendering details of the geometry.

[00281] The apparatus may further include wherein the lighting information is in the form of explicit light sources comprising point lights or ambient light, or is provided as image based lighting.

[00282] The apparatus may further include means for decoding at least one region of the encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; means for decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one pre-processed lighting map.

[00283] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00284] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00285] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00286] The apparatus may further include means for decoding at least one heterogeneous object-specific parameter at a bitstream level.

[00287] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00288] The apparatus may further include wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

[00289] The apparatus may further include means for capturing the at least one region of the scene, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00290] The apparatus may further include wherein the means for capturing comprises at least one camera.

[00291] The apparatus may further include where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

[00292] An example apparatus includes means for determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; means for coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

[00293] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00294] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00295] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00296] The apparatus may further include means for coding or decoding at least one heterogeneous object-specific parameter at a bitstream level.

[00297] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00298] The apparatus may further include wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

[00299] The apparatus may further include means for capturing the at least one region of the scene of three-dimensional content, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00300] The apparatus may further include wherein the means for capturing comprises at least one camera.

[00301] The apparatus may further include where the lighting map provides overall lighting information for a plurality of objects within the scene.

[00302] An example apparatus includes means for decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; means for decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and means for receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

[00303] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00304] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00305] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00306] The apparatus may further include means for decoding at least one heterogeneous object-specific parameter at a bitstream level.

[00307] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00308] The apparatus may further include wherein the bitstream level comprises a moving picture experts group immersive bitstream level.

[00309] The apparatus may further include means for capturing the at least one region of the scene, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00310] The apparatus may further include wherein the means for capturing comprises at least one camera.

[00311] The apparatus may further include where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

[00312] An example method includes obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; extracting lighting information from the obtained scene; processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[00313] An example method includes receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map and/or at least one lighting parameter associated with the scene; and rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[00314] An example method includes determining at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; coding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and signaling non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

[00315] An example method includes decoding at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; decoding metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and receiving signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

[00316] An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations is provided, the operations comprising: obtaining a scene comprising three-dimensional information in the form of point clouds, three-dimensional meshes, two-dimensional projections of three-dimensional information, light sources, animations or any other form considered as a representation or description of three-dimensional content; extracting lighting information from the obtained scene; processing the extracted lighting information into at least one explicit lighting parameter and/or at least one pre-processed lighting map; and encoding the scene with the at least one pre-processed lighting map and/or the at least one lighting parameter in a file format or as a visual volumetric video-based coding bitstream.

[00317] An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations is provided, the operations comprising: receiving an encoded scene with lighting information signaled in a file format or as a visual volumetric video-based coding bitstream, and with geometry and attributes information associated with the scene; wherein the lighting information comprises at least one pre-processed lighting map and/or at least one lighting parameter associated with the scene; and rendering a reconstruction of the scene with view-dependent lighting effects on a plurality of surfaces for a given viewer position, using the lighting information and the geometry and attributes information.

[00318] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: determine at least one region of a scene of three-dimensional content, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed; code metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and signal non-lambertian characteristics of the scene, the signaling comprising at least one lighting map.

[00319] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00320] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00321] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00322] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to: code or decode at least one heterogeneous object-specific parameter at a bitstream level.

[00323] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00324] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to: capture the at least one region of the scene of three-dimensional content, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00325] The apparatus may further include wherein the capturing is performed using at least one camera.

[00326] The apparatus may further include where the lighting map provides overall lighting information for a plurality of objects within the scene.

[00327] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: decode at least one region of an encoded scene, where an appearance of the scene varies depending on a viewpoint within a viewing volume from which the scene is consumed, and where the scene comprises three-dimensional content; decode metadata configured to assist a renderer in a client device to represent the scene in a photorealistic manner regardless of a technology used to render the scene; and receive signaling of non-lambertian characteristics of the scene, the signaling comprising the at least one lighting map.

[00328] The apparatus may further include wherein the scene of volumetric content is a natural dynamic volumetric scene comprising at least one non-lambertian surface.

[00329] The apparatus may further include wherein the at least one non-lambertian surface comprises a specular surface.

[00330] The apparatus may further include wherein the at least one non-lambertian surface comprises a transparent object.

[00331] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to decode at least one heterogeneous object-specific parameter at a bitstream level.

[00332] The apparatus may further include wherein the at least one heterogeneous object-specific parameter comprises at least one of: a temporal sampling parameter; a duration; an atlas size; or a non-lambertian characteristic.

[00333] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to capture the at least one region of the scene, where the appearance of the scene varies depending on the viewpoint within the viewing volume from which the scene is consumed.

[00334] The apparatus may further include wherein the capturing is performed using at least one camera.

[00335] The apparatus may further include where the at least one pre-processed lighting map provides overall lighting information for a plurality of objects within the scene.

[00336] It should be understood that the foregoing description is only illustrative. Various alternatives and modifications may be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.

[00337] The following acronyms and abbreviations that may be found in the specification and/or the drawing figures are defined as follows:

2D two-dimensional

3D three-dimensional

3GPP third generation partnership project

4CC four character code

ACL atlas coding layer

ao ambient occlusion

ASIC application-specific integrated circuit

ASPS atlas sequence parameter set

BRDF bidirectional reflectance/reflective distribution function

CASPS common atlas sequence parameter set

CD committee draft

CVS coded V3C sequence

DASH dynamic adaptive streaming over HTTP

EOM enhanced occupancy map

Exp exponential

FDIS final draft international standard

FOI field of illumination

FPGA field-programmable gate array

glTF graphics library/language transmission format

GOP group of pictures

HDR high dynamic range

HEIF high efficiency image file format

HEVC high efficiency video coding

HMD head mounted display

HRD hypothetical reference decoder

HTTP hypertext transfer protocol

IBL image based lighting

id or ID identifier

IEC International Electrotechnical Commission

I/F interface

I/O input/output

ISO International Organization for Standardization

ISOBMFF ISO base media file format

MIV MPEG immersive video

MP4 MPEG-4

MPEG moving picture experts group

MPEG-I MPEG immersive

NAL or nal network abstraction layer

NW network

PBR physically based rendering

RBSP raw byte sequence payload

RGB or r, g, b red, green, blue

SAP stream access point

SEI supplemental enhancement information

TMIV test model for immersive video

TBD to be determined

u(n) unsigned integer using n bits

UE user equipment

ue(v) unsigned integer Exp-Golomb-coded syntax element with the left bit first.

UV "U" and "V" are axes of a 2D texture

V3C visual volumetric video-based coding

VPCC video-based point cloud compression

VPS V3C parameter set

VUI video usability information

XML extensible markup language