Title:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VOLUMETRIC VIDEO
Document Type and Number:
WIPO Patent Application WO/2019/135024
Kind Code:
A1
Abstract:
There are disclosed various methods, apparatuses and computer program products for video encoding. The method comprises inputting a point cloud frame in an encoder (900); projecting a 3D object represented by the point cloud frame onto a 2D patch (902); generating a geometry image, a texture image and an occupancy map from the 2D patch (904); partitioning the occupancy map into image blocks of a predetermined size along a predetermined block grid (906); assigning, on the basis of binary values of the image block, a codeword for each image block (908); mapping the codeword of each image block according to a mapping scheme to sample values of a multi-level occupancy map (910); and multiplexing the geometry image and the multi-level occupancy map into sample arrays of an image for compression (912).

Inventors:
HANNUKSELA, Miska (Rusthollinrinne 2, Tampere, 33610, FI)
SCHWARZ, Sebastian (Vähäjärvenkatu 4, Tampere, 33900, FI)
Application Number:
FI2018/050965
Publication Date:
July 11, 2019
Filing Date:
December 21, 2018
Assignee:
NOKIA TECHNOLOGIES OY (Karaportti 3, Espoo, 02610, FI)
International Classes:
H04N19/00; H04N1/411; H04N13/161; H04N19/597
Foreign References:
US20020135814A1, 2002-09-26
Other References:
MAMMOU, K.: "PCC Test Model Category 2 v0", MPEG DMS, ISO/IEC JTC1/SC29/WG11 N17248, 14 December 2017 (2017-12-14), XP030023909, Retrieved from the Internet [retrieved on 20181204]
SCHWARZ, S. ET AL.: "Nokia’s response to CfP for Point Cloud Compression (Category 2)", ISO/IEC JTC1/SC29/WG11 MPEG2017/M41779, 17 October 2017 (2017-10-17), XP030070121, Retrieved from the Internet [retrieved on 20190329]
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (Ari Aarnio, IPR Department, Karakaari 7, Espoo, 02610, FI)
Claims:
CLAIMS

1. A method comprising:

inputting a point cloud frame in an encoder;

projecting a 3D object represented by the point cloud frame onto a 2D patch;

generating a geometry image, a texture image and an occupancy map from the 2D patch;

partitioning the occupancy map into image blocks of a predetermined size along a predetermined block grid;

assigning, on the basis of binary values of the image block, a codeword for each image block;

mapping the codeword of each image block according to a mapping scheme to sample values of a multi-level occupancy map; and

multiplexing the geometry image and the multi-level occupancy map into sample arrays of an image for compression.

2. The method according to claim 1, further comprising

signaling information about the mapping scheme and variables used for applying the mapping scheme for creating the multi-level occupancy map in or along a bitstream comprising the compressed image data to a decoder.

3. The method according to claim 1 or 2, wherein the size of the image blocks is determined based on a chroma format targeted for decoding.

4. The method according to any preceding claim, further comprising

using a codeword mapping where between any two adjacent codeword values, only one pixel in the respective occupancy map block changes state.

5. The method according to any preceding claim, wherein the geometry image is used as a luma sample array of the image and the multi-level occupancy map is used as one of the chroma sample arrays of the image.

6. A method comprising:

receiving a bitstream in a decoder;

demultiplexing a texture image and multiplexed sample arrays of a geometry image and a multi-level occupancy map from the bitstream;

obtaining information identifying a mapping scheme used for mapping codewords for image blocks of a binary occupancy map and variables used for applying the mapping scheme for creating the multi-level occupancy map;

decoding the multi-level occupancy map into a plurality of samples;

quantizing each sample value to a closest quantization level determined on the basis of said variables; and

creating image blocks of a predetermined size of a binary occupancy map by inverse mapping said mapping scheme onto said quantized sample values.

7. The method according to claim 6, wherein the information identifying the mapping scheme and the variables are obtained by decoding the information from or along the bitstream or determining a predetermined mapping scheme and related variables.

8. The method according to claim 6 or 7, further comprising

decoding, from the bitstream, a coded picture into a decoded picture comprising the multiplexed sample arrays; and

demultiplexing the sample arrays of the multi-level occupancy map from the multiplexed sample arrays.

9. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform the method according to at least one of claims 1 to 5.

10. A computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform the method according to at least one of claims 1 to 5.

11. An apparatus comprising:

means for inputting a point cloud frame in an encoder;

means for projecting a 3D object represented by the point cloud frame onto a 2D patch;

means for generating a geometry image, a texture image and an occupancy map from the 2D patch;

means for partitioning the occupancy map into image blocks of a predetermined size along a predetermined block grid;

means for assigning, on the basis of binary values of the image block, a codeword for each image block;

means for mapping the codeword of each image block according to a mapping scheme to sample values of a multi-level occupancy map; and

means for multiplexing the geometry image and the multi-level occupancy map into sample arrays of an image for compression.

12. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform the method according to at least one of claims 6 to 8.

13. A computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform the method according to at least one of claims 6 to 8.

14. An apparatus comprising:

means for receiving a bitstream in a decoder;

means for demultiplexing a texture image and multiplexed sample arrays of a geometry image and a multi-level occupancy map from the bitstream;

means for obtaining information identifying a mapping scheme used for mapping codewords for image blocks of a binary occupancy map and variables used for applying the mapping scheme for creating the multi-level occupancy map;

means for decoding the multi-level occupancy map into a plurality of samples;

means for quantizing each sample value to a closest quantization level determined on the basis of said variables; and

means for creating image blocks of a predetermined size of a binary occupancy map by inverse mapping said mapping scheme onto said quantized sample values.

Description:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VOLUMETRIC VIDEO

TECHNICAL FIELD

[0001] The present invention relates to an apparatus, a method and a computer program for content dependent projection for volumetric video coding and decoding.

BACKGROUND

[0002] Volumetric video data represents a three-dimensional scene or object and can be used as input for virtual reality (VR), augmented reality (AR) and mixed reality (MR) applications. Such data describes the geometry, e.g. shape, size, position in three-dimensional (3D) space, and respective attributes, e.g. colour, opacity, reflectance and any possible temporal changes of the geometry and attributes at given time instances. Volumetric video is either generated from 3D models through computer-generated imagery (CGI), or captured from real-world scenes using a variety of capture solutions, e.g. multi-camera, laser scan, combination of video and dedicated depth sensors, and more. Also, a combination of CGI and real-world data is possible.

[0003] Typical representation formats for such volumetric data are triangle meshes, point clouds (PCs), or voxel arrays. Representation of the 3D data depends on how the 3D data is used. Dense Voxel arrays have been used to represent volumetric medical data. In 3D graphics, polygonal meshes are extensively used. Point clouds on the other hand are well suited for applications such as capturing real world 3D scenes where the topology is not necessarily a 2D manifold.

[0004] In dense point clouds or voxel arrays, the reconstructed 3D scene may contain tens or even hundreds of millions of points. One way to compress a time-varying volumetric scene/object is to project 3D surfaces onto some number of pre-defined 2D planes. Regular 2D video compression algorithms can then be used to compress various aspects of the projected surfaces. For example, a time-varying 3D point cloud, with spatial and texture coordinates, can be mapped into a sequence of at least three sets of planes, where a first set carries the temporal motion image data, a second set carries the texture data and a third set carries the depth data, i.e. the distance of the mapped 3D surface points from the projection surfaces.

[0005] For MPEG standardization, a test model for point cloud compression has been developed. MPEG W17248 discloses a projection-based approach for a test model for standardisation of dynamic point cloud compression. In intra coding of this test model, projected texture and geometry data is accompanied with an additional binary occupancy map to signal whether a 2D pixel should be reconstructed to 3D space at the decoder side.

[0006] However, transmitting the additional occupancy map is costly in terms of the bit rate budget. In an IPPP coding structure (an intra frame followed by three uni-predicted inter frames), the additional occupancy map is only transmitted in every fourth frame (the I-frame). Thus, 3D resampling and additional motion information, possibly including padding and/or interpolation, is required to align the I-frame occupancy map to the three P-frames without a transmitted occupancy map. Moreover, the occupancy map information uses a codec different from the video codec used for the texture and geometry images. Consequently, it is unlikely that such a dedicated occupancy map codec would be hardware-accelerated.

SUMMARY

[0007] Now, an improved method and technical equipment implementing the method have been invented, by which the above problems are alleviated. Various aspects of the invention include a method, an apparatus and a computer readable medium comprising a computer program or a signal stored therein, which are characterized by what is stated in the independent claims. Various details of the invention are disclosed in the dependent claims and in the corresponding images and description.

[0008] According to a first aspect, there is provided a method comprising: inputting a point cloud frame in an encoder; projecting a 3D object represented by the point cloud frame onto a 2D patch; generating a geometry image, a texture image and an occupancy map from the 2D patch; partitioning the occupancy map into image blocks of a predetermined size along a predetermined block grid; assigning, on the basis of binary values of the image block, a codeword for each image block; mapping the codeword of each image block according to a mapping scheme to sample values of a multi-level occupancy map; and multiplexing the geometry image and the multi-level occupancy map into sample arrays of an image for compression.
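
As an illustrative, non-normative sketch of the encoder-side steps above, the following Python code partitions a binary occupancy map into 2x2 blocks, assigns each block a codeword derived from its binary values, and maps the codeword to a sample value of a multi-level occupancy map. The block size, the row-major bit ordering and the linear spreading of codewords over an 8-bit sample range are assumptions made only for this sketch and are not mandated by the embodiments.

```python
import numpy as np

def encode_occupancy(occ, block=2, bit_depth=8):
    """Map a binary occupancy map to a multi-level occupancy map (illustrative sketch)."""
    h, w = occ.shape
    n_codes = 1 << (block * block)                   # 16 codewords for 2x2 blocks
    step = ((1 << bit_depth) - 1) // (n_codes - 1)   # spread codewords over the sample range
    multi = np.zeros((h // block, w // block), dtype=np.uint16)
    for by in range(h // block):
        for bx in range(w // block):
            blk = occ[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            # assign a codeword on the basis of the binary values (row-major bit order assumed)
            code = 0
            for i, bit in enumerate(blk.flatten()):
                code |= int(bit) << i
            # map the codeword to a sample value of the multi-level occupancy map
            multi[by, bx] = code * step
    return multi

occ = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [1, 1, 1, 1],
                [1, 1, 1, 1]], dtype=np.uint8)
print(encode_occupancy(occ))   # one sample per 2x2 occupancy block
```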

[0009] According to an embodiment, the method further comprises signaling information about the mapping scheme and variables used for applying the mapping scheme for creating the multi-level occupancy map in or along a bitstream comprising the compressed image data to a decoder.

[0010] According to an embodiment, the size of the image blocks is determined based on a chroma format targeted for decoding.

[0011] According to an embodiment, the method further comprises using a codeword mapping where between any two adjacent codeword values, only one pixel in the respective occupancy map block changes state.
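
One possible realization of such a codeword mapping, in which only one pixel changes state between any two adjacent codeword values, is a reflected binary (Gray) code over the pixel bits of the block. The sketch below assumes a 2x2 block and is given purely as an illustration of such an ordering, not as the mapping required by the embodiment.

```python
def gray_code(n_bits):
    """Return the reflected binary Gray code sequence for n_bits bits."""
    return [i ^ (i >> 1) for i in range(1 << n_bits)]

# For 2x2 occupancy blocks (4 pixel bits), consecutive codeword values
# correspond to block patterns that differ in exactly one pixel.
for idx, pattern in enumerate(gray_code(4)):
    pixels = [(pattern >> b) & 1 for b in range(4)]
    print(f"codeword {idx:2d} -> block pixels {pixels}")
```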

[0012] According to an embodiment, the geometry image is used as a luma sample array of the image and the multi-level occupancy map is used as one of the chroma sample arrays of the image.

[0013] A decoding method according to a second aspect comprises receiving a bitstream in a decoder; demultiplexing a texture image and multiplexed sample arrays of a geometry image and a multi-level occupancy map from the bitstream; obtaining information identifying a mapping scheme used for mapping codewords for image blocks of a binary occupancy map and variables used for applying the mapping scheme for creating the multi-level occupancy map; decoding the multi-level occupancy map into a plurality of samples; quantizing each sample value to a closest quantization level determined on the basis of said variables; and creating image blocks of a predetermined size of a binary occupancy map by inverse mapping said mapping scheme onto said quantized sample values.
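
Mirroring the encoder-side sketch given earlier (same assumed 2x2 blocks, row-major bit order and linear codeword spacing), a corresponding decoder-side sketch quantizes each decoded sample value to the closest quantization level and reconstructs the binary occupancy blocks by inverse mapping. It is an illustration only.

```python
import numpy as np

def decode_occupancy(multi, block=2, bit_depth=8):
    """Reconstruct a binary occupancy map from a decoded multi-level occupancy map."""
    n_codes = 1 << (block * block)
    step = ((1 << bit_depth) - 1) // (n_codes - 1)
    bh, bw = multi.shape
    occ = np.zeros((bh * block, bw * block), dtype=np.uint8)
    for by in range(bh):
        for bx in range(bw):
            # quantize the (possibly lossy-coded) sample to the closest quantization level
            code = int(round(float(multi[by, bx]) / step))
            code = max(0, min(n_codes - 1, code))
            # inverse mapping: codeword bits back to the pixels of the block
            for i in range(block * block):
                occ[by * block + i // block, bx * block + i % block] = (code >> i) & 1
    return occ
```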

[0014] According to an embodiment, the information identifying the mapping scheme and the variables are obtained by decoding the information from or along the bitstream or determining a predetermined mapping scheme and related variables.

[0015] According to an embodiment, the method further comprises decoding, from the bitstream, a coded picture into a decoded picture comprising the multiplexed sample arrays; and demultiplexing the sample arrays of the multi-level occupancy map from the multiplexed sample arrays.

[0016] Apparatuses according to further aspects comprise at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform the above methods.

[0017] Computer readable storage media according to further aspects comprise code for use by an apparatus, which when executed by a processor, causes the apparatus to perform the above methods.

[0018] Further aspects relate at least to an apparatus and computer readable storage medium with computer program code comprising means for performing the above methods and embodiments related thereto.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

[0020] Fig. 1 shows a system for capturing, encoding, decoding, reconstructing and viewing a three-dimensional scene;

[0021] Figs. 2a and 2b show a capture device and a viewing device;

[0022] Figs. 3a and 3b show an encoder and decoder for encoding and decoding texture pictures, geometry pictures and/or auxiliary pictures;

[0023] Figs. 4a, 4b, 4c and 4d show a setup for forming a stereo image of a scene to a user;

[0024] Fig. 5a illustrates projection of source volumes in a scene and parts of an object to projection surfaces, as well as determining depth information;

[0025] Fig. 5b shows an example of projecting an object using a cube map projection format;

[0026] Fig. 6 shows a projection of a source volume to a projection surface, and inpainting of a sparse projection;

[0027] Fig. 7 shows an example of occlusion of surfaces;

[0028] Fig. 8 shows a known test model for standardized validating of an intra point cloud frame encoding;

[0029] Fig. 9 shows a flow chart for the intra point cloud frame coding according to an embodiment;

[0030] Figure 10 illustrates a block chart of an encoder arranged to carry out the coding according to an embodiment;

[0031] Figures 11a and 11b show examples of mapping codewords for blocks of an occupancy map according to an embodiment;

[0032] Figure 12 shows an example of using a geometry image and a coded occupancy map for a multiplexing process according to an embodiment;

[0033] Figure 13 shows a flow chart for the occupancy map decoding according to an embodiment; and

[0034] Figure 14 shows a block chart of a decoder arranged to carry out the decoding according to an embodiment.

DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS

[0035] In the following, several embodiments of the invention will be described in the context of point cloud, voxel or mesh scene models for three-dimensional volumetric video and pixel and picture based two-dimensional video coding. It is to be noted, however, that the invention is not limited to specific scene models or specific coding technologies. In fact, the different embodiments have applications in any environment where coding of volumetric scene data is required.

[0036] It has been noticed here that identifying correspondences for motion-compensation in three-dimensional space is an ill-defined problem, as both the geometry and the respective attributes of the objects to be coded may change. For example, temporally successive “frames” do not necessarily have the same number of meshes, points or voxels. Therefore, compression of dynamic 3D scenes is inefficient.

[0037] A “voxel” of a three-dimensional world corresponds to a pixel of a two-dimensional world. Voxels exist in a three-dimensional grid layout. An octree is a tree data structure used to partition a three-dimensional space. Octrees are the three-dimensional analog of quadtrees. A sparse voxel octree (SVO) describes a volume of a space containing a set of solid voxels of varying sizes. Empty areas within the volume are absent from the tree, which is why it is called “sparse”.

[0038] A three-dimensional volumetric representation of a scene is determined as a plurality of voxels on the basis of input streams of at least one multicamera device. Thus, at least one but preferably a plurality (i.e. 2, 3, 4, 5 or more) of multicamera devices are used to capture a 3D video representation of a scene. The multicamera devices are distributed in different locations with respect to the scene, and therefore each multicamera device captures a different 3D video representation of the scene. The 3D video representations captured by each multicamera device may be used as input streams for creating a 3D volumetric representation of the scene, said 3D volumetric representation comprising a plurality of voxels. Voxels may be formed from the captured 3D points e.g. by merging the 3D points into voxels comprising a plurality of 3D points such that for a selected 3D point, all neighbouring 3D points within a predefined threshold from the selected 3D point are merged into a voxel without exceeding a maximum number of 3D points in a voxel.
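
A minimal illustrative sketch of such point-to-voxel merging, assuming a greedy selection of seed points, a fixed neighbourhood threshold and a maximum point count per voxel (all of which are choices made only for this sketch):

```python
import numpy as np

def merge_points_to_voxels(points, threshold=0.05, max_points=64):
    """Greedily merge 3D points into voxels (illustrative sketch).

    For each not-yet-assigned seed point, all unassigned neighbouring points
    within `threshold` are merged into the same voxel, without exceeding
    `max_points`. Returns one array of point indices per voxel.
    """
    points = np.asarray(points, dtype=np.float64)
    unassigned = np.ones(len(points), dtype=bool)
    voxels = []
    for seed in range(len(points)):
        if not unassigned[seed]:
            continue
        dist = np.linalg.norm(points - points[seed], axis=1)
        members = np.where(unassigned & (dist <= threshold))[0][:max_points]
        unassigned[members] = False
        voxels.append(members)
    return voxels

pts = np.random.rand(1000, 3)
print(len(merge_points_to_voxels(pts)), "voxels formed from", len(pts), "points")
```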

[0039] Voxels may also be formed through the construction of the sparse voxel octree. Each leaf of such a tree represents a solid voxel in world space; the root node of the tree represents the bounds of the world. The sparse voxel octree construction may have the following steps: 1) map each input depth map to a world space point cloud, where each pixel of the depth map is mapped to one or more 3D points; 2) determine voxel attributes such as colour and surface normal vector by examining the neighbourhood of the source pixel(s) in the camera images and the depth map; 3) determine the size of the voxel based on the depth value from the depth map and the resolution of the depth map; 4) determine the SVO level for the solid voxel as a function of its size relative to the world bounds; 5) determine the voxel coordinates on that level relative to the world bounds; 6) create new and/or traverse existing SVO nodes until arriving at the determined voxel coordinates; 7) insert the solid voxel as a leaf of the tree, possibly replacing or merging attributes from a previously existing voxel at those coordinates. Nevertheless, the sizes of the voxels within the 3D volumetric representation of the scene may differ from each other. The voxels of the 3D volumetric representation thus represent the spatial locations within the scene.
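
As an illustration of steps 4 and 5 of the above construction, the following sketch derives the SVO level from a solid voxel's size relative to the world bounds and the voxel coordinates on that level; the particular rounding rules and the cubic world assumption are choices made only for this sketch:

```python
import math

def svo_level_and_coords(voxel_size, voxel_center, world_min, world_size):
    """Determine the SVO level for a solid voxel and its coordinates on that level.

    voxel_size  : edge length of the solid voxel
    voxel_center: (x, y, z) centre of the voxel in world space
    world_min   : (x, y, z) minimum corner of the (cubic) world bounds
    world_size  : edge length of the world bounds, i.e. of the octree root node
    """
    # Level 0 is the root; each deeper level halves the node size.
    level = max(0, int(math.floor(math.log2(world_size / voxel_size))))
    node_size = world_size / (1 << level)
    coords = tuple(int((c - m) // node_size) for c, m in zip(voxel_center, world_min))
    return level, coords

# A 0.1-unit voxel in a 16-unit world lands on level 7 (node size 0.125).
print(svo_level_and_coords(0.1, (3.2, 1.0, 5.7), (0.0, 0.0, 0.0), 16.0))
```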

[0040] A volumetric video frame is a complete sparse voxel octree that models the world at a specific point in time in a video sequence. Voxel attributes contain information like colour, opacity, surface normal vectors, and surface material properties. These are referenced in the sparse voxel octrees (e.g. colour of a solid voxel), but can also be stored separately.

[0041] Point clouds are commonly used data structures for storing volumetric content. Compared to point clouds, sparse voxel octrees describe a recursive subdivision of a finite volume with solid voxels of varying sizes, while point clouds describe an unorganized set of separate points limited only by the precision of the used coordinate values.

[0042] When encoding a volumetric video, each frame may produce several hundred megabytes or several gigabytes of voxel data which needs to be converted to a format that can be streamed to the viewer, and rendered in real-time. The amount of data depends on the world complexity and volume. The larger impact comes in a multi-device recording setup with a number of separate locations where the cameras are recording. Such a setup produces more information than a camera at a single location.

[0043] Fig. 1 shows a system for capturing, encoding, decoding, reconstructing and viewing a three-dimensional scene, that is, for 3D video and 3D audio digital creation and playback. The task of the system is that of capturing sufficient visual and auditory information from a specific scene to be able to create a scene model such that a convincing reproduction of the experience, or presence, of being in that location can be achieved by one or more viewers physically located in different locations and optionally at a time later in the future. Such reproduction requires more information than can be captured by a single camera or microphone, in order that a viewer can determine the distance and location of objects within the scene using their eyes and their ears. To create a pair of images with disparity, two camera sources are used. In a similar manner, for the human auditory system to be able to sense the direction of sound, at least two microphones are used (the commonly known stereo sound is created by recording two audio channels). The human auditory system can detect the cues, e.g. a timing difference between the audio signals, to detect the direction of sound.

[0044] The system of Fig. 1 may consist of three main parts: image sources, a server and a rendering device. A video source SRC1 may comprise multiple cameras CAM1, CAM2, ..., CAMN with overlapping fields of view so that regions of the view around the video capture device are captured from at least two cameras. The video source SRC1 may comprise multiple microphones to capture the timing and phase differences of audio originating from different directions. The video source SRC1 may comprise a high-resolution orientation sensor so that the orientation (direction of view) of the plurality of cameras CAM1, CAM2, ..., CAMN can be detected and recorded. The cameras or the computers may also comprise or be functionally connected to means for forming distance information corresponding to the captured images, for example so that the pixels have corresponding depth data. Such depth data may be formed by scanning the depth or it may be computed from the different images captured by the cameras. The video source SRC1 comprises or is functionally connected to, or each of the plurality of cameras CAM1, CAM2, ..., CAMN comprises or is functionally connected to, a computer processor and memory, the memory comprising computer program code for controlling the source and/or the plurality of cameras. The image stream captured by the video source, i.e. the plurality of the cameras, may be stored on a memory device for use in another device, e.g. a viewer, and/or transmitted to a server using a communication interface. It needs to be understood that although a video source comprising three cameras is described here as part of the system, another amount of camera devices may be used instead as part of the system.

[0045] Alternatively or in addition to the source device SRC1 creating information for forming a scene model, one or more sources SRC2 of synthetic imagery may be present in the system, comprising a scene model. Such sources may be used to create and transmit the scene model and its development over time, e.g. instantaneous states of the model. The model can be created or provided by the source SRC1 and/or SRC2, or by the server SERVER. Such sources may also use the model of the scene to compute various video bitstreams for transmission.

[0046] One or more two-dimensional video bitstreams may be computed at the server SERVER or a device RENDERER used for rendering, or another device at the receiving end. When such computed video streams are used for viewing, the viewer may see a three-dimensional virtual world as described in the context of Figs. 4a-4d. The devices SRC1 and SRC2 may comprise or be functionally connected to one or more computer processors (PROC2 shown) and memory (MEM2 shown), the memory comprising computer program (PROGR2 shown) code for controlling the source device SRC1/SRC2. The image stream captured by the device and the scene model may be stored on a memory device for use in another device, e.g. a viewer, or transmitted to a server or the viewer using a communication interface COMM2. There may be a storage, processing and data stream serving network in addition to the capture device SRC1. For example, there may be a server SERVER or a plurality of servers storing the output from the capture device SRC1 or device SRC2 and/or to form a scene model from the data from devices SRC1, SRC2. The device SERVER comprises or is functionally connected to a computer processor PROC3 and memory MEM3, the memory comprising computer program PROGR3 code for controlling the server. The device SERVER may be connected by a wired or wireless network connection, or both, to sources SRC1 and/or SRC2, as well as the viewer devices VIEWER1 and VIEWER2 over the communication interface COMM3.

[0047] The creation of a three-dimensional scene model may take place at the server SERVER or another device by using the images captured by the devices SRC1. The scene model may be a model created from captured image data (a real-world model), or a synthetic model such as on device SRC2, or a combination of such. As described later, the scene model may be encoded to reduce its size and transmitted to a decoder, for example viewer devices.

[0048] For viewing the captured or created video content, there may be one or more viewer devices VIEWER 1 and VIEWER2. These devices may have a rendering module and a display module, or these functionalities may be combined in a single device. The devices may comprise or be functionally connected to a computer processor PROC4 and memory MEM4, the memory comprising computer program PROG4 code for controlling the viewing devices. The viewer (playback) devices may consist of a data stream receiver for receiving a video data stream and for decoding the video data stream. The video data stream may be received from the server SERVER or from some other entity, such as a proxy server, an edge server of a content delivery network, or a file available locally in the viewer device. The data stream may be received over a network connection through communications interface COMM4, or from a memory device MEM6 like a memory card CARD2. The viewer devices may have a graphics processing unit for processing of the data to a suitable format for viewing. The viewer VIEWER1 may comprise a high-resolution stereo-image head-mounted display for viewing the rendered stereo video sequence. The head-mounted display may have an orientation sensor DET1 and stereo audio headphones. The viewer VIEWER2 may comprise a display (either two-dimensional or a display enabled with 3D technology for displaying stereo video), and the rendering device may have an orientation detector DET2 connected to it. Alternatively, the viewer VIEWER2 may comprise a 2D display, since the volumetric video rendering can be done in 2D by rendering the viewpoint from a single eye instead of a stereo eye pair.

[0049] It needs to be understood that Fig. 1 depicts one SRC1 device and one SRC2 device, but generally the system may comprise more than one SRC1 device and/or SRC2 device.

[0050] Any of the devices (SRC1, SRC2, SERVER, RENDERER, VIEWER1, VIEWER2) may be a computer or a portable computing device, or be connected to such or configured to be connected to such. Moreover, even if the devices (SRC1, SRC2, SERVER, RENDERER, VIEWER1, VIEWER2) are depicted as a single device in Fig. 1, they may comprise multiple parts or may be comprised of multiple connected devices. For example, it needs to be understood that SERVER may comprise several devices, some of which may be used for editing the content produced by SRC1 and/or SRC2 devices, some others for compressing the edited content, and a third set of devices may be used for transmitting the compressed content. Such devices may have computer program code for carrying out methods according to various examples described in this text.

[0051] Figs. 2a and 2b show a capture device and a viewing device, respectively. Fig. 2a illustrates a camera CAM1. The camera has a camera detector CAMDET1, comprising a plurality of sensor elements for sensing intensity of the light hitting the sensor element. The camera has a lens OBJ1 (or a lens arrangement of a plurality of lenses), the lens being positioned so that the light hitting the sensor elements travels through the lens to the sensor elements. The camera detector CAMDET1 has a nominal centre point CP1 that is a middle point of the plurality of sensor elements, for example for a rectangular sensor the crossing point of diagonals of the rectangular sensor. The lens has a nominal centre point PP1, as well, lying for example on the axis of symmetry of the lens. The direction of orientation of the camera is defined by the line passing through the centre point CP1 of the camera sensor and the centre point PP1 of the lens. The direction of the camera is a vector along this line pointing in the direction from the camera sensor to the lens. The optical axis of the camera is understood to be this line CP1-PP1. However, the optical path from the lens to the camera detector need not always be a straight line but there may be mirrors and/or some other elements which may affect the optical path between the lens and the camera detector.

[0052] Fig. 2b shows a head-mounted display (HMD) for stereo viewing. The head-mounted display comprises two screen sections or two screens DISP1 and DISP2 for displaying the left and right eye images. The displays are close to the eyes, and therefore lenses are used to make the images easily viewable and for spreading the images to cover as much as possible of the eyes' field of view. When the device will be used by a user, the user may put the device on her/his head so that it will be attached to the head of the user so that it stays in place even when the user turns his head. The device may have an orientation detecting module ORDET1 for determining the head movements and direction of the head. The head-mounted display gives a three-dimensional (3D) perception of the recorded/streamed content to a user.

[0053] The system described above may function as follows. Time-synchronized video and orientation data is first recorded with the capture devices. This can consist of multiple concurrent video streams as described above. One or more time-synchronized audio streams may also be recorded with the capture devices. The different capture devices may form image and geometry information of the scene from different directions. For example, there may be three, four, five, six or more cameras capturing the scene from different sides, like front, back, left and right, and/or at directions between these, as well as from the top or bottom, or any combination of these. The cameras may be at different distances, for example some of the cameras may capture the whole scene and some of the cameras may be capturing one or more objects in the scene. In an arrangement used for capturing volumetric video data, several cameras may be directed towards an object, looking onto the object from different directions, where the object is e.g. in the middle of the cameras. In this manner, the texture and geometry of the scene and the objects within the scene may be captured adequately. As mentioned earlier, the cameras or the system may comprise means for determining geometry information, e.g. depth data, related to the captured video streams. From these concurrent video and audio streams, a computer model of a scene may be created. Alternatively or additionally, a synthetic computer model of a virtual scene may be used. The models (at successive time instances) are then transmitted immediately or later to the storage and processing network for processing and conversion into a format suitable for subsequent delivery to playback devices. The conversion may involve processing and coding to improve the quality and/or reduce the quantity of the scene model data while preserving the quality at a desired level. Each playback device receives a stream of the data (either computed video data or scene model data) from the network, and renders it into a viewing reproduction of the original location which can be experienced by a user. The reproduction may be two-dimensional or three-dimensional (stereo image pairs).

[0054] Figs. 3a and 3b show an encoder and decoder for encoding and decoding texture pictures, geometry pictures and/or auxiliary pictures. A video codec consists of an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. Typically, the encoder discards and/or loses some information in the original video sequence in order to represent the video in a more compact form (that is, at a lower bitrate). An example of an encoding process is illustrated in Figure 3a. Figure 3a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T⁻¹); a quantization (Q) and inverse quantization (Q⁻¹); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).

[0055] An example of a decoding process is illustrated in Figure 3b. Figure 3b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T⁻¹); an inverse quantization (Q⁻¹); an entropy decoding (E⁻¹); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).

[0056] Many hybrid video encoders encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. the Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate). Video codecs may also provide a transform skip mode, which the encoders may choose to use. In the transform skip mode, the prediction error is coded in a sample domain, for example by deriving a sample-wise difference value relative to certain adjacent samples and coding the sample-wise difference value with an entropy coder.
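
A minimal sketch of this two-phase idea, using a flat (DC) intra prediction, a DCT of the prediction error and uniform quantization; the specific transform, quantizer and block size are illustrative assumptions, not those of any particular codec:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def encode_block(block, ref, qstep=8.0):
    """Phase 1: predict the block; phase 2: transform and quantize the prediction error."""
    pred = np.full_like(block, ref, dtype=np.float64)   # simple flat (DC) prediction
    residual = block - pred
    d = dct_matrix(block.shape[0])
    coeffs = d @ residual @ d.T                          # 2D DCT of the prediction error
    q = np.round(coeffs / qstep)                         # uniform quantization
    return pred, q

def decode_block(pred, q, qstep=8.0):
    d = dct_matrix(pred.shape[0])
    residual = d.T @ (q * qstep) @ d                     # dequantize + inverse DCT
    return pred + residual

block = np.array([[52, 55, 61, 66],
                  [70, 61, 64, 73],
                  [63, 59, 55, 90],
                  [67, 61, 68, 104]], dtype=np.float64)
pred, q = encode_block(block, ref=block.mean())
print(np.round(decode_block(pred, q)))   # approximate reconstruction of the block
```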

[0057] Many video encoders partition a picture into blocks along a block grid. For example, in the High Efficiency Video Coding (HEVC) standard, the following partitioning and definitions are used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate colour planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate colour planes and syntax structures used to code the samples. A CU with the maximum allowed size may be named an LCU (largest coding unit) or coding tree unit (CTU), and the video picture is divided into non-overlapping LCUs.

[0058] Entropy coding/decoding may be performed in many ways. For example, context-based coding/decoding may be applied, where both the encoder and the decoder modify the context state of a coding parameter based on previously coded/decoded coding parameters. Context-based coding may for example be context-adaptive binary arithmetic coding (CABAC) or context-based variable length coding (CAVLC) or any similar entropy coding. Entropy coding/decoding may alternatively or additionally be performed using a variable length coding scheme, such as Huffman coding/decoding or Exp-Golomb coding/decoding. Decoding of coding parameters from an entropy-coded bitstream or codewords may be referred to as parsing.
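
For instance, a zero-order Exp-Golomb codeword as mentioned above consists of a unary prefix of leading zeros followed by the binary representation of the value plus one. A minimal sketch of encoding and parsing such codewords (unsigned form only, bits represented as a string for clarity) is given below:

```python
def exp_golomb_encode(value):
    """Encode an unsigned integer as a zero-order Exp-Golomb codeword (bit string)."""
    bits = bin(value + 1)[2:]              # binary representation of value + 1
    return '0' * (len(bits) - 1) + bits    # unary prefix of leading zeros, then the bits

def exp_golomb_decode(bitstream, pos=0):
    """Parse one zero-order Exp-Golomb codeword starting at `pos`; return (value, new_pos)."""
    leading_zeros = 0
    while bitstream[pos + leading_zeros] == '0':
        leading_zeros += 1
    end = pos + 2 * leading_zeros + 1
    return int(bitstream[pos + leading_zeros:end], 2) - 1, end

encoded = ''.join(exp_golomb_encode(v) for v in [0, 1, 2, 7])
print(encoded)                              # 1 010 011 0001000
pos = 0
while pos < len(encoded):
    value, pos = exp_golomb_decode(encoded, pos)
    print(value, end=' ')                   # 0 1 2 7
```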

[0059] Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF). The file format for NAL unit structured video (ISO/IEC 14496-15) and the High Efficiency Image File Format (ISO/IEC 23008-12, which may be abbreviated HEIF) both derive from the ISOBMFF.

[0060] Out-of-band transmission, signaling or storage can be used for tolerance against transmission errors as well as for other purposes, such as ease of access or session negotiation. For example, a sample entry of a track in a file conforming to the ISO Base Media File Format may comprise parameter sets, while the coded data in the bitstream is stored elsewhere in the file or in another file. The phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the bitstream or the coded unit, respectively. The phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively. For example, the phrase along the bitstream may be used when the bitstream is contained in a file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.

[0061] Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented. The aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.

[0062] A basic building block in the ISO base media file format is called a box. Each box has a header and a payload. The box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.

[0063] According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
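
As an illustration of the box structure described above, the following sketch lists the type and size of each top-level box in an ISOBMFF file. Only the common 32-bit size and the 64-bit largesize forms are handled, the size == 0 case (box extends to the end of the file) is skipped, and the file name in the usage comment is hypothetical.

```python
import struct

def list_top_level_boxes(path):
    """Print the four-character type and size of each top-level box in an ISOBMFF file."""
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack('>I4s', header)
            if size == 0:
                # box extends to the end of the file; not handled in this sketch
                break
            if size == 1:
                # 64-bit largesize follows the 8-byte header
                size = struct.unpack('>Q', f.read(8))[0]
                payload = size - 16
            else:
                payload = size - 8
            print(f"box '{box_type.decode('ascii', 'replace')}' size {size}")
            f.seek(payload, 1)   # skip the payload (which may itself contain nested boxes)

# list_top_level_boxes('example.mp4')   # hypothetical file name
```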

[0064] In files conforming to the ISO base media file format, the media data may be provided in a media data ‘mdat’ box and the movie ‘moov’ box may be used to enclose the metadata. In some cases, for a file to be operable, both of the ‘mdat’ and ‘moov’ boxes may be required to be present. The movie ‘moov’ box may include one or more tracks, and each track may reside in one corresponding track ‘trak’ box. A track may be one of the many types, including a media track that refers to samples formatted according to a media compression format (and its encapsulation to the ISO base media file format).

[0065] In HEIF, still images are stored as items. All image items are independently coded and do not depend on any other item in their decoding. Any number of image items can be included in the same file.

[0066] The Matroska file format is capable of (but not limited to) storing any of video, audio, picture, or subtitle tracks in one file. Matroska may be used as a basis format for derived file formats, such as WebM. Matroska uses Extensible Binary Meta Language (EBML) as basis. EBML specifies a binary and octet (byte) aligned format inspired by the principle of XML. EBML itself is a generalized description of the technique of binary markup. A Matroska file consists of Elements that make up an EBML "document." Elements incorporate an Element ID, a descriptor for the size of the element, and the binary data itself. Elements can be nested. A Segment Element of Matroska is a container for other top-level (level 1) elements. A Matroska file may comprise (but is not limited to be composed of) one Segment. Multimedia data in Matroska files is organized in Clusters (or Cluster Elements), each containing typically a few seconds of multimedia data. A Cluster comprises BlockGroup elements, which in turn comprise Block Elements. A Cues Element comprises metadata which may assist in random access or seeking and may include file pointers or respective timestamps for seek points.

[0067] Figs. 4a, 4b, 4c and 4d show a setup for forming a stereo image of a scene to a user, for example a video frame of a 3D video. In Fig. 4a, a situation is shown where a human being is viewing two spheres A1 and A2 using both eyes E1 and E2. The sphere A1 is closer to the viewer than the sphere A2, the respective distances to the first eye E1 being LE1,A1 and LE1,A2. The different objects reside in space at their respective (x,y,z) coordinates, defined by the coordinate system SX, SY and SZ. The distance d12 between the eyes of a human being may be approximately 62-64 mm on average, varying from person to person between 55 and 74 mm. This distance is referred to as the parallax, on which the stereoscopic view of human vision is based. The viewing directions (optical axes) DIR1 and DIR2 are typically essentially parallel, possibly having a small deviation from being parallel, and define the field of view for the eyes. The head of the user has an orientation (head orientation) in relation to the surroundings, most easily defined by the common direction of the eyes when the eyes are looking straight ahead. That is, the head orientation tells the yaw, pitch and roll of the head in respect of a coordinate system of the scene where the user is.

[0068] When the viewer's body (thorax) is not moving, the viewer's head orientation is restricted by the normal anatomical ranges of movement of the cervical spine.

[0069] In the setup of Fig. 4a, the spheres A1 and A2 are in the field of view of both eyes. The centre-point O12 between the eyes and the spheres are on the same line. That is, from the centre-point, the sphere A2 is behind the sphere A1. However, each eye sees part of sphere A2 from behind A1, because the spheres are not on the same line of view from either of the eyes.

[0070] In Fig. 4b, a setup is shown where the eyes have been replaced by cameras C1 and C2, positioned at the location where the eyes were in Fig. 4a. The distances and directions of the setup are otherwise the same. Naturally, the purpose of the setup of Fig. 4b is to be able to take a stereo image of the spheres A1 and A2. The two images resulting from image capture are FC1 and FC2. The "left eye" image FC1 shows the image SA2 of the sphere A2 partly visible on the left side of the image SA1 of the sphere A1. The "right eye" image FC2 shows the image SA2 of the sphere A2 partly visible on the right side of the image SA1 of the sphere A1. This difference between the right and left images is called disparity, and this disparity, being the basic mechanism with which the human visual system (HVS) determines depth information and creates a 3D view of the scene, can be used to create an illusion of a 3D image.
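
Although the description above characterizes disparity qualitatively, the standard parallel pinhole-camera relation (not stated in this document and given here only as background) makes it concrete: horizontal disparity d = f·b / Z for baseline b, focal length f expressed in pixels and depth Z. A small numeric illustration:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal disparity in pixels for a parallel pinhole stereo pair: d = f * b / Z."""
    return focal_px * baseline_m / depth_m

# With a 63 mm baseline and a 1000-pixel focal length, a sphere 2 m away
# yields roughly twice the disparity of one 4 m away.
for depth in (2.0, 4.0):
    print(f"depth {depth} m -> disparity {disparity_px(0.063, 1000.0, depth):.1f} px")
```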

[0071] In this setup of Fig. 4b, where the inter-eye distances correspond to those of the eyes in Fig. 4a, the camera pair C1 and C2 has a natural parallax, that is, it has the property of creating natural disparity in the two images of the cameras. Natural disparity may be understood to be created even though the distance between the two cameras forming the stereo camera pair is somewhat smaller or larger than the normal distance (parallax) between the human eyes, e.g. essentially between 40 mm and 100 mm or even 30 mm and 120 mm.

[0072] It needs to be understood here that the images FC1 and FC2 may be captured by cameras C1 and C2, where the cameras C1 and C2 may be real-world cameras or they may be virtual cameras. In the case of virtual cameras, the images FC1 and FC2 may be computed from a computer model of a scene by setting the direction, orientation and viewport of the cameras C1 and C2 appropriately such that a stereo image pair suitable for viewing by the human visual system (HVS) is created.

[0073] In Fig. 4c, the creation of this 3D illusion is shown. The images FC1 and FC2 captured or computed by the cameras C1 and C2 are displayed to the eyes E1 and E2, using displays D1 and D2, respectively. The disparity between the images is processed by the human visual system so that an understanding of depth is created. That is, when the left eye sees the image SA2 of the sphere A2 on the left side of the image SA1 of sphere A1, and respectively the right eye sees the image SA2 of the sphere A2 on the right side, the human visual system creates an understanding that there is a sphere V2 behind the sphere V1 in a three-dimensional world. Here, it needs to be understood that the images FC1 and FC2 can also be synthetic, that is, created by a computer. If they carry the disparity information, synthetic images will also be seen as three-dimensional by the human visual system. That is, a pair of computer-generated images can be formed so that they can be used as a stereo image.

[0074] Fig. 4d illustrates how the principle of displaying stereo images to the eyes can be used to create 3D movies or virtual reality scenes having an illusion of being three-dimensional. The images FX1 and FX2 are either captured with a stereo camera or computed from a model so that the images have the appropriate disparity. By displaying a large number (e.g. 30) of frames per second to both eyes using displays D1 and D2 so that the images between the left and the right eye have disparity, the human visual system will create a cognition of a moving, three-dimensional image.

[0075] The field of view represented by the content may be greater than the displayed field of view e.g. in an arrangement depicted in Fig. 4d. Consequently, only a part of the content along the direction of view (a.k.a. viewing orientation) is displayed at a single time. This direction of view, that is, the head orientation, may be determined as a real orientation of the head e.g. by an orientation detector mounted on the head, or as a virtual orientation determined by a control device such as a joystick or mouse that can be used to manipulate the direction of view without the user actually moving his head. That is, the term "head orientation" may be used to refer to the actual, physical orientation of the user's head and changes in the same, or it may be used to refer to the virtual direction of the user's view that is determined by a computer program or a computer input device.

[0076] The content may enable viewing from several viewing positions within the 3D space. The texture picture(s), the geometry picture(s) and the geometry information may be used to synthesize the images FX1 and/or FX2 as if the displayed content was captured by camera(s) located at the viewing position.

[0077] The principle illustrated in Figs. 4a-4d may be used to create three-dimensional images to a viewer from a three-dimensional scene model (volumetric video) after the scene model has been encoded at the sender and decoded and reconstructed at the receiver. Because volumetric video describes a 3D scene or object at different (successive) time instances, such data can be viewed from any viewpoint. Therefore, volumetric video is an important format for any augmented reality, virtual reality and mixed reality applications, especially for providing viewing capabilities having six degrees of freedom (so-called 6DOF viewing).

[0078] Fig. 5a illustrates projection of source volumes in a digital scene model SCE and parts of an object model OBJ1, OBJ2, OBJ3, BG4 to projection surfaces S1, S2, S3, S4, as well as determining depth information for the purpose of encoding volumetric video.

[0079] The projection of source volumes SV1, SV2, SV3, SV4 may result in texture pictures and geometry pictures, and there may be geometry information related to the projection source volumes and/or projection surfaces. Texture pictures, geometry pictures and projection geometry information may be encoded into a bitstream. A texture picture may comprise information on the colour data of the source of the projection. Through the projection, such colour data may result in pixel colour information in the texture picture. Pixels may be coded in groups, e.g. coding units of rectangular shape. The projection geometry information may comprise but is not limited to one or more of the following:

- projection type, such as planar projection or equirectangular projection

- projection surface type, such as a cube, sphere, cylinder, polyhedron

- location of the projection surface in 3D space

- orientation of the projection surface in 3D space

- size of the projection surface in 3D space

- type of a projection centre, such as a projection centre point, axis, or plane, from which a geometry primitive is projected onto the projection surface

- location and/or orientation of a projection centre.

[0080] The projection may take place by projecting the geometry primitives (points of a point cloud, triangles of a triangle mesh or voxels of a voxel array) of a source volume SV1, SV2, SV3, SV4 (or an object OBJ1, OBJ2, OBJ3, BG4) onto a projection surface S1, S2, S3, S4. The geometry primitives may comprise information on the texture, for example a colour value or values of a point, a triangle or a voxel. The projection surface may surround the source volume at least partially such that projection of the geometry primitives happens from the centre of the projection surface outwards to the surface. For example, a cylindrical surface has a projection centre axis and a spherical surface has a projection centre point. A cubical or rectangular surface may have projection centre planes or a projection centre axis or point, and the projection of the geometry primitives may take place either orthogonally to the sides of the surface or from the projection centre axis or point outwards to the surface. The projection surfaces, e.g. cylindrical and rectangular, may be open from the top and the bottom such that when the surface is cut and rolled out on a two-dimensional plane, it forms a rectangular shape. Such a rectangular shape with pixel data can be encoded and decoded with a video codec.

[0081] Alternatively or in addition, the projection surface, such as a planar surface or a sphere, may be inside a group of geometry primitives, e.g. inside a point cloud that defines a surface. In the case of an inside projection surface, the projection may take place from outside in towards the centre and may result in sub-sampling of the texture data of the source.

[0082] In a point cloud based scene model or object model, points may be represented with any floating point coordinates. A quantized point cloud may be used to reduce the amount of data, whereby the coordinate values of the point cloud are represented e.g. with 10-bit, 12-bit or 16-bit integers. Integers may be used because hardware accelerators may be able to operate on integers more efficiently. The points in the point cloud may have associated colour, reflectance, opacity etc. texture values. The points in the point cloud may also have a size, or a size may be the same for all points. The size of the points may be understood as indicating how large an object the point appears to be in the model in the projection. The point cloud is projected by ray casting from the projection surface to find out the pixel values of the projection surface. In such a manner, the topmost point remains visible in the projection, while points closer to the centre of the projection surface may be occluded. In other words, in general, the original point cloud, meshes, voxels, or any other model is projected outwards to a simple geometrical shape, this simple geometrical shape being the projection surface.
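
A minimal sketch of such coordinate quantization, mapping floating-point coordinates within a known bounding box to, say, 10-bit unsigned integers; the bounding box, bit depth and rounding rule are assumptions of this illustration only:

```python
import numpy as np

def quantize_point_cloud(points, bbox_min, bbox_max, bits=10):
    """Quantize floating-point point coordinates to unsigned integers of `bits` bits."""
    points = np.asarray(points, dtype=np.float64)
    bbox_min = np.asarray(bbox_min, dtype=np.float64)
    bbox_max = np.asarray(bbox_max, dtype=np.float64)
    scale = ((1 << bits) - 1) / (bbox_max - bbox_min)   # per-axis scaling to the integer range
    return np.round((points - bbox_min) * scale).astype(np.uint16)

pts = [[0.1, 0.5, 0.9], [1.0, 2.0, 3.0]]
print(quantize_point_cloud(pts, bbox_min=[0, 0, 0], bbox_max=[1, 2, 3]))
```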

[0083] Different projection surfaces may have different characteristics in terms of projection and reconstruction. In the sense of computational complexity, a projection to a cubical surface may be the most efficient, and a cylindrical projection surface may provide accurate results efficiently. Also cones, polyhedron-based parallelepipeds (hexagonal or octagonal, for example) and spheres or a simple plane may be used as projection surfaces.

[0084] The phrase along the bitstream (e.g. indicating along the bitstream) may be defined to refer to out-of-band transmission, signalling, or storage in a manner that the out-of-band data is associated with the bitstream. The phrase decoding along the bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signalling, or storage) that is associated with the bitstream. For example, an indication along the bitstream may refer to metadata in a container file that encapsulates the bitstream.

[0085] As illustrated in Fig. 5a, a first texture picture may be encoded into a bitstream, and the first texture picture may comprise a first projection of texture data of a first source volume SV1 of a scene model SCE onto a first projection surface S1. The scene model SCE may comprise a number of further source volumes SV2, SV3, SV4.

[0086] In the projection, data on the position of the originating geometry primitive may also be determined, and based on this determination, a geometry picture may be formed. This may happen for example so that depth data is determined for each or some of the texture pixels of the texture picture. Depth data is formed such that the distance from the originating geometry primitive such as a point to the projection surface is determined for the pixels. Such depth data may be represented as a depth picture, and similarly to the texture picture, such geometry picture (in this example, depth picture) may be encoded and decoded with a video codec. This first geometry picture may be seen to represent a mapping of the first projection surface to the first source volume, and the decoder may use this information to determine the location of geometry primitives in the model to be reconstructed. In order to determine the position of the first source volume and/or the first projection surface and/or the first projection in the scene model, there may be first geometry information encoded into or along the bitstream.

[0087] A picture may be defined to be either a frame or a field. A frame may be defined to comprise a matrix of luma samples and possibly the corresponding chroma samples. A field may be defined to be a set of alternate sample rows of a frame. Fields may be used as encoder input for example when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or may be subsampled when compared to luma sample arrays. Some chroma formats may be summarized as follows:

- In monochrome sampling there is only one sample array, which may be nominally considered the luma array.

- In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.

- In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.

- In 4:4:4 sampling when no separate colour planes are in use, each of the two chroma arrays has the same height and width as the luma array.
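For illustration only (not part of the specification), the following Python sketch captures the chroma sample array dimensions implied by the formats listed above; the function name and its arguments are chosen here merely for illustration:

    def chroma_array_size(luma_width, luma_height, chroma_format):
        """Return (width, height) of each chroma array, or None for monochrome."""
        if chroma_format == "monochrome":
            return None                                  # only the luma array exists
        if chroma_format == "4:2:0":
            return (luma_width // 2, luma_height // 2)   # half width, half height
        if chroma_format == "4:2:2":
            return (luma_width // 2, luma_height)        # half width, same height
        if chroma_format == "4:4:4":
            return (luma_width, luma_height)             # same width and height
        raise ValueError("unknown chroma format")

    # e.g. a 1280x720 picture in 4:2:0 has two 640x360 chroma sample arrays
    assert chroma_array_size(1280, 720, "4:2:0") == (640, 360)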

[0088] It is possible to code sample arrays as separate colour planes into the bitstream and respectively decode separately coded colour planes from the bitstream. When separate colour planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.

[0089] An attribute picture may be defined as a picture that comprises additional information related to an associated texture picture. An attribute picture may for example comprise surface normal, opacity, or reflectance information for a texture picture. A geometry picture may be regarded as one type of an attribute picture, although a geometry picture may be treated as its own picture type, separate from an attribute picture.

[0090] Texture picture(s) and the respective geometry picture(s), if any, and the respective attribute picture(s) may have the same or different chroma format.

[0091] Terms texture image and texture picture may be used interchangeably. Terms geometry image and geometry picture may be used interchangeably. A specific type of a geometry image is a depth image. Embodiments described in relation to a geometry image equally apply to a depth image, and embodiments described in relation to a depth image equally apply to a geometry image. Terms attribute image and attribute picture may be used interchangeably. A geometry picture and/or an attribute picture may be treated as an auxiliary picture in video/image encoding and/or decoding.

[0092] Depending on the context, a pixel may be defined to be a sample of one of the sample arrays of the picture or may be defined to comprise the collocated samples of all the sample arrays of the picture.

[0093] Multiple source volumes (objects) may be encoded as texture pictures, geometry pictures and projection geometry information into the bitstream in a similar manner. That is, as in Fig. 5a, the scene model SCE may comprise multiple objects OBJ1, OBJ2, OBJ3, OBJ4, and these may be treated as source volumes SV1, SV2, SV3, SV4 and each object may be coded as a texture picture, geometry picture and projection geometry information.

[0094] In the above, the first texture picture of the first source volume SV1 and further texture pictures of the other source volumes SV2, SV3, SV4 may represent the same time instance. That is, there may be a plurality of texture and geometry pictures and projection geometry information for one time instance, and the other time instances may be coded in a similar manner. Since the various source volumes are in this way producing sequences of texture pictures and sequences of geometry pictures, as well as sequences of projection geometry information, the inter-picture redundancy in the picture sequences can be used to encode the texture and geometry data for the source volumes efficiently, compared to the presently known ways of encoding volume data.

[0095] An object OBJ3 (source volume SV3) may be projected onto a projection surface S3 and encoded into the bitstream as a texture picture, geometry picture and projection geometry information as described above. Furthermore, such a source volume may be indicated to be static by encoding information into said bitstream on said projection geometry being static. A static source volume or object may be understood to be an object whose position with respect to the scene model remains the same over two or more or all time instances of the video sequence. For such a static source volume, the geometry data (geometry pictures) may also stay the same, that is, the object's shape remains the same over two or more time instances. For such a static source volume, some or all of the texture data (texture pictures) may stay the same over two or more time instances. By encoding information on the static nature of the source volume into the bitstream, the encoding efficiency may be further improved, as the same information may not need to be coded multiple times. In this manner, the decoder will also be able to use the same reconstruction or partially the same reconstruction of the source volume (object) over multiple time instances.

[0096] In an analogous manner, the different source volumes may be coded into the bitstream with different frame rates. For example, a slow-moving or relatively unchanging object (source volume) may be encoded with a first frame rate, and a fast-moving and/or changing object (source volume) may be coded with a second frame rate. The first frame rate may be slower than the second frame rate, for example one half or one quarter of the second frame rate, or even slower. For example, if the second frame rate is 30 frames per second, the first frame rate may be 15 frames per second, or 1 frame per second. The first and second objects (source volumes) may be "sampled" in synchrony such that some frames of the faster frame rate coincide with frames of the slower frame rate.

[0097] There may be one or more coordinate systems in the scene model. The scene model may have a coordinate system and one or more of the objects (source volumes) in the scene model may have their local coordinate systems. The shape, size, location and orientation of one or more projection surfaces may be encoded into or along the bitstream with respect to the scene model coordinates. Alternatively or in addition, the encoding may be done with respect to coordinates of the scene model or said first source volume. The choice of coordinate systems may improve the coding efficiency.

[0098] Information on temporal changes in location, orientation and size of one or more said projection surfaces may be encoded into or along the bitstream. For example, if one or more of the objects (source volumes) being encoded is moving or rotating with respect to the scene model, the projection surface moves or rotates with the object to preserve the projection as similar as possible.

[0099] If the projection volumes are changing, for example splitting or bending into two parts, the projection surfaces may be sub-divided respectively. Therefore, information on sub-division of one or more of the source volumes and respective changes in one or more of the projection surfaces may be encoded into or along the bitstream.

[0100] The resulting bitstream may then be output to be stored or transmitted for later decoding and reconstruction of the scene model.

[0101] Decoding of the information from the bitstream may happen in an analogous manner. A first texture picture may be decoded from a bitstream to obtain first decoded texture data, where the first texture picture comprises a first projection of texture data of a first source volume of the scene model to be reconstructed onto a first projection surface. The scene model may comprise a number of further source volumes. Then, a first geometry picture may be decoded from the bitstream to obtain first decoded scene model geometry data. The first geometry picture may represent a mapping of the first projection surface to the first source volume. First projection geometry information of the first projection may be decoded from the bitstream, the first projection geometry information comprising information of the position of the first projection surface in the scene model. Using this information, a reconstructed scene model may be formed by projecting the first decoded texture data to a first destination volume using the first decoded scene model geometry data and said first projection geometry information to determine where the decoded texture information is to be placed in the scene model.

[0102] A 3D scene model may be classified into two parts: first all dynamic parts, and second all static parts. The dynamic part of the 3D scene model may further be sub-divided into separate parts, each representing an object (or parts of an object) in the scene model, that is, source volumes. The static parts of the scene model may include e.g. static room geometry (walls, ceiling, fixed furniture) and may be compressed either by known volumetric data compression solutions, or, similar to the dynamic part, sub-divided into individual objects for projection-based compression as described earlier, to be encoded into the bitstream.

[0103] In an example, some objects may be a chair (static), a television screen (static geometry, dynamic texture), a moving person (dynamic). For each object, a suitable projection geometry (surface) may be found, e.g. cube projection to represent the chair, another cube for the screen, a cylinder for the person's torso, a sphere for a detailed representation of the person's head, and so on. The 3D data of each object may then be projected onto the respective projection surface and 2D planes are derived by "unfolding" the projections from three dimensions to two dimensions (plane). The unfolded planes will have several channels, typically three for the colour representation of the texture, e.g. RGB, YUV, and one additional plane for the geometry (depth) of each projected point for later reconstruction.

[0104] Frame packing may be defined to comprise arranging more than one input picture, which may be referred to as (input) constituent frames, into an output picture. In general, frame packing is not limited to any particular type of constituent frames, and the constituent frames need not have a particular relation with each other. In many cases, frame packing is used for arranging constituent frames of a stereoscopic video clip into a single picture sequence. The arranging may include placing the input pictures in spatially non-overlapping areas within the output picture. For example, in a side-by-side arrangement, two input pictures are placed within an output picture horizontally adjacently to each other. The arranging may also include partitioning of one or more input pictures into two or more constituent frame partitions and placing the constituent frame partitions in spatially non-overlapping areas within the output picture. The output picture or a sequence of frame-packed output pictures may be encoded into a bitstream e.g. by a video encoder. The bitstream may be decoded e.g. by a video decoder. The decoder or a post-processing operation after decoding may extract the decoded constituent frames from the decoded picture(s) e.g. for displaying.
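As an illustration of the side-by-side arrangement mentioned above, the following Python/NumPy sketch (illustrative only; the function names are assumptions made here and not part of any specification) packs and unpacks two equally sized single-component constituent frames:

    import numpy as np

    def pack_side_by_side(left_frame, right_frame):
        """Place two constituent frames horizontally adjacently in one output picture."""
        assert left_frame.shape == right_frame.shape
        return np.concatenate([left_frame, right_frame], axis=1)

    def unpack_side_by_side(packed):
        """Extract the two constituent frames from a side-by-side packed picture."""
        half = packed.shape[1] // 2
        return packed[:, :half], packed[:, half:]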

[0105] A standard 2D video encoder may then receive the planes as inputs, either as individual layers per object, or as a frame-packed representation of all objects. The texture picture may thus comprise a plurality of projections of texture data from further source volumes and the geometry picture may represent a plurality of mappings of projection surfaces to the source volume.

[0106] For each object, additional information may be signalled to allow for reconstruction at the decoder side:

- in the case of a frame-packed representation: separation boundaries may be signalled to recreate the individual planes for each object,

- in the case of projection-based compression of static content: classification of each object as static/dynamic may be signalled,

- relevant data to create real-world geometry data from the decoded (quantised) geometry channel(s), e.g. quantisation method, depth ranges, bit depth, etc. may be signalled,

- initial state of each object: geometry shape, location, orientation, size may be signalled,

- temporal changes for each object, either as changes to the initial state on a per-picture level, or as a function of time may be signalled, and

- nature of any additional auxiliary data may be signalled.

[0107] The decoder may receive the static 3D scene model data together with the video bitstreams representing the dynamic parts of the scene model. Based on the signalled information on the projection geometries, each object may be reconstructed in 3D space and the decoded scene model is created by fusing all reconstructed parts (objects or source volumes) together.

[0108] Standard video encoding hardware may be utilized for real-time compression/decompression of the projection surfaces that have been unfolded onto planes.

[0109] Single projection surfaces might suffice for the projection of very simple objects. Complex objects or larger scenes may require several (different) projections. The relative geometry of the object/scene may remain constant over a volumetric video sequence, but the location and orientation of the projection surfaces in space can change (and can be possibly predicted in the encoding, wherein the difference from the prediction is encoded).

[0110] Depth may be coded "outside-in" (indicating the distance from the projection surface to the 3D point), or "inside-out" (indicating the distance from the 3D point to the projection surface). In inside-out coding, depth of each projected point may be positive (with positive distance PD1) or negative (with negative distance). Fig. 5b shows an example of projecting an object OBJ1 using a cube map projection format, wherein there are six projection surfaces PS1,...,PS6 of the projection cube PC1. In this example, the projection surfaces are one on the left side PS1, one in front PS2, one on the right side PS3, one in the back PS4, one in the bottom PS5, and one in the top PS6 of the cube PC1 in the setup of Figure 5b. For clarity, only four of the projection surfaces will be shown and used in the rest of the specification. For example, in Figure 8a the projection surfaces on the left PS1, on the right PS3, in the front PS2 and in the back PS4 are shown. It is, however, clear that a skilled person may utilize similar principles on all six projection surfaces when the cube map projection format is used.

[0111] Fig. 6 shows a projection of a source volume to a projection surface, and inpainting of a sparse projection. A three-dimensional (3D) scene model, represented as objects OBJ1 comprising geometry primitives such as mesh elements, points, and/or voxels, may be projected onto one, or more, projection surfaces, as described earlier. As shown in Fig. 6, these projection surface geometries may be "unfolded" onto 2D planes (two planes per projected source volume: one for texture TP1, one for depth GP1), which may then be encoded using standard 2D video compression technologies. Relevant projection geometry information may be transmitted alongside the encoded video files to the decoder. The decoder may then decode the video and perform the inverse projection to regenerate the 3D scene model object ROBJ1 in any desired representation format, which may be different from the starting format, e.g. reconstructing a point cloud from original mesh model data.

[0112] In addition to the texture picture and geometry picture shown in Fig. 6, one or more auxiliary pictures related to one or more said texture pictures and the pixels thereof may be encoded into or along with the bitstream. The auxiliary pictures may e.g. represent texture surface properties related to one or more of the source volumes. Such texture surface properties may be e.g. surface normal information (e.g. with respect to the projection direction), reflectance and opacity (e.g. an alpha channel value). An encoder may encode, in or along with the bitstream, indication(s) of the type(s) of texture surface properties represented by the auxiliary pictures, and a decoder may decode, from or along the bitstream, indication(s) of the type(s) of texture surface properties represented by the auxiliary pictures.

[0113] Mechanisms to represent an auxiliary picture may include but are not limited to the following:

- A colour component sample array, such as a chroma sample array, of the geometry picture.

- An additional sample array in addition to the conventional three colour component sample arrays of the texture picture or the geometry picture.

- A constituent frame of a frame-packed picture that may also comprise texture picture(s) and/or geometry picture(s).

- An auxiliary picture included in specific data units in the bitstream. For example, the Advanced Video Coding (H.264/AVC) standard specifies a network abstraction layer (NAL) unit for a coded slice of an auxiliary coded picture without partitioning.

- An auxiliary picture layer within a layered bitstream. For example, the High Efficiency Video Coding (HEVC) standard comprises the feature of including auxiliary picture layers in the bitstream. An auxiliary picture layer comprises auxiliary pictures.

- An auxiliary picture bitstream separate from the bitstream(s) for the texture picture(s) and geometry picture(s). The auxiliary picture bitstream may be indicated, for example in a container file, to be associated with the bitstream(s) for the texture picture(s) and geometry picture(s).

[0114] The mechanism(s) to be used for auxiliary pictures may be pre-defined e.g. in a coding standard, or the mechanism(s) may be selected e.g. by an encoder and indicated in or along the bitstream. The decoder may decode the mechanism(s) used for auxiliary pictures from or along the bitstream.

[0115] The projection surface of a source volume may encompass the source volume, and there may be a model of an object in that source volume. Encompassing may be understood so that the object (model) is inside the surface such that when looking from the centre axis or centre point of the surface, the object's points are closer to the centre than the points of the projection surface are. The model may be made of geometry primitives, as described. The geometry primitives of the model may be projected onto the projection surface to obtain projected pixels of the texture picture. This projection may happen from inside-out. Alternatively or in addition, the projection may happen from outside-in.

Projecting 3D data onto 2D planes is independent from the 3D scene model representation format. There exist several approaches for projecting 3D data onto 2D planes, with the respective signalling. For example, there exist several mappings from spherical coordinates to planar coordinates, known from map projections of the globe, and the type and parameters of such projection may be signalled. For cylindrical projections, the aspect ratio of height and width may be signalled.

[0116] It may happen that when the projection of the object is performed on the projection surfaces PS1-PS6, some parts of the object OBJ1 or another object may occlude some other parts of the object OBJ1 which would otherwise be visible from the projection surface in question. Hence, some parts of the object OBJ1 would not be projected to any of the surfaces of the projection format.

[0117] Figure 7 illustrates an example of this kind of situation. In this example the person’s left hand occludes a part of the body of the person so that when viewed (projected) from the left hand’s side the occluded part of the body would not be projected. In the same way, the planar object on the person’s right hand occludes some parts of the person’s stomach when viewed from the front of the person.

[0118] According to an approach, which has been proposed for occlusion handling for projection-based volumetric video coding, the 3D volume surface is analysed with respect to the target projection surface before performing the 3D-to-2D projection. Therein, an entity that maps 3D texture data onto projection planes can choose the six sides of an oriented or an axis-aligned bounding box of a 3D point cloud as the initial set of projection planes. The mapping of 3D surface parts onto the projection planes only maps the closest coherent surface onto the projection planes. For example, if there are two surfaces of the 3D object where one surface occludes the other surface in the direction of the 2D plane's normal, then only the occluding surface is mapped onto the projection plane. The occluded surface requires the generation of another projection plane for mapping. The pose of the projection planes for the occluded points in the point cloud can be chosen such that it maximizes the rate-distortion performance for encoding the texture, depth and other auxiliary planes.

[0119] For the MPEG standard, there has been developed a test model for point cloud compression. MPEG W17248 discloses another projection-based approach for a test model for standardisation of dynamic point cloud compression. In intra coding of this test model, projected texture and geometry data is accompanied with an additional binary "occupancy map", which may also be referred to as an "inpainting mask", to establish which projected points are valid or not. Therein, texture and depth images are blurred/inpainted to increase coding efficiency. The occupancy map is used to signal if a 2D pixel should be reconstructed to 3D space at the decoder side. When the value of a sample in the binary occupancy map is zero, the collocated sample in the texture image and the geometry image is not valid, i.e., does not represent a point in the point cloud. When the value of a sample in the binary occupancy map is non-zero, the collocated sample in the texture image and the geometry image is valid, i.e., represents a point in the point cloud. It is noted that the mapping of occupancy map sample values (zero or non-zero) to not valid or valid points could alternatively be defined in an opposite manner. Fig. 8 illustrates the intra coding approach presented in MPEG W17248. For a comprehensive description of the test model, a reference is made to MPEG W17248. When the term occupancy map is used hereafter without a qualifier, it may be understood as a binary occupancy map.
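As a simplified illustration of how a binary occupancy map acts as a validity mask at the decoder side, the following Python/NumPy sketch selects the samples that represent points in the point cloud; it is illustrative only (the actual test model reconstruction is considerably more involved) and assumes the arrays share the occupancy map's spatial resolution:

    import numpy as np

    def valid_samples(occupancy_map, geometry_image, texture_image):
        """Return the geometry and texture samples that represent points in the point cloud."""
        mask = occupancy_map != 0            # non-zero occupancy sample -> valid point
        return geometry_image[mask], texture_image[mask]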

[0120] However, transmitting the additional occupancy map is costly in terms of the bit rate budget. W17248 uses a four-frame IPPP coding structure (intra frame followed by three inter frames), and the additional occupancy map is only transmitted in every fourth frame (I-frame). Thus, 3D resampling and additional motion information, possibly including padding and/or interpolation, is required to align the I-frame occupancy map to the three P-frames without a transmitted occupancy map. The coding and decoding of the occupancy map information and the 3D motion information also require significant computational, memory, and memory access resources. Moreover, the occupancy map information uses a codec different from the video codec used for texture and geometry images. Consequently, it is unlikely that such a dedicated occupancy map codec would be hardware-accelerated.

[0121] In the following, an enhanced method for point cloud video or image coding will be described in more detail, in accordance with an embodiment. The method can be applied to either or both of intra coding and inter coding of point cloud frames, i.e. intra (I) pictures or slices and inter (P or B) pictures or slices.

[0122] The method, which is disclosed in Figure 9, comprises inputting (900) a point cloud frame in an encoder; projecting (902) a 3D object represented by the point cloud frame onto a 2D patch; generating (904) a geometry image, a texture image and an occupancy map from the 2D patch; partitioning (906) the occupancy map into image blocks of a predetermined size along a predetermined block grid; assigning (908), on the basis of binary values of the image block, a codeword for each image block; mapping (910) the codeword of each image block to sample values of a multi-level occupancy map according to a mapping scheme; and multiplexing (912) the geometry image and the multi-level occupancy map into sample arrays of an image for compression.

[0123] According to an embodiment, the method additionally comprises signaling (914) information about the mapping scheme of the multi-level occupancy map in or along a bitstream comprising the compressed image data. In another embodiment, the mapping scheme of the multi-level occupancy map is pre-defined for example in a coding standard. The mapping scheme specifies a mapping between an occupancy map of w1 x h1 samples (in width x height) and a multi-level occupancy map of w2 x h2 samples, where w2 is less than or equal to w1, h2 is less than or equal to h1, and either or both of the following is true: w2 is less than w1, h2 is less than h1. For example, w2 may be equal to w1 / 2 and h2 may be equal to h1 / 2.

[0124] According to an embodiment, the method additionally comprises signaling variables for the mapping scheme of the multi-level occupancy map in or along a bitstream comprising the compressed image data.

[0125] The method is now further illustrated by referring to the encoder according to Figure 10. As compared to the encoder according to MPEG W17248 disclosed in Figure 8, the encoder of Figure 10 comprises many blocks or units that may operate similarly to the encoder according to MPEG W17248, such as the blocks "Decomposition into patches" and "Packing", which are configured to project a 3D object represented by the point cloud frame onto a 2D patch and generate a geometry image, a texture image and a multi-level occupancy map from the 2D patch. Moreover, the blocks "Video compression", "Auxiliary patch information compression", and "Multiplexer" may operate similarly to the encoder according to MPEG W17248.

[0126] As disclosed in the above method, the block-wise binarization block partitions the occupancy map into MxN blocks.

[0127] According to an embodiment, the values of M and N are determined based on the target chroma format. For example, if the target chroma format is 4:2:0, M and N are both equal to 2.

[0128] For each MxN block, a (MxN)-bit codeword is formed from the binary sample values of the MxN block. The binary sample values may be mapped to the codeword in a plurality of ways. Figure 11a shows an example of a mapping scheme for mapping codewords for pixel values of 2x2 blocks of the occupancy map.
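For illustration, one possible (hypothetical) codeword assignment for 2x2 blocks is a simple raster-order bit packing, sketched below in Python/NumPy; the particular table of Figure 11a is not reproduced here:

    import numpy as np

    def block_to_codeword(block_2x2):
        """Pack the four binary samples of a 2x2 block into a 4-bit codeword (raster order)."""
        bits = (np.asarray(block_2x2).reshape(-1) != 0).astype(int)
        codeword = 0
        for bit in bits:
            codeword = (codeword << 1) | int(bit)
        return codeword                      # value in the range 0..15

    def codeword_to_block(codeword):
        """Inverse of block_to_codeword for a 2x2 block."""
        bits = [(codeword >> shift) & 1 for shift in (3, 2, 1, 0)]
        return np.array(bits).reshape(2, 2)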

[0129] According to an embodiment, the codeword mapping is selected in a manner that between any two adjacent codeword values, only one pixel in the respective occupancy map block changes state (from on to off, or vice versa). Figure 11b shows an example of such a mapping of codewords for pixel values of 2x2 blocks of the occupancy map. Consequently, if a codeword c derived from an uncompressed occupancy map changes due to the encoding to c+1 or c-1, only one pixel in the decoded occupancy map would change.
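A binary-reflected Gray code is one well-known mapping with this property; the Python sketch below (illustrative only, and not necessarily the table of Figure 11b) assigns 2x2 block patterns to codeword values and verifies that adjacent codeword values differ in exactly one pixel:

    def gray_block_bits(codeword):
        """Return the four occupancy bits assigned to a codeword value (2x2 block, raster order)."""
        pattern = codeword ^ (codeword >> 1)              # binary-reflected Gray code
        return [(pattern >> shift) & 1 for shift in (3, 2, 1, 0)]

    # Verify the one-pixel-change property between adjacent codeword values.
    for c in range(15):
        a, b = gray_block_bits(c), gray_block_bits(c + 1)
        assert sum(x != y for x, y in zip(a, b)) == 1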

[0130] As disclosed in the above method, a multi-level occupancy map with a lower resolution based on the target chroma format is created using the binary values of the original occupancy map as a basis. Thus, the (MxN)-bit codeword c is mapped to sample values d for creating the multi-level occupancy map. For example, the following mapping may be used: d = A x c + B, where A and B are selected constants. For example, for M = N = 2 and an 8-bit dynamic range in a 16-level occupancy map, A can be set equal to 16 and B can be set equal to 8. Consequently, the sample values d would be among the set 8, 24, 40, 56, 72, 88, ..., 248. In an embodiment, the values of A and B are selected by an encoder, are considered as variables for the mapping scheme, and are signalled by the encoder in or along the bitstream. In another embodiment, the values of A and B are pre-defined for example in a coding standard.
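For illustration, the mapping d = A x c + B with the example values A = 16 and B = 8 can be sketched in Python as follows (function name chosen here for illustration only):

    A, B = 16, 8                                          # example values for M = N = 2

    def codeword_to_sample(c):
        """Map a 4-bit codeword (0..15) to a multi-level occupancy map sample value."""
        return A * c + B

    levels = [codeword_to_sample(c) for c in range(16)]
    # levels == [8, 24, 40, 56, 72, 88, 104, 120, 136, 152, 168, 184, 200, 216, 232, 248]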

[0131] Then, in the sample array multiplexing block of Figure 10, the geometry image and the multi-level occupancy map are multiplexed as sample arrays of a picture, which may be referred to as a joint geometry-occupancy image. According to an embodiment, the geometry image is used as the luma sample array and the multi-level occupancy map is used as one of the chroma sample arrays of the picture.

[0132] The texture image and the multiplexed sample arrays of the geometry image and the multi-level occupancy map are compressed by video or image compressor(s). Moreover, the auxiliary patch information may be compressed by a compressor. The compressed image data from each compressor may be multiplexed into a bitstream, which may be transmitted as such, or stored as a file. Another possibility is to encapsulate the compressed texture image data as a track or an image item in a file and the compressed multiplexed sample arrays of a geometry image and the multi-level occupancy map as another track or image item in the file.
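A simplified Python/NumPy sketch of the sample array multiplexing described above could look as follows; it is illustrative only, assumes single-component 8-bit arrays and a 4:2:0 picture, and simply fills the unused chroma array with a constant:

    import numpy as np

    def multiplex_geometry_occupancy(geometry, multilevel_occupancy, fill=128):
        """Return (Y, U, V) sample arrays of a joint geometry-occupancy picture."""
        h, w = geometry.shape
        assert multilevel_occupancy.shape == (h // 2, w // 2)   # 4:2:0 chroma array size
        y = geometry.astype(np.uint8)                           # geometry as luma
        u = multilevel_occupancy.astype(np.uint8)               # occupancy as one chroma array
        v = np.full((h // 2, w // 2), fill, dtype=np.uint8)     # unused chroma array
        return y, u, v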

[0133] According to an embodiment, for enabling the decoder to perform inverse binarization of the occupancy map, information about the used mapping scheme and variables of the multi-level occupancy map is signaled in or along a bitstream comprising the compressed and multiplexed image data to a decoder. In another embodiment, the mapping scheme of the multi-level occupancy map is pre-defined for example in a coding standard.

[0134] Figure 12 shows an example of the multiplexing process and its result. The two left-hand side images show the geometry image and the multi-level occupancy map. The geometry image is used as the YUV 4:2:0 luma sample array and the occupancy map is used as a YUV 4:2:0 chroma sample array of the picture. The third image shows the result of the multiplexing seen in YUV 4:2:0 format. In the fourth image, a detail of a rectangle in the upper left corner of the multiplexing result is shown magnified several times.

[0135] Thus, the occupancy map correlates to the projected 2D patch geometry image. In the method, prediction modes are shared between the geometry image and the occupancy map, thereby utilizing the correlation. Consequently, compared to the test model of MPEG W17248, the compressed data rate for occupancy map coding is expected to be lower. Moreover, the resource usage in terms of computational complexity, memory usage, and required memory bandwidth is lower than that in MPEG W17248. Also the 3D motion field coding in MPEG W17248 may be avoided by using the per-frame occupancy map coding, and consequently the related point cloud resampling and 3D motion compensation processes of MPEG W17248 may be avoided.

[0136] It is further noted that no dedicated codec is needed for the occupancy map, but existing video codecs can be used. Hardware-accelerated codec implementations can therefore be used. Nor are additional encoder/decoder instances required, as the occupancy data is packed with the geometry image.

[0137] A decoding method according to an aspect comprises, as shown in Figure 13, receiving (1300) a bitstream in a decoder; demultiplexing (1302) a texture image and multiplexed sample arrays of a geometry image and a multi-level occupancy map from the bitstream; decoding (1304), from or along the bitstream, information identifying a mapping scheme used for mapping codewords for image blocks of a binary occupancy map and variables used for applying the mapping scheme for creating the multi-level occupancy map; decoding (1306) the multi-level occupancy map into a plurality of samples; quantizing (1308) each sample value to a closest quantization level determined on the basis of said variables; and creating (1310) image blocks of a predetermined size of a binary occupancy map by inverse mapping said mapping scheme onto said quantized sample values.

[0138] In an embodiment, rather than decoding, from or along the bitstream, information identifying a mapping scheme used for mapping codewords for image blocks of a binary occupancy map, a pre-defined mapping scheme is used. The mapping scheme may be pre-defined for example in a coding standard.

[0139] In an embodiment, rather than decoding, from or along the bitstream, variables used for applying the mapping scheme for creating the multi-level occupancy map, pre-defined values for the variables are used. The variable values may be pre-defined for example in a coding standard.

[0140] It needs to be understood that variables used for applying the mapping scheme for creating the multi-level occupancy map can alternatively or additionally comprise variables used as a part of decoding the multi-level occupancy map according to the mapping scheme to a binary occupancy map.

[0141] Figure 14 shows a decoder according to an embodiment. Similarly to the encoder of Figure 10, the decoder of Figure 14 comprises many blocks or units that may operate similarly to a decoder according to MPEG W17248, such as the blocks "Demultiplexer", "Video decompression", "Auxiliary patch information decompression", and "2D-to-3D reconstruction". It is noted that the operational order of the blocks in Figure 14 is from right to left.

[0142] According to an embodiment, the sample array of the multi-level occupancy map is demultiplexed from the sample arrays of the decoded joint geometry-occupancy image. As shown in Figure 14, the sample array de-multiplexing block may be present. Alternatively, the inverse binarization block and 2D-to-3D reconstruction block may operate on multi-component images. When present, the sample array de-multiplexing separates the sample arrays of a decoded picture into separate pictures, i.e. the decoded geometry picture and the decoded multi-level occupancy map.

[0143] The sample values of the decoded multi-level occupancy map are converted to a decoded occupancy map in a process, which may be called inverse binarization. In the inverse binarization, binary sample values of an MxN block in the decoded occupancy map are formed according to the indicated or pre-defined mapping scheme from a (MxN)-bit codeword that is derived from a decoded sample value of the decoded multi-level occupancy map. In an example, for each sample d' in the decoded multi-level occupancy map, the inverse binarization block may operate for example as follows:

- Sample value d' is rounded to the closest sample value quantization level among (A * c' + B), where c' is an integer in the range 0 to 2^(MxN) - 1, inclusive. Consequently, c' could be derived with the equation c' = round((d' - B) ÷ A).

- A block of MxN samples for the decoded occupancy map is created by inverse mapping the codeword c' corresponding to the closest sample value quantization level for d'. For example, the tables presented in Figures 11a and 11b for encoding can be used for the inverse mapping.
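For illustration, the two inverse binarization steps above may be sketched in Python for M = N = 2 with the example values A = 16 and B = 8; the codeword-to-block table used here is the illustrative Gray-code mapping from the earlier encoder sketch, not necessarily that of Figures 11a and 11b:

    import numpy as np

    A, B, MN = 16, 8, 4                                   # example values, M = N = 2

    def inverse_binarize_sample(d_prime):
        """Recover a 2x2 binary occupancy block from one decoded multi-level sample."""
        c_prime = int(round((d_prime - B) / A))           # closest quantization level
        c_prime = min(max(c_prime, 0), 2 ** MN - 1)       # clamp to the codeword range 0..15
        pattern = c_prime ^ (c_prime >> 1)                # illustrative Gray-code inverse mapping
        bits = [(pattern >> shift) & 1 for shift in (3, 2, 1, 0)]
        return np.array(bits).reshape(2, 2)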

[0144] According to an embodiment, the video encoding of the joint geometry-occupancy image is optimized in terms of rate-distortion in one or more of the following manners:

- The rounding may be performed as described above with the inverse binarization as a part of the distortion calculation, whereupon the distortion may be derived from the rounded reconstructed sample values d'.

- The distortion of the occupancy image may be derived in the bi-level occupancy map domain, i.e., by applying the entire inverse binarization map to each reconstructed geometry-occupancy block.

- In the rate-distortion optimized encoder mode selection for the joint geometry-occupancy image, the rate-distortion weighting of the occupancy image may differ from that of the geometry image in a manner that small distortion of the occupancy image is preferred over small bitrate.

[0145] According to an embodiment, a finer quantization step is selected for the encoding of the chroma sample array comprising the multi-level occupancy map compared to the quantization step of the luma sample array of the same picture.

[0146] According to an embodiment, the chroma sample array comprising the multi-level occupancy map is encoded using the transform skip mode, while transform-based coding is used in the luma sample array of the same picture.

[0147] According to an embodiment, filtering is turned off for encoding and/or decoding of the chroma sample array comprising the multi-level occupancy map. Filtering may be kept on for encoding and/or decoding of the luma sample array of the joint geometry-occupancy image. The filtering may comprise for example a deblocking loop filter (e.g. as specified in HEVC), a sample adaptive offset filter (e.g. as specified in HEVC), and/or an adaptive loop filter.

[0148] According to an embodiment, the method and the related embodiments are applied to a binary image of any type instead of or in addition to an occupancy map.

[0149] According to an embodiment, the method and the related embodiments are applied to a K-level image instead of a binary image, such as a binary occupancy map, where K is greater than 2. The term multi-level occupancy map in various embodiments may be regarded as an L-level image (i.e. with L possible values), where L is greater than K and L is less than or equal to the number of values representable with the bit-depth of the chroma sample array associated with the L-level image. Examples of the K-level image include but are not limited to the following:

A transparency image, where value 0 may indicate full transparency (i.e., the collocated sample in the texture image and the geometry image is not valid or not occupied), value K minus 1 may indicate that the collocated sample in the texture image is fully opaque, and values from 1 to K minus 2, inclusive, may indicate gradually increasing opacity.

A label image, where value 0 may indicate that collocated sample in the texture image and the geometry image is not valid, and non-zero values may serve as a label or identifier of the object associated with the collocated sample in the texture image. For example, if a point cloud comprises a person sitting on a chair and holding a book in his hands, the points belonging to the person, the chair, and the book could be assigned labels 1, 2, and 3, respectively, in a K-level image with K equal to 4.

[0150] The MxN K-level values are mapped to sample values d for creating the L-level image. For example, the following mapping may be used: d = Σ(C x k + A x C_k + B), where the sum (Σ) is derived for all values of k in the range of 1 to K-1, inclusive, A, B, and C are selected constants, and C_k is the codeword resulting from a binary mapping of the "k-th plane" of the input samples, in which a sample value is equal to 1 if the corresponding sample in the K-level image is equal to k. The binary mapping of the k-th plane may be performed for example according to any presented embodiment. For example, for M = N = 2, K = 4, and an 8-bit dynamic range, A can be set equal to 2 and B can be set equal to 16, and C can be set equal to 64. Consequently, the sample values d would be among the set 64 x {0, 1, 2, 3} + {0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30} + 16, where curly brackets indicate a selection of any enclosed value. In an embodiment, the values of A, B, and/or C are selected by an encoder, are considered as variables for the mapping scheme, and are signalled by the encoder in or along the bitstream. In another embodiment, the values of A, B, and/or C are pre-defined for example in a coding standard.

[0151] According to an embodiment, the method and the related embodiments are applied to a K-level image out of which two multi-level images are derived. For example, embodiments may be applied to a K-level transparency image with K equal to 128. Two chroma sample arrays for the 4:2:0 chroma format may be derived. In an example, the first chroma sample array may represent two samples of each 2x2 block of samples of the K-level image, and the second chroma sample array may represent the remaining two samples of each 2x2 block of samples of the K-level image. In another example, the samples of each 2x2 block of the K-level image may be labelled as p, q, r, s in raster scan order. The first chroma sample array may comprise samples that are derived from codeword pairs of (p+q)/2 and (r+s)/2 (i.e., horizontally derived averages), and the second chroma sample array may comprise samples that are derived from codeword pairs of (p+r)/2 and (q+s)/2 (i.e., vertically derived averages).

[0152] According to an embodiment, instead of a geometry image, the method and the related embodiments are applied to an attribute image, such as an image containing surface normals, opacity, reflectance, albedo, or other material and surface attributes.

[0153] According to an embodiment, the method and the related embodiments are applied to an image in which the luma sample array represents an attribute image of a first type (e.g. depth), a first chroma sample array (e.g. U sample array) represents an occupancy map, and a second chroma sample array (e.g. V sample array) represents an attribute image of a second type (e.g. reflectance).

[0154] According to an embodiment, the second chroma sample array may comprise representative attribute values for those points that are indicated to be valid according to the first chroma sample array. For example, a sample of the second chroma sample array may be the average attribute value of the respective luma samples in the texture image that are indicated to be valid according to the first chroma sample array. For example, three out of four luma samples may be indicated to be valid and the chroma sample value of the second chroma sample array may represent an average reflectance value of the points corresponding to the three luma samples.
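As an illustration of deriving such a representative attribute value, the following Python/NumPy sketch averages the attribute (e.g. reflectance) over the valid samples of a 2x2 block; the function name and the handling of blocks with no valid samples are assumptions made here for illustration only:

    import numpy as np

    def average_valid_attribute(attribute_block_2x2, occupancy_block_2x2, default=0):
        """Average the attribute over the samples of a 2x2 block marked valid by the occupancy."""
        mask = np.asarray(occupancy_block_2x2) != 0
        if not mask.any():
            return default                    # no valid points in this block
        return float(np.asarray(attribute_block_2x2)[mask].mean())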

[0155] According to an embodiment, the method and the related embodiments are applied to an image in which the luma sample array represents an attribute image of a first type (e.g. depth), a first chroma sample array (e.g. U sample array) represents a first binary image, such as an occupancy map, and a second chroma sample array (e.g. V sample array) represents a second binary image.

[0156] In the above, some embodiments have been described with reference to encoding. It needs to be understood that said encoding may comprise one or more of the following: encoding source image data into a bitstream, encapsulating the encoded bitstream in a container file and/or in packet(s) or stream(s) of a communication protocol, and announcing or describing the bitstream in a content description, such as the Media Presentation Description (MPD) of ISO/IEC 23009-1 (known as MPEG-DASH) or the IETF Session Description Protocol (SDP). Similarly, some embodiments have been described with reference to decoding. It needs to be understood that said decoding may comprise one or more of the following: decoding image data from a bitstream, decapsulating the bitstream from a container file and/or from packet(s) or stream(s) of a communication protocol, and parsing a content description of the bitstream.

[0157] In the above, some embodiments have been described with reference to blocks of a particular size (NxN) along a block grid. It needs to be understood that embodiments may be realized for non-square blocks (MxN) if the underlying coding system supports such. It also needs to be understood that embodiments may be realized with multiple block partitioning levels. For example, a block of NxN may be considered filled in different embodiments, when the underlying coding system supports block partitioning of LxL blocks (e.g. CTUs of HEVC) such that the NxN block is a coding unit.

[0158] In the above, some embodiments have been described with reference to encoding or decoding texture images, geometry images, and (optionally) attribute images. It needs to be understood that these images are not necessarily separate images in encoding and/or decoding. For example the following options exist to represent a geometry image and/or one or more attribute images in relation to the associated texture image:

- An additional sample array in addition to the conventional three colour component sample arrays of the texture picture.

- A constituent frame of a frame-packed picture that may also comprise texture picture(s).

[0159] In the above, some embodiments have been described with reference to encoding or decoding texture pictures, geometry pictures, projection geometry information, and (optionally) attribute pictures into or from a single bitstream. It needs to be understood that embodiments can be similarly realized when encoding or decoding texture pictures, geometry pictures, projection geometry information, and (optionally) attribute pictures into or from several bitstreams that are associated with each other, e.g. by metadata in a container file or media presentation description for streaming.

[0160] In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

[0161] Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

[0162] Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

[0163] The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.




 