

Title:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VOLUMETRIC VIDEO
Document Type and Number:
WIPO Patent Application WO/2019/185985
Kind Code:
A1
Abstract:
There are disclosed various methods, apparatuses and computer program products for volumetric video encoding and decoding. In some embodiments, one or more patches (125) formed from a three-dimensional image information are obtained. Said one or more patches (125) represent projection data of at least a part of an object to a projection plane. Said one or more patches (125) are projected to a projection plane; and an area (124) around the projected patch (125) is allocated to prevent projection of another patch to the allocated area (124). A current patch (126) of a current frame from said one or more patches (125) is compared to one or more patches (125) of a reference frame to find a reference patch candidate (125) for the current patch (126). Said reference patch candidate (125) is surrounded by a guard area (124). Patch alignment (122) for the patch (126) is performed on the basis of the reference patch candidate (125).

Inventors:
SCHWARZ SEBASTIAN (FI)
HANNUKSELA MISKA (FI)
PESONEN MIKA (FI)
RIDGE JUSTIN (US)
Application Number:
PCT/FI2019/050235
Publication Date:
October 03, 2019
Filing Date:
March 21, 2019
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04N19/597; G06T3/00; H04N13/268; H04N19/543; H04N13/106
Foreign References:
US20170221263A12017-08-03
Other References:
ZHANG, D.: "A new patch side information encoding method for PCC TMC2", MPEG 121st meeting, Gwangju, input document m42195 rev2, MPEG DOCUMENT MANAGEMENT SYSTEM, 22 January 2018 (2018-01-22), Retrieved from the Internet [retrieved on 20190606]
MAMMOU, K.: "PCC Test Model Category 2 v0", MPEG 120TH MEETING MACAU OUTPUT DOCUMENT W17248, 14 December 2017 (2017-12-14), Retrieved from the Internet [retrieved on 20190606]
PRADA, F. ET AL.: "Spatiotemporal Atlas Parameterization for Evolving Meshes", ACM TRANSACTIONS ON GRAPHICS (TOG). ACM DIGITAL LIBRARY, vol. 36, no. 4, 20 July 2017 (2017-07-20), XP058372832, Retrieved from the Internet [retrieved on 20190613], DOI: 10.1145/3072959.3073679
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:

1. A method comprising:

obtaining one or more patches formed from a three-dimensional image information of volumetric content, said one or more patches representing projection data of at least a part of an object to a projection plane;

projecting said one or more patches to the projection plane; and

allocating an area around the projected patch to prevent projection of another patch to the allocated area.

2. The method according to claim 1 comprising:

comparing a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

performing patch alignment for the current patch on the basis of the reference patch candidate.

3. The method according to claim 2, said patch alignment comprising:

performing a search between the current patch and the reference patch candidate to find one or more displacement vectors for the current patch, aligning it to the reference patch position.

4. The method according to claim 3 comprising:

selecting from the found one or more displacement vectors the displacement vector which results in the smallest difference value between the position of the reference patch and the position of the current patch in their respective 2D images; and

using the selected displacement vector as an alignment vector for the current patch.

5. The method according to claim 4, the selecting comprising one or more of:

deriving the displacement vector by minimization of a distortion metric;

deriving the displacement vector by feature matching;

deriving the displacement vector by point cloud position matching, in which an optimal match is found by minimizing an error between the 3D positions of the reference patch and the current patch.

6. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:

obtain one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

project said one or more patches to a projection plane; and

allocate an area around the projected patch to prevent projection of another patch to the allocated area.

7. The apparatus according to claim 6, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

compare a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

perform patch alignment for the current patch on the basis of the reference patch candidate.

8. The apparatus according to claim 7, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform said patch alignment by at least the following:

performing a search between the current patch and the reference patch candidate to find one or more displacement vectors for the current patch, aligning it to the reference patch position.

9. The apparatus according to claim 8, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

select from the found one or more displacement vectors the displacement vector which results in the smallest difference value between the position of the reference patch and the position of the current patch in their respective 2D images; and

use the selected displacement vector as an alignment vector for the current patch.

10. The apparatus according to claim 9, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

derive the displacement vector by minimization of a distortion metric;

derive the displacement vector by feature matching;

derive the displacement vector by point cloud position matching, in which an optimal match is found by minimizing an error between the 3D positions of the reference patch and the current patch.

11. The apparatus according to any of claims 6 to 10, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

limit the search within the reference patch candidate and the guard area of the reference patch candidate.

12. The apparatus according to any of claims 6 to 11, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

organize patches of the reference frame into one or more lists of reference patches by the projection plane; and

organize patches of the current frame into one or more lists of current patches by the projection plane; and

sort both the one or more lists of reference patches and the one or more lists of current patches by a same criterion.

13. The apparatus according to claim 12, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

compare each patch in the list of current patches to reference patch candidates.

14. The apparatus according to any of claims 6 to 13, said memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:

pack the current patch as closely as possible to the 2D position of its reference patch within the constraints of the guard area of the reference patch and the current patch.

15. An apparatus comprising:

means for obtaining one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

means for projecting said one or more patches to a projection plane; and

means for allocating an area around the projected patch to prevent projection of another patch to the allocated area.

Description:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VOLUMETRIC VIDEO

TECHNICAL FIELD

[0001] The present invention relates to an apparatus, a method and a computer program for volumetric video coding and decoding.

BACKGROUND

[0002] This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.

[0003] A video coding system may comprise an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. The encoder may discard some information in the original video sequence in order to represent the video in a more compact form, for example, to enable the storage/transmission of the video information at a lower bitrate than otherwise might be needed.

[0004] Volumetric video data represents a three-dimensional scene or object and can be used as input for virtual reality (VR), augmented reality (AR) and mixed reality (MR) applications. Such data describes the geometry attribute, e.g. shape, size, position in three-dimensional (3D) space, and respective attributes, e.g. colour, opacity, reflectance and any possible temporal changes of the geometry attribute and other attributes at given time instances. Volumetric video is either generated from 3D models through computer-generated imagery (CGI), or captured from real-world scenes using a variety of capture solutions, e.g. multi-camera, laser scan, combination of video and dedicated depth sensors, and more. Also, a combination of CGI and real-world data is possible.

[0005] Typical representation formats for such volumetric data are triangle meshes, point clouds (PCs), or voxel arrays. Temporal information about the scene can be included in the form of individual capture instances, i.e. "frames" in 2D video, or other means, e.g. position of an object as a function of time.

[0006] Identifying correspondences for motion-compensation in 3D-space may be an ill-defined problem, as both the geometry and respective attributes may change. For example, temporally successive "frames" do not necessarily have the same number of meshes, points or voxels. Therefore, compression of dynamic 3D scenes may be inefficient. 2D-video based approaches for compressing volumetric data, i.e. multiview + depth, may have better compression efficiency, but rarely cover the full scene. Therefore, they provide only limited six degrees of freedom (6DOF) capabilities.

[0007] Because volumetric video describes a 3D scene (or object), such data can be viewed from any viewpoint. Therefore, volumetric video may be an important format for any AR, VR, or MR applications, especially for providing 6DOF viewing capabilities.

[0008] Increasing computational resources and advances in 3D data acquisition devices have enabled reconstruction of highly detailed volumetric video representations of natural scenes. Infrared, lasers, time-of-flight and structured light are all examples of devices that can be used to construct 3D video data. Representation of the 3D data depends on how the 3D data is used. Dense voxel arrays have been used to represent volumetric medical data. In 3D graphics, polygonal meshes are extensively used. Point clouds on the other hand are well suited for applications such as capturing real world 3D scenes where the topology is not necessarily a 2D manifold. Another way to represent 3D data is coding this 3D data as a set of texture and depth maps, as is the case in multi-view plus depth. Closely related to the techniques used in multi-view plus depth is the use of elevation maps, and multi-level surface maps.

[0009] In dense point clouds or voxel arrays, the reconstructed 3D scene may contain tens or even hundreds of millions of points. If such representations are to be stored or interchanged between entities, then efficient compression may become essential.

[0010] The above mentioned volumetric video representation formats suffer from poor spatial and temporal coding performance.

[0011] There is, therefore, a need for solutions for improved coding of volumetric video.

SUMMARY

[0012] Now there has been invented an improved method and technical equipment implementing the method, by which the above problems are alleviated. Various aspects of the invention include a method, an apparatus (an encoder and/or a decoder), a system and a computer readable medium comprising a computer program or a signal stored therein, which are characterized by what is stated in the independent claims. Various details of the invention are disclosed in the dependent claims and in the corresponding images and description.

[0013] A volumetric video may comprise three-dimensional scenes represented as, for example, dynamic point clouds, arrays of voxels or mesh models or a combination of such. The three-dimensional scenes may be projected onto a number of projection surfaces having simple geometries, for example sphere(s), cylinder(s), cube(s), polyhedron(s) and/or plane(s). In this context, a projection surface may be a piece-wise continuous and smooth surface in three-dimensional space. Piece-wise smoothness may be understood so that there are regions of the surface where the direction of the surface normal does not change abruptly (i.e. the values of the coefficients of the surface normal’s coordinate components are continuous). A projection surface may comprise pieces of simple geometric surfaces. A projection surface may also evolve (change) over time. On such surfaces, the texture and geometry of point clouds, voxel arrays or mesh models may form pixel images, e.g. texture images and depth images (indicative of distance from the projection surface). These two images represent the same object projected onto the same geometry, therefore object boundaries are aligned in texture and depth image.

[0014] Such projection surfaces may be unfolded onto two-dimensional (2D) planes, e.g. resulting in a two-dimensional pixel image. Standard 2D video coding may be applied for each projection to code the pixel information resulting from the texture data. In connection with the texture information, relevant projection geometry information (geometry attributes), comprising e.g. projection or projection surface type, location and orientation of the projection surface in 3D space, and/or size of the projection surface, may be transmitted either in the same bitstream or separately along with the bitstream. At the receiver side, the bitstream may be decoded and volumetric video may be reconstructed from decoded 2D projections and projection geometry information.

[0015] Two-dimensional images may be projected from different parts of scene objects to form several patches. Such patches may be projections onto one of three orthogonal planes (front, side, top). Patches are derived by analysing surface normals and clustering related 3D data points. The projection plane for each such patch is the one of the above mentioned three planes with the closest surface normal to the average patch normal. All patches may be packed into a 2D grid for compression. For each patch a 3D vector is signalled to specify the patch location in 3D space for reprojection at the decoder side.
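As a concrete illustration of the plane selection described above, the following Python sketch picks, for each surface normal, the orthogonal projection plane whose normal it is most closely aligned with. The function name, axis ordering and example values are illustrative assumptions, not taken from the text; in practice the comparison would use the average normal of a clustered patch rather than individual points.

```python
# Minimal sketch: choose a projection plane per normal by comparing it against
# the three orthogonal axes (names and axis ordering are assumptions).
import numpy as np

AXES = np.array([
    [1.0, 0.0, 0.0],   # side plane (x)
    [0.0, 1.0, 0.0],   # top plane (y)
    [0.0, 0.0, 1.0],   # front plane (z)
])

def choose_projection_plane(normals: np.ndarray) -> np.ndarray:
    """Return, for each unit normal, the index of the axis it is most aligned with."""
    # |n . axis| is largest for the plane whose normal is closest to the given normal.
    scores = np.abs(normals @ AXES.T)   # shape (N, 3)
    return np.argmax(scores, axis=1)    # plane index per normal

# Example: one normal facing mostly along z, one along x.
normals = np.array([[0.1, 0.2, 0.97], [0.95, 0.1, 0.3]])
print(choose_projection_plane(normals))  # -> [2 0]
```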

[0016] Such patches may be gathered together to create a 2D grid which will later be encoded using conventional video codecs. The creation of this 2D grid may not minimize the size of the grid or make full use of the available pixels. Therefore, in accordance with some approaches there is provided a method to better assign locations for the patches to reduce the amount of bitrate required to encode the 2D grid. Keeping the 2D grid to the minimum required size might provide only small benefits in terms of coding efficiency, as current video coding technology is very good at encoding empty areas. However, it may bring large benefits in terms of required video buffer size. As buffer memory, especially at the decoder, may come at a high cost, optimised patch packing is desired.

[0017] The phrase along with the bitstream (e.g. indicating along with the bitstream) may be defined to refer to out-of-band transmission, signalling, or storage in a manner that the out-of-band data is associated with the bitstream. The phrase decoding along with the bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signalling, or storage) that is associated with the bitstream. For example, an indication along with the bitstream may refer to metadata in a container file that encapsulates the bitstream.

[0018] In accordance with an embodiment, there is provided an algorithm to increase temporal consistency between two consecutive frames of projected volumetric video data patches. A "guard area" (or "guard strip") around each patch of the first projected frame may be allocated. Such an area may loosen the overall patch packing but may allow for higher temporal consistency with successive frames in a later stage. A first volumetric video frame is packed in such a fashion, but with larger empty boundaries between patches. The 2D size, projection plane index and 3D location vector of each patch are stored. This frame and its patch mapping information is used as a reference for guiding the packing of a later frame. The later frame is not necessarily the next frame in temporal succession, but the next frame in encoding order or a frame referencing the current frame for motion-compensated prediction.
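A minimal sketch of the guard-area idea follows, assuming a boolean occupancy map and an integer guard width; the helper name place_with_guard, the grid size and the first-fit scan order are illustrative only and not prescribed by the text.

```python
# Illustrative sketch, not the patent's algorithm: place a patch on a 2D
# occupancy grid and reserve a guard margin of `guard` pixels around it so that
# no later patch can be projected into that area.
import numpy as np

def place_with_guard(occupancy: np.ndarray, patch_h: int, patch_w: int, guard: int):
    """Return (row, col) of the first free position whose guard-extended footprint is empty."""
    H, W = occupancy.shape
    gh, gw = patch_h + 2 * guard, patch_w + 2 * guard
    for r in range(H - gh + 1):
        for c in range(W - gw + 1):
            if not occupancy[r:r + gh, c:c + gw].any():
                occupancy[r:r + gh, c:c + gw] = True   # patch plus guard area is now reserved
                return r + guard, c + guard             # top-left corner of the patch itself
    raise RuntimeError("no free space; the grid would have to grow")

grid = np.zeros((256, 256), dtype=bool)
print(place_with_guard(grid, patch_h=40, patch_w=32, guard=4))
```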

[0019] The current frame is decomposed into patches, and the resulting patches are organised by projection plane index and sorted by size in descending order. For each patch, the most similar patch (closest match) in the reference frame is searched. Various approaches can be used, for example one or a combination of the following:

[0020] Intersection over Union (IoU) is suitable for consistent patches but may be unreliable in the context of 3D-to-2D projection with its varying patch borders. It does not provide per-pixel accuracy for later motion-compensation.

[0021] 2D patch size may be unreliable for a precise match, but a good starting point for a first, "rough" search. It does not provide per-pixel accuracy for later motion-compensation.

[0022] Patch location in 3D space may be unreliable for a precise match, but a good starting point for a first, "rough" search. It does not provide per-pixel accuracy for later motion-compensation.

[0023] SSD or SAD minimization (or more generally, minimization of a distortion metric) delivers per-pixel accuracy but may be computationally expensive if performed over a whole frame.

[0024] Image feature detectors such as SIFT or SURF can identify correlation between patches. They deliver per-pixel accuracy but may be computationally expensive if performed over a whole frame.

[0025] Combinations of at least two of the above mentioned approaches, or a sequential application of at least two of the above mentioned approaches, may also be used, e.g. limiting candidate patches by a 3D location threshold and refining by SIFT feature matching; one possible sequential search is sketched below.
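The following hedged sketch illustrates such a sequential search: candidate reference patches are first limited by a 3D location threshold, and the surviving candidate with the lowest SAD is chosen. SAD is used for the refinement step here instead of SIFT feature matching simply to keep the example self-contained; the Patch structure and the max_dist threshold are assumptions made for illustration.

```python
# Sketch of a sequential candidate search: prefilter reference patches by 3D
# location, then pick the candidate with the lowest SAD (illustrative only).
from dataclasses import dataclass
import numpy as np

@dataclass
class Patch:
    pixels: np.ndarray      # projected 2D patch samples (e.g. depth or texture)
    location3d: np.ndarray  # 3D location vector signalled for reprojection

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences over the overlapping region of two patches."""
    h = min(a.shape[0], b.shape[0]); w = min(a.shape[1], b.shape[1])
    return float(np.abs(a[:h, :w].astype(np.int64) - b[:h, :w].astype(np.int64)).sum())

def find_reference_candidate(current: Patch, references: list[Patch], max_dist: float = 8.0):
    """Return the best reference patch, or None if no candidate survives the prefilter."""
    near = [r for r in references
            if np.linalg.norm(r.location3d - current.location3d) <= max_dist]
    return min(near, key=lambda r: sad(current.pixels, r.pixels), default=None)
```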

[0026] Each patch is assigned a corresponding patch in the reference frame (including a possible pixel shift based on per-pixel matching). The "guard area" introduced in the reference frame now allows for a per-pixel optimised, temporally consistent packing between frames. Per-pixel metrics such as SAD optimisation or SIFT features provide the necessary pixel shift values for optimised packing.
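The pixel shift can be derived, for example, by an exhaustive integer-pixel SAD search limited to the guard area of the reference patch, as in the following sketch (the helper name best_shift and the single-channel patches are assumptions for illustration).

```python
# Sketch: slide the current patch over the reference patch padded by its guard
# area and return the displacement with the smallest SAD.
import numpy as np

def best_shift(cur: np.ndarray, ref: np.ndarray, guard: int) -> tuple[int, int]:
    """Exhaustive integer search over shifts limited to +/- guard pixels."""
    padded = np.pad(ref.astype(int), guard)     # reference patch plus its guard area
    h, w = cur.shape
    best, best_cost = (0, 0), float("inf")
    for dy in range(2 * guard + 1):
        for dx in range(2 * guard + 1):
            win = padded[dy:dy + h, dx:dx + w]
            if win.shape != (h, w):             # shift would leave the padded area
                continue
            cost = int(np.abs(cur.astype(int) - win).sum())
            if cost < best_cost:
                best, best_cost = (dy - guard, dx - guard), cost
    return best

# The current patch would then be packed at the reference patch's 2D position
# plus this shift, keeping collocated content between frames for the 2D encoder.
```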

[0027] If no suitable corresponding patch is found, the patch can be compared to patches for the other two projection plane indices or it is packed "as is" in an available space in the projection grid. In this case, a new guard area around the patch shall be introduced.

[0028] Theoretically, a patch should only have one reference patch. However, in the case that two or more patches are assigned the same reference patch, only the largest patch in the current frame is packed as described above and the search is iterated for the other patches until all patches are mapped.

[0029] Some embodiments provide a method for encoding and decoding volumetric video information. In some embodiments of the present invention there is provided a method, apparatus and computer program product for volumetric video coding as well as decoding.

[0030] Various aspects of examples of the invention are provided in the detailed description.

[0031] According to a first aspect, there is provided a method comprising: obtaining one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

projecting said one or more patches to a projection plane; and

allocating an area around the projected patch to prevent projection of another patch to the allocated area.

[0032] An apparatus according to a second aspect comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:

obtain one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

project said one or more patches to a projection plane; and

allocate an area around the projected patch to prevent projection of another patch to the allocated area.

[0033] A computer readable storage medium according to a third aspect comprises code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:

obtain one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

project said one or more patches to a projection plane; and

allocate an area around the projected patch to prevent projection of another patch to the allocated area.

[0034] An apparatus according to a fourth aspect comprises:

means for obtaining one or more patches formed from a three-dimensional image information, said one or more patches representing projection data of at least a part of an object to a projection plane;

means for projecting said one or more patches to a projection plane; and means for allocating an area around the projected patch to prevent projection of another patch to the allocated area.

[0035] According to a fifth aspect, there is provided a method comprising: obtaining one or more patches formed from a three-dimensional image information of volumetric content and projected to a projection plane, said one or more patches representing projection data of at least a part of an object to the projection plane and being allocated an area around the projected patch to prevent projection of another patch to the allocated area;

comparing a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

performing patch alignment for the patch on the basis of the reference patch candidate.

[0036] According to a sixth aspect, there is provided an apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:

obtain one or more patches formed from a three-dimensional image information of volumetric content and projected to a projection plane, said one or more patches representing projection data of at least a part of an object to the projection plane and being allocated an area around the projected patch to prevent projection of another patch to the allocated area;

compare a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

perform patch alignment for the patch on the basis of the reference patch candidate.

[0037] According to a seventh aspect, there is provided a computer readable storage medium comprising code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:

obtain one or more patches formed from a three-dimensional image information of volumetric content and projected to a projection plane, said one or more patches representing projection data of at least a part of an object to the projection plane and being allocated an area around the projected patch to prevent projection of another patch to the allocated area;

compare a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

perform patch alignment for the patch on the basis of the reference patch candidate.

[0038] According to an eighth aspect, there is provided an apparatus comprising: means for obtaining one or more patches formed from a three-dimensional image information of volumetric content and projected to a projection plane, said one or more patches representing projection data of at least a part of an object to the projection plane and being allocated an area around the projected patch to prevent projection of another patch to the allocated area;

means for comparing a current patch of a current frame from said one or more patches to one or more patches of a reference frame to find a reference patch candidate for the current patch, said reference patch candidate being surrounded by a guard area; and

means for performing patch alignment for the patch on the basis of the reference patch candidate.

[0039] Further aspects include at least apparatuses and computer program products/code stored on a non-transitory memory medium arranged to carry out the above methods.

BRIEF DESCRIPTION OF THE DRAWINGS

[0040] For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

[0041] Fig. 1 shows a system for capturing, encoding, decoding, reconstructing and viewing a three-dimensional scene;

[0042] Figs. 2a and 2b show a capture device and a viewing device;

[0043] Figs. 3a and 3b show an encoder and decoder for encoding and decoding texture pictures, geometry pictures and/or auxiliary pictures;

[0044] Figs. 4a, 4b, 4c and 4d show a setup for forming a stereo image of a scene to a user;

[0045] Figs. 5a and 5b illustrate projection of source volumes in a scene and parts of an object to projection surfaces;

[0046] Fig. 6 shows a projection of a source volume to a projection surface, and inpainting of a sparse projection;

[0047] Fig. 7 depicts the effect of occluded projections in a reconstructed point cloud;

[0048] Fig. 8a shows an example of projecting an object using a cube map projection format;

[0049] Fig. 8b shows an example of a cross section of a surface of a three-dimensional object enclosed in a bounding box as viewed from the top of the bounding box;

[0050] Fig. 8c shows projection surfaces established to project the occluded surfaces of the three-dimensional object of Fig. 8b;

[0051] Fig. 8d illustrates calculation of the relative pose of one projection surface with respect to another projection surface, in accordance with an embodiment;

[0052] Fig. 9a illustrates an example of an encoding element;

[0053] Fig. 9b illustrates an example of a decoding element.

[0054] Fig. 10 shows a simplified flow diagram of a processing chain to achieve temporal patch alignment of a current frame to a previously processed reference frame, in accordance with an embodiment;

[0055] Fig. 11 illustrates an example of patch packing.

[0056] Fig. 12a illustrates an example of forming a grid for patches and arranging the patches in the grid;

[0057] Fig. 12b illustrates an example of a patch extended by a given number of pixels in a guard area;

[0058] Fig. 12c illustrates an example of a result of a packing process, in accordance with an embodiment;

[0059] Fig. 12d illustrates an example of multi-pass encoding or pre-analysis of at least two pictures performed so that the padding used in the reference patch within the guard area is performed on the basis of the current patch;

[0060] Fig. 13 illustrates an example of a video coding GOP structure;

[0061] Fig. 14a illustrates a packing element, in accordance with an embodiment; and

[0062] Fig. 14b illustrates an example of a decoding element.

DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS

[0063] In the following, several embodiments of the invention will be described in the context of point cloud, voxel or mesh scene models for three-dimensional volumetric video and pixel and picture based two-dimensional video coding. It is to be noted, however, that the invention is not limited to specific scene models or specific coding technologies. In fact, the different embodiments have applications in any environment where coding of volumetric scene data is required.

[0064] Point clouds are commonly used data structures for storing volumetric content. Compared to point clouds, sparse voxel octrees describe a recursive subdivision of a finite volume with solid voxels of varying sizes, while point clouds describe an unorganized set of separate points limited only by the precision of the used coordinate values.

[0065] A volumetric video frame is a sparse voxel octree or a point cloud that models the world at a specific point in time, similar to a frame in a 2D video sequence. Voxel or point attributes contain information like colour, opacity, surface normal vectors, and surface material properties. These are referenced in the sparse voxel octrees (e.g. colour of a solid voxel) or point clouds, but can also be stored separately.
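Purely as an illustration of such a frame (this layout is not defined by the text), the per-point attributes named above could be held in parallel arrays:

```python
# Illustrative data layout only: a volumetric frame as a point cloud with
# per-point attributes (colour, opacity, surface normals) in parallel arrays.
from dataclasses import dataclass
import numpy as np

@dataclass
class VolumetricFrame:
    timestamp: float
    positions: np.ndarray   # (N, 3) point coordinates
    colours: np.ndarray     # (N, 3) RGB values
    opacity: np.ndarray     # (N,)
    normals: np.ndarray     # (N, 3) surface normal vectors

frame = VolumetricFrame(
    timestamp=0.0,
    positions=np.random.rand(1000, 3),
    colours=np.random.randint(0, 256, (1000, 3), dtype=np.uint8),
    opacity=np.ones(1000),
    normals=np.tile([0.0, 0.0, 1.0], (1000, 1)),
)
print(frame.positions.shape)
```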

[0066] When encoding a volumetric video, each frame may produce several hundred megabytes or several gigabytes of voxel data which needs to be converted to a format that can be streamed to the viewer and rendered in real-time. The amount of data depends on the world complexity and the number of cameras. The larger impact comes in a multi-device recording setup with a number of separate locations where the cameras are recording. Such a setup produces more information than a camera at a single location.

[0067] Fig. 1 shows a system for capturing, encoding, decoding, reconstructing and viewing a three-dimensional scene, that is, for 3D video and 3D audio digital creation and playback. The task of the system is that of capturing sufficient visual and auditory information from a specific scene to be able to create a scene model such that a convincing reproduction of the experience, or presence, of being in that location can be achieved by one or more viewers physically located in different locations and optionally at a time later in the future. Such reproduction requires more information than can be captured by a single camera or microphone, in order that a viewer can determine the distance and location of objects within the scene using their eyes and their ears. To create a pair of images with disparity, two camera sources are used. In a similar manner, for the human auditory system to be able to sense the direction of sound, at least two microphones are used (the commonly known stereo sound is created by recording two audio channels). The human auditory system can detect the cues, e.g. timing differences of the audio signals, to detect the direction of sound.

[0068] The system of Fig. 1 may consist of three main parts: image sources, a server and a rendering device. A video source SRC 1 may comprise multiple cameras CAM 1 , CAM2, ... , CAMN with overlapping field of view so that regions of the view around the video capture device is captured from at least two cameras. The video source SRC1 may comprise multiple microphones to capture the timing and phase differences of audio originating from different directions. The video source SRC1 may comprise a high-resolution orientation sensor so that the orientation (direction of view) of the plurality of cameras CAM1, CAM2, ..., CAMN can be detected and recorded. The cameras or the computers may also comprise or be functionally connected to means for forming distance information corresponding to the captured images, for example so that the pixels have corresponding depth data. Such depth data may be formed by scanning the depth or it may be computed from the different images captured by the cameras. The video source SRC1 comprises or is functionally connected to, or each of the plurality of cameras CAM1, CAM2, .. CAMN comprises or is functionally connected to a computer processor and memory, the memory comprising computer program code for controlling the source and/or the plurality of cameras. The image stream captured by the video source, i.e. the plurality of the cameras, may be stored on a memory device for use in another device, e.g. a viewer, and/or transmitted to a server using a communication interface. It needs to be understood that although a video source comprising three cameras is described here as part of the system, another amount of camera devices may be used instead as part of the system.

[0069] Alternatively, or additionally to the source device SRC1 creating information for forming a scene model, one or more sources SRC2 of synthetic imagery may be present in the system, comprising a scene model. Such sources may be used to create and transmit the scene model and its development over time, e.g. instantaneous states of the model. The model can be created or provided by the source SRC1 and/or SRC2, or by the server SERVER. Such sources may also use the model of the scene to compute various video bitstreams for transmission.

[0070] One or more two-dimensional video bitstreams may be computed at the server SERVER or a device RENDERER used for rendering, or another device at the receiving end. When such computed video streams are used for viewing, the viewer may see a three- dimensional virtual world as described in the context of Figs 4a— 4d. The devices SRC1 and SRC2 may comprise or be functionally connected to one or more computer processors (PROC2 shown) and memory (MEM2 shown), the memory comprising computer program (PROGR2 shown) code for controlling the source device SRC1/SRC2. The image stream captured by the device and the scene model may be stored on a memory device for use in another device, e.g. a viewer, or transmitted to a server or the viewer using a communication interface COMM2. There may be a storage, processing and data stream serving network in addition to the capture device SRC1. For example, there may be a server SERVER or a plurality of servers storing the output from the capture device SRC1 or device SRC2 and/or to form a scene model from the data from devices SRC1, SRC2. The device SERVER comprises or is functionally connected to a computer processor PROC3 and memory MEM3, the memory comprising computer program PROGR3 code for controlling the server. The device SERVER may be connected by a wired or wireless network connection, or both, to sources SRC1 and/or SRC2, as well as the viewer devices VIEWER1 and VIEWER2 over the communication interface COMM3. [0071] The creation of a three-dimensional scene model may take place at the server SERVER or another device by using the images captured by the devices SRC1. The scene model may be a model created from captured image data (a real-world model), or a synthetic model such as on device SRC2, or a combination of such. As described later, the scene model may be encoded to reduce its size and transmitted to a decoder, for example viewer devices.

[0072] For viewing the captured or created video content, there may be one or more viewer devices VIEWER 1 and VIEWER2. These devices may have a rendering module and a display module, or these functionalities may be combined in a single device. The devices may comprise or be functionally connected to a computer processor PROC4 and memory MEM4, the memory comprising computer program PROG4 code for controlling the viewing devices. The viewer (playback) devices may consist of a data stream receiver for receiving a video data stream and for decoding the video data stream. The video data stream may be received from the server SERVER or from some other entity, such as a proxy server, an edge server of a content delivery network, or a file available locally in the viewer device. The data stream may be received over a network connection through communications interface COMM4, or from a memory device MEM6 like a memory card CARD2. The viewer devices may have a graphics processing unit for processing of the data to a suitable format for viewing. The viewer VIEWER1 may comprise a high-resolution stereo-image head-mounted display for viewing the rendered stereo video sequence. The head-mounted display may have an orientation sensor DET1 and stereo audio headphones. The viewer VIEWER2 may comprise a display (either two-dimensional or a display enabled with 3D technology for displaying stereo video), and the rendering device may have an orientation detector DET2 connected to it. Alternatively, the viewer VIEWER2 may comprise a 2D display, since the volumetric video rendering can be done in 2D by rendering the viewpoint from a single eye instead of a stereo eye pair.

[0073] It needs to be understood that Fig. 1 depicts one SRC1 device and one SRC2 device, but generally the system may comprise more than one SRC1 device and/or SRC2 device.

[0074] Any of the devices (SRC1, SRC2, SERVER, RENDERER, VIEWER1, VIEWER2) may be a computer or a portable computing device or be connected to such or configured to be connected to such. Moreover, even if the devices (SRC1, SRC2, SERVER, RENDERER, VIEWER1, VIEWER2) are depicted as a single device in Fig. 1, they may comprise multiple parts or may be comprised of multiple connected devices. For example, it needs to be understood that SERVER may comprise several devices, some of which may be used for editing the content produced by SRC1 and/or SRC2 devices, some others for compressing the edited content, and a third set of devices may be used for transmitting the compressed content. Such devices may have computer program code for carrying out methods according to various examples described in this text.

[0075] Figs. 2a and 2b show a capture device and a viewing device, respectively. Fig. 2a illustrates a camera CAM1. The camera has a camera detector CAMDET1, comprising a plurality of sensor elements for sensing intensity of the light hitting the sensor element. The camera has a lens OBJ1 (or a lens arrangement of a plurality of lenses), the lens being positioned so that the light hitting the sensor elements travels through the lens to the sensor elements. The camera detector CAMDET1 has a nominal centre point CP1 that is a middle point of the plurality of sensor elements, for example for a rectangular sensor the crossing point of diagonals of the rectangular sensor. The lens has a nominal centre point PP1, as well, lying for example on the axis of symmetry of the lens. The direction of orientation of the camera is defined by the line passing through the centre point CP1 of the camera sensor and the centre point PP1 of the lens. The direction of the camera is a vector along this line pointing in the direction from the camera sensor to the lens. The optical axis of the camera is understood to be this line CP1-PP1. However, the optical path from the lens to the camera detector need not always be a straight line but there may be mirrors and/or some other elements which may affect the optical path between the lens and the camera detector.

[0076] Fig. 2b shows a head-mounted display (HMD) for stereo viewing. The head-mounted display comprises two screen sections or two screens DISP1 and DISP2 for displaying the left and right eye images. The displays are close to the eyes, and therefore lenses are used to make the images easily viewable and for spreading the images to cover as much as possible of the eyes' field of view. When the device is used, the user may put the device on her/his head so that it is attached to the head of the user and stays in place even when the user turns her/his head. The device may have an orientation detecting module ORDET1 for determining the head movements and direction of the head. The head-mounted display gives a three-dimensional (3D) perception of the recorded/streamed content to a user.

[0077] The system described above may function as follows. Time-synchronized video and orientation data is first recorded with the capture devices. This can consist of multiple concurrent video streams as described above. One or more time-synchronized audio streams may also be recorded with the capture devices. The different capture devices may form image and geometry information of the scene from different directions. For example, there may be three, four, five, six or more cameras capturing the scene from different sides, like front, back, left and right, and/or at directions between these, as well as from the top or bottom, or any combination of these. The cameras may be at different distances, for example some of the cameras may capture the whole scene and some of the cameras may be capturing one or more objects in the scene. In an arrangement used for capturing volumetric video data, several cameras may be directed towards an object, looking onto the object from different directions, where the object is e.g. in the middle of the cameras. In this manner, the texture and geometry of the scene and the objects within the scene may be captured adequately. As mentioned earlier, the cameras or the system may comprise means for determining geometry information, e.g. depth data, related to the captured video streams. From these concurrent video and audio streams, a computer model of a scene may be created. Alternatively, or additionally, a synthetic computer model of a virtual scene may be used. The models (at successive time instances) are then transmitted immediately or later to the storage and processing network for processing and conversion into a format suitable for subsequent delivery to playback devices. The conversion may involve processing and coding to improve the quality and/or reduce the quantity of the scene model data while preserving the quality at a desired level. Each playback device receives a stream of the data (either computed video data or scene model data) from the network and renders it into a viewing reproduction of the original location which can be experienced by a user. The reproduction may be two-dimensional or three-dimensional (stereo image pairs).

[0078] Figs. 3a and 3b show an encoder and decoder, respectively, for encoding and decoding texture pictures, geometry pictures and/or auxiliary pictures. A video codec consists of an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. Typically, the encoder discards and/or loses some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate). An example of an encoding process is illustrated in Figure 3a. Figure 3a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T⁻¹); a quantization (Q) and inverse quantization (Q⁻¹); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).

[0079] An example of a decoding process is illustrated in Figure 3b. Figure 3b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T⁻¹); an inverse quantization (Q⁻¹); an entropy decoding (E⁻¹); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).

[0080] Figs. 4a, 4b, 4c and 4d show a setup for forming a stereo image of a scene to a user, for example a video frame of a 3D video. In Fig. 4a, a situation is shown where a human being is viewing two spheres A1 and A2 using both eyes E1 and E2. The sphere A1 is closer to the viewer than the sphere A2, the respective distances to the first eye E1 being LE1,A1 and LE1,A2. The different objects reside in space at their respective (x,y,z) coordinates, defined by the coordinate system SX, SY and SZ. The distance d12 between the eyes of a human being may be approximately 62-64 mm on average, and varying from person to person between 55 and 74 mm. This distance is referred to as the parallax, on which the stereoscopic view of human vision is based. The viewing directions (optical axes) DIR1 and DIR2 are typically essentially parallel, possibly having a small deviation from being parallel, and define the field of view for the eyes. The head of the user has an orientation (head orientation) in relation to the surroundings, most easily defined by the common direction of the eyes when the eyes are looking straight ahead. That is, the head orientation tells the yaw, pitch and roll of the head in respect of a coordinate system of the scene where the user is.

[0081] When the viewer's body (thorax) is not moving, the viewer's head orientation is restricted by the normal anatomical ranges of movement of the cervical spine.

[0082] In the setup of Fig. 4a, the spheres A1 and A2 are in the field of view of both eyes. The centre-point O12 between the eyes and the spheres are on the same line. That is, from the centre-point, the sphere A2 is behind the sphere A1. However, each eye sees part of sphere A2 from behind A1, because the spheres are not on the same line of view from either of the eyes.

[0083] In Fig. 4b, there is a setup shown, where the eyes have been replaced by cameras C1 and C2, positioned at the location where the eyes were in Fig. 4a. The distances and directions of the setup are otherwise the same. Naturally, the purpose of the setup of Fig. 4b is to be able to take a stereo image of the spheres A1 and A2. The two images resulting from image capture are FC1 and FC2. The "left eye" image FC1 shows the image SA2 of the sphere A2 partly visible on the left side of the image SA1 of the sphere A1. The "right eye" image FC2 shows the image SA2 of the sphere A2 partly visible on the right side of the image SA1 of the sphere A1. This difference between the right and left images is called disparity, and this disparity, being the basic mechanism with which the HVS determines depth information and creates a 3D view of the scene, can be used to create an illusion of a 3D image.

[0084] In this setup of Fig. 4b, where the inter-eye distances correspond to those of the eyes in Fig. 4a, the camera pair C1 and C2 has a natural parallax, that is, it has the property of creating natural disparity in the two images of the cameras. Natural disparity may be understood to be created even though the distance between the two cameras forming the stereo camera pair is somewhat smaller or larger than the normal distance (parallax) between the human eyes, e.g. essentially between 40 mm and 100 mm or even 30 mm and 120 mm.
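For parallel cameras, the disparity discussed above relates depth, baseline and focal length through the standard pinhole relation disparity = focal length (in pixels) × baseline / depth. This relation is not stated in the text and is added here only as a worked illustration using a baseline in the range mentioned above.

```python
# Worked illustration (not from the text): disparity in pixels for a parallel
# stereo pair, using the standard pinhole relation.
def disparity_px(focal_length_px: float, baseline_m: float, depth_m: float) -> float:
    return focal_length_px * baseline_m / depth_m

# A point 2 m away, seen by cameras 63 mm apart with a 1000 px focal length,
# appears shifted by about 31.5 px between the left and right images.
print(disparity_px(1000.0, 0.063, 2.0))
```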

[0085] It needs to be understood here that the images FC1 and FC2 may be captured by cameras C1 and C2, where the cameras C1 and C2 may be real-world cameras or they may be virtual cameras. In the case of virtual cameras, the images FC1 and FC2 may be computed from a computer model of a scene by setting the direction, orientation and viewport of the cameras C1 and C2 appropriately such that a stereo image pair suitable for viewing by the human visual system (HVS) is created.

[0086] In Fig. 4c, the creating of this 3D illusion is shown. The images FC1 and FC2 captured or computed by the cameras C1 and C2 are displayed to the eyes E1 and E2, using displays D1 and D2, respectively. The disparity between the images is processed by the human visual system so that an understanding of depth is created. That is, when the left eye sees the image SA2 of the sphere A2 on the left side of the image SA1 of sphere A1, and respectively the right eye sees the image SA2 of the sphere A2 on the right side, the human visual system creates an understanding that there is a sphere V2 behind the sphere V1 in a three-dimensional world. Here, it needs to be understood that the images FC1 and FC2 can also be synthetic, that is, created by a computer. If they carry the disparity information, synthetic images will also be seen as three-dimensional by the human visual system. That is, a pair of computer-generated images can be formed so that they can be used as a stereo image.

[0087] Fig. 4d illustrates how the principle of displaying stereo images to the eyes can be used to create 3D movies or virtual reality scenes having an illusion of being three-dimensional. The images FX1 and FX2 are either captured with a stereo camera or computed from a model so that the images have the appropriate disparity. By displaying a large number (e.g. 30) frames per second to both eyes using displays D1 and D2 so that the images between the left and the right eye have disparity, the human visual system will create a cognition of a moving, three-dimensional image.

[0088] The field of view represented by the content may be greater than the displayed field of view e.g. in an arrangement depicted in Fig. 4d. Consequently, only a part of the content along the direction of view (a.k.a. viewing orientation) is displayed at a single time. This direction of view, that is, the head orientation, may be determined as a real orientation of the head e.g. by an orientation detector mounted on the head, or as a virtual orientation determined by a control device such as a joystick or mouse that can be used to manipulate the direction of view without the user actually moving his head. That is, the term "head orientation" may be used to refer to the actual, physical orientation of the user's head and changes in the same, or it may be used to refer to the virtual direction of the user's view that is determined by a computer program or a computer input device.

[0089] The content may enable viewing from several viewing positions within the 3D space. The texture picture(s), the geometry picture(s) and the geometry information may be used to synthesize the images FX1 and/or FX2 as if the displayed content was captured by camera(s) located at the viewing position.

[0090] The principle illustrated in Figs. 4a-4d may be used to create three-dimensional images to a viewer from a three-dimensional scene model (volumetric video) after the scene model has been encoded at the sender and decoded and reconstructed at the receiver. Because volumetric video describes a 3D scene or object at different (successive) time instances, such data can be viewed from any viewpoint. Therefore, volumetric video is an important format for any augmented reality, virtual reality and mixed reality applications, especially for providing viewing capabilities having six degrees of freedom (so-called 6DOF viewing).

[0091] Figs. 5a and 5b illustrate projection of source volumes in a digital scene model SCE and parts of an object model OBJ1, OBJ2, OBJ3, BG4 to projection surfaces S1, S2, S3, S4, as well as determining depth information for the purpose of encoding volumetric video.

[0092] As illustrated in Fig. 5a, a first texture picture may be encoded into a bitstream, and the first texture picture may comprise a first projection of texture data of a first source volume SV1 of a scene model SCE onto a first projection surface S1. The scene model SCE may comprise further source volumes SV2, SV3, SV4.

[0093] The projection of source volumes SV1, SV2, SV3, SV4 may result in texture pictures and geometry pictures, and there may be geometry information related to the projection source volumes and/or projection surfaces. Texture pictures, geometry pictures and projection geometry information may be encoded into a bitstream. A texture picture may comprise information on the colour data of the source of the projection. Through the projection, such colour data may result in pixel colour information in the texture picture. Pixels may be coded in groups, e.g. coding units of rectangular shape. The projection geometry information may comprise but is not limited to one or more of the following:

- projection type, such as planar projection or equirectangular projection

- projection surface type, such as a cube

- location of the projection surface in 3D space

- orientation of the projection surface in 3D space

- size of the projection surface in 3D space

- type of a projection centre, such as a projection centre point, axis, or plane

- location and/or orientation of a projection centre.

[0094] The projection may take place by projecting the geometry primitives (points of a point cloud, triangles of a triangle mesh or voxels of a voxel array) of a source volume SV1, SV2, SV3, SV4 (or an object OBJ1, OBJ2, OBJ3, BG4) onto a projection surface S1, S2, S3, S4. The geometry primitives may comprise information on the texture, for example a colour value or values of a point, a triangle or a voxel. The projection surface may surround the source volume at least partially such that projection of the geometry primitives happens from the centre of the projection surface outwards to the surface. For example, a cylindrical surface has a centre axis and a spherical surface has a centre point. A cubical or rectangular surface may have centre planes or a centre axis and the projection of the geometry primitives may take place either orthogonally to the sides of the surface or from the centre axis outwards to the surface. The projection surfaces, e.g. cylindrical and rectangular, may be open from the top and the bottom such that when the surface is cut and rolled out on a two-dimensional plane, it forms a rectangular shape. In general, projection surfaces need not be rectangular but may be arranged or located spatially on a rectangular picture. Such rectangular shape with pixel data can be encoded and decoded with a video codec.

[0095] Alternatively, or additionally, the projection surface such as a planar surface or a sphere may be inside a group of geometry primitives, e.g. inside a point cloud that defines a surface. In the case of an inside projection surface, the projection may take place from outside in towards the centre and may result in sub-sampling of the texture data of the source.

[0096] In a point cloud-based scene or object model, points may be represented with any floating point coordinates. A quantized point cloud may be used to reduce the amount of data, whereby the coordinate values of the point cloud are represented e.g. with 10-bit, 12-bit or 16-bit integers. Integers may be used because hardware accelerators may be able to operate on integers more efficiently. The points in the point cloud may have associated colour, reflectance, opacity etc. texture values. The points in the point cloud may also have a size, or a size may be the same for all points. The size of the points may be understood as indicating how large an object the point appears to be in the model in the projection. The point cloud is projected by ray casting from the projection surface to find out the pixel values of the projection surface. In such a manner, the topmost point remains visible in the projection, while points closer to the centre of the projection surface may be occluded. In other words, in general, the original point cloud, meshes, voxels, or any other model is projected outwards to a simple geometrical shape, this simple geometrical shape being the projection surface.
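A small sketch of such coordinate quantisation follows; the function name and the use of the cloud's bounding box as the quantisation range are illustrative assumptions.

```python
# Sketch: map floating-point coordinates to 10-, 12- or 16-bit integers over the
# bounding box of the point cloud (uniform quantisation, illustrative only).
import numpy as np

def quantize_points(points: np.ndarray, bits: int = 10) -> np.ndarray:
    """Uniformly quantise (N, 3) float coordinates to integers in [0, 2**bits - 1]."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-12)
    return np.round((points - lo) * scale).astype(np.uint16)

pts = np.random.rand(5, 3) * 4.2 - 1.3
print(quantize_points(pts, bits=10))
```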

[0097] Different projection surfaces may have different characteristics in terms of projection and reconstruction. In the sense of computational complexity, a projection to a cubical surface may be the most efficient, and a cylindrical projection surface may provide accurate results efficiently. Also cones, polyhedron-based parallelepipeds (hexagonal or octagonal, for example) and spheres or a simple plane may be used as projection surfaces.

[0098] In the projection, data on the position of the originating geometry primitive may also be determined, and based on this determination, a geometry picture may be formed. This may happen for example so that depth data is determined for each or some of the texture pixels of the texture picture. Depth data is formed such that the distance from the originating geometry primitive such as a point to the projection surface is determined for the pixels. Such depth data may be represented as a depth picture, and similarly to the texture picture, such geometry picture (in this example, depth picture) may be encoded and decoded with a video codec. This first geometry picture may be seen to represent a mapping of the first projection surface to the first source volume, and the decoder may use this information to determine the location of geometry primitives in the model to be reconstructed. In order to determine the position of the first source volume and/or the first projection surface and/or the first projection in the scene model, there may be first geometry information encoded into or along the bitstream.
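
As an illustrative, non-normative sketch (in Python; the axis-aligned planar projection surface, the integer pixel grid and all names are assumptions for illustration, not part of the described embodiments), a geometry (depth) picture could be formed by keeping, per pixel, the smallest distance of the projected points to the projection surface:

    import numpy as np

    def make_depth_picture(points, plane_origin, resolution, width, height):
        # Orthogonal projection onto the plane z = plane_origin[2]; the axis choice,
        # the names and the integer pixel grid are illustrative assumptions.
        depth = np.full((height, width), np.inf)           # geometry (depth) picture
        for x, y, z in points:
            u = int((x - plane_origin[0]) / resolution)    # pixel column
            v = int((y - plane_origin[1]) / resolution)    # pixel row
            d = z - plane_origin[2]                        # distance from point to plane
            if 0 <= u < width and 0 <= v < height and d < depth[v, u]:
                depth[v, u] = d                            # closest point stays visible
        return depth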

[0099] A picture may be defined to be either a frame or a field. A frame may be defined to comprise a matrix of luma samples and possibly the corresponding chroma samples. A field may be defined to be a set of alternate sample rows of a frame. Fields may be used as encoder input for example when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or may be subsampled when compared to luma sample arrays. Some chroma formats may be summarized as follows:

- In monochrome sampling there is only one sample array, which may be nominally considered the luma array.

- In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.

- In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.

- In 4:4:4 sampling when no separate colour planes are in use, each of the two chroma arrays has the same height and width as the luma array.

[0100] It is possible to code sample arrays as separate colour planes into the bitstream and respectively decode separately coded colour planes from the bitstream. When separate colour planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.

[0101 ] Texture picture(s) and the respective geometry picture(s) may have the same or different chroma format.

[0102] Depending on the context, a pixel may be defined to be a sample of one of the sample arrays of the picture or may be defined to comprise the collocated samples of all the sample arrays of the picture.

[0103] Multiple source volumes (objects) may be encoded as texture pictures, geometry pictures and projection geometry information into the bitstream in a similar manner. That is, as in Fig. 5a, the scene model SCE may comprise multiple objects OBJ1, OBJ2, OBJ3, OBJ4, and these may be treated as source volumes SV1, SV2, SV3, SV4 and each object may be coded as a texture picture, geometry picture and projection geometry information.

[0104] As shown in Fig. 5b, a single object may be composed of different parts and thus different source volumes V11, V12, V13, V14 and corresponding projection surfaces S11, S12, S13, S14 may be used for these different parts.

[0105] In the above, the first texture picture of the first source volume SV1 and further texture pictures of the other source volumes SV2, SV3, SV4 may represent the same time instance. That is, there may be a plurality of texture and geometry pictures and projection geometry information for one time instance, and the other time instances may be coded in a similar manner. Since the various source volumes are in this way producing sequences of texture pictures and sequences of geometry pictures, as well as sequences of projection geometry information, the inter-picture redundancy in the picture sequences can be used to encode the texture and geometry data for the source volumes more efficiently, compared to the presently known ways of encoding volume data.

[0106] An object OBJ3 (source volume SV3) may be projected onto a projection surface S3 and encoded into the bitstream as a texture picture, geometry picture and projection geometry information as described above. Furthermore, such source volume may be indicated to be static by encoding information into said bitstream on said fourth projection geometry being static. A static source volume or object may be understood to be an object whose position with respect to the scene model remains the same over two or more or all time instances of the video sequence. For such static source volume, the geometry data (geometry pictures) may also stay the same, that is, the object's shape remains the same over two or more time instances. For such static source volume, some or all of the texture data (texture pictures) may stay the same over two or more time instances. By encoding information into the bitstream of the static nature of the source volume the encoding efficiency may further be improved, as the same information may not need to be coded multiple times. In this manner, the decoder will also be able to use the same reconstruction or partially same reconstruction of the source volume (object) over multiple time instances.

[0107] In an analogous manner, the different source volumes may be coded into the bitstream with different frame rates. For example, a slow-moving or relatively unchanging object (source volume) may be encoded with a first frame rate, and a fast-moving and/or changing object (source volume) may be coded with a second frame rate. The first frame rate may be slower than the second frame rate, for example one half or one quarter of the second frame rate, or even slower. For example, if the second frame rate is 30 frames per second, the first frame rate may be 15 frames per second, or 1 frame per second. The first and second object (source volumes) may be "sampled" in synchrony such that some frames of the faster frame rate coincide with frames of the slower frame rate.

[0108] There may be one or more coordinate systems in the scene model. The scene model may have a coordinate system and one or more of the objects (source volumes) in the scene model may have their local coordinate systems. The shape, size, location and orientation of one or more projection surfaces may be encoded into or along the bitstream with respect to the scene model coordinates. Alternatively, or in addition, the encoding may be done with respect to coordinates of the scene model or said first source volume. The choice of coordinate systems may improve the coding efficiency.

[0109] Information on temporal changes in location, orientation and size of one or more said projection surfaces may be encoded into or along the bitstream. For example, if one or more of the objects (source volumes) being encoded is moving or rotating with respect to the scene model, the projection surface moves or rotates with the object to preserve the projection as similar as possible.

[0110] If the projection volumes are changing, for example splitting or bending into two parts, the projection surfaces may be sub-divided respectively. Therefore, information on sub-division of one or more of the source volumes and respective changes in one or more of the projection surfaces may be encoded into or along the bitstream.

[0111] The resulting bitstream may then be output to be stored or transmitted for later decoding and reconstruction of the scene model.

[0112] Decoding of the information from the bitstream may happen in an analogous manner. A first texture picture may be decoded from a bitstream to obtain first decoded texture data, where the first texture picture comprises a first projection of texture data of a first source volume of the scene model to be reconstructed onto a first projection surface. The scene model may comprise a number of further source volumes. Then, a first geometry picture may be decoded from the bitstream to obtain first decoded scene model geometry data. The first geometry picture may represent a mapping of the first projection surface to the first source volume. First projection geometry information of the first projection may be decoded from the bitstream, the first projection geometry information comprising information of position of the first projection surface in the scene model. Using this information, a reconstructed scene model may be formed by projecting the first decoded texture data to a first destination volume using the first decoded scene model geometry data and said first projection geometry information to determine where the decoded texture information is to be placed in the scene model.

[0113] A 3D scene model may be classified into two parts: first all dynamic parts, and second all static parts. The dynamic part of the 3D scene model may further be sub-divided into separate parts, each representing objects (or parts of) an object in the scene model, that is, source volumes. The static parts of the scene model may include e.g. static room geometry (walls, ceiling, fixed furniture) and may be compressed either by known volumetric data compression solutions, or, similar to the dynamic part, sub-divided into individual objects for projection-based compression as described earlier, to be encoded into the bitstream.

[0114] In an example, some objects may be a chair (static), a television screen (static geometry, dynamic texture), a moving person (dynamic). For each object, a suitable projection geometry (surface) may be found, e.g. cube projection to represent the chair, another cube for the screen, a cylinder for the person's torso, a sphere for a detailed representation of the person's head, and so on. The 3D data of each object may then be projected onto the respective projection surface and 2D planes are derived by "unfolding" the projections from three dimensions to two dimensions (plane). The unfolded planes will have several channels, typically three for the colour representation of the texture, e.g. RGB, YUV, and one additional plane for the geometry (depth) of each projected point for later reconstruction.

[0115] Frame packing may be defined to comprise arranging more than one input picture, which may be referred to as (input) constituent frames, into an output picture. In general, frame packing is not limited to any particular type of constituent frames or the constituent frames need not have a particular relation with each other. In many cases, frame packing is used for arranging constituent frames of a stereoscopic video clip into a single picture sequence. The arranging may include placing the input pictures in spatially non-overlapping areas within the output picture. For example, in a side-by-side arrangement, two input pictures are placed within an output picture horizontally adjacently to each other. The arranging may also include partitioning of one or more input pictures into two or more constituent frame partitions and placing the constituent frame partitions in spatially non-overlapping areas within the output picture. The output picture or a sequence of frame-packed output pictures may be encoded into a bitstream e.g. by a video encoder. The bitstream may be decoded e.g. by a video decoder. The decoder or a post-processing operation after decoding may extract the decoded constituent frames from the decoded picture(s) e.g. for displaying.

[0116] A standard 2D video encoder may then receive the planes as inputs, either as individual layers per object, or as a frame-packed representation of all objects. The texture picture may thus comprise a plurality of projections of texture data from further source volumes and the geometry picture may represent a plurality of mappings of projection surfaces to the source volume.

[0117] For each object, additional information may be signalled to allow for reconstruction at the decoder side:

- in the case of a frame-packed representation: separation boundaries may be signalled to recreate the individual planes for each object,

- in the case of projection-based compression of static content: classification of each object as static/dynamic may be signalled,

- relevant data to create real-world geometry data from the decoded (quantised) geometry channel(s), e.g. quantisation method, depth ranges, bit depth, etc. may be signalled,

- initial state of each object: geometry shape, location, orientation, size may be signalled,

- temporal changes for each object, either as changes to the initial state on a per-picture level, or as a function of time may be signalled, and

- nature of any additional auxiliary data may be signalled.

[0118] For the described example above, signalling may, for example, be as follows:

    NUM_OBJECTS 4           // folding-chair, TV, person body, person head
    FRAME_PACKED 0          // individual inputs
    for i=0:NUM_OBJECTS     // initial states for each projection
        PROJ_GEO            // geometry, e.g. 0: cube, 1: cylinder, 2: sphere, ...
        PROJ_CENTRE_X/Y/Z   // projection centre in real world coordinates
        PROJ_SIZE_X/Y/Z     // projection dimensions in real world units
        PROJ_ROTATION_X/Y/Z // projection orientation
        PROJ_STATUS         // 0: dynamic, 1: static
        DEPTH_QUANT         // depth quantisation, i.e. 0 for linear, ...
        DEPTH_MIN           // minimum depth in real world units
        DEPTH_MAX           // maximum depth in real world units
    end
    for n=0:NUM_FRAMES
        for i=0:NUM_OBJECTS
            CHANGE 1        // i.e. 0=static, 1=translation, 2=trans+rotation, ...
            TRANS_VEC       // translation vector
            // relevant data to represent change
        end
    end

[0119] The decoder may receive the static 3D scene model data together with the video bitstreams representing the dynamic parts of the scene model. Based on the signalled information on the projection geometries, each object may be reconstructed in 3D space and the decoded scene model is created by fusing all reconstructed parts (objects or source volumes) together.

[0120] Standard video encoding hardware may be utilized for real-time compression/decompression of the projection surfaces that have been unfolded onto planes.

[0121] Single projection surfaces might suffice for the projection of very simple objects. Complex objects or larger scenes may require several (different) projections. The relative geometry of the object/scene may remain constant over a volumetric video sequence, but the location and orientation of the projection surfaces in space can change (and can be possibly predicted in the encoding, wherein the difference from the prediction is encoded).

[0122] Fig. 6 shows a projection of a source volume to a cylindrical projection surface, and inpainting of the sparse projection areas. A three-dimensional (3D) scene model, represented as objects OBJ1 comprising geometry primitives such as mesh elements, points, and/or voxels, may be projected onto one, or more, projection surfaces, as described earlier. As shown in Fig. 6, these projection surface geometries may be "unfolded" onto 2D planes (two planes per projected source volume: one for texture TP1, one for depth GP1), which may then be encoded using standard 2D video compression technologies. Relevant projection geometry information may be transmitted alongside the encoded video files to the decoder. The decoder may then decode the video and perform the inverse projection to regenerate the 3D scene model object ROBJ1 in any desired representation format, which may be different from the starting format, e.g. reconstructing a point cloud from original mesh model data.

[0123] In addition to the texture picture and geometry picture shown in Fig. 6, one or more auxiliary pictures related to one or more said texture pictures and the pixels thereof may be encoded into or along with the bitstream. The auxiliary pictures may e.g. represent texture surface properties related to one or more of the source volumes. Such texture surface properties may be e.g. surface normal information (e.g. with respect to the projection direction), reflectance and opacity (e.g. an alpha channel value). An encoder may encode, in or along with the bitstream, indication(s) of the type(s) of texture surface properties represented by the auxiliary pictures, and a decoder may decode, from or along the bitstream, indication(s) of the type(s) of texture surface properties represented by the auxiliary pictures.

[0124] Mechanisms to represent an auxiliary picture may include but are not limited to the following:

- A colour component sample array, such as a chroma sample array, of the geometry picture.

- An additional sample array in addition to the conventional three colour component sample arrays of the texture picture or the geometry picture.

- A constituent frame of a frame-packed picture that may also comprise texture picture(s) and/or geometry picture(s).

- An auxiliary picture included in specific data units in the bitstream. For example, the Advanced Video Coding (H.264/AVC) standard specifies a network abstraction layer (NAL) unit for a coded slice of an auxiliary coded picture without partitioning.

- An auxiliary picture layer within a layered bitstream. For example, the High Efficiency Video Coding (HEVC) standard comprises the feature of including auxiliary picture layers in the bitstream. An auxiliary picture layer comprises auxiliary pictures.

- An auxiliary picture bitstream separate from the bitstream(s) for the texture picture(s) and geometry picture(s). The auxiliary picture bitstream may be indicated, for example in a container file, to be associated with the bitstream(s) for the texture picture(s) and geometry picture(s).

[0125] The mechanism(s) to be used for auxiliary pictures may be pre-defined e.g. in a coding standard, or the mechanism(s) may be selected e.g. by an encoder and indicated in or along the bitstream. The decoder may decode the mechanism(s) used for auxiliary pictures from or along the bitstream.

[0126] The projection surface of a source volume may encompass the source volume, and there may be a model of an object in that source volume. Encompassing may be understood so that the object (model) is inside the surface such that when looking from the centre axis or centre point of the surface, the object's points are closer to the centre than the points of the projection surface are. The model may be made of geometry primitives, as described. The geometry primitives of the model may be projected onto the projection surface to obtain projected pixels of the texture picture. This projection may happen from inside-out. Alternatively, or in addition, the projection may happen from outside-in.

Projecting 3D data onto 2D planes is independent from the 3D scene model representation format. There exist several approaches for projecting 3D data onto 2D planes, with the respective signalling. For example, there exist several mappings from spherical coordinates to planar coordinates, known from map projections of the globe, and the type and parameters of such projection may be signalled. For cylindrical projections, the aspect ratio of height and width may be signalled.

[0127] It may happen that when the projection of the object is performed on the projection surfaces PS1-PS6, some parts of the object OBJ1 or another object may occlude some other parts of the object OBJ1 which otherwise were visible from the projection surface in question. Hence, some parts of the object OBJ1 would not be projected to any of the surfaces of the projection format. Figure 7 illustrates an example of this kind of situation. In this example the person's left hand occludes a part of the body of the person so that when viewed (projected) from the left hand's side the occluded part of the body would not be projected. In the same way, the planar object on the person's right hand occludes some parts of the person's stomach when viewed from the front of the person. Also Figure 8a shows an example of projection occlusions in a simplified manner. In this Figure the arrows indicate projections of points of the object to projection surfaces PS1-PS6 (of which only four are shown for clarity, as was explained above). The arrows marked with X indicate projections which are not performed due to occlusions, i.e. those points belong to a part of the 3D object which is not fully visible from that particular viewing direction.

[0128] Depth may be coded "outside-in" (indicating the distance from the projection surface to the 3D point), or "inside-out" (indicating the distance from the 3D point to the projection surface). In inside-out coding, depth of each projected point may be positive (with positive distance PD1) or negative (with negative distance). Fig. 8a shows an example of projecting an object OBJ1 using a cube map projection format, wherein there are six projection surfaces PS1, ..., PS6 of the projection cube PC1. In this example, the projection surfaces are one on the left side PS1, one in front PS2, one on the right side PS3, one in the back PS4, one in the bottom PS5, and one in the top PS6 of the cube PC1 in the setup of Figure 8a. For clarity, only four of the projection surfaces will be shown and used in the rest of the specification. For example, in Figure 8b the projection surfaces on the left PS1, on the right PS3, in the front PS2 and in the back PS4 are shown. It is, however, clear to a skilled person to utilize similar principles on all six projection surfaces when the cube map projection format is used.

[0129] Figure 8b shows a cross section of a 3D object's surface enclosed in a bounding box as viewed from the top of the bounding box (cube). This part of the figure shows how the un-occluded surfaces of the 3D object are mapped to the four sides of the bounding box, while the occluded surfaces are not. In Figure 8c new projection surface poses are established to project the previously occluded surfaces of the 3D object. It should be noted that in Figure 8c the pose of each chosen projection surface is independent of the other chosen projection surfaces, and the poses are chosen such that they are sufficient to cover the surface that is to be projected.

[0130] The depth map for each projection surface PS1-PS6 may be formed on the basis of the distances PD1, PD2 of points of the object OBJ1 to the projection surface PS1-PS6. This distance is typically the distance of the surface point normal towards the projection surface, in other words a line from the 3D point to the projection surface so that the line is perpendicular to the projection surface.

[0131] When a scene or an object of the scene is projected to a projection format, which in this example is the cube map format, occlusion examination is performed to detect points, surfaces, or other parts of the object which will not be projected onto the projection surface in question. In the example of Figures 8b and 8c some occluded parts of the example of Figure 8b are depicted in Figure 8b as lines OP1, OP2. The occlusion detection may be performed during the projection operation wherein when an occlusion is detected, that part will not be projected to the projection surface in question. Information of the non-projected (occluded) parts may be temporarily saved, if necessary, and used to form auxiliary projection surfaces. The above procedure may be repeated for each projection surface of the projection model so that all possible occlusions could be detected for each projection surface.

[0132] The initial projection surfaces of the embodiment may also be called main projection surfaces in this specification. Using the notations of Figure 8a, the main projection surfaces are the left PS1, front PS2, right PS3, back PS4, bottom PS5 and top PS6 of the cube map format.

[0133] It is noted here that the occlusion detection may be time dependent wherein one occlusion detection and succeeding operations induced by the occlusion detection (some of which will be described below) may be valid only for one time instance of a time-varying 3D visual scene/object and similar operations may need to be performed for other time instances of a time-varying 3D visual scene/object as well. In accordance with an embodiment, the occlusion detection is performed for all time instances of a time-varying 3D visual scene/object.

[0134] Information of the detected occlusions may be utilized to form auxiliary projection surfaces. In accordance with an embodiment, a sequence of unique poses (location, orientation and size) of auxiliary projection surfaces with respect to a 3D object/scene is identified. For example, occluded points are examined to determine which of them belong to the same surface of the object. When an appropriate auxiliary projection surface has been determined, information regarding that auxiliary projection surface with respect to a main projection surface will be obtained. The pose of the auxiliary projection surfaces can be arbitrary in terms of location, orientation and size. This information may comprise the direction of the auxiliary projection surface with respect to a main projection surface, the size of the auxiliary projection surface (e.g. width and height), and location (distance) of the auxiliary projection surface with respect to the main projection surface.

[0135] In the example of Figures 8b and 8c the lines OP1, OP2 are examples of a set of occluded points of the object OBJ1. As can be seen from these examples, all occluded points which belong to the same surface of the object need not be projected to the same auxiliary projection surface but they may be projected to different auxiliary projection surfaces. However, one point of the 3D object/scene should only be projected to one auxiliary projection surface, in accordance with an embodiment. Furthermore, each of the determined auxiliary projection surfaces maps only those 3D surface points that are un-occluded in its viewing direction.

[0136] When auxiliary projection surfaces have been defined and selected for use, a texture plane is computed for each of the chosen poses of the auxiliary projection surfaces by projecting the texture of the un-occluded 3D surface that intersects its viewing direction. In other words, the texture of the 3D surface which is visible (un-occluded) from the selected auxiliary projection surface and pose is projected to the selected auxiliary projection surface to obtain the texture plane. Also, a representation of the distance of 3D surface points from the auxiliary projection surface is computed for each of the chosen poses of the auxiliary projection surfaces, to generate a related depth plane.

[0137] For each of the chosen poses of the auxiliary projection surfaces, other 3D surface point related attributes, such as surface normals or bi-directional reflectance distribution functions (BRDF), may be projected to form their own individual, but related auxiliary projection surfaces.

[0138] The computed texture, depth and auxiliary projection surfaces are collected and may either be frame-packed or layered before encoding using 2D video compression algorithms.

[0139] The relative poses of auxiliary projection surfaces with respect to another of the projection surfaces are computed and signalled either within the encoded stream or by other external means. Additional signalling that relates the depth and other auxiliary projection surfaces to their respective texture planes may also be included either within the encoded stream or by other external means.

[0140] Information of the auxiliary projection surface may comprise none, one or more of the following and possibly some other information not listed here:

- projection surface type, such as a cube, cylinder, sphere

- location of the projection surface in 3D space

- orientation of the projection surface in 3D space

- size of the projection surface in 3D space

- type of a projection centre, such as a projection centre point, axis, or plane

- location and/or orientation of a projection centre.

[0141] In an embodiment, the relative pose of the projection surfaces (main and auxiliary) is computed and signalled either within the encoded bit stream or by external means. For example, if the faces of a bounding box are used as the initial (main) projection surfaces of a 3D point cloud, then the pose of an auxiliary projection surface is signalled as a surface generated as a rotation of the normal of the main projection surface, a displacement vector of the central point of the auxiliary projection surface from the main projection surface, and if needed the horizontal and vertical dimensions of the surface. This process is illustrated in Figure 8c. The width and height of the projection surface can either be signalled explicitly or inferred from the spatial dimension of the coded texture, depth or other auxiliary projection surfaces. This process is iterated for all the projected surfaces of a 3D point cloud for one time instant.

[0142] Figure 8d illustrates the calculation of the relative pose of the projection surface P1 with respect to the projection surface P0. The pose of the projection surface P1 can be signalled as a combination of parameters (d, R), where d is the distance vector from the centre of the projection surface P0 to the centre of the projection surface P1, and R is the rotation of the planar normal of the projection surface P0 toward the direction of the planar normal of the projection surface P1.
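
A minimal sketch of how such a relative pose could be computed is given below (Python with NumPy; the axis-angle representation of the rotation and the function and parameter names are illustrative assumptions, not the signalled syntax):

    import numpy as np

    def relative_plane_pose(centre0, normal0, centre1, normal1):
        # Distance vector between the plane centres and the rotation (axis-angle)
        # taking the normal of P0 onto the normal of P1.
        d = np.asarray(centre1, dtype=float) - np.asarray(centre0, dtype=float)
        n0 = np.asarray(normal0, dtype=float); n0 = n0 / np.linalg.norm(n0)
        n1 = np.asarray(normal1, dtype=float); n1 = n1 / np.linalg.norm(n1)
        axis = np.cross(n0, n1)                                   # rotation axis
        angle = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))     # rotation angle
        return d, axis, angle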

[0143] Encoding the projection information may be performed, for example, as follows.

[0144] In an embodiment, the projected texture planes of a point cloud for one time instant can all be collected and frame-packed, and similarly the depth and the possible auxiliary planes are frame-packed such that the frame packing is consistent across all the planes for one time instant. Each of the planes is then coded using traditional 2D video coders using layered video coding, or coded independently and related to each other using some form of higher level signalling (e.g. using tracks and track references of the ISO Base Media File Format). Alternatively, each texture, depth and auxiliary plane of a single time instant of a point cloud could be considered independent of each other in that time instant and coded serially one after the other. For example, if there are nine projection surfaces identified for a point cloud, then for that time instant, first the nine texture planes are coded, followed by nine depth planes, followed by nine planes that carry surface normals, and so on until all the auxiliary data planes are coded.

[0145] Figures 9a and 9b provide an overview of an example of compression and decompression processes, respectively, and Figure 10 depicts a simplified flow diagram for a possible processing chain to achieve temporal patch alignment of a current frame to a previously processed reference frame.

[0146] A point cloud is received by a patch generator 902 in which a patch generation process aims at decomposing (block 610 in Figure 10) the point cloud into a minimum number of patches with smooth boundaries, while also minimizing the reconstruction error. This may be performed by, for example, the following approach.

[0147] First, the normal at every point is estimated and an initial clustering of the point cloud is then obtained by associating each point with one of the following six oriented planes, defined by their normals:

(1.0, 0.0, 0.0),
(0.0, 1.0, 0.0),
(0.0, 0.0, 1.0),
(-1.0, 0.0, 0.0),
(0.0, -1.0, 0.0), and
(0.0, 0.0, -1.0).

[0148] More precisely, each point is associated with the plane that has the closest normal (e.g. maximizes the dot product of the point normal and the plane normal).

[0149] The initial clustering is then refined by iteratively updating the cluster index associated with each point based on its normal and the cluster indices of its nearest neighbors. The final step consists of extracting patches by applying a connected component extraction procedure.
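
The initial clustering of paragraphs [0147]-[0148] can be illustrated with the following sketch (Python with NumPy; the function and variable names are assumptions, and the iterative refinement and connected-component extraction described above are not shown):

    import numpy as np

    PLANE_NORMALS = np.array([
        [ 1.0,  0.0,  0.0], [ 0.0,  1.0,  0.0], [ 0.0,  0.0,  1.0],
        [-1.0,  0.0,  0.0], [ 0.0, -1.0,  0.0], [ 0.0,  0.0, -1.0],
    ])

    def initial_clustering(point_normals):
        # Each point gets the index (0..5) of the oriented plane whose normal
        # maximises the dot product with the point normal.
        scores = np.asarray(point_normals, dtype=float) @ PLANE_NORMALS.T   # N x 6
        return np.argmax(scores, axis=1)

The refinement would then iteratively re-assign each point based on its own normal and the cluster indices of its nearest neighbours, before patches are extracted as connected components, as described above.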

[0150] The extracted patches are provided to a packing element 904 in which the packing process aims at mapping the extracted patches onto a 2D grid (Figure 12a), while trying to minimize the unused space, and trying to guarantee that every TxT (e.g., 16x16) block of the grid is associated with a unique patch. The parameter T may be a user-defined parameter that is encoded in the bitstream and sent to the decoder. Figure 11 illustrates an example of packing. In Figure 11 white areas illustrate empty pixels.

[0151] An image generation process performs both a geometry image generation 906 and a texture image generation 908 by applying the 3D to 2D mapping computed during the packing process to store the geometry and texture of the point cloud as images. Each patch is projected onto one image, which may also be referred to as a layer. More precisely, let H(u,v) be the set of points of the current patch that get projected to the same pixel (u, v). If more than one 3D point is projected to the same location on the current patch, a single value for that location H(u,v) may be selected. The layer stores the point of H(u,v) with the closest distance to its projection surface, e.g. the lowest depth D0. The generated videos may have the following characteristics, for example:

Geometry: width (W) x height (H) YUV420-8bit,

Texture: width (W) x height (H) YUV420-8bit,

[0152] It should be noted that the geometry video may be monochromatic.

[0153] The geometry image and the texture image may be padded by an image padding element 910. Padding aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression. According to an approach, the following padding strategy may be used:

[0154] Each block of TxT (e.g., 16x16) pixels is processed independently. If the block is empty (i.e., all its pixels belong to an empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels, then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
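
A simplified sketch of the per-block padding for a partially filled TxT block could look as follows (Python with NumPy; fully empty and fully full blocks are assumed to be handled by the caller as described above, and all names are illustrative):

    import numpy as np

    def pad_block(block, occupied):
        # 'block' and 'occupied' are TxT arrays for one block that has both empty
        # and filled pixels; empty pixels are filled iteratively with the average
        # of their already filled 4-neighbours.
        block = block.astype(float).copy()
        occ = occupied.astype(bool).copy()
        if not occ.any():
            return block                       # fully empty blocks handled elsewhere
        T = block.shape[0]
        while not occ.all():
            for y in range(T):
                for x in range(T):
                    if occ[y, x]:
                        continue
                    vals = [block[ny, nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < T and 0 <= nx < T and occ[ny, nx]]
                    if vals:
                        block[y, x] = sum(vals) / len(vals)
                        occ[y, x] = True
        return block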

[0155] The generated images/layers may be stored as video frames and compressed. For example, the padded geometry image and the padded texture image are provided to a video compression element 912 for compressing the padded geometry image and the padded texture image, from which the compressed geometry and texture images are provided, for example, to a multiplexer 914 which multiplexes the input data to a compressed bitstream(s).

[0156] There may also be an occupancy map compression element 916 and an auxiliary patch information compression element 918 for compressing an occupancy map and auxiliary patch information, respectively, before providing the compressed occupancy map and auxiliary patch information to the multiplexer 914.

[0157] Auxiliary patch information may also be coded for example as follows. The signalling structure of the auxiliary per-patch information may be as follows:

• Index of the projection plane

- Index 0 for the planes (1.0, 0.0, 0.0) and (-1.0, 0.0, 0.0)

- Index 1 for the planes (0.0, 1.0, 0.0) and (0.0, -1.0, 0.0)

- Index 2 for the planes (0.0, 0.0, 1.0) and (0.0, 0.0, -1.0).

• 2D bounding box (u0, v0, u1, v1)

• 3D location (x0, y0, z0) of the patch represented in terms of depth d0, tangential shift s0 and bi-tangential shift r0. According to the chosen projection planes, (d0, s0, r0) are computed as follows:

- Index 0, d0 = x0, s0 = z0 and r0 = y0

- Index 1, d0 = y0, s0 = z0 and r0 = x0

- Index 2, d0 = z0, s0 = x0 and r0 = y0

[0158] Also, mapping information providing for each TxT block its associated patch index may be encoded as follows:

[0159] For each TxT block, let L be an ordered list of the indexes of the patches such that their 2D bounding box contains that block. The order in the list is the same as the order used to encode the 2D bounding boxes. L is called the list of candidate patches.

[0160] The empty space between patches is considered as a patch and is assigned the special index 0, which is added to the candidate patches list of all the blocks.

[0161] Let I be an index of the patch to which the current TxT block belongs and let J be the position of I in L. Instead of explicitly encoding the index I, its position J is arithmetically encoded, which may lead to better compression efficiency.
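
The idea of coding the position J in the candidate list instead of the patch index I itself can be sketched as follows (Python; it is assumed here, for illustration only, that patch indices start at 1 so that the special index 0 can denote empty space, and the arithmetic coding of J is not shown):

    def candidate_position(block_patch_index, bounding_boxes, block_rect):
        # Build the ordered candidate list L for one TxT block and return the
        # position J of the block's patch index in L. Index 0 (empty space) is
        # always a candidate; bounding boxes are given as (u0, v0, u1, v1) in the
        # order used to encode them.
        bx0, by0, bx1, by1 = block_rect
        L = [0]
        for idx, (u0, v0, u1, v1) in enumerate(bounding_boxes, start=1):
            if u0 <= bx0 and v0 <= by0 and u1 >= bx1 and v1 >= by1:  # bbox contains block
                L.append(idx)
        return L.index(block_patch_index)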

[0162] The occupancy map may consist of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. One cell of the 2D grid would produce a pixel during the image generation process.

[0163] The point cloud geometry reconstruction process exploits the occupancy map information in order to detect non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels are computed by leveraging the auxiliary patch information and the geometry images.

[0164] The smoothing procedure aims at alleviating potential discontinuities that may arise at the patch boundaries due to compression artifacts. According to an approach boundary points are moved to the centroid of their nearest neighbors.

[0165] In a texture reconstruction stage texture values may be directly read from the texture images.

[0166] In the following, some example approaches for patch image packing will be described.

[0167] According to a simple packing strategy, the encoder may iteratively try to insert patches into a WxH grid. The width (W) and height (H) may be user-defined parameters, which correspond to the resolution of the geometry/texture images that will be encoded. The patch location is determined through an exhaustive search that is performed in raster scan order. The first location that can guarantee an overlapping-free insertion of the patch is selected and the grid cells covered by the patch are marked as used. If no empty space in the current resolution image can fit a patch, then the height H of the grid is temporarily doubled, and the search is applied again. At the end of the process, H is clipped so as to fit the used grid cells.
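
A minimal sketch of this exhaustive raster-scan packing could be as follows (Python with NumPy; the occupancy-grid granularity, the names and the assumption that every patch width fits within W are simplifications for illustration):

    import numpy as np

    def pack_patches(patch_sizes, W, H):
        # patch_sizes is a list of (height, width) in grid cells; each patch is
        # placed at the first raster-scan position where it fits without overlap.
        used = np.zeros((H, W), dtype=bool)
        positions = []
        for (h, w) in patch_sizes:
            placed = False
            while not placed:
                for y in range(used.shape[0] - h + 1):
                    for x in range(W - w + 1):
                        if not used[y:y + h, x:x + w].any():
                            used[y:y + h, x:x + w] = True
                            positions.append((y, x))
                            placed = True
                            break
                    if placed:
                        break
                if not placed:                      # no free space: double the height
                    used = np.vstack([used, np.zeros_like(used)])
        # clip the height to the used rows, as in the described strategy
        H_final = int(used.any(axis=1).nonzero()[0].max() + 1) if used.any() else 0
        return positions, H_final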

[0168] In accordance with another approach, patches are packed in the 2D image according to size to reduce unused space, thus allowing for a tighter packing. This approach may reduce memory requirements for the encoder and decoder but may not take into account temporal consistency between mapping positions.

[0169] In accordance with yet another approach, simplified temporal-consistent patch allocation is utilized. Some details of this approach include the following steps:

[0170] Patches of the current frame are organized in descending order based on the 2D dimensions of the patch.

[0171] For each patch from the previous frame, a projection plane is obtained, and the best matching patch is searched for e.g. using a maximum Intersection over Union (IoU) from the patches of the current frame with the same projection plane. The best matched patch from the current frame can be considered as found if its IoU is greater than a threshold.

[0172] A usable position in the occupancy map is searched and the matched patch is packed into it.

[0173] For the unmatched patches remaining in the current frame, a usable position in the occupancy map is searched and the unmatched patch is packed into it. The unmatched patches are the patches for which the largest IoU is smaller than the threshold.
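
A simplified sketch of the IoU-based matching used in the step of paragraph [0171] is given below (Python; the IoU is computed here on 2D bounding boxes for brevity, whereas an implementation may compute it on the actual occupied areas, and the threshold value is only an assumption):

    def bbox_iou(a, b):
        # Intersection over Union of two 2D bounding boxes given as (u0, v0, u1, v1).
        iu0, iv0 = max(a[0], b[0]), max(a[1], b[1])
        iu1, iv1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, iu1 - iu0) * max(0, iv1 - iv0)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def best_match(prev_bbox, current_patches, threshold=0.5):
        # current_patches: list of (patch_index, bbox) on the same projection plane.
        # Returns the best matching patch index, or None if the largest IoU does
        # not exceed the threshold.
        best_idx, best_iou = None, threshold
        for idx, bbox in current_patches:
            iou = bbox_iou(prev_bbox, bbox)
            if iou > best_iou:
                best_idx, best_iou = idx, iou
        return best_idx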

[0174] In the following, some details of an embodiment of temporal aligned patch packing of projected volumetric video will be explained in more detail.

[0175] An initial frame, which is coded without any temporal prediction, is initialised based on some kind of patch decomposition. Before patch packing or patch organisation in a 2D picture, each patch is extended by a given number of pixels in a "guard area", i.e. a certain number of pixels is introduced around each patch. Fig. 12b illustrates such an example with a 2-pixel guard area 124 (black rectangles) around the original patch 125 (cross-hatched rectangles). In practice, such a guard area could be larger than two pixels, more in the range of 10-20 pixels. Furthermore, the guard area need not be equal on all sides, but the number of pixels forming the guard area may be different at different parts around the patch. In an embodiment, the guard area is selected to align with a block grid, such as a coding tree unit grid in HEVC.

It should also be noted that the guard area could vary between patches, i.e. larger patches could move more, or vice versa. As an example, the guard area could be a function (or percentage) of the total number of pixels in a patch. To enhance compression efficiency, it is desirable for the guard area not to be larger than necessary. The size of the guard area could be signalled in the bitstream; the index of a lookup table may indicate the guard area, said lookup table being either static or dynamically created depending on previously observed characteristics of the video being processed. The guard area may be non-uniform around the patch. As an example, it may be wider on one "edge" of the patch than another. The width of the guard area for each patch, and for each boundary of each patch, may be determined by a linear or non-linear algorithm. As an example, an algorithm may utilise the variance of previously observed motion (the "uncertainty") and previous direction of motion to increase or decrease the size of the guard area around each section of the patch. In one embodiment, the algorithm for determining guard area size is integrated with the 2D video encoding or decoding algorithm, so that said algorithm for determining guard area size is aware of the codec state, for example Coding Unit (CU) mode. Thus an efficient and integrated algorithm may be used while maintaining the underlying 2D video codec syntax.
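
As a purely illustrative example of such an algorithm, a guard-area width could be derived from the patch size and aligned with a block grid as follows (Python; all constants and the square-root mapping are assumptions, not values or rules mandated above):

    import math

    def guard_width(patch_pixels, fraction=0.05, minimum=2, maximum=20, block=8):
        # Map the patch pixel count to a guard width via its square root so that
        # the guard scales with the patch dimensions, clamp it to a range, and
        # round it up to the block grid.
        w = int(math.sqrt(patch_pixels) * fraction)
        w = max(minimum, min(maximum, w))
        return ((w + block - 1) // block) * block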

[0176] The patch packing of patches including their guard area may be performed with any appropriate method, e.g. using the method described earlier in the specification, and may additionally be constrained in a manner that each block along a pre-defined block grid is occupied by at most one patch. The block grid may, for example, be a coding tree unit grid (e.g. 64x64 luma samples), a finest-grained coding unit grid (e.g. 8x8 luma samples), or a finest-grained prediction unit grid (e.g. 4x4 luma samples).

[0177] In addition to the guard area, the encoder can introduce some kind of "book keeping" of all patches to simplify the patch alignment process for successive frames, i.e. reduce the number of search operations and the search range in steps 640 and 650 as will be described later in this specification.

[0178] Such book keeping could consist of look-up tables containing current patch data, i.e. patches organized in lists by projection plane; each patch list is sorted by a defined criterion which should match the criteria selected in step 630, for example, but not limited to: 2D location of each patch in the reference frame, which will be a starting point for refined match accuracy search in step 650; available guard interval per patch if not constant for all patches; feature descriptors and locations (e.g. SIFT, SURF) in case feature matching is used in step 650; or any combination of such information.

[0179] The above described patch information is only required during the encoding processes and does not have to be signalled to the decoder.

[0180] The initial frame is encoded into a coded initial frame. Likewise, a decoder decodes the coded initial frame into a reconstructed initial frame. The reconstructed initial frame acts as a reference frame for subsequent frames in (de)coding order.

[0181] I slice(s) (e.g. as defined in HEVC) may be used for encoding the initial frame. The coded initial frame may for example be an IDR picture (as defined in H.264/AVC or HEVC) or a BLA picture (as defined in HEVC). It is noted that an open-GOP intra picture (e.g. a CRA picture as defined in HEVC) might not be considered an initial frame, since it is followed, in (de)coding order, by pictures that may be predicted from one or more pictures preceding the open-GOP intra picture in (de)coding order, which could affect temporally aligned patch packing.

[0182] In the following some steps to be performed for a current frame in the patch generation phase and shown in Fig. 10 will be described, also with reference to the simplified block diagram of the patch generator of Figure 14a. It is assumed here that a previous frame has been processed, and it and its patch look-up table are available as reference for temporal patch alignment.

[0183] In Step 610 the 3D content is decomposed e.g. by a decomposer 701 to several patches e.g. using the principles presented earlier in this specification.

[0184] In Step 620 a patch organizer 702 assigns each patch to one of three processing lists. The assignment is based on the patch's projection plane index, e.g. list "X" contains all patches which have index "0" according to the description in Auxiliary patch information also presented earlier in this specification.

[0185] Patches with auxiliary patch information projection plane index may be assigned as follows:

- Index 0 for the planes (1.0, 0.0, 0.0) and (-1.0, 0.0, 0.0) will be put in list "X"

- Index 1 for the planes (0.0, 1.0, 0.0) and (0.0, -1.0, 0.0) will be put in list "Y"

- Index 2 for the planes (0.0, 0.0, 1.0) and (0.0, 0.0, -1.0) will be put in list "Z"

[0186] The naming and arrangement of the three lists can vary, but the concept of sorting by projection plane should be clear. The same lists for the reference frame and the current frame shall represent the same projection planes.

[0187] In Step 630 the patches in each list are sorted by a sorting element 703 according to a certain criterion. Examples of sorting criteria are patch dimension, i.e. patch X and/or Y dimension; patch size, i.e. number of pixels per patch; patch location in 3D space, i.e. by X,Y,Z location or distance from a coordinate origin; patch content, e.g. mean and/or variance of luma/chroma values over all pixels in a texture patch; or patch surface structure, e.g. mean/variance over all pixels in a geometry patch.

[0188] Also, a combination of such approaches is feasible, e.g. size thresholds are set to create a certain number of patch groups. Each group is then further sorted based on an additional criterion. Whatever sorting algorithm is chosen, it should be the same as the one used for sorting the patch look-up table (list) of the reference frame.

[0189] After or during the sorting, the same patch book keeping as described for the reference frame shall be performed, so the same information is available for the current frame.

[0190] In Step 640 each patch in a list is compared by a patch comparator 704 to possible reference patches in the reference frame. This process will be performed on the look-up tables for the reference and current frame only. Thus, it may be performed very fast and efficiently.

[0191] For every patch in a list, a reference patch candidate is searched based on a chosen criterion. Such criteria could be:

- Difference in patch dimension is below a certain threshold

- Difference in patch size is below a certain threshold

- Difference in patch location is below a certain threshold

- A combination of above criteria and thresholds

[0192] As an alternative, or in addition, to performing the comparison based on the look-up tables, the matching can be performed directly on the patches themselves. This approach may be more computationally complex. Possible matching criteria could be:

- Intersection over Union (IoU) above a certain threshold percentage

- A distortion metric, such as sum of absolute/squared differences (SAD/SSD), between patches (for texture and/or geometry) below a certain threshold. The distortion metric may be derived only for the pixels that are occupied in the current frame. The distortion of the unoccupied pixels may be ignored in any rate-distortion-based selections performed by the encoder, thus favouring such coding of unoccupied pixels where the prediction of those pixels need not be refined, and ensuring that the prediction parameter selection is based on the distortion for the occupied pixels.

- A combination of look-up table information and patch content information

[0193] The outcome of step 640 is one reference patch per current patch. In case a reference patch has already been assigned to a previous patch in the current frame, the next best reference patch is selected.

[0194] If no suitable reference patch was found, step 640 can be iterated on the two remaining reference plane patch lists, e.g. a patch originally mapped on plane X can be aligned with a patch mapped on plane Y.

[0195] In the case of no suitable reference patches at all, the current patch may be put at the end of the patch packing processing chain and mapped on any available space left after all other patches have been mapped.

[0196] In Step 650 a more detailed, per-pixel-accurate search is performed between a current patch and its reference patch by a refiner 705. The goal is to find a displacement vector maximising the content overlap between the two patches. This process may be computationally heavier, thus step 640 may be applied beforehand to minimise the search range.

[0197] The displacement vector may be derived by minimization of a distortion metric, such as SSD or SAD minimization: like the motion search in video coding, a search range is defined and the vector with the smallest resulting difference value is selected as an alignment vector. The search range shall not exceed the available "guard area" defined for the reference patch. The distortion metric may be derived only for the pixels that are occupied in the current frame. The distortion of the unoccupied pixels may be ignored in any rate-distortion-based selections performed by the encoder, thus favouring such coding of unoccupied pixels where the prediction of those pixels need not be refined, and ensuring that the prediction parameter selection is based on the distortion for the occupied pixels.
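
A minimal sketch of such an SAD-based alignment search, restricted to the occupied pixels of the current patch, could be as follows (Python with NumPy; it is assumed that the caller crops the reference frame so that the search stays within the guard area, and all names are illustrative):

    import numpy as np

    def find_alignment_vector(ref_window, cur, cur_occupancy, search_range):
        # 'ref_window' is a crop of the reference frame of size
        # (h + 2*search_range, w + 2*search_range) centred on the reference patch;
        # 'cur' is the h x w current patch and 'cur_occupancy' a boolean mask of
        # its occupied pixels. Returns the displacement (dx, dy) minimising the
        # SAD over the occupied pixels only.
        h, w = cur.shape
        best_vec, best_sad = (0, 0), np.inf
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                ref_crop = ref_window[search_range + dy: search_range + dy + h,
                                      search_range + dx: search_range + dx + w]
                sad = np.abs(ref_crop.astype(int) - cur.astype(int))[cur_occupancy].sum()
                if sad < best_sad:
                    best_sad, best_vec = sad, (dx, dy)
        return best_vec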

[0198] Alternatively, the displacement vector may be derived by optical flow. Optical flow calculations may be used to determine the mean per-pixel optical flow vector between current patch and reference patch. This vector can be used as an alignment vector.

[0199] Alternatively, the displacement vector may be derived by feature matching, wherein content feature descriptors, e.g. SURF or SIFT, can be matched between the current and the reference patch. The mean location shift between matching features can be used as an alignment vector.

[0200] Still alternatively, the displacement vector may be derived by point cloud position matching, in which an optimal match may be found by minimizing the error between XYZ 3D positions between the reference frame and the current frame.

[0201] In accordance with an embodiment, this search would be performed only on the texture image and only a 2D vector is derived to limit computational load. However, it should be noted that the matching could also be performed on the geometry picture or a combination of geometry and texture picture to derive a 3D displacement vector.

[0202] It shall be noted that step 650 may be skipped, either on a per-frame or per-patch basis, for example if step 640 already delivered a sufficiently close match for a patch or if computational resources are limited. In this case, no displacement vectors will be calculated, and the process continues with step 660.

[0203] In Step 660 the current patch is packed as closely as possible to the 2D position of its reference patch, within the constraints of the available guard area. The position of the current patch is shifted by the displacement vector so that the resulting position of the current patch approximately matches the position of the reference patch. This shifting may be referred to as patch alignment. Fig. 12c depicts the result of this packing process for the example patch of Fig. 12b. The reference patch 125 and guard area 124 are illustrated on the left, the un-aligned current patch 126 in the middle (highlighted with a thick line for clarity), and the aligned current patch 127 on the right. The current patch 126 is slightly different but fully fits in the guard area 124 of the reference patch 125, thus it will not affect other patches packed after this one. Pixels now occupied in the guard area 124 are marked as hatched rectangles 120. A 2D patch displacement vector marked as an arrow 122 describes the patch alignment.

[0204] The 2D bounding box information for each patch, explained above in the description on auxiliary patch data, is adapted to reflect the new patch location. That is, for a 2D displacement vector [x1, y1] the bounding box would change from the original 2D bounding box values (u0, v0, u1, v1) to an adapted 2D bounding box (u0-x1, v0-y1, u1-x1, v1-y1).
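
For illustration, the adaptation can be expressed as a small helper following the subtraction convention given above (a hypothetical function, not part of the signalling syntax):

    def adapt_bounding_box(bbox, displacement):
        # (u0, v0, u1, v1) shifted by a 2D displacement [x1, y1] following the
        # subtraction convention described above.
        u0, v0, u1, v1 = bbox
        x1, y1 = displacement
        return (u0 - x1, v0 - y1, u1 - x1, v1 - y1)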

[0205] In case of a 3D displacement vector [x1, y1, z1], the depth offset z1 would be applied as an offset to the patch 3D location representing the geometry for the selected projection plane.

[0206] Steps 640-660 may be repeated until all patches of a frame are successfully aligned with their reference patches. Any "left over patches" can be packed in the remaining unused space.

[0207] In an embodiment, the 2D picture(s) generated as output of step 660 are encoded into respective coded picture(s).

[0208] In an embodiment, the displacement vector or its opposite vector is encoded in or along the bitstream comprising the coded picture(s). For example, the displacement vector or its opposite vector may be encoded as one piece of the per-patch information of the auxiliary patch information. This may be advantageous for example to keep the 2D bounding box and/or other information that relates to the patch location onto the sampling grid of the projection plane unchanged over multiple frames. Consequently, such information needs to be coded only once for said multiple frames. The displacement vector or its opposite vector may be coded per each picture or access unit, and may be coded with a variable length coding scheme, such as an exponential-Golomb code.

[0209] The patch information look-up tables are updated to represent the current frame as the reference frame. The next frame in coding order is loaded and the process is started again at step 610.

[0210] In the following, some additional embodiments will be described.

[0211] In an embodiment, depicted in Fig. 12d, multi-pass encoding or pre-analysis of at least two pictures is performed so that the padding used in the reference patch within the guard area is performed on the basis of the current patch. The patches of the current frame are aligned with the patches of the reference frame as described in other embodiments. Source pixels for padding are such pixels that are occupied in a current patch but fall into the guard area in the reference frame (indicated as hatched rectangles 120 in Figure 12d). These pixels in the guard area of the reference frame, indicated as hatched rectangles 124 in Figure 12d, are padded using the source pixels for padding prior to encoding the reference frame.

[0212] In an embodiment, multi-pass encoding or pre-analysis of at least two pictures is performed so that respective patches of more than two frames, such as a GOP, are temporally aligned.

[0213] In an embodiment, where different frames are coded at different qualities, e.g. typical hierarchical B-frame structure, the reference frame might not be updated to the current frame, if quality difference (QP value) between the reference frame and the current frame is above a certain threshold (i.e. it can be expected that the video coder would still choose the reference frame as motion-prediction reference for the next frame in coding order).

[0214] In an embodiment, where different frames are coded in a hierarchical structure, the reference frame might not be updated to the current frame, if the POC difference (temporal difference) between the reference frame and the current frame is above a certain threshold (i.e. it can be expected that the video coder would still chose the reference frame as motion- prediction reference for the next frame in coding order).

[0215] In an embodiment, where different frames might be predicted from more than one reference picture, the reference frame might not be updated to the current frame, if a certain distance to the next hierarchical level is reached. For example, in Fig. 13 a hierarchical structure with 16 frames is depicted. Frames marked with the letter R, G or Y would have at least one frame from the frames marked with the letter M, S or H available as a reference (assuming biprediction with 2 reference pictures per list), thus the reference frame does not have to be updated after reaching a certain picture order count (POC). However, it would be necessary to keep several look-up tables (keep the previous reference look-up table stored) as references would change. As an example for Fig. 13: POC 0 is the reference for POC 16, POC 16 is the reference for POC 8, the POC 0 look-up table is loaded again and serves as a reference for POC 1-4, the POC 8 look-up table is loaded again and serves as a reference for POC 5-7 and 9-12, and the POC 16 look-up table is loaded again and serves as a reference for POC 13-15.

[0216] The above-described embodiments have been specified with reference to three projection planes. It needs to be understood that in general any projection surfaces (not limited to projection planes) may be used and that the number of projection surfaces may differ from three.

[0217] It should also be noted that, although the description of Figures 8b and 8c above described that auxiliary projection surfaces are formed due to projection occlusions, such auxiliary projection surfaces need not be formed. Furthermore, the projection directions need not be fixed but may be variable at different parts of the 3D scene. For example, some parts of a scene may utilize more dense projection directions than other parts of the scene e.g. to provide higher reconstruction quality for such parts. Still the temporal patch alignment principles presented above may be implemented.

[0218] The embodiments presented above may significantly improve temporal alignment and consistency between the current frame and its reference frame. Thus, motion-compensated video compression may perform at its best and coding efficiency may be drastically improved.

[0219] Compared to the prior art, per-pixel alignment is possible.

[0220] The introduction of a guard area in the reference frame may allow for better alignment between reference and current patch.

[0221] Due to pre-sorting on "simple" criteria (step 640), the number of more complex search operations (step 650) may be reduced.

[0222] In general, the invention is not limited to placing the patches only vertically or horizontally in a rectangle. The patches may also be placed in any arbitrary direction with a defined angle, and grid forms other than rectangles could be used.

Furthermore, patches are not necessarily located vertically or horizontally inside the grid.

[0223] The process of locating the patches on different parts of a grid is performed on the encoder side. However, the grid size should be communicated to the decoder side.

[0224] The horizontal/vertical locating of the patches, as well as the angle or arbitrary direction in which patches are placed, may be signalled, enabling a decoder to accurately fetch the patches with the correct presentation/direction. The information regarding each patch should also be communicated to the decoder, signalling e.g. that the current patch has a specific direction/orientation which can be fetched on the decoder side. Such signalling may include the shape, size, location, and orientation/direction of the patch in the 2D grid.
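Purely as an illustration of what such per-patch signalling could carry, the following hypothetical structure lists the mentioned fields; it is not the syntax of any particular codec or of this specification.

```python
from dataclasses import dataclass

@dataclass
class PatchSignalling:
    """Hypothetical per-patch signalling fields; names and types are assumptions."""
    u0: int                 # horizontal location of the patch bounding box in the 2D grid
    v0: int                 # vertical location of the patch bounding box in the 2D grid
    width: int              # patch width in the 2D grid
    height: int             # patch height in the 2D grid
    orientation_deg: float  # rotation angle of the patch within the grid, in degrees
    shape: str              # e.g. "rectangle"; other shapes could be signalled as well
```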

[0225] In another embodiment, the grid does not necessarily have a rectangular shape but may have a different shape, e.g. a diamond or a parallelogram. The size of the grid may then be determined to be as small as possible while still being able to cover the patches. The gist of the algorithm remains the same for all different shapes, but the locating and sizing may change.

[0226] In the following, the operation at a decoder side is explained in more detail with reference to the block diagram of Figure 14b. A decoder 720 receives a bitstream and a decoding element 721 decodes the bitstream to reconstruct the encoded information from the bitstream. The decoded information may comprise information of the geometrical shape of the grid, unless a predetermined shape is used, the size (e.g. the width and height) of the grid, and information of the patches and their location within the grid. A patch parser 722 uses this information to reconstruct the patches from the grids. Reconstructed patches may then be converted to point clouds and further to volumetric video by an image reconstructor 723.

[0227] Figure 9b depicts some elements of a decoder 920, in accordance with an embodiment. These elements may be a part of the decoding element of Figure 14b, for example. A demultiplexer 922 demultiplexes different information streams to the correct decoding elements. The compressed geometry image and the compressed texture images are provided to a video decompression element 924 for decompressing to obtain a decompressed geometry image and a decompressed texture image. The compressed occupancy map is provided to an occupancy map decompressing element 926 to obtain a decompressed occupancy map, and the compressed auxiliary patch information is provided to an auxiliary patch information decompressing element 928 to obtain decompressed auxiliary patch information. A geometry reconstruction element 930 uses the decompressed information to reconstruct the geometry image. The reconstructed geometry image may be smoothed by a smoothing element 932. A texture reconstruction element 934 uses the decompressed video information and geometry information to reconstruct the texture image.

[0228] In an embodiment, a displacement vector or its opposite vector is decoded from or along the bitstream comprising the compressed texture and/or geometry image(s). For example, the displacement vector or its opposite vector may be decoded from the auxiliary patch information where it may be one piece of the per-patch information. The displacement vector or its opposite vector is used to interpret the sample locations in the decoded texture and/or geometry image(s) and/or occupancy map(s). In this manner the reconstruction of the decoded point cloud from the decoded texture image(s), geometry image(s), and occupancy map(s) may use a conventional process except that the 2D sample locations within the decoded picture relative to the sampling grid of the projection plane are interpreted by taking into account the displacement vector or its opposite vector per each patch (when available). In an embodiment, modified versions of the decoded texture image(s), geometry image(s) and/or occupancy map(s) are generated by shifting each patch by the opposite of the displacement vector, when available, and the modified versions are used in the subsequent process for reconstructing the point cloud.
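A minimal sketch of generating such a modified decoded image by shifting one patch by the opposite of its displacement vector; the array indexing, the sign convention, and the function name are assumptions for illustration.

```python
import numpy as np

def undo_patch_displacement(decoded_image, bbox, displacement):
    """Copy one patch from its displaced location back to its nominal bounding box.

    decoded_image: decoded 2D array (texture channel, geometry, or occupancy map).
    bbox: nominal (u0, v0, u1, v1) bounding box on the projection-plane sampling grid.
    displacement: per-patch (x1, y1) decoded from the auxiliary patch information.
    The assumption here is that the encoder stored the patch at bbox + displacement;
    shifting by the opposite vector restores the nominal location.
    """
    u0, v0, u1, v1 = bbox
    x1, y1 = displacement
    shifted = np.copy(decoded_image)
    shifted[v0:v1, u0:u1] = decoded_image[v0 + y1:v1 + y1, u0 + x1:u1 + x1]
    return shifted
```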

[0229] In H.264/AVC and HEVC, it is possible to code sample arrays as separate colour planes into the bitstream and respectively decode separately coded colour planes from the bitstream. When separate colour planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.

[0230] A partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.

[0231] When describing the operation of HEVC encoding and/or decoding, the following terms may be used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A CU with the maximum allowed size may be named as LCU (largest coding unit) or coding tree unit (CTU) and the video picture is divided into non-overlapping LCUs.

[0232] A CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU. Typically, a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes. Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively. Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g. motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs).

[0233] Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It is typically signalled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU. The division of the image into CUs, and division of CUs into PUs and TUs is typically signalled in the bitstream allowing the decoder to reproduce the intended structure of these units.

[0234] In HEVC, a picture can be partitioned in tiles, which are rectangular and contain an integer number of LCUs. In HEVC, the partitioning to tiles forms a regular grid, where heights and widths of tiles differ from each other by one LCU at the maximum. In HEVC, a slice is defined to be an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. In HEVC, a slice segment is defined to be an integer number of coding tree units ordered consecutively in the tile scan and contained in a single NAL unit. The division of each picture into slice segments is a partitioning. In HEVC, an independent slice segment is defined to be a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment, and a dependent slice segment is defined to be a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order. In HEVC, a slice header is defined to be the slice segment header of the independent slice segment that is a current slice segment or is the independent slice segment that precedes a current dependent slice segment, and a slice segment header is defined to be a part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment. The CUs are scanned in the raster scan order of LCUs within tiles or within a picture, if tiles are not in use. Within an LCU, the CUs have a specific scan order.

[0235] A motion-constrained tile set (MCTS) is such that the inter prediction process is constrained in encoding such that no sample value outside the motion-constrained tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the motion-constrained tile set, is used for inter prediction of any sample within the motion-constrained tile set. Additionally, the encoding of an MCTS is constrained in a manner that motion vector candidates are not derived from blocks outside the MCTS.

This may be enforced by turning off temporal motion vector prediction of HEVC, or by disallowing the encoder to use the TMVP candidate or any motion vector prediction candidate following the TMVP candidate in the merge or AMVP candidate list for PUs located directly left of the right tile boundary of the MCTS except the last one at the bottom right of the MCTS. In general, an MCTS may be defined to be a tile set that is independent of any sample values and coded data, such as motion vectors, that are outside the MCTS. In some cases, an MCTS may be required to form a rectangular area. It should be understood that depending on the context, an MCTS may refer to the tile set within a picture or to the respective tile set in a sequence of pictures. The respective tile set may be, but in general need not be, collocated in the sequence of pictures.

[0236] It is noted that sample locations used in inter prediction may be saturated by the encoding and/or decoding process so that a location that would be outside the picture otherwise is saturated to point to the corresponding boundary sample of the picture. Hence, if a tile boundary is also a picture boundary, in some use cases, encoders may allow motion vectors to effectively cross that boundary or a motion vector to effectively cause fractional sample interpolation that would refer to a location outside that boundary, since the sample locations are saturated onto the boundary. In other use cases, specifically if a coded tile may be extracted from a bitstream where it is located on a position adjacent to a picture boundary to another bitstream where the tile is located on a position that is not adjacent to a picture boundary, encoders may constrain the motion vectors on picture boundaries similarly to any MCTS boundaries.

[0237] The temporal motion-constrained tile sets SEI message of HEVC can be used to indicate the presence of motion-constrained tile sets in the bitstream.

[0238] The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.

[0239] The filtering may for example include one or more of the following: deblocking, sample adaptive offset (SAO), and/or adaptive loop filtering (ALF). H.264/AVC includes deblocking, whereas HEVC includes both deblocking and SAO.

[0240] In typical video codecs the motion information is indicated with motion vectors associated with each motion compensated image block, such as a prediction unit. Each of these motion vectors represents the displacement of the image block in the picture to be coded (on the encoder side) or decoded (on the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, those are typically coded differentially with respect to block specific predicted motion vectors. In typical video codecs the predicted motion vectors are created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signalling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, it can be predicted which reference picture(s) are used for motion-compensated prediction and this prediction information may be represented for example by a reference index of a previously coded/decoded picture. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture.

Moreover, typical high efficiency video codecs employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes a motion vector and a corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signalled using a motion field candidate list filled with motion field information of available adjacent/co-located blocks.

[0241] In typical video codecs the prediction residual after motion compensation is first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation within the residual, and the transform can in many cases help reduce this correlation and provide more efficient coding.

[0242] Typical video encoders utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired coding mode for a block and associated motion vectors. This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:

C = D + λR, (1)

where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, and R the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
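For illustration only, the cost in equation (1) and a mode decision based on it could be computed as follows; the function names and the example numbers are hypothetical.

```python
def lagrangian_cost(distortion, rate_bits, lam):
    """Equation (1): C = D + lambda * R."""
    return distortion + lam * rate_bits

def choose_coding_mode(candidates, lam):
    """Pick the candidate with the smallest Lagrangian cost.

    candidates: iterable of (mode, distortion, rate_bits) tuples, e.g. produced by
    evaluating each coding mode and its associated motion vectors.
    """
    return min(candidates, key=lambda c: lagrangian_cost(c[1], c[2], lam))

# Example: a cheaper-to-signal mode wins when its extra distortion is small enough.
best = choose_coding_mode([("merge", 105.0, 4), ("amvp", 100.0, 30)], lam=0.5)
```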

[0243] Video coding standards and specifications may allow encoders to divide a coded picture into coded slices or alike. In H.264/AVC and HEVC, in-picture prediction is typically disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture into independently decodable pieces, and slices are therefore often regarded as elementary units for transmission. In many cases, encoders may indicate in the bitstream which types of in-picture prediction are turned off across slice boundaries, and the decoder operation takes this information into account for example when concluding which prediction sources are available. For example, samples from a neighboring CU may be regarded as unavailable for intra prediction, if the neighboring CU resides in a different slice.

[0244] An elementary unit for the output of an H.264/AVC or HEVC encoder and the input of an H.264/AVC or HEVC decoder, respectively, is a Network Abstraction Layer (NAL) unit. For transport over packet-oriented networks or storage into structured files, NAL units may be encapsulated into packets or similar structures. A bytestream format has been specified in H.264/AVC and HEVC for transmission or storage environments that do not provide framing structures. The bytestream format separates NAL units from each other by attaching a start code in front of each NAL unit. To avoid false detection of NAL unit boundaries, encoders run a byte-oriented start code emulation prevention algorithm, which adds an emulation prevention byte to the NAL unit payload if a start code would have occurred otherwise. In order to enable straightforward gateway operation between packet- and stream-oriented systems, start code emulation prevention may always be performed regardless of whether the bytestream format is in use or not. A NAL unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP interspersed as necessary with emulation prevention bytes. A raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit. An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.

[0245] NAL units consist of a header and payload. In H.264/AVC and HEVC, the NAL unit header indicates the type of the NAL unit.
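The start code emulation prevention described above can be sketched as follows; this is a simplified illustration, not the normative encoder process.

```python
def add_emulation_prevention(rbsp: bytes) -> bytes:
    """Insert emulation prevention bytes (0x03) so the payload cannot emulate a start code."""
    out = bytearray()
    zero_run = 0
    for b in rbsp:
        if zero_run >= 2 and b <= 0x03:
            out.append(0x03)  # emulation prevention byte breaks the 0x000000..0x000003 pattern
            zero_run = 0
        out.append(b)
        zero_run = zero_run + 1 if b == 0x00 else 0
    return bytes(out)

# Example: 00 00 01 inside a payload becomes 00 00 03 01 so it cannot look like a start code.
assert add_emulation_prevention(bytes([0x00, 0x00, 0x01])) == bytes([0x00, 0x00, 0x03, 0x01])
```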

[0246] In HEVC, a two-byte NAL unit header is used for all specified NAL unit types. The NAL unit header contains one reserved bit, a six-bit NAL unit type indication, a three-bit nuh_temporal_id_plus1 indication for temporal level (may be required to be greater than or equal to 1) and a six-bit nuh_layer_id syntax element. The temporal_id_plus1 syntax element may be regarded as a temporal identifier for the NAL unit, and a zero-based TemporalId variable may be derived as follows: TemporalId = temporal_id_plus1 - 1. The abbreviation TID may be used interchangeably with the TemporalId variable. TemporalId equal to 0 corresponds to the lowest temporal level. The value of temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes. The bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming.

Consequently, a picture having TemporalId equal to tid_value does not use any picture having a TemporalId greater than tid_value as an inter prediction reference.
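A minimal sketch of parsing the two-byte HEVC NAL unit header fields described above; the bit layout follows the order just listed, and the function name is hypothetical.

```python
def parse_hevc_nal_header(header: bytes):
    """Parse the two-byte HEVC NAL unit header (header[0], header[1])."""
    forbidden_zero_bit = (header[0] >> 7) & 0x01
    nal_unit_type = (header[0] >> 1) & 0x3F                      # six bits
    nuh_layer_id = ((header[0] & 0x01) << 5) | (header[1] >> 3)  # six bits
    nuh_temporal_id_plus1 = header[1] & 0x07                     # three bits, must be non-zero
    temporal_id = nuh_temporal_id_plus1 - 1                      # TemporalId = temporal_id_plus1 - 1
    return nal_unit_type, nuh_layer_id, temporal_id

# Example: header bytes 0x40 0x01 indicate NAL unit type 32 in layer 0 with TemporalId 0.
print(parse_hevc_nal_header(bytes([0x40, 0x01])))  # (32, 0, 0)
```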

[0247] NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units are typically coded slice NAL units. In HEVC, VCL NAL units contain syntax elements representing one or more CUs.

[0248] A non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit. Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.

[0249] Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set. In addition to the parameters that may be needed by the decoding process, the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation. In HEVC a sequence parameter set RBSP includes parameters that can be referred to by one or more picture parameter set RBSPs or one or more SEI NAL units containing a buffering period SEI message. A picture parameter set contains such parameters that are likely to be unchanged in several coded pictures. A picture parameter set RBSP may include parameters that can be referred to by the coded slice NAL units of one or more coded pictures.

[0250] In HEVC, a video parameter set (VPS) may be defined as a syntax structure containing syntax elements that apply to zero or more entire coded video sequences as determined by the content of a syntax element found in the SPS referred to by a syntax element found in the PPS referred to by a syntax element found in each slice segment header.

[0251] A video parameter set RBSP may include parameters that can be referred to by one or more sequence parameter set RBSPs.

[0252] Out-of-band transmission, signaling or storage can additionally or alternatively be used for other purposes than tolerance against transmission errors, such as ease of access or session negotiation. For example, a sample entry of a track in a file conforming to the ISO Base Media File Format may comprise parameter sets, while the coded data in the bitstream is stored elsewhere in the file or in another file.

[0253] An SEI NAL unit may contain one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, rendering, error detection, error concealment, and resource reservation. Several SEI messages are specified in H.264/AVC and HEVC, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use. H.264/AVC and HEVC contain the syntax and semantics for the specified SEI messages but no process for handling the messages in the recipient is defined. Consequently, encoders are required to follow the H.264/AVC standard or the HEVC standard when they create SEI messages, and decoders conforming to the H.264/AVC standard or the HEVC standard, respectively, are not required to process SEI messages for output order conformance. One of the reasons to include the syntax and semantics of SEI messages in H.264/AVC and HEVC is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.

[0254] In HEVC, there are two types of SEI NAL units, namely the suffix SEI NAL unit and the prefix SEI NAL unit, having a different nal unit type value from each other. The SEI message(s) contained in a suffix SEI NAL unit are associated with the VCL NAL unit preceding, in decoding order, the suffix SEI NAL unit. The SEI message(s) contained in a prefix SEI NAL unit are associated with the VCL NAL unit following, in decoding order, the prefix SEI NAL unit.

[0255] A coded picture is a coded representation of a picture.

[0256] In HEVC, a coded picture may be defined as a coded representation of a picture containing all coding tree units of the picture. In HEVC, an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id. In addition to containing the VCL NAL units of the coded picture, an access unit may also contain non-VCL NAL units. Said specified classification rule may for example associate pictures with the same output time or picture output count value into the same access unit.

[0257] A bitstream may be defined as a sequence of bits, in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences. A first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol. An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams. The end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream. In HEVC and its current draft extensions, the EOB NAL unit is required to have nuh_layer_id equal to 0.

[0258] In H.264/AVC, a coded video sequence is defined to be a sequence of consecutive access units in decoding order from an IDR access unit, inclusive, to the next IDR access unit, exclusive, or to the end of the bitstream, whichever appears earlier.

[0259] In HEVC, a coded video sequence (CVS) may be defined, for example, as a sequence of access units that consists, in decoding order, of an IRAP access unit with NoRaslOutputFlag equal to 1, followed by zero or more access units that are not IRAP access units with NoRaslOutputFlag equal to 1, including all subsequent access units up to but not including any subsequent access unit that is an IRAP access unit with NoRaslOutputFlag equal to 1. An IRAP access unit may be defined as an access unit in which the base layer picture is an IRAP picture. The value of NoRaslOutputFlag is equal to 1 for each IDR picture, each BLA picture, and each IRAP picture that is the first picture in that particular layer in the bitstream in decoding order, or is the first IRAP picture that follows an end of sequence NAL unit having the same value of nuh_layer_id in decoding order. There may be means to provide the value of HandleCraAsBlaFlag to the decoder from an external entity, such as a player or a receiver, which may control the decoder. HandleCraAsBlaFlag may be set to 1 for example by a player that seeks to a new position in a bitstream or tunes into a broadcast and starts decoding from a CRA picture. When HandleCraAsBlaFlag is equal to 1 for a CRA picture, the CRA picture is handled and decoded as if it were a BLA picture.

[0260] In HEVC, a coded video sequence may additionally or alternatively (to the specification above) be specified to end, when a specific NAL unit, which may be referred to as an end of sequence (EOS) NAL unit, appears in the bitstream and has nuh_layer_id equal to 0.

[0261] A group of pictures (GOP) and its characteristics may be defined as follows. A GOP can be decoded regardless of whether any previous pictures were decoded. An open GOP is such a group of pictures in which pictures preceding the initial intra picture in output order might not be correctly decodable when the decoding starts from the initial intra picture of the open GOP. In other words, pictures of an open GOP may refer (in inter prediction) to pictures belonging to a previous GOP. An HEVC decoder can recognize an intra picture starting an open GOP, because a specific NAL unit type, the CRA NAL unit type, may be used for its coded slices. A closed GOP is such a group of pictures in which all pictures can be correctly decoded when the decoding starts from the initial intra picture of the closed GOP. In other words, no picture in a closed GOP refers to any pictures in previous GOPs. In H.264/AVC and HEVC, a closed GOP may start from an IDR picture. In HEVC a closed GOP may also start from a BLA_W_RADL or a BLA_N_LP picture. An open GOP coding structure is potentially more efficient in compression compared to a closed GOP coding structure, due to a larger flexibility in selection of reference pictures.

[0262] A Decoded Picture Buffer (DPB) may be used in the encoder and/or in the decoder. There are two reasons to buffer decoded pictures, for references in inter prediction and for reordering decoded pictures into output order. As H.264/AVC and HEVC provide a great deal of flexibility for both reference picture marking and output reordering, separate buffers for reference picture buffering and output picture buffering may waste memory resources. Hence, the DPB may include a unified decoded picture buffering process for reference pictures and output reordering. A decoded picture may be removed from the DPB when it is no longer used as a reference and is not needed for output.

[0263] In many coding modes of H.264/AVC and HEVC, the reference picture for inter prediction is indicated with an index to a reference picture list. The index may be coded with variable length coding, which usually causes a smaller index to have a shorter value for the corresponding syntax element. In H.264/AVC and HEVC, two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice.

[0264] Many coding standards, including H.264/AVC and HEVC, may have a decoding process to derive a reference picture index to a reference picture list, which may be used to indicate which one of the multiple reference pictures is used for inter prediction for a particular block. A reference picture index may be coded by an encoder into the bitstream in some inter coding modes or it may be derived (by an encoder and a decoder) for example using neighboring blocks in some other inter coding modes.

[0265] Several candidate motion vectors may be derived for a single prediction unit. For example, HEVC includes two motion vector prediction schemes, namely the advanced motion vector prediction (AMVP) and the merge mode. In the AMVP or the merge mode, a list of motion vector candidates is derived for a PU. There are two kinds of candidates: spatial candidates and temporal candidates, where temporal candidates may also be referred to as TMVP candidates.

[0266] A candidate list derivation may be performed for example as follows, while it should be understood that other possibilities may exist for candidate list derivation. If the occupancy of the candidate list is not at maximum, the spatial candidates are included in the candidate list first if they are available and do not already exist in the candidate list. After that, if occupancy of the candidate list is not yet at maximum, a temporal candidate is included in the candidate list. If the number of candidates still does not reach the maximum allowed number, the combined bi-predictive candidates (for B slices) and a zero motion vector are added in. After the candidate list has been constructed, the encoder decides the final motion information from candidates for example based on a rate-distortion optimization (RDO) decision and encodes the index of the selected candidate into the bitstream. Likewise, the decoder decodes the index of the selected candidate from the bitstream, constructs the candidate list, and uses the decoded index to select a motion vector predictor from the candidate list.
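The candidate list construction described above could be sketched as follows; this is an illustration of the general ordering only, not the exact HEVC derivation, and combined bi-predictive candidates are omitted.

```python
def build_candidate_list(spatial, temporal, max_candidates):
    """Construct a motion vector candidate list in the order described above (illustrative).

    spatial, temporal: lists of available candidate motion vectors as (x, y) tuples.
    A zero motion vector pads the list to the maximum allowed number of candidates.
    """
    candidates = []
    for mv in spatial:
        if len(candidates) < max_candidates and mv not in candidates:
            candidates.append(mv)
    if temporal and len(candidates) < max_candidates:
        candidates.append(temporal[0])  # the TMVP candidate
    while len(candidates) < max_candidates:
        candidates.append((0, 0))
    return candidates

# Example: two distinct spatial candidates, one temporal candidate, padded to five entries.
print(build_candidate_list([(1, 0), (1, 0), (0, 2)], [(3, 1)], 5))
```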

[0267] In HEVC, AMVP and the merge mode may be characterized as follows. In AMVP, the encoder indicates whether uni-prediction or bi-prediction is used and which reference pictures are used as well as encodes a motion vector difference. In the merge mode, only the chosen candidate from the candidate list is encoded into the bitstream indicating the current prediction unit has the same motion information as that of the indicated predictor. Thus, the merge mode creates regions composed of neighbouring prediction blocks sharing identical motion information, which is only signalled once for each region.

[0268] Texture picture(s) and the respective geometry picture(s) may have the same or different chroma format.

[0269] Depending on the context, a pixel may be defined to be a sample of one of the sample arrays of the picture or may be defined to comprise the collocated samples of all the sample arrays of the picture.

[0270] Projecting 3D data onto 2D planes is independent from the 3D scene model representation format. There exist several approaches for projecting 3D data onto 2D planes, with the respective signalling. For example, there exist several mappings from spherical coordinates to planar coordinates, known from map projections of the globe, and the type and parameters of such projection may be signalled. For cylindrical projections, the aspect ratio of height and width may be signalled.
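As one well-known example of such a mapping, an equirectangular projection from spherical to planar coordinates can be sketched as follows; the coordinate conventions and names are assumptions and other mappings may be signalled instead.

```python
import math

def spherical_to_equirectangular(azimuth, elevation, width, height):
    """Map spherical coordinates (radians) to pixel coordinates of an equirectangular picture.

    azimuth in [-pi, pi), elevation in [-pi/2, pi/2]; the exact conventions (origin,
    orientation) vary between specifications and are an assumption here.
    """
    u = (azimuth / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - elevation / math.pi) * height
    return u, v

# Example: the point straight ahead (azimuth 0, elevation 0) maps to the picture centre.
print(spherical_to_equirectangular(0.0, 0.0, 3840, 1920))  # (1920.0, 960.0)
```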

[0271] In the above, some embodiments have been described with reference to encoding. It needs to be understood that said encoding may comprise one or more of the following: encoding source image data into a bitstream, encapsulating the encoded bitstream in a container file and/or in packet(s) or stream(s) of a communication protocol, and announcing or describing the bitstream in a content description, such as the Media Presentation Description (MPD) of ISO/IEC 23009-1 (known as MPEG-DASH) or the IETF Session Description Protocol (SDP). Similarly, some embodiments have been described with reference to decoding. It needs to be understood that said decoding may comprise one or more of the following: decoding image data from a bitstream, decapsulating the bitstream from a container file and/or from packet(s) or stream(s) of a communication protocol, and parsing a content description of the bitstream.

[0272] In the above, some embodiments have been described with reference to encoding or decoding texture pictures, geometry pictures (e.g. depth pictures), and/or projection geometry information into or from a single bitstream. It needs to be understood that embodiments can be similarly realized when encoding or decoding texture pictures, geometry pictures, and/or projection geometry information into or from several bitstreams that are associated with each other, e.g. by metadata in a container file or media presentation description for streaming.

[0273] The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time division multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology. A communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.

[0274] Although the above examples describe embodiments of the invention operating within a wireless communication device, it would be appreciated that the invention as described above may be implemented as a part of any apparatus comprising a circuitry in which radio frequency signals are transmitted and received. Thus, for example, embodiments of the invention may be implemented in a mobile phone, in a base station, in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. wireless local area network, cellular radio, etc.).

[0275] In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

[0276] Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

[0277] Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well-established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

[0278] The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. Nevertheless, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.