

Title:
METHOD AND APPARATUS FOR CONTENT-AWARE POINT CLOUD COMPRESSION USING HEVC TILES
Document Type and Number:
WIPO Patent Application WO/2020/072304
Kind Code:
A1
Abstract:
A method includes receiving a data cloud including a plurality of data points. The method further includes identifying each data point including a region-of-interest (ROI) and dividing the data cloud into a ROI cloud and one or more non-ROI clouds. The method includes performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The method includes performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two dimensional (2D) map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile of the map, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile.

Inventors:
VOSOUGHI ARASH (US)
YEA SEHOON (US)
LIU SHAN (US)
Application Number:
PCT/US2019/053453
Publication Date:
April 09, 2020
Filing Date:
September 27, 2019
Assignee:
TENCENT AMERICA LLC (US)
International Classes:
H04N19/597; H04N19/119; H04N19/43; H04N19/513; H04N19/82
Foreign References:
US20170347120A1 (2017-11-30)
US20170094269A1 (2017-03-30)
US20170118540A1 (2017-04-27)
US20150016504A1 (2015-01-15)
US20150350652A1 (2015-12-03)
US20130114703A1 (2013-05-09)
US20180268570A1 (2018-09-20)
Other References:
PERRA ET AL.: "HIGH EFFICIENCY CODING OF LIGHT FIELD IMAGES BASED ON TILING AND PSEUDO-TEMPORAL DATA ARRANGEMENT", IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, July 2016 (2016-07-01), Seattle, USA, XP032970808, Retrieved from the Internet [retrieved on 20191126]
See also references of EP 3861753A4
Attorney, Agent or Firm:
MA, Johnny et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method performed by a video encoder comprising:

receiving a data cloud including a plurality of data points representing a three- dimensional (3D) space;

identifying each data point including a region-of-interest (ROI) associated with the data cloud;

dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI;

performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and

performing a patch packing process on the ROI cloud, the patch packing process including:

(i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map,

(ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and

(iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

2. The method according to claim 1, further comprising:

performing the patch generation process on each non-ROI cloud including creating a non-ROI patch for each data point that does not include the ROI, and

performing the patch packing process on each non-ROI cloud including mapping each of the non-ROI patches to one or more empty spaces in the two dimensional map that do not include the ROI patch.

3. The method according to claim 1, wherein the patch packing process on the ROI cloud and the patch packing process for each non-ROI cloud is performed in parallel.

4. The method according to claim 2, further comprising:

compressing the tile including each of the ROI patches in accordance with a first compression ratio; and

compressing each other tile from the plurality of tiles not including the ROI patches in accordance with a second compression ratio higher than the first compression ratio.

5. The method according to claim 1, further comprising:

determining whether the ROI is larger than each tile included in the 2D map; and in response to the determination that the ROI is larger than each tile included in the 2D map, dividing the ROI cloud into one or more sub-ROI clouds,

wherein the patch generation process and the patch packing process is performed for each of the one or more sub-ROI clouds.

6. The method according to claim 5, wherein the patch packing process is performed on the one or more sub-ROI clouds in parallel.

7. The method according to claim 1, the method further comprising:

determining whether a size of each tile in the 2D map is specified by the video encoder; and

in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, setting a height of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

8. The method according to claim 7, the method further comprising:

in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, setting a width of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

9. The method according to claim 2, wherein the data cloud includes a plurality of ROIs, the data cloud is divided into a plurality of ROI clouds with each ROI cloud corresponding to a respective ROI, and the patch generation process and the patch packing process is performed on each ROI cloud.

10. The method according to claim 9, wherein the patch packing process performed on each ROI cloud results in each ROI being mapped to a different tile in the 2D map.

11. A video encoder comprising:

processing circuitry configured to:

receive a data cloud including a plurality of data points representing a three- dimensional (3D) space,

identify each data point including a region-of-interest (ROI) associated with the data cloud,

divide the data cloud into a ROI cloud and one or more non -ROI clouds, the ROI cloud including each data point including the ROI,

perform a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and perform a patch packing process on the ROI cloud, the patch packing process that includes:

(i) map each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map,

(ii) determine whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and

(iii) in response to the determination that at least two ROI patches are located in more than one tile, move each of the ROI patches to a tile from the plurality of tiles.

12. The video encoder according to claim 11, wherein the processing circuitry is further configured to:

perform the patch generation process on each non-ROI cloud including creating a non-ROI patch for each data point that does not include the ROI, and

perform the patch packing process on each non-ROI cloud including mapping each of the non-ROI patches to one or more empty spaces in the two dimensional map that do not include the ROI patch.

13. The video encoder according to claim 11, wherein the patch packing process on the ROI cloud and the patch packing process for each non-ROI cloud is performed in parallel.

14. The video encoder according to claim 12, wherein the processing circuitry is further configured to:

compress the tile including each of the ROI patches in accordance with a first compression ratio, and compress each other tile from the plurality of tiles not including the ROI patches in accordance with a second compression ratio higher than the first compression ratio.

15. The video encoder according to claim 11, wherein the processing circuitry is further configured to:

determine whether the ROI is larger than each tile included in the 2D map; and in response to the determination that the ROI is larger than each tile included in the 2D map, divide the ROI cloud into one or more sub-ROI clouds,

wherein the patch generation process and the patch packing process is performed for each of the one or more sub-ROI clouds.

16. The video encoder according to claim 15, wherein the patch packing process is performed on the one or more sub-ROI clouds in parallel.

17. The video encoder according to claim 11, wherein the processing circuitry is further configured to:

determine whether a size of each tile in the 2D map is specified by the video encoder; and

in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, set a height of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

18. The video encoder according to claim 17, wherein the processing circuitry is further configured to: in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, set a width of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

19. The video encoder according to claim 12, wherein the data cloud includes a plurality of ROIs, the data cloud is divided into a plurality of ROI clouds with each ROI cloud corresponding to a respective ROI, and the patch generation process and the patch packing process is performed on each ROI cloud.

20. A non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a video encoder causes the processor to perform a method comprising:

receiving a data cloud including a plurality of data points representing a three- dimensional (3D) space;

identifying each data point including a region-of-interest (ROI) associated with the data cloud;

dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI;

performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and

performing a patch packing process on the ROI cloud, the patch packing process including:

(i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map,

(ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and

(iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

Description:
METHOD AND APPARATUS FOR CONTENT-AWARE POINT CLOUD

COMPRESSION USING HEVC TILES

INCORPORATION BY REFERENCE

[0001] This present disclosure claims the benefit of priority to U.S. Patent Application No. 16/568,002, filed on September 11, 2019, which claims priority to a series of U.S. Provisional Application Nos. 62/740,299, filed on October 2, 2018, and 62/775,879, filed on December 5, 2018. Each of the above-listed applications is incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] The present disclosure describes embodiments generally related to video coding.

BACKGROUND

[0003] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

[0004] Three-dimensional (3D) representations of the world are enabling more immersive forms of interaction and communication, and also allow machines to understand, interpret and navigate the world. Point clouds have emerged as one of such 3D enabling representations. The Moving Picture Experts Group (MPEG) has identified a number of use cases associated with point cloud data, and developed corresponding requirements for point cloud representation and compression.

SUMMARY

[0005] According to an exemplary embodiment, a method performed by a video encoder includes receiving a data cloud including a plurality of data points representing a three-dimensional (3D) space. The method further includes identifying each data point including a region-of-interest (ROI) associated with the data cloud. The method further includes dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI. The method further includes performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The method further includes performing a patch packing process on the ROI cloud, the patch packing process including:

(i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

[0006] According to an exemplary embodiment, a video encoder includes processing circuitry configured to receive a data cloud including a plurality of data points representing a three-dimensional (3D) space. The processing circuitry is further configured to identify each data point including a region-of-interest (ROI) associated with the data cloud. The processing circuitry is further configured to divide the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI. The processing circuitry is further configured to perform a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The processing circuitry is further configured to perform a patch packing process on the ROI cloud, the patch packing process that includes: (i) map each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map,

(ii) determine whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, move each of the ROI patches to a tile from the plurality of tiles.

[0007] According to an exemplary embodiment, a non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a video encoder causes the processor to perform a method. The method includes receiving a data cloud including a plurality of data points representing a three-dimensional (3D) space. The method further includes identifying each data point including a region-of-interest (ROI) associated with the data cloud. The method further includes dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI. The method further includes performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI. The method further includes performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:

[0009] FIG. 1A shows an example point cloud.

[0010] FIG. 1B shows a recursive subdivision process in accordance with an embodiment.

[0011] FIG. 2 shows an exemplary video codec in accordance with an embodiment.

[0012] FIG. 3 shows an example of mapping of patches onto a two-dimensional (2D) grid.

[0013] FIGs. 4A and 4B show exemplary patch packing in accordance with embodiments of the present disclosure.

[0014] FIGs. 5A and 5B show exemplary patch packing in accordance with embodiments of the present disclosure.

[0015] FIGs. 6A and 6B show exemplary patch packing in accordance with embodiments of the present disclosure.

[0016] FIG. 7 shows an example three dimensional (3D) ROI in accordance with an embodiment.

[0017] FIGs. 8A and 8B show exemplary patch packing in accordance with embodiments of the present disclosure.

[0018] FIG. 9 shows an example patch packing order in accordance with an embodiment.

[0019] FIG. 10 shows an exemplary process performed by a video codec in accordance with an embodiment.

[0020] FIG. 11 shows an exemplary process performed by a video codec in accordance with an embodiment.

[0021] FIG. 12 shows an exemplary process performed by a video codec in accordance with an embodiment.

[0022] FIG. 13 shows an exemplary process performed by a video codec in accordance with an embodiment.

[0023] FIGs. 14A and 14B show exemplary patch packing with multiple ROIs in accordance with embodiments of the present disclosure.

[0024] FIG. 15 is a schematic illustration of a computer system in accordance with an embodiment.

DETAILED DESCRIPTION OF EMBODIMENTS

[0025] Point cloud data is used to represent a three-dimensional (3D) scene or object in some emerging applications such as immersive virtual reality (VR)/augmented reality (AR)/mixed reality (MR), automotive/robotic navigation, medical imaging, and the like. A point cloud includes a collection of individual 3D points. Each point is associated with a set of 3D coordinates indicating a 3D position of the respective point and a number of other attributes such as color, surface normal, opacity, reflectance, etc. In various embodiments, input point cloud data can be quantized and subsequently organized into a 3D grid of cubic voxels that can be described using an octree data structure. A resulting voxelized octree facilitates the traversal, search, and access of the quantized point cloud data.
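To make the quantization step above concrete, the following is a minimal, illustrative Python sketch (not part of the disclosure) that snaps raw points onto a cubic voxel grid; the voxel_size parameter and the dictionary-based grid are assumptions chosen only for the example.

    # Illustrative sketch: quantize a point cloud onto a cubic voxel grid.
    from collections import defaultdict

    def voxelize(points, voxel_size=1.0):
        """Map each (x, y, z, attributes...) point to an integer voxel index."""
        grid = defaultdict(list)
        for x, y, z, *attrs in points:
            key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            grid[key].append((x, y, z, *attrs))
        return grid

    # Example: three colored points, two of which share a voxel at resolution 2.0.
    cloud = [(0.4, 0.2, 0.9, 255, 0, 0), (1.1, 0.8, 1.5, 0, 255, 0), (5.0, 5.0, 5.0, 0, 0, 255)]
    print(list(voxelize(cloud, voxel_size=2.0).keys()))  # [(0, 0, 0), (2, 2, 2)]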

[0026] A point cloud is a set of points in a 3D space, each with associated attributes, e.g., color, material properties, etc. FIG. 1A illustrates an example point cloud with points P0-P8. Point clouds can be used to reconstruct an object or a scene as a composition of such points. They can be captured using multiple cameras and depth sensors in various setups, and may be made up of thousands or even billions of points in order to realistically represent reconstructed scenes.

[0027] Compression technologies are needed to reduce the amount of data required to represent a point cloud. As such, technologies are needed for lossy compression of point clouds for use in real-time communications and six Degrees of Freedom (6DoF) virtual reality. In addition, technologies are sought for lossless point cloud compression in the context of dynamic mapping for autonomous driving and cultural heritage applications, etc. Further, standards are needed to address compression of geometry and attributes (e.g., colors and reflectance), scalable/progressive coding, coding of sequences of point clouds captured over time, and random access to subsets of the point cloud.

[0028] FIG. 1B illustrates an example of a 2D occupancy map 110. The occupancy map may be a binary 2D image where 1's and 0's represent occupied and unoccupied pixels, respectively. Back projection may be used to reconstruct the point cloud using the 2D occupancy map 110 and the geometry video.
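As an illustration of the occupancy map just described, the sketch below builds a binary map from a list of packed patch rectangles; the rectangle representation and the dimensions are assumptions for the example, not the codec's actual data structures.

    # Illustrative sketch: 1 marks a pixel covered by a patch, 0 marks background.
    def build_occupancy_map(width, height, patch_rects):
        """patch_rects: list of (u0, v0, w, h) rectangles occupied by packed patches."""
        occ = [[0] * width for _ in range(height)]
        for u0, v0, w, h in patch_rects:
            for v in range(v0, min(v0 + h, height)):
                for u in range(u0, min(u0 + w, width)):
                    occ[v][u] = 1
        return occ

    occ = build_occupancy_map(8, 4, [(1, 1, 3, 2)])
    print(occ[1])  # [0, 1, 1, 1, 0, 0, 0, 0]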

[0029] According to some embodiments, video codecs compress geometry, motion, and texture information of a dynamic point cloud as three separate video sequences. The extra metadata needed to interpret the three video sequences (i.e., the occupancy map and auxiliary patch information) may be compressed separately. The metadata information may represent a small amount of an overall bitstream, and the metadata could be efficiently encoded/decoded by using a software implementation. The bulk of the information may be handled by the video codec.

[0030] FIG. 2 illustrates an embodiment of a video codec 200. A point cloud frame is inputted into a patch generation unit 202 to generate patches from the point cloud frame.

After the patch generation is performed, a packing unit 204 receives the output from the patch generation unit 202 to perform a packing process on the patches. The output of the packing unit 204 is fed into texture image generation unit 208. The texture image generation unit 208 receives smoothed geometry from smoothing unit 210 and outputs texture images to image padding unit 212. For example, geometry is first reconstructed using the decompressed geometry video and decompressed occupancy map. Geometry smoothing is applied on the resulting cloud to alleviate distortion due to video codec compression artifacts at patch boundaries. Geometry image generation unit 206 receives inputs from the point cloud frame, patch info, and the output of the packing unit 204. The patch info may include information such as patch origin along with a shift with respect to an origin of the image, patch size, etc. Geometry image generation unit 206 outputs geometry images to image padding unit 212. The image padding unit 212 further receives an occupancy map. Image padding unit 212 outputs padded geometry images and padded texture images to video compression unit 218. Video compression unit 218 outputs compressed geometry video and compressed texture video to multiplexer 220. The video compression unit 218 further feeds back reconstructed geometry images to smoothing unit 210. Occupancy map compression unit 214 receives patch info, and outputs a compressed occupancy map to multiplexer 220. Auxiliary patch info compression unit 216 receives patch info and outputs compressed auxiliary patch information to multiplexer 220. The multiplexer 220 outputs a compressed bitstream.

[0031] According to some embodiments, a patch generation process decomposes a point cloud into a minimum number of patches with smooth boundaries, while also minimizing a reconstruction error. Encoders may implement various methods to generate this type of decomposition.

[0032] According to some embodiments, a packing process maps the patches onto a 2D grid, as illustrated in FIG. 3, while minimizing unused space and guaranteeing that every MxM (e.g., 16x16) block of the grid is associated with a unique patch. M may be an encoder-defined parameter that is encoded in a bitstream and sent to the decoder. In FIG. 3, patches may be readily distinguished from the background. In some examples, the occupancy map is a binary image with exactly the same dimensions as the image in FIG. 3, where any pixel that belongs to a patch is set to 1, and any pixel that belongs to the background is set to 0. The aforementioned occupancy map may be a full-resolution map. However, for lossy compression, the full-resolution occupancy map may be down-sampled and then compressed. At the decoder, the occupancy map may be decompressed and up-sampled back to the original full resolution. For lossless compression, however, the occupancy map is coded with the original full-resolution format.

[0033] Certain models use a packing strategy that iteratively tries to insert patches into a w x h grid, where w and h may be user-defined parameters that correspond to the resolution of the geometry/texture/motion video images to be encoded. The patch location may be determined through an exhaustive search that is applied in raster scan order. The first location that can guarantee an overlapping-free insertion of the patch may be selected, and the grid cells covered by the patch are marked as used. If no empty space in the current resolution image can fit a patch, then the height h of the grid may be temporarily doubled, and the search is performed again. At the end of the process, h may be clipped to fit the used grid cells. For a video sequence, a process that determines w and h for the entire GOP may be used.
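The following simplified Python sketch illustrates the raster-scan packing strategy with height doubling described in this paragraph; the block-granular grid, the fits helper, and the patch-size representation are assumptions for the example, and a real encoder would operate on the MxM occupancy blocks of actual patches.

    # Illustrative sketch: raster-scan insertion into a w x h block grid, doubling h when needed.
    def pack_patches(patch_sizes, w, h):
        used = [[False] * w for _ in range(h)]
        placements = []

        def fits(u0, v0, pw, ph):
            return all(not used[v][u] for v in range(v0, v0 + ph) for u in range(u0, u0 + pw))

        for pw, ph in patch_sizes:                      # patches assumed no wider than w
            placed = False
            while not placed:
                for v0 in range(len(used) - ph + 1):    # raster scan order
                    for u0 in range(w - pw + 1):
                        if fits(u0, v0, pw, ph):
                            for v in range(v0, v0 + ph):
                                for u in range(u0, u0 + pw):
                                    used[v][u] = True
                            placements.append((u0, v0))
                            placed = True
                            break
                    if placed:
                        break
                if not placed:                          # temporarily double the height, retry
                    used.extend([False] * w for _ in range(len(used)))
        max_v = max(v + ph for (u, v), (pw, ph) in zip(placements, patch_sizes))
        return placements, max_v                        # h clipped to the used rows

    print(pack_patches([(3, 2), (2, 2), (4, 1)], w=4, h=2))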

[0034] Furthermore, certain models segment an input cloud into several patches and pack those patches into two 2D images (geometry and attributes), and subsequently compress those images using High Efficiency Video Coding (HEVC). Given a specification of a region of interest (ROI) in the form of a 3D bounding box, a content-aware point cloud compression system is desired to fulfill the below features:

1. The ROI is coded with a higher quality than other parts of the point-cloud.

2. The ROI is coded independently from other parts of the point-cloud to facilitate spatial random-access without full-decoding.

3. The independent coding of ROI needs to be harmonized with any system requirements regarding independent (parallel) encoding/decoding.

4. Multiple ROIs need to be supported.

[0035] Embodiments of the present disclosure use HEVC tiles to realize content-aware point cloud compression to provide the above desired features. Embodiments of the present disclosure provide the significantly advantageous feature of running only one instance of a compression model without having to enlarge the size of a packed image.

[0036] According to some embodiments, content-aware coding is achieved by using HEVC tiles, where a given 3D ROI is projected into one (or more) tile, and that tile is coded with a higher quality. One problem with current compression models is that there is no guarantee that the 3D ROI is projected into neighboring locations. In this regard, a projection of a given ROI may span more than one tile even though it is possible to completely fit the ROI in a single tile. In a content-aware video compression system, when this situation occurs, a lower QP is chosen for more than a single tile, which leads to a performance degradation given a limited bit rate budget.

[0037] In some embodiments, to exploit HEVC tiles for efficient content-aware point cloud compression, a given 3D ROI is fitted into as few HEVC tiles as possible. A tiling map may be provided a priori (e.g., 2x2 tiling map). A ROI may be fitted into a tile (or tiles) with higher priority (or priorities). Patches may be packed into a 2D grid after they are ordered with respect to their size.

[0038] In some embodiments, the patches that intersect with a given ROI specified by a 3D bounding box are found. This results in two sets of patches: the ones that intersect with the ROI (e.g., ROI patches), and the ones that do not intersect with the ROI. The ROI patches and non-ROI patches may be packed according to their sizes. For example, a first ROI patch that is larger than a second ROI patch may be given higher priority and thus, packed before the second ROI patch to use the available space more efficiently. A condition that a 2D bounding box of an ROI patch does not extend to neighboring tiles may be specified.

FIG. 4A illustrates an example patch packing for a "soldier" picture. The boundary of each tile is depicted by a dashed line. Furthermore, each patch is bounded by a 2D bounding box depicted by a solid black line. Here, a ROI may be selected as the head of the soldier. As illustrated in FIG. 4A, there are three ROI patches that are each bounded by a 2D box, and span three different tiles. FIGs. 5A and 6A illustrate other examples in which ROI patches span more than one tile after packing.

[0039] According to some embodiments, the ROI and the rest of the points are each regarded as a separate point cloud. For example, a point cloud is separated into a ROI cloud and a non-ROI cloud. The ROI cloud includes points in the point cloud that include the ROI. The non-ROI cloud includes points in the point cloud that do not include the ROI. The ROI cloud and the non-ROI cloud independently undergo the patch generation process to generate ROI patches and non-ROI patches, respectively. After the patch generation process, the ROI patches are fitted into as few tiles as possible. After the ROI patches are mapped, the non-ROI patches are mapped to the 2D grid. FIGs. 4B, 5B, and 6B illustrate the resulting 2D grid after the packing process is performed on ROI and non-ROI patches that were generated from separate ROI and non-ROI clouds, respectively. As illustrated in FIGs. 4B, 5B, and 6B, the ROI patches are grouped together (i.e., placed) into a single tile, which leads to more efficient compression since the tiles that do not contain the ROI patches may be compressed with a lower rate (lower quality) to save more bits for the tile that contains the ROI patches.
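As a small illustration of the separation described above, the sketch below splits a point list into a ROI cloud and a non-ROI cloud using an axis-aligned 3D bounding box; the box format and the function name are assumptions for the example.

    # Illustrative sketch: separate a point cloud into ROI and non-ROI clouds.
    def split_by_roi(points, roi_box):
        """roi_box = ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
        (x0, y0, z0), (x1, y1, z1) = roi_box
        roi_cloud, non_roi_cloud = [], []
        for p in points:
            x, y, z = p[:3]
            (roi_cloud if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1
             else non_roi_cloud).append(p)
        return roi_cloud, non_roi_cloud

    pts = [(1, 1, 1), (5, 5, 5), (2, 2, 2)]
    print(split_by_roi(pts, ((0, 0, 0), (3, 3, 3))))  # ([(1, 1, 1), (2, 2, 2)], [(5, 5, 5)])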

[0040] According to some embodiments, a point cloud includes a single ROI and has a tile map specified a priori by a system requirement. A V-PCC anchor patch generation process and packing scheme typically project a given 3D ROI all over the 2D grid in a projected image. This behavior of the anchor patch generation and packing does not allow for efficient use of HEVC tiles. For example, FIG. 7 illustrates a point cloud where the ROI is colored in light gray (i.e., the area within box 700). FIG. 8A illustrates a resulting packed image in which a ROI cloud is not separated from a non-ROI cloud, and FIG. 8B illustrates a resulting packed image in which the ROI cloud is separated from the non-ROI cloud. As illustrated in FIG. 8A, the projected ROI is scattered all over the 2D grid, whereas in FIG. 8B, the projected ROI is placed into a single tile.

[0041] In some embodiments, a point cloud is separated into a ROI cloud and a non-ROI cloud. The ROI cloud includes all the ROI points in the point cloud. The non-ROI cloud includes all the non-ROI points in the point cloud. The patches for the ROI cloud and the non-ROI cloud may be generated independently, which provides the significantly advantageous feature of guaranteeing that any single patch entirely belongs either to the ROI cloud or the non-ROI cloud. Accordingly, when the patch generation process is performed independently on the ROI cloud and the non-ROI cloud, two sets of patches are produced: (1) a set of ROI patches, where all the points of any ROI patch belong to the ROI cloud, and (2) a set of non-ROI patches, where all the points of any non-ROI patch belong to the non-ROI cloud.

[0042] Table 1 illustrates an example patch process.

Table 1

[0043] roi_patch_metadata_enabled_flag indicates whether ROI patches are enabled. This flag is used to indicate that a point cloud has been separated into a separate ROI cloud and a non-ROI cloud.

[0044] roi_patch_metadata_present_flag indicates whether any ROI (single or multiple) is present or not.

[0045] number_of_roi_patches[r] indicates the number of patches belonging to the r-th ROI. The value of number_of_roi_patches[r] shall be in the range of 1 to 2^32 - 1, inclusive.
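The following non-normative sketch shows how the metadata elements named above might be assembled; Table 1 itself is not reproduced here, and the dictionary-based "bitstream" is only a stand-in for the actual coded syntax.

    # Illustrative (non-normative) sketch of the ROI patch metadata elements.
    def write_roi_patch_metadata(roi_patch_metadata_enabled_flag,
                                 roi_patch_metadata_present_flag,
                                 number_of_roi_patches):
        """number_of_roi_patches: list whose entry r gives the patch count of the r-th ROI."""
        bitstream = {"roi_patch_metadata_enabled_flag": int(roi_patch_metadata_enabled_flag)}
        if roi_patch_metadata_enabled_flag:
            bitstream["roi_patch_metadata_present_flag"] = int(roi_patch_metadata_present_flag)
            if roi_patch_metadata_present_flag:
                for r, n in enumerate(number_of_roi_patches):
                    assert 1 <= n <= 2**32 - 1          # range constraint stated above
                    bitstream[f"number_of_roi_patches[{r}]"] = n
        return bitstream

    print(write_roi_patch_metadata(True, True, [12, 7]))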

[0046] According to some embodiments, based on a tile map specified by the system, ROI patches are packed into as few tiles as possible. An example tile map may have a size W x H, where there are H rows and W columns of tiles, respectively. The H row heights may be set as TileRowHeightArray = {h_1, h_2, ..., h_H} and the W column widths may be set as TileColumnWidthArray = {w_1, w_2, ..., w_W} (see FIG. 9). FIG. 9 illustrates a proposed patch packing order (i.e., the dashed line) when a tile map is specified by the system. As illustrated in FIG. 9, a 4x6 tile map example is shown where W=4 and H=6. This patch packing order remains the same when a point cloud has multiple ROIs.
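The exact packing order traced by the dashed line in FIG. 9 is not reproduced here; the sketch below merely assumes a row-by-row visiting order over a W x H tile map and derives each tile's pixel origin from TileColumnWidthArray and TileRowHeightArray, with uniform tile sizes used only for the example.

    # Illustrative sketch: enumerate tiles of a W x H tile map with their pixel origins.
    def tile_order(tile_column_width_array, tile_row_height_array):
        x_offsets = [sum(tile_column_width_array[:c]) for c in range(len(tile_column_width_array))]
        y_offsets = [sum(tile_row_height_array[:r]) for r in range(len(tile_row_height_array))]
        order = []
        for r, y0 in enumerate(y_offsets):          # assumed order: row by row, left to right
            for c, x0 in enumerate(x_offsets):
                order.append({"tile": (c, r), "origin": (x0, y0),
                              "size": (tile_column_width_array[c], tile_row_height_array[r])})
        return order

    # 4x6 example mentioned above (W=4 columns, H=6 rows), uniform 64-pixel tiles assumed.
    for t in tile_order([64] * 4, [64] * 6)[:3]:
        print(t)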

[0047] Based on a tile map, the ROI and non-ROI patches may be packed onto the tile map in accordance with the process illustrated in FIG. 10. The process illustrated in FIG. 10 may be performed by the video codec 200. The process may start at step S1000 where an input cloud of p points and an ROI specified by a 3D bounding box are specified. The point cloud of p points may be separated into a ROI cloud and a non-ROI cloud. The process proceeds to step S1002 where a number of ROI patches are generated. The ROI patches may be generated in accordance with the process illustrated in Table 1. The process proceeds to step S1004 where a number of non-ROI patches are generated. In this regard, steps S1002 and S1004 result in the independent generation of ROI and non-ROI patches.

[0048] The process proceeds from step S1004 to step S1006 where a variable k is set to 0. The process proceeds to step S1008 where the variable k is incremented by 1. The process proceeds to step S1010 where the kth ROI patch is packed with respect to an order as illustrated in FIG. 9. The process proceeds to step S1012 where it is determined if k is equal to the number of ROI patches. If k is not equal to the number of ROI patches, the process returns from step S1012 to step S1008. Accordingly, steps S1006 to S1012 result in the packing of each ROI patch.

[0049] If k is equal to the number of ROI patches, the process proceeds to step S1014 where k is set to 0. The process proceeds to step S1016 where k is incremented by 1. The process proceeds to step S1018 where the kth non-ROI patch is packed into an empty space. The process proceeds to step S1020 where it is determined if k is equal to the number of non-ROI patches. If k is not equal to the number of non-ROI patches, the process returns from step S1020 to step S1016. Accordingly, steps S1014 to S1020 result in the packing of each non-ROI patch. If k is equal to the number of non-ROI patches, the process illustrated in FIG. 10 ends. Although the process illustrated in FIG. 10 is performed for a single ROI, when a point cloud has multiple ROIs, steps S1002 - S1012 may be performed for each ROI included in the point cloud.
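The FIG. 10 flow described above can be condensed as in the sketch below, where pack_into_tiles and pack_into_empty_space are hypothetical callbacks standing in for the actual packing routines.

    # Condensed sketch of the FIG. 10 flow: ROI patches first, then non-ROI patches.
    def pack_point_cloud(roi_patches, non_roi_patches, pack_into_tiles, pack_into_empty_space):
        placements = []
        for k in range(len(roi_patches)):            # steps S1006-S1012
            placements.append(pack_into_tiles(roi_patches[k]))
        for k in range(len(non_roi_patches)):        # steps S1014-S1020
            placements.append(pack_into_empty_space(non_roi_patches[k]))
        return placements

    # Toy usage: the callbacks here just record which list a patch came from.
    print(pack_point_cloud(["roi_a", "roi_b"], ["bg_a"],
                           lambda p: ("roi_tile", p), lambda p: ("empty_space", p)))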

[0050] According to some embodiments, the ROI cloud is divided into several smaller sub-clouds, which may be referred to as chunks. For example, when it is determined that a ROI cloud is bigger than a tile, the ROI cloud is divided into smaller chunks. In some embodiments, a patch generation process is independently performed on each chunk. Chunking generates smaller ROI patches, which provides improved packing so that more spaces in the 2D grid are filled by the projected ROI cloud.

[0051] In some embodiments, dividing the ROI cloud into chunks is based on finding eigenvectors of the ROI cloud and performing chunking along the axes corresponding to the eigenvectors. In another embodiment, chunking is performed based on finding a bounding box of the ROI points in the ROI cloud and finding the longest axis of that bounding box, where chunking may be performed along the longest axis. For example, referring to FIG. 1B, the chunking may be performed in a direction along the longest axis of the bounding box. In another embodiment, the chunking may be performed along one, two, or all three axes of the bounding box. Chunking may be performed uniformly or non-uniformly based on criteria such as, for example, a local density of points. For example, an area of the ROI cloud having a higher density of points may be divided into smaller chunks compared to an area of the ROI cloud with a lower density of points. Furthermore, one or more of the above embodiments may be combined. For example, chunking may be performed along the longest axis of the bounding box either uniformly or non-uniformly based on certain criteria.
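As one illustration of the longest-axis option described above, the sketch below computes the ROI cloud's bounding box and splits the points uniformly into C chunks along its longest axis; uniform splitting is just one of the alternatives mentioned, and the function name is an assumption.

    # Illustrative sketch: uniform chunking of the ROI cloud along its longest bounding-box axis.
    def chunk_roi_cloud(points, num_chunks):
        mins = [min(p[a] for p in points) for a in range(3)]
        maxs = [max(p[a] for p in points) for a in range(3)]
        axis = max(range(3), key=lambda a: maxs[a] - mins[a])   # longest bounding-box axis
        extent = (maxs[axis] - mins[axis]) or 1.0
        chunks = [[] for _ in range(num_chunks)]
        for p in points:
            i = min(int((p[axis] - mins[axis]) / extent * num_chunks), num_chunks - 1)
            chunks[i].append(p)
        return chunks

    pts = [(0, 0, 0), (0, 0, 4), (0, 0, 9), (0, 1, 10)]
    print([len(c) for c in chunk_roi_cloud(pts, 2)])  # [2, 2]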

[0052] FIG. 11 illustrates an embodiment of a process of performing packing on a ROI cloud divided into chunks. The process illustrated in FIG. 11 may be performed by the video codec 200. The process may start at step S1100 where an input cloud of p points and an ROI specified by a 3D bounding box are specified. The process may proceed to step S1102 where an ROI is divided into a number of chunks (C). An ROI may be divided when it is determined that the ROI does not fit into an HEVC tile.

[0053] The process proceeds to step S1104 where ROI patches for each chunk are generated and a variable c is set to 0. The ROI patches for each chunk may be generated in accordance with the process illustrated in Table 1. The process proceeds to step S1106 where the variable c is incremented by 1 and a variable k is set to 0. Steps S1108, S1110, and S1112 are performed in the same manner as steps S1008, S1010, and S1012, respectively. At step S1112, if k is equal to the number of patches for the c-th chunk, the process proceeds to step S1114, where it is determined if c is equal to the number of chunks (C). Accordingly, steps S1106 to S1114 result in packing being performed for each ROI patch in each chunk.

[0054] If c is equal to the number of chunks (C), the process proceeds from step S1114 to step S1116. Steps S1116, S1118, S1120, and S1122 are performed in the same manner as described for steps S1014, S1016, S1018, and S1020, respectively. The process illustrated in FIG. 11 ends after step S1122 is performed. Although the process illustrated in FIG. 11 is performed for a single ROI, when a point cloud has multiple ROIs, steps S1102 - S1114 may be performed for each ROI included in the point cloud.

[0055] According to some embodiments, the tile map is not specified by the system. In this regard, the tile map is not fixed and may be designed flexibly. When the tile map is not specified by the system, the point cloud may still be separated into an ROI cloud and a non-ROI cloud as discussed above, where patch generation is performed independently for each separate cloud.

[0056] When there is a single ROI, the tile map may include one horizontal tile that encloses all the ROI patches, and another horizontal tile that encloses all the non-ROI patches. Each horizontal tile may span the width of an image, and the tiles may be stacked on top of each other. In another example, when there is a single ROI, one tile may enclose all the ROI patches, and another tile may enclose all the non-ROI patches, where the tiles are concatenated, and the length of the concatenated tiles spans the width of an image.

[0057] FIG. 12 illustrates an embodiment in which ROI and non-ROI patches may be packed onto a 2D grid in which a tile map is not specified. As illustrated in FIG. 12, all the steps of the process illustrated in FIG. 10 are included. Furthermore, if at step S1012 it is determined that all the ROI patches have been packed (i.e., k is equal to the number of ROI patches), the process proceeds to step S1200 to set a bottom tile border as the maximum height of the ROI patch bounding boxes. In this regard, at step S1200, the size of the tile containing all the ROI patches may be set such that the tile encompasses all the ROI patches. Furthermore, if at step S1020 it is determined that all the non-ROI patches have been packed (i.e., k is equal to the number of non-ROI patches), the process proceeds to step S1202, where it is determined if extra tiles need to be added to the 2D grid. For example, when the tile map is not specified, the tile map can be designed to efficiently use the available space. An efficient tile map design may be a 2x1 tile map where the top tile includes the ROI patches and the bottom tile has the non-ROI patches (e.g., the design depicted in FIG. 12). To further benefit from the available design flexibility, the ROI patches may be packed into a tile with the smallest dimensions possible. In this scenario, if the ROI patches do not span the entire width of a canvas, a vertical line may be placed right after the right-most ROI patch to create one extra tile at the right, which is empty. A benefit of this approach is better compression efficiency, since the texture/geometry images are background-filled with redundant information, and background filling for that empty tile is avoided when each tile is encoded/decoded independently. If it is determined that extra tiles need to be added to the 2D grid, the process proceeds to step S1204 where a maximum bounding box width of the ROI patches is derived, and a vertical tile border is placed at the derived width. Although the process illustrated in FIG. 12 is performed for a single ROI, when a point cloud has multiple ROIs, steps S1002 - S1012, including step S1200, may be performed for each ROI included in the point cloud.
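The sketch below illustrates the tile-border derivation of steps S1200-S1204 under the assumption that ROI patch bounding boxes are given as (u0, v0, width, height) tuples; the helper name and return convention are not from the disclosure.

    # Illustrative sketch: derive tile borders from the ROI patch bounding boxes.
    def derive_tile_borders(roi_patch_bboxes, canvas_width):
        bottom_border = max(v0 + h for (u0, v0, w, h) in roi_patch_bboxes)   # step S1200
        right_border = max(u0 + w for (u0, v0, w, h) in roi_patch_bboxes)
        add_vertical_tile = right_border < canvas_width                      # steps S1202-S1204
        return bottom_border, (right_border if add_vertical_tile else None)

    print(derive_tile_borders([(0, 0, 32, 48), (32, 0, 16, 40)], canvas_width=128))  # (48, 48)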

[0058] FIG. 13 illustrates an embodiment in which ROI chunking is performed, and a tile map is not specified. As illustrated in FIG. 13, all the steps of the process illustrated in FIG. 11 are included. Furthermore, FIG. 13 includes steps S1300 and S1304, which are performed in the same manner as steps S1200 and S1204, respectively, as described above. Although the process illustrated in FIG. 13 is performed for a single ROI, when a point cloud has multiple ROIs, steps S1102 - S1114, including step S1300, may be performed for each ROI included in the point cloud.

[0059] In some embodiments, to support a random-access feature for a single ROI, the indexes of the ROI patches are sent to the decoder so that the decoder decodes only the ROI patches to reconstruct the ROI cloud. Using the indexes of the ROI patches, the decoder can determine the tiles filled by ROI patches and only decode those tiles. As an alternative to the indexes, a flag is coded for each patch indicating whether the patch is a ROI patch or a non-ROI patch. As another alternative, the indexes of the ROI patches are changed to [0, number of ROI patches - 1], where only a "number of ROI patches" is sent to the decoder; the decoder knows that the first "number of ROI patches" patches are all ROI patches, and reconstructs the ROI cloud by decoding those patches.

[0060] According to some embodiments, a point cloud may include multiple ROIs. When multiple ROIs are included in a point cloud, multiple sets of ROI patches (e.g., one set of ROI patches per ROI cloud) may be created instead of creating only a single set of ROI patches. These sets may be packed one after another in accordance with the process illustrated in FIG. 10 when the tile map is specified, or the process illustrated in FIG. 12 when the tile map is not specified. Chunking may also be applied to each ROI cloud and packed in accordance with the process illustrated in FIG. 11 when the tile map is specified, or the process illustrated in FIG. 13 when the tile map is not specified.
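As a decoder-side illustration of the single-ROI random-access idea in paragraph [0059], the sketch below selects only the ROI patches, either from explicit indexes or from a patch count under the convention that ROI patches come first; decode_patch is a placeholder, not an actual decoder API.

    # Illustrative sketch: decode only the ROI patches for random access to the ROI.
    def decode_roi_only(all_patches, roi_patch_indexes=None, number_of_roi_patches=None,
                        decode_patch=lambda p: p):
        if roi_patch_indexes is None:
            roi_patch_indexes = range(number_of_roi_patches)   # first N patches are ROI patches
        return [decode_patch(all_patches[i]) for i in roi_patch_indexes]

    patches = ["roi0", "roi1", "bg0", "bg1"]
    print(decode_roi_only(patches, number_of_roi_patches=2))   # ['roi0', 'roi1']
    print(decode_roi_only(patches, roi_patch_indexes=[0, 1]))  # ['roi0', 'roi1']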

[0061] FIG. 14A illustrates an embodiment in which each ROI of multiple ROIs is placed in a tile that spans a width of an image. FIG. 14B illustrates an embodiment in which each ROI is placed in vertically stacked tiles and an empty space is divided into additional tiles. The tile boundaries in FIGs. 14A and 14B are depicted by dashed lines.

[0062] According to some embodiments, to support the random-access feature for multiple ROIs, the indexes of the ROI patches are modified. Patches of ROI #1 may be indexed by [0, number of patches of ROI #1 - 1], patches of ROI #2 may be indexed by [number of patches of ROI #1, number of patches of ROI #1 + number of patches of ROI #2 - 1], and so forth for each additional ROI. Only the number of patches of each ROI may be sent to the decoder, and the decoder can determine the patch indexes of a particular ROI and reconstruct that ROI accordingly.

[0063] In an example decoding process, the number of patches per ROI may be specified as an array of integers as follows:

A = [n_0, n_1, n_2, ..., n_(R-1)],

where n_r indicates the number of patches of the r-th ROI.

[0064] The decoder knows that the patch indexes of the r-th ROI are in the range:

B = [n_0 + n_1 + ... + n_(r-1), n_0 + n_1 + ... + n_r - 1]

[0065] Thus, in some embodiments, to decode the r-th ROI, the decoder only needs to decode the patches with indexes given in the range B.
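A short worked example of the index-range computation above, consistent with the multi-ROI indexing of paragraph [0062]; the array values are made up for illustration.

    # Illustrative sketch: compute the patch index range B of the r-th ROI from the array A.
    def roi_patch_index_range(A, r):
        start = sum(A[:r])                 # n_0 + n_1 + ... + n_(r-1)
        end = sum(A[:r + 1]) - 1           # n_0 + n_1 + ... + n_r - 1
        return start, end

    A = [4, 3, 5]                          # e.g., three ROIs with 4, 3, and 5 patches
    for r in range(len(A)):
        print(f"ROI #{r}: patch indexes {roi_patch_index_range(A, r)}")
    # ROI #0: (0, 3)   ROI #1: (4, 6)   ROI #2: (7, 11)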

[0066] The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example, FIG. 15 shows a computer system (1500) suitable for implementing certain embodiments of the disclosed subject matter.

[0067] The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like.

[0068] The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.

[0069] The components shown in FIG. 15 for computer system (1500) are exemplary in nature and are not intended to suggest any limitation as to the scope of use or functionality of the computer software implementing embodiments of the present disclosure. Neither should the configuration of components be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary embodiment of a computer system (1500).

[0070] Computer system (1500) may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), video (such as two-dimensional video, three-dimensional video including stereoscopic video).

[0071] Input human interface devices may include one or more of (only one of each depicted): keyboard (1501), mouse (1502), trackpad (1503), touch screen (1510), data-glove (not shown), joystick (1505), microphone (1506), scanner (1507), camera (1508).

[0072] Computer system (1500) may also include certain human interface output devices. Such human interface output devices may be stimulating the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste. Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (1510), data-glove (not shown), or joystick (1505), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (1509), headphones (not depicted)), visual output devices (such as screens (1510) to include CRT screens, LCD screens, plasma screens, OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability— some of which may be capable to output two dimensional visual output or more than three dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted).

[0073] Computer system (1500) can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW (1520) with CD/DVD or the like media (1521), thumb-drive (1522), removable hard drive or solid state drive (1523), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like.

[0074] Those skilled in the art should also understand that the term "computer readable media" as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals.

[0075] Computer system (1500) can also include an interface to one or more communication networks. Networks can for example be wireless, wireline, optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general purpose data ports or peripheral buses (1549) (such as, for example, USB ports of the computer system (1500)); others are commonly integrated into the core of the computer system (1500) by attachment to a system bus as described below (for example an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system (1500) can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above.

[0076] Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (1540) of the computer system (1500).

[0077] The core (1540) can include one or more Central Processing Units (CPU) (1541), Graphics Processing Units (GPU) (1542), specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) (1543), hardware accelerators for certain tasks (1544), and so forth. These devices, along with Read-only memory (ROM) (1545), Random-access memory (1546), and internal mass storage such as internal non-user accessible hard drives, SSDs, and the like (1547), may be connected through a system bus (1548). In some computer systems, the system bus (1548) can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus (1548), or through a peripheral bus (1549). Architectures for a peripheral bus include PCI, USB, and the like.

[0078] CPUs (1541), GPUs (1542), FPGAs (1543), and accelerators (1544) can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM (1545) or RAM (1546). Transitional data can also be stored in RAM (1546), whereas permanent data can be stored, for example, in the internal mass storage (1547). Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory that can be closely associated with one or more CPU (1541), GPU (1542), mass storage (1547), ROM (1545), RAM (1546), and the like.

[0079] The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts.

[0080] As an example and not by way of limitation, the computer system having architecture (1500), and specifically the core (1540), can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGA, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core (1540) that are of non-transitory nature, such as core-internal mass storage (1547) or ROM (1545). The software implementing various embodiments of the present disclosure can be stored in such devices and executed by core (1540). A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core (1540) and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM (1546) and modifying such data structures according to the processes defined by the software. In addition or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator (1544)), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to a computer-readable media can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software.

[0081] While this disclosure has described several exemplary embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.

[0082] (1) A method performed by a video encoder includes receiving a data cloud including a plurality of data points representing a three-dimensional (3D) space; identifying each data point including a region-of-interest (ROI) associated with the data cloud; dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI; performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.

[0083] (2) The method according to feature (1), further including performing the patch generation process on each non-ROI cloud including creating a non-ROI patch for each data point that does not include the ROI, and performing the patch packing process on each non-ROI cloud including mapping each of the non-ROI patches to one or more empty spaces in the two dimensional map that do not include the ROI patch.

[0084] (3) The method according to feature (1) or (2), in which the patch packing process on the ROI cloud and the patch packing process for each non-ROI cloud is performed in parallel.

[0085] (4) The method according to feature (2) or (3), further including compressing the tile including each of the ROI patches in accordance with a first compression ratio; and compressing each other tile from the plurality of tiles not including the ROI patches in accordance with a second compression ratio higher than the first compression ratio.

[0086] (5) The method according to any one of features (1) - (4), further including determining whether the ROI is larger than each tile included in the 2D map; and in response to the determination that the ROI is larger than each tile included in the 2D map, dividing the ROI cloud into one or more sub-ROI clouds, in which the patch generation process and the patch packing process is performed for each of the one or more sub-ROI clouds.

[0087] (6) The method according to feature (5), in which the patch packing process is performed on the one or more sub-ROI clouds in parallel.

[0088] (7) The method according to any one of features (1) - (6), the method further including determining whether a size of each tile in the 2D map is specified by the video encoder; and in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, setting a height of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

[0089] (8) The method according to feature (7), the method further including in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, setting a width of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

[0090] (9) The method according to any one of features (2) - (8), in which the data cloud includes a plurality of ROIs, the data cloud is divided into a plurality of ROI clouds with each ROI cloud corresponding to a respective ROI, and the patch generation process and the patch packing process is performed on each ROI cloud.

[0091] (10) The method according to feature (9), in which the patch packing process performed on each ROI cloud results in each ROI being mapped to a different tile in the 2D map.

[0092] (11) A video encoder includes processing circuitry configured to receive a data cloud including a plurality of data points representing a three-dimensional (3D) space, identify each data point including a region-of-interest (ROI) associated with the data cloud, divide the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI, perform a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI, and perform a patch packing process on the ROI cloud, the patch packing process that includes: (i) map each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map, (ii) determine whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, move each of the ROI patches to a tile from the plurality of tiles.

[0093] (12) The video encoder according to feature (11), in which the processing circuitry is further configured to: perform the patch generation process on each non-ROI cloud including creating a non-ROI patch for each data point that does not include the ROI, and perform the patch packing process on each non-ROI cloud including mapping each of the non-ROI patches to one or more empty spaces in the two dimensional map that do not include the ROI patch.

[0094] (13) The video encoder according to feature (11) or (12), in which the patch packing process on the ROI cloud and the patch packing process for each non-ROI cloud is performed in parallel.

[0095] (14) The video encoder according to feature (12) or (13), in which the processing circuitry is further configured to compress the tile including each of the ROI patches in accordance with a first compression ratio, and compress each other tile from the plurality of tiles not including the ROI patches in accordance with a second compression ratio higher than the first compression ratio.

[0096] (15) The video encoder according to any one of features (11) - (14), in which the processing circuitry is further configured to: determine whether the ROI is larger than each tile included in the 2D map; and in response to the determination that the ROI is larger than each tile included in the 2D map, divide the ROI cloud into one or more sub-ROI clouds, in which the patch generation process and the patch packing process is performed for each of the one or more sub-ROI clouds.

[0097] (16) The video encoder according to feature (15), in which the patch packing process is performed on the one or more sub-ROI clouds in parallel.

[0098] (17) The video encoder according to any one of features (11) - (16), in which the processing circuitry is further configured to determine whether a size of each tile in the 2D map is specified by the video encoder, and in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, set a height of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches.

[0099] (18) The video encoder according to feature (17), in which the processing circuitry is further configured to in response to the determination that the size of each tile in the 2D map is not specified by the video encoder, set a width of the tile including the ROI patches such that the ROI patches are bounded by the tile including the ROI patches. [0100] (19) The video encoder according to any one of features (12) - (18), in which the data cloud includes a plurality of ROIs, the data cloud is divided into a plurality of ROI clouds with each ROI cloud corresponding to a respective ROT and the patch generation process and the patch packing process is performed on each ROI cloud.

[0101] (20) A non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a video encoder causes the processor to perform a method including receiving a data cloud including a plurality of data points representing a three-dimensional (3D) space; identifying each data point including a region- of-interest (ROI) associated with the data cloud; dividing the data cloud into a ROI cloud and one or more non-ROI clouds, the ROI cloud including each data point including the ROI; performing a patch generation process on the ROI cloud, the patch generation process including generating a ROI patch from each data point including the ROI; and performing a patch packing process on the ROI cloud, the patch packing process including: (i) mapping each ROI patch to a two dimensional (2D) map including a plurality of tiles arranged as a grid in the 2D map, (ii) determining whether at least two ROI patches from the plurality of ROI patches are located in more than one tile, and (iii) in response to the determination that at least two ROI patches are located in more than one tile, moving each of the ROI patches to a tile from the plurality of tiles.