


Title:
METHODS AND SYSTEM FOR RECONSTRUCTING TEXTURED MESHES FROM POINT CLOUD DATA
Document Type and Number:
WIPO Patent Application WO/2022/133569
Kind Code:
A1
Abstract:
In at least one embodiment, the present invention provides methods and systems for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the mesh, texturing the mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the mesh.

Inventors:
HERPIN MAXIME (CA)
Application Number:
PCT/CA2020/051787
Publication Date:
June 30, 2022
Filing Date:
December 22, 2020
Assignee:
PREVU3D INC (CA)
International Classes:
G06T17/30; G06T15/80; G09G5/377
Domestic Patent References:
WO2013029232A1 (2013-03-07)
Foreign References:
US20200020155A1 (2020-01-16)
US20200225356A1 (2020-07-16)
US20140354632A1 (2014-12-04)
US20170104980A1 (2017-04-13)
US20190213778A1 (2019-07-11)
Attorney, Agent or Firm:
MOFFAT & CO. (CA)
Claims:
WHAT IS CLAIMED IS:

1. A method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of:

Simplifying the input mesh to result in a proxy input mesh;

Parametrizing the proxy input mesh to create a parameterized proxy mesh, the parametrized proxy mesh having at least one polygon;

Transferring the parametrized proxy mesh onto the input mesh, transferring the parametrized proxy mesh onto the input mesh comprising the step of:

Defining at least one of the at least one polygon of the parametrized proxy mesh that overlaps with at least one corresponding polygon of the input mesh;

Texturing the input mesh, texturing the input mesh comprising the steps of:

Generating a texture; and

Applying the texture to at least one polygon of the parametrized proxy mesh.

2. The method of claim 1 wherein simplifying the input mesh comprises the step of: applying at least one of an edge collapse method to each of the at least one polygon of the input mesh and an angle-based decimation method to each of the at least one polygon of the input mesh.

3. The method of claim 1 or claim 2 wherein the step of defining at least one of the at least one polygon of the parametrized proxy mesh that overlaps with at least one corresponding polygon of the input mesh further comprises the step of:

Calculating a ratio of the overlapping at least one polygon of the parametrized proxy mesh and the at least one corresponding polygon of the input mesh, the ratio calculated as an average of a first projection area of the at least one polygon of the parametrized proxy mesh and the corresponding at least one polygon of the input mesh and a second projection area of the at least one polygon of the input mesh and the corresponding at least one polygon of the parametrized proxy mesh.

4. The method of any one of claims 1 to 3, wherein the input mesh is derived from point cloud data of the environment.

5. The method of any one of claims 1 to 4 further comprising the step of:

Capturing the environment to capture the point cloud data of the environment.

6. The method of claim 4 or claim 5 wherein the step of generating a texture further includes the step of:

Calculating a resolution of the texture, the resolution of the texture being a mean distance calculated between each point in the point cloud data and a nearest neighboring point to each point in the point cloud data.

7. The method of any one of claims 1 to 6 wherein the step of generating a texture further includes the step of:

Determining the position of at least one pixel of the texture;

Identifying a corresponding at least one point in the point cloud data that has the same position as the position of at least one pixel of the texture.

8. The method of claim 7, further comprising the step of:

Interpolating a color of the at least one pixel of the texture, the color determined from at least one neighboring point to the corresponding at least one point in the point cloud that has the same position as the position of the at least one pixel of the texture.

9. A method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of:

Texturing the input mesh, texturing the input mesh comprising the steps of:

Generating a texture;

Applying the texture to at least one polygon of an input mesh, wherein the input mesh is derived from point cloud data of the environment, each point of the point cloud data of the environment belonging to a first station, at least a second point of the point cloud belonging to a second station; and

Correcting at least one of each point of the point cloud data of the first station using at least the at least a second point of the point cloud.

10. The method of claim 9 further comprising the step of:

Capturing the environment to capture the point cloud data of the environment.

11. The method of claim 10 wherein correcting each at least one point in the at least one station using at least two points from at least two stations further comprises the step of:

Subtracting the colors of each at least one point of the at least one station by the difference between the smoothed colors of the station and the smoothed average colors of all at least one station that include the at least one point.

12. The method of any one of claims 9 to 11 further comprising the step of:

Calculating a resolution of the texture, the resolution of the texture being a mean distance calculated between each point in the point cloud data and a nearest neighboring point to each point in the point cloud data.

13. The method of any one of claims 9 to 12 wherein the step of generating a texture further includes the steps of:

Determining the position of at least one pixel of the texture; and

Identifying a corresponding at least one point in the point cloud data that has the same position as the position of at least one pixel of the texture.

14. The method of any one of claims 9 to 13, further comprising the step of:

Interpolating a color of the at least one pixel of the texture, the color determined from at least one neighboring point to the corresponding at least one point in the point cloud that has the same position as the position of the at least one pixel of the texture.

15. A method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of:

Texturing the input mesh, the input mesh derived from point cloud data of the environment, the point cloud data obtained from a scanning device that has scanned the environment, the point cloud data further including image data, texturing the input mesh further comprising the steps of:

Color correcting the image data, color correcting the image data comprising the steps of:

Generating at least one image from at least one of the input mesh and the point cloud data; and

Transferring color information from at least one generated image to at least one image of the image data;

Generating a texture; and

Applying the texture to at least one polygon of the input mesh.

16. The method of claim 15 wherein generating at least one image from at least one of the input mesh and the point cloud data further comprises the steps of:

Generating at least one proxy texture for the input mesh;

Applying the at least one proxy texture to the input mesh to create a textured mesh;

Identifying at least one viewpoint for at least one image of the image data, and;

Rendering the textured mesh from the at least one viewpoint.

17. The method of claim 15 wherein generating at least one image from at least one of the input mesh and the point cloud data comprises the steps of:

Identifying at least one viewpoint for at least one image of the image data; and,

Rendering at least one point of the point cloud from the at least one viewpoint.

18. The method of claim 16 or claim 17 wherein transferring color information from at least one generated image to at least one image of the image data further comprises the steps of:

Averaging the difference between the generated image and the at least one image of the input data; and

Subtracting the averaged difference from the at least one image of the image data.

19. The methods of any of the claims 15 to 18 wherein generating a texture further comprises the steps of:

Identifying the viewpoint of at least one image of the image data; and projecting the image of at least one viewpoint to the input mesh.

20. The method of claim 19 wherein projecting the image of at least one viewpoint to the input mesh comprises the steps of:

Projecting at least one polygon of the input mesh onto the plane of the viewpoint;

Separating at least one projected polygon into at least one fragment;

Associating one fragment to at least one pixel of the at least one image of the image data and at least one pixel of the texture; and

Assigning the color of at least one pixel of the texture associated to at least one fragment using the color of the pixel of the at least one image of the image data associated to the fragment.

21. The method of claim 20 wherein the steps of projecting the image of at least one viewpoint to the input mesh are accelerated using a graphics processing unit (GPU).

22. The method of claim 19 wherein projecting the image of at least one viewpoint to the input mesh comprises the steps of:

Applying a shader to at least one polygon of the input mesh, the shader executing at least one of the following steps:

Generating screen space coordinates for at least one vertex of the at least one polygon;

Using the coordinates to map at least one image to the at least one polygon;

Moving at least one vertex of the input mesh such that its position on the render corresponds to its coordinates in UV space; and

Rendering the at least one triangle of the mesh onto at least one pixel of the texture.

23. The method of claim 22 wherein projecting the image of at least one viewpoint to the input mesh further comprises the steps of:

Rendering a depth map of at least one polygon of the input mesh;

Using a vertex shader, obtaining a distance from at least one vertex of at least one polygon of the mesh to a camera that obtained the image data;

Using a fragment shader, obtaining a distance of the depth mask at the coordinates of the fragment and comparing the distance to the distance of the fragment to the camera.

24. The method of any one of claims 16 to 23 wherein the step of generating a texture further comprises the step of:

Calculating a resolution of the texture, the resolution of the texture being a mean distance calculated between each point in the point cloud data and a nearest neighboring point to each point in the point cloud data.


25. The method of any one of claims 16 to 24 wherein the step of generating at least one proxy texture for the input mesh further comprises the step of:

Determining the position of at least one pixel of the texture; and

Identifying a corresponding at least one point in the point cloud data that has the same position as the position of at least one pixel of the texture.

26. The method of claim 25, further comprising the step of:

Interpolating a color of the at least one pixel of the texture, the color determined from at least one neighboring point to the corresponding at least one point in the point cloud that has the same position as the position of the at least one pixel of the texture.

27. The method of any one of claims 15 to 26, further comprising the step of:

Capturing the environment to capture the point cloud data of the environment.

28. The method of any one of claims 15 to 27 wherein correcting each at least one point in the at least one station using at least two points from at least two stations further comprises the step of:

Subtracting the colors of each at least one point of the at least one station by the difference between the smoothed colors of the station and the smoothed average colors of all at least one station that include the at least one point.

Description:
METHODS AND SYSTEM FOR RECONSTRUCTING TEXTURED MESHES FROM POINT CLOUD DATA

FIELD

The present invention relates to software and apparatuses for editing virtual three-dimensional spaces that are accessible through a computing device. More specifically, the present invention relates to methods and systems for extracting and editing data from virtual representations of three-dimensional spaces that are based on scanned visual data.

BACKGROUND

Real world environments can be digitally captured using many technologies, such as laser scanners, structured light sensors, or photogrammetry. The resulting captured data can often be visually represented in the form of three-dimensional point cloud data that is comprised of individual, colored points.

This type of visual representation is useful for engineering purposes but is often of little interest for other applications, such as three-dimensional visual content creation or three-dimensional visualization of scanned environments, which both typically use polygonal meshes to represent the scanned three-dimensional environment and physical assets located within the environment.

In these applications, scanned environments are therefore often processed into polygonal meshes using various methods that can be automatic or user driven. In order to ensure that the resulting polygonal mesh is an accurate representation of the scanned environment, it will be appreciated that the resulting polygonal mesh has to not only present the same geometric features as the environment, but also the same colors that are present in the environment. When an environment is digitally captured as point cloud data, photos of the environment taken by the scanning device are used to color the point cloud in a process that varies according to the scanning technology used. In some cases, the photos and their localizations can be included in the point cloud data file where the scanned point cloud data is stored.

On the other hand, it will be appreciated that colors can be represented in multiple ways on a polygonal mesh. Two known methods of representing colors in a polygonal mesh are vertex colors and textures.

When using vertex colors, it is understood that each vertex of each polygon that comprises the three-dimensional mesh can be assigned a color, and the colors inside each polygon can be interpolated from the colors of these boundary vertices of the polygon. In some situations, the vertex colors can be determined from the colors of the point cloud used to generate the mesh. For example, the colors of each vertex can be assigned to the color of the closest corresponding point in the point cloud.

As such, it will be appreciated that vertex colors are a simple way of coloring a mesh but require a high resolution of polygons (or in other words, must use a large number of polygons) in the mesh in order to provide sufficient spatial resolution of the colors that reflect the real-world environment that the mesh represents in a three-dimensional format.

When using textures, each three-dimensional polygon of the mesh is given corresponding coordinates in two-dimensional space, creating a mapping between the two-dimensional representation of the environment and the three-dimensional surface of the mesh. A two-dimensional texture can therefore be mapped onto the mesh, giving each point of each polygon of the mesh a color that corresponds to the corresponding pixel of the two-dimensional representation of the environment. In these situations, the pixels of the texture are usually colored according to the colors of the point cloud data of the environment, in an analogous manner as discussed above.

It will be appreciated that issues can arise when using the colors of the point cloud data to texture the mesh. First, depending on the method of capturing the real-world environment in a three-dimensional format, close points in the point cloud data can be colored by photos taken at different times and from different viewpoints, leading to visible noise in the colors of the point cloud data and, as a result, in the colors that are applied to the resulting texture. Secondly, the local density of points in the point cloud data can vary due to occlusion and distance from the sensor, leading to zones of low resolution of the colors applied to the texture.

In applications where the point cloud data file also contains photos of the environment, such photos can be used to provide color data that can be applied to the textures mapped to the resulting mesh, thereby avoiding loss of resolution in situations where there is a low density of points. However, it will be appreciated that changes of colorimetry between separate yet overlapping photos can create visible seams in the resulting texture.

Accordingly, there is a need for methods and systems for consistently, accurately and easily texturing a polygonal mesh created from colored point cloud data that, in some embodiments, can also contain image data.

BRIEF SUMMARY

The present invention provides methods and systems for texturing a polygonal mesh created from colored point cloud data that, in some embodiments, can contain image data. In at least one embodiment, it is contemplated that the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of simplifying the input mesh to result in a proxy input mesh, parametrizing the proxy input mesh to create a parameterized proxy mesh, the parametrized proxy mesh having at least one polygon, transferring the parametrized proxy mesh onto the input mesh, transferring the parametrized proxy mesh onto the input mesh comprising the step of defining at least one of the at least one polygon of the parametrized proxy mesh that overlaps with at least one corresponding polygon of the input mesh, texturing the input mesh, texturing the input mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the parametrized proxy mesh.

In at least one embodiment, it is contemplated that the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the input mesh, texturing the input mesh comprising the steps of generating a texture, applying the texture to at least one polygon of an input mesh, wherein the input mesh is derived from point cloud data of the environment, each point of the point cloud data of the environment belonging to a first station, at least a second point of the point cloud belonging to a second station, and correcting at least one of each point of the point cloud data of the first station using at least the at least a second point of the point cloud.

In at least one embodiment, it is contemplated that the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the input mesh, the input mesh derived from point cloud data of the environment, the point cloud data obtained from a scanning device that has scanned the environment, the point cloud data further including image data, texturing the input mesh further comprising the steps of color correcting the image data, color correcting the image data comprising the steps of generating at least one image from at least one of the input mesh and the input point cloud; and transferring color information from at least one generated image to at least one image of the input data, generating a texture, and applying the texture to at least one polygon of the input mesh.

DESCRIPTION OF THE FIGURES

The present invention will be better understood in connection with the following FIGURES, in which

FIGURE 1 is an illustration of a simplification and a parametrization process applied to an input mesh in accordance with at least one embodiment of the present invention;

FIGURE 2A is an illustration of a sample image containing a first station and a second sample image containing a second station in accordance with at least one embodiment of the present invention;

FIGURE 2B is an illustration of a color corrected point cloud image based on an original point cloud image in accordance with at least one embodiment of the present invention;

FIGURE 3A is a first image destined for color correction in accordance with at least one embodiment of the present invention;

FIGURE 3B is a second image that is obtained from the same viewpoint as the image shown in Figure 3A;

FIGURE 3C is a resulting color-corrected image in accordance with at least one embodiment of the present invention;

FIGURE 4 is an illustration of a suitable system for use in accordance with at least one embodiment of the present invention;

FIGURE 5 is an illustration of a suitable user device for use in accordance with at least one embodiment of the present invention;

FIGURE 6 is a diagram of a suitable method for applying a texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention;

FIGURE 7 is a diagram of a suitable method for applying a color corrected texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention; and

FIGURE 8 is a diagram of another suitable method for applying a color corrected texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

It is contemplated that the present invention can provide methods and systems for texturing a polygonal mesh created from a colored point cloud that can contain image data.

It is contemplated that the present methods and systems can avoid common problems such as noise in the texture in cases where texturing is done using colors obtained from the point cloud data, and seams in the textures when textures are generated from obtained images that are included in the point cloud data. Additionally, it will be appreciated that the process of texturing using images is greatly accelerated by the use of a graphics processing unit (GPU) and common rasterization techniques, as will be discussed in further detail herein.

In the context of the present invention it is contemplated that a “user device” can be any suitable electronic device with appropriate capabilities, including but not limited to a smartphone, laptop, tablet, desktop, server or a wearable device (such as a virtual, augmented or extended reality device), as required by the end user application of the present invention. As will be appreciated by the skilled person, a suitable user device will have both visual display means and user input means that permit a user to access, edit, navigate and manipulate an interactive and editable three-dimensional map as required.

As will be appreciated by the skilled person, a suitable user device will be in electronic communication with suitable data storage means as discussed herein. In some embodiments it is contemplated that the user device has local data storage means and in other embodiments it is contemplated that the user device additionally or alternatively will be in electronic communication with remotely located data storage means over an electronic communication network.

It is further contemplated that a suitable user device has suitable processing means in electronic communication with a suitable radio communication module such that the user device can be in electronic communication with a larger electronic communication network, as discussed herein.

To this point, it is contemplated that the present invention can be executed over an electronic communication network such as a local or wide area network as required by the end-user application. As such, it is contemplated that suitable data storage will be provided that can be located remotely (i.e. in the cloud and electronically accessed via typical wireless communication protocols) or in a locally oriented server stored onsite or in local storage on the user device and electronically accessed by way of standard wired or wireless communication protocols, as required by the end user application of the present invention.

It will further be contemplated and appreciated by the skilled person that in some embodiments a suitable user device can be adapted and configured to run a suitable graphics engine that is suitable for rendering and displaying an interactive and editable three-dimensional map in accordance with the present invention.

In other embodiments, it is contemplated that the present invention can be accessed by a suitable user device through a web browser having access to a suitable electronic communication network, such as the Internet or a local network.

In the context of the present invention, it will be appreciated that a suitable “scanner” or “scanning device” is a user device and includes any suitable three-dimensional scanning device that is adapted to convert visual data into a suitable format of digital data including point cloud data.

For example, a suitable scanning device includes but is not limited to a digital camera, a structured light scanner, a photogrammetry workflow, a structure-from-motion scanner, a simultaneous localization and mapping (SLAM) scanner, a light field camera and a LIDAR scanner, among other suitable and available three-dimensional scanning devices that will be readily appreciated by the skilled person. Other suitable devices could include a suitably equipped unmanned aerial vehicle (“UAV”, i.e.: a drone), as will be appreciated by the skilled person.

In the context of the present invention, it will be appreciated that a “scanner” or “scanning device” can include any suitable scanning device for capturing a visual representation of a real-world environment in a digital format which can include but is not limited to a polygonal mesh or as point cloud data, as will be readily appreciated by the skilled person.

It will further be appreciated that in some embodiments a suitable scanning device can be in electronic communication with suitable data storage means over the electronic communication network. In other embodiments it is contemplated that the suitable scanning device has suitable local storage, as will be appreciated by the skilled person.

In the context of the present invention, it will be appreciated that a “point cloud” or “point cloud data” will be understood to mean a collection of “data points” that are arranged in a visual manner to represent a three-dimensional space of a real-world environment, as will be readily understood by the skilled person. It will be further appreciated that a “point cloud” is comprised of a set of individual “data points” or “points” that collectively comprise the “point cloud” or “point cloud data”.

In the context of the present invention, it is contemplated that an “image” will be understood to mean a digital photographic image stored in any suitable digital format as “image data”.

In the context of the present invention, it will be appreciated that a “real world environment”, “scanned environment” or “environment” can include any interior or exterior space and the physical assets that are contained within that space, as will be appreciated by the skilled person.

In the context of the present invention, it will be appreciated that a “polygonal mesh” or “mesh” is a three-dimensional visual representation of a real-world environment that is comprised of a plurality of adjoining polygons, as will be readily appreciated by the skilled person.

It will be further appreciated that a “polygon” can include any polygon that is defined by edges, vertices and a surface, such as but not limited to a triangle. Moreover, it will be appreciated that a “polygon” and a “triangle” can be considered synonymously interchangeable in the context of the present invention.

In the context of the present invention, it will be appreciated that a “texture” means the overlying visual detail that is applied to the surface of a polygon that comprises the mesh, which can include color information, geometric detail, or any other visual detail that is required by the end user application of the present invention, as will be appreciated by the skilled person.

In at least one embodiment, the texturing procedure is comprised of the following steps:

The input polygonal mesh is mapped in two dimensions

In at least one embodiment, in order for a texture to be “wrapped around” or “applied to” a mesh of a plurality of polygons, a mapping between a two-dimensional image of the environment and the surface of each polygon that makes up the mesh must be established. In the present context, this step will be referred to as “mesh parametrization” or “parametrizing the mesh”. The skilled person will appreciate that creating a high-quality mesh parametrization, with few distortions and few “islands” (i.e. empty spaces absent of color) is a complex optimization problem that does not scale well for use in applications where the input mesh is particularly dense.

It will be appreciated that meshes derived from point clouds are usually very dense. In these embodiments, it is contemplated that the following sub-steps can be executed:

Simplification of the Input Mesh

First, it is contemplated that the input mesh can be decimated in order to reduce its number of polygons using any known method that will be appreciated by the skilled person, including but not limited to an edge collapse method with error quadrics and angle-based decimation. In the context of the present application, the decimated mesh will be referred as the “decimated proxy” or a “decimated proxy mesh”.
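By way of illustration only, a decimated proxy can be produced with an off-the-shelf quadric edge-collapse routine. The sketch below assumes the Open3D library and an arbitrary triangle budget; it is not meant to represent the specific decimation used by the invention.

import open3d as o3d

def build_decimated_proxy(input_mesh_path, target_triangles=20000):
    # Load the dense input mesh reconstructed from the point cloud.
    mesh = o3d.io.read_triangle_mesh(input_mesh_path)
    # Quadric-error edge collapse reduces the polygon count while keeping
    # the overall shape of the scanned environment.
    proxy = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)
    proxy.remove_degenerate_triangles()
    proxy.remove_unreferenced_vertices()
    return proxy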

Parametrization of the Decimated Proxy Mesh

The resulting decimated proxy mesh can then subsequently be parametrized using a known method, thereby optimizing the coverage of the two-dimensional polygons of the decimated proxy in the parametrized space.
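As a rough sketch of this step, assuming the xatlas Python bindings are available (any comparable charting or parametrization library could stand in):

import numpy as np
import xatlas

def parametrize_proxy(vertices: np.ndarray, faces: np.ndarray):
    # vmapping[i] is the index of the original proxy vertex that produced
    # output vertex i, new_faces indexes the remapped vertices, and uvs
    # holds their two-dimensional coordinates in the parametrized space.
    vmapping, new_faces, uvs = xatlas.parametrize(vertices, faces)
    return vmapping, new_faces, uvs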

It will be appreciated that following the decimation step, the irregularities of the mesh are “smoothed” and as a result few “islands” are created which helps avoid artefacts in the following step.

Reprojection of parametrized mesh on the input mesh

The parametrization of the input mesh can subsequently be created from the parametrization of the decimated proxy mesh that is created in the previous step. More specifically, the parametrized decimated proxy mesh can be projected on to the input mesh to result in a parametrized input mesh.

In at least one embodiment, the input mesh can be parametrized as follows:

For each polygon pi of the input mesh, find the polygon pp on the parametrized mesh such that pp is the polygon(s) that overlap(s) the most with pi. As a result, the coordinates of parametrization of each vertex v of pi are then the points that have the same barycentric coordinates with respect to pp in the parametrized space as the barycentric coordinates of v with respect to pp in the three-dimensional space.
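A minimal sketch of this transfer for a single vertex, assuming the best-overlapping proxy triangle pp has already been found and its vertex positions and UVs are passed in; the function names are illustrative only.

import numpy as np

def barycentric_coords(v, a, b, c):
    # Barycentric coordinates of v with respect to triangle (a, b, c),
    # computed from the standard dot-product formulation in 3D.
    v0, v1, v2 = b - a, c - a, v - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - w1 - w2, w1, w2])

def transfer_uv(v, proxy_tri_xyz, proxy_tri_uv):
    # proxy_tri_xyz: (3, 3) positions of pp; proxy_tri_uv: (3, 2) its UVs.
    w = barycentric_coords(v, *proxy_tri_xyz)
    # Same barycentric weights, applied in the parametrized (UV) space.
    return w @ proxy_tri_uv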

It is further contemplated that the measure of overlap between two polygons can be defined as follows: p1 overlaps p2 with ratio r if p1 intersects p2, where r is the average between the ratio of the area of p1 projected onto p2 and the area of p2, and the ratio of the area of p2 projected onto p1 and the area of p1.
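Written out, with area(·) denoting polygon area and proj(x onto y) denoting the projection of polygon x onto polygon y, this measure can be read as:

r(p1, p2) = 1/2 · ( area(proj(p1 onto p2)) / area(p2) + area(proj(p2 onto p1)) / area(p1) )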

With reference to Figure 1, it will be appreciated that the original input mesh can be simplified by a decimation process in order to result in a decimated proxy mesh that is simpler than the original input mesh. Moreover, it is contemplated that if a suitable parametrization process is applied to the original input mesh, a relatively complex projected parametrization mesh results, while on the other hand if a suitable parametrization process is applied to the decimated proxy mesh, the resulting proxy parametrization mesh derived from the decimated proxy mesh is far simpler than the projected parametrization mesh of the original input mesh, as can be seen in Figure 1.

Texturing the Parametrized Mesh using the Point Cloud Colors

Next, the colors provided in the point cloud data can be used to create a texture for the resulting parametrized mesh. A number of common ways to obtain point cloud data and how colors are determined and assigned are discussed below:

Embodiments using a Laser Scanner:

In embodiments where a laser scanner or a structured light scanner is used to obtain the initial point cloud data, the procedure is as follows:

a) The scanning device is placed at a given position in the environment;

b) The environment is captured using the corresponding scanner in order to generate a three-dimensional point cloud representation of the environment;

c) A 360 degree photo of the environment is captured using a suitable device and this photo is used to color the points taken in step (b); and

d) Steps (a) to (c) are repeated in different locations until the environment is scanned to a predetermined degree of coverage.

Embodiments Using Photogrammetry/Videogrammetry:

In other embodiments where photogrammetry or videogrammetry methods are used to obtain a three-dimensional representation of the real-world environment, the procedure can be as follows:

a) Images of the environment are captured from multiple viewpoints using a suitable device until a predetermined degree of coverage is obtained for the real-world environment;

b) The images are processed into a point cloud representation of the environment; and

c) For each point in the point cloud data that comprises the point cloud representation, a corresponding image is associated with the point and the assigned color of the point is the color of the corresponding pixel on the corresponding image.

Assuming that a point cloud representation of the environment has been obtained using one of the methods described above, it is further contemplated that:

• A “station” can be defined as a subset of the point cloud data that is all the points of the point cloud data that have been colored using the same image;

• As such, the point cloud data can be considered a collection of stations;

• It will be further appreciated that close points may belong to different stations;

• Since all points of a given station have been colored by the same image, there is no more noise in the colors of the points of a given station than there is in the colors of the original image; and

• From one station to another, the images may represent the same parts of the environment, but with different lighting environments, causing noise in the colors of the point cloud.

With reference to Figure 2A, if a first station 1 is defined having no visible noise and a second station 2 is similarly defined with no visible noise, it follows that the differences between each of these separately defined stations will result in visible noise when these two stations are combined, as can be seen in Figure 2A and as will be appreciated by the skilled person. As such, it will be appreciated that each station needs to be separately corrected in order to remove the noise in point colors when the two stations are combined.

Point cloud color correction

As discussed previously, it will be appreciated that point cloud data is comprised of multiple overlapping stations. If:

• C_i(p) is the color of the point cloud from the station i taken at position p;

• Let [f(p)] be an average of a function f in a neighborhood of the position p; and

• Let G(p) be the ground truth color of the point cloud at the position p. G represents the color of the desired point cloud. In other words, G is supposed to be close to C_i and the goal is to correct C_i so that it equals G at all points.

This relationship can be expressed as: C_i(p) = G(p) + N_i(p), where N_i is the noise caused by the changes in the lighting when the station was colored.

In some embodiments, it can be assumed that N has low frequency since the lighting that caused it was also at a low frequency. Furthermore, the skilled person will appreciate that, in practice, there may be other factors that contribute to N (such as moving objects), but these factors will not have a strong impact over the present results for the purposes of the present invention.

If it is considered that the local average of the point cloud color at each point is a good approximation of the local average color of G at each point, then:

[G(p)] ≈ (1/n) · Σ_i [C_i(p)], with n being the number of stations.

This approximation is also correct in practice.

It also follows that:

[C_i(p) − G(p)] = [N_i(p)].

Moreover, if it is assumed that the noise functions are low frequency (or in other words, the given environment is small enough):

[C_i(p)] − [G(p)] = N_i(p).

As a result:

N_i(p) = [C_i(p)] − (1/n) · Σ_j [C_j(p)]

Therefore, it will be appreciated that all stations can be color corrected by subtracting, from the colors of each point p of a particular station, the difference between the smoothed colors of the station and the smoothed average colors of all stations at point p.
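A minimal sketch of this correction, assuming point positions, colors in the range [0, 1], a per-point station label, and a fixed-radius neighborhood average as the smoothing operator; the radius value and the use of the all-points local mean as a stand-in for the station-averaged smooth are simplifying assumptions.

import numpy as np
from scipy.spatial import cKDTree

def smooth_colors(points, colors, radius):
    # Local average [C(p)] of colors within `radius` of each point.
    tree = cKDTree(points)
    out = np.empty_like(colors, dtype=float)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        out[i] = colors[idx].mean(axis=0)
    return out

def correct_stations(points, colors, station_ids, radius=0.25):
    # [C_i(p)]: smoothed colors computed within each station separately.
    smoothed_own = np.empty_like(colors, dtype=float)
    for sid in np.unique(station_ids):
        mask = station_ids == sid
        smoothed_own[mask] = smooth_colors(points[mask], colors[mask], radius)
    # Approximation of [G(p)]: smoothed colors over all stations together.
    smoothed_all = smooth_colors(points, colors, radius)
    noise = smoothed_own - smoothed_all          # N_i(p)
    return np.clip(colors - noise, 0.0, 1.0)     # corrected colors in [0, 1]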

With reference to Figure 2B, a corrected point cloud image is illustrated demonstrating how the present invention can be used to eliminate noise from an image derived from the original point cloud data.

Generating the texture

Next, it is contemplated that a texture can be created and mapped onto the parametrized mesh. It is contemplated that, in at least one embodiment, the resolution of the texture can be inferred from the mean distance between each point and the nearest neighboring point.
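For example, the mean nearest-neighbor spacing can be estimated as sketched below; how that spacing is then converted into a pixel count is left open here.

import numpy as np
from scipy.spatial import cKDTree

def mean_point_spacing(points: np.ndarray) -> float:
    tree = cKDTree(points)
    # k=2: the nearest hit at distance 0 is the query point itself, so the
    # second column holds the distance to the true nearest neighbor.
    dists, _ = tree.query(points, k=2)
    return float(dists[:, 1].mean())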

Moreover, it will be appreciated that each pixel of a texture falling inside a polygon of the parametrized mesh has a corresponding position in three dimensions. This position can be used to interpolate the colors of neighboring points to further reduce the noise and provide a smooth result, especially when the point cloud data cannot be colored using the method presented above due to a lack of information concerning each station that makes up the point cloud data.

When choosing the set of points used for each pixel, it is contemplated that a space partitioning method can be used (such as but not limited to a kd-tree or an octree), yielding a complexity on the order of R · K · C(N), where R is the resolution (number of pixels) of the texture, K is the average number of points used to color a pixel, and C(N) is the complexity of selecting a point within a distance of a target in a set of N points.

With the use of a space partitioning structure, it will be appreciated that C(N) is near constant and thus the complexity of the procedure scales linearly with the number of pixels in the texture and is quasi-constant with respect to the number of points in the point cloud.
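A sketch of the per-pixel lookup with a kd-tree as the space-partitioning structure; the inverse-distance weighting and the neighborhood size k are illustrative choices, not taken from the original text.

import numpy as np
from scipy.spatial import cKDTree

def color_pixels(pixel_positions, points, colors, k=8):
    # pixel_positions: (R, 3) 3D positions of the texture pixels that fall
    # inside a polygon of the parametrized mesh; colors: (N, 3) in [0, 1].
    tree = cKDTree(points)
    dists, idx = tree.query(pixel_positions, k=k)
    # Inverse-distance weights smooth out residual noise between stations.
    w = 1.0 / np.maximum(dists, 1e-9)
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('rk,rkc->rc', w, colors[idx])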

Embodiments where Point Cloud Data includes Image Data

The following steps may only apply in embodiments where the point cloud data contains image data.

Color Correction of Image Data

As discussed above, it is contemplated that each point of the point cloud can be colored by at least one image.

It will be appreciated that different images may share parts of the environment but differ on their representation of these shared parts. For example, the colorimetry of two adjacent or overlapping images often does not match, resulting in “noise” (i.e.: mismatching) in the colors of the point cloud, as some points in the point cloud may be colored by one image while neighboring points are colored by another image, when all points under consideration could have been colored by either of the two overlapping images.

Accordingly, this step creates a texture that blends the colors from a neighborhood of points, effectively averaging the potential multitude of images coloring the neighborhood of points. In at least one embodiment, it is contemplated that a neighborhood of points can be defined as a set of at least two points of the point cloud data that are spatially close to one another, as will be readily appreciated by the skilled person.

A similar approach can be applied when directly using the available images to texture the mesh, as it is contemplated that each pixel of the texture can be calculated as an average of the corresponding pixels from each image that contains a particular represented visual detail. However, it will be appreciated that errors in the relative position and orientation of the images would cause the represented visual detail to be blurred in the resulting texture.

In order to maximize the quality of the texture, it is preferred that each pixel can only be colored by one image, and in order to avoid neighboring pixels coming from different images having a different colorimetry, it is contemplated that the images must be corrected such that:

• For any two images I1 and I2, if any group of pixels in I1 represents the same visual detail as a group of pixels in I2, then the two groups of pixels must share the same colorimetry;

• There must be no visible discontinuity in the colorimetry introduced by the correction; and

• There must be no loss of visual detail introduced by the correction.

Property (1) states that the visual details shared between images are corrected to look the same on all images, while property (2) makes sure that any corrections are not local to shared visual details but also affect their neighborhoods in order to avoid introducing visible seams.

It will be appreciated that these properties may be combined into a cost function subject to minimization in order to balance these considerations, but in the context of the present invention a simpler approach can be adopted as follows. First, a texture Tgt is created such that each pixel p is an average of corresponding pixels in the images that contain the visual detail on which p is mapped on the mesh.

Then, Tgt is rendered from the same positions, orientations, and camera properties as each image to be corrected.

The result is a set of images that comply with property (1) and may comply with (2) and (3). It is further contemplated that the render may be performed using rasterization on a GPU to accelerate the process, as will be readily appreciated by the skilled person.

Next, images can be grouped into couples (I1, I2) such that I1 is from the set of provided images as seen in Figure 3A, and I2 is the render of the mesh from the same position, orientation, and camera properties as I1, as can be seen in Figure 3B. In other words, I1 is the original image that needs to be color corrected and I2 is a render of the environment that is used to color correct I1.

In these embodiments, the colorimetry of I2 is considered ground truth, and I1 is modified such that the local average of the difference between I1 and I2 is 0, satisfying property (1). Moreover, once this constraint is smoothly applied, it will be appreciated that after I1 is modified, I1 also complies with properties (2) and (3).

As such, this operation produces modified images as can be seen in Figure 3C, such that every pixel of the texture(s) can be colored by any image containing the corresponding visual detail without any visible seams caused by colorimetry differences between adjacent or overlapping images that include the same visual detail.
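A minimal sketch of this correction for one image pair, where the “local average” is approximated by a Gaussian blur; the blur and its sigma are assumptions, since the original text does not specify the smoothing operator.

import numpy as np
from scipy.ndimage import gaussian_filter

def color_correct(i1: np.ndarray, i2: np.ndarray, sigma: float = 25.0):
    # i1: captured image to correct, i2: render from the same viewpoint,
    # both float arrays in [0, 1] with shape (H, W, 3).
    diff = i1.astype(float) - i2.astype(float)
    # Smooth the difference per channel; a low-frequency correction avoids
    # visible discontinuities while preserving the fine detail of I1.
    smoothed = np.stack(
        [gaussian_filter(diff[..., c], sigma=sigma) for c in range(3)],
        axis=-1)
    # After subtraction the local average of (I1 - I2) is close to zero.
    return np.clip(i1 - smoothed, 0.0, 1.0)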

Creation of a Texture Using the Corrected Image Data

It will be appreciated that these corrected images may be used to create a texture without noticeable seams. In at least one embodiment, it is contemplated that a texture can be created using a revert process of rasterization.

When rendering a three-dimensional mesh into a two-dimensional image by way of rasterization, a viewpoint is chosen. Every polygon that comprises the mesh can then be projected onto a view plane of the resulting two-dimensional image. These projected polygons can then be subdivided into fragments that only cover at most one pixel of the rendered image and subsequently the color of each fragment of the rendered image can be evaluated and matched to the color of the underlying pixel covered by the fragment, thereby filling the rendered image with the colors of the underlying input mesh.

In order to color the mesh using an already existing image, a viewpoint can be selected that corresponds to the viewpoint that the existing image was taken from relative to the mesh. The polygons of the mesh can then be projected onto that viewpoint and subdivided into fragments. The pixels of the existing image are considered ground truth, and each fragment can subsequently be colored by the underlying pixel it lies on top of. Each fragment then gives its colors to the pixels covering it in the texture.

It will be appreciated that the revert rasterization process presented herein is not standard and therefore may not be implemented in consumer graphics processing units (GPUs). As a result, it is contemplated that certain intermediate steps may be taken in order to conform to usual pipelines and benefit from accelerated implementations. For example, a shader may be used to give every point p of the mesh a color (r, g, b) such that r and g are the parametrized coordinates of p, and b is proportional to the distance between p and the viewpoint.

In these embodiments, for each corrected image Ic, the mesh can then be rendered into an image Ir from the viewpoint of Ic. Due to the shader applied, each pixel of Ir represents a mapping from the same pixel in Ic to the parametrized space, and as a result a mapping between the texture space and Ic can thus be established. Pixels of the texture can then be directly mapped from those in Ic.
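A sketch of this final mapping step, assuming the shader render has been read back as a per-pixel (u, v) array together with a mask of pixels actually covered by the mesh; depth testing against the depth map and any filtering are omitted.

import numpy as np

def splat_image_to_texture(ir_uv, valid, ic, texture, coverage):
    # ir_uv: (H, W, 2) u, v in [0, 1] read back from the shader render;
    # valid: (H, W) boolean mask of pixels covered by the mesh;
    # ic: (H, W, 3) corrected image; texture: (Th, Tw, 3); coverage: (Th, Tw).
    th, tw = texture.shape[:2]
    u = np.clip((ir_uv[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
    v = np.clip((ir_uv[..., 1] * (th - 1)).round().astype(int), 0, th - 1)
    # Each covered screen pixel writes the colour of Ic into its UV texel.
    texture[v[valid], u[valid]] = ic[valid]
    coverage[v[valid], u[valid]] = 1.0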

Turning to Figure 4, at least one embodiment of a system for use in connection with the present invention is illustrated. In this embodiment, it is contemplated that a user device 2, a scanning device 4 and data storage 6 are in electronic communication by way of an electronic communication network 8. It is contemplated that user device 2 has visual display means and user interface means, as discussed herein. It is further contemplated that a scanning device 4 can be, for example a digital camera, a LIDAR scanner or a UAV-based scanner and that data storage 6 is a remotely located server.

It is further contemplated that user device 2, scanning device 4 and data storage 6 are in electronic communication with each other through an electronic communication network 8 that is a wireless communication network operated through remote servers, also known as a cloud-based network, although other arrangements such as hard-wired local networks are also contemplated as discussed herein.

Turning to Figure 5, at least one embodiment of a suitable user device 2 is illustrated. In this embodiment, it is contemplated that user device 2 has a suitable radio communication module 3, local data storage 5, input means 7, display means 9 and processing means 11. It is contemplated that each of radio communication module 3, local data storage 5, input means 7, display means 9 and processing means 11 are all in electronic communication with one another through a suitable bus 13.

In this way, it is contemplated that a suitable user device 2 is in electronic communication with a suitable electronic network by way of radio communication module 3 in electronic communication with processing means 11.

Turning to Figure 6, a method in accordance with at least one embodiment of the present invention is depicted. In this embodiment, the method starts and proceeds to the step where the input mesh is simplified 20 in order to create a simplified proxy input mesh. It will be appreciated that the input mesh is a three-dimensional representation of a real-world environment that can be captured in a number of ways, as discussed herein.

It is contemplated that the input mesh is comprised of a plurality of polygons that in some embodiments are triangles. Moreover, it is contemplated that the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device.

It is contemplated that the input mesh can be simplified to result in a proxy input mesh in a number of ways, including applying an edge collapse method to the input mesh and applying an angle-based decimation method to the input mesh, as will be appreciated by the skilled person. In this way, it is contemplated that the input mesh can be simplified in order to perform further processes on the resulting simplified proxy input mesh, as discussed in further detail herein.

Next, it is contemplated that the proxy input mesh is parametrized 22 to create a proxy parametrized mesh. It is contemplated that the proxy parametrized mesh is similarly comprised of a plurality of polygons that in some embodiments are triangles. Once the proxy input mesh is parametrized to create a parametrized proxy mesh it is contemplated that this resulting parametrized proxy mesh is transferred onto the input mesh 24. This step further involves the step of defining at least one polygon of the parametrized proxy mesh that at least somewhat overlaps with a corresponding underlying polygon of the input mesh 26. In some embodiments, this can further include the step of calculating a ratio of the degree of overlap between the overlapping polygon of the parametrized proxy mesh and the corresponding underlying polygon of the input mesh.

Once the parametrized proxy mesh has been transferred onto the input mesh 24 by defining the overlapping polygons between the parametrized proxy mesh and the input mesh, the input mesh can be subsequently textured 28 with a generated texture 30.

In some embodiments where the input mesh is derived from suitably captured point cloud data, it is contemplated that the step of generating the texture 30 further involves the step of calculating a resolution of the texture. It is contemplated that a resolution of the texture can be calculated by calculating a mean distance between each of the points that are included in the point cloud data and the nearest neighboring point in the point cloud data to each point in the point cloud data under consideration.

In some embodiments where the input mesh is derived from suitably captured point cloud data, it is further contemplated that each pixel of the generated texture can be color corrected based on color data obtained from the point cloud data. In these embodiments, it is contemplated that a color can be interpolated for each pixel in the texture by obtaining a color from a point in the point cloud data that has a similar or identical position to the particular pixel of the texture under consideration. In this way, it is contemplated that each pixel of the generated texture can be color corrected based on the color of a point in the point cloud data that positionally corresponds to the pixel under consideration.

Finally, it is contemplated that the generated texture can be applied to the input mesh 32 to result in a textured input mesh. In this way, a method is provided for texturing an input mesh where the input mesh can be simplified to result in a proxy input mesh, parametrized to result in a parametrized proxy mesh, and then subsequently textured using a generated texture that, in some embodiments, can be generated based on point cloud data that corresponds to the pixels of the generated texture.

Turning to Figure 7, a method in accordance with another embodiment of the present invention is depicted. In this embodiment, it is contemplated that the method starts and proceeds to the step where an input mesh is to be textured 28 with a generated texture 30. It will be appreciated that the input mesh is a three-dimensional representation of a real-world environment that can be captured in a number of ways, as discussed herein.

It is contemplated that the input mesh is comprised of a plurality of polygons that in some embodiments are triangles. Moreover, it is contemplated that the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device.

In this embodiment, it is contemplated that the point cloud data further includes image data. It will be appreciated that image data includes at least one image that visually corresponds to a portion of the point cloud data. Moreover, in this embodiment it is contemplated that each point in the point cloud data belongs to a station. As discussed herein, it is contemplated that a station is a subset of points in the point cloud that have an associated color that has been derived from the same image.

As such, it is contemplated that the generated texture 30 can subsequently be applied to at least one polygon of the input mesh 132. It is further contemplated that the colors of a point in the point cloud data can be corrected 134 using a first point that belongs to a first station and a second point that belongs to a second station.

More specifically, in at least one embodiment it is contemplated that the colors of a point in the point cloud data can be corrected 134 by subtracting, from the color of the point under consideration (belonging to a first station), the difference between the smoothed colors of the neighborhood of that point under consideration (belonging to the first station) and the smoothed average colors of all stations that include a neighborhood of the point under consideration.

In other words, it is contemplated that a single point in the point cloud may be close to other points from different stations and may correspond to different images each having different coloration.

Therefore, it is first contemplated that all colors of all points in a first station can be smoothed to result in a smoothed color for a particular point that belongs to that particular station. Secondly, it is contemplated that average colors can be determined based on the average color of all points in the neighborhood of that particular point, across all stations present in the neighborhood.

As such, it will be appreciated that a color can be corrected by taking the original color of the point and subtracting from that original color the difference between the smoothed color (derived from a single station) and the smoothed average color of that point (derived from a plurality of stations).

In some embodiments where the input mesh is derived from suitably captured point cloud data, it is contemplated that the step of generating the texture 30 further involves the step of calculating a resolution of the texture. It is contemplated that a resolution of the texture can be calculated by calculating a mean distance between each of the points that are included in the point cloud data and the nearest neighboring point in the point cloud data to each point in the point cloud data under consideration.

In some embodiments where the input mesh is derived from suitably captured point cloud data, it is further contemplated that each pixel of the generated texture can be color corrected based on color data obtained from the point cloud data. In these embodiments, it is contemplated that a color can be interpolated for each pixel in the texture by obtaining a color from a point in the point cloud data that has a similar or identical position to the particular pixel of the texture under consideration.

In this way, it is contemplated that each pixel of the generated texture can be color corrected based on the color of a point in the point cloud data that positionally corresponds to the pixel under consideration.

Turning to Figure 8, a method in accordance with another embodiment of the present invention is depicted. In this embodiment, it is contemplated that the method starts and proceeds to the step where an input mesh is to be textured 28. It will be appreciated that the input mesh is a three-dimensional representation of a real-world environment that can be captured in a number of ways, as discussed herein.

It is contemplated that the input mesh is comprised of a plurality of polygons that in some embodiments are triangles. Moreover, it is contemplated that the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device. In this embodiment, it is contemplated that the point cloud data can further include image data. It will be appreciated that image data includes at least one image that visually corresponds to a portion of the point cloud data.

In this embodiment, it is contemplated that the image data contained in the point cloud data can be color corrected 200. In this step, the image data can be color corrected 200 by generating a new image 202 based on the input mesh or the point cloud data that the input mesh is based on.

It is next contemplated that color information can be transferred 204 from the newly generated image to a corresponding image that is contained in the image data contained in the point cloud data.

Subsequently, a texture can be generated 30 and the texture can be applied to the polygons of the input mesh 32.

In some embodiments, it is contemplated that the new image can be generated 202 by generating a proxy texture for the input mesh and subsequently applying this proxy texture to the input mesh in order to create a textured mesh.

Next, it is contemplated that a viewpoint can be identified that corresponds to an image that is included in the image data of the point cloud data. Finally, the textured mesh can be rendered from a viewpoint that corresponds to the identified viewpoint of that image included in the image data of the point cloud data.

In at least one embodiment, it is contemplated that the new image can be generated 202 based on the input mesh or the point cloud data that the input mesh is based on. In some embodiments, this image can be generated by identifying a viewpoint of an image that is included in the image data of the point cloud data and rendering the points of the point cloud from a viewpoint corresponding to the viewpoint of the identified image.
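By way of non-limiting illustration only, rendering the points of the point cloud from the viewpoint of the identified image may be sketched as follows, assuming a simple pinhole camera model with a world-to-camera rotation R, translation t, and intrinsic matrix K; none of these quantities, nor the absence of a depth test, is prescribed by this description.

    import numpy as np

    def render_points(points, point_colors, R, t, K, width, height):
        """Project point cloud points into an image from a given viewpoint.

        points: (N, 3) world coordinates; R (3, 3) and t (3,) give the
        world-to-camera pose; K (3, 3) is the camera intrinsic matrix.
        Returns a (height, width, 3) image; purely illustrative, with no
        depth test and no hole filling.
        """
        cam = points @ R.T + t                          # world -> camera space
        in_front = cam[:, 2] > 0
        cam, colors = cam[in_front], np.asarray(point_colors)[in_front]
        proj = cam @ K.T
        px = (proj[:, :2] / proj[:, 2:3]).astype(int)   # perspective divide
        image = np.zeros((height, width, 3), dtype=colors.dtype)
        valid = ((px[:, 0] >= 0) & (px[:, 0] < width) &
                 (px[:, 1] >= 0) & (px[:, 1] < height))
        image[px[valid, 1], px[valid, 0]] = colors[valid]
        return image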

In some embodiments, color information can be transferred 204 from the newly generated image to a corresponding image contained in the image data of the point cloud data by averaging the difference between the colors of the newly generated image and the colors of that corresponding image.

Once this averaged difference in color between the new image and the original image is obtained, this averaged difference of color can be subtracted from each color in the original image included in the image data in order to correct the color of the original image included in the image data.
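By way of non-limiting illustration only, this averaged-difference transfer may be sketched as follows; the variable names are illustrative, and the ordering of the difference (generated image minus original image) follows one reading of the description above.

    import numpy as np

    def transfer_color(generated_image, original_image):
        """Correct the colors of the original captured image using the image
        generated from the mesh or point cloud, as described above.

        Both images are (H, W, 3) arrays covering the same viewpoint.
        """
        # Average difference between the newly generated image and the original.
        averaged_difference = np.mean(
            generated_image.astype(float) - original_image.astype(float),
            axis=(0, 1),
        )
        # Subtract the averaged difference from each color of the original image.
        return original_image - averaged_difference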

In some embodiments, it is contemplated that the texture can be generated 30 by identifying a viewpoint from an image included in the image data and projecting this image on the input mesh.

It is contemplated that projecting this image on the input mesh can include projecting a polygon of the input mesh onto the plane of the identified viewpoint and separating this projected polygon into a plurality of fragments. In some embodiments, it is contemplated that this process can be accelerated by using a graphics processing unit (GPU), as will be understood by the skilled person.

Next, a fragment can be associated with a pixel of the original image included in the image data and a pixel of the generated texture. A color can subsequently be assigned to the pixel of the texture associated with the fragment using a color that has been derived from the pixel of the original image associated with the fragment.
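By way of non-limiting illustration only, this final assignment step may be sketched as follows; the rasterization that produces the fragments (which, as noted, is typically performed on a GPU) is omitted, and the data layout of fragments is an assumption made for the illustration.

    def bake_fragment_colors(fragments, source_image, texture):
        """Assign colors to pixels of the generated texture from pixels of
        the original image, one fragment at a time.

        fragments: iterable of ((tu, tv), (ix, iy)) pairs, where (tu, tv)
        are integer texel coordinates and (ix, iy) are integer pixel
        coordinates in the source image; source_image and texture are
        (H, W, 3) arrays.
        """
        for (tu, tv), (ix, iy) in fragments:
            # The color of the texture pixel is derived from the associated
            # pixel of the original image.
            texture[tv, tu] = source_image[iy, ix]
        return texture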

In some embodiments, it is contemplated that projecting the image of at least one viewpoint onto the input mesh further involves applying a shader to a polygon of the mesh. It is further contemplated that the shader can execute a number of additional steps depending on the particular end user application.

For example, it is contemplated that the shader can generate screen space coordinates for at least one vertex of the polygon of the input mesh. It is also contemplated that the shader can use these coordinates to map an image onto the polygon. It is also contemplated that the shader can move a vertex of the input mesh such that its position on the render corresponds to coordinates in UV space. It is also contemplated that the shader can render a polygon of the mesh onto at least one pixel of the generated texture.
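By way of non-limiting illustration only, the vertex-level behavior described above may be restated on the CPU as follows; an actual implementation would typically be a GPU vertex shader, and the 4x4 view-projection matrix and texture size are assumptions made for the illustration.

    import numpy as np

    def bake_vertex(vertex_position, vertex_uv, view_projection, texture_size):
        """Conceptual restatement of the shader steps described above.

        Returns the screen-space coordinates used to map the image onto the
        polygon, and the output position of the vertex, moved so that the
        polygon is rendered at its UV-space location in the generated texture.
        """
        # Screen-space coordinates of the vertex under the image's viewpoint.
        clip = view_projection @ np.append(vertex_position, 1.0)
        screen_xy = clip[:2] / clip[3]

        # Output position in UV space, so that the polygon rasterizes onto the
        # corresponding pixels of the generated texture.
        output_position = np.asarray(vertex_uv) * texture_size
        return screen_xy, output_position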

It is contemplated that projecting the image of a viewpoint onto the input mesh can further include rendering a depth map of a polygon of the input mesh. It is also contemplated that a vertex shader can be used to obtain the distance between the vertices of a polygon of the input mesh and the camera that captured the image data. It is also contemplated that a fragment shader can be used to obtain the distance between the coordinates of a fragment and the camera that captured the image data, and to compare this distance to the distance in the depth map at the UV coordinates of the fragment.
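By way of non-limiting illustration only, such a comparison against the depth map commonly serves as an occlusion test along the following lines; the tolerance epsilon is an assumption and is not prescribed by this description.

    def fragment_visible(fragment_depth, depth_map_value, epsilon=1e-3):
        """Return True if the fragment's distance to the camera matches the
        depth map value at its UV coordinates within a small tolerance,
        i.e. the fragment is not occluded from the camera that captured
        the image data.
        """
        return abs(fragment_depth - depth_map_value) <= epsilon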

It is contemplated that at least one proxy texture for the input mesh can be generated by determining the position of a pixel of the texture and subsequently identifying a corresponding point in the point cloud data that has the same position as the position of the pixel of the texture.

In some embodiments where the input mesh is derived from suitably captured point cloud data, it is contemplated that the step of generating the texture further involves the step of calculating a resolution of the texture. In these embodiments, it is contemplated that the resolution of the texture can be calculated as the mean distance between each point in the point cloud data and the nearest neighboring point in the point cloud data to that point.

In at least one embodiment it is contemplated that the colors of a point in the point cloud data can be corrected by subtracting the color of the point under consideration (belonging to a first station) from the difference between the smoothed colors of the neighborhood of that point under consideration (belonging to the first station) and the smoothed average colors of all stations that include points from the neighborhood of the point under consideration.

In other words, it is contemplated that a single point in the point cloud may be close to other points from different stations and may correspond to different images each having different coloration.

Therefore, it is first contemplated that all colors of all points in a first station can be smoothed to result in a smoothed color for a particular point that belongs to that particular station. Secondly, it is contemplated that an average color can be determined for that particular point by averaging the colors of that point as derived from all stations to which the point under consideration belongs.

As such, it will be appreciated that a color can be corrected by taking the original color of the point and subtracting that original color from the difference between the smoothed color (derived from a single station) and the smoothed average color of that point (derived from a plurality of stations).
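By way of non-limiting illustration only, this per-point correction may be sketched as follows. The sign convention used here is one reading of the description, chosen so that the single-station bias (the smoothed single-station color minus the smoothed cross-station average) is removed from the original color; the literal wording above could also be read with the operands reversed.

    import numpy as np

    def correct_point_color(original_color, smoothed_station_color,
                            smoothed_all_station_color):
        """Per-point color correction across scanning stations.

        original_color: color of the point as captured by its own station.
        smoothed_station_color: smoothed color of the point's neighborhood
            within that single station.
        smoothed_all_station_color: smoothed average color of the neighborhood
            across all stations that contribute points to it.
        """
        bias = (np.asarray(smoothed_station_color)
                - np.asarray(smoothed_all_station_color))
        return np.asarray(original_color) - bias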

In this way, the present invention provides methods and systems for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the mesh, texturing the mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the mesh. The embodiments described herein are intended to be illustrative of the present methods and systems and are not intended to limit the scope of the present invention. Various modifications and changes consistent with the description as a whole and which are readily apparent to the person of skill in the art are intended to be included. The appended claims should not be limited by the specific embodiments set forth in the examples but should be given the broadest interpretation consistent with the description as a whole.