

Title:
GENERATING A SPATIAL MODEL OF AN INDOOR STRUCTURE
Document Type and Number:
WIPO Patent Application WO/2020/049112
Kind Code:
A1
Abstract:
According to an embodiment, a computer implemented method for generating a spatial model (500) of an indoor structure from a point cloud (100) comprises the steps of identifying (602) shapes in the point cloud (100); enveloping (603) the shapes by a bounding volume (300); intersecting (604) the bounding volume (300) with planes determined by the shapes, thereby obtaining structural faces (400, 410, 420); and constructing (606) the spatial model (500) by the structural faces.

Inventors:
COUDRON INGE (BE)
HANNAERT JURGEN (BE)
Application Number:
PCT/EP2019/073727
Publication Date:
March 12, 2020
Filing Date:
September 05, 2019
Assignee:
3FROG NV (BE)
International Classes:
G06T7/11; G06T7/143
Foreign References:
US20170316115A1, 2017-11-02
US20170263052A1, 2017-09-14
US20170148211A1, 2017-05-25
Other References:
NAN LIANGLIANG ET AL: "PolyFit: Polygonal Surface Reconstruction from Point Clouds", 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 22 October 2017 (2017-10-22), pages 2372 - 2380, XP033283101, DOI: 10.1109/ICCV.2017.258
L. NAN, 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), pages 2372 - 2380
Attorney, Agent or Firm:
IP HILLS NV (BE)
Claims:
CLAIMS

1. A computer implemented method for generating a spatial model (500) of an indoor structure from a point cloud (100), the method comprising the steps of:

- identifying (602) shapes in the point cloud (100);

- enveloping (603) the shapes by a bounding volume (300);

- intersecting (604) the bounding volume (300) with planes determined by the shapes, thereby obtaining structural faces (400, 410, 420); and

- constructing (606) the spatial model (500) by the structural faces.

2. The computer implemented method according to claim 1, further comprising the step of:

- adding a plane defining a segment of an outer surface of the bounding volume (300) to the structural faces when a distance between the segment and any one of the shapes exceeds a predefined threshold.

3. The computer implemented method according to any one of the claims 1 to 2, wherein the bounding volume (300) is a convex polyhedron, and wherein the intersecting further comprises intersecting the convex polyhedron at a respective vertex (701) when the respective vertex (701) is within a predefined accuracy distance (700).

4. The computer implemented method according to claim 3, wherein the intersecting further comprises intersecting the convex polyhedron by planes parallel to the planes determined by the shapes at the predefined accuracy distance at both sides of the planes.

5. The computer implemented method according to any one of the claims 3 to 4, further comprising the step of:

- ordering vertices of the convex polyhedron in a lexicographic order;

and wherein the intersecting is further performed in the lexicographic order.

6. The computer implemented method according to any one of claims 3 to 5, wherein the intersecting further comprises rejecting a structural face when the structural face is duplicate.

7. The computer implemented method according to claim 1 or 2 wherein the enveloping comprises determining a three-dimensional cell complex as the bounding envelope.

8. The computer implemented method according to claim 7 wherein the intersecting comprises recursively splitting the three-dimensional complex by the respective shapes thereby obtaining spatially connected cells.

9. The computer implemented method according to claim 8 wherein the constructing comprises selecting a subset of the spatially connected cells that lay inside the indoor structure and deriving therefrom the structural faces.

10. The computer implemented method according to any one of the preceding claims, wherein the constructing further comprises selecting structural faces by weighting for a structural face a number of points in the structural face over the number of points in the point cloud (100), and wherein the spatial model (500) is constructed (606) by the selected structural faces.

11. The computer implemented method according to claim 10, wherein the weighting further comprises weighting for a structural face a number of boundary edges adjacent to non-coplanar structural faces over the number of edges.

12. The computer implemented method according to any one of the claims 10 to 11, wherein the weighting further comprises weighting for a structural face a number of points of the point cloud (100) faced by the structural face over a surface covered by the bounding volume (300).

13. The computer implemented method according to any of the claims 10 to 12, wherein the weighting further comprises weighting for a structural face the number of points in the structural face over the total number of points in the point cloud (100), the number of boundary edges adjacent to non-coplanar structural faces over the number of edges and/or the number of points of the point cloud faced by the structural face over a surface covered by the bounding volume.

14. The computer implemented method according to any one of the claims 10 to 13, wherein the selecting further comprises imposing a 2-manifold constraint and/or an orientation constraint.

15. The computer implemented method according to any one of the claims 10 to 14 and according to claim 9 wherein the constructing further comprises selecting structural faces by further weighting for a cell a number of points inside over the number of points in the point cloud (100).

16. The computer implemented method according to any one of the preceding claims, the method further comprising the step of down sampling the point cloud prior to the identifying.

17. A data processing system comprising means for carrying out the method of any of the preceding claims.

18. A computer program product comprising instructions which, when the program is executed on a computer, cause the computer to carry out the method of any of the claims 1 to 16.

19. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method of any of the claims 1 to 16.

Description:
GENERATING A SPATIAL MODEL OF AN INDOOR STRUCTURE

Technical Field

[01] The present invention relates to a system and method for generating from scanned data a spatial model of an indoor structure, such as an interior of a building.

Background

[02] An interior of a building comprises a variety of different spaces and surfaces, like, for example, rooms, hallways, window-panes, floors and ceilings. In the spaces of a building, a diversity of furniture and other accessories are placed for reasons of comfort, taste, and cosiness, or even structural necessity. Examples are couches, chairs, armatures, beams and many other elements, whether fixed to the building structure or not. The interior of a building is thus a unique composition depending on, among others, the purpose of the building, preferences of inhabitants, and architectural styling.

[03] To construct a spatial or three-dimensional model of the interior of a building, one can use a paper floor-map originating from, for example, an architect, and digitalize it using a computer-aided design program. A problem however is that such floor-maps may differ from the real-life situation due to changes during the construction phase, or when the building is changed during renovation and/or rebuilding operations.

[04] A floor-plan may also not be present at all. Thus, to either verify an existing floor-plan or draft one when absent, one could site-measure the interior of the building. This is however a time-intensive task and sensitive to errors since it needs to be performed manually. Furthermore, site measuring may not be practical or may even be impossible, for example when the dimensions of rooms are very complex and/or spacious.

[05] Another way to construct a spatial model of an interior of a building is using a three-dimensional scanning device configured to scan the interior, thereby obtaining the spatial model in a direct way. Such a method is disclosed in US20170263052A1. Herein, an interior of a building is scanned, thereby obtaining a point cloud. Next, based on this point cloud, a spatial model is generated. However, a drawback is that the generated spatial model comprises a variety of accessories, as already discussed. The spatial model is thus a distorted image since these accessories may easily be replaced over time. A way to overcome this drawback is to empty the building completely, which is, however, impractical and time consuming.

[06] Points in a point cloud from a scanned room may also be labelled as either building points for the structure or object points for the accessories, thereby making a distinction between the structure and the accessories. The object points may also be labelled as clutter. By making a distinction between these points, a floor plan may be constructed based on the building points, and subsequently wall samples may be generated for such a floor plan, thereby obtaining a spatial model. Such a method is disclosed in US20170148211A1. A drawback, however, is that the final spatial model is generated by extruding the floor plan with respect to a ceiling height. As a result, simple vertical walls are assumed, which may not correspond to the real-life situation. Furthermore, another drawback is that points may wrongly be labelled as clutter and thus removed, resulting in a spatial model wherein construction elements are missing.

[07] Another way of constructing a spatial model based on a point cloud is disclosed in PolyFit: Polygonal Surface Reconstruction from Point Clouds, L. Nan et al., published in 2017 IEEE International Conference on Computer Vision (ICCV), pages 2372-2380. Herein, in contrast to the just discussed method, clutter is not eliminated from the point cloud, but a spatial model is generated based on a selection of structural faces from a large number of candidate faces. The structural faces are then assembled, thereby obtaining the spatial model. However, a drawback is that degenerate faces are generated, wherein a degenerate face is to be regarded as a face that is not connected with an adjacent face. Because of this, a hypothesis must be made between such a face and a neighbouring one not adjacent to it to generate a watertight mesh. This may then result in a mesh which does not correspond to the real-life situation. Furthermore, another drawback is that a long and very thin structural face may be selected, resulting in a large optimization problem that is hard to solve. Again, this leads to spacious models likewise not corresponding to the real-life situation.

[08] It is therefore an object of the present invention to alleviate the above drawbacks and to provide an improved solution for generating a spatial model of an indoor structure from a point cloud in an efficient manner.

Summary of the Invention

[09] This object is achieved, in a first aspect, by a computer implemented method for generating a spatial model of an indoor structure from a point cloud, the method comprising the steps of:

- identifying shapes in the point cloud;

- enveloping the shapes by a bounding volume;

- intersecting the bounding volume with planes determined by the shapes, thereby obtaining structural faces; and

- constructing the spatial model by the structural faces.

[10] The point cloud is obtained from a three-dimensional scanning device and comprises points from, for example, a single room or multiple rooms in a building. The scanning device is thus placed within the one or more rooms and generates the point cloud, wherein the points comprise coordinates in, for example, a cartesian coordinate system, or any other coordinate system suitable for representing three-dimensional points. It should further be understood that the point cloud may also be obtained by merging, for example, a set of two-dimensional images originating from a consumer good, such as a smartphone. In other words, the point cloud is considered as an input for the computer implemented method and may be obtained via a variety of methodologies.

[11] As a first step, shapes are identified in the point cloud. Shapes may be two- and/or three-dimensional and are, for example, planar segments or planar primitives, polygons, lines, cylinders and/or spheres. The identifying may, for example, be performed by a random sample consensus, RANSAC, method through which subsets of points from the point cloud fitting a model of geometric primitives are detected. For each of these subsets a best fit shape according to that geometric primitive is identified.
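Purely as an illustration of the identifying step, a minimal Python/NumPy sketch of a RANSAC plane detection is given below. The function name, the distance threshold and the iteration count are assumptions chosen for the example and do not form part of the described method.

import numpy as np

def ransac_plane(points, dist_threshold=0.02, iterations=1000, rng=None):
    """Fit a single plane to a point cloud with RANSAC.

    Returns ((normal, d), inlier_mask) where the plane is n.x + d = 0 and
    inlier_mask marks the points within dist_threshold of that plane.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        # Sample three distinct points and derive the candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Keep the plane with the largest number of inliers.
        inliers = np.abs(points @ normal + d) < dist_threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers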

[12] Next, in a second step, the identified shapes from the previous step are enveloped by a bounding volume. In other words, the bounding volume encloses or envelops all the shapes, or even all the points in the point cloud. The bounding volume is, for example, a volume covered by a globe or by an ellipsoid.

[13] Then, in a subsequent step, the bounding volume is intersected with planes determined by the shapes, wherein the planes have a zero curvature or a spherical geometry depending on the shapes determining the planes. In other words, the bounding volume is cut in pieces by planes defined by the shapes, wherein these pieces are intersections of the volume on the one hand and the planes defined or determined by the shapes on the other hand. The intersections are thus two-dimensional shapes, whether curved or not, and are further labelled as structural faces. As a result, a set of structural faces is thus obtained.
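As a hedged sketch of this intersecting step, the routine below intersects one plane with an axis-aligned bounding box and returns the resulting polygonal structural face. The axis-aligned box, the tolerance value and the function name are simplifying assumptions of the example; the method described above allows more general bounding volumes and curved shapes.

import numpy as np
from itertools import combinations

def plane_box_face(normal, d, box_min, box_max, eps=1e-9):
    """Intersect the plane n.x + d = 0 with an axis-aligned box.

    Returns the vertices of the resulting polygonal face, ordered by
    angle around the plane normal (empty array if no intersection).
    """
    normal = np.asarray(normal, float)
    # The 8 box corners and the 12 edges connecting them.
    corners = np.array([[x, y, z] for x in (box_min[0], box_max[0])
                                  for y in (box_min[1], box_max[1])
                                  for z in (box_min[2], box_max[2])], float)
    edges = [(a, b) for a, b in combinations(range(8), 2)
             if np.sum(corners[a] != corners[b]) == 1]
    pts = []
    for a, b in edges:
        da, db = normal @ corners[a] + d, normal @ corners[b] + d
        if da * db <= 0 and abs(da - db) > eps:   # edge crosses the plane
            t = da / (da - db)
            pts.append(corners[a] + t * (corners[b] - corners[a]))
    if not pts:
        return np.empty((0, 3))
    pts = np.unique(np.round(np.array(pts), 9), axis=0)
    # Order the intersection points by angle around the plane normal.
    centre = pts.mean(axis=0)
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < eps:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    ang = np.arctan2((pts - centre) @ v, (pts - centre) @ u)
    return pts[np.argsort(ang)]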

[14] Finally, the spatial model of the indoor structure is constructed by using these structural faces. A way to do so is, for example, using solely or mainly structural faces located at the outside, thus structural faces which do not face another structural face on one side. Alternatively, or supplementary, a structural face may intersect another structural face, such that the spatial model is constructed by intersecting the structural faces mutually, wherein the spatial model corresponds to a watertight mesh of these intersected structural faces.

[15] Advantageously, since the spatial model is constructed from a point cloud obtained by scanning a room, it is ensured that the model corresponds to the real-life situation without emptying the room. Next, there is no need to have a floor plan in advance. Secondly, the point cloud only has to comprise coordinates of the points, so no additional data is needed per point. Furthermore, there is also no need to make a distinction between structural points and clutter in the point cloud. Since shapes are identified in the point cloud, such a distinction is made in an indirect way, thereby avoiding that structural points are wrongly labelled as clutter.

[16] Next, by enveloping the shapes identified in the point cloud, a first source of clutter is eliminated, for example clutter originating from a window opening when the room is scanned. Through this window opening points may be detected which do not form a shape. These points will thus not be enveloped or enclosed. Besides, by enveloping the shapes, it is also ensured that a watertight spatial model is generated.

[17] In a point cloud of a regular room that is scanned, a shape may be detected at every side of the room, wherein a shape may correspond to a part of at least a wall, a ceiling, and/or a floor. This way, a shape may be identified at each side of the room, even when only a part of a shape is visible. Subsequently, by intersecting the bounding volume with planes determined by the shapes, even the parts of the surfaces of the walls, the ceiling and the floor that are not visible are taken into account. The bounding volume is thus intersected and cut into pieces, wherein the intersections are labelled as structural faces and used to construct the spatial model. Moreover, the type and form of the bounding volume may be adapted to the type of the room that is scanned such that the structural faces are obtained in an efficient and fast manner. Finally, by using the structural faces to construct the spatial model, clutter inside the room may also be eliminated by, for example, using the structural faces on the outside. This way, a distinction between structural elements and accessories is made in an indirect manner, thereby reducing the risk that points are wrongly labelled as clutter.

[18] According to an embodiment, the computer implemented method further comprises the step of:

- adding a plane defining a segment of an outer surface of the bounding volume to the structural faces when a distance between the segment and any one of the shapes exceeds a predefined threshold.

[19] Differently formulated, when at a segment of the outer surface no shape is detected, a plane defining this segment is added to the structural faces such that it may be used to generate the spatial model. A shape may be absent when, for example, boundary walls of the scanned room are not captured due to the presence of clutter, door openings, window openings, and other sources causing such a problem. Thus, by adding a plane defining this segment, this problem is alleviated, and it is further ensured that a watertight mesh of the spatial model is generated.

[20] According to an embodiment, the bounding volume is a convex polyhedron, and wherein the intersecting further comprises intersecting the convex polyhedron at a respective vertex when the respective vertex is within a predefined accuracy distance.

[21] Differently formulated, the bounding volume envelops the shapes by a convex polyhedron, wherein a convex polyhedron is to be regarded as a solid in three dimensions with flat polygonal faces, straight edges and corners or vertices. An example of a convex polyhedron is thus a cube or a bar. This way, a bounding volume may be defined by its vertices and by which other vertices they are connected to. Next, by imposing that the polyhedron is intersected at a vertex when said vertex is within a predefined accuracy distance with regard to the intersecting plane, the intersecting step is performed in a fast and efficient manner. The predefined accuracy distance may, for example, be determined by defining a sphere with a vertex as centre point and the predefined accuracy distance as radius. This way, a floating-point precision problem is avoided. Furthermore, by enveloping the shapes by a convex polyhedron, the resulting structural faces are polygons, or again convex polyhedra when intersected with planes, such that the spatial model is constructed in a straightforward manner.

[22] According to an embodiment, the intersecting further comprises intersecting the convex polyhedron by planes parallel to the planes determined by the shapes at the predefined accuracy distance at both sides of the planes.

[23] In other words, the planes determined by the shapes are offset at both sides, such that the convex polyhedron is intersected with two parallel planes. As a result, the obtained structural faces have a thickness corresponding to twice the predefined accuracy distance, since for each plane determined by a shape the bounding volume or convex polyhedron is intersected twice. Advantageously, the spatial model constructed by these structural faces comprises sides with a thickness, which is more representative of a real-life situation.

[24] Further, by intersecting the convex polyhedron by parallel planes, a vertex lying between these planes will be considered as the intersecting point, such that in this way the predefined accuracy distance is also taken into account.

[25] According to an embodiment, the computer implemented method further comprises the step of:

- ordering vertices of the convex polyhedron in a lexicographic order;

and wherein the intersecting is performed in the lexicographic order.

[26] This way, by ordering the vertices of the convex polyhedron in a lexicographic order, the vertices are intersected by the planes in a consistent manner, namely by taking into account the ordering of the vertices. Differently formulated, an intersection is performed by first comparing x-coordinates in an ordered manner, then y-coordinates in an ordered manner, and finally z-coordinates in an ordered manner. Alternatively, another coordinate system, such as a spherical coordinate system, may be used to compare coordinates in a systematic order. By performing the intersecting step in this way, another source of limited floating-point precision is tackled as well, since comparing points in one direction may give a different result than comparing the points in the opposite direction. Hence, a numerically robust method of intersecting the bounding volume is obtained. The ordering may be performed temporarily, each time prior to an intersecting step.
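For illustration, the lexicographic ordering can be realised as in the short sketch below; np.lexsort and Python tuple comparison are used as stand-ins for the comparison described above.

import numpy as np

def lexicographic_order(vertices):
    """Order vertices by x-, then y-, then z-coordinate.

    np.lexsort sorts by its last key first, so the keys are passed in
    reverse order (z, y, x).
    """
    vertices = np.asarray(vertices)
    return vertices[np.lexsort((vertices[:, 2], vertices[:, 1], vertices[:, 0]))]

def lex_less(p, q):
    """Compare two vertices lexicographically via tuple comparison."""
    return tuple(p) < tuple(q)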

[27] According to an embodiment, the intersecting further comprises rejecting a structural face when the structural face is duplicate.

[28] When intersecting the bounding volume by the planes determined by the shapes, or by planes parallel thereto, duplicate structural faces may be obtained. This may occur, for example, when different identified shapes determine a same plane, or when parallel planes are used to intersect the bounding volume. Through the latter intersecting step, two structural faces are obtained, one facing the inside of the scanned room and one facing the outside of the scanned room, which are also considered as duplicates. The structural faces facing the outside of the scanned room may then be rejected. Advantageously, by rejecting duplicate faces and/or faces pointing to the outside when a parallel face is pointing to the inside, a unique set of structural faces is obtained, such that the spatial model is constructed in a straightforward and unique manner.
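A minimal sketch of such a duplicate check is given below; representing each structural face by its supporting plane and the rounding tolerance used for the canonical key are assumptions of the example.

import numpy as np

def reject_duplicate_faces(faces, decimals=6):
    """Keep one structural face per supporting plane.

    Each face is assumed to carry a unit 'normal' and an offset 'd'; a
    plane and its flipped representation (-n, -d) count as the same plane.
    """
    unique, seen = [], set()
    for face in faces:
        n, d = np.asarray(face["normal"], float), float(face["d"])
        if n[np.argmax(np.abs(n))] < 0:          # canonical orientation
            n, d = -n, -d
        key = tuple(np.round(np.append(n, d), decimals))
        if key not in seen:
            seen.add(key)
            unique.append(face)
    return unique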

[29] According to an embodiment, the enveloping comprises determining a three-dimensional cell complex as the bounding envelope.

[30] The intersecting may then be performed by recursively splitting the three-dimensional complex by the respective shapes, thereby obtaining spatially connected cells.

[31] The constructing may then be performed by selecting a subset of the spatially connected cells that lay inside the indoor structure and deriving therefrom the structural faces.

[32] In other words, an initial cell is created as the bounding envelope. The cell is then split into two cells by a first structural shape. Thereafter, the cells are further split recursively by the next shape. The spatial model is then constructed by selecting the cells that lay inside the indoor structure. Because of the dependencies between cells in such a cell complex, two spatially connected cells always share a face. If only one of two such spatially connected cells is selected during the selecting of the subset, then the shared face defines a structural face.

[33] The advantage of the cell complex approach is that the optimization problem becomes simpler because the number of variables is smaller, i.e. there are fewer cells than there are faces.

[34] According to an embodiment, the constructing further comprises selecting structural faces by weighting for a structural face a number of points in the structural face over the number of points in the point cloud, and wherein the spatial model is constructed by the selected structural faces.

[35] Thus, when constructing the spatial model by the structural faces, a subset of the set of structural faces may be selected by which the model is constructed. A first way to select structural faces for the subset is by evaluating how many points of the point cloud are within a structural face weighted over the total number of points in the point cloud. This way, it is ensured that structural faces comprising at least a minimum amount of inside points or inliers over the total number of points in the point cloud are selected, since the likelihood that they represent a surface is higher compared to structural faces comprising only a few inliers compared to the total number of points in the point cloud.

[36] According to an embodiment, the weighting further comprises weighting for a structural face a number of boundary edges adjacent to non-coplanar structural faces over the number of edges.

[37] Alternatively, or additionally, in a second criterion, the weighting is based on a number of boundary edges, which is a measure for the complexity of the resulting constructed spatial model. An edge is a boundary edge if the two structural faces adjacent to this edge do not lie in a same plane, thus are non-coplanar. Thus, the fewer boundary edges the selected structural faces comprise, the less complex the resulting constructed spatial model will be, which is achieved by selecting through weighting the structural faces having a small ratio of the number of boundary edges over the total number of edges. These selected structural faces are then used, where applicable combined with the structural faces selected through the number of inliers, to construct the spatial model. Furthermore, this way, the type of the scanned room or rooms may be taken into account as well.

[38] According to an embodiment, the weighting further comprises weighting for a structural face a number of points of the point cloud faced by the structural face over a surface covered by the bounding volume.

[39] A third selecting criterion is selecting structural faces which are located at the outside, but close enough to the inside to still cover the interior of the scanned room. To balance between these two constraints, structural faces are selected that are facing at least a number of points on one side, which is the side facing the inside of the room, thus the midpoint of the point cloud. This is achieved by weighting for a structural face the number of points it is facing over the surface covered by the bounding volume.

[40] According to an embodiment, the weighting further comprises weighting for a structural face the number of points in the structural face over the number of points in the point cloud, the number of boundary edges adjacent to non-coplanar structural faces over the number of edges and/or the number of points of the point cloud faced by the structural face over a surface covered by the bounding volume.

[41] In other words, the three weighting criteria for the selecting may be combined by weighting the different parameters relative to each other. These parameters are the number of points in the structural face over the number of points in the point cloud, the number of boundary edges adjacent to non-coplanar structural faces over the number of edges, and the number of points of the point cloud faced by the structural face over a surface covered by the bounding volume. Advantageously, this way the type of the scanned room may be taken into account. For example, if a room comprises a lot of corners and sides, the number of boundary edges adjacent to non-coplanar structural faces over the total number of edges may be given a higher weighting to take into account the complexity of the room. Another example is the functionality of the room that is scanned, which may likewise be taken into account. If, for example, a working place is scanned comprising few accessories, the number of points or inliers in the structural face may be given a higher priority, compared to the situation wherein, for example, a living room is scanned comprising a variety of furniture and other accessories.
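A sketch of such a combined weighting is given below. The per-face counts, the relative weights and the sign convention (a higher score meaning a more likely selection) are assumptions of the example and would in practice be tuned to the type of room.

def face_score(face, total_points, bounding_area,
               w_data=1.0, w_complexity=0.5, w_coverage=0.5):
    """Combine the three weighting ratios described above for one face.

    'face' is assumed to carry counts gathered beforehand:
      n_inliers        - points of the point cloud lying in the face
      n_boundary_edges - edges adjacent to non-coplanar faces
      n_edges          - total number of edges of the face
      n_faced_points   - points of the point cloud faced by the face
    """
    data = face["n_inliers"] / total_points
    complexity = face["n_boundary_edges"] / max(face["n_edges"], 1)
    coverage = face["n_faced_points"] / bounding_area
    # Higher data and coverage ratios favour selection, a higher
    # complexity ratio penalises it.
    return w_data * data - w_complexity * complexity + w_coverage * coverage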

[42] According to an embodiment, the selecting further comprises imposing a 2-manifold constraint and/or an orientation constraint.

[43] Besides or combined with selecting structural faces based on the number of points in the structural face over the number of points in the point cloud, the number of boundary edges adjacent to non-coplanar structural faces over the number of edges, and the number of points of the point cloud faced by the structural face over a surface covered by the bounding volume, the selecting may further comprise imposing constraints. A first constraint is a 2-manifold constraint to guarantee that the constructed spatial model is 2-manifold, in other words, that the spatial model has a surface corresponding to the real-life situation of the scanned room. Another constraint that may be imposed is an orientation constraint. The orientation constraint ensures that a structural face is selected with an edge in one direction, while an adjacent face is selected with the edge in the opposite direction. The union of the selected structural faces then results in a final constructed spatial model.

[44] When performing the cell complex approach the 2-manifold constraint or orientation constraint is inherently included in the optimization.

[45] When performing the cell complex approach, the constructing may further comprise selecting structural faces by further weighting for a cell a number of points inside over the number of points in the point cloud (100). Due to its internal representation, the cell complex approach thus allows adding an extra weighting criterion, i.e. to take into account the number of points lying within a certain cell under consideration.

[46] According to an embodiment, the method further comprises the step of down sampling the point cloud prior to the identifying.

[47] Thus, prior to the identifying of shapes through, for example, a RANSAC method, the point cloud is first filtered. The filtering is performed by down sampling the input point cloud using a voxel grid filter at a predefined resolution. This way, by this first filtering step, a part of the clutter may already be eliminated such that the subsequent steps are performed in a fast and efficient manner.
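As an illustration of this down sampling step, a NumPy sketch of a voxel grid filter is given below; replacing the points in each voxel by their centroid and the 5 cm default resolution are assumptions of the example.

import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Down sample a point cloud with a voxel grid filter.

    All points falling in the same voxel of size voxel_size (here 5 cm)
    are replaced by their centroid.
    """
    points = np.asarray(points, float)
    keys = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((inverse.max() + 1, 3))
    np.add.at(centroids, inverse, points)
    return centroids / np.bincount(inverse)[:, None]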

[48] According to a second aspect, the invention relates to a data processing system comprising means for carrying out the method according to the first aspect.

[49] According to a third aspect, the invention relates to a computer program product comprising instructions which, when the program is executed on a computer, cause the computer to carry out the method according to the first aspect.

[50] According to a fourth aspect, the invention relates to a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method according to the first aspect.

Brief Description of the Drawings

Some example embodiments will now be described with reference to the accompanying drawings.

[51] Fig. 1 illustrates a point cloud obtained from a three-dimensional scanning device;

[52] Fig. 2 illustrates planar primitives extracted from the point cloud as illustrated in Fig. 1;

[53] Fig. 3 illustrates a bounding volume enveloping detected shapes within the point cloud as illustrated in Fig. 1;

[54] Fig. 4 illustrates the bounding volume illustrated in Fig. 3 intersected by planes;

[55] Fig. 5 illustrates a generated spatial model from the point cloud illustrated in Fig. 1;

[56] Fig. 6 illustrates an apparatus comprising means configured to generate a spatial model of an indoor structure from a point cloud;

[57] Fig. 7A illustrates clipping a convex polygon by a plane;

[58] Fig. 7B illustrates an ordering of faces along an edge;

[59] Fig. 8 illustrates a computer system that can be configured to execute one or more embodiments of the method for generating a spatial model; and

[60] Fig. 9 illustrates a cloud-interface to execute one or more embodiments of the method for generating a spatial model.

Detailed Description of Embodiment(s)

[61] Fig. 1 illustrates a point cloud from a scanned structure. The point cloud 100 may be obtained from, for example, a three-dimensional scanning device suitable to scan an indoor structure and to derive therefrom a point cloud such as point cloud 100. The point cloud 100 may, for example, be represented in a three-dimensional cartesian coordinate system 101, or any other coordinate system suitable for representing a three-dimensional point cloud 100. The point cloud 100 comprises points such as point 102. Each point is thus identifiable through its coordinates. From the point cloud 100, which is considered as an input, a spatial model will be generated therefrom. Fig. 6 illustrates an apparatus comprising means configured to generate such a spatial model, wherein the point cloud 100 is considered to be an input 601 for the apparatus 610. The input 601 point cloud 100 may be a point cloud of an entire room, including furniture and other objects.

[62] In a first step, shapes are identified 602 in the point cloud 100 by extracting planar primitives from the input point cloud 100. To identify 602 or detect planar primitives in the three-dimensional point cloud 100, a random sample consensus (RANSAC) method may be used. Firstly, the point cloud 100 may be down sampled using a voxel grid filter at a resolution of, for example, 5 cm. As a result, a set of planar patches P = {p_i} is obtained, wherein each patch p_i comprises a set of points which lie within a distance ε from a best-fit plane through these points. Planar primitives identified or extracted from the point cloud 100 are illustrated in Fig. 2 by, for example, line 200 and plane 201.
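By way of a hedged sketch, the planar patch set P = {p_i} can be built by repeatedly running the ransac_plane() sketch given earlier and removing the inliers of each detected plane; the thresholds and the dictionary layout are assumptions of the example.

import numpy as np

def extract_planar_patches(points, dist_threshold=0.02,
                           min_inliers=200, max_patches=20):
    """Build the set of planar patches P = {p_i} by repeated RANSAC.

    After each detected plane its inliers are removed from the cloud and
    the search is repeated until too few inliers remain.
    """
    remaining = np.asarray(points, float)
    patches = []
    for _ in range(max_patches):
        if len(remaining) < 3:
            break
        plane, inliers = ransac_plane(remaining, dist_threshold)
        if plane is None or inliers.sum() < min_inliers:
            break
        patches.append({"normal": plane[0], "d": plane[1],
                        "points": remaining[inliers]})
        remaining = remaining[~inliers]
    return patches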

[63] Next, the identified 602 shapes such as line 200 and plane 201 in the point cloud 100 are enveloped 603 by a bounding box as illustrated in Fig. 3 by bounding box 300. In this illustrative example, the bounding box 300 is a cuboid comprising vertices, such as vertices 301 and 302 connected by edge 303.

[64] The bounding box 300 enveloping 603 the point cloud 100 may be defined such that the planes of the bounding box 300 are close to the limits of the point cloud 100 such as illustrated by bounding box 300. Alternatively, a bounding box may also envelop the point cloud 100 wherein the planes thereof are located at a distance from points located at the edge of the point cloud 100.

[65] When the bounding box 300 is close to the edges of the point cloud 100, planes may be added to the set of planar patches as obtained in the previous step 602. For example, when the set of planar patches does not comprise a plane that is located within a predefined distance from a segment of the outer surface of the bounding box 300, the plane defining the segment of the bounding box 300 is added to the set of planar patches. This way, it is ensured that a watertight mesh will be generated.
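A minimal sketch of this completion step is given below; the representation of the six bounding box planes as (normal, d) pairs and the distance threshold are assumptions of the example.

import numpy as np

def add_missing_box_planes(patches, box_planes, max_distance=0.2):
    """Add a bounding box plane to the patch set when no detected shape
    lies within max_distance of that segment of the outer surface."""
    completed = list(patches)
    for normal, d in box_planes:
        normal = np.asarray(normal, float)
        # Smallest distance from any patch point to this bounding box plane.
        close = any(np.min(np.abs(p["points"] @ normal + d)) < max_distance
                    for p in patches if len(p["points"]))
        if not close:
            completed.append({"normal": normal, "d": float(d),
                              "points": np.empty((0, 3))})
    return completed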

[66] The just mentioned absence of such a plane may occur when a boundary wall of the scanned structure is undetected due to the presence of clutter, door openings, etc.

[67] A plane of the bounding box 300 may also be close to a plane of the set of planar patches, such that both planes may be considered as equal. To verify if a plane of the set is close to a plane of the bounding box, firstly angles between the detected planes and the bounding box planes are determined. Next, a plane having an angle with a plane of the bounding box lower than a predefined threshold θ_t and comprising a predefined number of points n_t lying on the bounding box plane is considered to be equal to the bounding box plane. The thresholds θ_t and n_t may, for example, be set at 10° and at 20% of |p_i| respectively, wherein |p_i| denotes the number of inliers of the planar patch p_i. When a plane of the bounding box does not have a corresponding plane in the set, it may be added to the set as already highlighted.
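For illustration, the comparison between a detected plane and a bounding box plane can be sketched as below; the on-plane tolerance is an assumption, while the 10° and 20% defaults follow the example values mentioned above.

import numpy as np

def equals_box_plane(patch, box_normal, box_d,
                     angle_threshold_deg=10.0, inlier_fraction=0.2,
                     on_plane_tolerance=0.02):
    """Check whether a detected planar patch can be considered equal to a
    bounding box plane, using an angle and an inlier-count threshold."""
    points = np.asarray(patch["points"], float)
    if len(points) == 0:
        return False
    n = np.asarray(patch["normal"], float)
    box_normal = np.asarray(box_normal, float)
    # Angle between the two plane normals (orientation independent).
    angle = np.degrees(np.arccos(np.clip(abs(n @ box_normal), 0.0, 1.0)))
    if angle > angle_threshold_deg:
        return False
    # Fraction of the patch inliers lying (almost) on the box plane.
    on_plane = np.count_nonzero(np.abs(points @ box_normal + box_d)
                                < on_plane_tolerance)
    return on_plane >= inlier_fraction * len(points)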

[68] Next, the bounding box 300 is intersected 604 by the planes determined by the shapes in the set. This is illustrated in Fig. 4. To account for numerical robustness, an intersection point is calculated by ordering the points. For example, as illustrated in Fig. 7A, when calculating an intersection between line 702 and a line from point 704 to point 705, a different outcome may be obtained when the intersection is calculated by going from point 705 to point 704, due to the limited precision of floating points. Thus, to avoid numerical issues, the points are ordered lexicographically. This means that first x-coordinates are compared, if they are equal y-coordinates are compared, and finally if they are also equal z-coordinates are compared.

[69] Secondly, another source of numerical issues may arise when checking if a point lies in front of or behind a plane, which is tackled by slicing with two planes lying parallel to said plane at, for example, a distance ε. For example, although vertex 701 is not intersected by the plane that may be identified by line 702, that plane lies within the distance ε 700 from vertex 701, such that intersection 703 is performed at vertex 701.
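A short sketch of the tolerant point-plane classification is given below; the return labels and the default accuracy distance are assumptions of the example.

import numpy as np

def classify_vertex(vertex, normal, d, eps=0.005):
    """Classify a vertex against the plane n.x + d = 0 using the two
    parallel planes at distance eps on both sides of it.

    A vertex inside the eps slab is reported as 'on' and treated as the
    intersection point itself; otherwise 'front' or 'back' is returned.
    """
    signed = float(np.asarray(normal, float) @ np.asarray(vertex, float) + d)
    if abs(signed) <= eps:
        return "on"
    return "front" if signed > 0 else "back"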

[70] According to a first example embodiment, the bounding volume 300 is intersected 604 with all the detected planes resulting in intersections like, for example, intersections 401-405 and 420. As a result, structural faces are obtained, such as structural faces 400, 410 and 411. From this set of structural faces, a selection 605 is made from which the spatial model is constructed 606.

[71] When, for example, duplicate faces are obtained, only one unique structural face replaces those duplicate faces. The set of selected structural faces may also comprise faces that were previously added when the set of planar patches does not comprise a plane that is located within a predefined distance from a plane of the bounding box 300.

[72] The selection 605 of the structural faces is performed in such a way that it best describes the geometry of the scanned structure, while simultaneously guaranteeing that the resulting spatial model is 2-manifold and oriented. This may be achieved by formulating the selection 605 as a binary labelling problem.

[73] Firstly, each structural face, represented as f_i, is described by two variables x_i and x_ir, wherein the subscript r signifies reverse, meaning that either the face is selected in its counter-clockwise orientation, i.e. x_i = 1, or in its clockwise or reversed orientation, i.e. x_ir = 1. When a face is not selected at all, its respective variables are set to zero. Next, the structural faces are selected through three energy terms: data selection, model complexity and interior coverage.

[74] The data selection term evaluates how many points belong to the planar patches through:

E_d = 1 - (1/P) . Σ_{i=1..N} (x_i + x_ir) . inliers(f_i)   (Eq. 1),

wherein P is the total number of inliers from the detected planar patches, inliers(f_i) is the number of inliers in face f_i, and N is the total number of generated faces. The term E_d favours selecting as many faces as possible wherein planar patches are identified.

[75] A second term, the model complexity, discourages selecting structural faces corresponding to clutter by taking into account the number of boundary edges of a structural face. The model complexity term is expressed as follows:

E_c = (1/E) . Σ corner(f_i, f_j)   (Eq. 2),

wherein E is the total number of edges, the sum runs over the pairs of selected adjacent faces, and corner(f_i, f_j) indicates whether the edge formed by the faces f_i and f_j is a boundary edge, i.e. corner(f_i, f_j) = 1, or not, i.e. corner(f_i, f_j) = 0.

[76] The third term, the interior coverage, compensates for the fact that clutter results in missing data. This is achieved by selecting structural faces that cover more points on the inside compared to the outside. To measure the coverage of a face f_i, firstly all points that lie in front of f_i are projected. Hence, for x_i the points on the positive side of the supporting plane are projected onto f_i and for x_ir the points on the negative side are projected onto f_i. Next, a two-dimensional alpha shape is calculated from the projected points. The alpha shape creates a bounding area that envelops the set of two-dimensional projected points. By changing the alpha parameter, the alpha shape object is manipulated to tighten or loosen the fit around the points to create a non-convex region. The alpha shape provides a good measure for the coverage of the candidate face by the projected points. Thus, even if a face f_i from a structural component has no inliers due to occlusion, there will be a lot of points in front of it. Hence, it can still provide a high coverage and gets a higher chance of being selected as well.

[77] The interior coverage term is expressed as:

E_i = (1/area(M)) . Σ_{i=1..N} ( x_i . (area(f_i) - area_p(f_i)) + x_ir . (area(f_i) - area_n(f_i)) )   (Eq. 3),

wherein area(M), area(f_i), area_p(f_i) and area_n(f_i) denote the surface areas of the bounding box 300, of a candidate face f_i, and of the alpha shape mesh constructed from the points on the positive or negative side of f_i respectively.

[78] Subsequently, an optimal subset of candidate structural faces is selected 605 by minimizing a weighted sum of the data selection, model complexity and interior coverage terms expressed by equations one to three. Since for a face f_i only one orientation can be selected, the variables x_i and x_ir are mutually exclusive. This mutual exclusion can be enforced by adding an extra term to an objective function calculating a weighted sum of the terms, namely the products x_i.x_ir. This term ensures that either one or both variables will be zero. The objective term is formulated as follows:

E = λ_d.E_d + λ_c.E_c + λ_i.E_i + Σ_{i=1..N} x_i.x_ir   (Eq. 4),

with 0 ≤ x_i.x_ir ≤ 1. Thus, by defining the two variables per face, it is ensured that the constructed spatial model 606 will have a consistent orientation. The orientation of two adjacent faces is consistent when the two vertices of the common edge are in an opposite direction. This is illustrated in Fig. 7B by 710 and 711.
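For illustration, evaluating the weighted objective for one candidate selection can be sketched as below; the weights and the assumption that the three energy terms have already been computed elsewhere are part of the example only.

def objective(x, x_r, E_d, E_c, E_i,
              lambda_d=1.0, lambda_c=0.5, lambda_i=0.5):
    """Weighted sum of the data selection, model complexity and interior
    coverage terms for one candidate selection.

    x and x_r hold the 0/1 variables x_i and x_ir per face; the products
    x_i * x_ir penalise selecting both orientations of the same face.
    """
    exclusion = sum(xi * xir for xi, xir in zip(x, x_r))
    return lambda_d * E_d + lambda_c * E_c + lambda_i * E_i + exclusion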

[79] The vertices of each edge are further lexicographically ordered, so for each face f_i adjacent to an edge e_j its direction can be determined with respect to this edge. Thus, functions sign(x_i, e_j) and sign(x_ir, e_j) are defined as follows:

sign(x_i, e_j) = 1 if e_j and the corresponding edge in f_i have the same direction, and
sign(x_i, e_j) = -1 if e_j and the corresponding edge in f_i have opposite directions   (Eq. 5),

and:

sign(x_ir, e_j) = -sign(x_i, e_j)   (Eq. 6).

[80] Next, to ensure that each edge will have consistently oriented faces, the following constraint is defined for each edge e_j:

Σ_i ( sign(x_i, e_j).x_i + sign(x_ir, e_j).x_ir ) = 0, the sum running over the faces f_i adjacent to e_j   (Eq. 7),

which implies that when a face is selected with the edge in one direction, another face should be selected with the edge in the opposite direction.

[81] Further, to guarantee that the constructed spatial model is 2-manifold, either two or no faces must be selected per edge. Therefore, an additional constraint for each edge e_j is defined as:

Σ_i ( x_i + x_ir ) ∈ {0, 2}, the sum again running over the faces f_i adjacent to e_j   (Eq. 8).

[82] Finally, the optimization problem for selecting the best subset of candidate faces can be formulated as follows:

min E = λ_d.E_d + λ_c.E_c + λ_i.E_i + Σ_{i=1..N} x_i.x_ir   (Eq. 9),

such that the constraints of Eq. 7 and Eq. 8 hold for every edge e_j and x_i, x_ir ∈ {0, 1} for every face f_i   (Eq. 10).

[83] Finally, when the structural faces are selected 605, the spatial model is constructed 606 based on this selection 605 of structural faces. A constructed 606 spatial model is illustrated in Fig. 5 by spatial model 500, which represents the structure that is scanned and from which point cloud 100 was obtained and used as an input 601 to the apparatus 610.

[84] According to a second example aspect, the bounding volume 300 is divided into smaller subvolumes by the detected planes. By subdividing the 3D space a set of so-called cells comprising vertices, edges, faces and volumes is obtained. Commonly, a 0-cell is referred to as a vertex, a 1-cell as an edge, a 2-cell as a face, and a 3-cell as a volume. The collection of all these cells is commonly referred to as a cell complex. To describe the topological structure between these cells a combinatorial map may be used that links the cells together through adjacency and incidence relationships. From the set of faces, a selection 605 is made from which the spatial model is constructed 606.

[85] The selection 605 of the structural faces is performed such that it describes the geometry of the scanned structure while guaranteeing that the resulting spatial model is 2-manifold and oriented. This may be achieved by formulating the selection 605 as a binary labelling problem as described below.

[86] Firstly, each volume, represented as v_i, is described by a variable x_vi. This variable is set to 1 if the volume is selected and 0 otherwise. Due to the way the 3D cell complex was built, two spatially connected volumes always share a face. If both volumes are selected, this face lies in the interior and is called an interior face. If only one of both volumes is selected, the face lies on the boundary between the interior and exterior and is called a boundary face. Therefore, each structural face, represented as f_i, is described by two variables x_fi and x_fbi, wherein the subscript b signifies boundary, meaning that either a face is selected as an interior face or as a boundary face (i.e. x_fi + x_fbi ≤ 1). When the face is not selected at all, its respective variables are set to zero. Finally, each edge, represented as e_i, is described by two variables x_ei and x_ebi, wherein the subscript b signifies boundary, where x_ebi ≤ x_ei, meaning that if an edge is a boundary edge (i.e. x_ebi = 1) the corresponding edge is selected (i.e. x_ei = 1). Next, the structural faces are selected through three energy terms: data selection, model complexity and interior coverage.

[87] The data selection term evaluates how many points belong to the planar patches on the boundary faces through:

E_d = 1 - (1/P) . Σ_{i=1..N} x_fbi . inliers(f_i)   (Eq. 11),

wherein P is the total number of inliers from the detected planar patches, inliers(f_i) is the number of inliers in face f_i, and N is the total number of generated faces. The term E_d favours selecting as many boundary faces as possible wherein planar patches are identified.

[88] A second term, the model complexity, discourages selecting structural faces corresponding to clutter by taking into account the number of boundary edges of a structural face. The model complexity term is expressed as follows:

E_c = (1/E) . Σ_i x_ebi   (Eq. 12),

wherein E is the total number of edges.

[89] The third term, the interior coverage, evaluates how many points fall inside a volume:

E_i = 1 - (1/C) . Σ_{i=1..L} x_vi . inliers(v_i)   (Eq. 13),

wherein C is the total number of inliers that do not belong to the detected planar patches, inliers(v_i) is the number of inliers in volume v_i, and L is the total number of generated volumes. The term E_i favours selecting as many volumes containing points as possible.

[90] Subsequently, an optimal subset of candidate structural faces is selected 605 by minimizing a weighted sum of the data selection, model complexity and interior coverage terms expressed by equations eleven to thirteen. The objective term is formulated as follows:

E = λ_d.E_d + λ_c.E_c + λ_i.E_i   (Eq. 14).

[91] Next, to ensure that the boundary faces are selected when only one of the adjacent volumes is selected, the following constraint may be added:

x_vi + x_vj = 2.x_fk + x_fbk   (Eq. 15),

where v_i and v_j are the cells adjacent to face f_k. If both adjacent volumes are selected (i.e. x_vi = 1 and x_vj = 1) the equation will automatically force x_fk to be 1 and x_fbk to be 0, meaning that the shared face is indeed an interior face.
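Purely as an illustration, the cell selection can be written as a small binary integer program, for example with the PuLP library as sketched below. The cost coefficients, the omission of the edge variables and the linear encoding x_vi + x_vj = 2.x_fk + x_fbk of the linking constraint are assumptions that merely reproduce the behaviour described above; they are not presented as the exact formulation of this aspect.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def select_cells(volumes, faces, adjacency, face_cost, volume_cost):
    """Select interior cells and boundary faces of a cell complex.

    volumes, faces : lists of indices
    adjacency[k]   : (i, j), the two cells adjacent to face k
    face_cost[k]   : cost of selecting face k as a boundary face
    volume_cost[i] : cost of selecting cell i as an interior cell
    (costs assumed to be derived from the energy terms described above)
    """
    prob = LpProblem("cell_selection", LpMinimize)
    x_v = {i: LpVariable(f"v_{i}", cat=LpBinary) for i in volumes}
    x_f = {k: LpVariable(f"f_{k}", cat=LpBinary) for k in faces}    # interior face
    x_fb = {k: LpVariable(f"fb_{k}", cat=LpBinary) for k in faces}  # boundary face

    # Weighted sum standing in for the data, complexity and interior terms.
    prob += (lpSum(face_cost[k] * x_fb[k] for k in faces)
             + lpSum(volume_cost[i] * x_v[i] for i in volumes))

    # A face is an interior face when both adjacent cells are selected and
    # a boundary face when exactly one of them is selected.
    for k, (i, j) in adjacency.items():
        prob += x_v[i] + x_v[j] == 2 * x_f[k] + x_fb[k]

    prob.solve()
    return ([i for i in volumes if x_v[i].value() == 1],
            [k for k in faces if x_fb[k].value() == 1])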

[92] Finally, the optimization problem for selecting the best subset of candidate faces can be formulated as follows:

min E = λ_d.E_d + λ_c.E_c + λ_i.E_i   (Eq. 16),

such that the constraint of Eq. 15 holds for every face f_k and all variables x_vi, x_fk, x_fbk, x_ei and x_ebi are binary.

[93] Finally, when the structural faces are selected 605, the spatial model is constructed 606 based on this selection 605 of structural faces. A constructed 606 spatial model is illustrated in Fig. 5 by spatial model 500, which represents the structure that is scanned and from which point cloud 100 was obtained and used as an input 601 to the apparatus 610.

[94] According to a further embodiment, a labelling operation may be performed when identifying 602 the shapes in the point cloud 100. Shapes that do not carry a label that qualifies that shape as a supporting structure are then disregarded for the further intersecting step 604. Examples of labels are table, chair, door, wall, ceiling, window and cabinet, of which only wall, door, ceiling and window would qualify for defining the supporting structure. Such labelling may further be performed in an automated way, for example by a neural network that is trained to perform such labelling.

[95] Fig. 8 shows a suitable computing system 800 for performing the steps according to the above embodiments. Computing system 800 may be used for generating a spatial model 500. Computing system 800 may in general be formed as a suitable general-purpose computer and comprise a bus 810, a processor 802, a local memory 804, one or more optional input interfaces 814, one or more optional output interfaces 816, a communication interface 812, a storage element interface 806 and one or more storage elements 808. Bus 810 may comprise one or more conductors that permit communication among the components of the computing system 800. Processor 802 may include any type of conventional processor or microprocessor that interprets and executes programming instructions. Local memory 804 may include a random-access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 802 and/or a read only memory (ROM) or another type of static storage device that stores static information and instructions for use by processor 802. Input interface 814 may comprise one or more conventional mechanisms that permit an operator to input information to the computing device 800, such as a keyboard 820, a mouse 830, a pen, voice recognition and/or biometric mechanisms, etc. Output interface 816 may comprise one or more conventional mechanisms that output information to the operator, such as a display 840, etc. Communication interface 812 may comprise any transceiver-like mechanism such as for example one or more Ethernet interfaces that enables computing system 800 to communicate with other devices and/or systems such as apparatus 610. The communication interface 812 of computing system 800 may be connected to such another computing system by means of a local area network (LAN) or a wide area network (WAN) such as for example the internet. Storage element interface 806 may comprise a storage interface such as for example a Serial Advanced Technology Attachment (SATA) interface or a Small Computer System Interface (SCSI) for connecting bus 810 to one or more storage elements 808, such as one or more local disks, for example SATA disk drives, and control the reading and writing of data to and/or from these storage elements 808. Although the storage elements 808 above are described as a local disk, in general any other suitable computer-readable media such as a removable magnetic disk, optical storage media such as a CD-ROM or DVD-ROM disk, solid state drives, flash memory cards, ... could be used. The system 800 described above can also run as a virtual machine above the physical hardware.

[96] The steps according to the above embodiments may be performed on computing system 800. Computing system may interact directly with a user, e.g. through the interfaces 820, 830 and represent the results of the steps according to the above embodiments on a display 840 or print them on a printer 650. Alternatively, as illustrated in Fig. 9, the steps may be performed remote from a user on a remote computing system 900, e.g., on a cloud computing system. Interaction with a user may then be done by a connection between the remote computing system 900 and a client computing system 901 , e.g. over an Internet connection. Also, the client computing system 901 may be implemented as computing system 800. Communication is then performed over a wired or wireless networking interface 812.

[97] Although the present invention has been illustrated by reference to specific embodiments, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied with various changes and modifications without departing from the scope thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. In other words, it is contemplated to cover any and all modifications, variations or equivalents that fall within the scope of the basic underlying principles and whose essential attributes are claimed in this patent application. It will furthermore be understood by the reader of this patent application that the words "comprising" or "comprise" do not exclude other elements or steps, that the words "a" or "an" do not exclude a plurality, and that a single element, such as a computer system, a processor, or another integrated unit may fulfil the functions of several means recited in the claims. Any reference signs in the claims shall not be construed as limiting the respective claims concerned. The terms "first", "second", third", "a", "b", "c", and the like, when used in the description or in the claims are introduced to distinguish between similar elements or steps and are not necessarily describing a sequential or chronological order. Similarly, the terms "top", "bottom", "over", "under", and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.




 