

Title:
ESTIMATION OF LOWER BOUNDS FOR DEVIATIONS OF AS-BUILT STRUCTURES FROM AS-DESIGNED MODELS
Document Type and Number:
WIPO Patent Application WO/2016/107989
Kind Code:
A1
Abstract:
A method and a system for determining lower bounds for deviations of an as-built structure from an as-designed model are provided. The method includes facilitating receipt of a 2-D image of an as-built structure and a 3-D as-designed model associated with the as-built structure, where the as-designed model includes a plurality of vertices and a plurality of lines. The method includes determining a plurality of edge features in the image and performing an exterior orientation of the image corresponding to the as-designed model to generate an oriented image. The as-designed model is deformed such that, upon projection of the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model substantially fit with the plurality of edge features. Lower bounds for deviations of the as-built structure from the as-designed model are determined corresponding to the plurality of vertices based on the deformation of the as-designed model.

Inventors:
JOKINEN OLLI (FI)
Application Number:
PCT/FI2015/050960
Publication Date:
July 07, 2016
Filing Date:
December 31, 2015
Assignee:
UNIV AALTO FOUNDATION (FI)
International Classes:
G06T7/00; G01C11/00; G06F30/10; G06F30/20; G06Q50/08; G06V10/70
Domestic Patent References:
WO2013106802A12013-07-18
Foreign References:
US20130155058A12013-06-20
US20090010489A12009-01-08
US20040068187A12004-04-08
US20110187713A12011-08-04
US6324299B12001-11-27
US6963338B12005-11-08
US20090322742A12009-12-31
Attorney, Agent or Firm:
PAPULA OY (Helsinki, FI)
Claims:
CLAIMS

What is claimed is:

1. A method, comprising:

facilitating receipt of:

an image of an as-built structure, the image being a two-dimensional (2-D) image, and

an as-designed model associated with the as-built structure, the as-designed model being a three-dimensional (3-D) model comprising a plurality of vertices and a plurality of lines connecting the plurality of vertices;

determining a plurality of edge features in the image;

performing an exterior orientation of the image comprising the plurality of edge features corresponding to the as-designed model to generate an oriented image;

deforming the as-designed model, such that, upon projecting the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image; and

determining, based on the image, lower-bounds for deviations of the as-built structure from the as-designed model, the deviations determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

2. The method as claimed in claim 1, wherein the image is a planar image.

3. The method as claimed in claim 2, wherein the plurality of edge features is a plurality of edge lines in the planar image, and wherein determining an edge line of the plurality of edge lines comprises detecting edge pixels in the planar image.

4. The method as claimed in claim 3, wherein performing the exterior orientation of the planar image comprises determining correspondences between the plurality of edge lines of the planar image and the plurality of lines of the as-designed model.

5. The method as claimed in claim 1, wherein deforming the as-designed model comprises adjusting positions of the plurality of vertices of the as-designed model such that a vector from a camera projection center to an individual vertex of the plurality of vertices is perpendicular to normals of projecting planes corresponding to one or more edge features of the plurality of edge features in the image, wherein the one or more edge features correspond to one or more lines of the plurality of lines departing from the individual vertex.

6. The method as claimed in claim 1, wherein the image is a spherical image.

7. The method as claimed in claim 6, wherein the plurality of edge features is a plurality of edge curves in the spherical image, and wherein determining the plurality of edge curves comprises:

determining edge pixels in the spherical image using a Canny edge detection algorithm;

grouping the edge pixels into a plurality of connected components based on 8-connectivity of neighbouring pixels;

forming one or more segments within each connected component of the plurality of connected components using a region growing process based on determining, at each edge pixel, a local line direction and a signed distance from a camera origin;

forming a plurality of arcs corresponding to segments of the plurality of connected components using edge boundary pixels corresponding to the segments; and

merging overlapping arcs from among the plurality of arcs to determine the plurality of edge curves.

8. The method as claimed in claim 6, wherein the plurality of edge features is a plurality of edge curves, and wherein performing the exterior orientation of the image comprises determining correspondences between the plurality of edge curves of the spherical image and the plurality of lines of the as-designed model using a RANSAC algorithm.

9. The method as claimed in claim 6, wherein deforming the as-designed model comprises adjusting positions of the plurality of vertices of the as-designed model such that a vector from a camera projection center to an individual vertex of the plurality of vertices is perpendicular to normals of projecting planes corresponding to one or more edge features of the plurality of edge features in the image, wherein the one or more edge features correspond to one or more lines of the plurality of lines departing from the individual vertex.

10. An apparatus, comprising:

a memory to store image processing instructions; and

a processor electronically coupled with the memory, the processor configured to execute the image processing instructions stored in the memory to cause the apparatus to perform at least:

facilitating receipt of:

an image of an as-built structure, the image being a two-dimensional (2-D) image, and

an as-designed model associated with the as-built structure, the as-designed model being a three-dimensional (3-D) model comprising a plurality of vertices and a plurality of lines connecting the plurality of vertices;

determining a plurality of edge features in the image;

performing an exterior orientation of the image comprising the plurality of edge features corresponding to the as-designed model to generate an oriented image;

deforming the as-designed model, such that, upon projecting the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image; and

determining, based on the image, lower-bounds for deviations of the as-built structure from the as-designed model, the deviations determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

11. The apparatus as claimed in claim 10, wherein the image is a planar image.

12. The apparatus as claimed in claim 11, wherein the plurality of edge features is a plurality of edge lines in the planar image, and wherein determining an edge line of the plurality of edge lines comprises detecting edge pixels in the planar image.

13. The apparatus as claimed in claim 12, wherein for performing the exterior orientation of the planar image, the apparatus is further caused, at least in part to determine correspondences between the plurality of edge lines of the planar image and the plurality of lines of the as-designed model.

14. The apparatus as claimed in claim 10, wherein for deforming the as-designed model, the apparatus is further caused, at least in part to adjust positions of the plurality of vertices of the as-designed model such that a vector from a camera projection center to an individual vertex of the plurality of vertices is perpendicular to normals of projecting planes corresponding to one or more edge features of the plurality of edge features in the image, and wherein the one or more edge features correspond to one or more lines of the plurality of lines departing from the individual vertex.

15. The apparatus as claimed in claim 10, wherein the image is a spherical image.

16. The apparatus as claimed in claim 15, wherein the plurality of edge features is a plurality of edge curves in the spherical image, and wherein for determining the plurality of edge curves, the apparatus is further caused, at least in part to:

determine edge pixels in the spherical image using a Canny edge detection algorithm;

group the edge pixels into a plurality of connected components based on 8-connectivity of neighbouring pixels;

form one or more segments within each connected component of the plurality of connected components using a region growing process based on determining, at each edge pixel, a local line direction and a signed distance from a camera origin;

form a plurality of arcs corresponding to segments of the plurality of connected components using edge boundary pixels corresponding to the segments; and

merge overlapping arcs from among the plurality of arcs to determine the plurality of edge curves.

17. The apparatus as claimed in claim 15, wherein the plurality of edge features is a plurality of edge curves, and wherein for performing the exterior orientation of the image, the apparatus is further caused, at least in part to determine correspondences between the plurality of edge curves of the spherical image and the plurality of lines of the as-designed model using a RANSAC algorithm.

18. The apparatus as claimed in claim 15, wherein for deforming the as-designed model, the apparatus is further caused, at least in part to adjust positions of the plurality of vertices of the as-designed model such that a vector from a camera projection center to an individual vertex of the plurality of vertices is perpendicular to normals of projecting planes corresponding to one or more edge features of the plurality of edge features in the image, wherein the one or more edge features correspond to one or more lines of the plurality of lines departing from the individual vertex.

19. A non-transitory, computer-readable storage medium storing computer-executable program instructions to implement a method for determining lower bounds for deviations of an as-built structure from an as-designed model, the method comprising:

facilitating receipt of:

an image of an as-built structure, the image being a two-dimensional (2-D) image, and

an as-designed model associated with the as-built structure, the as-designed model being a three-dimensional (3-D) model comprising a plurality of vertices and a plurality of lines connecting the plurality of vertices;

determining a plurality of edge features in the image;

performing an exterior orientation of the image comprising the plurality of edge features corresponding to the as-designed model to generate an oriented image;

deforming the as-designed model, such that, upon projecting the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image; and

determining, based on the image, lower-bounds for deviations of the as-built structure from the as-designed model, the deviations determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

Description:
ESTIMATION OF LOWER BOUNDS FOR DEVIATIONS OF AS-BUILT STRUCTURES FROM AS-DESIGNED MODELS

TECHNICAL FIELD

[0001] The present disclosure generally relates to photogrammetry and, more particularly, to image-based detection and measuring of deviations against an as-designed Building Information Model (BIM).

CROSS-REFERENCE TO RELATED APPLICATIONS

[0002] This application claims priority to U.S. Provisional Application No. 62/098464, filed Dec. 31, 2014, titled "As-built Deformations of 3-D Building Information Model from Single Spherical Panoramic Image", by Olli Jokinen.

BACKGROUND

[0003] Detection and measurement of deviations between as-built and as-designed structures (e.g., buildings) is of significant value for high-quality construction work. An optimal solution would allow the positioning of building elements to be checked against allowed tolerances already during installation, when possible errors are still easy to remedy. At later stages, as-built documentation facilitates proper installation and later maintenance of building services equipment, as one knows where the equipment was actually installed.

[0004] The building geometry of as-designed structures is usually represented in a Building Information Model (BIM) with vertices, lines, and surfaces. Inspecting the geometry of the as-built structure typically involves reconstructing a dense set of 3-D points from stereo imagery or by laser scanning of the as-built structure, and thereafter modeling the data into surface patches which are compared against the design. Processing such a huge amount of data is computationally heavy and requires identification of corresponding surface patches between the design (BIM) and the reality (the as-built structure).

[0005] Hence, techniques are needed that can reduce the computational overhead of studying deviations between as-built structures and their corresponding as-designed models.

SUMMARY

[0006] Various methods, systems, and computer-readable media for determining lower bounds for deviations of an as-built structure from an as-designed model are disclosed. In an embodiment, a method includes facilitating receipt of an image of an as-built structure. The image is a two-dimensional (2-D) image. The method also facilitates receipt of an as-designed model associated with the as-built structure. The as-designed model is a three-dimensional (3-D) model including a plurality of vertices and a plurality of lines connecting the plurality of vertices. Furthermore, the method determines a plurality of edge features in the image, and an exterior orientation of the image including the plurality of edge features corresponding to the as-designed model is performed to generate an oriented image. Also, the method deforms the as-designed model, such that, upon projection of the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image. Based on the image, lower bounds for deviations of the as-built structure from the as-designed model are determined. The deviations are determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

[0007] In another embodiment, an apparatus for determining lower bounds for deviations of an as-built structure from an as-designed model is disclosed. The apparatus includes a memory and a processor. The memory stores image processing instructions and the processor is electronically coupled with the memory. The processor is configured to execute the image processing instructions stored in the memory to cause the apparatus to perform facilitating receipt of an image of an as-built structure. The image is a two-dimensional (2-D) image. The apparatus also facilitates receipt of an as-designed model associated with the as-built structure. The as-designed model is a three-dimensional (3-D) model including a plurality of vertices and a plurality of lines connecting the plurality of vertices. The apparatus further determines a plurality of edge features in the image and performs an exterior orientation of the image including the plurality of edge features corresponding to the as-designed model to generate an oriented image. Furthermore, the apparatus deforms the as-designed model, such that, upon projection of the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image. Based on the image, lower bounds for deviations of the as-built structure from the as-designed model are determined. The deviations are determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

[0008] In yet another embodiment, a non-transitory, computer-readable storage medium storing computer-executable program instructions to implement a method for determining lower bounds for deviations of an as-built structure from an as-designed model is disclosed. The method includes facilitating receipt of an image of an as-built structure. The image is a two-dimensional (2-D) image. The method further facilitates receipt of an as-designed model associated with the as-built structure. The as-designed model is a three-dimensional (3-D) model including a plurality of vertices and a plurality of lines connecting the plurality of vertices. Furthermore, the method determines a plurality of edge features in the image, and an exterior orientation of the image including the plurality of edge features corresponding to the as-designed model is performed to generate an oriented image. Also, the method deforms the as-designed model, such that, upon projection of the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image. Based on the image, lower bounds for deviations of the as-built structure from the as-designed model are determined. The deviations are determined corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

[0009] Other aspects and example embodiments are provided in the drawings and the detailed description that follows.

BRIEF DESCRIPTION OF THE FIGURES

[0010] For a more complete understanding of example embodiments of the present technology, reference is now made to the following descriptions taken in connection with the accompanying drawings, in which:

FIG. 1 illustrates a block diagram representation of an apparatus, in accordance with an example embodiment;

FIG. 2 is a flow diagram depicting a method for determining lower bounds for deviations of an as-built structure from an as-designed model (BIM), in accordance with an example embodiment;

FIGS. 3A and 3B show example representations of imaging geometries for a planar image and a spherical image, in accordance with an example embodiment;

FIG. 4 is a flow diagram depicting a method for extracting edge features from a spherical image, in accordance with an example embodiment;

FIG. 5A shows an example image representation of edge features extracted in an image of a real-construction site with selected part of BIM projected thereon with an initial orientation, in accordance with an example embodiment;

FIG. 5B shows an example image representation of the real-construction site of FIG. 5A with selected part of BIM projected thereon with a refined exterior orientation, in accordance with an example embodiment;

FIG. 5C shows an example image representation of the real-construction site of FIG. 5B after deforming the selected part of BIM to substantially match with the edge features in the image, in accordance with an example embodiment;

FIG. 6A shows an example image representation of edge features extracted in an image of a parking lot corner with selected part of BIM projected thereon with an initial orientation, in accordance with an example embodiment;

FIG. 6B shows an example image representation of the image of FIG. 6A with selected part of BIM projected thereon with a refined exterior orientation, in accordance with an example embodiment;

FIG. 6C shows an example image representation of the image of FIG. 6B after deforming the selected part of BIM to substantially match with the edge features in the image, in accordance with an example embodiment; and

FIG. 7 shows a plotted representation of a deformed model in BIM coordinates with shading differences illustrating lower bounds of differences of the as-built structure against the BIM, in accordance with an example embodiment.

[0011] The drawings referred to in this description are not to be understood as being drawn to scale except if specifically noted, and such drawings are only exemplary in nature.

DETAILED DESCRIPTION

[0012] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure can be practiced without these specific details. In other instances, apparatuses and methods are shown in block diagram form only in order to avoid obscuring the present disclosure.

[0013] Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.

[0014] Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present disclosure. Similarly, although many of the features of the present disclosure are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present disclosure is set forth without any loss of generality to, and without imposing limitations upon, the present disclosure.

[0015] The terms 'as-designed structure', 'as-designed model', 'building information model', and 'BIM' are used interchangeably throughout the description, and these terms represent a 3-D model of any planned structure that is to be built, already built, or a work in progress. Further, the term 'as-built structure' refers to any actual structure that is built such that its images can be captured. Examples of the images herein include planar images and/or spherical images, including planar and/or spherical panoramic images of the as-built structure. Herein, for the purposes of the description, the term 'planar image' also includes 'planar panoramic image' unless the context suggests otherwise, and the terms 'planar image' and 'planar panoramic image' are jointly referred to as 'planar image (Ip)'. Similarly, the term 'spherical image' also includes 'spherical panoramic image' unless the context suggests otherwise, and the terms 'spherical image' and 'spherical panoramic image' are jointly referred to as 'spherical image (Is)'.

[0016] FIG. 1 illustrates a block diagram of an apparatus 100 for adjusting a building information model to fit with an image, in accordance with an example embodiment of the present invention. More specifically, the apparatus 100 is configured to determine lower bounds for deviations of the as-built structure from the as-designed model.

[0017] It is understood that the apparatus 100 as illustrated and hereinafter described is merely illustrative of an apparatus that could benefit from embodiments of the disclosure and, therefore, should not be taken to limit the scope of the disclosure. The apparatus 100 may be any computing or data processing machine, for example, a laptop computer, a tablet computer, a mobile phone, a server, and the like. It is noted that the apparatus 100 may include fewer or more components than those depicted in FIG. 1. Moreover, the apparatus 100 may be implemented as a centralized device, or, alternatively, various components of the apparatus 100 may be deployed in a distributed manner while being operatively coupled to each other. In an embodiment, one or more components of the apparatus 100 may be implemented as a set of software layers on top of existing hardware systems.

[0018] In at least one example embodiment, the apparatus 100 includes at least one processor, for example, a processor 102, and at least one memory, for example, a memory 104. The memory 104 is capable of storing machine-executable instructions, particularly the image processing instructions. Further, the processor 102 is capable of executing the stored machine-executable instructions. The processor 102 may be embodied in a number of different ways. In an embodiment, the processor 102 may be embodied as one or more of various processing devices, such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In at least one example embodiment, the processor 102 utilizes computer program code to cause the apparatus 100 to perform one or more actions responsible for adjusting a building information model of a structure to fit with an image corresponding to the as-built structure, and to calculate lower bound estimates for deviations between the as-built structure and the as-designed model (the building information model).

[0019] The memory 104 may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. For example, the memory 104 may be embodied as magnetic storage devices (such as hard disk drives, floppy disks, magnetic tapes, etc.), optical magnetic storage devices (e.g., magneto-optical disks), CD-ROM (compact disc read only memory), CD-R (compact disc recordable), CD-R/W (compact disc rewritable), DVD (Digital Versatile Disc), BD (Blu-ray® Disc), and semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).

[0020] In at least one embodiment, the apparatus 100 includes a user interface 106 (also referred to as UI 106) for providing an output and/or receiving an input. The user interface 106 is configured to be in communication with the processor 102 and the memory 104. Examples of the user interface 106 include, but are not limited to, an input interface and/or an output interface. Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, a microphone, and the like. Examples of the output interface may include, but are not limited to, a display such as light emitting diode display, thin-film transistor (TFT) display, liquid crystal display, active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, ringers, vibrators, and the like. In an example embodiment, the processor 102 may include user interface circuitry configured to control at least some functions of one or more elements of the user interface 106, such as, for example, a speaker, a ringer, a microphone, a display, and/or the like. The processor 102 and/or the user interface circuitry may be configured to control one or more functions of the one or more elements of the user interface 106 through computer program instructions, for example, software and/or firmware, stored in a memory, for example, the memory 104, and/or the like, accessible to the processor 102.

[0021] In an example embodiment, the apparatus 100 includes a camera module 108, for example including one or more digital cameras. The camera module 108 is configured to be in communication with the processor 102 and/or other components of the apparatus 100 to capture digital image frames, videos and/or other graphic media. The camera module 108 may include hardware and/or software necessary for taking various kinds of images, for example, planar images, spherical images, or planar panoramic or spherical panoramic images. The camera module 108 may include hardware, such as a lens and/or other optical component(s) such as one or more image sensors. Examples of one or more image sensors may include, but are not limited to, a complementary metal-oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a backside illumination sensor (BSI) and the like. In an example embodiment, the camera module 108 may further include a processing element such as a co-processor that assists the processor 102 in processing image frame data and an encoder and/or a decoder for compressing and/or decompressing image frame data. The encoder and/or decoder may encode and/or decode according to a standard format, for example, a Joint Photographic Experts Group (JPEG) standard format.

[0022] The various components of the apparatus 100, such as components (102-108), may communicate with each other via a centralized circuit system 110 to determine lower bounds for deviations of the as-built structure from the as-designed model. The centralized circuit system 110 may be various devices configured to, among other things, provide or enable communication between the components (102-108) of the apparatus 100. In certain embodiments, the centralized circuit system 110 may be a central printed circuit board (PCB) such as a motherboard, a main board, a system board, or a logic board. The centralized circuit system 110 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media.

[0023] In at least one embodiment, the memory 104 is configured to store image processing instructions for processing of the as-designed model (hereinafter also interchangeably referred to as 'building information model' or 'BIM') and the images corresponding to the as-built structure. The image processing instructions stored in the memory 104 are executable by the processor 102 for performing a method explained with reference to FIG. 2.

[0024] FIG. 2 is a flowchart depicting an example method 200 for estimation of lower bounds of deviations of the as-built structure from the as-designed model, in accordance with an example embodiment. The method 200 depicted in the flowchart may be executed by, for example, the apparatus 100 of FIG. 1. It should be noted that to facilitate discussions of the flowchart of FIG. 2, certain operations are described herein as constituting distinct steps performed in a certain order. Such implementations are examples only and non-limiting in scope. Certain operations may be grouped together and performed in a single operation, and certain operations may be performed in an order that differs from the order employed in the examples set forth herein. Moreover, certain operations of the method 200 are performed in an automated fashion. These operations involve substantially no interaction with the user. Other operations of the method 200 may be performed in a manual fashion or semi-automatic fashion. These operations involve interaction with the user via one or more user interface presentations.

[0025] At 205, the method 200 includes facilitating receipt of an image (I) of an as-built structure and an as-designed model (BIM) associated with the as-built structure. Examples of the as-built structure may include, but are not limited to, any man-made structure such as a building, a site, or a road; complete or partially built structures; foundations of any structure; etc. In an example embodiment, the BIM associated with the as-designed structure is a 3-D model of an intended structure that is originally planned to be built. In an example, the BIM is represented as a 3-D wire-frame model with a plurality of vertices and a plurality of BIM lines (i.e., straight lines) joining the plurality of vertices. In an example embodiment, the image (I) of the as-built structure is captured by the camera module 108 present in or otherwise accessible to the apparatus 100. In some other example embodiments, the image (I) may be prerecorded or stored in the apparatus 100, or may be received from sources external to the apparatus 100. In such example embodiments, the apparatus 100 is caused to receive the image (I) from an external storage medium such as a DVD, Compact Disc (CD), flash drive, or memory card, or from external storage locations through the Internet, Bluetooth®, and the like.
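The 3-D wire-frame model described above reduces to a list of vertices plus index pairs naming the BIM lines that join them. A minimal sketch of such a representation (the class and names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class WireframeBIM:
    """Minimal as-designed model: 3-D vertices and the lines joining them."""
    vertices: list  # list of (x, y, z) tuples in BIM coordinates
    lines: list     # list of (i, j) index pairs into `vertices`

    def line_endpoints(self, k):
        """Return the two 3-D endpoints of BIM line k."""
        i, j = self.lines[k]
        return self.vertices[i], self.vertices[j]

# A unit-cube corner with three lines departing from vertex 0.
bim = WireframeBIM(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    lines=[(0, 1), (0, 2), (0, 3)],
)
```

A real BIM would carry surfaces and semantics as well; for the deviation estimation discussed here, only the vertex and line geometry is needed.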

[0026] At 210, the method 200 includes determining a plurality of edge features in the image (I). The plurality of edge features includes edge lines in those cases where the image (I) is a planar image, and the plurality of edge features includes edge curves in those cases where the image (I) is a spherical image. It should be noted that straight lines of the BIM are usually visible as edge lines in the planar image (Ip) or as edge curves in the spherical image (Is), and hence the edge features are determined from the image (I) so that the edge features and the BIM lines, after the post processing, can be utilized to compute lower bounds for the deviations of the as-built structure from the as-designed model. Various example embodiments of determination of the edge features are described later with reference to one or more of FIGS. 3A-3B to 6A-6C.

[0027] At 215, the method 200 includes performing an exterior orientation of the image (I) comprising the plurality of edge features corresponding to the as-designed model to generate an oriented image. The exterior orientation of the image (I) is found by determining correspondences between the BIM lines and the edge features of the image (I). For instance, the exterior orientation of the planar image or planar panoramic image (Ip) can be achieved by line to line correspondences of the 3-D lines of the BIM and the edge lines of the planar image (Ip). Further, the exterior orientation of the spherical image or spherical panoramic image (Is) can be achieved by estimating arc to line correspondences of the 3-D lines of the BIM and the arcs of the edge curves of the spherical image (Is), by suitable algorithms such as a modified RANSAC algorithm. Various example embodiments of the exterior orientation are described later with reference to one or more of FIGS. 3A-3B to 6A-6C.

[0028] At 220, the method 200 includes deforming the as-designed model (BIM), such that, upon projecting the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image (I). In an example embodiment, deforming the BIM comprises adjusting positions of the plurality of vertices of the BIM such that a vector from a camera projection center to an individual vertex of the plurality of vertices is perpendicular to normals of projecting planes corresponding to one or more edge features of the plurality of edge features in the image (I), wherein the one or more edge features correspond to one or more lines of the plurality of lines departing from the individual vertex. Further, at 225, the method 200 includes determining, based on the image (I), lower bounds for deviations of the as-built structure from the BIM. In an example embodiment, the deviations are determined corresponding to the plurality of vertices of the BIM based on the deformation of the BIM (performed at operation 220). Various example embodiments of the deformation of the BIM and determination of the lower bounds for the deviations are described further with reference to one or more of FIGS. 3A-3B to 7.

[0029] The operations 205-225 of the method 200, which are performed by the apparatus 100, will be described by jointly referring to FIGS. 3A-3B to 7, in which FIGS. 3A and 3B represent the image geometry and coordinates used for the explanation of various embodiments of the operations 205-225 of the method 200.

[0030] Referring to FIG. 3A, an example representation 300 of the object coordinate system and camera coordinate system corresponding to the planar image (Ip) is illustrated, and referring to FIG. 3B, an example representation 350 of the object coordinate system and camera coordinate system corresponding to the spherical image (Is) is illustrated. In both the FIGS.
3A and 3B, the rectangular object coordinate system XB, YB, ZB (see 302, 304, and 306 in FIG. 3A, and 352, 354, and 356 in FIG. 3B) is aligned with the BIM so that the origin (see 310 and 360) of the object coordinate system is in a vertex of a rectangular corner of the BIM and the coordinate axes are aligned with the BIM lines emanating from that vertex. The camera coordinates XC, YC, ZC are related to the object coordinates by rc = R(rB − t), where rB and rc are the positions of a point in the object and camera coordinate systems, respectively. Herein, the exterior orientation of the camera is given by a rotation matrix R and a translation vector t.
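The object-to-camera transform rc = R(rB − t) can be sketched in a few lines (a minimal illustration assuming numpy; the function names are ours, not from the disclosure):

```python
import numpy as np

def object_to_camera(rB, R, t):
    """Map object coordinates to camera coordinates: rc = R (rB - t)."""
    return R @ (np.asarray(rB, dtype=float) - np.asarray(t, dtype=float))

def camera_to_object(rc, R, t):
    """Inverse transform: rB = R^T rc + t (R is orthonormal)."""
    return R.T @ np.asarray(rc, dtype=float) + np.asarray(t, dtype=float)

# Round-trip check with a simple rotation about the z-axis
a = 0.3
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([1.0, 2.0, 3.0])
rB = np.array([4.0, 5.0, 6.0])
rc = object_to_camera(rB, R, t)
```

Because R is a rotation, the transform preserves distances from the projection center, which is why the same t and R serve both projection directions.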

[0031] As shown in FIG. 3A, in the case of the planar image (Ip), the object point is projected onto the image (I) by well-known equations of perspective projection (e.g., the projection of points Pc and Qc to points p and q, respectively). Further, as shown in FIG. 3B, in the case of the spherical image (Is), the object point is projected onto the image sphere by r = c·rc/‖rc‖, where c is the focal length of the camera. The coordinates of r = [x y z]^T are related to the spherical coordinates θ, φ as per the following expression (1):

x = −c cos θ sin φ

y = c sin θ        (1)

z = −c cos θ cos φ

where φ is the rotation angle (azimuth) around the y-axis counted from the negative z-axis, and θ is the rotation angle (elevation) around the once-rotated x-axis. The z-component follows from the constraint ‖r‖ = c.
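Expression (1) can be exercised directly (a sketch; the z-component is completed here from the constraint ‖r‖ = c, since only x and y are shown explicitly above, and the function name is ours):

```python
import numpy as np

def sphere_point(theta, phi, c=1.0):
    """Expression (1): image-sphere point of radius c from elevation theta
    and azimuth phi (phi counted from the negative z-axis around the y-axis).
    The z-component is completed from ||r|| = c."""
    return np.array([-c * np.cos(theta) * np.sin(phi),
                      c * np.sin(theta),
                     -c * np.cos(theta) * np.cos(phi)])

# theta = phi = 0 looks down the negative z-axis
r0 = sphere_point(0.0, 0.0, c=2.5)
```

Any (θ, φ) pair yields a point at distance c from the projection center, consistent with the spherical projection r = c·rc/‖rc‖.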

[0032] The processor 102 is configured to, with the image processing instructions stored in the memory 104, and optionally with other components described herein, cause the apparatus 100 to perform the operation 210 of the method 200. For example, the apparatus 100 is caused to determine a plurality of edge features in the image (I). In the embodiments of the image (I) being the planar image (Ip), the examples of the edge features are edge lines; and in the embodiments of the image (I) being the spherical image (Is), the examples of the edge features are edge curves.

[0033] In the embodiments of the image (I) being the planar image (Ip), the edge lines can be extracted by any well-known method such as Hough transform from edge pixels detected by any well-known method such as Canny algorithm. In an example embodiment of the image (I) being the spherical image (Is), the edge curves can be extracted as per a flow diagram illustrated in FIG. 4.
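For the planar case, the Hough voting step can be illustrated with a minimal accumulator (a simplified numpy sketch operating on synthetic "Canny output", not a production extractor; real pipelines would use an optimized library implementation such as OpenCV's):

```python
import numpy as np

def hough_peak(edge_pixels, n_theta=180, n_rho=200, rho_max=100.0):
    """Vote each edge pixel into a (theta, rho) accumulator for the line
    model x*cos(theta) + y*sin(theta) = rho, and return the peak cell."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for (x, y) in edge_pixels:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.searchsorted(rho_edges, rhos) - 1
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], 0.5 * (rho_edges[j] + rho_edges[j + 1])

# Synthetic edge map: pixels on the line x*cos(30 deg) + y*sin(30 deg) = 40
theta_true, rho_true = np.deg2rad(30.0), 40.0
ts = np.linspace(-60, 60, 121)
pts = [(rho_true * np.cos(theta_true) - t * np.sin(theta_true),
        rho_true * np.sin(theta_true) + t * np.cos(theta_true)) for t in ts]
theta_est, rho_est = hough_peak(pts)
```

The peak of the accumulator recovers the (θ, ρ) parameters of the dominant edge line up to the bin resolution.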

[0034] Referring now to FIG. 4, a method 400 illustrates a flow diagram for extracting the edge features (edge curves) from the spherical image (Is), in accordance with an example embodiment. In the case of a spherical image (Is), a straight line in the object space is projected to an arc of a circle on the image sphere of the spherical image (Is). The arc is defined as an intersection of the image sphere with the projecting plane having a unit normal defined as per the expression Nc = (Pc × Qc)/‖Pc × Qc‖, and containing the projection center (see FIG. 3B). It is noted that in spherical coordinates, the arc is a curve with a slowly varying tangent vector.
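The projecting-plane normal Nc = (Pc × Qc)/‖Pc × Qc‖ is a one-liner, shown below together with a check that the whole segment Pc..Qc lies in that plane (a sketch; the function name is ours):

```python
import numpy as np

def projecting_plane_normal(Pc, Qc):
    """Unit normal of the plane spanned by the two end points (in camera
    coordinates) and the projection center at the origin."""
    n = np.cross(Pc, Qc)
    return n / np.linalg.norm(n)

Pc = np.array([1.0, 0.5, 4.0])
Qc = np.array([-2.0, 0.5, 4.0])
Nc = projecting_plane_normal(Pc, Qc)
mid = 0.5 * (Pc + Qc)   # any point of the segment lies in the plane
```

Since the plane contains the origin, its intersection with the image sphere is the circle whose arc the line projects to.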

[0035] At 405, the method 400 includes detecting Canny edge pixels in the spherical image (Is) using the Canny algorithm as set forth in J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986. At 410, the method 400 includes grouping the Canny edge pixels into a plurality of connected components based on 8-connectivity of neighboring pixels. It is noted that each connected component may contain pixels from several BIM lines.

[0036] At 415, the method 400 includes selecting a connected component of the plurality of connected components. Further, at 420, the method 400 includes dividing the connected component into one or more segments and merging the compatible segments using a region growing process. In an example embodiment, for the operation 420, the method 400 includes estimating, robustly at each edge pixel, the angle of the normal of a local edge line and the signed distance of the local edge line from the image origin. In an example, estimation of the angle and the signed distance includes computing a set of lines through two points, i.e., each edge pixel in a local neighborhood (of size M × M pixels) and the pixel in question. Further, the method 400 includes selecting the best line, which has the largest number of other edge pixels closer than a threshold Te to the line. The line parameters are refined by fitting a straight line to all edge pixels within the neighborhood that are closer than Te to the best line.

[0037] In an example embodiment, for the operation 420, the method 400 further includes performing region growing along edge pixels with similarity of the local line direction and signed distance from the origin as criteria for adding the next pixel to the segment of previous pixels. Since the edge is curved, the local line parameters of the next pixel are compared to the moving average of line parameters of a pre-determined number of previously added pixels. In an example, the pre-determined number of previously added pixels may be 20 pixels. In an example embodiment, the line directions are similar if the normal angles differ by less than a threshold Ta, and the signed distances from the origin are similar if they differ by less than D·max(|cos(α + Ta) − cos α|, 1 − cos Ta), where D is the distance of the moving average from the origin and α is the normal angle of the moving average. In an example embodiment, if there are several neighboring edge pixels of the current pixel, then the most compatible of them is selected and the others are stored to be considered later, with appropriate moving averages of line parameters, once the branch of the selected pixel has been tracked to the end. In an example embodiment, appropriate care is also paid to vertical lines, where the normal angle of the line has a discontinuity of π radians and the signed distance from the origin changes to the opposite sign with the same absolute value. If the next pixel is not compatible with the region grown by then, a new segment is started. In this manner, the connected component is divided into segments.

[0038] At 425, the method 400 includes forming one or more arcs corresponding to one or more segments of the connected component using edge boundary pixels corresponding to the one or more segments. For instance, each segment of the connected component is selected, and for the pixels of each segment, the coordinates on the image sphere are computed according to expression (1). Further, a plane is then fitted to these points with the origin of the sphere added to the point set the same number of times as there are points on the image sphere. It is noted that the unit normal n of the fitted plane represents the projecting plane computed from the image measurements (see FIG. 3B). In an example embodiment, the end points (i.e., edge boundary pixels) p and q of an arc of a circle are determined by projecting the points on the sphere to the fitted plane, computing unit vectors from the projection center toward the projected points, selecting one vector as a reference and computing angles of the other vectors with respect to the reference, choosing the vectors of smallest and largest angle (possibly including the reference), and scaling them to the circle.
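The plane-fitting step can be illustrated with a simplified variant that constrains the plane exactly through the origin (the origin-duplication trick above pulls the fit toward the origin; constraining it exactly gives a clean SVD formulation — a sketch under that assumption, function name ours):

```python
import numpy as np

def fit_projecting_plane(points):
    """Total-least-squares plane through the origin: the unit normal n
    minimizing sum((P @ n)**2) is the right singular vector of the
    smallest singular value of the (uncentred) point matrix P."""
    P = np.asarray(points, dtype=float)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[-1]

# Points on the unit image sphere lying in a known plane through the origin
n_true = np.array([1.0, 2.0, 2.0]) / 3.0
u = np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0)   # u is orthogonal to n_true
v = np.cross(n_true, u)                          # completes the in-plane basis
angles = np.linspace(0.1, 1.2, 25)
pts = np.array([np.cos(a) * u + np.sin(a) * v for a in angles])
n_fit = fit_projecting_plane(pts)
```

The recovered normal agrees with the true one up to sign, which is all the arc construction needs.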

[0039] At 430, the method 400 checks whether all of the connected components are processed (i.e. the segments are formed, and thereby arcs are also formed). If all connected components are processed, the method 400 proceeds to 435, otherwise the method 400 goes back to 415, and a next connected component is selected for processing with the operations 420 and 425.

[0040] Once all connected components are processed, at 435, compatible arcs of circles, which satisfy an overlap criterion, are merged one by one if the unit normals of their projecting planes are close to each other or point in near opposite directions. For instance, if the angle between the normals of the projecting planes of two arcs is below a threshold Tβ or above π − Tβ, the two arcs are merged. In an example, the arcs are defined to satisfy the overlap criterion if, for example, the projected and scaled arcs overlap with each other or are apart from each other by not more than a tolerance T0. Herein, the projected and scaled arcs are obtained by orthogonally projecting the end point vectors of the arcs onto a new projecting plane and scaling the projected vectors so that their end points are on the image sphere. In an example, the new projecting plane is computed with a new unit normal averaged from the normals of the old planes with the lengths of the old arcs divided by the focal length as weights. The end points of the new merged arc are given by those projected and radially scaled old end point vectors which have the smallest and largest angle with respect to one end point vector selected as a reference (or the reference itself).
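The normal-compatibility part of the merge test is compact (a sketch; the function name and the fold via the absolute dot product are our choices):

```python
import numpy as np

def normals_mergeable(n1, n2, T_beta):
    """Merge test on projecting-plane normals: the angle between them must
    be below T_beta or above pi - T_beta. Taking the absolute value of the
    dot product folds the near-opposite case into the same comparison."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return bool(np.arccos(np.clip(c, -1.0, 1.0)) < T_beta)
```

Using |n1·n2| avoids a separate branch for anti-parallel normals, which arise because a projecting plane has two valid unit normals.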

[0041] The result of extraction of edge curves from a spherical panoramic image (Is) of a real construction site is illustrated in FIG. 5A. In this example representation 500, the extracted edge curves that are longer than a pre-determined number of pixels (e.g., 200 pixels) are shown by reference numerals 505. Further, a selected part of the BIM projected onto the spherical panoramic image (Is) according to an initial orientation is shown by reference numerals 510. The image of the real construction site shown in FIG. 5A may be constructed by stitching together a plurality of sub-images taken of the site. It should be understood that only some structures such as lacunas 520, 525, conduit elements 530, and elevator shaft 535 are exemplarily represented for the sake of description. Further, only a selected part of the BIM (see 510) is projected onto the spherical image (Is) by manually setting an initial orientation between the spherical image (Is) and the BIM.

[0042] In another representation, result of extraction of edge curves from another spherical image (an image of a corner of a parking hall) is illustrated in FIG. 6A. In this example representation 600, the extracted edge curves that are longer than 200 pixels are shown by reference numerals 605. Further, a selected part of the BIM projected on the spherical image (Is) according to an initial orientation is shown by reference numerals 610.

[0043] Some example embodiments of performing the operation 215 of the method 200 are explained in the following description. For example, the processor 102 is configured to, with the image processing instructions stored in the memory 104, and optionally with other components described herein, cause the apparatus 100 to perform an exterior orientation of the image (I) comprising the plurality of edge features corresponding to the as-designed model (BIM) to generate an oriented image. In the embodiments of the image being the planar image (Ip), the exterior orientation of the planar image (Ip) can be performed from line to line correspondences by any suitable methods known in the art.

[0044] In the embodiments of the image (I) being the spherical image (Is), the exterior orientation of the spherical image (Is) or panoramic image is solved using determination of arc (e.g., arcs of Is) to line (e.g., BIM lines) correspondences. In an example embodiment, a modified RANSAC algorithm is used to determine the correspondences between the 3-D lines of the BIM and arcs of circles on the image sphere corresponding to the spherical image (Is). In this example embodiment, several combinations of the correspondences are tested between the 3-D lines of the BIM and arcs of circles on the image sphere corresponding to the spherical image (Is), and an optimum correspondence from the several correspondences is determined and is used to compute the exterior orientation.

[0045] In an example embodiment, for each RANSAC iteration, two or more pairs of BIM lines (i.e., S ≥ 2 pairs of BIM lines) are first randomly selected with a requirement that the two lines of each pair share the same vertex of the BIM as one of the end points of the lines. The pairs of lines constitute a set of K distinct lines, where 3 ≤ K ≤ 2S. Further, in an example embodiment, unit normals Nk, k = 1, ..., K, of their projecting planes are computed. Furthermore, differences between the directions of the normals, parameterized by two direction angles, are also computed.

[0046] In an example embodiment, an arc with a projecting plane normal n1 is then randomly selected from the image sphere. Thereafter, arcs for k = 2, ..., K are also randomly and sequentially chosen from subsets of all the arcs. Each subset consists of arcs the direction angles of whose projecting plane normals differ by less than a threshold Tω from the direction angles of a normal approximated from the direction angles of the normals of the previously selected arcs and the 3-D lines of the BIM. This approximated normal is obtained from the condition that the differences between the projecting plane normals of the arcs are similar to the differences between the projecting plane normals of the 3-D lines of the BIM (also referred to as '3-D BIM lines'). More precisely, the direction angle ωk of the approximated normal is given by the following expression (2):

ωk = ∑i=1..k−1 (ωi + Ωk − Ωi) / (k − 1) (2)

where ωi are the direction angles of the projecting plane normals of the previously selected arcs and Ωk are the corresponding direction angles of the projecting plane normals of the 3-D BIM lines

for k = 2,...,K, and same for the other direction angle.
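A minimal sketch of the prediction step, assuming expression (2) takes the averaged form ωk = ∑i=1..k−1 (ωi + Ωk − Ωi)/(k − 1); the function name and array layout are ours:

```python
import numpy as np

def approx_normal_angle(omega, Omega, k):
    """Predict the direction angle of the k-th arc's normal (1-based k):
    average, over the k-1 arcs already chosen, of the observed angle
    omega_i plus the BIM-side angle difference Omega_k - Omega_i."""
    i = np.arange(k - 1)
    return float(np.mean(omega[i] + Omega[k - 1] - Omega[i]))

# If every observed angle is the BIM angle shifted by a constant offset,
# the prediction reproduces the BIM angle plus that same offset.
Omega = np.array([0.2, 0.9, 1.4, 2.0])
offset = 0.35
omega = Omega + offset
pred = approx_normal_angle(omega, Omega, 4)
```

This is what makes the candidate subsets small: only arcs whose observed angles land within Tω of the prediction need to be tried.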

[0047] In an example embodiment, given the set of K arc to line correspondences, the rotation R of the spherical image is solved using the normals nk and the direction vectors Lk of the 3-D BIM lines by applying adaptive weighting to the technique set forth in Y. Liu, T.S. Huang, and O.D. Faugeras, "Determination of Camera Location from 2-D to 3-D Line and Point Correspondences," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 28-37, 1990. In an example embodiment, the merit function to be minimized with respect to the rotation matrix, parameterized by three angles, is as per the following expression (3):

f1 = ∑k=1..K wk (nk^T R Lk)^2 (3)

where wk are weights. In an example embodiment, the minimization of the merit function (f1) is solved using the Levenberg-Marquardt method, where an initial estimate for the rotation can be provided manually.
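The role of the merit function can be illustrated with a one-parameter stand-in for the full three-angle Levenberg-Marquardt solve (a coarse 1-D grid search over a rotation about the z-axis, for illustration only; all names and the synthetic data are ours):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def f1(R, normals, L, w):
    """Merit of expression (3): each projecting-plane normal should be
    perpendicular to the rotated BIM line direction, n_k^T R L_k = 0."""
    resid = np.einsum('ki,ij,kj->k', normals, R, L)
    return float(np.sum(w * resid ** 2))

# Synthetic correspondences consistent with a true rotation of 0.3 rad
a_true = 0.3
L = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
Lr = (rot_z(a_true) @ L.T).T
normals = np.cross(Lr, np.array([0.0, 0.0, 1.0]))   # perpendicular to Lr
normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
w = np.ones(len(L))

grid = np.linspace(0.0, 1.5, 301)
a_best = grid[np.argmin([f1(rot_z(a), normals, L, w) for a in grid])]
```

The merit vanishes at the true rotation; a real solver replaces the grid with damped Gauss-Newton steps over all three angles.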

[0048] In an embodiment, in the expression (3), the weights (wk) are equal to one at correspondences where the mean distance from the end points of the arc to the projecting plane of the transformed BIM line is below an adaptive threshold. Herein, it is noted that the transformation of the BIM line means transformation to the camera coordinate system according to the current estimate for the rotation and translation. In an example embodiment, the adaptive threshold is chosen based on the mean and standard deviation of the arc to projecting plane distances at each of the K correspondences. The adaptive threshold gets tighter as the iteration proceeds, similarly as set forth in Z. Zhang, "Iterative Point Matching for Registration of Free-Form Curves and Surfaces," International Journal of Computer Vision, vol. 13, no. 2, pp. 119-152, 1994, for registration of curves and surfaces.

[0049] Further, in the expression (3), the weights (wk) are equal to zero at correspondences where the arc to projecting plane distances are larger than or equal to the adaptive threshold. It is noted that such a weighting scheme removes false correspondences and uses only reliable ones. In some example embodiments, however, the weighting can be ignored when the value of K is small, as it has more influence when K is large.

[0050] In an example embodiment, for the estimation of the translation t, two or more intersections of 3-D BIM lines and intersections of circles of corresponding arcs on the image sphere corresponding to the spherical image (Is) are taken. In an example representation, let Vkl denote the vertex where the BIM lines k and l of one of the selected S pairs of 3-D BIM lines intersect. The corresponding arcs in the image sphere need not intersect, but the intersections of the circles of which the arcs are part are used. The circles intersect at two points given by ±c(nk × nl)/‖nk × nl‖. These two vectors are rotated by R^−1 and the one is selected for which the direction of the rotated vector is closer to Vkl. Let this intersection point be denoted by vkl (before rotation). The image point and the object point lie on the same image ray. Accordingly, in an example embodiment, the translation t may be represented as per the following expression (4):

μkl R^−1 vkl = Vkl − t (4)

where the real number μkl and the three components of t are unknown. It is noted that in the scenario of S intersection correspondences, the number of equations is 3S with 3 + S unknowns, and hence the translation is solvable for S ≥ 2.
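Assuming expression (4) takes the ray form μkl·R^−1·vkl = Vkl − t, the translation can be recovered by stacking the 3S linear equations (a numpy sketch on synthetic data; all variable names are ours):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(0.4)
t_true = np.array([1.0, -2.0, 0.5])
c = 1.0
V = np.array([[5.0, 5.0, 5.0], [8.0, 2.0, 6.0], [3.0, 7.0, 9.0]])  # S = 3 vertices

# Simulated intersection points on the image sphere: v = c * rc / ||rc||
rc = (R @ (V - t_true).T).T
v = c * rc / np.linalg.norm(rc, axis=1, keepdims=True)

# Stack t + mu_k * (R^-1 v_k) = V_k as A x = b with x = (t, mu_1..mu_S)
S = len(V)
A = np.zeros((3 * S, 3 + S))
b = V.reshape(-1)
for k in range(S):
    A[3 * k:3 * k + 3, 0:3] = np.eye(3)        # coefficients of t
    A[3 * k:3 * k + 3, 3 + k] = R.T @ v[k]     # back-rotated image ray
x, *_ = np.linalg.lstsq(A, b, rcond=None)
t_est, mu = x[:3], x[3:]
```

With S = 3 the system has 9 equations in 6 unknowns, so least squares also absorbs measurement noise in a real setting.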

[0051] Using the estimated rotation and translation, the 3-D lines of the BIM are transformed from the object to the camera coordinate system. The projecting plane normals Nck in the camera coordinates and the end points of the respective arcs on the image sphere are computed. In an example embodiment, a merit function evaluating the quality of the selected set of arc to BIM line correspondences is given by the following expression (5):

f2 = ∑k=1..K |nk^T Nck| / K (5)

provided the arcs related to nk and Nck satisfy the overlap criterion strictly with T0 = 0 for all k = 1, ..., K (with the new projecting plane being the same as the projecting plane defined by Nck).

[0052] Additionally, in an example embodiment, the quality of the estimated orientation is also evaluated by the number of other arc to line correspondences that appear besides those K correspondences which are used for the orientation estimation. In this example embodiment, the other 3-D lines of the BIM are transformed to the camera coordinate system and the projecting plane normals Ncp and respective arcs Ap on the image sphere are computed. Further, for each arc Ap, those arcs Du of the image sphere, the projecting plane normals nu of which are close to Ncp and which satisfy the overlap criterion with Ap strictly with T0 = 0, are searched. Herein, nu and Ncp are close to each other if the angle between them is less than a threshold Tγ or larger than π − Tγ. In an example embodiment, the closest arc Dq may be chosen as a candidate arc to be considered to correspond to Ap and the respective 3-D line of the BIM; the closest arc Dq should satisfy the condition as per the following expression (6):

q = arg maxu |nu^T Ncp| (6)
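The merit of expression (5) is a mean absolute cosine between corresponding normals (a sketch; inputs are the stacked arc normals and transformed BIM-line normals, and the function name is ours):

```python
import numpy as np

def f2(n, Nc):
    """Mean |n_k^T Nc_k| over K correspondences; values near 1 indicate a
    consistent set, and the sign of each normal is irrelevant."""
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    Nc = Nc / np.linalg.norm(Nc, axis=1, keepdims=True)
    return float(np.mean(np.abs(np.sum(n * Nc, axis=1))))

pairs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

The absolute value makes the score invariant to which of the two unit normals each projecting plane happens to carry.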

[0053] In some scenarios, the closest arc according to expression (6) may not always be the best solution, especially when there are several arcs close to each other or the arcs represent different parts of the same edge line. Consequently, in an example embodiment, it is determined if there are other arcs Dq2 close to Dq (e.g., the mean distances of the end points of Dq from the projecting planes of Dq2 are less than a threshold Tκ), which have been previously matched with some other arcs Ap2 computed from the BIM such that the projecting plane normals of Ap2 are close to the one of Ap (i.e., the angle between the normals is less than a threshold Tμ or larger than π − Tμ), and Ap2 satisfy the overlap criterion with Ap strictly with T0 = 0. In an example embodiment, if any of Ap2 corresponds to one of the K BIM lines used for orientation estimation, then Dq is deleted from consideration. Otherwise, the arc Ap3 of the arcs Ap2 which is closest to Dq (i.e., the absolute value of the dot product between the normals of the projecting planes is maximized) is searched. Consequently, it is studied which one is closer, Dq to Ap, or Dq3 to Ap3. Herein, Dq3 is the arc of the image sphere corresponding to the BIM line from which Ap3 has been computed. Further, the closeness between Dq and Ap, and between Dq3 and Ap3, is measured by the mean distance of the end points of Dq and Dq3 from the projecting planes of Ap and Ap3, respectively. The closer of these two alternatives is kept for consideration and the other is deleted (or ignored). If Dq is deleted from consideration, then the second closest arc according to expression (6) is considered and the process is iterated until a compatible arc can be found. Further, if Dq3 is deleted, then the arc Ap3 is processed again to find another arc of the image sphere which yields a better overall solution. These considerations thus take into account the neighboring arc to line correspondences when establishing a new one.
As a result, the algorithm is able to find a correct arc instead of mixing with another arc close to the correct one. It would be appreciated by those skilled in the art that the result of the algorithm does not depend on the order of BIM lines in which they are processed.

[0054] In an example embodiment, the best orientation estimate is considered to be obtained when the total number of arc to line correspondences (including the other ones as per the expression (6) and as per the further considerations according to paragraph [0053], i.e., the paragraph following expression (6)) is maximized and, within the maximum number of correspondences, the orientation which maximizes the merit function f2 in the expression (5) is selected.

[0055] The result of the exterior orientation of the spherical panoramic image (Is) is illustrated in an example representation 540 of FIG. 5B and in an example representation 625 of FIG. 6B. Referring particularly to the example representation 540 of FIG. 5B, after the exterior orientation of the spherical image (Is) is performed, the BIM (see 510) projected onto the spherical image (Is) fits mostly well with the edge curves (see 505) except along the left border of the lacuna 520 (see region 522) and along the side of the lacuna 525 near the conduit elements 530 (see region 528).

[0056] Some example embodiments of performing the operations 220 and 225 of the method 200 are explained in the following description. For example, the processor 102 is configured to, with the image processing instructions stored in the memory 104, and optionally with other components described herein, cause the apparatus 100 to perform the operations 220 and 225. For instance, the apparatus 100 is caused to deform the as-designed model, such that, upon projecting the deformed as-designed model onto the oriented image, the plurality of lines of the as-designed model fit substantially with the plurality of edge features in the image, and the apparatus 100 is further caused to determine lower bounds for deviations of the as-built structure from the as-designed model, based on the image (I). In an example embodiment, the processor 102 is configured to determine the deviations corresponding to the plurality of vertices of the as-designed model based on the deformation of the as-designed model.

[0057] In an example representation, an edge line extracted from a planar image (Ip) and an edge curve or an arc of an image sphere extracted from a spherical image (Is) can be jointly denoted as a 2-D feature of the image (I). In an example, it may be assumed that a total of K 2-D feature to 3-D BIM line correspondences which yield the optimal exterior orientation, and a total of K' other feature to line correspondences using the optimal orientation, are established (e.g., determined during the operation 215). It should be noted that these K' other correspondences are obtained for the spherical image (Is) as described above in paragraphs [0052] and [0053] (i.e., the paragraph including expression (6) and the following paragraph), and for the planar image (Ip), the K' other correspondences can be obtained using the same procedure by replacing the term 'arc' by 'line' and the term 'image sphere' by 'image plane'. The positions of the vertices PBh, where h = 1, ..., H, of the BIM (in BIM coordinates) are adjusted so that the K + K' BIM lines fit optimally with the corresponding edge features of the image (I).

[0058] In an example embodiment, for each vertex PBh, let Uh be a set of indices of features that correspond to 3-D BIM lines that share PBh as one of the end points of the BIM line. In other words, Uh groups the features corresponding to BIM lines that depart from the vertex PBh. In an example embodiment, after transforming to camera coordinates, the vector to each adjusted vertex PBh + ΔPBh should be perpendicular to the unit normals of the projecting planes of all features belonging to Uh. In an example embodiment, a merit function to be minimized, with respect to ΔPBh, h = 1, ..., H, can be as per the following expression (7):

f3 = ∑h=1..H ∑k∈Uh wk (nk^T Pch)^2 (7)

In the above expression (7), Pch are the vertices of the BIM in the camera coordinates and wk are weights, and Pch can be given as per the following expression (8):

Pch = R(PBh + ΔPBh − t) (8).

Each weight (wk) is proportional to the length of overlap between the image feature and the feature computed from the corresponding BIM line when transformed to the same circle on the image sphere, similarly as in the overlap criterion in the case of the spherical image (Is), and correspondingly for the planar image (Ip).

[0059] The set Uh typically contains zero to three features. In an example embodiment, if there are two or more features in Uh, then all three coordinates of ΔPBh are regarded as unknown. In an example embodiment, if there is only one feature in the set Uh, then the position of PBh is kept fixed in the direction of the 3-D BIM line corresponding to the feature in question, and changes are allowed only perpendicular to the BIM line direction. Due to the selection of the BIM coordinate system, each line of the BIM is usually parallel to one of the coordinate axes. It should be noted that fixing the movement in the direction of the BIM line represents fixing one coordinate of PBh. In this example embodiment, for BIM lines that are non-parallel to any of the coordinate axes, constraint equations Lk^T ΔPBh = 0, where Lk is the direction vector of the line, are introduced. In an example embodiment, if the set Uh is empty, all the coordinates of the vertex PBh are kept fixed.

[0060] Since all the projecting planes of the features in the set Uh include the camera projection center and the vertex, the solution for the vertex position is not unique, but all points along the ray from the camera to the vertex are valid solutions when there are two or more features in the set Uh. The same holds also if there is only one feature in the set Uh, except that the degeneracy is along the intersection of the projecting plane and the plane perpendicular to the BIM line. Consequently, an additional constraint (PBh − t)^T ΔPBh = 0 is introduced, which forces the correction to the vertex position to be perpendicular to the ray from the camera to the original vertex position. This latter constraint ensures a unique solution that has approximately a minimum norm (e.g., the point which is closest to PBh on the image ray from the camera to the adjusted vertex would be the exact minimum norm solution).
In an example embodiment, the vertex position is thus corrected by a vector of minimum length so that the distance between the adjusted vertex position and the original vertex position gives a lower bound for the magnitude of deformation.
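For the two-feature case, the constrained correction described above admits a closed form, sketched below (our construction under the stated constraints: the adjusted vertex lies on the intersection line of the two projecting planes, which both contain the camera origin, and the correction is perpendicular to the ray to the original vertex; all names are ours):

```python
import numpy as np

def vertex_correction(Pc, n1, n2):
    """Minimum-length correction for a vertex seen by two features.
    The adjusted vertex lies on the line s*u with u = n1 x n2; the
    constraint dP . Pc = 0 fixes s in closed form."""
    u = np.cross(n1, n2)
    u = u / np.linalg.norm(u)
    s = np.dot(Pc, Pc) / np.dot(u, Pc)   # from (s*u - Pc) . Pc = 0
    dP = s * u - Pc
    return dP, float(np.linalg.norm(dP))

# Synthetic case: the as-built vertex direction differs from the model vertex
P_built = np.array([2.0, 1.0, 4.0])                 # lies on both projecting planes
n1 = np.cross(P_built, np.array([1.0, 0.0, 0.0]))
n1 = n1 / np.linalg.norm(n1)
n2 = np.cross(P_built, np.array([0.0, 1.0, 0.0]))
n2 = n2 / np.linalg.norm(n2)
Pc = P_built + np.array([0.3, -0.2, 0.1])           # model vertex, camera coords
dP, bound = vertex_correction(Pc, n1, n2)
```

The length of dP is the per-vertex lower bound: any true deviation consistent with the image must move the vertex at least this far.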

[0061] In an example embodiment, the apparatus 100 is caused to solve the minimization of the merit function (f3) in expressions (7) and (8) using the Levenberg-Marquardt algorithm with Lagrange multipliers and zero values as initial estimates for ΔPBh, h = 1, ..., H. In this example embodiment, the other feature to line correspondences for which one or both end points of the 3-D line have moved more than a threshold Tp (i.e., ‖ΔPBh‖ ≥ Tp) are removed from consideration and the corresponding deformations are set to zero. Thereafter, the Levenberg-Marquardt algorithm with Lagrange multipliers is applied again with zero values as initial estimates for a new set of unknown coordinates of ΔPBh determined based on the remaining feature to line correspondences. In an example embodiment, the two-step approach thus uses information from the 3-D object space to eliminate false feature to line correspondences. It is noted that when the scene contains 3-D lines at various depths, it is difficult to set an appropriate value for Tγ, which describes the closeness of the normals of the projecting planes of the image feature and the 3-D line. However, the threshold Tp gives a depth-invariant criterion, which can be set based on knowledge about how much the vertices are expected to be misplaced at most. In an example embodiment, the solution with the reduced number of unknowns gives the deformation of the BIM, where the magnitudes of the deformations represent the lower bounds for deviations against the as-designed BIM at each vertex of the BIM.

[0062] The result of the BIM adjustment for the spherical panoramic image (Is) is illustrated in an example representation 560 of FIG. 5C and in an example representation 650 of FIG. 6C. Referring particularly to FIG. 5C, after the exterior orientation process, the BIM is deformed so as to substantially fit the BIM lines with the corresponding edge curves of the lacuna 520. For example, the region 522 shown in FIG. 5B is adjusted and is hence not visible in FIG. 5C. In this example representation 560, after adjusting the BIM, all the projected BIM lines of established arc to line correspondences match perfectly with the edge curves of the spherical image (Is). Further, FIG. 7 shows the deformed model in the BIM coordinates (X_B, Y_B, Z_B) with shading differences illustrating differences against the as-designed BIM.

[0063] It should be noted that, while describing various example embodiments, particularly for the spherical image (Is), one or more algorithms include several thresholds which determine when two quantities are considered to be close to each other. Determination of appropriate values for these parameters and thresholds depends on the scene content, such as the straightness of edge lines in the object space (M, T_e, T_a), the resolution one wants to achieve by not merging adjacent edge curves (T_β, T_θ), the closeness of the initial orientation to the true one (T_ω), the resolution needed to separate arcs close to each other (T_κ, T_μ), and the magnitude of BIM deformation that is expected (T_γ, T_p). Further, increasing the number K of feature to line correspondences may help to find the correct exterior orientation, although too large a K may incorrectly distribute possible deformations along some lines also to other lines which are actually correct. On the other hand, the disclosed adaptive weighting in rotation estimation is intended to cope with these cases.
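As a purely illustrative summary of the tuning surface described above, the thresholds could be grouped by the scene property they govern. The grouping, the spelled-out key names, and every numeric value below are assumptions for illustration only; none are values from the disclosure.

```python
# Hypothetical grouping of the thresholds enumerated in the text.
# All values are placeholders and must be tuned per scene.
thresholds = {
    "edge_straightness":    {"M": 10, "T_e": 1.5, "T_a": 0.05},
    "edge_curve_merging":   {"T_beta": 0.1, "T_theta": 0.2},
    "initial_orientation":  {"T_omega": 0.1},
    "arc_separation":       {"T_kappa": 0.05, "T_mu": 0.02},
    "expected_deformation": {"T_gamma": 0.02, "T_p": 0.3},
}
```

Keeping the parameters in one structure makes the depth-invariant T_p easy to adjust independently of the depth-sensitive T_gamma.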

[0064] Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is to provide methods for providing lower bounds for deviations of the as-built structure from the as-designed model. Various example embodiments are capable of working with a single image (e.g., a planar or a spherical image) and are yet able to derive 3-D deformation information. The lower bound obtained can be applied to determine deviations exceeding tolerances and requiring further inspection. Various example embodiments operate both on planar images and on spherical images stitched from one or several concentric sub-images. It is noted that spherical panoramic images contain features from all surroundings of the setup and may thus be more accurately oriented with the BIM. As the orientations of the spherical images are solved by applying the concept of edge-based methods, the computational complexity and overhead are reduced significantly.
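Because the method yields a lower bound, any vertex whose bound already exceeds the tolerance is guaranteed to deviate by more than the tolerance. The screening step could be sketched as follows, with hypothetical function and parameter names:

```python
import numpy as np

def flag_for_inspection(deformations, tolerance):
    """deformations: (N, 3) array of per-vertex correction vectors whose
    norms are lower bounds for the true deviations.  Returns the indices
    of vertices whose deviation provably exceeds the tolerance and which
    therefore require further inspection."""
    lower_bounds = np.linalg.norm(np.asarray(deformations), axis=1)
    return np.flatnonzero(lower_bounds > tolerance)
```

Vertices not flagged are not necessarily within tolerance; a lower bound below the tolerance is inconclusive.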

[0065] The present disclosure is described above with reference to block diagrams and flowchart illustrations of methods and devices embodying the present disclosure. It will be understood that various blocks of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, may be implemented by a set of computer program instructions. These sets of instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the set of instructions, when executed on the computer or other programmable data processing apparatus, creates a means for implementing the functions specified in the flowchart block or blocks. Other means for implementing the functions, including various combinations of hardware, firmware, and software as described herein, may also be employed.

[0066] Various embodiments described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on at least one memory, at least one processor, an apparatus, or a non-transitory computer program product. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a system described and depicted in FIG. 1. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

[0067] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present disclosure and its practical application, to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present disclosure.