

Title:
DETERMINING DEFORMATIONS OF QUAY WALLS USING A PHOTOGRAMMETRIC SYSTEM
Document Type and Number:
WIPO Patent Application WO/2022/173285
Kind Code:
A1
Abstract:
An apparatus, system and method for monitoring quay walls is disclosed. The system is arranged to record image data of a quay wall. The image data is recorded from a plurality of angles, providing a plurality of perspectives, creating multi-perspective image data. The multi-perspective image data is analysed to determine deformations and/or displacement of the quay wall to be monitored.

Inventors:
KODDE MARTINUS (NL)
TREFFERS NIELS (NL)
STAATS SJOERD (NL)
LODDER HENDRIK (NL)
Application Number:
PCT/NL2021/050789
Publication Date:
August 18, 2022
Filing Date:
December 24, 2021
Assignee:
INGB GEODELTA B V (NL)
International Classes:
G01C11/10; G01C11/30
Domestic Patent References:
WO2006074298A2, 2006-07-13
Foreign References:
US20200312019A1, 2020-10-01
US6711293B1, 2004-03-23
Other References:
ESMAEILI FARID ET AL: "Displacement Measurement of Soil Nail Walls using Close Range Photogrammetry", PROCEDIA ENGINEERING, ELSEVIER BV, NL, vol. 54, 18 April 2013 (2013-04-18), pages 516 - 524, XP028582906, ISSN: 1877-7058, DOI: 10.1016/J.PROENG.2013.03.047
Claims:
1. An apparatus for monitoring quay walls (1004), the apparatus comprising: a mounting device; an image recording device (1001, 7001) fixed to the mounting device, the image recording device having a housing (2001) and an objective lens (1003) attached to the housing, the objective lens comprising an aperture (2003), the image recording device comprising an image sensor (2002, 7002) arranged inside the housing and a processing unit (7003) for processing image data from the image sensor; a storage device (7004) connected to the image recording device for storing image data; and a computation device (7005), said computation device being arranged so as to allow communication with a memory (7008) storing a computer program comprising instructions and data, which computer program can be run by the computation device; wherein said apparatus is arranged to record image data of a quay wall to be monitored.

2. The apparatus according to claim 1, further comprising a sensing unit (7020), wherein the sensing unit comprises at least one of: an accelerometer or inertial measurement unit for measuring the attitude of the imaging device; and a GNSS receiver for measuring the position of the imaging device.

3. The apparatus according to claims 1 or 2 being adapted to determine deformation and/or displacement of the quay wall to be monitored.

4. A reference object (3013) for providing a reference position, the reference object comprising: an elongated member having a front and a rear; and a mounting bracket (3010) adjustably fixed to the rear of the elongated member.

5. The reference object according to claim 4, the reference object further comprising a crossmember affixed to the elongated member.

6. The reference object according to claim 4, whereby the reference object further comprises at least one reference marker (3003) and/or a retro-reflective device (3004) mounted to the front of the elongated member.

7. The reference object according to claim 5, whereby the reference object further comprises at least one reference marker (3003) and/or a retro-reflective device (3004) mounted to the front of the elongated member and/or cross-member and/or mounting bracket.

8. The reference object according to claims 4-7, wherein the reference object further comprises a GNSS-receiver for determining the position of the reference object.

9. A system for monitoring quay walls, the system comprising: a mounting platform (1002); and an apparatus according to claim 1 fixed to said mounting platform.

10. The system according to claim 9, wherein the mounting platform is one of a tripod, a waterborne surface vehicle, a land vehicle or an airborne vehicle.

11. The system of claim 9, the system further comprising one or more reference objects (3013) according to claims 4-8, whereby the one or more reference objects are positioned on or close to the quay wall (1004) to be monitored.

12. A method for monitoring quay walls, the method comprising the steps of: determining the coordinates of the markers as input for the adjustment (5003); recording image data from a quay wall to be monitored (5004); searching for corresponding common points in the overlapping images (5006); detecting the reference markers in the images (5007); computing approximate positions and attitudes of the images as well as the terrain coordinates of the corresponding common points (5008); computing image position, image attitude and terrain coordinates using a Bundle Block Adjustment (5009); statistically testing the observations (5010) and removing rejected observations (5013); searching for corresponding common points between images of different epochs (6006); generating constraints between corresponding common points between epochs (6007); performing a Bundle Block Adjustment of multiple epochs (6008); statistically testing the constraints (6009); rejecting or updating constraints to represent actual deformation (6010); and writing out statistically tested deformations (6014).

13. The method for monitoring quay walls according to claim 12, wherein image data is recorded from a plurality of positions, so as to create a plurality of perspectives of the quay wall to be monitored.

14. The method according to claim 13, wherein the plurality of perspectives comprises at least one obliquely angled field of view.

15. The method of claim 13, wherein the plurality of perspectives comprises a perpendicular field of view.

16. The method of claim 13, wherein the plurality of perspectives comprises an inverted perpendicular field of view.

17. The method of claims 12 to 16, wherein the method further comprises the step of detecting very similar images and removing them from the computation.

18. The method of claims 12 to 17, wherein the method further comprises guided feature matching, where known overlap between images is used to find corresponding common points.

19. The method of claims 12 to 18, wherein the method further comprises the step of computation of a dense point cloud from the images of one epoch.

20. The method of claims 12 to 19, wherein the method further comprises the steps of adjustment and statistical testing of the dense point cloud.

21. The method of claims 12 to 20, wherein the method further comprises 3D visualization of the point cloud in a web environment and/or the interactive visualization of the deformation and the testing results.

22. The method of claims 12 to 21, wherein multiple cameras are used with a fixed mounting relative to each other.

23. The method of claims 12 to 22, wherein multiple cameras are used so as to measure both sides of a canal at the same time.

24. The method of claims 12 to 23 wherein multiple cameras remain at a fixed location recording images at a set interval.

25. The method of claims 12 to 24 wherein some markers remain on the quay wall permanently as fixed deformation points.

26. The method of claims 12 to 25, wherein an error-correcting code is encoded in the marker.

27. The method of claims 12 to 26 wherein one or more expected deformations are estimated using a Finite Element Method analysis of the quay wall based on various geotechnical failure mechanisms.

28. A method for monitoring quay walls, the method comprising the steps of: determining the coordinates of the markers as input for the adjustment (5003); recording image data from a quay wall to be monitored (5004); searching for corresponding common points in the overlapping images (5006); selecting the reference markers in the images (5007); computing approximate positions and attitudes of the images as well as the terrain coordinates of the corresponding common points (5008); computing image position, image attitude and terrain coordinates using a Bundle Block Adjustment (5009); statistically testing the observations (5010) and removing rejected observations (5013); overlaying a grid of small planes on the computed points on the quay wall; estimating the plane parameters; repeating this process for each epoch; creating constraints between corresponding planes; statistically testing the constraints (6009); rejecting or updating constraints to represent actual deformation (6010); and writing out statistically tested deformation planes (6014).

29. The method of claim 28 wherein the planes are not defined in a regular grid, but the shape is derived from a crack pattern detected in the images.

30. The system according to claims 9 to 11, having means for providing real-time feedback about the quality of the images to the operator.

Description:
Determining deformations of quay walls using a photogrammetric system

Field of the invention

[0001] The present invention relates to a method of and apparatus for determining the three-dimensional deformations and/or displacements of a quay wall between two or more observation epochs using a photogrammetric system mounted on a waterborne or airborne vehicle, such as a boat or a drone, with high statistical quality analysis of the computed results and an interactive three-dimensional visualisation.

Background art

[0002] Systems for detecting defects in quay walls are known in the prior art. Patent document US2020/0312019 describes an automated benthic ecology system comprising a remotely operated vehicle upon which an environmental sensor package and photomosaicing technology are mounted, the remotely operated vehicle configured to operate in benthic habitats, the photomosaicing technology comprising a high-resolution still camera, a high-resolution video camera, and a stereoscopic camera, the environmental sensor package comprising a plurality of sensors. The Automated Benthic Ecology System (ABES) is a small, portable remotely operated vehicle (ROV) used to conduct photomosaicing surveys of: (1) biological communities inhabiting vertical structures such as piers and quay walls, (2) biological communities in areas of known UXO and buried munitions, (3) pier and quay wall integrity to investigate for cracks, leaks and other structural issues, and (4) compromised integrity of a ship's hull for planning purposes of the salvage operation as well as pre- and post-salvage surveys of biological impacts. The ABES obtains high-resolution imagery of the site, along with water quality information to provide a more complete ecological understanding of areas of interest that are inaccessible and/or areas that pose human health or safety access issues. Adding a stereoscopic payload and three-dimensional model generation capability has made the ABES capable of collapsing four surveys into one survey and providing a holistic view of the area of interest.

[0003] The system described in US2020/0312019 therefore relies on biological communities inhabiting quay walls in order to be able to detect defects in said quay walls. In particular, US2020/0312019 is concerned with determining cracks, leaks and structural issues of submerged quay walls by means of a tethered underwater remotely operated vehicle (ROV); see paragraph [0013] thereof.

[0004] The present invention differs from this known method in that the present invention is concerned with quay walls above the waterline.

Summary of the invention

[0005] Waterways, for example canals, ports, riverbanks and so forth, are bordered by quay walls. Said quay walls might deform under the influence of varying loads, from either changing water levels or loads on the quay wall itself, for example from a road close to the quay wall. Usually, though not always, waterways are bordered by quay walls on either side of the waterway.

[0006] According to the present invention, a method and system for determining the three-dimensional deformations of a quay wall as defined above is provided, in which one or more imaging devices are used to capture the entire length of the quay wall from a multitude of directions. The images are taken in such a way that every point on the quay wall is imaged from at least three significantly different directions. Images are captured in epochs with an interval defined by the expected movement of the quay wall. After every epoch said image data is input to a processing unit. Said processing unit will statistically test the presence of any movement. If a movement is statistically significant, the deformation is computed with a full statistical quality description. Methods for performing such tests in traditional land survey have previously been published in the papers "Testing Methods for Adjustment Models with Constraints" by H. Velsink and "A risk evaluation method for deformation monitoring systems" by S. Zaminpardaz et al.

[0007] One aspect of the present invention relates to measuring deformation of quay walls.

[0008] A further aspect of the present invention relates to measuring displacement of quay walls.

[0009] The method of the present invention involves the use of one or more reference points and/or markers in order to determine deformation and/or displacement of the quay walls.

[0010] In one embodiment, the images are taken from a waterborne vehicle. This feature allows for surveying quay walls of waterways which do not have quay walls on both sides of the waterway.

[0011] In another embodiment, the images are taken from a manned or unmanned airborne vehicle. This feature allows for surveying quay walls of waterways which do not have quay walls on both sides of the waterway. Moreover, this feature allows the quay walls to be assessed when the waterway is not accessible by water, for example under legal restrictions.

[0012] In another embodiment, the images are taken from a tripod on the opposing side of the quay wall of a waterway.

[0013] In another embodiment, the images are taken from a land-based vehicle on the opposing side of the quay wall of a waterway.

[0014] An automated process finds corresponding common points between images from all epochs. Corresponding common points are points which represent the exact same position on the quay wall in the image. Corresponding common points are detected by finding similar contrast and/or brightness structures in the images. Document US6711293B1 discloses such a process.

[0015] In order to improve the number of detected corresponding common points, the method can be extended by an iterative approach. In the first iteration an approximate relative position of all images from all epochs is computed. In the subsequent iterations the number of corresponding common points is increased by searching specifically in subregions with known overlap based on the results from the previous iteration.

[0016] The system can be extended with a multitude of reference objects in case stability of the quay wall cannot be guaranteed for any point along the quay wall. The reference objects contain unique markers. The three-dimensional coordinates of the markers in the applicable coordinate system are measured by means of land survey.

[0017] Alternatively, the reference objects can be extended with a minimum of three retro-reflective prisms per object, each having a known offset to all the markers on the reference object. The prisms allow for more accurate observation of the location. By using retro-reflective prisms, the centre of the point can be observed more accurately and from a greater distance.

[0018] Alternatively, the reference objects can be extended to use one or more Global Navigation Satellite Systems (GNSS). To do this, a GNSS antenna and receiver are mounted with a known offset between the phase centre of the antenna and all the markers on the reference object.

[0019] Alternatively, the reference objects can be extended with a corner reflector for Interferometric Synthetic-Aperture Radar (InSAR) deformation monitoring of the reference object.

[0020] In another embodiment of the system, unique markers are placed on the quay wall to improve the reliability of the detection of common corresponding points.

[0021] A virtual interactive three-dimensional environment is available which displays the computed three-dimensional points from one or more of the epochs. The deformations in each epoch can be visualized in this environment, as well as the computed accuracy, test values and minimum detectable bias. Visualization is achieved by displaying vectors or colours.

Brief description of the drawings

[0022] The present invention will be discussed in more detail below, with reference to the attached drawings, in which:

[0023] Fig. 1 depicts the general measurement setup along a quay wall.

[0024] Fig. 2 depicts the basic pinhole camera model that is used in all computations.

[0025] Fig. 3 depicts the reference object.

[0026] Fig. 4 depicts the reference marker design.

[0027] Fig. 5 depicts a flow chart of the method of the present invention, covering the first part.

[0028] Fig. 6 depicts a flow chart of the method of the present invention, covering the second part.

[0029] Fig. 7 depicts a system according to the present invention.

Best mode of the invention

[0030] Quay walls (1004) are typically constructed as vertical structures built from stones, rocks, bricks or concrete slabs. The vertical structure is placed on a horizontal foundation slab or bar, which in turn rests on a foundation of pillars made of wood or concrete or other suitable materials. One side of the vertical structure faces a waterway and the other side faces the land next to the waterway. The intent of the vertical structure is to retain the soil behind the land-facing side of the structure.

[0031] Failure of the quay wall is common and can be caused by improper design, excessive load on the topsoil surface behind the quay wall, tree roots or structural degradation due to rot, salinity or corrosion. Early detection of failures in a quay wall is possible by recognizing displacement or deformation of the outer surface of the quay wall.

[0032] Deformation of quay walls is best determined by a set of images taken with a camera (1001) equipped with a sensor (2002), typically a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor, with a fixed lens (1003), from a boat or another platform (1002) at a distance of approximately 1 to 50 meters, preferably 2 to 20 meters, more preferably 3 to 10 meters from the quay wall. The images should be taken in at least four different directions relative to the quay wall: first relative direction: essentially perpendicular; second relative direction: at an angle of about 45 degrees to the left; third relative direction: at an angle of about 45 degrees to the right; fourth relative direction: perpendicular but upside-down relative to the first image. Each perpendicular image should have an overlap of at least 60%, but preferably 80%, with the closest perpendicular image to the left and the closest perpendicular image to the right.

[0033] All images could be taken in either visible, near-infrared or ultraviolet light. Images taken in visible light can be recorded and processed either as colour images or as grey-scale images.

[0034] The focal distance (2005) of the lens should remain fixed throughout the collection process.

[0035] The focus of the camera should be fixed throughout the collection process of one set of images of a quay wall, so as to avoid changes in the image geometry, also known as focus pumping.

[0036] The shutter speed of the camera should be as fast as possible and such that any form of motion blur is avoided.

[0037] Prior to the survey, the operator of the camera should take care that the relation between the aperture (2003), shutter speed and sensor gain is set in such a way that sharp, non-blurred images are obtained with a high degree of contrast. Said settings should be updated on the camera if the light conditions change significantly. Alternatively, auto-exposure with a minimum shutter speed and a maximum aperture can be employed as well.

[0038] Prior to the survey a multitude of reference objects (3013) are placed along the quay wall (1004) at an interval suitable for the desired accuracy. Markers (3003, 3005, 3007, 3008) are fixed to said reference objects. The positions of the markers on the reference objects are measured by means of traditional land survey methods using for example a theodolite and/or spirit level instrument. These measurements are performed in a local survey network with at least three permanent reference points of which a minimum of 7 coordinates is fixed.

[0039] The reference markers can be measured directly or indirectly by using a set of retro-reflective prisms (3004, 3006, 3009) with known offsets to the reference markers. The use of retro-reflective prisms allows for a higher accuracy and measurement over longer distances.

[0040] The reference objects are placed on top of the quay wall (3012) using a mounting plate (3010) that is adjustable in height. By changing the height of the mounting plate, the bottom marker can be positioned right above the water level independent from the construction height of the quay wall. The reference object is stabilized with a counterweight (3011).

[0041] The recorded set of images is then analysed by a central processing unit.

[0042] Within the overlaps of all images, potentially corresponding common points are searched using a process as described in document US6711293B1. Corresponding common points are natural points, i.e. points that are not signalized in the environment with markers. This results in a multitude of corresponding common points between all images. Due to the nature of this process, up to 80% of these potential corresponding common points could be wrongly identified and therefore constitute outliers.

[0043] The reference markers are used to signal a certain recognizable location in the environment. The reference markers are designed to facilitate automatic detection and identification. The target design consists of a white circle (4001) with a concentric ring (4002) on a black base. Around this there is another concentric ring in which a 10-bit target identification is coded (4004-4013). The central circle is slightly oversized and has a small black dot in the middle to facilitate manual measurement. The rings are sized to allow for a realistic amount of defocus without losing the contrast to a significant extent.

[0044] For full automation of the procedure the targets present in the images have to be recognized automatically. The recognition results in approximate positions of the targets in the images. The first step in this procedure is the estimation of the background. Objects consist of excess density in an area of the size of the Point Spread Function (PSF) or slightly larger. The PSF describes the response of an imaging system to a point source or point object and is used to take the correct amount of blurring of point objects in the image into account for further processing. The algorithm detects sets of adjacent pixels with a pre-set density difference above the background value.

[0045] To distinguish targets amongst all objects detected, a set of parameters is computed for every object. The parameters are used for a comparison with the characteristics of a target image. The main characteristics used are the following: 1. within limits set by the size of the target images, two objects should be found at the same location, being the images of the circle and the ring of the target; 2. the size of these two objects should have a fixed ratio within bounds set empirically; 3. the absolute size of the two objects should be within limits that depend on the scale of the photograph, which in turn is to be adapted to the physical size of the targets photographed.
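By way of illustration only, the candidate-object detection of paragraph [0044], finding sets of adjacent pixels whose density exceeds the background estimate by a pre-set difference, can be sketched as a simple connected-component search. All names, thresholds and the 4-connectivity choice below are illustrative assumptions, not part of the claimed invention:

```python
import numpy as np
from collections import deque

def detect_blobs(img, background, min_diff, min_size=1):
    """Find 4-connected sets of pixels whose density exceeds the
    background estimate by at least min_diff. Illustrative sketch."""
    mask = (img - background) >= min_diff
    seen = np.zeros(mask.shape, dtype=bool)
    blobs = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q, blob = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_size:
                    blobs.append(blob)
    return blobs

# Two bright spots on a dark background:
img = np.zeros((6, 6))
img[1, 1] = img[1, 2] = 1.0
img[4, 4] = 1.0
blobs = detect_blobs(img, background=0.0, min_diff=0.5)
assert len(blobs) == 2 and sorted(len(b) for b in blobs) == [1, 2]
```

A production detector would additionally apply the size and ratio criteria of paragraph [0045] to each blob.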

[0046] The search method described in the previous paragraph can be seen as the first step in the position estimation process. The final improvement of the position estimate is made with the "least squares template matching" method. This matching method requires start positions with a precision of about 2 pixels in order to assure convergence of the iterative procedure. Although the search procedure is expected to fulfil this requirement, an algorithm for start position improvement has been implemented. This algorithm is based on the principle of autocorrelation and results in the symmetry point of the digital image. The idea is to find the symmetry point by finding the maximum correlation of the image with the image mirrored with respect to the approximate symmetry point. This is done through minimizing the two-dimensional function C(x0, y0):

C(x0, y0) = Σx Σy [ g(x, y) − g(2·x0 − x, 2·y0 − y) ]²

with C(x0, y0) the correlation function, g(x, y) the density at (x, y) (the image) and (x0, y0) the position of the symmetry point.
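The symmetry-point search might be sketched as follows. The sketch assumes a sum-of-squared-differences cost that is minimal at the symmetry point (equivalent to maximising the mirrored-image correlation), integer-pixel mirroring and no interpolation; all names are illustrative:

```python
import numpy as np

def symmetry_cost(img, x0, y0):
    """Mean squared difference between the image and its point mirror
    about (x0, y0); minimal at the symmetry point. Illustrative only:
    integer-pixel mirror, no subpixel interpolation."""
    h, w = img.shape
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            xm, ym = 2 * x0 - x, 2 * y0 - y  # mirrored coordinates
            if 0 <= xm < w and 0 <= ym < h:
                total += (img[y, x] - img[ym, xm]) ** 2
                count += 1
    return total / count

# A synthetic rotationally symmetric blob centred at (8, 8):
yy, xx = np.mgrid[0:17, 0:17]
img = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 10.0)

# The cost at the true symmetry point is lower than at an offset point.
assert symmetry_cost(img, 8, 8) < symmetry_cost(img, 6, 8)
```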

[0047] Without interpolation, the correlation function can be evaluated on a grid with half-pixel intervals. To obtain subpixel accuracy, a two-dimensional second-order polynomial is fitted to a matrix of 3 by 3 datapoints of the correlation function. In this way the correlation function is approximated in an area the size of a pixel. The position of the minimum of the function gives the best estimate of the position of the symmetry point. The value of the minimum is an indication for the density noise in combination with the symmetry, or lack thereof, present in the image.
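The subpixel refinement, fitting a two-dimensional second-order polynomial to a 3 by 3 patch of correlation values and taking its stationary point, can be sketched as follows (a minimal sketch; function name and grid convention are illustrative):

```python
import numpy as np

def subpixel_minimum(patch):
    """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to a 3x3 patch of
    correlation values (x, y in {-1, 0, 1}) and return the stationary
    point, i.e. the subpixel minimum relative to the centre sample."""
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    x, y = xs.ravel(), ys.ravel()
    A = np.column_stack([np.ones(9), x, y, x ** 2, x * y, y ** 2])
    a, b, c, d, e, f = np.linalg.lstsq(A, patch.ravel(), rcond=None)[0]
    # Gradient = 0  ->  [[2d, e], [e, 2f]] [x, y]^T = [-b, -c]^T
    return np.linalg.solve(np.array([[2 * d, e], [e, 2 * f]]), [-b, -c])

# A paraboloid with its minimum at (0.3, -0.2), sampled on the 3x3 grid:
gx, gy = np.meshgrid([-1, 0, 1], [-1, 0, 1])
patch = (gx - 0.3) ** 2 + (gy + 0.2) ** 2
x0, y0 = subpixel_minimum(patch)
assert abs(x0 - 0.3) < 1e-9 and abs(y0 + 0.2) < 1e-9
```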

[0048] To assure convergence of the matching procedure, apart from a relatively accurate position, the scale of the target should be approximated. An indication for the scale is obtained through the autocorrelation procedure described above: the size of the so-called correlation window, a circular area for which the autocorrelation function is evaluated, is chosen in such a way that a symmetry criterion and a contrast criterion are met. The measure for symmetry is defined by the minimum of the correlation function relative to the assumed image noise. The contrast criterion ensures a minimum size of the correlation window. Starting from a maximum-size correlation window, depending on photo scale and size of the targets used, the size is decreased until both criteria are met. In this way the correlation window is fitted to the symmetric part of the target.

[0049] For the final step of locating the marker, template matching is applied. The template is an artificial digital image based on the target design. It contains only the circle and the ring of a target. The template image is smoothed with a 3 by 3 averaging filter to simulate convolution with the PSF. The template is designed for the image of a slightly defocused target with average photo scale. The eight-parameter model for the least squares matching can be written as follows:

g(x, y) = dg · t(A(x − x0, y − y0)) + g0

with g(x, y) the density at (x, y) (the image), t(x, y) the template, dg and g0 the radiometric scale and offset, A the affine transformation as a 2 × 2 matrix and (x0, y0) the position of the template.
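The eight-parameter model can be illustrated by evaluating it on a pixel grid. The sketch below uses nearest-neighbour lookup into the template and a top-left template origin, both simplifying assumptions; a real matcher would interpolate and estimate the parameters iteratively by least squares:

```python
import numpy as np

def warp_model(template, params, shape):
    """Evaluate g(x, y) = dg * t(A (x - x0, y - y0)) + g0 on a pixel
    grid, with nearest-neighbour lookup. Illustrative sketch only."""
    dg, g0, a11, a12, a21, a22, x0, y0 = params
    h, w = shape
    th, tw = template.shape
    g = np.full(shape, g0, dtype=float)
    for y in range(h):
        for x in range(w):
            u = a11 * (x - x0) + a12 * (y - y0)   # template column
            v = a21 * (x - x0) + a22 * (y - y0)   # template row
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < tw and 0 <= vi < th:
                g[y, x] = dg * template[vi, ui] + g0
    return g

# Identity affine transform with a pure shift reproduces the template:
t = np.arange(9.0).reshape(3, 3)
g = warp_model(t, (1.0, 0.0, 1, 0, 0, 1, 2, 2), (7, 7))
assert np.allclose(g[2:5, 2:5], t)
```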

[0050] An affine transformation is chosen to model the geometric transformation because this is a good local approximation of the projective transformation from object to photo. This holds for a target on a flat surface. For the radiometric part of the matching model, a linear model is chosen.

[0051] To automate the identification of a target, each target has, next to its number printed in the upper right corner, a circular binary code containing the same number. The binary code consists of 10 bits, allowing for 1024 different numbers.

[0052] Reading the circular binary code consists of three steps: 1. sampling the code; 2. determining the start point of the code; 3. reading the code. Using the previously solved affine deformation of the target image, 200 or more samples of the binary code are taken from the image using bilinear interpolation for resampling. The sampling is done at regular angular distances. In the case of a total number of 200 samples this equates to 10 samples per bit along the centre of the bar code ring. The location of the least significant bit is determined by correlating the sampled code with a step function over half the circle. This is to detect the reference half of the circle (4003, 4014). The other half of the circle contains the 10-bit code (4004-4013). Reading this code is done in a straightforward manner by comparing the densities in the bit centre with a threshold density level. This threshold depends on the densities found in the binary code.
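The three decoding steps might be sketched on a one-dimensional ring of samples. The toy example below assumes samples normalized to 0/1 and bits adjacent to the reference half equal to zero, so that the step-function correlation has a unique maximum; a production decoder would be more robust than this sketch:

```python
import numpy as np

def decode_ring(samples, n_bits=10):
    """Decode a circularly sampled code ring: one half is a solid
    reference half, the other half carries the bits. Illustrative."""
    n = len(samples)
    half = n // 2
    # Correlate with a step function over half the circle to find the
    # rotation at which the reference half lines up.
    step = np.r_[np.ones(half), np.zeros(n - half)]
    scores = [np.dot(np.roll(samples, -s), step) for s in range(n)]
    start = int(np.argmax(scores))
    aligned = np.roll(samples, -start)
    code_half = aligned[half:]               # the half carrying the bits
    per_bit = len(code_half) // n_bits
    thresh = samples.mean()                  # threshold from the densities
    return [int(code_half[i * per_bit + per_bit // 2] > thresh)
            for i in range(n_bits)]

# Synthetic ring: 100 reference samples, then 10 bits at 10 samples per
# bit, rotated by an arbitrary amount.
true_bits = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
ring = np.r_[np.ones(100), np.repeat(true_bits, 10).astype(float)]
ring = np.roll(ring, 37)
assert decode_ring(ring) == true_bits
```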

[0053] While the images observed from the camera sensor (2002) correspond to a negative plane, all images are converted to a positive plane (2004). All image coordinates of markers and corresponding common points are stored with respect to this virtual positive plane.

[0054] Based on the computed corresponding common points as well as the detected reference markers, approximate values should be computed for the position and orientation of every image as well as the three-dimensional coordinates of the points which are represented in the terrain by the corresponding points and the reference markers.

[0055] The image pairs are ranked on their usability as a start pair for further computation. A start pair has at least three corresponding common points with at least seven known three-dimensional coordinates. The best start pair has many corresponding common points between the two images. The coordinates of the three corresponding common points to be used as a start can be defined in a local system and do not have to conform to a certain scale or orientation. Each combination of linearly independent coordinates will work. Using the initial three corresponding common points for at least two images, the exterior orientation is computed to continue. Once two or more images are oriented, forward intersection is used to calculate the three-dimensional terrain coordinates of all corresponding common points in the known images. This results in an increased number of known terrain points, to be used as input for further resections.
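The forward intersection mentioned above can be sketched as a least-squares intersection of observation rays, i.e. the point minimising the sum of squared perpendicular distances to the rays. This is a generic construction, not necessarily the claimed implementation:

```python
import numpy as np

def forward_intersection(origins, directions):
    """Least-squares intersection of several 3-D rays: the point that
    minimises the sum of squared perpendicular distances to the rays.
    Illustrative sketch of the forward-intersection step."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two rays from different camera positions meeting at (1, 1, 0):
p = forward_intersection(
    origins=[(0, 0, 0), (2, 0, 0)],
    directions=[(1, 1, 0), (-1, 1, 0)])
assert np.allclose(p, (1, 1, 0))
```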

[0056] For the resection a closed form is used which does not use iterations and circumvents the need for initial approximations. In the resection, six exterior parameters are calculated, namely the three-dimensional position and three-dimensional rotation using at least three corresponding common points with known three-dimensional terrain coordinates and image coordinates in the image to be oriented. The closed form resection is performed in three or more steps.

[0057] The first step is computing the distances (radii) between the image position and the terrain points of three corresponding common points. This involves solving a fourth-degree polynomial which is constructed using the distances between the terrain points and the angles between image and terrain points. The fourth-degree polynomial reads:

N1 · n^4 + N2 · n^3 + N3 · n^2 + N4 · n + N5 = 0

This equation needs to be solved for n. The values for N1, N2, N3, N4 and N5 can be found through substitution as described in the publication "A General Solution of a Closed-Form Space Resection" by Zhuoqiao Zeng and Xibo Wang.

[0058] Solving the fourth-degree equation results in one to four real answers, while complex results are discarded. For every real answer a set of radii can be calculated.
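Solving the quartic and discarding the complex roots can be sketched with a standard polynomial root finder (illustrative only; coefficient names follow the equation above):

```python
import numpy as np

def real_quartic_roots(coeffs, tol=1e-6):
    """Solve N1*n^4 + N2*n^3 + N3*n^2 + N4*n + N5 = 0 (coeffs given
    highest degree first) and keep only the real solutions, as the
    resection step requires. Illustrative sketch."""
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# (n^2 + 1)(n - 2)(n - 3) = n^4 - 5n^3 + 7n^2 - 5n + 6 has two real
# roots (2 and 3) and a complex pair (+-i), which is discarded:
assert np.allclose(real_quartic_roots([1, -5, 7, -5, 6]), [2.0, 3.0])
```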

[0059] In the second step, for every set of radii, a three-dimensional trilateration is performed to calculate the three-dimensional coordinates of the image. Every trilateration gives two possible positions for the image. This increases the possible exterior parameter sets to 2-8.
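The trilateration step, which yields two mirror-image candidate positions per set of radii, can be sketched with the standard three-sphere intersection construction (illustrative only, not the claimed implementation):

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersect three spheres; returns the two candidate positions,
    which are mirror images through the plane of the sphere centres."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)        # local x axis
    i = ex.dot(p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)                        # local y axis
    ez = np.cross(ex, ey)                           # local z axis
    d = np.linalg.norm(p2 - p1)
    j = ey.dot(p3 - p1)
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y = (r1 ** 2 - r3 ** 2 + i ** 2 + j ** 2 - 2 * i * x) / (2 * j)
    z = np.sqrt(max(r1 ** 2 - x ** 2 - y ** 2, 0.0))
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# A camera at (1, 2, 3) and its mirror image below the z = 0 plane:
centres = [np.array([0., 0., 0.]), np.array([5., 0., 0.]), np.array([0., 5., 0.])]
true = np.array([1., 2., 3.])
radii = [np.linalg.norm(true - c) for c in centres]
a, b = trilaterate(*centres, *radii)
assert np.allclose(sorted([tuple(a), tuple(b)]), sorted([(1, 2, 3), (1, 2, -3)]))
```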

[0060] In the third step, the three-dimensional rotation is calculated. The vectors of the image coordinates and terrain coordinates are centred on the image position calculated in the trilateration. By calculating the rotation of the bisection vectors of image-to-terrain vectors of the same point combinations in three dimensions, the quaternion elements x, y, z, w can be calculated. However, there is an ambiguity where the x, y and z elements can have the opposite sign, which leads to two possible three-dimensional rotations for every image position. Both results are stored as a possible outcome.

[0061] In an optional fourth step, the possible exterior parameter sets from step three are reduced to one exterior parameter set. This is done using an extra common point not used in any of steps one, two and three. For every possible exterior parameter set, the projected image position of the extra common point is calculated. The correct exterior parameter set gives the smallest difference between the projected and measured image positions of the extra common point.

[0062] The entire process as described for resection is repeated for a large number of random selections of corresponding common points. This novel variant on the Random Sample Consensus (RANSAC) approach computes an exterior orientation for each random set of at least three corresponding common points. The number of iterations needed depends on the available data and needs to be determined empirically for the environment. For each outcome a score is computed.

[0063] This score value of the outcome results from a forward intersection of all corresponding common points from the image pair. Only the correct exterior orientation will result in a good intersection point for every corresponding common point. The residual is the shortest distance between the two lines coming from both corresponding common points. A residual of 0 means a perfect intersection; larger values indicate that the intersection is not perfect, which is common in the presence of measurement noise. All intersection points with a residual smaller than a threshold value are counted. The threshold can be derived by scaling the a priori variance from the image coordinates to terrain coordinates. The total count of intersections with a residual smaller than the threshold is the score value.
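The residual and scoring computation described above can be sketched as follows; the parametrisation of each projection ray by an origin and a direction vector is an assumption made for illustration.

```python
import numpy as np

def ray_residual(o1, d1, o2, d2):
    """Shortest distance between two 3-D lines (point o, direction d),
    used as the forward-intersection residual of a common point."""
    o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
    n = np.cross(d1, d2)
    nn = np.linalg.norm(n)
    if nn < 1e-12:  # parallel rays: perpendicular distance between the lines
        t = (o2 - o1) - ((o2 - o1) @ d1) / (d1 @ d1) * d1
        return np.linalg.norm(t)
    return abs((o2 - o1) @ n) / nn

def score(residuals, threshold):
    """Count the intersections whose residual is below the threshold."""
    return sum(1 for r in residuals if r < threshold)
```

Two rays that truly intersect give a residual of 0; skew rays give the length of their common perpendicular.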

[0064] Alternatively, this score value of the outcome results from the number of similar exterior parameter sets resulting from other random points.

[0065] The randomly selected set of corresponding common points resulting in the largest score value is the selected outcome of this process.

[0066] An intersection threshold is predefined at which the residuals of a forward intersection are small enough to use the result of that forward intersection as approximate value in a non-linear least squares adjustment.

[0067] Using the selected exterior orientation, all corresponding common points from the image pair are positioned in three-dimensional space by forward intersection. Only intersection points with residuals smaller than the predefined intersection threshold are used.

[0068] The selected exterior orientation is stored as the outcome together with the computed terrain points from the forward intersection. This result, together with all previously computed exterior orientations and terrain points, is input into a bundle block adjustment for further refinement. The working of the bundle block adjustment is described later in this document.

[0069] After completing the procedure described above for all image pairs, the outcome of the final bundle block adjustment is a set of exterior orientations and terrain points. A three-dimensional similarity transform is performed to translate, rotate and scale the computed exterior orientations and terrain points to a set of at least three points with known coordinates in the terrain system.

[0070] From all detected corresponding common points, the subset is now selected of points for which it is known, based on the consensus in the computation of the approximate values, that they are not outliers. The other points, which are outliers, are discarded.

[0071] In every image, the photo coordinates of the corresponding common points within that image are compared to the photo coordinates of the corresponding common points in all other images for which it can be assumed that they are very similar. The homography transform is computed between the photo coordinates of every pair of images.

[0072] Using the homography, the corners of the first image are transformed into the plane of the second image. The average change of coordinates of the corner points is computed. If the average change is below a threshold value, the two images are very similar in position and orientation. The threshold value depends on the chosen setup; a threshold of 42 will generally yield good results.

[0073] Once it is established that two images are very similar, one of the images is discarded. This process reduces the risk of a singular matrix, which occurs when two very similar images are added to the subsequent processing steps.
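The corner-transform similarity check might look as follows in outline; the image dimensions and the threshold comparison are left to the caller, and the 3×3 homography convention is an assumption.

```python
import numpy as np

def mean_corner_shift(H, width, height):
    """Transform the four image corners with homography H and return the
    average displacement; small values mean the two images are nearly
    identical in position and orientation."""
    corners = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    hom = np.hstack([corners, np.ones((4, 1))])  # homogeneous coordinates
    mapped = (H @ hom.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]      # back to Euclidean coordinates
    return np.linalg.norm(mapped - corners, axis=1).mean()
```

An identity homography yields a shift of 0; a pure translation yields the translation length.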

[0074] Approximate values for the focal length (2005) and the principal point of autocollimation (2006, 2007), the offset between the centre of the image (2008) and the intersection of the optical axis through the negative image plane (2009) or positive image plane (2010), should be obtained with an a-priori camera calibration. Calibrated values should have a precision of at least 0.01 mm.

[0075] With approximate values available for all terrain coordinates as well as the position and orientation of all images, the best unbiased values for these parameters can be estimated using a non-linear least squares adjustment. To do this, the image observations, expressed as row and column values in the images, are converted to x- and y-coordinates in millimetres on the camera sensor relative to the middle of the sensor (2008). Each image coordinate on the negative plane (2011) is optionally converted to a coordinate on the positive plane (2012). Each image coordinate is used as an observation in the least squares adjustment.

[0076] Each image coordinate (2011) relates to a terrain coordinate (2013) and an image position and orientation through the collinearity equations:

x = -f · [r11(X - X°) + r21(Y - Y°) + r31(Z - Z°)] / [r13(X - X°) + r23(Y - Y°) + r33(Z - Z°)]

y = -f · [r12(X - X°) + r22(Y - Y°) + r32(Z - Z°)] / [r13(X - X°) + r23(Y - Y°) + r33(Z - Z°)]

In these equations, (X°, Y°, Z°) are the coordinates of the image position while (X, Y, Z) are the three-dimensional terrain coordinates of the point (2013). The elements of the rotation matrix R of the camera attitude are denoted by rij. The focal length of the camera is denoted as f. All these elements are treated as unknown parameters in the least squares solution. The observed image coordinates are denoted by x and y, which represent the image pixel position converted to millimetres using the known sensor size.
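A direct transcription of the collinearity projection, assuming the rotation-matrix index convention used above (the numerators use the first and second columns of R, the denominator the third):

```python
import numpy as np

def collinearity(f, R, X0, P):
    """Project terrain point P = (X, Y, Z) to image coordinates (x, y)
    for a camera with focal length f, rotation matrix R and position X0."""
    dX = np.asarray(P, float) - np.asarray(X0, float)
    u = R.T @ dX  # u[0], u[1], u[2] are the numerators and denominator
    return -f * u[0] / u[2], -f * u[1] / u[2]
```

With an identity rotation, a camera at the origin and a point in front of the camera at Z = -10, the projection reduces to a simple central perspective scaling by f/10.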

[0077] The non-linear equations are linearized using a first-order Taylor expansion. This linearization is performed using the approximate values computed before. This results in a linear system of equations expressed as:

y = Ax + e

with y the vector with observations, x the vector with unknown parameters to be estimated, A the model matrix and e the vector with unknown residuals.

[0078] The matrix A should be stored as a sparse matrix.

[0079] The system of equations is further extended with the terrestrial survey observations using their respective observation equations.

[0080] The system of equations is further extended with observed X, Y and Z coordinates of terrain points from GNSS-observations when available.

[0081] The system of equations is further extended with the calibrated focal distance (2005) and principal point of autocollimation (2006) when available.

[0082] Each observation in the y-vector is assigned an a-priori variance. This variance, which is the standard deviation squared, is used as a weight for each observation. The variances of all observations are stored on the diagonal of the matrix Σ{y}. In order to improve computational efficiency, the median value of all variances is stored as σ². The cofactor matrix Q_y can now be computed from: Σ{y} = σ² · Q_y.
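The split into a median variance factor and a cofactor matrix can be sketched as below; the diagonal (covariance-free) case is assumed for brevity.

```python
import numpy as np

def cofactor_matrix(variances):
    """Split the a-priori variances into a variance factor s2 (the median
    variance) and a diagonal cofactor matrix Qy so that Sigma_y = s2 * Qy."""
    variances = np.asarray(variances, float)
    s2 = np.median(variances)
    Qy = np.diag(variances / s2)
    return s2, Qy
```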

[0083] For best statistical testing, covariance between observations can optionally be added to the adjustment by adding covariance values to the non-diagonal elements of the σ²Q_y matrix. Covariance is to be assumed between the x- and y-image coordinates for best testing results.

[0084] The variance for all image coordinates should be approximately one third of the physical pixel size on the sensor.

[0085] The variance of the calibrated focal distance (2005) and principal point of autocollimation (2006) should be equal to the variance expressed on the calibration report.

[0086] The variance of land survey and GNSS-observations should be estimated from experimentation before commencing the survey.

[0087] The values for the unknown parameters can now be estimated using:

x = (Aᵀ Q_y⁻¹ A)⁻¹ Aᵀ Q_y⁻¹ y

[0088] The solution of this equation is computed using Cholesky decomposition for sparse matrices or other suitable solvers for linear systems of equations.
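A compact sketch of the normal-equation solve with a sparse factorisation. SciPy's sparse LU factorisation stands in here for a sparse Cholesky solver; this substitution and the diagonal weight matrix are illustrative choices, not the prescribed implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import splu

def solve_least_squares(A, qy_inv_diag, y):
    """Solve x = (A' Qy^-1 A)^-1 A' Qy^-1 y for a sparse model matrix A,
    assuming a diagonal cofactor matrix Qy."""
    A = csr_matrix(A)
    W = diags(qy_inv_diag)          # Qy^-1 for the diagonal case
    N = (A.T @ W @ A).tocsc()       # normal matrix, CSC for the factorisation
    b = A.T @ (W @ np.asarray(y, float))
    return splu(N).solve(b)         # LU stands in for sparse Cholesky
```

For a small consistent system the result matches the textbook normal-equation solution.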

[0089] An iteration threshold is predefined at which the improvement in the non-linear least squares iteration is small enough to assume that the adjustment has converged and the result no longer improves significantly.

[0090] Due to the linearization with approximate values, the solution needs to be iterated with the outcome of one computation as approximate values for the next iteration. Iteration should continue until no improvement larger than the iteration threshold is obtained in the parameters.

[0091] After completing the final iteration, the Q_x matrix is computed as:

Q_x = (Aᵀ Q_y⁻¹ A)⁻¹

The Q_x matrix contains the a posteriori variances and covariances of all computed unknown parameters. In essence this means that this matrix contains the variance for all coordinates as well as the correlation between the parameters. This is crucial information for assessing and testing the significance of a deformation later on.

[0092] While the A matrix is typically sparse, the Q_x matrix generally is not. Due to the large number of parameters, the Q_x matrix will not normally fit in the memory of a computer. Therefore, a sparse Cholesky decomposition is computed first. From the Cholesky decomposition each column is computed independently. This process can be performed in parallel using multiple cores on a multi-core machine.

[0093] On larger systems, the size of Q_x can become too large, meaning that storage in memory is no longer possible. As the matrix is symmetric, only one triangle needs to be stored. This should be done in a storage structure that does not require a continuous block of memory, but uses individual memory blocks per column.

[0094] The matrix with variances and covariances of the adjusted observations is computed as Q_ŷ = A Q_x Aᵀ. This matrix is by definition even larger than Q_x. However, only storage of the elements on the main diagonal is needed. The computation should proceed in a nested loop for both matrix multiplications to avoid excessive delay when computing large systems. The nested loop runs as shown in the pseudo code below. In this code, m represents the number of observations. RowP is the array containing the row indices of the existing rows in the matrix A; each element in the array gives the index of the element in the array Values that is the first non-zero element in a row. The array ColumnInd contains the column indices of the non-zero values; its elements correspond to the elements of the array Values. The Values array contains the actual values of the matrix A.
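The nested-loop computation over the arrays RowP, ColumnInd and Values can be sketched as follows; this is a hedged Python rendering of the described loop structure, not the patent's own pseudo code.

```python
import numpy as np

def diag_AQxAT(m, RowP, ColumnInd, Values, Qx):
    """Compute only the main diagonal of Q_y_hat = A Qx A' from a CSR
    representation of A, without ever forming the full product."""
    diag = np.zeros(m)
    for i in range(m):                                 # rows of A (observations)
        for p in range(RowP[i], RowP[i + 1]):          # non-zero a_ik in row i
            for q in range(RowP[i], RowP[i + 1]):      # non-zero a_il in row i
                diag[i] += Values[p] * Qx[ColumnInd[p], ColumnInd[q]] * Values[q]
    return diag
```

The result agrees with the diagonal of the dense product A·Q_x·Aᵀ while touching only the non-zero entries of each row.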

[0095] An F-test is performed on the computed results and stored in a log file. With m the number of observations and n the number of unknowns, the F-test is computed as:

F = (êᵀ Q_y⁻¹ ê) / (σ² · (m - n))

where ê = y - Ax is the vector of estimated residuals.

[0096] For each observation a w-test is performed using the following equations:

ê = y - Ax

w_i = ê_i / σ_i

where ê_i is the i-th element from the vector ê and σ_i the square root of the i-th element from the main diagonal of the matrix σ²Q_ê.

[0097] The w-test threshold value can be determined from the χ² distribution function. Assuming an unreliability factor of 0.1%, the threshold value computed from this distribution is 3.29. Any other threshold can be applied as seen fit for the application.

[0098] If a w-test value is larger than the threshold, the observation is rejected, most probably due to a measurement error.

[0099] The observation with the largest w-test is eliminated from the observation vector after which the entire Bundle Block Adjustment is repeated. This process, called data snooping, is iterated until no other observations with a w-test larger than a certain threshold are present in the data.
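The data-snooping loop can be sketched as below. For brevity this sketch assumes unit weights (σ² = 1, Q_y = I), whereas the method uses the full a-priori variances; the residual cofactors then reduce to Q_ê = I - A(AᵀA)⁻¹Aᵀ.

```python
import numpy as np

def data_snooping(A, y, threshold=3.29, max_iter=20):
    """Iteratively estimate x, compute w-tests on the residuals and remove
    the observation with the largest rejected w-test (unit weights assumed)."""
    A, y = np.asarray(A, float), np.asarray(y, float)
    keep = np.arange(len(y))                      # surviving observation indices
    for _ in range(max_iter):
        N_inv = np.linalg.inv(A.T @ A)
        x = N_inv @ A.T @ y
        e = y - A @ x                             # estimated residuals
        Qe = np.eye(len(y)) - A @ N_inv @ A.T     # cofactors of the residuals
        sd = np.sqrt(np.clip(np.diag(Qe), 1e-12, None))
        w = np.abs(e) / sd                        # w-test statistic per observation
        worst = int(np.argmax(w))
        if w[worst] <= threshold:
            break                                 # no rejections remain
        A = np.delete(A, worst, axis=0)           # eliminate rejected observation
        y = np.delete(y, worst)
        keep = np.delete(keep, worst)
    return x, keep
```

Planting a gross error in an otherwise consistent line fit, the loop removes exactly that observation and recovers the clean parameters.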

[0100] For large systems, more than one rejected observation can optionally be eliminated per iteration.

[0101] The process described above is repeated in its entirety for each subsequent epoch. The results for each epoch are stored in a database.

[0102] After every epoch, except for the first one, the deformation testing process is commenced. In this step the search for corresponding terrain points is repeated, this time to find corresponding common points between two epochs. Again, a process such as the one described in US6711293B1 is used for this step.

[0103] If the reference objects or other objects other than the quay wall have moved between two epochs, these can optionally be masked from the images, thereby avoiding corresponding common points found on the reference object which do not relate to the same three-dimensional location in the terrain. Masking of the reference objects can be achieved by changing the colour values of all pixels in a bounding box around the reference object to black, or by keeping a mask polygon in memory in which all detected corresponding common points are subsequently deleted.

[0104] Alternatively, points on the reference objects and/or other objects other than the quay wall can be removed from the deformation analysis using the three-dimensional distance to the reference markers on the reference object.

[0105] The search for similar points between two epochs is generally harder because there are more changes in light conditions and terrain structure. This is alleviated by a process of guided feature matching. Each image is subdivided into a set of tiles. Any number of tiles is possible; the number of tiles depends on the application. A set of 16 tiles derived from a four-by-four raster generally works well. For each tile, the overlapping tiles from images in the other epochs are selected. This is done by a reprojection of the tile boundaries on all other images using the previously computed position and attitude. Corresponding common points are then searched only between the overlapping tiles.

[0106] If a corresponding point between two epochs was previously found as a corresponding point in one epoch, this point’s identifiers from all epochs are stored. This is a crucial step to maintain integrity in the adjustment result between subsequent epochs.

[0107] A new bundle block adjustment is defined comprising all epochs together, extended with the newly detected corresponding common points.

[0108] The new combined bundle block adjustment is extended with constraints. Two or more constraints are defined for every corresponding coordinate between two epochs: at least a transformation constraint and a deformation constraint.

[0109] The transformation constraint states the transformation of a coordinate into the coordinate system of a predefined epoch. This constraint is added as a new observation with a variance of zero or a value close to zero.

[0110] The deformation constraint states the expected three-dimensional movement of a coordinate between the two epochs. The deformation constraint equations can be any equation describing movement between epochs. This constraint is added as a new observation with a variance of zero or a value close to zero.

[0111] A hypothesis of the bundle block adjustment with deformation is formed by the assumptions and expectations of the deformation that are made in the deformation constraints.

[0112] The expected deformations can be entered based on expert assessment from quay wall engineers.

[0113] The expected deformations can also be computed using a Finite Element Analysis of the quay wall for various failure mechanisms of the quay wall.

[0114] To apply the Finite Element Analysis, a local three-dimensional analysis of deformation and stability is performed using Finite Element Analysis. Using a mesh, a realistic model with initial stress and pore pressures fields in equilibrium with the soil weight of the quay-wall is generated. Subsequently, a geotechnical failure mechanism is induced in the model and the resulting deformation is computed using the Finite Element Method. The resulting deformed shape of the quay-wall is used as an alternative hypothesis for testing of deformations.

[0115] On quay walls with constant geotechnical conditions along the wall, the Finite Element Analysis can be performed once. Subsequently, the resulting deformation can be tested at an interval of for instance one meter along the quay wall.

[0116] In most situations there will be no a priori knowledge about the expected movement, in which case the deformation constraints should first be set to “no deformation”.

[0117] In situations without a priori knowledge of the expected movement, one can also opt to create a constraint with an infinitely large variance, in which case the estimated length of the constraint after adjustment is an indication of the movement of the quay wall. Alternatively, this constraint can also be modelled with a very large variance, or as an additional parameter without a corresponding constraint.

[0118] The Bundle Block Adjustment is performed in the same way as described before. At the end of the adjustment, testing is performed using both the F-test and the w-test.

[0119] If the F-test of the Bundle Block Adjustment is accepted, then the said hypothesis of the deformation used in that adjustment is accepted. This means the assumptions and expectations of the deformations were accepted and the analysis can continue using this adjustment result. If the F-test of this Bundle Block Adjustment is rejected, then alternative hypotheses are defined with different assumptions and expectations of the deformations in the adjustment. The Bundle Block Adjustments using these alternative hypotheses are performed. The statistically best Bundle Block Adjustment will be used to continue.

[0120] As all the observations in the bundle block adjustment have been tested in the adjustment per epoch, any w-test larger than the w-test threshold indicates that the expected movement as defined by one of the constraints is not valid.

[0121] The constraint with the largest w-test value above the threshold is removed from the adjustment. Optionally, a larger set of constraints with w-test values above the threshold can be removed as well.

[0122] Optionally, for every removed constraint, a distance vector between the coordinates is added as an unknown parameter to the adjustment in order to estimate the deformation at this point between these epochs. This process is repeated until no more constraints have a w-test value larger than the threshold. The computed deformation vectors are stored separately, together with the estimated a posteriori variance for the deformation from the Q_x matrix.

[0123] Equally important is the reliability of the results. This can be expressed as the Minimal Detectable Bias (MDB) of an observation. The MDB of observation i is computed as:

MDB_i = sqrt(λ₀ / q_ii)

with q_ii the corresponding diagonal element from the matrix Q_∇, which is in turn computed as:

Q_∇ = Q_y⁻¹ Q_ê Q_y⁻¹

The value for λ₀ is a constant. When selecting a value of 17.075 for λ₀, as is typically done in literature, the interpretation of the MDB is that it represents the smallest error that can still be detected using the w-test with a probability of 80%. Other values can be chosen when suitable for the application.

[0124] The MDB of the computed terrain points and the deformations can only be derived from the previously computed MDBs of the observations. For every observation, the effect of the minimal detectable error in the observation on the computed coordinates can be computed with:

MDB = Q_x Aᵀ Q_y⁻¹ V₀y

where V₀y is a vector with the same size as the y vector containing only zeroes, except at the position of the current observation, where the MDB value of that observation is inserted. The equation above is repeated for every observation, resulting in an equal number of MDB vectors. The maximum at each position in this vector is stored as the final MDB value for the estimated parameters. The interpretation is that this is the smallest error that can still be detected with a probability of 80% by performing w-testing on all the observations.
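The propagation of observation MDBs to the estimated parameters can be sketched as below; dense matrices are used for brevity, whereas the method would use the sparse structures described earlier.

```python
import numpy as np

def parameter_mdb(A, Qy_inv, mdb_obs):
    """For every observation i, insert its MDB into a zero vector V0y,
    compute Qx A' Qy^-1 V0y and keep the element-wise maximum effect."""
    A = np.asarray(A, float)
    Qx = np.linalg.inv(A.T @ Qy_inv @ A)
    P = Qx @ A.T @ Qy_inv            # influence of each observation on x
    out = np.zeros(A.shape[1])
    for i, mdb in enumerate(mdb_obs):
        V0y = np.zeros(A.shape[0])
        V0y[i] = mdb                 # minimal detectable error of observation i
        out = np.maximum(out, np.abs(P @ V0y))
    return out
```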

[0125] Within the data of the final epoch all images are reprojected to image planes parallel to the quay wall. Each plane is centred on the same point as the principal point (2006) of the image. Each image plane is rotated such that all rows in the images are horizontal and all rows of all images are pointing in the same direction.

[0126] Between each overlapping image pair, row-based contrast matching is performed.

[0127] For every pixel this results in a set of one or more corresponding pixel locations in the other images. Contrary to the previous matching techniques, corresponding common points are now available for every pixel, rather than only for the pixels with a high amount of contrast.

[0128] All matched corresponding common points are inserted again into a bundle block adjustment. This time all exterior orientation parameters are kept as non-stochastic observations. This means that they remain fixed and cannot be changed. Each observation gets an a priori standard deviation; a standard deviation of 0.5 times the pixel size generally works well. The three-dimensional coordinates for every matched pixel are the unknown parameters to be estimated.

[0129] Optionally, for every observation a w-test is performed to test for any outliers for which the w-value is larger than the threshold. All observations with a w-test larger than the threshold are removed from the adjustment. The process is repeated until no observations with a w-value larger than the threshold remain.

[0130] The result of this process is a dense three-dimensional point cloud.

[0131] All coordinates from the sparse and/or the dense point cloud are stored in an octree data structure. Levels of detail in the octree are generated by subsampling the entire point cloud from the most detailed level upwards.

[0132] A virtual three-dimensional environment is provided. All points from the octree data structure are visualized in this environment, coloured by the Red-Green-Blue (RGB) colour value of the corresponding pixel in the image.

[0133] The virtual environment utilises the octree such that only those parts of the octree are loaded whose level of detail matches the number of on-screen pixels available to display the points.

[0134] Additionally, an octree is created with the points from the deformation analysis. These points are visualized as well, coloured using a colour scale going from one colour representing no deformation to another colour representing maximum deformation, using the intermediate colours of the colour spectrum.

[0135] The representation of deformations can be changed from colour to scaled vectors pointing in the direction of deformation with the length of the vector representing the magnitude of the deformation.

[0136] A user can click on every deformation point and obtain the absolute value of the deformation, the standard deviation, the Minimal Detectable Bias (MDB) and a graph with the deformation per epoch.

[0137] The user can change the scene to colour the points by standard deviation.

[0138] The user can change the scene to colour the points by Minimal Detectable Bias (MDB).

[0139] The representation of standard deviations can be changed from colour to three-dimensional spheroids, with the magnitude of each axis of the spheroid representing the scaled standard deviation in that direction.

[0140] The representation of minimum detectable biases can be changed from colour to three-dimensional boxes, with the size of the box in each direction representing the scaled minimum detectable bias in that direction.

Description of further embodiments

[0141] In another embodiment of the system, the images are taken from a helicopter or airplane.

[0142] In another embodiment of the system, the images are taken from a remotely piloted aircraft or vehicle.

[0143] In another embodiment of the system, the images are taken from a camera mounted on a tripod.

[0144] In another embodiment of the system, the platform is equipped with multiple cameras, each pointing in a different direction.

[0145] In another embodiment of the system, at least two cameras are rigidly mounted to each other, with each camera pointing in a different direction.

[0146] A fixed offset between the two perspective centres of the two cameras is added to the adjustment as a constraint.

[0147] In another embodiment of the system, at least two cameras are rigidly mounted to each other, with one camera pointing to the quay wall and one camera pointing to the opposing quay wall.

[0148] A fixed offset between the perspective centres of the cameras is added to the adjustment as a constraint.

[0149] Reference objects are positioned at the opposing quay wall as well and included in the computations. This will improve the accuracy of the deformation estimation as well as speed up the data collection process.

[0150] In another embodiment of the system a multitude of cameras is used which remain fixed at a single location along the quay wall and create images at a set interval.

[0151] In another embodiment, smaller reference objects containing just one marker are permanently mounted on the quay wall and used as a deformation point in every epoch.

[0152] In another embodiment of the system, a number of bits from the marker are used for error detection using a 2-bit checksum on all bits. If it is found during processing that a marker was identified with the wrong number, the marker can be discarded, or the correct number can be reconstructed using other markers that have received the same three-dimensional position after the Bundle Block Adjustment.

[0153] In another embodiment of the system, the bits in the marker are assigned values for an Error Correcting Code, such that an error in reading the bits can be resolved automatically.

[0154] In another embodiment of the system, the markers are extended with additional bits. To this end the start data ring and the end data ring are also used to encode bits. To find the position of the least significant bit a cyclic search method is applied.

[0155] In another embodiment of the system, the reference objects are equipped with Quick Response codes (QR-codes) to encode additional metadata, including the calibrated reference values and the last date of calibration. These values can also be encoded through a unique identifier and a look-up database for these values. Upon processing, the QR-codes are decoded and the data within the code is used as input for the Bundle Block Adjustment, removing the need for human input during processing.

[0156] In another embodiment of the system, the calibration parameters of the camera are added to the Bundle Block Adjustment as an adjustable parameter.

[0157] In another embodiment of the system, the calibration parameters of the camera are added to the Bundle Block Adjustment as an adjustable parameter based on an observation.

[0158] In another embodiment of the system, the reference objects are equipped with a GNSS-receiver and a GNSS-antenna. The distance between the phase centre and all markers on the reference marker is calibrated. The GNSS-receivers perform an observation while the reference object is stationary during the survey. The observed pseudo ranges are processed in a local GNSS-network with the other GNSS-stations on neighbouring reference markers. The computed GNSS-coordinates are added to the adjustment, together with distance constraints for the offset between the GNSS-antenna and the marker locations.

[0159] In another embodiment of the system, the reference objects are equipped with a GNSS- receiver and a GNSS-antenna. The distance between the phase centre and all markers on the reference marker is calibrated. The reference marker remains fixed at the current location and stays there in-between epochs.

[0160] The GNSS-receiver performs an observation at a set interval and transmits the observed data over a wired or wireless computer network to a central computing device.

[0161] The observed pseudo ranges are processed in a local GNSS-network with the other GNSS- stations on neighbouring reference markers.

[0162] The computed GNSS-coordinates that are closest in time to the epoch of a photogrammetric registration are added to the adjustment, together with distance constraints for the offset between the GNSS-antenna and the marker locations.

[0163] In another embodiment of the system, the reference objects are equipped with a corner reflector for InSAR deformation monitoring. The distance between the corner reflector and all markers is accurately calibrated. The deformation of the reference marker is estimated in every pass of a Radar satellite. Using this method, deformations of the reference marker can be estimated with mm-precision. The estimated deformation between epochs is used as a special constraint with known deformation between two epochs.

[0164] In another embodiment of the system, the quay wall is permanently equipped with a fibre optic cable that is rigidly attached to the wall. Permanent markers are placed on the fibre optic cable. The fibre optic cable is capable of sensing deformations. The total observed deformation in the fibre optic cable between two reference markers is added to the Bundle Block Adjustment as a special constraint with known deformation between two epochs.

[0165] In another embodiment of the system, the reference objects are observed with a Mobile Mapping System (MMS). The MMS is equipped with a GNSS-receiver and an Inertial Measurement Unit (IMU) for positioning. The system additionally has one or more lidar scanning systems and one or more cameras. The reference objects are scanned or photographed by the MMS. This yields a position of the reference object which can be translated to a position of the markers. These positions are added to the bundle block adjustment as observations.

[0166] In another embodiment of the system, the position and attitude of each image is determined by creating a fixed connection between the camera and a positioning device. This positioning device could be a combined GNSS-receiver and IMU, or any other device that gives a position or attitude or both. Using a fixed offset between the positioning device and the projection centre of the camera as well as a fixed rotation between these two systems, the position and attitude of the image can be computed from the observations. For additional geometrical reliability these observations should be added as additional observations in the Bundle Block Adjustment which is described in this document.

[0167] In another embodiment of the system the Bundle Block Adjustment is performed independently per epoch. A grid is overlaid on the quay wall. For each grid cell a plane is computed directly from the Bundle Block Adjustment by adding the plane parameters as unknowns to the adjustment. Now the constraints are applied on the corresponding planes rather than the corresponding common points.

[0168] In another embodiment of the system, planes are used as described above. The plane boundaries are defined by a crack detection algorithm that is run on all images. Each identified crack location is idealised to a line which represents the edge of one plane to be detected. The planes for the estimation of the deformation are derived from the detected crack lines.

[0169] In another embodiment of the system, the operator receives real-time feedback on the quality of the observations. Once made, each image is instantly processed at low resolution by the central processing unit to determine its position and orientation. The user receives feedback on whether the image was sufficiently sharp, whether the image was well oriented, whether accurate positioning of the image was possible and whether additional images are needed to capture the entire quay wall with sufficient accuracy.

[0170] List of reference signs

1001 = Camera

1002 = Observation platform

1003 = Objective lens

1004 = Quay wall

2001 = Camera housing

2002 = Image sensor

2003 = Aperture

2004 = Mathematical positive plane

2005 = Focal length

2006 = Principal point of auto collimation in negative plane

2007 = Principal point of auto collimation in positive plane

2008 = Perspective centre of the camera

2009 = Perspective centre of the camera projected on the negative plane

2010 = Perspective centre of the camera projected on the positive plane

2011 = Point P projected on the negative plane

2012 = Point P projected on the positive plane

2013 = Point P in the terrain as a three-dimensional point

3001 = Front

3002 = Left

3003 = Top reference marker

3004 = Retro reflective prism

3005 = Left reference marker

3006 = Left retro reflective prism

3007 = Bottom reference marker

3008 = Right reference marker

3009 = Right retro reflective prism

3010 = Mounting bracket

3011 = Counterweight

3012 = Top of quay wall

3013 = Reference object

4001 = Centre point of reference marker

4002 = Alignment ring

4003 = Start data ring

4004 = First bit data ring

4005 = Second bit data ring

4006 = Third bit data ring

4007 = Fourth bit data ring

4008 = Fifth bit data ring

4009 = Sixth bit data ring

4010 = Seventh bit data ring

4011 = Eighth bit data ring

4012 = Ninth bit data ring

4013 = Tenth bit data ring

4014 = End data ring

5001 = Start epoch

5002 = Place reference objects on quay wall

5003 = Determine positions of markers on reference objects

5004 = Capture images along quay wall

5005 = Overlapping images

5006 = Search corresponding common points

5007 = Detect reference markers in images

5008 = Resection to achieve approximate values using random resection consensus

5009 = Bundle Block Adjustment

5010 = Statistical testing of observations

5011 = Observations rejected?

5012 = Yes, one or more observations are rejected

5013 = Data snooping

5014 = No, there are no rejected observations

5015 = Adjusted observations of current epoch

5016 = Constraint testing, see figure 6

6001 = Constraint testing

6002 = Adjusted observations of epoch 1

6003 = Adjusted observations of epoch 2

6004 = Adjusted observations of epoch ...

6005 = Adjusted observations of epoch n

6006 = Search corresponding common points between epochs using tiled subregions

6007 = Generate constraints between corresponding common points between epochs

6008 = Bundle Block Adjustment

6009 = Statistical testing of constraints

6010 = Constraints rejected?

6011 = Yes, one or more of the constraints are rejected

6012 = Remove constraint or update constraint to represent predicted deformation

6013 = No, there are no rejected constraints

6014 = Statistically tested deformations for each corresponding common point

6015 = Convert corresponding common points to three-dimensional point cloud

6016 = Generate dense point cloud and generate octree from sparse and dense point cloud

6017 = Colour points by deformation magnitude, standard deviation or MDB

6018 = Visualise point cloud in web based interactive virtual three-dimensional environment

7001 = Image recording device

7002 = Image sensor

7003 = Image recording device processing unit

7004 = Storage

7005 = Central Processing Unit

7006 = File storage

7007 = Microprocessor

7008 = Memory

7009 = Database

7010 = Network

7011 = Client computer device

7012 = Microprocessor

7013 = Storage

7014 = Memory

7015 = Display

7016 = Human input device

7020 = Attitude and/or positional sensing unit