

Title:
METHOD FOR RECORDING INSPECTION DATA
Document Type and Number:
WIPO Patent Application WO/2022/248022
Kind Code:
A1
Abstract:
The invention relates to a method and system for recording inspection data of an environment in floor plan space. The method comprises the following steps: Receiving a sequence of images from a camera, the sequence of images captured by the camera as the camera is moved along a camera path through the environment; Generating an estimate of the camera path in sensor space based on the sequence of images; For a first image of the sequence of images, taken at a first position on the camera path, obtaining a first position in sensor space and receiving a first user input indicative of a first position of the camera in floor plan space; For a second image of the sequence of images, taken at a second position on the camera path, obtaining a second position in sensor space and receiving a second user input indicative of a second position of the camera in floor plan space; Calculating a first transformation between sensor space and floor plan space based on the first position and second position in sensor space and the first position and second position in floor plan space; At an inspection position on the camera path, receiving inspection data; Storing the inspection data together with data indicative of the inspection position in floor plan space.

Inventors:
ZHANG SHIHUA (SG)
POSER MARCEL (CH)
CABALLERO ANTONIO (CH)
Application Number:
PCT/EP2021/063895
Publication Date:
December 01, 2022
Filing Date:
May 25, 2021
Assignee:
PROCEQ SA (CH)
International Classes:
G01C21/20; G01C15/00; G01C21/16
Foreign References:
CN112465907A (2021-03-09)
US20150350845A1 (2015-12-03)
Attorney, Agent or Firm:
E. BLUM & CO. AG (CH)
Claims:

1. A method for recording inspection data of an environment in floor plan space, comprising

- receiving a sequence of images from a camera (2, 12), the sequence of images captured by the camera as the camera (2, 12) is moved along a camera path through the environment,

- generating an estimate of the camera path in sensor space based on the sequence of images,

- for a first image of the sequence of images, taken at a first position on the camera path, obtaining a first position in sensor space and receiving a first user input indicative of a first position of the camera in floor plan space,

- for a second image of the sequence of images, taken at a second position on the camera path, obtaining a second position in sensor space and receiving a second user input indicative of a second position of the camera in floor plan space,

- calculating a first transformation between sensor space and floor plan space based on the first position and second position in sensor space and the first position and second position in floor plan space,

- at an inspection position on the camera path, receiving inspection data,

- storing the inspection data together with data indicative of the inspection position in floor plan space.

2. The method of claim 1, wherein the floor plan comprises a two-dimensional, to-scale representation of the environment in floor plan space, in particular wherein the environment is a building.

3. The method of any of the preceding claims, wherein the floor plan specifies positions and dimensions of physical features in the environment in floor plan space, in particular position and dimensions of at least one of walls, doors, windows, pillars and stairs.

4. The method of any of the preceding claims, wherein at least a part of the estimate of the camera path in sensor space is generated without taking into account GNSS position data.

5. The method of any of the preceding claims, wherein the estimate of the camera path in sensor space and the first transformation between sensor space and floor plan space are calculated in a device moved along the camera path together with the camera.

6. The method of any of the preceding claims, wherein the estimate of the camera path, the first transformation and the data indicative of the inspection position in floor plan space are calculated in real time.

7. The method of any of the preceding claims, wherein the inspection data comprises an image received from the camera (2, 12), in particular wherein the inspection data additionally comprises an image from a 360-degree camera (5).

8. The method of any of the preceding claims, wherein the inspection data comprises non-destructive testing data, in particular at least one of

- a hardness value,

- ultrasonic data,

- GPR data,

- eddy current data.

9. The method of any of the preceding claims, further comprising

- for a third image of the sequence of images, taken at a third position on the camera path, obtaining a third position in sensor space and receiving a third user input indicative of a third position of the camera in floor plan space,

- calculating a second transformation between sensor space and floor plan space based on the second position and third position in sensor space and the second position and third position in floor plan space,

- applying the second transformation for calculating data indicative of positions in floor plan space, which are located on the camera path after the third position.

10. The method of claim 9, additionally comprising

- retrospectively applying the second transformation for calculating data indicative of positions in floor plan space, which are located on the camera path between the second position and the third position,

- in particular changing the stored data indicative of the inspection position in floor plan space for inspection data located on the camera path between the second position and the third position.

11. The method of any of the preceding claims, wherein the data indicative of the inspection position in floor plan space comprise at least one of

- the inspection position in floor plan space,

- the inspection position in sensor space and the transformation between sensor space and floor plan space,

- a timestamp of the inspection data and a timestamped version of the first estimate of the camera path in floor plan space.

12. The method of any of the preceding claims, wherein the first estimate of the camera path in sensor space is generated by performing visual odometry, in particular feature-based visual odometry, on the sequence of images.

13. The method of any of the preceding claims, wherein in generating the estimate of the camera path in sensor space, a vertical component of the camera path is neglected.

14. The method of any of the preceding claims, further comprising

- receiving acceleration data captured by an inertial measurement unit (4, 14) and/or orientation data captured by a magnetometer (15) as they are moved along the camera path together with the camera,

- additionally using the acceleration data and/or orientation data for calculating the estimate of the camera path in sensor space.

15. The method of claim 14, wherein the estimate of the camera path in sensor space is generated by performing visual inertial odometry on the sequence of images and at least one of the acceleration and orientation data.

16. The method of any of the preceding claims, further comprising

- displaying, in real time, on a graphical representation of the floor plan, the inspection position and a current position of the camera in floor plan space.

17. The method of claim 16, further comprising

- displaying, in real time, on the graphical representation (24) of the floor plan, the estimate of the camera path in floor plan space.

18. The method of any of the preceding claims, wherein the step of receiving the first and/or second user input comprises the steps of displaying a graphical representation (24) of the floor plan on a screen, receiving an input event from the user indicative of a current position of the camera on the representation of the floor plan.

19. The method of any of the preceding claims, further comprising

- generating, in real time, an estimate of a camera viewing direction based on the sequence of images and, if applicable, on the acceleration and/or orientation data,

- storing the inspection data together with data indicative of the camera viewing direction at the inspection position in floor plan space.

20. The method of claim 19, further comprising

- displaying, in real time, on a graphical representation (24) of the floor plan, the estimate of the camera viewing direction at a current position (25) in floor plan space.

21. The method of any of the preceding claims, further comprising

- triggering to automatically acquire inspection data in defined time intervals and/or in defined intervals of space along the camera path.

22. The method of any of the preceding claims, further comprising

- automatically triggering acquiring the inspection data upon reaching a predetermined inspection position, in particular upon the distance between a current position of the camera and the predetermined inspection position falling below a defined threshold.

23. The method of any of the preceding claims, further comprising

- generating guiding information for guiding the user to a predetermined inspection position, in particular by displaying the predetermined inspection position in floor plan space on a graphical representation (24) of the floor plan, and/or in particular by displaying directions to the predetermined inspection position.

24. The method of any of the preceding claims, further comprising

- generating, in real time, an error measure for the estimate of the camera path,

- if the error measure exceeds a defined error threshold at a current position: outputting a warning or triggering the user to generate a further user input indicative of the current position of the camera in floor plan space, calculating a further transformation between sensor space and floor plan space based on the further position in sensor space and the further position in floor plan space.

25. The method of any of the preceding claims, further comprising

- storing raw data indicative of the estimate of the camera path in sensor space, in particular three room coordinates, three rotation angles and a confidence measure,

- storing data indicative of the first, second and any further position in sensor space and of the first, second and any further position in floor plan space.

26. The method of any of the preceding claims, further comprising

- automatically generating an inspection report, wherein the inspection report contains at least one of:

- a graphical representation (24) of the floor plan with position marks (26) of the inspection locations,

- a graphical representation (24) of the floor plan with an indication (27) of the camera path or an inspected area,

- the inspection data together with a graphical representation of the respective inspection position on the floor plan.

27. The method of any of the preceding claims, further comprising

- generating and storing a representation of the environment in sensor space based on the sequence of images,

- upon cold start, receiving a further sequence of images from the camera located at a cold start position,

- generating an estimate of the cold start position in sensor space based on the further sequence of images and the representation of the environment,

- determining the cold start position in floor plan space based on the estimate of the cold start position in sensor space and on the transformation between sensor space and floor plan space calculated prior to cold start.

28. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any of the preceding claims.

29. An inspection system comprising

- a camera (2, 12) configured to capture a sequence of images,

- a processor (1) in communication with the camera (2, 12), configured to carry out the method of any of claims 1 to 27.

30. The inspection system of claim 29, further comprising

- a 360-degree camera (5) in communication with the processor (1), configured to acquire inspection data.

31. The inspection system of any of claims 29 and 30, additionally comprising

- a display (3, 13) in communication with the processor (1), in particular wherein the inspection system comprises a tablet computer or a smartphone (11).

32. The inspection system of any of claims 29 to 31, additionally comprising at least one of

- an inertial measurement unit (4, 14), in particular an accelerometer and/or a gyroscope,

- a magnetometer (15).

Description:
Method for recording inspection data

Technical Field

The invention concerns a method for recording inspection data of an environment in floor plan space as well as an inspection system for carrying out the method.

Background Art

Modern society relies on buildings, such as houses, office buildings, industrial buildings and bridges, which need to be maintained and kept in good condition in order to be safe for the people in and around them. For an early detection of defects in the buildings, inspection is performed. Conventional inspection includes an inspector walking in or around the building and inspecting it, e.g. by eye and/or by a non-destructive testing (NDT) method. Documentation of the inspection is usually done by acquiring inspection data, e.g. by taking photos and/or NDT data, and manually associating them with the inspection position, i.e. the location where the inspection data was acquired. Ideally, the documentation comprises the inspection data associated with their respective inspection positions in floor plan space, i.e. in the coordinate system of a floor plan of the building. Such conventional inspection is tedious and time-consuming since it requires a lot of manual interaction, in particular position logging, on the part of the inspector.

Recent improvements to inspection and in particular to the workflow of the inspector have been as follows: For outdoor inspection and where global navigation satellite system (GNSS) positioning, e.g. by GPS, is feasible, the inspection data may automatically be associated with the inspection position as measured by GNSS, i.e. without an extra interaction for positioning on the part of the inspector.

For indoor inspection where GNSS positioning usually is not feasible, the inspector still needs to provide data indicative of the inspection position manually, e.g. by tapping on a displayed floor plan of the inspected building at the inspection position. While this process is faster than conventional inspection, it still suffers from several disadvantages: Not only is it still tedious and time-consuming to manually provide the inspection location for each inspection datum, but it is also not very reliable since it is prone to errors on the part of the inspector.

Disclosure of the Invention

The problem to be solved by the present invention is therefore to provide a method for recording inspection data of an environment, which is fast and at the same time associates reliable inspection positions to the respective inspection data, in particular in indoor use.

This problem is solved by the following method for recording inspection data of an environment in floor plan space.

As stated before, the environment typically is a building, such as a house, an office building, an industrial building or a bridge. In particular, the floor plan comprises a two-dimensional, to-scale representation of the environment in floor plan space. The floor plan space is typically spanned by two, in particular horizontal, coordinate axes. Advantageously, the floor plan specifies positions and dimensions of physical features in the environment in floor plan space, in particular position and dimensions of at least one of walls, doors, windows, pillars and stairs. Positions and dimensions in floor plan space may e.g. be given in pixels.

According to an aspect of the invention, the method comprises the following steps:

- receiving a sequence of images from a camera, the sequence of images captured by the camera as the camera is moved along a camera path through the environment: The camera may e.g. be carried along the camera path by a user, e.g. an inspector, or it may be part of a drone moving along the camera path, in particular autonomously or controlled from remote.

- generating an estimate of the camera path in sensor space based on the sequence of images: Techniques for how this may be done are given below. The sensor space in particular is a representation of real space, which is three-dimensional, as sensed by the camera in two - advantageously horizontal - dimensions. Positions in sensor space may e.g. be expressed in meters.

- for a first image of the sequence of images, taken at a first position on the camera path, obtaining a first position in sensor space and receiving a first user input indicative of a first position of the camera in floor plan space: In other words, the first position represents a first calibration point for determining a transformation between sensor space and floor plan space.

- for a second image of the sequence of images, taken at a second position on the camera path, obtaining a second position in sensor space and receiving a second user input indicative of a second position of the camera in floor plan space: In other words, the second position represents a second calibration point for determining a transformation between sensor space and floor plan space.

- calculating a first transformation between sensor space and floor plan space based on the first position and second position in sensor space and the first position and second position in floor plan space: Typically, the first transformation may be represented by a first matrix. In particular, the transformation describes at least one of a translation, a rotation and a scaling.

- at an inspection position on the camera path, receiving inspection data: As detailed later, an acquisition of inspection data may be triggered manually by the user or, in a different embodiment, automatically when certain conditions are fulfilled.

- storing the inspection data together with data indicative of the inspection position in floor plan space: In other words, the inspection data are associated or tagged with their respective inspection positions. Data indicative of the inspection position in floor plan space are in particular data from which the inspection position in floor plan space is derivable.

Accordingly, the data indicative of the inspection position in floor plan space may e.g. comprise at least one of the following: the inspection position in floor plan space; the inspection position in sensor space together with the first (or other applicable) transformation between sensor space and floor plan space; a timestamp of the inspection data and a timestamped version of the first estimate of the camera path in floor plan space; a timestamp of the inspection data and a timestamped version of the first estimate of the camera path in sensor space together with the first (or other applicable) transformation between sensor space and floor plan space.
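Purely as an illustration of one way such a record could be organized (field names and types are hypothetical and not taken from the application), a Python sketch might look as follows:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class InspectionRecord:
        payload: bytes                          # e.g. an encoded photo or NDT reading
        timestamp: float                        # acquisition time
        position_sensor: Tuple[float, float]    # position in sensor space (e.g. metres)
        # Position in floor plan space (e.g. pixels); either stored directly or
        # derived later from position_sensor and the applicable transformation.
        position_floorplan: Optional[Tuple[float, float]] = None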

It is evident that such method for recording inspection data in floor plan space yields a fast, reliable and consistent positioning of the inspection data: At the start of an inspection, a calibration based on (at least) a first and a second position is necessary in order to establish the transformation between sensor space and floor plan space. With the transformation established, the camera only needs to be moved along the camera path, and the inspection data be acquired, but no further user interaction is required for recording the inspection position. Due to the lack of further user interactions, the method is less prone to positioning errors due to erroneous user inputs.

Such method is particularly advantageous when at least a part of the estimate of the camera path in sensor space is generated without taking into account GNSS position data. This is typically the case in indoor inspections, i.e. inside a building where no GNSS data is available due to GNSS signal attenuation by the building.

Advantageous embodiments

Advantageously, the estimate of the camera path in sensor space and the first transformation between sensor space and floor plan space are calculated in a device moved along the camera path together with the camera. The camera may be an integral part of the device. In particular, the device may be a tablet computer or a smartphone. Alternatively, the camera may be separate from the device, e.g. a camera worn on the chest or on the head of the user. In this case, the camera is connected to the device either wirelessly or through a cable.

In an advantageous embodiment, the device comprises a display configured to display a graphical representation of the floor plan and/or the inspection data.

The inspection data may be of different types: The inspection data may e.g. comprise an image received from the camera, i.e. the same camera that takes the sequence of images. Additionally, the inspection data may comprise an image from a 360-degree camera, e.g. mounted on a helmet of the user. Further, the inspection data may comprise non-destructive testing data, in particular at least one of a hardness value, ultrasonic data, ground-penetrating radar (GPR) data, eddy current data.

In a particularly advantageous embodiment, the estimate of the camera path, the first transformation and the data indicative of the inspection position in floor plan space are calculated in real time. “Real time” in this context means during the inspection, in particular in less than 1 s from acquiring the relevant data, which respectively are a most recent image of the sequence of images, the first and second user input and the inspection data.

Such real time positioning, in particular in connection with a display of the floor plan, allows the user to see and check his current position in floor plan space, e.g. by comparing his relative position regarding physical features of the building between reality and floor plan space. This may be of particular interest in case of drift in the positioning, i.e. when the first estimate of the camera path in sensor space - and hence also in floor plan space - deviates from the actual camera path in reality. In this case, the user may notice the drift in real time and correct it in real time. An example of how this may be done is given below. Further advantages of the real time positioning are that a previous camera path or the camera path followed up to now may be displayed to the user. Alternatively or additionally, the user may navigate to a predetermined inspection position, i.e. a position where inspection data shall be acquired, e.g. by looking at a display of his current position in floor plan space with regard to the predetermined inspection position.

Visual odometry

In an embodiment, the first estimate of the camera path in sensor space is generated by performing visual odometry (VO), in particular feature-based VO, on the sequence of images. In this context, VO in particular means a process of determining the position and optionally also an orientation of the camera by analyzing images taken by the camera. In other words, VO is a process of incrementally estimating the position and orientation of the camera in motion by examining changes that the motion induces on the images taken by the camera.

Since VO also facilitates determining the orientation of the camera, i.e. a camera viewing direction, when moved along the camera path, the method may advantageously be extended to comprise the following steps:

- generating, in real time, an estimate of the camera viewing direction based on the sequence of images,

- storing the inspection data together with data indicative of the camera viewing direction at the inspection position in floor plan space.

The estimate of the camera viewing direction may be displayed in real time in order to support the user's navigation through the environment. Further, the stored camera viewing direction at the inspection positions makes the evaluation of the inspection data as well as a repetition of the inspection at the same inspection positions easier.

While other positioning solutions based on a sequence of images, such as simultaneous localization and mapping (SLAM), aim at global consistency of the estimate of the camera path and in particular need closed loops in the camera path, VO only aims at local consistency in the estimate of the camera path. This removes the need to keep track of all of the sequence of images, as needed in SLAM, and makes VO less demanding in terms of computational power. VO, in particular with a two-point calibration as described above, may thus be performed in real time and on a mobile device such as a tablet computer or a smartphone.

In an advantageous embodiment, the first estimate of the camera path in sensor space is generated by performing feature-based VO. In such a feature-based method, salient and repeatable features are extracted and tracked across subsequent images in the sequence of images. Alternatively, appearance-based VO, which uses the intensity information of all pixels in subsequent images, may be applied to generate the first estimate of the camera path. However, feature-based methods are generally more accurate and faster than appearance-based methods. For estimating the motion between subsequent images in the sequence, the well-known random sample consensus (RANSAC) algorithm is advantageously used due to its robustness in the presence of outliers.
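As a minimal, non-authoritative sketch of the idea (using OpenCV's ORB detector and essential-matrix estimation with RANSAC as stand-ins for whatever detector and robust estimator an actual implementation uses; the intrinsic matrix K is assumed to be known from camera calibration), the image-to-image motion could be estimated as follows:

    import cv2
    import numpy as np

    def relative_motion(prev_img, curr_img, K):
        """Estimate the camera motion between two consecutive frames (up to scale)."""
        orb = cv2.ORB_create(2000)                        # salient, repeatable features
        kp1, des1 = orb.detectAndCompute(prev_img, None)
        kp2, des2 = orb.detectAndCompute(curr_img, None)

        # Match features across the two images.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Robustly estimate the essential matrix with RANSAC to reject outliers.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        # Recover the relative rotation R and (unit-length) translation t.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t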

In a general three-dimensional case, there are six degrees of freedom (DoF) to be estimated for the camera for each image in the sequence, namely e.g. three coordinates for the position and three angles for the orientation in sensor space. In this case, five corresponding positions in sensor space and in floor plan space would be needed for estimating the transformation between sensor space and floor plan space.

However, in the two-dimensional case of a floor plan, which by definition shows only one floor or level, planar motion may be assumed. Thus, in generating the estimate of the camera path in sensor space, a vertical component of the camera path is neglected. In this case, only three parameters need to be estimated, namely, e.g. an angle and a distance travelled by the camera between subsequent images and a viewing direction. Thus, only two points are needed, which again is computationally less expensive. This leads to the above-described two-point calibration with two user inputs required by the method.
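A minimal numpy sketch of this two-point calibration under the planar-motion assumption is given below (function and variable names are illustrative; s1, s2 are the first and second position in sensor space, f1, f2 the corresponding user-provided positions in floor plan space):

    import numpy as np

    def two_point_calibration(s1, s2, f1, f2):
        """Similarity transform (scale, rotation, translation) from sensor to floor plan space."""
        s1, s2, f1, f2 = map(np.asarray, (s1, s2, f1, f2))
        vs, vf = s2 - s1, f2 - f1
        scale = np.linalg.norm(vf) / np.linalg.norm(vs)
        # Rotation angle between the two displacement vectors.
        angle = np.arctan2(vf[1], vf[0]) - np.arctan2(vs[1], vs[0])
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        t = f1 - scale * R @ s1                  # translation so that s1 maps onto f1
        return scale, R, t

    def sensor_to_plan(p, scale, R, t):
        """Map a sensor-space position p to floor plan space."""
        return scale * R @ np.asarray(p) + t

    # Example: calibrate from two point pairs, then map a later sensor-space position.
    scale, R, t = two_point_calibration((0.0, 0.0), (4.0, 0.0), (100.0, 200.0), (100.0, 300.0))
    print(sensor_to_plan((2.0, 1.0), scale, R, t))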

In an advantageous embodiment, the first position and the second position on the camera path are separated by at least a minimum distance, in particular by at least 1 m or at least 3 m. This ensures that the determined transformation between sensor space and floor plan space is reliable and accurate.

Multi-point calibration

A challenge in VO is that errors introduced by each image-to-image motion in the sequence accumulate over time. This generates the aforementioned drift of the first estimate of the camera path from the actual camera path. A solution to the problem of drift is to perform a re-calibration by establishing a third (or further) calibration point at a third position as follows.

Advantageously, the above method additionally comprises the steps of

- for a third image of the sequence of images, taken at a third position on the camera path, obtaining a third position in sensor space and receiving a third user input indicative of a third position of the camera in floor plan space,

- calculating a second transformation between sensor space and floor plan space based on the second position and third position in sensor space and the second position and third position in floor plan space,

- applying the second transformation for calculating data indicative of positions in floor plan space, which are located on the camera path after the third position.

In this way, a drift in the first estimate of the camera path is corrected, i.e. zeroed, at the third position. For the third position, the position of the camera in floor plan space (again) corresponds to its actual position in the environment. Evidently, such recalibration is advantageously repeated, e.g. in regular time or distance intervals. Since this extends the calibration from two points to many points, such method is called multi-point calibration.

Such method may further be extended to comprise the following steps:

- generating, in real time, an error measure for the estimate of the camera path,

- if the error measure exceeds a defined error threshold at a current position: outputting a warning or triggering the user to generate a further user input indicative of the current position of the camera in floor plan space, calculating a further transformation between sensor space and floor plan space based on the further position in sensor space and the further position in floor plan space.

Further, it is possible to apply the second or further transformation not only to positions on the camera path after the third or, respectively, further position, but also to a part of or all of the positions determined since the previous calculation of the transformation, i.e. the previous calibration. This is based on the assumption that already those positions are - to a certain degree - affected by the drift.

Accordingly, the above method may additionally comprise the steps of

- retrospectively applying the second transformation for calculating data indicative of positions in floor plan space, which are located on the camera path between the second position and the third position,

- in particular changing the stored data indicative of the inspection position in floor plan space for inspection data located on the camera path between the second position and the third position.

In this way, drift may also be corrected for inspection positions of inspection data that is already stored.
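As a sketch of how such a retrospective correction could look in code (reusing the illustrative sensor_to_plan helper and InspectionRecord fields introduced above; all names are hypothetical), records acquired between the two calibration points simply get their floor plan position recomputed with the newer transformation:

    def correct_stored_positions(records, t_second, t_third, scale, R, t):
        """Re-map the floor plan position of records acquired between two calibration points."""
        for rec in records:
            if t_second <= rec.timestamp <= t_third:
                rec.position_floorplan = tuple(sensor_to_plan(rec.position_sensor, scale, R, t))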

Visual inertial odometry

An accuracy of the first estimate of the camera path in sensor space may be improved by additionally including data of an inertial measurement unit (IMU), e.g. an accelerometer or a gyroscope, and/or of a magnetometer, e.g. a compass. In such embodiment, the method additionally comprises

- receiving acceleration data captured by an inertial measurement unit and/or orientation data captured by a magnetometer as they are moved along the camera path together with the camera,

- additionally using the acceleration data and/or orientation data for calculating the estimate of the camera path in sensor space.

In this case, the estimate of the camera path in sensor space may be generated by performing visual inertial odometry (VIO) on the sequence of images and at least one of the acceleration and orientation data. This improves the accuracy and makes the estimate of the camera path more robust, especially in situations with few overall features or few repeatable features in subsequent images of the sequence of images, as may be the case in long hallways or under poor light conditions.
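Real VIO pipelines (such as the one underlying ARKit) fuse visual and inertial measurements in a tightly coupled estimator; the following deliberately crude, loosely coupled blend is only meant to illustrate the basic idea that an IMU dead-reckoned increment can stabilize the visual increment when image features are scarce. All names and the weighting are assumptions, not the application's method:

    import numpy as np

    def fuse_increment(delta_vo, v_prev, accel_xy, dt, w_vo=0.9):
        """Blend a VO position increment with an IMU dead-reckoned increment.

        delta_vo : 2D position change estimated from two consecutive images
        v_prev   : horizontal velocity at the start of the interval
        accel_xy : mean horizontal acceleration over the interval (gravity removed)
        dt       : time between the two images
        w_vo     : trust placed in the visual estimate (between 0 and 1)
        """
        delta_imu = np.asarray(v_prev) * dt + 0.5 * np.asarray(accel_xy) * dt ** 2
        return w_vo * np.asarray(delta_vo) + (1.0 - w_vo) * delta_imu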

Displaying information

As mentioned before, the inspection may be facilitated by displaying various kinds of information to the user, i.e. the inspector.

In an embodiment, the method further comprises displaying, in real time, on a graphical representation of the floor plan, the inspection position and a current position of the camera in floor plan space. This supports the navigation of the user.

Further, the method may comprise displaying, in real time, on the graphical representation of the floor plan, the estimate of the camera path in floor plan space. It is understood that such estimate may be an aggregate estimate calculated by applying different transformations for the respective parts of the camera path. Such display of the camera path may again facilitate the navigation of the user. Moreover, it supports the user in keeping track of the progress of inspection, i.e. which parts of the environment have already been inspected. In case the camera viewing direction is estimated, the method may further comprise displaying, in real time, on a graphical representation of the floor plan, the estimate of the camera viewing direction at the current position in floor plan space. This again facilitates navigation.
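The aggregate path estimate mentioned above can be thought of as applying, to each sample of the sensor-space path, the transformation that was valid at that point in time. A hedged sketch, building on the illustrative sensor_to_plan helper; the data layout is assumed, not specified by the application:

    def path_in_floorplan(path_sensor, segments):
        """Map a timestamped sensor-space path to floor plan space, segment by segment.

        path_sensor : list of (timestamp, (x, y)) samples in sensor space
        segments    : list of (t_from, scale, R, t), one entry per calibration,
                      where each transformation is valid from t_from onwards
        Assumes every sample lies at or after the first calibration time.
        """
        result = []
        for ts, p in path_sensor:
            # Use the most recent transformation that started at or before this sample.
            valid = [s for s in segments if s[0] <= ts]
            _, scale, R, t = max(valid, key=lambda s: s[0])
            result.append((ts, tuple(sensor_to_plan(p, scale, R, t))))
        return result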

Also the two-point or multi-point calibration may be arranged in a simple and time-saving way by displaying the floor plan to the user. In such embodiment of the method, the step of receiving the first and/or second user input comprises the steps of displaying a graphical representation of the floor plan on a screen, and receiving an input event from the user indicative of a current position of the camera on the representation of the floor plan. The input event may e.g. be a tap or double tap on the screen.

Inspection workflow

The method for recording inspection data, and thus the inspection workflow, may further be automated in the following ways:

In an embodiment, the method additionally comprises triggering to automatically acquire inspection data in defined time intervals and/or in defined intervals of space along the camera path. In this way, more inspection data, e.g. images taken by the camera, may be acquired, which leads to a better inspection coverage of the environment.

Alternatively or additionally, the method may comprise automatically triggering acquiring the inspection data upon reaching a predetermined inspection position, in particular when the distance between a current position of the camera and the predetermined inspection position falls below a defined threshold, e.g. 1 m. This is particularly useful when the user is guided through the environment, e.g. as follows.
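A minimal sketch of such a proximity trigger (threshold, pixel scale and names are illustrative assumptions):

    import numpy as np

    def should_trigger(current_pos_plan, target_pos_plan, threshold_m=1.0, px_per_m=50.0):
        """True when the camera is within threshold_m of the predetermined inspection position.

        Positions are given in floor plan pixels; px_per_m converts the metric threshold.
        """
        dist_px = np.linalg.norm(np.asarray(current_pos_plan) - np.asarray(target_pos_plan))
        return dist_px <= threshold_m * px_per_m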

In an embodiment, the method additionally comprises generating guiding information for guiding the user to a predetermined inspection position. This may e.g. be done by displaying the predetermined inspection position in floor plan space on a graphical representation of the floor plan. Another possibility is displaying directions, e.g. in the form of arrows, to the predetermined inspection position. Such method makes it possible to guide the user along a predetermined, in particular optimized, route in order to cover all predetermined inspection positions and/or to save time.

Post-processing and reporting

The described method may be extended to allow for useful post-processing and reporting options.

In an embodiment, the method further comprises storing raw data indicative of the estimate of the camera path in sensor space. In particular, the raw data may be given as three room coordinates of the camera path. Additionally, three rotation angles of the device may be comprised in the raw data. Alternatively, the orientation or rotation of the device may be expressed as a quaternion. Accordingly, the raw data may in particular comprise the quaternion, i.e. four numbers describing the orientation or rotation. Further, a confidence measure for the accuracy of the estimated position on the camera path may be calculated and stored in the raw data.

Further, the method may comprise storing data indicative of the first, second and any further position in sensor space and of the first, second and any further position in floor plan space. This means that the calibration points are stored together with the raw data. Having the raw data as well as the calibration points available allows for various post-processing options, e.g. generating a replay of the camera path, determining an inspected area, which in particular may be the area imaged by the camera when being moved along the camera path, or correcting the estimate of the camera path or specific inspection positions during post-processing, i.e. after the inspection or the camera path is completed.
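One possible (hypothetical) layout for such a raw data log, covering the path samples with position, orientation quaternion and confidence measure as well as the calibration points:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class RawPathSample:
        timestamp: float
        xyz: Tuple[float, float, float]                # three room coordinates in sensor space
        quaternion: Tuple[float, float, float, float]  # device orientation (w, x, y, z)
        confidence: float                              # estimator's confidence for this sample

    @dataclass
    class CalibrationPoint:
        position_sensor: Tuple[float, float]
        position_floorplan: Tuple[float, float]

    @dataclass
    class SessionLog:
        path: List[RawPathSample] = field(default_factory=list)
        calibrations: List[CalibrationPoint] = field(default_factory=list)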

In a further embodiment, the method comprises automatically generating an inspection report. Since all relevant data, such as inspection data, inspection positions and the floor plan, are available, such inspection report may be output in a standardized form. This saves time on the part of the user. Also, it allows performing a quick check for completeness of the inspection data, e.g. just after completing the inspection or, in other words, the camera path.

In particular, the inspection report may contain at least one of the following:

- a graphical representation of the floor plan with position marks of the inspection locations,

- a graphical representation of the floor plan with an indication of the camera path, e.g. as a line or a heat map, or an inspected area, e.g. as a shaded area or a heat map,

- the inspection data, e.g. the camera image or the acquired NDT data, together with a graphical representation of the respective inspection position on the floor plan.

Cold start

A further embodiment of the method facilitates automatic relocalization subsequent to a cold start, i.e. after a device executing the method has been switched off and on again or after the estimate of the camera path has been lost or corrupted otherwise. Such method requires that a sequence of images has been taken in the same environment and that a two-point calibration as described above has been performed before cold start. In particular, a further sequence of images captured upon cold start needs to picture corresponding characteristic features of the environment as the sequence of images captured before cold start.

Such embodiment of the method further comprises

- generating and storing a representation of the environment in sensor space based on the sequence of images: In particular, the representation may contain characteristic features of the environment, such as changes in color or intensity.

- upon cold start, receiving a further sequence of images from the camera located at a cold start position,

- generating an estimate of the cold start position in sensor space based on the further sequence of images and the representation of the environment: This may be done by correlating corresponding characteristic features in the sequence of images and the further sequence of images, i.e. feature-based. Further acceleration data and/or orientation data may be taken into account to generate the estimate of the cold start position in sensor space.

- determining the cold start position in floor plan space based on the estimate of the cold start position in sensor space and on the transformation between sensor space and floor plan space calculated prior to cold start: In the case of multi-point calibration as described above, advantageously the transformation calculated last before cold start is applied.

Evidently, such method avoids the need for another two-point calibration after cold start, thereby saving time on the part of the user.
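A much simplified sketch of such a relocalization (matching the new frames' descriptors against stored keyframes and then reusing the last transformation; a real implementation would refine the pose from the matched features instead of simply adopting the best keyframe's position, and all names here are assumptions):

    import numpy as np
    import cv2

    def relocalize(new_descriptors, stored_keyframes, last_transform):
        """Estimate the cold start position in floor plan space from stored keyframes.

        stored_keyframes : list of (descriptors, position_sensor) saved before cold start
        last_transform   : (scale, R, t) calculated last before cold start
        """
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        best = max(stored_keyframes,
                   key=lambda kf: len(matcher.match(new_descriptors, kf[0])),
                   default=None)
        if best is None:
            return None
        scale, R, t = last_transform
        # Crude approximation: adopt the best-matching keyframe's sensor-space position
        # and map it to floor plan space with the pre-cold-start transformation.
        return scale * R @ np.asarray(best[1]) + t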

Computer program product

A second aspect of the invention relates to a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out any of the above methods.

In particular, the computer program product may be implemented using Apple’s ARKit 4 or a comparable product, which conveniently offers an algorithm for generating the estimate of the camera path in sensor space based on the sequence of images, e.g. by VO or VIO.

Inspection system

A further aspect of the invention relates to an inspection system comprising a camera configured to capture a sequence of images, e.g. a video camera, and a processor communicatively coupled with the camera and configured to carry out any of the above methods. The inspection system may e.g. be a tablet computer such as an iPad or a smartphone such as an iPhone. In such case, the camera may be used both for capturing the sequence of images used for generating the estimate of the camera path and for acquiring the inspection data in the form of photos.

The inspection system may be extended to further comprise a 360-degree camera configured to acquire inspection data. In particular, the 360-degree camera is connected to the processor via a wireless or wired connection.

As discussed above, it is advantageous that the inspection system further comprises a display, in particular a touch screen, in communication with the processor. Such inspection system may comprise a tablet computer or a smartphone with their respective displays.

Further, the system may comprise at least one of an inertial measurement unit (IMU), in particular an accelerometer and/or a gyroscope, and/or a magnetometer. By taking into account the inertial data and/or orientation data acquired by the IMU and magnetometer, respectively, VIO is facilitated as a method of generating the estimate of the camera path as discussed above.

Other advantageous embodiments are listed in the dependent claims as well as in the description below.

Brief Description of the Drawings

The invention will be better understood and objects other than those set forth above will become apparent from the following detailed description thereof. Such description makes reference to the annexed drawings, wherein:

Fig. 1 shows a block diagram of an inspection system according to an embodiment of the invention;

Fig. 2 shows a schematic view of a device for performing the method for recording inspection data according to an embodiment of the invention;

Fig. 3 shows a flow diagram of an inspection workflow according to an embodiment of the invention;

Fig. 4 shows a flow diagram of a method for recording inspection data according to an embodiment of the invention;

Figs. 5a to 5d show a schematic illustration of a transformation between sensor space and floor plan space according to an embodiment of the invention;

Figs. 6a and 6b show a schematic illustration of multi-point calibration between sensor space and floor plan space according to an embodiment of the invention;

Figs. 7a and 7b show a real-life example of a device performing a method for recording inspection data according to an embodiment of the invention.

Modes for Carrying Out the Invention

The inspection system of Fig. 1 comprises a processor 1 and a camera 2 communicatively coupled to the processor 1. The camera 2 may be a video camera configured to record a plurality of images, i.e. frames, per second, e.g. 30 or 60 frames/s. The processor 1 and the camera 2 may be integral parts of the same device, e.g. a tablet computer or a smartphone. The processor is configured to perform a method as described above.

Optionally, the system of Fig. 1 may also comprise a display 3, e.g. a touch screen such as in the case of a tablet computer or smartphone. The display 3 is in connection with the processor 1, and configured to receive and display data from the processor 1, e.g. images captured by the camera 2, a floor plan, in particular with inspection positions, or inspection data.

The system may further comprise an IMU 4 configured to acquire acceleration and/or orientation data. The IMU may again be an integral part of the device, such as in the case of a tablet computer or a smartphone.

Further, the system may comprise a 360-degree camera 5 communicatively coupled to the processor 1, e.g. via a wireless connection, and carried by the user or a drone during inspection. The 360-degree camera 5 is configured to take 360-degree images of the environment. Such images facilitate a more detailed documentation of the inspection and a reconstruction of the environment in post-processing, e.g. in an augmented reality (AR) model. This allows a third party, e.g. a remote inspector, to retrace and evaluate the inspection later on and remotely.

Optionally, the system may also comprise an NDT sensor 6 communicatively coupled to the processor 1. The NDT sensor 6 may e.g. be a mechanical hardness sensor, an ultrasound transmitter, a GPR transmitter acquiring NDT data or a profometer acquiring eddy current data. The NDT data may be recorded during the inspection as inspection data and in particular displayed on the display 3 if needed.

Fig. 2 shows a schematic view of an inspection device 11 which, in this case, is a conventional tablet computer or smartphone. The device 11 comprises a processor (not shown), a touch screen 13 on a front side, a camera 12, in particular mounted to a back side opposite to the front side, an IMU 14 and a magnetometer or electronic compass 15 as integral parts. Such device 11 is suitable and may be configured for recording inspection data according to the above method. At least a part of the method may be implemented using readily available VO or VIO solutions, such as Apple’s ARKit. In general, it is advantageous that the method is computer-implemented, e.g. as an app configured to display a user interface to the user. The app may guide the user through the inspection according to the method.

A real-life example of such device with an inspection app performing the above method is shown in Figs. 7a and 7b. Both pictures represent screenshots from a tablet computer held by a user during inspection, while the tablet computer displays a floor plan of the building and floor where the user is situated. Fig. 7a shows the display of the app during calibration. While the camera of the device is acquiring a sequence of images in order to perform positioning in sensor space, the user needs to input the current (first) position in floor plan space. This can be done by moving cross-hairs icon 21 and checking the tick 22 when the cross-hairs icon 21 is at the current position. The procedure is repeated for a second position. With that, two-point calibration is performed, and the device is ready for the actual inspection.

Fig. 7b shows the display of the app after some inspection data has been acquired at respective inspection positions indicated, e.g. by balloon icons such as icon 26 displayed on the floor plan. In general, the app may allow the user to choose between a camera view 23, which in Fig. 7b is in the background, and a floor plan view 24, which in Fig. 7b is in the foreground. The camera view shows the section of the environment, which is currently in the camera’s viewing angle. Typically, the user will acquire photos as inspection data in camera view. The floor plan view allows the user to see the determined current position on the floor plan, as indicated by dot symbol 25, together with the determined viewing direction of the camera, as indicated by the shaded segment adjacent to dot 25. Previous inspection positions, i.e. positions where inspection data have been taken, are shown as balloon icons 26. Moreover, the user may choose that the camera path up to the present position is displayed, e.g. as a line or, as indicated in Fig. 7b by shading 27, as a heat map depicting the inspected area.

Thus, the user can, in real time, monitor the camera’s current position as well as the inspection data already acquired and the area already inspected. This enables the user to take control over the inspection, e.g. navigate to a predetermined inspection position, correct the estimate of the current position (see below) or even correct or repeat inspection data or positions already acquired.

An inspection workflow according to an embodiment of the invention is shown in Fig. 3 in form of a flow diagram. In step S1, the user starts the inspection app on the inspection system or device, e.g. on a tablet computer or smartphone comprising a camera, an IMU and a touch screen. In step S2, the user selects a floor plan from a list of floor plans stored on the device or in a connected memory. In response, the app may display the selected floor plan to the user. In step S3, a positioning session is started within the app, either triggered manually by the user or automatically upon step S2 being completed. At the start of the positioning session, the user may be prompted to walk a few steps such that the camera and/or IMU acquires initial images and initial acceleration data, respectively, which are used in an initial calibration. In the initial calibration, an initial estimate of the orientation of the device may be generated.

In step S4, the two-point calibration described above begins. The user indicates the current position of the device on the floor plan displayed by the app, e.g. by a long-tap on the respective position on the touch screen. Optionally and as a user guidance, the user may be prompted by the app to do so. This position corresponds to the first position described before. Subsequently, the user walks a few meters away from the first position to a second position. Optionally and as a user guidance, the user may again be prompted to do so. In step S5, the user again indicates the current position of the device on the floor plan, e.g. as described above. With the inputs relating to the first and second positions, the app performs the two-point calibration, i.e. it calculates the transform between sensor space and floor plan space. Then, the device is ready for the actual inspection.

In step S6, the user - while carrying the device - follows his intended inspection path, i.e. the camera path. In general, it is important that the environment along the camera path is sufficiently illuminated and in particular shows sufficient features such that VO may be reliably performed. This implies that subsequent images captured by the camera show corresponding features such that a motion tracking can be done. During the inspection, the app may continuously and in real time indicate the calculated current position of the device on the floor plan to the user. This facilitates navigation and enables the user to check whether the calibration, i.e. the transformation between sensor space and floor plan space, is still correct or whether the current position has drifted.

While the user follows the inspection path, which in particular includes the user roaming freely without following a predetermined path, inspection data may be acquired manually, i.e. triggered by the user, or automatically, i.e. according to certain conditions such as in regular time or space intervals. This is done in step S7. Inspection data, e.g. an image by the camera, is acquired. In step S8, the inspection data is automatically tagged with the inspection position, i.e. the position of the device at the time when the inspection data is acquired. An advantage over conventional inspection methods is that no user interaction is required for the position tagging. This saves time and makes the inspection position more reliable.

Steps S7 and S8 are repeated as often as required, i.e. until the inspection path is completed. At the same time, the inspection positions where inspection data has been acquired may be shown on the floor plan in real time.

After completing the inspection path, the user terminates the positioning session and in particular exits the app in step S9. All inspection data with corresponding inspection positions are instantly available on the device, e.g. for checking a completeness of the coverage of the inspection or for post-processing.

Optionally, the inspection data and inspection positions may be transferred to a cloud memory in step S10. In the cloud memory, the inspection data can be accessed and evaluated by any authorized third party from remote. This facilitates simple post-processing and high-quality reporting.

Evidently, such inspection workflow is simple and may even be supported by guidance through the app. In comparison to conventional inspection routines, time is saved by automatic tagging of the inspection data with the corresponding inspection positions. Moreover, the inspection positions are acquired in a consistent and thus reliable manner. The whole inspection workflow is performed by the user with only one device, which makes it simple and convenient.

The flow chart of Fig. 4 shows the method for recording inspection data from the perspective of the app or, similarly, of the processor. The shown steps in particular start at step S3 of the above-described workflow, i.e. when the positioning session is started. In step S11, a sequence of images, e.g. a real time video recording, is received from the camera of the device, while the device is moved through the building to-be-inspected.

In step S12, a positioning algorithm, e.g. a VO algorithm, generates an estimate of the camera path in sensor space based on the sequence of images. This is usually done iteratively by extracting features from subsequent images, relating corresponding features in the subsequent images to each other and calculating the motion of the device in sensor space from the displacement of corresponding features between subsequent images.

Step S13 corresponds to step S4 in Fig. 3: At the first position, a first position is obtained in sensor space from the sequence of images. Also at the first position, a first user input indicative of the first position in floor plan space is received, e.g. via a tap of the user on the first position on the displayed floor plan.

Step S14 corresponds to step S5 in Fig. 3: At a second position, step S13 is repeated.

From the first and second position now being known in sensor space as well as in floor plan space, a transformation between sensor space and floor plan space is calculated in step S15. Since floor plan space typically is a 2D space but the real building is in 3D space, further constraints are necessary in order to be able to reconstruct the motion of the device in floor plan space from the sequence of images. A useful constraint for the inspection inside a building is the assumption of planar motion, i.e. the motion is purely horizontal with zero vertical component. In this case, a vertical component of positions in sensor space is neglected such that sensor space effectively becomes a 2D space. For two 2D spaces, the transformation between them is defined and may be determined from two points known in both spaces, such as the first and second position. The transformation is further illustrated in Figs. 5 and 6.

In step S16, corresponding to step S7 of Fig. 3, the device has been moved to an inspection position, i.e. a position where inspection data are acquired.

The inspection data, e.g. an image taken by the camera, are then received.

In step S17, corresponding to step S8 of Fig. 3, the inspection data are stored together with data indicative of the inspection position in floor plan space. The inspection position in floor plan space is typically derived by applying the determined transformation to the inspection position in sensor space that is calculated by the positioning algorithm on the basis of the sequence of images.

Steps S16 and S17 may be iterated for several inspection positions during an inspection. In the case of drift, i.e. an accumulation of positioning error, a re-calibration may be performed by iterating steps S14 to S17. This results in a further position, position n+1, being known in sensor space as well as in floor plan space. From the further position n+1 and the previous position n, an updated transformation between sensor space and floor plan space is calculated. The updated transformation is then used for determining subsequent positions in floor plan space from the sequence of images.

Figs. 5a to 5d schematically illustrate a transformation in 2D between sensor space and floor plan space. Such transformation in general comprises a translation, a rotation and a scaling. A motion, e.g. as estimated from subsequent images of the sequence of images taken by the camera, from point Sp1 to point Sp2 in sensor space is illustrated by the bold arrow in Fig. 5a. This motion corresponds to a motion from point Fp1 to point Fp2 in floor plan space as depicted by the thin dashed arrow. The desired transformation brings the two motions, and thus the arrows, in congruence, as shown in Fig. 5d. The transformation may be divided into intermediate steps: Between Figs. 5a and 5b, the sensor space arrow Sp1-Sp2 is translated in order to originate from the same point as Fp1-Fp2. Between Figs. 5b and 5c, the sensor space arrow is rotated in order to be oriented parallel to Fp1-Fp2. Between Figs. 5c and 5d, the sensor space arrow is finally scaled in order to have the same length as Fp1-Fp2. Such transformation may be represented by a matrix. Thus, calculating the transformation between sensor space and floor plan space amounts to determining the matrix.
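The same decomposition can be written down numerically; the sketch below composes elementary 3x3 homogeneous matrices for translation, rotation and scaling into one overall matrix (values are illustrative only; rotating and scaling about the common start point Fp1 rather than about the origin is expressed by sandwiching those steps between translations to and from Fp1):

    import numpy as np

    def translation(dx, dy):
        return np.array([[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]])

    def rotation(angle):
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def scaling(k):
        return np.diag([k, k, 1.0])

    fp1 = np.array([10.0, 20.0])                     # common start point in floor plan space
    move_to_fp1 = translation(3.0, 5.0)              # step Fig. 5a -> 5b: translate Sp1 onto Fp1
    M = (translation(*fp1) @ scaling(1.5) @ rotation(np.deg2rad(30.0))
         @ translation(*(-fp1)) @ move_to_fp1)       # then rotate and scale about Fp1

    sp2 = np.array([14.0, 23.0, 1.0])                # a sensor-space point, homogeneous coords
    fp2 = M @ sp2                                    # its image in floor plan space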

Figs. 6a and 6b extend the two-point calibration of Fig. 5 to multiple two-point calibrations, in this case five calibrations, performed in sequence. The bold arrow Sp11-Sp12-...-Sp16 represents the camera path, which usually is equivalent to the inspection path or the user’s path while carrying the inspection device, as estimated from the sequence of images taken by the camera and as transformed to floor plan space by an initial estimate of the transformation (“estimated camera path”). The thin dashed arrow Fp11-Fp12-...-Fp16, on the other hand, represents the camera path in floor plan space, or, more precisely, the “true camera path” in floor plan space, which is the quantity that one wishes to retrieve. The estimated camera path deviates from the true camera path, e.g. because of drift. In this way, errors in positioning accumulate over the camera path, such that at the end point Sp16/Fp16, the estimated position is off by drift error d. This means that positions, e.g. inspection positions, determined during the inspection over time get more and more inaccurate. In order to prevent this, multi-point calibration is performed, i.e. multiple two-point calibrations. At position Sp12/Fp12, a user input indicative of the current position in floor plan space is received. A first transformation is calculated from Sp11-Sp12 and Fp11-Fp12 and used in determining subsequent positions. Then again, at position Sp13/Fp13, a user input indicative of the current position in floor plan space is received. A second transformation is calculated from Sp12-Sp13 and Fp12-Fp13 and used in determining subsequent positions. And so forth. This is iterated as often as desired, either on the user’s initiative, e.g. because the user notices a significant drift, or triggered by the device prompting the user to perform re-calibration, e.g. because a determined error measure exceeds a certain threshold.

In this way, the positioning error is kept in an acceptable range, e.g. below 1 m, and the inspection yields reliable inspection positions associated to the inspection data. At the same time, such method is robust due to its iterative nature and control by the user. Further, such method is computationally cheap such that it can be performed on a mobile device in real time.