

Title:
AUTOMATED TUNING OF AN ESTIMATOR
Document Type and Number:
WIPO Patent Application WO/2019/179882
Kind Code:
A1
Abstract:
The present invention relates to a method for calibrating one or more sensor arrangements by means of an automated tuning of an estimator for estimating calibration parameters for the one or more sensor arrangements. The estimator is tuned based on different sets of error modelling coefficients in a parallel computing architecture. The invention also relates to a computer system performing the method and a machine-readable medium.

Inventors:
DROZDZ MAGDALENA (NL)
Application Number:
PCT/EP2019/056497
Publication Date:
September 26, 2019
Filing Date:
March 14, 2019
Assignee:
FNV IP B V (NL)
International Classes:
G05B13/04; G01C21/16
Domestic Patent References:
WO2015138024A2 (2015-09-17)
WO2002103398A1 (2002-12-27)
WO2000063646A1 (2000-10-26)
Foreign References:
EP2112472A2 (2009-10-28)
US20150041595A1 (2015-02-12)
CN106767900A (2017-05-31)
US6311129B1 (2001-10-30)
US20040133346A1 (2004-07-08)
US20090308157A1 (2009-12-17)
US9772186B1 (2017-09-26)
Other References:
TAK KIT LAU ET AL: "Inertial-Based Localization for Unmanned Helicopters Against GNSS Outage", IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 49, no. 3, 1 July 2013 (2013-07-01), pages 1932 - 1949, XP011519675, ISSN: 0018-9251, DOI: 10.1109/TAES.2013.6558029
HAMZA MAZIH: "Implementation of a protocol for acquisition and processing of strapdown gravity data from airborne inertial navigation system to compute geoid models in coastal areas", August 2017, ENSTA Bretagne
CHEINWAY HWANG; YU-SHEN HSIAO; HSUAN-CHANG SHIH; MING YANG; KWO-HWA CHEN; RENE FORSBERG; ARNE V. OLESEN: "Geodetic and geophysical results from a Taiwan airborne gravity survey: Data reduction and accuracy assessment", JOURNAL OF GEOPHYSICAL RESEARCH, vol. 112, 2007, pages B04407, XP055237878, DOI: 10.1029/2005JB004220
DIOGO AYRES-SAMPAIO; RICHARD DEURLOO; MACHIEL BOS; AMERICO MAGALHAES; LUISA BASTOS: "A Comparison Between Three IMUs for Strapdown Airborne Gravimetry", SURV GEOPHYS, vol. 36, 2015, pages 571 - 586
C. GOODALL; N. EL-SHEIMY: "Intelligent tuning of a Kalman Filter using low-cost MEMS inertial sensors", THE UNIVERSITY OF CALGARY
Attorney, Agent or Firm:
HERRE, Peter (DE)
Claims:
CLAIMS

1. A method for calibrating one or more sensor arrangements by means of an automated tuning of an estimator for estimating calibration parameters for the one or more sensor arrangements, the method comprising:

providing a first time series of first measurement data obtained from the one or more sensor arrangements to measure first physical properties;

providing a second time series of second measurement data obtained from a reference sensor arrangement to measure second physical properties, wherein the first time series and the second time series share a common time interval, and wherein at least a part of the first physical properties are time derivatives of the second physical properties;

modelling the estimator, wherein the estimator is configured to:

initialize a processing loop of the estimator with an input value set of error modelling coefficients on which the calibration parameters depend, and

estimate the calibration parameters and the values of the second physical properties over the common time interval by processing the first time series of first measurement data and the second time series of second measurement data in the processing loop;

modelling a cost function based on a statistical analysis of at least a part of the estimated values of the second physical properties over the common time interval such that a yield of the cost function relates to an accuracy of the estimated values of the second physical properties;

tuning the estimator based on different input value sets of the error modelling coefficients and based on the corresponding cost functions in a parallel computing architecture; and

retrieving the error modelling coefficients that correspond to the cost function providing an optimum yield.

2. The method of claim 1, wherein modelling the estimator further comprises:

modelling dynamics equations that link the first physical properties and their corresponding first measurement data with the second physical properties and their corresponding second measurement data;

modelling error dynamics equations by perturbing the dynamics equations, wherein time-dependent errors incurred by the one or more sensor arrangements are modelled with the calibration parameters depending on the error modelling coefficients; and

wherein modelling the estimator is based on the error dynamics equations.

3. The method of claim 1 or claim 2, wherein the estimator is a Kalman filter.

4. The method of any of claims 1-3, wherein the cost function is modelled based on differences between at least a part of the estimated values of the second physical properties and at least a part of reference values of the second physical properties over the common time interval determined from the second time series of second measurement data.

5. The method of any of claims 1-4, wherein the cost function is modelled based on a physical condition that must be fulfilled for at least a part of the estimated values of the second physical properties according to the laws of physics.

6. The method of any of claims 1-5, wherein the cost function comprises one or more cost functions, and wherein the error modelling coefficients are retrieved that provide an optimum yield over the one or more cost functions.

7. The method of any of claims 1-6, wherein tuning the estimator comprises:

executing multiple instances of the estimator in a parallel computing architecture, each with a different input value set of the error modelling coefficients;

calculating multiple instances of the cost function in the parallel computing architecture, each corresponding to an instance of the estimator initialized with a different input value set of the error modelling coefficients;

determining whether one of the multiple instances of the cost function provides the optimum yield; else

iteratively repeating the step of tuning the estimator based on a new plurality of different input value sets of the error modelling coefficients until one of the calculated cost functions provides an optimum yield.

8. The method of any of claims 1-7, wherein at least a part of the error modelling coefficients defined in the plurality of different input value sets is within a limited value range or is a value from a list.

9. The method of any of claims 1-8, wherein at least a part of the error modelling coefficients defined in the plurality of different input value sets is constant.

10. The method of any of claims 1-9, wherein determining the new plurality of different input value sets comprises:

selecting a subset of input value sets from the different input value sets where the corresponding cost functions are within a particular yield;

limiting the value ranges for the error modelling coefficients based on the subset of input value sets;

defining a new plurality of different input value sets of the error modelling coefficients based on the subset of input value sets.

11. The method of any of claims 1-10, wherein the first time series of first measurement data and the second time series of second measurement data comprise a plurality of time series of measurement data divided in sets of measurement data that share a common time interval.

12. A computer system, configured to perform the method of any of claims 1-11.

13. A machine-readable medium that comprises a plurality of program instructions, wherein the program instructions are adapted for causing at least one processor to perform the method of any of claims 1-11.

Description:
AUTOMATED TUNING OF AN ESTIMATOR

FIELD OF THE INVENTION

The present invention relates to a method for calibrating one or more sensor arrangements by means of an automated tuning of an estimator for estimating calibration parameters for the one or more sensor arrangements. The estimator is tuned based on different sets of error modelling coefficients in a parallel computing architecture. The invention also relates to a computer system performing the method and a machine-readable medium.

BACKGROUND OF THE INVENTION

Integrated navigation systems are widely used for positioning purposes within many different applications, e.g. aircraft applications, onshore positioning and subsea positioning of Remotely Operated Vehicles (ROVs). Usually, data from multiple sensors are integrated with an inertial measurement unit (IMU) by applying Kalman filtering. This approach makes it possible to combine the advantages of different positioning techniques, such as the long-term absolute accuracy of a Global Navigation Satellite System (GNSS) position and the high short-term relative accuracy of inertial sensors. Because each sensor/technique has its limitations, the combined approach mitigates them and provides a robust positioning solution.

Currently, inertial navigation systems (INS) are very popular. They have become even more appealing due to the rapid development of low-grade inertial sensors based on microelectromechanical systems (MEMS) technology. But integrating IMU sensors also carries challenges. Each IMU sensor is influenced by errors that need to be corrected by determining calibration parameters. These errors are usually modelled when designing the Kalman filter for data integration. The mathematical function for modelling the errors is the same; the parameters of the function differ for each type of IMU and application. The real challenge is to select proper values of the error modelling coefficients such that the mathematical function describes the real behavior of an error in an accurate way. In other words, a priori estimates of statistical parameters within the Kalman filter need to be adjusted for each individual system to achieve optimal performance. This process is referred to as Kalman filter tuning.

Kalman filters are usually manually tuned by a method referred to as a serial tuning procedure. The initial parameters are provided by a manufacturer or calculated using Allan Variance Analysis. Next, they are manually changed until the optimal performance of the filter is reached. The indicators of optimum performance can vary. Residuals between the IMU-based estimate and an aiding source should be minimized. Modelled errors to be compensated in calibration parameters, such as biases and scale factors, should behave according to the expected characteristics of a sensor provided by a manufacturer. A common approach to tune Kalman filters is to introduce a condition, such as gaps in position aiding, and minimize the free inertial drift during the gaps. Depending on the application, the condition indicating optimal tuning can vary; not all applications focus on trajectory or have position aiding. In general, a cost function is modelled based on a statistical analysis of at least a part of the IMU-based estimates such that a yield of the cost function relates to an accuracy of the IMU-based estimates.

Regardless of the chosen condition, the tuning process is a time-consuming and tedious procedure. In practice, tuning is finished when the filter performance is "good enough", because it is not possible to check all possible combinations of parameters. Because of the rapid development of inertial sensors, there is a high demand for optimizing the filter for specific applications in a relatively short amount of time.

The invented method builds on the patent document WO2015138024A2. Further, the graduation project "Implementation of a protocol for acquisition and processing of strapdown gravity data from airborne inertial navigation system to compute geoid models in coastal areas" by Hamza Mazih, August 2017, ENSTA Bretagne, is an example of a practical application of the invented method of parallelizing recursive problems.

WO2015138024A2 covers a concept of airborne gravimetry in combination with Lidar data for the extraction of gravity anomalies. It states the need for navigation data, gravity data and Lidar data. The navigation data are considered to be attitude and position, which make it possible to combine the mentioned data to generate orthometric heights and gravity field information. This method allows for the search for hydrocarbon or other commodity deposits, whereby gravimetry can be used to locate such deposits. During the search for such deposits, anomalies in the gravity field can be detected by carrying out exploration flights during which the gravity field is estimated. In a search for metals one would expect a stronger gravity field, while in the search for hydrocarbons one expects a weaker gravity field. There is no remark on INS/GNSS integration.

WO02103398A1 covers a procedure of airborne gravimetry with specific sensors such as a laser scanner and a gravity gradiometer. It focuses on a detailed procedure to combine different data to generate a DTM. It mentions inertial data, but only attitude, which is used to relate all data streams.

The technical paper "Geodetic and geophysical results from a Taiwan airborne gravity survey: Data reduction and accuracy assessment" by Cheinway Hwang, Yu-Shen Hsiao, Hsuan-Chang Shih, Ming Yang, Kwo-Hwa Chen, Rene Forsberg, and Arne V. Olesen, Journal of Geophysical Research, vol. 112, B04407, doi:10.1029/2005JB004220, 2007, describes techniques for obtaining gravity anomalies and/or geoid models from an airborne survey that includes a gravimeter and a GPS system onboard. Inertial techniques are not used. The paper describes validation of the obtained gravity by using repeat lines.

US6311129B1 describes the concept of integrated navigation using a Kalman filter. It focuses on tightly coupled integration of raw GNSS observations. It mentions the aspect of filter tuning, stating that measurement residuals can be used to assess the quality of filter tuning, but it does not expand on that and does not describe how the tuning should be performed.

US2004133346A1 describes the concept of inertial navigation with different aiding sources using a Kalman filter. It focuses on using attitude change measurements to estimate gyro biases better and ultimately improve the performance of the Kalman filter. It does not mention tuning or adjusting Kalman filter parameters.

US2009308157A1 describes a method of constructing an IMU. There is no relation to data integration.

US9772186B1 describes the concept of integrated navigation with an INS sensor as a base and other data types used as aiding sources. It describes a concept of a software platform with a GUI that houses algorithms for data integration. The concept focuses on building a hardware/software platform that enables efficient testing and validation of different types of IMU, as well as algorithms for data integration. The platform with GUI is used to verify the results of the Kalman filter solution. There is no direct concept of tuning as "adjusting" the algorithm parameters to obtain better performance of the algorithm.

The technical paper "A Comparison Between Three IMUs for Strapdown Airborne Gravimetry" by Diogo Ayres-Sampaio, Richard Deurloo, Machiel Bos, Americo Magalhaes, Luisa Bastos; Surv Geophys (2015) 36:571-586, DOI 10.1007/s10712-015-9323-5, describes the concept of airborne gravimetry where gravity is estimated from INS/GNSS integration. As different types of IMU were evaluated, a Kalman filter tuning was performed. The tuning was based on the assumption that gravity should be the same at crossover points. There is a generic description of the order of tuning parameters, but there is no detailed description of a tuning procedure. There is no reference to tuning automation.

The technical paper "Intelligent tuning of a Kalman Filter using low-cost MEMS inertial sensors" by C. Goodall, N. El-Sheimy, The University of Calgary, describes a method to make Kalman filter tuning more efficient by applying reinforcement learning. This method tries to reduce the iterative approach by introducing incremental learning.

WO0063646A1 describes an autonomous navigation system for an orbital platform incorporating a global positioning system based navigation device optimized for low-Earth orbit and medium-Earth orbit applications, including a 12-channel GPS tracking application-specific integrated circuit operating in concert with a microprocessor implementing an extended Kalman filter and orbit propagator, which autonomously generates estimates of position, velocity, and time to enable planning, prediction and execution of event-based commanding of mission operations. WO0063646A1 is an example of another type of determining continuous real-time earth-sun vector data from a GPS constellation by means of a Kalman filter. Parameters of such a Kalman filter could potentially be tuned by applying the method according to the present invention.

US9772186B1 describes a miniaturized inertial measurement and navigation sensor device and a flexible, simplified GUI operating in real time to create an optimum IMU/INS. The IMU includes multiple angle rate sensors, accelerometers, and temperature sensors to provide device stability. A navigation GUI tests algorithms prior to embedding them in real-time IMU hardware. MATLAB code is converted to C++ code tailored for real-time operation. Any point in the algorithm suite structure can be brought out as a data channel to investigate the pattern of operation. The data channels permit zooming in on the algorithm's operation for the open-loop angle, velocity and position drift measurements for bias-compensated channels. The GUI can be used to verify results of an extended Kalman filter solution as well as the implementation of the real-time attitude and heading reference system. When the code has been verified, it is compiled and downloaded into a target processor.

US6311129B1 describes a positioning method and a system for measuring the position of a vehicle on land, in the air, and in space, using measurements from a global positioning system receiver and an inertial measurement unit. An integrated Kalman filter processes all available measurements of the global positioning system (pseudo-range, delta range, carrier phase) and the solution of an inertial navigation system. The integrated Kalman filter is a multi-mode, robust Kalman filter, in which an optimal integrated mode is selected based on the measurement availability and filter stability. The highly accurate solution of the inertial navigation system, which is corrected by the Kalman filter, is used to aid on-the-fly resolution of the carrier phase integer ambiguity of the global positioning system in order to incorporate the carrier phase measurements into the Kalman filter, and to aid the carrier phase and code tracking loops of the receiver of the global positioning system to improve the receiver's jamming and high dynamic resistance.

US20040133346A1 describes a navigation system including a Kalman filter to compensate for bias errors in inertial sensing elements. An observed pitch, roll or heading change is input to the Kalman filter either from an aiding source or when the navigation system is in a known condition. The Kalman filter generates a correction signal that is provided to the navigation computation system.

US20090308157A1 describes an inertial measurement system having a triangular cupola shaped base structure with three mutually orthogonal sides and a bottom surface surrounding a hollow core. The bottom surface includes an aperture providing access to the hollow core. An inertial module is mounted on each of the sides and includes a gyroscopic rotational rate sensor and a linear accelerometer connected to a circuit board. The inertial measurement system also includes a motherboard and a plurality of metallization elements. The metallization elements extend from the bottom surface to the sides of the base structure and conductively connect the inertial module to the motherboard. The inertial measurement system may also include a nonconductive adhesive underfill positioned between the inertial module and the base structure.

DISADVANTAGES OF THE PRIOR ART

There are many references to airborne gravimetry (WO2015138024A2, WO02103398A1, the prior art geodetic and geophysical results in the Journal of Geophysical Research), but they assume that the gravity value is measured using a dedicated sensor/technique, usually a gravimeter. There is no reference to the possibility of computing gravity using INS/GNSS integration, nor to tuning.

Another group of documents (US6311129B1, US2004133346A1) describes data integration using a Kalman filter. The main focus is on adding/integrating different types of measurements as aiding sources to improve Kalman filter performance for navigation purposes. There are references to tuning as a procedure which needs to be performed, but there is no description of how to do it.

The concept of efficient validation of different types of IMU and data integration algorithms appears in US9772186B1. But there is no reference to tuning Kalman filter parameters.

The paper "A Comparison Between Three IMUs for Strapdown Airborne Gravimetry" is one example of many papers, usually submitted by universities or research institutes, that directly describe the concept of calculating gravity from INS/GNSS integration. There is also a reference to and description of tuning. Even though different approaches are used, the tuning is still rather manual and is a time-consuming procedure.

There are papers describing smarter techniques to tune Kalman filter parameters, for example "Intelligent tuning of a Kalman Filter using low-cost MEMS inertial sensors" by C. Goodall, N. El-Sheimy, The University of Calgary. But they mainly focus on algorithm improvements that allow smarter estimation of parameters instead of an exhaustive search, for example by introducing incremental learning. Such methods carry a risk of finding parameters adjusted to a specific scenario/data set, while tuning parameters need to be generic and work in any conditions.

In general, there are concepts of estimating gravity from INS/GNSS integration and the need for tuning Kalman filter parameters for that purpose. There are also concepts to improve the efficiency and robustness of Kalman filter tuning. But there is no link between them, nor a clear procedure which can be easily implemented and utilised in production.

There are a couple of different strategies that make the tuning process more efficient. A common approach is to use intelligent methods such as reinforcement learning. These methods can be very powerful and speed up the tuning process significantly. However, a major drawback is the risk of convergence to a local minimum when the error function is not convex.

In the case of methods that adjust parameters based on comparison with previous tuning results, convergence to a local, non-optimal minimum is a risk. Moreover, when using a different cost function, the result is often different in terms of the optimal set of parameters. As a result, the tuning process becomes complicated considering that:

a few cost functions should be considered, e.g. free inertial drift of position, attitude stability, and gravity repeatability (at the same location, gravity should have the same value);

aiding residuals should be within expected accuracy, their behavior should be close to white noise; and

accelerometer and gyroscope biases and scale factor should behave according to sensor characteristics.

This generates a huge amount of data to analyze and requires more sophisticated techniques to draw valid conclusions.

OBJECTS AND SUMMARY OF THE INVENTION

It is an object of the present invention to perform the tuning of an estimator as automatically as possible, in a universal way that easily covers unknown future scenarios.

An estimator is configured to process a time series of measurements containing errors such as statistical noise, and determines estimates of unknown variables by considering feedback from an aiding source. A widely known estimator is, for example, the Kalman filter. Consequently, optimizing the tuning process as discussed in the background section is not only an issue when applying a Kalman filter, but for all kinds of estimators. An estimator may be applied to any kind of time series analysis, for example in signal processing, robotics, trajectory optimization, and econometrics, in applications where the current state of a system needs to be estimated.

The present invention is defined by the independent claims. The dependent claims concern optional features of some embodiments of the invention.

The present invention is based on the idea that an estimator and a cost function are executed in multiple instances using different error modelling coefficients, preferably in a parallel computing architecture such as a cloud framework. The execution of the estimator and the cost function over the multiple instances in parallel is referred to as a "tuning run". As each cost function outputs its corresponding yield, the performance of a plurality of different settings for modelling the error can be monitored. Depending on the number of different settings to be processed in the tuning run, several further tuning runs with further different settings for modelling the error might be required to identify the settings corresponding to the cost function that provides an optimum yield. The entire process is referred to as "auto-tuning" or "automated tuning". The present invention allows a system requiring tuning of an estimator to be:

- Faster. The auto-tuning procedure is implemented using a parallel computing architecture. This significantly reduces tuning time (from one month of manual tuning to a few days of auto-tuning) while preserving a significant number of candidates for the tuning parameters.

- Cost-effective. An experienced engineer is needed only to trigger the procedure and interpret the results. As a result, the cost of human work is significantly reduced, while the cost of parallel computing services is rather low.

- Robust. Instead of reducing the number of candidates for the tuning parameters, auto-tuning allows the number to be increased through efficient processing. Thanks to that, the risk of missing a parameter combination or estimating a local minimum is limited.

- Flexible and efficient. Auto-tuning is restricted neither to the type of sensors comprising the measuring system, nor to the type of algorithm. Adding a new cost function to support specific conditions is straightforward. The tuning can be done in parallel for many different conditions, which makes production of the final tuning result much more efficient.

The auto-tuning procedure concerns a full tuning cycle comprising one or more tuning runs, starting from the preparation of a data set requiring time series analysis and the definition of the system for a specific application, and finishing at the final settings for modelling the error of the system.
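As a rough illustration of such a tuning run, the sketch below scores several candidate value sets against a cost function and retrieves the best one. The estimator is a deliberately simplified recursive filter, and the names (`run_estimator`, `cost`, the single coefficient `gain`) are illustrative assumptions, not part of the claimed method.

```python
import random

# Sketch of one tuning run: each candidate set of error modelling
# coefficients initializes its own estimator instance, each instance is
# scored by a cost function, and the best-scoring set is retrieved.

def run_estimator(coeffs, measurements):
    """Toy stand-in for the estimator: an exponential filter whose
    gain plays the role of a single error modelling coefficient."""
    gain, estimate, estimates = coeffs["gain"], 0.0, []
    for z in measurements:
        estimate += gain * (z - estimate)   # simple recursive update
        estimates.append(estimate)
    return estimates

def cost(estimates, reference):
    """Yield of the cost function: mean absolute error vs. the reference."""
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(reference)

random.seed(1)
reference = [0.1 * t for t in range(100)]                       # true signal
measurements = [r + random.gauss(0.0, 0.5) for r in reference]  # noisy data

# One tuning run over several candidate coefficient sets.
candidates = [{"gain": g} for g in (0.05, 0.2, 0.5, 0.9)]
yields = {c["gain"]: cost(run_estimator(c, measurements), reference)
          for c in candidates}
best_gain = min(yields, key=yields.get)
print(best_gain)
```

A real tuning run would replace `run_estimator` with the full INS/GNSS estimator and execute the instances concurrently rather than sequentially.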

According to a first aspect of the present invention, there is provided an automated tuning procedure of an estimator for estimating calibration parameters for one or more sensor arrangements. As a result of the automated tuning, the one or more sensor arrangements are calibrated. As a first prerequisite for the tuning of the estimator, a data set to be processed by the estimator is provided. The data set comprises (i) a first time series of first measurement data obtained from the one or more sensor arrangements to measure first physical properties and (ii) a second time series of second measurement data obtained from a reference sensor arrangement to measure second physical properties. The first time series and the second time series share a common time interval. At least a part of the first physical properties are time derivatives of the second physical properties, establishing a fusion of interrelated sensors in a combined system that allows errors to be minimized through a calculated redundancy of information.
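A minimal numeric illustration of this prerequisite, under the assumption that the first physical property is velocity (measured with a constant sensor bias) and the second is position (measured by an accurate reference such as GNSS); all numbers are illustrative:

```python
# The first time series measures the time derivative (velocity) of the
# second physical property (position) over a common time interval; the
# constant sensor bias is the kind of error the calibration must capture.

dt = 0.1                                  # common sampling interval [s]
n = 200

# True position over the common interval, and its time derivative.
true_pos = [0.5 * (t * dt) ** 2 for t in range(n)]
true_vel = [(true_pos[i + 1] - true_pos[i]) / dt for i in range(n - 1)]

bias = 0.3                                     # hypothetical constant bias
first_series = [v + bias for v in true_vel]    # biased velocity sensor
second_series = true_pos                       # reference position sensor

# Integrating the first series should reproduce the second series up to
# the sensor error: the redundancy the estimator exploits.
integrated = [second_series[0]]
for v in first_series:
    integrated.append(integrated[-1] + v * dt)

drift = integrated[-1] - second_series[-1]     # accumulated bias effect
print(round(drift, 2))
```

The accumulated drift equals the bias times the elapsed time, which is exactly the redundancy an estimator can exploit to calibrate the bias.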

As a second prerequisite for the tuning of the estimator, the estimator is modelled. In some embodiments, dynamics equations are modelled that link the first physical properties and their corresponding first measurement data with the second physical properties and their corresponding second measurement data. Then, error dynamics equations are modelled by perturbing the dynamics equations. Time-dependent errors incurred by the one or more sensor arrangements are modelled with the calibration parameters depending on error modelling coefficients. Based on the error dynamics equations, the estimator is modelled.

The estimator is configured to estimate the calibration parameters and the values of the second physical properties over the common time interval by processing the first time series of first measurement data and the second time series of second measurement data in a processing loop. The processing loop of the estimator is initialized with an input value set of error modelling coefficients on which the calibration parameters depend.

As a third prerequisite for the tuning of the estimator, one or more cost functions are modelled based on a statistical analysis of at least a part of the estimated values of the second physical properties over the common time interval such that a yield of a cost function relates to an accuracy of the estimated values of the second physical properties. Different value sets of the error modelling coefficients are generated and used to initialize the estimator for performing a tuning run. For each of the different value sets of the error modelling coefficients, the one or more cost functions are calculated to provide a corresponding yield. The output of the tuning run is a list of yields, each associated with the value set of the error modelling coefficients that caused it. Further tuning runs are performed with further different value sets of the error modelling coefficients until a value set of error modelling coefficients is identified having an associated yield in a determined optimum range.
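The succession of tuning runs can be caricatured in a few lines; the quadratic cost with a minimum at 0.42, the range-narrowing rule, and all numeric choices are purely illustrative assumptions:

```python
import random

# Repeated tuning runs: each run scores a plurality of value sets of a
# single coefficient, and further runs with narrowed value ranges are
# performed until a yield falls within a determined optimum range.

random.seed(0)

def yield_of(coeff):
    """Toy cost function with its optimum (minimum) at coeff = 0.42."""
    return (coeff - 0.42) ** 2

optimum_range = 1e-4          # a yield this small counts as optimal
lo, hi = 0.0, 1.0             # initial value range of the coefficient
best_yield, best_coeff = float("inf"), None

for run in range(20):                         # bounded number of tuning runs
    value_sets = [random.uniform(lo, hi) for _ in range(16)]
    results = sorted((yield_of(c), c) for c in value_sets)
    if results[0][0] < best_yield:
        best_yield, best_coeff = results[0]
    if best_yield <= optimum_range:
        break
    # Next run: narrow the range around the best-performing subset.
    top = [c for _, c in results[:4]]
    lo, hi = min(top), max(top)

print(run + 1, round(best_coeff, 2))
```

In the real procedure, each of the 16 evaluations inside a run would be an independent estimator instance executed in parallel.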

In some embodiments, multiple instances of the estimator are executed in a parallel computing architecture, each of the multiple instances with a different value set of the error modelling coefficients. After an instance of the estimator has been processed, the corresponding one or more cost functions are calculated in the parallel computing architecture. It is determined whether one of the multiple instances of the cost function already provides a yield in a determined optimum range. If not, the step of executing multiple instances of the estimator based on a new plurality of different input value sets of the error modelling coefficients is performed again until one or more of the calculated cost functions provides the optimum yield.
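A compact sketch of this parallel execution, using Python's standard thread pool as a stand-in for the parallel computing architecture (a process pool or a cloud framework would suit a CPU-bound estimator better); the estimator, data, and candidate values are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Each worker runs one estimator instance with its own value of the
# error modelling coefficient, then calculates its cost function.

measurements = [1.0, 2.0, 2.5, 3.5, 5.0, 5.5]
reference = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

def estimate_and_score(gain):
    """One instance: toy recursive estimator plus its cost function."""
    est, out = 0.0, []
    for z in measurements:
        est += gain * (z - est)
        out.append(est)
    mad = sum(abs(e - r) for e, r in zip(out, reference)) / len(out)
    return gain, mad          # (value set, yield of the cost function)

candidate_gains = [0.1, 0.3, 0.5, 0.7, 0.9]
with ThreadPoolExecutor(max_workers=len(candidate_gains)) as pool:
    results = dict(pool.map(estimate_and_score, candidate_gains))

best = min(results, key=results.get)   # instance with the optimum yield
print(best)
```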

In some embodiments, at least a part of the error modelling coefficients defined in a plurality of different input value sets is within a limited value range or is a value from a limited list.

In some embodiments, at least a part of the error modelling coefficients defined in a plurality of different input value sets is constant.

In some embodiments, determining the new plurality of different input value sets for a new iteration in the tuning process comprises selecting a subset of input value sets from the different input value sets where the corresponding cost functions are within a particular yield. The values of the error modelling coefficients of this subset are used to identify smaller boundaries for one or more error modelling coefficients in another tuning run, thereby further limiting the value range of at least one of the error modelling coefficients.
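The subset selection and range limiting might look like this for a single coefficient; the threshold of 1.5 times the best yield and all numeric values are illustrative assumptions:

```python
# (coefficient value, cost-function yield) pairs from a previous tuning run.
run_results = [
    (0.05, 1.90), (0.20, 0.45), (0.35, 0.30),
    (0.50, 0.26), (0.65, 0.31), (0.80, 0.40), (0.95, 0.70),
]

# Select the subset whose yields are within 1.5x of the best yield.
best_yield = min(y for _, y in run_results)
subset = [c for c, y in run_results if y <= 1.5 * best_yield]

# Limit the value range of the coefficient based on that subset.
lo, hi = min(subset), max(subset)

# Define a new, denser plurality of input value sets inside the range.
n = 7
new_sets = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
print(lo, hi, [round(v, 2) for v in new_sets])
```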

The estimator may be configured to produce estimates of unknown variables by estimating a joint probability distribution over the variables for each timeframe of the time series. In some embodiments, the estimator is a Kalman filter.

In some embodiments, a first type of one or more cost functions is modelled based on a statistical analysis of a comparison of at least a part of the estimated values of the second physical properties and at least a part of reference values of the second physical properties over the common time interval determined from the second time series of second measurement data.

In some embodiments, a first type of cost function is a position-based cost function using a position calculated by the estimator and a reference position provided by a reference measurement system such as GNSS. The yield of the cost function is based on the differences between position values over the entire trajectory.

In some embodiments, a first type of cost function calculates free inertial drift during gaps in the aiding data. Free inertial drift is present when there is no aiding data. To expose the drift, gaps in the aiding data are introduced at the same locations and with the same lengths for each run of the estimator. The differences between the reference positions and those estimated by the estimator are computed only during the gaps, where the drift is clearly observable. The mean value of all differences during the gaps is computed to express the mean drift. The smaller the mean drift, the better the parameters.
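A sketch of this drift cost function is shown below, assuming one-dimensional positions for simplicity (real trajectories would be 3D) and the introduced gaps given as index ranges into the time series.

```python
import statistics

def mean_drift(est_pos, ref_pos, gaps):
    """Mean free inertial drift: the average absolute difference between
    estimated and reference positions, evaluated only inside the aiding
    gaps (given as index ranges) where the drift is clearly observable."""
    diffs = [abs(e - r)
             for start, stop in gaps
             for e, r in zip(est_pos[start:stop], ref_pos[start:stop])]
    return statistics.mean(diffs)

# Hypothetical run: the reference stays at 0 while the estimate drifts
# away inside the two introduced gaps (indices 2-3 and 6-8)
ref = [0.0] * 10
est = [0.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.2, 0.4, 0.6, 0.0]
drift = mean_drift(est, ref, gaps=[(2, 4), (6, 9)])
```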

In some embodiments, a first type of cost function is a gravity-based cost function. Reference gravity is provided from gravimetric measurements or a very accurate local geoid model. It may be known over some area as gridded values, along lines, or at specific points only. Once the reference gravity data is known, the estimated gravity data and the reference gravity data are compared. The measure of the yield of the gravity-based cost function would be a mean of the differences. How the differences are computed (how the statistical analysis is performed) depends on the available reference data and the data set used for the estimation. In general, the statistics need to include interpolation such that the compared gravity values relate to exactly the same location. The statistic can be computed for absolute gravity values or for gravity anomalies, in which case appropriate geoid reductions need to be applied first.
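The interpolation requirement can be illustrated with a minimal sketch, assuming reference gravity is known along a line as (location, value) samples; the piecewise-linear interpolation and the mean-of-absolute-differences measure are illustrative simplifications, not the patent's prescribed statistics.

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of reference samples (xs, ys) at
    along-line location x; xs must be ascending."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("location outside the reference line")

def gravity_yield(estimated, ref_xs, ref_ys):
    """Mean absolute difference between estimated gravity samples
    (location, value) and reference gravity interpolated to exactly
    the same locations."""
    diffs = [abs(g - interp(x, ref_xs, ref_ys)) for x, g in estimated]
    return sum(diffs) / len(diffs)

# Hypothetical reference line and two estimated gravity samples
ref_xs, ref_ys = [0.0, 10.0, 20.0], [100.0, 110.0, 120.0]
cost = gravity_yield([(5.0, 104.0), (15.0, 116.0)], ref_xs, ref_ys)
```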

In some embodiments, a first type of cost function is a rotation-based cost function. The measurement system comprising the one or more sensor arrangements is mounted in a setting which is able to measure changes of rotation angles very accurately, so that the changes of rotation angles between the measurement system and the setting can be compared. Such a setting could be a turn table. The differences between the reference values, delivered from the turn table servomotors or as attitude changes estimated from camera observations by applying computer vision (or any other technique), and the estimated values extracted from the estimator can be calculated. The mean value of the differences can express the yield of this cost function.

In some embodiments, a second type of one or more cost functions is modelled based on a condition that must be fulfilled for at least a part of the estimated values of the second physical properties according to the laws of physics. The condition is used to define a measure which indicates when the second type of cost function provides the best yield. The statistical analysis is used to extract and compute the measure.

From the set of estimated physical properties, one specific physical property is selected for which a specific condition is valid. Statistics suitable for the condition are applied in order to compute the measure expressing the yield of the cost function.

The condition can directly act on the estimated physical properties. If the condition is indirect, the estimated physical properties are used as input for another measuring technique. The cost function acts on the outcome of the fusion of estimated physical properties and the other measuring technique.

In some embodiments, a second type of cost function is a gravity-based cost function where reference gravity is not known. Instead, the condition is used that gravity is the same at the same location. When a data set is available in which a specific location is covered multiple times over the common time interval, multiple values of gravity are available for an analyzed location. The spread between these values can be computed (in statistics: the standard deviation) to express the resemblance of gravity at a specific location. The resemblance is the measure expressing the yield of the cost function. Depending on the character of the available data, the condition can be applied at specific points (e.g. line crossings) or along a part of a trajectory that is covered multiple times. In the latter case more points are available, so the final measure is the mean of the standard deviations at many points (e.g. along a line). Again it expresses a resemblance, but an average resemblance for an area. In the current application a line (called a repeat line) is used, and multiple points with a spacing reasonable for gravity analysis are generated to calculate the average resemblance. Note that the resemblance can be calculated for absolute gravity or for gravity anomalies; if calculated for gravity anomalies, appropriate reductions need to be applied.

In some embodiments, a second type of cost function is a position-based cost function where a reference position is not known. The measurement system comprising the one or more sensor arrangements is mounted and moved in such a way that the position follows the same path (e.g. a circle). One could imagine such a system for in-house, local calibration of small IMU systems. The resemblance of the estimated position to the assumed trajectory could be calculated and act as the measure of the yield of this cost function.

The examples above refer to cost functions which act directly on the estimated physical properties. In some embodiments, the cost functions act indirectly on the estimated physical properties: the measurement system comprising one or more sensor arrangements is coupled with another technique, i.e. the estimator provides georeferencing for another measurement technique, e.g. coupling of an inertial navigation system for georeferencing a camera or a scanner. The point clouds, e.g. from a camera or a scanner, are then combined with each other along the trajectory using the IMU/GNSS position trajectory. The cost function can act on the point cloud directly: if a specific feature is scanned multiple times within the whole trajectory (and, respectively, one tuning run), the location, orientation and shape of the feature detected from the point cloud should be the same. The measure of the cost function yield is thus the resemblance of location, orientation and shape of the feature extracted from the georeferenced point cloud.

In some embodiments, the one or more cost functions comprise a plurality of cost functions. The calibration parameters that are retrieved are those associated with the input value set of the error modelling coefficients providing an optimum yield over the entire plurality of cost functions.

In some embodiments, multiple cost functions are combined to a combination of cost functions defining measures that either express the yield of each of the multiple cost function separately or express a combined yield of the combination of cost functions.

In some embodiments, the plurality of cost functions comprise at least one cost function of the first type and at least one cost function of the second type.

In some embodiments, the one or more sensor arrangements comprise IMU sensors. The IMU sensors may comprise three accelerometers and three gyroscopes. In some embodiments, the one or more sensor arrangements comprise further sensors in order to integrate IMU data such as position and velocity with other types of data such as depth, acoustic ranges, and speeds. In some embodiments, the IMU sensors may be extended by additional sets of accelerometers and gyroscopes beyond the three accelerometers and three gyroscopes comprising the base measurement sensors of the IMU.

In some embodiments, the sensor related errors can be divided into three main groups:

• Bias: this concerns the initial bias, which can usually be removed during calibration; bias instability, which cannot be removed before integration and needs to be modelled; and bias noise, which leads to an effect described as Rate Random Walk (RRW), although this effect may be very small and negligible for the majority of applications. A bias represents one of the calibration parameters.

• Scale factor: a scaling effect which, together with the bias error, directly influences the raw sensor readings according to the formula:

x_corrected = (1 + l) · x_observed + b (1)

where l is the scale factor and b is the bias. A scale factor represents one of the calibration parameters.
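As a minimal illustration, equation (1) can be applied to a raw reading as follows; note that the '+' sign in the scale term is reconstructed from the standard scale-factor error model, as the operator is illegible in the source text.

```python
def correct_reading(x_observed, l, b):
    """Equation (1): compensate a raw sensor reading for scale factor l
    and bias b: x_corrected = (1 + l) * x_observed + b.
    The '+' in the scale term is an assumed reconstruction."""
    return (1 + l) * x_observed + b

# e.g. a 1 % scale factor and a small bias on a raw reading of 2.0
corrected = correct_reading(2.0, l=0.01, b=0.05)
```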

• Noise: this is mainly related to electrical noise. If integrated, it causes errors which are described, in the case of sensors like accelerometers and gyroscopes, as Velocity Random Walk or Angle Random Walk. This effect cannot be removed prior to integration; it needs to be modelled with error modelling coefficients.

When integrating sensor data within a system, the abovementioned errors need to be considered. The initial bias may be removed by an initial sensor calibration. Biases and scale factors are modelled as calibration parameters within an estimator such as a Kalman filter. In some embodiments, the sensor related errors are assumed to be described by a Gauss-Markov process:

dx/dt = -(1/T) · x + w (2)

where x is the state vector, w is the noise and T is the correlation time. The noise and the correlation time represent error modelling coefficients. In the case of IMU sensors, it should be noted that prior to raw sensor data integration, gravity needs to be removed from the accelerometer readings as well. As gravity can only be approximated by a mathematical model, the remaining gravity bias may also be modelled as a Gauss-Markov process.

In some embodiments, noise present in raw IMU sensor data causes attitude and velocity to drift when integrated. This needs to be considered within the estimator as well. Attitude and velocity corrections may be modelled as a random walk process where noise is the main driving aspect. In that case, the correlation time related to a sensor is so large that 1/T can be assumed to be zero:

dx/dt = w (3)

where w is the noise.
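The relationship between equations (2) and (3) can be illustrated with a simple discrete-time simulation, sketched below for a scalar state; the discretization is illustrative and ignores the exact scaling of the driving-noise variance with the sampling interval.

```python
import math
import random

def gauss_markov(T, sigma, dt, n, seed=0):
    """Discrete-time first-order Gauss-Markov process, equation (2):
    x[k+1] = exp(-dt/T) * x[k] + w[k], with w ~ N(0, sigma).
    For very large T, exp(-dt/T) -> 1 and the process degenerates into
    the random walk of equation (3), x[k+1] = x[k] + w[k]."""
    rng = random.Random(seed)
    phi = math.exp(-dt / T)
    x, samples = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        samples.append(x)
    return samples

# The correlation time T and the noise sigma are the error modelling
# coefficients being tuned
samples = gauss_markov(T=1000.0, sigma=0.1, dt=1.0, n=100)
```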

In some embodiments, the parallel computing architecture is provided by a cloud service, for example Azure Cloud. Azure Cloud provides a service of renting machines for computation purposes. The usage of computers (their number and processor speed) is determined automatically. It also supports users with a framework which simplifies implementing the desired actions. It provides built-in elements such as functions, tables and requests. Functions are triggered by requests, which are used to navigate through large amounts of data. Results or parameters can be provided or stored within tables, which are easy to import/export. Azure fully supports the .NET Framework, which simplifies the upload of C# applications to the cloud.

In some embodiments, more than one cost function is calculated to validate a set of error modelling coefficients during each run. Selection of the best set of error modelling coefficients is done by minimizing the yield of the involved cost functions.

The inventive method utilizes a parallel computing architecture for recursive optimization processes, such as auto-tuning. The auto-tuning process can be divided into the following steps:

selection and preparation of a dataset;

defining of the initial error modelling coefficients; and

iteratively performing tuning runs, each tuning run performed with a plurality of different value sets of the error modelling coefficients until a yield of the cost function is in an optimum range.

In some embodiments, the results of the final tuning run are validated using a different dataset.

In some embodiments, a first tuning run is performed with relatively wide parameter boundaries for selected error modelling coefficients while keeping the remaining error modelling coefficients constant.

In some embodiments, a second tuning run is performed with wide parameter boundaries for all error modelling coefficients.

In some embodiments, a fine tuning run is performed with small parameter boundaries for all error modelling coefficients.

In some embodiments, a fine tuning is performed with small parameter boundaries for selected error modelling coefficients while keeping the remaining error modelling coefficients constant or within wide parameter boundaries.

According to a second aspect of the present invention, there is provided a processing platform executing a dedicated auto-tuning procedure as recited in the method according to the first aspect of the present invention. The processing platform is a software platform implemented in a parallel computing architecture. The platform consists of a set of methods allowing the steps of the auto-tuning procedure to be executed in an efficient manner.

According to a third aspect of the present invention, there is provided a machine-readable medium having suitable program instructions to realize the inventive method, for example, on a general-purpose computer or in a programmable integrated circuit. The machine-readable medium may be any kind of physical or non-physical data carrier such as, for example, a computer disk, computer tape, CD, DVD, Blu-ray, a semiconductor memory, or a signal transmitted over a computer network.

BRIEF DESCRIPTION OF THE DRAWINGS

Further features, objects and advantages of the invention will become apparent from the following detailed description, in connection with the annexed schematic drawings, in which:

Fig. 1 shows a sample auto-tuning process for calibrating IMU sensors by means of a Kalman filter according to an embodiment of the present invention,

Fig. 2 shows free inertial drift versus different combinations of tuning parameters representing the yield of a cost function of an auto-tuning process for calibrating IMU sensors according to an embodiment of the present invention,

Fig. 3 shows processing results with initial parameters before performing an auto-tuning process according to an embodiment of the present invention,

Fig. 4 shows processing results with optimal parameters after having performed an auto-tuning process according to an embodiment of the present invention, and

Fig. 5 shows an example application of a Kalman filter for calibrating IMU sensors and a GPS aiding.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

In detail, the invention uses measured sets of data and a set of constraints or parameters in order to optimize a model based on said constraints by means of determining the yield of cost functions. Said optimizations, while recursive in nature, can be executed in parallel.

As an example to demonstrate the present invention, a method is described in detail in which one or more IMU sensors are integrated with a GNSS position within a Kalman filter and the parameters of the Kalman filter are calibrated. In said example, the following method steps are performed, as shown in Fig. 1, in order to use the present method for optimizing an INS-based navigation system for gravimetric applications such as the locating of hydrocarbon, mineral or ore deposits and the determining of geoid models:

1. Selection and preparation of a proper dataset. Depending on the type of application, the data needs to be prepared such that specific cost functions can be used. The available cost functions are based on the concept of minimizing free inertial drift and on the fact that the gravity value is constant at a specific location.

2. Defining of the initial Kalman filter parameters comprising the error modelling coefficients. Kalman filter parameters are strictly related to the type of IMU. For each type of IMU, a manufacturer usually provides a specification which can be used to define a rough set of initial parameters.

3. First auto-tuning run - wide parameter boundaries for selected error modelling coefficients. During the first tuning run, a selected set of error modelling coefficients from the initial parameters is tuned.

4. Second auto-tuning run - wide parameter boundaries for all error modelling coefficients. The second tuning run aims to examine all error modelling coefficients. At this stage, their boundaries are well defined by the outcome of the first tuning run.

5. Fine auto-tuning run - small parameter boundaries for all error modelling coefficients. The last tuning run is the actual fine tuning. After the optimal local minimum is selected by the second tuning run, all error modelling coefficients are tuned within a small boundary and with a small step.

6. Validation of the results with a different dataset. After the first auto-tuning run is done, the following tuning runs can be performed with multiple data sets to improve the robustness of the solution.

The set of Kalman filter parameters consists of around 20 different parameters comprising the error modelling coefficients. For each parameter, a specific value needs to be selected. The tuning aims to find a specific combination of parameter values such that the performance of the Kalman filter is maximized. Each auto-tuning run provides the set of the best parameter combinations as an outcome.

The following error modelling coefficients influence the filter performance and differ per IMU sensor:

• initial standard deviations for gyro and accelerometer biases and scale factors, and gravity (5 error modelling coefficients)

• spectral densities for gyro and accelerometer biases and scale factors, and gravity (5 error modelling coefficients)

• time constants for gyro and accelerometer biases and scale factors, and gravity (5 error modelling coefficients)

• spectral densities for attitude and velocity (2 error modelling coefficients)

The measure of Kalman filter performance is mathematically expressed by cost functions. Minimizing a cost function makes it possible to find the combination of parameters providing the best performance of the Kalman filter. The type of cost function is related to the type of application.

The commonly used cost function is based on the concept of minimizing free inertial drift. The GNSS position is selected as the reference position, thereby determining a reference trajectory. The reference trajectory is used to compute the free inertial drift. To avoid the risk of constraining the parameters to specific dynamics, a couple of position gaps of different lengths are introduced to compute the average free inertial drift.

Within the auto-tuning procedure, a second type of cost function is introduced. It is based on the assumption that gravity is constant at a specific location. In detail, the method uses a repeat line (a line that a vehicle follows a few times within one data set) - the gravity profile along a repeat line should be the same at different moments in time. The gravity profiles resulting from a few passages along the repeat line are compared. To represent the consistency of the gravity results among the generated gravity profiles, the standard deviation is computed. Within the cost function the standard deviation is minimized - the better tuned the Kalman filter, the smaller the spread among the gravity profiles.

The computation of the standard deviation (SD) is done at discrete points along the repeat line. Using the trajectories from each passage along the repeat line, an average repeat-line trajectory is generated. Discrete points along the repeat line are selected with a spacing suitable for the specific application.

For a single passage along the repeat line, the gravity profile is linked with the position "profile" using time stamps - a gravity vs. 3D position datum is created. Using this datum, gravity values for every discrete point are interpolated. As this procedure is repeated for all repeat line passages, there are several gravity values for each discrete point (the number of gravity values is equal to the number of passages along the repeat line). A gravity SD profile is generated along the average repeat line by computing the SD of the gravity values at each discrete point. A couple of statistical measures are computed from the gravity SD profile; the mean value is the most important one, as it represents the final consistency of gravity for a specific set of parameters, and it is used as the indicator of the best set of parameters.
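This procedure can be sketched as follows, assuming the gravity profiles have already been interpolated to the same discrete points along the averaged repeat line:

```python
import statistics

def repeat_line_yield(profiles):
    """profiles: one gravity profile per passage, each already
    interpolated to the same discrete points along the averaged repeat
    line.  The yield is the mean of the per-point standard deviations
    of gravity across the passages."""
    per_point_sd = [statistics.stdev(values) for values in zip(*profiles)]
    return statistics.mean(per_point_sd)

# Hypothetical: three passages, two discrete points on the repeat line
profiles = [[100.0, 101.0], [100.2, 101.2], [100.4, 101.4]]
yield_value = repeat_line_yield(profiles)
```

A smaller yield indicates better consistency of gravity among the passages, and hence a better-tuned filter.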

The auto-tuning method uses a combination of the two types of cost functions to find the best set of parameters. For each cost function, a set of the best parameter combinations is selected. Parameters common to both cost-function-specific sets are selected as the final tuning parameters.

Initial parameters may be defined based on sensor specifications. Additionally, an Allan Variance analysis may be performed. However, these methods allow initializing only a couple of parameters (the spectral densities of attitude, velocity and biases). Scale factors are initialized with the same value as the corresponding bias. Time constants are set to 1000 seconds. The initial boundaries for each parameter are defined based on experience. In the first tuning run they are wide, e.g. the attitude spectral density changes by steps of two in the exponent within 10^4 to 10^15. To simplify the process, not much attention is paid to exact numbers or the size of the boundaries. Usually a few manual runs are done to define the edges of the boundaries - after crossing the edges the filter does not converge or converges very poorly.

During the first tuning run, a selected set of parameters is tuned. Scale factors are usually fixed to one value. Time constants are represented by three widely spread values (500, 1000, 2000 seconds). This run aims to find the potential areas of a local minimum. The spectral densities of velocity, attitude, and accelerometer and gyroscope biases are examined because these parameters have the strongest influence on the filter performance. The second tuning run aims to examine all parameters. At this stage, their boundaries are well defined by the outcome of the first tuning run. The values of the spectral densities usually differ by 2-3 orders of magnitude. For time constants, the step is usually around 500 seconds.

The last tuning run is the actual fine tuning. After the optimal local minimum is selected by the second tuning run, all parameters are tuned within a small boundary and with a small step.

During each tuning stage, all cost functions are minimized. In practice, for each cost function the 100 best-performing parameter combinations are selected. Then, values of parameters that repeat within all cost functions are selected as the best ones. The selected set of parameters is used as the initial set for the next tuning round. After each stage, the outcome may be validated manually or automatically - a user performs processing and examines other aspects, for instance bias and scale factor behavior. Usually tuning is done with 2-3 datasets. Additional datasets are used for validation purposes.
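The intersection of the per-cost-function top lists can be sketched as follows; the (value set, yield) data layout and the example yields are assumptions made for illustration:

```python
def common_best(results_a, results_b, top_n=100):
    """results_a/results_b: (value set, yield) pairs from two cost
    functions.  Keep the top_n lowest-yield value sets of each and
    return the sets common to both, used as the initial set for the
    next tuning round."""
    def top(results):
        return {vs for vs, _ in sorted(results, key=lambda r: r[1])[:top_n]}
    return top(results_a) & top(results_b)

# Hypothetical yields of three value sets under two cost functions
drift_cost = [(("set1",), 0.1), (("set2",), 0.2), (("set3",), 0.9)]
gravity_cost = [(("set2",), 0.05), (("set3",), 0.1), (("set1",), 0.8)]
survivors = common_best(drift_cost, gravity_cost, top_n=2)
```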

The complete tuning procedure is an extensive task. To make the process efficient, all tuning steps are performed within the Azure environment. The processing is scaled such that multiple runs with different sets of parameters can be processed at the same time. Processing 160,000 combinations of parameters using a 15 h data set would take one year on a single processor (PC). Using the Azure setup, it can be done within four days.

The process is also automated to limit the user workload. After the data set is uploaded to the Azure server and the parameter boundaries are typed in, a set of three functions needs to be triggered to start the process. The majority of the tuning time is processing time by the Azure framework, during which user interaction is not needed. The processing results are organized within a table; they can then be easily sorted to select the best set of parameters. The described auto-tuning procedure was used to successfully tune multiple different IMU sensors. As an example, a gravimetric application was selected in which a high-grade IMU was coupled with a GNSS position to provide gravity anomaly estimation for geodetic applications such as geoid or orthometric height estimation without the need of a gravimeter sensor. Figure 5 presents the concept of the data integration.

Figure 2 shows the result of tuning done with a high-grade IMU. The tuning was performed using a data set in which the IMU and the GNSS position were integrated, with a few GNSS gaps introduced. The selected data was processed 160,000 times, each time with a different set of a priori Kalman filter parameters. For each set of parameters, the averaged drift per minute was computed.

Figure 2 is the graphical representation of the correlation between the free inertial drift and the different a priori parameters of the Kalman filter. The free inertial drift (average, in meters per minute) is shown versus different combinations of tuning parameters (each combination has its own specific number - file number). The chart shows that there are three local minima. Further analysis could reveal which minimum is the best by analyzing the behavior of biases and residuals.

Figures 3 and 4 present the quality of the results before and after tuning. It is visible that before tuning, the position was rejected at many turns; additionally, the free inertial drift was on the order of a few meters. After tuning, the position residuals stay within a few centimeters and the maximum free inertial drift is around 2 m.

The presented auto-tuning method aims to define a straightforward and efficient tuning procedure. The process can be sped up by incorporating cloud services such as Azure, AWS or Google. The tuning time is limited to one week (for one specific IMU type). That is significantly shorter considering that manual tuning can last as long as one or two months. During auto-tuning, many combinations of parameters are examined. The selection of the optimal set of parameters is based on minimizing a couple of cost functions. Additionally, the implemented framework is flexible - incorporating a new cost function or introducing different filters is straightforward. For instance, the set of cost functions can be extended by a cost function specific for an application, such as a gravity repeatability cost function for gravimetric applications. This approach makes it possible to achieve a more robust result in comparison with using one cost function.

Additionally, a wide range of tuning parameter combinations can be evaluated. As a result, the auto-tuning is finalized when the best possible performance of an IMU is reached. This makes the process of selecting an optimal IMU for a particular application more efficient. Multiple types of IMU can be evaluated at the same time. Moreover, the selection of the most suitable IMU can be based not only on the sensor specification but also on the final performance under real conditions.

FIELDS OF APPLICATION

The method according to the present invention shall be understood to be suitable for any system allowing the estimation of physical properties that vary with time. Such a system has to be observable, i.e. the input data allow the physical properties to be calculated by applying the mathematical relations containing the links between the input data and the physical properties, where the mathematical relations are captured within a system model. The three components - input data, system model and physical parameters - are the key components of the majority of applications. As the applications are created to answer a variety of questions in the best possible and most reliable way, the optimization of one of the three components is involved. With the growing demand for solving problems within very complex systems, the optimization can be challenging. With increasing complexity, the risk of converging to a local but not appropriate minimum is higher. Moreover, the complexity may make capturing the optimization problem in a mathematical way simply not feasible. Especially in these cases, the method according to the present invention can be highly beneficial. An automated tuning does not require defining an optimization method. It requires only providing the three key components of every application and indicating the one which needs to be optimized by providing multiple candidates for the component. The optimization is performed in a parallel computing architecture where each instance of a tuning run is executed with a different candidate. The method assumes defining a cost function and a measure corresponding to the cost function that expresses the yield of the cost function. The candidates providing the best yield of the cost function are the optimal candidates for the system.

As mentioned each of the three components of the system can be optimized: the input data, the system model, and the physical properties.

The most common case is when the system model needs to be optimized to allow the best performance of the system and the best estimation of the physical parameters. In practice this mainly includes tuning the error modelling coefficients that the system's calibration parameters depend on. Examples of such applications can be:

a) Geophysical applications:

Exploration applications where the set of sensor measurements (magnetic, seismic, gravimetric based detection) provides the data input and the system model links the sensor data and the detection outcome; the set of parameters within the system needs to be tuned to provide reliable detection of oil, gas or minerals.

Optimization of IMU sensors used for LiDAR surveys to use them in INS/GNSS fusion for gravity extraction; the input data are sensor data; the system model contains the relations between the sensor data and the calculated physical properties (e.g. gravity); the set of parameters related to sensor errors needs to be tuned to optimize the gravity estimation.

In LiDAR surveys, an optimization of the survey conditions which influence the survey outcome (i.e. wind perturbations cause vibration and influence the quality of the data); the physical properties are the expected quality of the measured parameters; the system model describes the links between the conditions and the measured parameters; the coefficients related to the varying conditions are altered in order to provide the best quality of the measured parameters.

In measurement campaigns (geoid modeling campaigns) where the measurements collected in a specific pattern (e.g. measurements following parallel lines with a specific spacing) are used to generate a model (i.e. a set of mathematical equations or a grid model); the parameters related to varying patterns of trajectories are altered to provide the optimum model outcome.

For varied surveys involving sensor fusion, the measuring sensor set needs to be aligned/prepared for a survey in order to provide the optimum measurements. The relations between the multiple sensors need to be constrained and defined. The parameters describing the varying alignment scenarios are varied in order to provide the best alignment. An example is subsea navigation (ROV, AUV) involving multiple sensors: how long and in which pattern to move in order to secure a good quality of the outcome from the whole survey.

In geoid and gravity modeling, multiple data sources need to be fused together to provide a reliable model, for example fitting the globe or a local area the best; parameters related to the weights of the different data sets, dependent on data availability, quality etc., can be altered in order to provide the optimum model.

In site characterization and ground penetration analysis: optimization of the sensors required to provide a reliable interpretation of the ground layers; parameters reflecting the various characteristics of the sensors and the environmental influence (e.g. vibrations, refractions etc.) are varied to provide the desired accuracy with minimum costs (e.g. to minimize the drilling depth).

b) Surveying applications:

In ephemeris modelling, observations from many reference stations need to be fused in order to estimate the satellite trajectory; the parameters describing the varying input (how many reference stations, in which geometry, quality of data) can be varied to optimize the model.

In GNSS measurements (PPP, RTK), corrections need to be estimated (tropospheric, ionospheric etc.) to secure high-accuracy measurements. The corrections are estimated based on many observations from reference stations. Parameters related to the quality and availability of a station and to the geometrical relations of stations can be varied to provide the best quality of correction coefficients.

For rapid ambiguity detection in GNSS networks (e.g. RTK), vary parameters describing the reference-station geometry and the constellation of satellites (availability, quality) in order to predefine an expected ambiguity in a reliable way.

c) Measurement campaigns:

In preparation of measurement campaigns, where e.g. the locations of beacons, the measurement network layout, and the type of sensors/technologies used need to be predefined to secure a required accuracy.

In subsea navigation (ROV, AUV), different layouts of networks (e.g. of LBL beacons) can be varied to provide the optimum accuracy of LBL measurements.
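A crude way to compare candidate beacon layouts, offered here only as a hedged sketch (real LBL network planning relies on acoustic geometry and error models far beyond this), is to score each layout by the worst-case distance from any point of the survey area to its nearest beacon and keep the layout minimising that score:

```python
import math

def worst_coverage(layout, survey_points):
    # Worst-case distance from a survey point to its nearest beacon:
    # smaller means tighter coverage everywhere in the area.
    return max(min(math.dist(p, b) for b in layout) for p in survey_points)

def best_layout(layouts, survey_points):
    # Keep the candidate layout with the smallest worst-case distance.
    return min(layouts, key=lambda l: worst_coverage(l, survey_points))

# Toy 2-D example: four beacons spread over the corners of the survey
# square cover it better than four beacons clustered in one corner.
area = [(x, y) for x in range(0, 101, 25) for y in range(0, 101, 25)]
clustered = [(0, 0), (10, 0), (0, 10), (10, 10)]
square = [(0, 0), (100, 0), (0, 100), (100, 100)]
chosen = best_layout([clustered, square], area)
```

The scoring function is the placeholder here; the surrounding vary-and-select loop is what the paragraph above describes.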

In subsea navigation (ROV, AUV), parameters related to the usage of different sensors (DVL, USBL, LBL, depth, laser scanner, cameras) can be varied to achieve a required accuracy at minimum cost. Also in subsea navigation, parameters reflecting the influence of the environment (temperature, salinity, pressure, currents etc.) on sensors pre-selected for the campaign can be varied to select the optimum parameter set.

Monitoring with drones or small unmanned planes (in case of fires, flooding, or other high-risk areas and situations): vary the parameters related to environmental effects to optimize the trajectories and limit the risk of vehicle damage.

Monitoring of vibrations and movements of buildings, dams and other civil engineering structures: vary parameters relating to different configurations of measuring sensors in order to successfully monitor a specific feature (e.g. detect vibrations of a pillar, or possible cracks).

Indoor navigation: vary parameters describing the configuration of many beacons (WiFi, radio-navigation) to optimize the accuracy of navigation within the whole area of an indoor structure (e.g. shopping malls, storage warehouses, rigs, manufacturing warehouses).

d) Calibration of sensors/systems:

Varying parameters describing the length of the gyro coil, the operational temperature, and the type of fluid (depending on the gyro type: MEMS, mechanical, FOG, RLG) to achieve the desired accuracy of rotation rates.

Varying parameters describing the operating conditions of accelerometers, inclinometers and gravimeters to achieve the desired accuracy in a specific range (e.g. 2g, 10g).

Varying parameters describing the operating conditions of interferometers for distance measurements. Calibration of strain sensors as used in CPT (cone penetration testing), SPT (standard penetration testing) and the like.

e) Traffic control and optimization of trajectories:

Air security and reliability: in systems utilizing different techniques (ILS, GNSS, LAAS) for aircraft trajectory estimation, vary parameters referring to these techniques to obtain accurate and secure trajectory determination.

Air traffic control: vary parameters describing different configurations of lanes, potential trajectories, number of planes etc. to optimize the aircraft traffic.

Sea traffic control: vary parameters referring to the layout, depth and type of waterways to optimize sea traffic in ports or a TSS (traffic separation scheme).

Road traffic control (highways, motorways etc.): vary parameters related to the condition of the road (number of lanes open, road surface condition) and the number and type of cars to optimize the vehicle flow.

Public transportation scheduling, thereby supporting dynamic scheduling within a public transportation network.

f) Energy grid:

The number and type of necessary auxiliary power stations may be optimized based on the availability of natural resources, such as wind or sunshine, relative to a certain power consumption load.

g) Delivery monitoring:

In case of delivery cars: how many cars will be required to deliver goods in a timely manner, depending on traffic situations and other constraints; how long the resting times should be; where the deliveries are; optimized route navigation; prediction of delivery times.

h) Architecture and crowd control:

Escape routes, and movement of people at airports, arenas, event locations, railway or subway stations. Further, the numbers and locations of facilities and amenities can be optimized depending on the size of a building and the number of tenants, e.g. for smart city, hospital or condominium planning.

i) Logistics:

Transport of luggage at the airport, routing of letters and parcels in postal distribution facilities, optimizing manufacturing flows in factories.

j) Civil engineering:

Optimization of system design and performance, wherein the strengths of materials may dictate dimensions, e.g. in scaffolding, ship design or aircraft design, dependent on the parameters of use of the systems.

Optimization of the design of structures or structure components to support their secure and efficient usage, e.g. selecting the proper design for road turns such that the geometry allows proper vehicle flow at a certain speed, or designing rail components (switches, transition areas between soil and metal/concrete) to minimize vibrations of the train and improve travel comfort for passengers. Defining efficient inspection layouts based on the frequency of inspections in the past, the quality of the inspection technique, and the wear/lifetime of different components or structures, e.g. in monitoring the quality of rail, road or energy networks: inspecting the quality of components and their surroundings (e.g. the amount of greenery to be removed).

k) Oil and gas:

System optimization, e.g. related to drilling, applying pressure, product flow optimization in refineries.

l) Point cloud analysis:

In laser scanning: optimization of laser system parameters (density of points, intensity, angle of view) with respect to the scanned structures in order to provide a reliable point cloud interpretation; e.g. optimizing airborne LiDAR system parameters to detect ground points in the presence of thick greenery, or optimizing land laser systems to detect different building surfaces.

In feature detection: optimizing a suitable model and model parameters to allow efficient feature detection, e.g. defining the best model to detect power lines between houses and poles with the highest accuracy rate.

The particulars contained in the above description of sample embodiments should not be construed as limitations of the scope of the invention, but rather as exemplifications of some embodiments thereof. Many variations are possible and are immediately apparent to the person skilled in the art. In particular, this concerns variations that comprise a combination of features of the individual embodiments disclosed in the present specification. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their legal equivalents.