


Title:
AUTOMATED, REAL TIME PROCESSING, ANALYSIS, MAPPING AND REPORTING OF DATA FOR THE DETECTION OF GEOTECHNICAL FEATURES
Document Type and Number:
WIPO Patent Application WO/2018/201180
Kind Code:
A1
Abstract:
Embodiments of the present invention are directed to automated systems and methods of classifying and mapping rock fragmentation at an underground drawpoint. The methods include scanning the underground drawpoint to acquire data about the drawpoint and mapping the acquired data to generate a 3D point cloud comprising a plurality of points. The mapped data is resampled to reduce the number of points in the 3D point cloud and a classifier is trained on samples of the resampled data to delineate between different classifications of rock fragmentation. The method includes generating a representation of the different classifications and/or spatial distributions of the rock fragmentation. Embodiments of the present invention are also directed to methods of optimising autonomous vehicle operation.

Inventors:
STEWART PENNY (AU)
BRUNTON IAN (AU)
Application Number:
PCT/AU2017/051117
Publication Date:
November 08, 2018
Filing Date:
October 16, 2017
Assignee:
PETRA DATA SCIENCE PTY LTD (AU)
International Classes:
G01S17/89; G01C7/06; G06K9/46
Domestic Patent References:
WO2016197251A1 (2016-12-15)
WO2011094818A1 (2011-08-11)
Other References:
CAMPBELL, A.D. ET AL.: "Application of laser scanning to measure fragmentation in underground mines", MINING TECHNOLOGY, TRANSACTIONS OF THE INSTITUTIONS OF MINING AND METALLURGY: SECTION A, 16 March 2017 (2017-03-16), XP055557485, Retrieved from the Internet [retrieved on 20171127]
ONEDERRA, I. ET AL.: "Measuring blast fragmentation at Esperanza mine using high resolution 3D laser scanning", MINING TECHNOLOGY, TRANSACTIONS OF THE INSTITUTIONS OF MINING AND METALLURGY: SECTION A, vol. 124, no. 1, 21 October 2014 (2014-10-21), XP055557497, Retrieved from the Internet [retrieved on 20171127]
BAMFORD, T. ET AL.: "A real-time analysis of rock fragmentation using UAV technology", 6TH INTERNATIONAL CONFERENCE ON COMPUTER APPLICATIONS IN THE MINERALS INDUSTRIES, 14 July 2016 (2016-07-14), XP055557508, Retrieved from the Internet [retrieved on 20171130]
ZHANG J. ET AL.: "SVM-Based Classification of Segmented Airborne LiDAR Point Clouds in Urban Areas", REMOTE SENSING, 31 July 2013 (2013-07-31), pages 3749 - 3775, XP055557516, [retrieved on 20171130]
BRUNTON, I. ET AL.: "Impact of Blast Fragmentation on Hydraulic Excavator Dig Time", FIFTH LARGE OPEN PIT MINING CONFERENCE, 3 November 2003 (2003-11-03), XP055557520, Retrieved from the Internet [retrieved on 20171130]
Attorney, Agent or Firm:
SPRUSON & FERGUSON (AU)
Claims:
CLAIMS

1. An automated method of classifying and mapping rock fragmentation at an underground drawpoint, the method comprising:

scanning the underground drawpoint to acquire data about the drawpoint;

mapping the data about the drawpoint to generate a 3D point cloud comprising a plurality of points;

resampling the mapped data to reduce the number of points in the 3D point cloud;

sampling the resampled data;

training a classifier on the samples of the resampled data to delineate between different classifications of rock fragmentation; and

generating a representation of the different classifications and/or spatial distributions of the rock fragmentation.

2. The method of claim 1, wherein scanning of the underground drawpoint is executed using a light detection and ranging (LIDAR) system.

3. The method of claim 2, wherein the LIDAR system is handheld or mounted on a vehicle.

4. The method of claim 1, wherein scanning of the underground drawpoint is executed using photogrammetric methods.

5. The method of claim 1 , wherein scanning of the underground drawpoint is executed using one or more video cameras, such as one or more 360° spherical cameras.

6. The method of claim 5, wherein the one or more video cameras are handheld or mounted to a vehicle, such as a drone.

7. The method of any preceding claim, wherein mapping the data about the drawpoint to generate the 3D point cloud comprises simultaneous localization and mapping (SLAM).

8. The method of any preceding claim, including generating eigenvalues for at least some of the points of the 3D point cloud at a plurality of scales of the 3D point cloud.

9. The method of claim 8, wherein the eigenvalues are generated for all of the points of the 3D point cloud using principal component analysis (PCA).

10. The method of claim 8 or 9, wherein training the classifier on the resampled data includes conducting support vector machine learning based on the eigenvalues.

11. The method of claim 10, including developing a signature representing the variation of dimensions of the 3D point cloud over the plurality of scales of the 3D point cloud and basing the support vector machine learning on the signature.

12. The method of any preceding claim, including training the classifier to delineate between different types and/or sizes of rock fragments.

13. A method for optimising a bucket fill factor of an autonomous vehicle, such as a loader, using the method of classifying rock fragmentation and/or spatial distributions of rock fragmentation at an underground drawpoint, as claimed in any one of claims 1 to 12.

14. A system to automatically classify and map rock fragmentation at an underground drawpoint, the system comprising: a scanning device to scan the underground drawpoint to acquire data about the drawpoint; a memory to store the acquired data, the memory in communication with a processor to: map the data about the drawpoint to generate a 3D point cloud comprising a plurality of points; resample the mapped data to reduce the number of points in the 3D point cloud; sample the resampled data; train a classifier on samples of the resampled data to delineate between different classifications of rock fragmentation; and generate a representation of the different classifications and/or spatial distributions of the rock fragmentation.

15. The system of claim 14, wherein the scanning device is selected from one of the following: a light detection and ranging (LIDAR) system; one or more video cameras, such as one or more 360° spherical cameras.

16. The system of claim 14 or 15, wherein the scanning device is handheld or mounted to a vehicle.

17. The system of any one of claims 14 to 16, wherein the processor further executes the method of any one of claims 7 to 12.

18. A computer readable medium having stored thereon computer executable code to automatically classify and map rock fragments at an underground drawpoint, execution of the computer executable code by a processor causing: mapping data about the drawpoint acquired by a scanning device to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on samples of the resampled data to delineate between different classifications of rock fragmentation; and generating a representation of the different classifications and/or spatial distributions of the rock fragmentation.

19. The computer readable medium of claim 18, wherein execution of the computer executable code by the processor causes performance of the method as claimed in any one of claims 1 to 12.

20. A method for optimising autonomous vehicle operation, such as a loader, using machine learning to optimise productivity, the method comprising: mapping data about a region to be excavated by the vehicle, the data acquired by a scanning device mounted to the vehicle to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on samples of the resampled data to delineate between different features of the region to be excavated; and correlating data associated with the operation of the vehicle with the features of the region to be excavated to automatically modify operation of the autonomous vehicle.

21. The method of claim 20, wherein the data associated with the operation of the vehicle can include one or more of the following: control inputs to the vehicle; velocity of the vehicle; location/path of the vehicle relative to the region to be excavated, such as a drawpoint; position of hydraulics of the vehicle; video of the region to be excavated during loading; bucket fill-factor for each load; operator information (shift, operator identifier).

22. The method of claim 20 or 21, wherein the scanning device is selected from one of the following: a light detection and ranging (LIDAR) system; one or more video cameras, such as one or more 360° spherical cameras.

23. The method of any one of claims 20 to 22, the method further comprising the steps of any one of claims 7 to 12.

Description:
TITLE

AUTOMATED, REAL TIME PROCESSING, ANALYSIS, MAPPING AND REPORTING OF DATA FOR THE DETECTION OF GEOTECHNICAL FEATURES

FIELD OF THE INVENTION

The present invention relates to automated, real time processing, analysis, mapping and reporting of data for the detection of geotechnical features, particularly underground. One aspect of the present invention relates to the classification and mapping of rock fragmentation at an underground mine drawpoint using machine learning. Other aspects of the invention relate to other geotechnical applications, such as, but not limited to optimising underground loader automation.

BACKGROUND TO THE INVENTION

In mining and construction, many procedures are automated to improve efficiency, and autonomous digging and loading machines are in common use in such environments. In underground mining operations, for example, rock is broken with or without blasting and the broken rock is typically collected at a drawpoint. The resulting rock pile comprises rock fragments having a range of different shapes and sizes. To avoid reduced efficiency and/or damage to the machine, autonomous digging and loading machines must be capable of automatically determining rock sizes so as to establish whether the rocks can be handled by that machine or whether special handling is required.

The sampling and quantification of fragmentation can be undertaken by a number of methodologies including, but not limited to, sieving, physical measurements, production rate analysis, observational methods or digital image processing (DIP) methods implemented in programs such as Split, Wipfrag and the like. The only practical method for large scale fragmentation measurement currently available uses 2D DIP. Although 2D DIP methods have automatic algorithms to delineate and calculate fragmentation distributions and produce good results under well-lit surface conditions, for drawpoint fragmentation they require significant manual editing, such as trimming and scaling of the images and correcting delineation problems caused by water, dust, insufficient light, insufficient image quality, contrast differences due to water, and fragment colour changes.

OBJECT OF THE INVENTION

A preferred object of at least one aspect of the present invention is to provide an automated, real time processing, analysis, mapping and reporting method for the characterisation of an underground mine drawpoint, such as the classification of rock fragmentation and/or determining the spatial distribution of the rock fragmentation at the drawpoint, that addresses or at least ameliorates one or more of the aforementioned problems of the prior art and/or provides a useful commercial alternative.

SUMMARY OF THE INVENTION

Some embodiments of the present invention relate to automated, real time processing, analysis, mapping and reporting methods for the characterisation of an underground mine drawpoint, such as the classification of rock fragmentation and/or determining the spatial distribution of the rock fragmentation at the drawpoint using machine learning.

Some embodiments of the present invention relate to automated, real time processing, analysis, mapping and reporting methods for determining profile changes in underground mines, or the determination of other effects such as spalling, plate deformation, missing plates and/or mesh bagging using machine learning.

Some embodiments of the present invention relate to automated, real time processing, analysis, mapping and reporting methods for optimising autonomous vehicle operation, such as optimising productivity of an underground loader using machine learning.

According to one form, but not necessarily the broadest form, the present invention resides in an automated method of classifying and mapping rock fragmentation at an underground drawpoint, the method comprising: scanning the underground drawpoint to acquire data about the drawpoint; mapping the data about the drawpoint to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on samples of the resampled data to delineate between different classifications of rock fragmentation; and generating a representation of the different classifications and/or spatial distribution of the rock fragmentation.

Suitably, scanning of the underground drawpoint is executed using a light detection and ranging (LIDAR) system.

Suitably, mapping the data about the drawpoint to generate the 3D point cloud comprises simultaneous localization and mapping (SLAM).

Suitably, scanning of the underground drawpoint is executed using photogrammetric methods.

Suitably, scanning of the underground drawpoint is executed using one or more video cameras, such as one or more 360° spherical cameras, which may be handheld or mounted to a vehicle, such as, but not limited to a drone.

Preferably, the method includes generating eigenvalues for at least some of the points of the 3D point cloud at a plurality of scales of the 3D point cloud.

Preferably, eigenvalues are generated for all of the points of the 3D point cloud using principal component analysis (PCA). Preferably, training the classifier on the resampled data includes conducting support vector machine learning based on the eigenvalues. Preferably, the method further includes developing a signature representing the variation of dimensions of the 3D point cloud over the plurality of scales of the 3D point cloud.

Preferably, the support vector machine learning is based on the signature.

Suitably, the LIDAR system is handheld or mounted on a vehicle.

Suitably, training the classifier on the resampled data includes training the classifier to delineate between different types and/or sizes of rock fragmentation.

According to another form, but not necessarily the broadest form, the present invention resides in a system to automatically classify and map rock fragmentation at an underground drawpoint, the system comprising: a memory to store data about the underground drawpoint acquired by a scanning device; a processor in communication with the memory to: map the data about the drawpoint to generate a 3D point cloud comprising a plurality of points; resample the mapped data to reduce the number of points in the 3D point cloud; sample the resampled data; train a classifier on the resampled data to delineate between different classifications of rock fragmentation; and generate a representation of the different classifications and/or spatial distribution of the rock fragmentation.

According to a further form, but not necessarily the broadest form, the present invention resides in a computer readable medium having stored thereon computer executable code to automatically classify and map rock fragmentation at an underground drawpoint, execution of the computer executable code by a processor causing: mapping data about the drawpoint acquired by a scanning device to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on the resampled data to delineate between different classifications of rock fragmentation; and generating a representation of the different classifications and/or spatial distribution of rock fragmentation.

According to another form, but not necessarily the broadest form, the present invention resides in a method for optimising a bucket fill factor of an autonomous vehicle, such as a loader, using the aforementioned method of classifying rock fragmentation and/or spatial distributions of rock fragmentation at an underground drawpoint, or an adapted method thereof.

According to a further form, but not necessarily the broadest form, the present invention resides in a method for optimising autonomous vehicle operation, such as optimising productivity of a loader, using machine learning, the method comprising: mapping data about a region to be excavated by the vehicle, the data acquired by a scanning device mounted to the vehicle to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on samples of the resampled data to delineate between different features of the region to be excavated; and correlating data associated with the operation of the vehicle with the features of the region to be excavated to automatically modify operation of the autonomous vehicle.

Data associated with the operation of the vehicle can include one or more of the following: control inputs to the vehicle; velocity of the vehicle; location/path of the vehicle relative to the region to be excavated, such as a drawpoint; position of hydraulics of the vehicle; video of the region to be excavated during loading; bucket fill-factor for each load; operator information (shift, operator identifier).

Further forms and/or features of the present invention will become apparent from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention will now be described with reference to the accompanying drawings, which are provided by way of example only and in which like reference numerals refer to like features. In the drawings:

FIG 1 is an example of original mapped data of an underground drawpoint obtained from a LIDAR system according to an embodiment of the present invention;

FIG 2 is an example of the mapped data shown in FIG 1 resampled;

FIG 3 illustrates sampling of the resampled data shown in FIG 2;

FIG 4 shows a segmented region of the point cloud data representing concrete;

FIG 5 shows a segmented region of the point cloud data representing rock and the segmented region representing concrete shown in FIG 4;

FIG 6 is an example of a result of training a classifier to distinguish between concrete and rock;

FIG 7 shows the classification of points of the cloud of the resampled data that represent concrete distinguished from rock;

FIG 8 shows only the points in the resampled data of FIG 7 representing rock;

FIG 9 shows the classification of points of the cloud of the resampled data according to size;

FIG 10 shows only the points in the resampled data of FIG 9 representing small rocks;

FIG 11 shows the result of training a classifier to distinguish between rock fines and small rocks;

FIG 12 shows a representation of sampled cloud point data for different material sizes and classes including coarser fragmentation, finer fragmentation, fines, concrete and shotcrete;

FIG 12A is a general flow diagram representing an automated method of classifying rock fragments at an underground drawpoint;

FIG 13 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a first drawpoint;

FIG 14 shows a graph of the different rock fragmentation ranges represented in FIG 13 as a percentage of the total area;

FIG 15 shows a fragmentation curve plotting the cumulative percentage of material passing as a function of the different rock fragmentation ranges (or sieve sizes) for the data represented in FIG 13;

FIG 16 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a second drawpoint;

FIG 17 is an image of a rock pile at the second drawpoint;

FIG 18 shows a graph of the different rock fragmentation ranges represented in FIG 16 as a percentage of the total area;

FIG 19 shows a fragmentation curve plotting the cumulative percentage of material passing as a function of the different rock fragmentation ranges (or sieve sizes) for the data represented in FIG 16;

FIG 20 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a third drawpoint;

FIG 21 is an image of a rock pile at the third drawpoint;

FIG 22 shows a graph of the different rock fragmentation ranges represented in FIG 20 as a percentage of the total area;

FIG 23 shows a fragmentation curve for the data represented in FIG 20;

FIG 24 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a fourth drawpoint;

FIG 25 is an image of a rock pile at the fourth drawpoint;

FIG 26 shows a graph of the different rock fragmentation ranges represented in FIG 24 as a percentage of the total area;

FIG 27 shows a fragmentation curve for the data represented in FIG 24; and

FIG 28 diagrammatically illustrates an electronic device for performing the methods of the present invention.

It will be appreciated that the accompanying drawings may not have been drawn to scale and/or some features may have been distorted and/or omitted and/or represented schematically for the sake of clarity.

DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the present invention are directed to automated, real time processing, analysis, mapping and reporting of data for the detection of underground geotechnical features. One aspect of the present invention relates to the characterisation of an underground mine drawpoint, such as the classification of rock fragmentation and/or determining the spatial distribution of the rock fragmentation at the drawpoint using machine learning. Other aspects of the invention relate to other geotechnical applications such as, but not limited to, optimising autonomous vehicle operation, such as optimising productivity of an automated underground loader using machine learning.

Embodiments of the present invention utilise an adaptation of a known methodology disclosed by Brodu, N. & Lague, D., 3D Terrestrial LIDAR data classification of complex natural scenes using a multiscale dimensionality criterion: applications in geomorphology, ISPRS Journal of Photogrammetry and Remote Sensing 68, 2012. One of the underlying ideas of this methodology is the characterisation of local dimensionality properties of a scene at each point and at different scales. Local dimensionality refers to the geometric appearance of the cloud at a given location at a given scale, i.e. whether it is more 1D, 2D or 3D. This methodology includes generating a 3D point cloud comprising a plurality of points from the LIDAR data and generating eigenvalues for all of the points of the 3D point cloud using principal component analysis (PCA). PCA is performed at different scales of the 3D point cloud to build a signature representing the variation of dimensions of the 3D point cloud over the different scales. Support vector machine learning is then carried out based on the signature.

Some embodiments of the present invention are directed to an automated method of classifying rock fragmentation and/or the spatial distribution of rock fragmentation at an underground drawpoint. With reference to the general flow diagram shown in FIG 12A, the method 1200 comprises at 1205 scanning the underground drawpoint using a scanning system to acquire data about the drawpoint. In some embodiments, the scanning system is a light detection and ranging (LIDAR) system. The LIDAR system can be handheld, mounted on a vehicle or on another type of support or mount, such as a tripod.
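By way of illustration only, the following is a minimal sketch of the multi-scale dimensionality signature described above, using NumPy and SciPy; the function name, the neighbourhood search and the exact eigenvalue-based proportions are assumptions made for illustration rather than the implementation used in the embodiments.

```python
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_signature(cloud, query_points, scales):
    """For each query point and each scale (neighbourhood radius in metres),
    run PCA on the neighbouring points and express the sorted eigenvalues as
    proportions of 1D (linear), 2D (planar) and 3D (volumetric) behaviour,
    in the spirit of Brodu & Lague (2012)."""
    tree = cKDTree(cloud)
    signatures = np.zeros((len(query_points), len(scales), 3))
    for s, radius in enumerate(scales):
        for i, neighbour_idx in enumerate(tree.query_ball_point(query_points, r=radius)):
            if len(neighbour_idx) < 3:
                continue  # not enough neighbours to estimate a covariance at this scale
            neighbours = cloud[neighbour_idx]
            cov = np.cov(neighbours - neighbours.mean(axis=0), rowvar=False)
            evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # eigenvalues, largest first
            if evals.sum() <= 0:
                continue
            l1, l2, l3 = evals / evals.sum()
            # 1D, 2D and 3D proportions at this scale
            signatures[i, s] = [l1 - l2, l2 - l3, l3]
    # one flat feature vector (signature) per query point, across all scales
    return signatures.reshape(len(query_points), -1)
```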

In other embodiments, scanning of the underground drawpoint is executed using photogrammetric methods. In further embodiments, scanning of the underground drawpoint is executed using one or more video cameras, such as one or more 360° spherical cameras, which may be handheld or mounted to a vehicle, such as, but not limited to a drone.

The method 1200 comprises at 1210 mapping the data about the drawpoint to generate a 3D point cloud comprising a plurality of points. In some embodiments, mapping the data about the drawpoint to generate the 3D point cloud comprises simultaneous localization and mapping (SLAM). An example of original mapped data obtained from a LIDAR system is shown in FIG 1. Examples of suitable LIDAR systems that perform the scanning and SLAM include the ZEB Revo, which has a relative accuracy of 2-3cm and which can be mounted to a vehicle or other mount, such as a tripod. An example of a handheld LIDAR system is the ZEB1-GeoSlam, which has 0.1% 3D accuracy.

The method 1200 comprises at 1215 resampling the mapped data to reduce the number of points in the 3D point cloud. An example of the mapped data shown in FIG 1 resampled is shown in FIG 2. Resampling improves the performance of the method by reducing the number of points in the 3D point cloud for processing. In other words, the method includes reducing the resolution of the LIDAR data. In the example shown in FIG 2, the resolution of the data has been reduced to a resolution of 2-3cm. Other resolutions may be used.
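As a sketch only, one common way of reducing a point cloud to a coarser resolution such as 2-3cm is a voxel-grid filter that keeps a single centroid per voxel; the embodiments do not prescribe a particular resampling algorithm, so the approach and the 0.025m default below are assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.025):
    """Reduce an (N, 3) point cloud by keeping the centroid of the points
    falling in each cubic voxel of side length voxel_size (metres)."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # group points that share a voxel and accumulate their coordinates
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# e.g. resample a raw drawpoint scan to roughly 2-3cm spacing
# resampled = voxel_downsample(raw_points, voxel_size=0.025)
```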

The method 1200 comprises at 1220 sampling the resampled data, whereby a region of interest of the resampled 3D point cloud is selected. With reference to FIG 3, the region 110 of interest can be a polygon of any shape and can relate to a particular type of material that is to be recognized or determined compared with other types of material in the acquired data. For example, sampling can include selecting a region of the resampled 3D point cloud data that represents concrete, with a view to the method automatically distinguishing concrete in the data from the other types of material that are in the data. FIG 4 shows a segmented region of the point cloud data representing concrete. Sampling is repeated for each type of material that is to be distinguished within the data. For example, other samples can include samples representing rock, samples representing particular rock classifications, such as, but not limited to, coarser fragmentation, finer fragmentation and fines, or samples representing shotcrete. FIG 5 shows a segmented sample region of the point cloud data representing rock and the segmented sample region representing concrete shown in FIG 4.
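For illustration, labelled sample regions such as the concrete patch 110 and rock patch 115 could be cut from the resampled cloud with simple bounding boxes; the axis-aligned boxes and coordinates below are hypothetical stand-ins for the arbitrary polygons mentioned above, and `resampled` refers to the output of the earlier resampling sketch.

```python
import numpy as np

def crop_box(points, lower, upper):
    """Return the subset of an (N, 3) cloud lying inside the axis-aligned
    box defined by the lower and upper corners (metres)."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    inside = np.all((points >= lower) & (points <= upper), axis=1)
    return points[inside]

# hypothetical operator-selected regions of the resampled drawpoint cloud
concrete_sample = crop_box(resampled, lower=[0.0, 0.0, 0.0], upper=[1.5, 1.5, 0.5])
rock_sample = crop_box(resampled, lower=[2.0, 0.0, 0.0], upper=[3.5, 1.5, 0.5])
```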

The method 1200 comprises at 1225 training a classifier on the samples of the resampled data to delineate between different classifications of rock fragments. For example, the classifier can be trained on the sample 110 representing concrete selected from the resampled 3D point cloud data and the similarly sized sample 115 representing rock selected from the resampled 3D point cloud data, as shown in FIG 5. Training of the classifier is carried out at multiple scales of the resampled 3D point cloud data. The scales and the increments between scales are selectable. For example, the scales can be between 0.1m and 0.5m in 0.05m increments, but other scales and increments can of course be employed. An example of a result of the training of the classifier is shown in FIG 6, in which the points 120 of a first shade or colour on the left hand side of the line represent concrete and the points 125 of a second shade or colour on the right hand side of the line represent rock. In the original coloured version of this example, the concrete is represented in blue and the rock is represented in red, but other colours can of course be used.
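Continuing the same sketch and assuming scikit-learn is available, a support vector machine could be trained on the multi-scale signatures of the two samples; the scales follow the 0.1m to 0.5m example above, and `dimensionality_signature`, `resampled`, `concrete_sample` and `rock_sample` come from the earlier illustrative sketches.

```python
import numpy as np
from sklearn.svm import SVC

scales = np.arange(0.10, 0.55, 0.05)  # 0.10m to 0.50m in 0.05m increments

# signatures of the labelled samples, computed against the full resampled cloud
X_concrete = dimensionality_signature(resampled, concrete_sample, scales)
X_rock = dimensionality_signature(resampled, rock_sample, scales)

X_train = np.vstack([X_concrete, X_rock])
y_train = np.concatenate([np.zeros(len(X_concrete)),   # 0 = concrete
                          np.ones(len(X_rock))])        # 1 = rock

classifier = SVC(kernel="rbf", gamma="scale")  # support vector machine learning
classifier.fit(X_train, y_train)
```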

The method 1200 comprises at 1230 generating a representation 130 of the different classifications of rock fragments. This comprises applying the trained classifier to the resampled 3D point cloud data such that all of the points in the resampled 3D point cloud data are classified according to the type of material that they represent. An example of the representation 130 is shown in the image in FIG 7. In this example, points 135 of the cloud in a first shade or colour represent concrete and points 140 of the cloud of a second shade or colour represent rock. In the original coloured version of this example, the concrete is represented in blue and the rock is represented in red, but other colours can of course be used. FIG 8 shows only the rock from the resampled data.
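Applying the trained classifier back to every point of the resampled cloud then produces the per-point classification visualised in FIG 7 and FIG 8; again this is a sketch continuing from the code above, not the implementation of the embodiments.

```python
# classify every point of the resampled drawpoint cloud
X_all = dimensionality_signature(resampled, resampled, scales)
labels = classifier.predict(X_all)

concrete_points = resampled[labels == 0]  # points classified as concrete
rock_points = resampled[labels == 1]      # points classified as rock (cf. FIG 8)
```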

The method can comprise retraining the classifier on the same and/or further samples to improve the results and show different representations. For example, the method can include training the classifier on the resampled data to delineate between different types and/or sizes of rock fragmentation. FIG 9 shows the classification of points of the cloud of the resampled data according to fragment size. In the example in FIG 9, points 145 of the cloud in a first shade or colour represent large rocks and points 150 of the cloud of a second shade or colour represent small rocks. In the original coloured version of this example, the large rocks are represented in red and the small rocks are represented in blue, but other colours can of course be used. FIG 10 shows only the points in the resampled data of FIG 9 representing small rocks. FIG 11 shows the result of training the classifier to distinguish between rock fines and small rocks based on samples segmented from the resampled 3D point cloud data of an underground drawpoint. The points 155 of a first shade or colour on the left hand side of the line represent the fines and the points 160 of a second shade or colour on the right hand side of the line represent small rocks. In the original coloured version of this example, the fines are represented in blue and the small rocks are represented in red, but other colours can of course be used.

FIG 12 shows a representation of the result of training the classifier to distinguish between different rock sizes or fragmentation classifications. In this example, coarser fragmentation 165, finer fragmentation 170, fines 175, concrete or shotcrete 180 are classified and represented in different colours. The representations also show the spatial distribution of the different fragmentation classifications, i.e. the relative locations at the drawpoint of the different fragmentation types. Other representations can be used, such as different shades of the same colour or gray scale or different symbols. Other representations can include reporting in the format of a graph the different rock fragmentation ranges as a percentage of the total area and/or reporting in the format of a fragmentation curve plotting the cumulative percentage of material passing as a function of the different rock fragmentation ranges (or sieve sizes). Examples are discussed herein.

The method includes generating eigenvalues for at least some of the points of the 3D point cloud at a plurality of scales of the 3D point cloud. In some embodiments, eigenvalues are generated for all of the points of the 3D point cloud using principal component analysis (PCA). The training of the classifier on the resampled data includes conducting support vector machine learning based on the eigenvalues. The method can be considered to include developing a signature representing the variation of dimensions of the 3D point cloud over the plurality of scales of the 3D point cloud, and the support vector machine learning can be based on the signature.

Further examples will now be discussed. In one example, based on the available LIDAR scans of an underground drawpoint, the following fragmentation size ranges were selected: 0.5m to 1.0m; 0.3m to 0.5m; 0.1m to 0.3m; and <0.1m. The different size ranges are summarized in Table 1:

Table 1

Fragmentation size range
0.5m to 1.0m
0.3m to 0.5m
0.1m to 0.3m
<0.1m

The classifier was trained on sampled cloud point data for the different fragmentation size ranges and the representation of the different fragmentation classifications shown in FIG 13 was generated. In the original coloured version of this example, fragments in the range 0.5m to 1.0m are represented by the blue points 185, fragments in the range 0.3m to 0.5m are represented by the green points 190, fragments in the range 0.1m to 0.3m are represented by the red points 195 and fragments in the range <0.1m are represented by the black points 200. It will be appreciated that other colours or shades can be used for the representation and other size ranges can be selected according to the desired application and the classifier trained accordingly.

FIG 14 illustrates reporting of the same drawpoint data shown in FIG 13 in the format of a graph of the different rock fragmentation ranges as a percentage of the total area. This provides an indicative measure of important fragmentation indices such as an amount of oversize rock fragments and an amount of fines. FIG 15 illustrates reporting of the same drawpoint data shown in FIG 13 in the format of a fragmentation curve plotting the cumulative percentage of material passing as a function of the different rock fragmentation ranges (or sieve sizes). This provides an estimation of the distribution of the fragmentation.
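For illustration, both report formats can be derived from per-point size-class labels; the sketch below treats each classified point as a proxy for an equal patch of surface area, which is an assumption, and the size ranges mirror Table 1.

```python
import numpy as np

# fragmentation size ranges, ordered from finest to coarsest (as in Table 1)
SIZE_RANGES = ["<0.1m", "0.1m to 0.3m", "0.3m to 0.5m", "0.5m to 1.0m"]

def fragmentation_report(labels):
    """labels: array of per-point size-class names drawn from SIZE_RANGES.
    Returns the percentage of total area per range (cf. FIG 14) and the
    cumulative percentage of material passing each sieve size (cf. FIG 15)."""
    labels = np.asarray(labels)
    counts = np.array([np.sum(labels == name) for name in SIZE_RANGES], dtype=float)
    percent_area = 100.0 * counts / counts.sum()
    cumulative_passing = np.cumsum(percent_area)
    return dict(zip(SIZE_RANGES, percent_area)), dict(zip(SIZE_RANGES, cumulative_passing))

# example with hypothetical labels
percent_area, passing = fragmentation_report(
    ["<0.1m"] * 40 + ["0.1m to 0.3m"] * 30 + ["0.3m to 0.5m"] * 20 + ["0.5m to 1.0m"] * 10)
```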

FIG 16 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a second underground drawpoint and FIG 17 is an image of a rock pile at the second drawpoint. FIG 18 shows a graph of the different rock fragmentation ranges represented in FIG 16 as a percentage of the total area. FIG 19 shows a fragmentation curve plotting the cumulative percentage of material passing as a function of the different rock fragmentation ranges (or sieve sizes) for the data represented in FIG 16. FIG 20 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a third underground drawpoint and FIG 21 is an image of a rock pile at the third drawpoint. FIG 22 shows a graph of the different rock fragmentation ranges represented in FIG 20 as a percentage of the total area. FIG 23 shows a fragmentation curve for the data represented in FIG 20.

FIG 24 shows a representation of sampled cloud point data for different fragment classifications for different fragment size ranges for a fourth underground drawpoint and FIG 25 is an image of a rock pile at the fourth drawpoint. FIG 26 shows a graph of the different rock fragmentation ranges represented in FIG 24 as a percentage of the total area. FIG 27 shows a fragmentation curve for the data represented in FIG 24.

In the original coloured versions of the examples shown in FIGS 16, 20 and 24, the same fragmentation ranges are represented, and in the same colours, as those in FIG 13. Drawpoint scans from different drawpoints in the same underground mine have been used to train the classifier and validate an approach to estimate fragmentation classification and distribution from LIDAR data and photogrammetric data. Due to the relatively fine nature of the drawpoint material, only material below the 1m size fraction was trained in these examples. The results of the analysis indicate a good fit for the different size fragments and distributions thereof analysed. Embodiments of the invention utilised the methodology known from Brodu, N. & Lague, D. and training of the classifier on the drawpoint LIDAR and photogrammetric data was carried out using the CloudCompare software plugin qCANUPO. However, other software can be employed.

According to another form, the present invention resides in a system to automatically classify and map rock fragmentation at an underground drawpoint. The system comprises a memory to store data acquired about an underground drawpoint by a scanning device, such as a LIDAR device, a camera for photogrammetric methods or a video camera. The memory is in communication with a processor that performs the methods described herein. Hence, the processor maps the data about the drawpoint to generate a 3D point cloud comprising a plurality of points; resamples the mapped data to reduce the number of points in the 3D point cloud; samples the resampled data; trains a classifier on samples of the resampled data to delineate between different classifications of rock fragmentation; and generates a representation of the different classifications and/or spatial distribution of the rock fragmentation.

FIG. 28 diagrammatically illustrates an electronic device 2800 suitable for performing the methods of the present invention. Similarly, the method 1200 of FIG. 12A can be implemented using the electronic device 2800. The electronic device 2800 includes a central processor 2802, a system memory 2804 and a system bus 2806 that couples various system components, including coupling the system memory 2804 to the central processor 2802. The system bus 2806 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The structure of system memory 2804 is well known to those skilled in the art and may include a basic input/output system (BIOS) stored in a read only memory (ROM) and one or more program modules such as operating systems, application programs and program data stored in random access memory (RAM).

The electronic device 2800 can also include a variety of interface units and drives for reading and writing data. In particular, the electronic device 2800 includes a hard disk interface 2808 and a removable memory interface 2810, respectively coupling a hard disk drive 2812 and a removable memory drive 2814 to the system bus 2806. Examples of removable memory drives 2814 include magnetic disk drives and optical disk drives. The drives and their associated computer-readable media, such as a Digital Versatile Disc (DVD) 2816, provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the electronic device 2800. A single hard disk drive 2812 and a single removable memory drive 2814 are shown for illustration purposes only and with the understanding that the electronic device 2800 can include several similar drives. Furthermore, the electronic device 2800 can include drives for interfacing with other types of computer readable media. The electronic device 2800 may include additional interfaces for connecting devices to the system bus 2806. FIG. 28 shows a universal serial bus (USB) interface 2818 which may be used to couple a device to the system bus 2806. For example, an IEEE 1394 interface 2820 may be used to couple additional devices to the electronic device 2800. The electronic device 2800 can operate in a networked environment using logical connections to one or more remote computers or other devices, such as a server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant. The electronic device 2800 includes a network interface 2822 that couples the system bus 2806 to a local area network (LAN) 2824 or other communications network.

A wide area network (WAN), such as the Internet, can also be accessed by the electronic device 2800, for example via a modem unit connected to a serial port interface 2826 or via the LAN 2824 or other communications network. Transmission of data can be performed using the LAN 2824, the WAN, or a combination thereof, for example with scanning device 100, which can be a LIDAR device, camera, video camera or combination thereof.

It will be appreciated that the network connections shown and described are exemplary and other ways of establishing a communications link between computers can be used. The existence of any of various well-known protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP and the like, is presumed, and the electronic device 2800 can be operated in a client-server configuration to permit a user to retrieve data from, for example, a web-based server. The operation of the electronic device 2800 can be controlled by a variety of different program modules. Examples of program modules are routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. The present invention may also be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants, smart watches, smart wearables and the like. Furthermore, the present invention may also be practiced with other methods of visual output from the computer system, including a virtual reality display, a projection of output into an eye of a user, a projection of output onto a surface within the view of the user, such as eyeglasses or another surface close to the eye.

Furthermore, the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

According to a further form, the present invention resides in a computer readable medium, such as exists in system memory 2804 or removable memory 2816 having stored thereon computer executable code to automatically classify and map rock fragmentation at an underground drawpoint. Execution of at least some of the computer executable code by the central processor 2802 causes: mapping data about the drawpoint acquired by a scanning device to generate a 3D point cloud comprising a plurality of points; resampling the mapped data to reduce the number of points in the 3D point cloud; sampling the resampled data; training a classifier on samples of the resampled data to delineate between different classifications of rock fragmentation; and generating a representation of the different classifications and/or spatial distributions of rock fragmentation.

Some embodiments of the present invention relate to automated, real time processing, analysis, mapping and reporting methods for determining geotechnical features, particularly underground, other than rock fragmentation type and/or size and/or spatial distribution. For example, embodiments of the present invention relate to automated, real time processing, analysis, mapping and reporting of profile changes in underground mines, tunnels, construction sites or other underground environments, or the determination of other effects such as spalling, plate deformation, missing plates and/or mesh bagging using machine learning. The principles described herein can be applied to train a classifier to quickly and accurately determine geotechnical features or effects.

According to another form, the present invention resides in a method for optimising a bucket fill factor of an autonomous vehicle, such as a loader, and in particular Load-Haul-Dump (LHD) vehicles, using the aforementioned methods of classifying rock fragmentation and/or spatial distributions thereof at an underground drawpoint or other region to be excavated.

According to a further form, the present invention resides in a method for optimising operation of an autonomous vehicle, such as optimising productivity of a loader, using machine learning. The method comprises mapping data about a region to be excavated by the vehicle, the data acquired by a scanning device, which may be mounted to the autonomous vehicle, to generate a 3D point cloud comprising a plurality of points. The scanning device can be a LIDAR device. Alternatively, the scanning device can be a camera employed in photogrammetric methods for acquiring data about the region. In some embodiments, the scanning device is a video camera, such as a 360° spherical camera, mounted to the autonomous vehicle, or to another vehicle, such as a drone, which is in communication with the autonomous vehicle. The video camera acquires video of the region to be excavated from which the 3D point data is derived. In some embodiments, two or more such scanning devices can be employed in combination. The method comprises resampling the mapped data to reduce the number of points in the 3D point cloud, which assists with the speed of processing. The method includes sampling the resampled data, as described herein for fragmentation classification and/or spatial distribution, adjusted according to the relevant environment. The method comprises machine learning in the form of training a classifier on samples of the resampled data to delineate between different features of the region to be excavated. The method can comprise correlating data associated with the operation of the autonomous vehicle with the features of the region to be excavated to automatically modify operation of the autonomous vehicle, including optimising productivity of the vehicle.

Data associated with the operation of the vehicle can include one or more of the following: control inputs to the vehicle; velocity of the vehicle; location/path of the vehicle relative to the region to be excavated, such as a drawpoint; position of hydraulics of the vehicle; video of the region to be excavated during loading; bucket fill-factor for each load; operator information (shift, operator identifier). For example, the aforementioned data, or a subset thereof, can be used as the basis of machine learning to improve the operation of the autonomous vehicle, such as optimizing productivity.
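As a purely illustrative sketch of the correlation step, a simple model could relate the classified fragmentation at a drawpoint to the bucket fill factor achieved on each load, so that the predicted fill factor (or a flag for oversize material) can feed back into how the autonomous loader approaches the next dig; the field names, values and use of a linear model are assumptions, not the method of the embodiments.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical per-load records: fragmentation mix classified at the drawpoint
# and the bucket fill factor achieved on that load
percent_oversize = np.array([5.0, 12.0, 20.0, 3.0, 15.0])  # % of area in the 0.5m to 1.0m range
percent_fines = np.array([40.0, 25.0, 15.0, 50.0, 20.0])   # % of area in the <0.1m range
fill_factor = np.array([0.95, 0.88, 0.80, 0.97, 0.85])     # bucket fill factor per load

model = LinearRegression().fit(
    np.column_stack([percent_oversize, percent_fines]), fill_factor)

# predicted fill factor for the fragmentation just classified at the next drawpoint;
# a low prediction or a high oversize percentage could trigger special handling
predicted_fill = model.predict([[18.0, 22.0]])
```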

Hence, the present invention addresses or at least ameliorates one or more of the aforementioned problems of the prior art by quickly and accurately determining geotechnical features or effects using 3D LIDAR data, photogrammetric data and/or video data. Embodiments of the present invention use multi-scale dimensionality classification based on machine learning for applications such as, but not limited to, underground rock fragmentation classification, spatial distribution, detecting and reporting of profile changes, determining other effects such as spalling, plate deformation, missing plates and/or mesh bagging, and optimising operation of autonomous machines, such as autonomous LHD vehicles.

Using 3D point cloud data rather than traditional 2D DIP methods provides a range of advantages. For example, for fragmentation classification and/or spatial distribution determination at underground drawpoints, once the classifier has been trained, rapid classification takes approximately 30 seconds per drawpoint. The classifier only needs to be trained once and subsequently the classifier automatically detects size fractions. The classifier can detect as many size fractions as the user requires. No manual post-processing is required. Dust/mud covering fragments does not impact on the classification because the algorithm is based on 3D data which defines the geometry of the fragment covered by the dust/mud. The methods can automatically differentiate between concrete/shotcrete and the muck pile. The methods are robust to missing data/holes, humidity and changes in point cloud density. A wide range of LIDAR, camera and/or video instruments can be used such that the present invention is not reliant on particular hardware. Only a low resolution x,y,z point cloud, such as 2-3cm, is required. The methods generate manageable file sizes of only around 3-5 MB, which are easy to upload. The methods do not need to use RGB data. Low cost, off the shelf instruments, including point cloud scanners such as the Zebedee, can be used.

In this specification, the terms "comprises", "comprising" or similar terms are intended to mean a non-exclusive inclusion, such that an apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.

The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that the prior art forms part of the common general knowledge.

Throughout the specification the aim has been to describe the invention without limiting the invention to any one embodiment or specific collection of features. Persons skilled in the relevant art may realize variations from the specific embodiments that will nonetheless fall within the scope of the invention.




 