Title:
SYSTEM AND METHOD FOR DETERMINING FRAGMENTATION
Document Type and Number:
WIPO Patent Application WO/2023/000023
Kind Code:
A1
Abstract:
A system and method for determining fragmentation of materials as they are being transported by a vehicle. A sensor is provided that captures depth data of at least the materials as the vehicle passes through a capture window of the sensor. A processor receives the depth data from the sensor, generates a three dimensional representation from the depth data, identifies a region of interest, recognises material fragments within the region of interest, classifies recognised material fragments, and outputs a fragmentation profile generated from the classified material fragments.

Inventors:
GRAYSON ROSS (AU)
Application Number:
PCT/AU2022/050760
Publication Date:
January 26, 2023
Filing Date:
July 19, 2022
Assignee:
TRANSCALE PTY LTD (AU)
International Classes:
G01S17/58; G01N15/02; G01S7/48; G01S7/481; G01S7/497; G01S17/894; G06T7/11; G06T7/521; G06T15/08; G06V10/20
Domestic Patent References:
WO2020049517A1 (2020-03-12)
WO2017093608A1 (2017-06-08)
Foreign References:
US20030156739A1 (2003-08-21)
JP2014095644A (2014-05-22)
JP2003035527A (2003-02-07)
JP2013257188A (2013-12-26)
Other References:
DUFF ELLIOT: "Automated Volume Estimation of Haul-Truck Loads", CSIRO MANUFACTURING SCIENCE AND TECHNOLOGY, ACRA 2000 AUSTRALIAN CONFERENCE IN ROBOTICS AND AUTOMATION, 1 September 2000 (2000-09-01), pages 179-184, XP093027498, Retrieved from the Internet [retrieved on 20230228]
"The Lase TVM-3D-M (Motion) system", LASE TVM-M, LASE INDUSTRIELLE LASERTECHNIK, DE, pages 1-8, XP009543098, Retrieved from the Internet [retrieved on 20230314]
LASE INDUSTRIELLE LASERTECHNIK GMBH: "LaseTVM - Truck Volume Measurement", YOUTUBE, XP093027499, Retrieved from the Internet [retrieved on 20230228]
LOADSCAN: "Introducing the Loadscan load volume scanner (for sand mining - civil construction)", YOUTUBE, XP093027501, Retrieved from the Internet [retrieved on 20230228]
SICK SENSOR INTELLIGENCE: "LVM System: Measuring truck load volumes dynamically and fully automatically | SICK AG", YOUTUBE, XP093027502, Retrieved from the Internet [retrieved on 20230228]
Attorney, Agent or Firm:
DAVIS IP PTY LTD (AU)
Claims:
CLAIMS:

1. A system for determining fragmentation of materials being transported by a vehicle, the system comprising: a sensor configured to capture depth data of at least the materials as the vehicle passes through a capture window of the sensor; and a processor that is configured to: receive the depth data from the sensor; generate a three dimensional representation from the depth data; identify a region of interest; recognise material fragments within the region of interest; classify recognised material fragments; and output a fragmentation profile generated from the classified material fragments.

2. The system of claim 1, wherein the sensor captures a plurality of scanlines to form a plurality of profiles of at least a portion of the vehicle as it passes through the capture window of the sensor.

3. The system of claim 2, wherein profile data from the plurality of profiles scanned as the vehicle passes through the capture window of the sensor are combined with vehicle movement to create a three dimensional representation of the captured region of the vehicle.

4. The system of claim 2 or 3, wherein the sensor is configured to capture the profiles whilst the vehicle is in motion.

5. The system of any one of the preceding claims, wherein the sensor comprises a laser scanner.

6. The system of claim 5, wherein the laser scanner is two dimensional with a scan frequency of at least 100,000 points per second and at least 100 profiles per second.

7. The system of claim 5 or 6, wherein the laser scanner is configured to capture depth data whilst the vehicle is passing through the capture window at a speed of between 2 and 100km/hr.

8. The system of any one of the preceding claims, wherein the depth data from the sensor comprises one or more of range, displacement, angle, and reflectance.

9. The system of any one of the preceding claims, wherein the processor is further configured to determine a speed of the vehicle.

10. The system of claim 9, wherein the speed of the vehicle is determined from the depth data received from the sensor.

11. The system of claim 9 or 10, wherein the speed of the vehicle is determined from a second sensor configured to measure the speed of the vehicle.

12. The system of claim 11, wherein the second sensor controls operation of the first sensor.

13. The system of claim 12, wherein the second sensor controls operation of the first sensor by actuating the first sensor.

14. The system of claim 12 or 13, wherein the second sensor controls operation of the first sensor by altering performance characteristics of the first sensor.

15. The system of any one of claims 12 to 14, wherein the second sensor controls operation of the first sensor via the processor, such that it is the processor that is controlling the operation of the first sensor based upon information received from the second sensor.

16. The system of any one of the preceding claims, wherein the processor receives vehicle speed data from another source.

17. The system of any one of the preceding claims, wherein the processor is further configured to determine sun data including the angle of the sun relative to the sensor and/or vehicle.

18. The system of any one of the preceding claims, wherein the processor is further configured to measure a volume of the materials being transported by the vehicle.

19. The system of claim 18, wherein measuring the volume of materials comprises determining a load carrying capacity of the vehicle.

20. The system of claim 19, wherein determining a load carrying capacity of the vehicle comprises obtaining pre-determined or previously calculated capacity data from a database.

21. The system of claim 19, wherein determining a load carrying capacity of the vehicle comprises capturing depth data of a load carrying portion of the vehicle as it passes through the capture window of the sensor when empty.

22. The system of any one of claims 19 to 21, wherein measuring the volume of materials comprises comparing depth data of a load defined by the materials being transported by the vehicle to the determined load carrying capacity of the vehicle.

23. The system of any one of the preceding claims, wherein the three dimensional representation comprises a two dimensional image with pixel characteristics representing depth.

24. The system of any one of the preceding claims, wherein the processor constrains the three dimensional representation to the identified region of interest.

25. The system of claim 24, wherein the processor is further configured to apply a mask to the region of interest.

26. The system of claim 24 or claim 25, wherein the processor is configured to identify regions of fines when recognising material fragments within the region of interest.

27. The system of any one of the preceding claims, wherein the fragmentation profile generated from the classified material fragments comprises a particle size distribution (PSD).

28. A method of determining fragmentation of materials being transported by a vehicle, the method comprising: driving the vehicle past a sensor such that the materials pass through a capture window of the sensor; capturing depth data of the materials as they pass through the capture window of the sensor; generating a three dimensional representation from the captured depth data; identifying a region of interest; recognising material fragments within the region of interest; classifying the recognised material fragments; and outputting a fragmentation profile generated from the classified material fragments.

29. The method of claim 28, wherein the depth data comprises a plurality of scanlines.

30. The method of claim 29, wherein the scanlines are from a LIDAR sensor.

31. The method of any one of claims 28 to 30, wherein the step of generating a three dimensional representation comprises converting the scanlines to a two dimensional image with pixel characteristics representing depth.

Description:
SYSTEM AND METHOD FOR DETERMINING FRAGMENTATION

FIELD OF THE INVENTION

[0001] The invention relates to a system and method for determining fragmentation of materials. In particular, the invention relates, but is not limited, to a system and method that captures and analyses LIDAR data of mining materials during transportation and makes an assessment as to the fragmentation of those materials.

BACKGROUND TO THE INVENTION

[0002] Reference to background art herein is not to be construed as an admission that such art constitutes common general knowledge.

[0003] In the mining industry materials are broken down into fragments to make transportation and processing manageable. The size of fragmentation typically varies due to a range of factors including material characteristics and technique. For example, with blasted rock materials the size of fragmentation is often analysed as feedback to control blasting activities. Knowing the fragmentation can also assist with downstream processing.

[0004] In some cases an analysis is performed on materials on a conveyor belt. A downside of this approach is that the materials are not easily relocated should the analysis suggest that it would be desirable to do so. Furthermore, analysing the fragmentation once delivered to a conveyor introduces a delay or lag in providing feedback to the blasting activities. Some attempts have therefore been made to analyse the materials while located in a haul vehicle such as a load haul dump (LHD) vehicle.

[0005] In either case, the analysis is typically performed on data received from a camera. Such data has no specific depth information, which limits the accuracy of the analysis. Furthermore, camera images can be degraded by factors such as, for example, low contrast between objects, sensor noise (particularly in low light), motion blur, dust, weather, etc. Sensor noise and motion blur may be reduced by providing artificially bright light at the time of capture. This, however, increases energy usage and can present a distraction or even a blinding hazard to a driver. Even then, this only reduces the issues and does not obviate them entirely.

OBJECT OF THE INVENTION

[0006] It is an aim of this invention to provide a system and method for determining fragmentation of materials which overcomes or ameliorates one or more of the disadvantages or problems described above, or which at least provides a useful alternative.

[0007] Other preferred objects of the present invention will become apparent from the following description.

SUMMARY OF INVENTION

[0008] In one form, although it need not be the only or indeed the broadest form, there is provided a system for determining fragmentation of materials being transported by a vehicle, the system comprising: a sensor configured to capture depth data of at least the materials as the vehicle passes through a capture window of the sensor; and a processor that is configured to: receive the depth data from the sensor; generate a three dimensional representation from the depth data; identify a region of interest; recognise material fragments within the region of interest; classify recognised material fragments; and output a fragmentation profile generated from the classified material fragments.

[0009] The sensor may capture a plurality of scanlines to form a plurality of profiles of at least a portion of the vehicle as it passes through the capture window of the sensor. Profile data from the plurality of profiles scanned as the vehicle passes through the capture window of the sensor may be combined with the vehicle movement to create a three dimensional representation of the captured region of the vehicle. The sensor may be configured to capture the profiles whilst the vehicle is in motion.

[0010] The sensor may comprise a laser scanner. The sensor may comprise LIDAR. The laser scanner may be two dimensional with a scan frequency of at least 100,000 points per second. The scan frequency may be at least 100 profiles per second. The scanner may be configured to capture depth data whilst the vehicle is passing through the capture window at a speed of between 2 and 100km/hr, more preferably at a speed of between 5 and 50km/hr, and even more preferably at a speed of between 8 and 40km/hr. It should be appreciated that sensor parameters may be selected and/or configured to suit its location, environment, and/or operating conditions. For example, higher vehicle speeds typically require higher scan frequencies and lower vehicle speeds typically require lower scan frequencies.

[0011] The depth data from the sensor may comprise one or more of range, displacement, angle, and reflectance. The depth data may further comprise a timestamp and/or scan line number.
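
By way of illustration only, one plausible record layout for such depth data is sketched below in Python; the DepthSample type, its field names, and its units are assumptions for illustration and do not appear in the specification.

    import math
    from dataclasses import dataclass

    @dataclass
    class DepthSample:
        range_m: float      # measured distance from the sensor to the surface (metres)
        angle_rad: float    # beam angle within the scan plane (radians)
        reflectance: float  # normalised return intensity, 0.0 to 1.0
        timestamp_s: float  # capture time in seconds
        scanline: int       # index of the profile this sample belongs to

        def displacement(self) -> tuple[float, float]:
            # Project the polar (range, angle) sample to planar coordinates
            # across and below the sensor.
            return (self.range_m * math.sin(self.angle_rad),
                    self.range_m * math.cos(self.angle_rad))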

[0012] The processor may be further configured to determine the speed of the vehicle. The processor may receive data relating to the speed of the vehicle from another source. The processor may determine the speed of the vehicle from the depth data received from the sensor. The processor may be further configured to determine the speed of the vehicle by using a second sensor configured to measure the speed of the vehicle. The second sensor may be a LIDAR sensor. The second sensor may be configured to identify approaching objects, such as the vehicle. The second sensor may control operation of the first sensor. The second sensor may control operation of the first sensor by actuating the first sensor. The second sensor may control operation of the first sensor by altering performance characteristics of the first sensor. The second sensor may control operation of the first sensor via the processor, such that it is the processor that is controlling the operation of the first sensor based upon information received from the second sensor.

[0013] The sensor may be mounted above a target surface. The sensor may be mounted to an aboveground support structure. The sensor may be mounted to the support structure between around 2m and 12m above the target surface, preferably between around 4m and 10m above the target surface. The sensor may be mounted to a wall of an underground chamber such as a tunnel of an underground mine. The sensor may be mounted to a ceiling surface of the underground chamber. The sensor may be mounted to the ceiling surface of the underground chamber between around 1m and 4m above the target surface, preferably between around 2m and 3m above the target surface.
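
By way of illustration only, the following Python sketch shows one plausible form of the control arrangement described above, in which the processor actuates the main scanner when the second sensor detects an approaching vehicle and then scales the profile rate with vehicle speed to hold the scanline spacing roughly constant; the Scanner class, trigger distance, and target spacing are assumptions for illustration.

    TRIGGER_RANGE_M = 15.0  # assumed distance at which an approaching vehicle triggers capture
    TARGET_STEP_M = 0.011   # assumed target spacing between scanlines (~11mm)

    class Scanner:
        def __init__(self) -> None:
            self.active = False
            self.profile_rate_hz = 100

        def actuate(self, active: bool) -> None:
            self.active = active

    def on_second_sensor_reading(scanner: Scanner, range_m: float,
                                 speed_m_s: float) -> None:
        # Processor-side control: start the main scanner when a vehicle comes
        # within range, then scale the profile rate with vehicle speed so the
        # spacing between scanlines stays roughly constant.
        if range_m < TRIGGER_RANGE_M and not scanner.active:
            scanner.actuate(True)
        if scanner.active and speed_m_s > 0:
            scanner.profile_rate_hz = max(100, int(speed_m_s / TARGET_STEP_M))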

[0014] The processor may be further configured to determine sun data such as the angle of the sun relative to the sensor and/or vehicle. The sensor may have a field of view that is larger than the capture window. The processor may be configured to disregard any objects scanned by the sensor that are deemed not to be within the capture window. The processor may be configured to disregard a top region of data points, such as 180° data for example, on each scan due to the sun.
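
By way of illustration only, discarding returns outside the capture window might be sketched as follows in Python; the function name and angular-window representation are assumptions for illustration.

    import numpy as np

    def filter_capture_window(angles_rad: np.ndarray, ranges_m: np.ndarray,
                              window_rad: tuple[float, float]) -> np.ndarray:
        # Keep only returns whose beam angle lies inside the capture window,
        # discarding the top-of-scan region that direct sunlight can corrupt.
        keep = (angles_rad >= window_rad[0]) & (angles_rad <= window_rad[1])
        return ranges_m[keep]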

[0015] The processor may be further configured to determine environmental data such as weather interference (e.g. rain). Preferably the step of generating a three dimensional representation from the depth data comprises using the depth data as well as one or more of vehicle speed data, sun data, and weather data.

[0016] The processor may be further configured to measure a volume of the materials being transported by the vehicle. Measuring the volume of materials may comprise determining a load carrying capacity of the vehicle. Determining a load carrying capacity of the vehicle may comprise obtaining pre-determined or previously calculated capacity data from a database. Determining a load carrying capacity of the vehicle may comprise capturing depth data of a load carrying portion of the vehicle as it passes through the capture window of the sensor when empty. The load carrying capacity of a vehicle may be stored in a database. The load carrying portion of the vehicle may be in the form of a truck bucket or tray, for example.

[0017] Capturing depth data of a load carrying portion of the vehicle as it passes through the capture window of the sensor when empty may comprise segmenting the load carrying portion into regions. Measuring the volume of materials may comprise comparing depth data of a load defined by the materials being transported by the vehicle to the determined load carrying capacity of the vehicle.

[0018] The three dimensional representation may comprise a two dimensional image with pixel characteristics representing depth. The image may be a bitmap. The pixel characteristics may comprise pixel intensity. The pixel characteristics may comprise pixel colour. Each pixel may be representative of measured depth data. Alternatively, some pixels may be representative of measured depth data and some pixels may be representative of estimated depth data derived from the measured depth data.
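
By way of illustration only, the following Python sketch shows one plausible way of stacking scanline profiles into a two dimensional depth image and normalising it to a greyscale bitmap; the function names and the convention of marking missing returns with NaN are assumptions for illustration, with gaps estimated by interpolation from neighbouring measured points so that most pixels hold measured depth.

    import numpy as np

    def scanlines_to_depth_image(profiles: list) -> np.ndarray:
        # Stack per-profile depth arrays (all the same length, NaN for missing
        # returns) into a 2D image: rows are scanlines along the direction of
        # travel, columns are points within each profile.
        img = np.vstack(profiles).astype(float)
        for row in img:
            nan = np.isnan(row)
            if nan.any() and not nan.all():
                # Estimate missing depths from neighbouring measured points.
                idx = np.arange(row.size)
                row[nan] = np.interp(idx[nan], idx[~nan], row[~nan])
        return img

    def to_bitmap(depth: np.ndarray) -> np.ndarray:
        # Normalise depth to 0-255 greyscale pixel intensities.
        d = depth - np.nanmin(depth)
        return (255 * d / max(float(np.nanmax(d)), 1e-9)).astype(np.uint8)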

[0019] The processor may constrain the three dimensional representation to the identified region of interest. The processor may be further configured to apply a mask to the region of interest. The region of interest may comprise a region including the materials being carried by a vehicle. The processor may be further configured to identify fragments smaller than a predetermined size. The processor may be configured to identify regions of fines when recognising material fragments within the region of interest.

[0020] The materials preferably comprise mining materials such as blast rock or ore. The vehicle preferably comprises a mining transportation vehicle or haul vehicle. The materials are preferably located in a bucket, tray, dump body or box, or shovel of the vehicle.

[0021] The fragmentation profile generated from the classified material fragments preferably comprises a particle size distribution (PSD).

[0022] In another form, there is provided a method of determining fragmentation of materials being transported by a vehicle, the method comprising: driving the vehicle past a sensor such that the materials pass through a capture window of the sensor; capturing depth data of the materials as they pass through the capture window of the sensor; generating a three dimensional representation from the captured depth data; identifying a region of interest; recognising material fragments within the region of interest; classifying the recognised material fragments; and outputting a fragmentation profile generated from the classified material fragments.

[0023] The depth data may comprise a plurality of scanlines, preferably output from a LIDAR sensor. The step of generating a three dimensional representation may comprise converting the scanlines to a two dimensional image with pixel characteristics representing depth.

[0024] Further features and advantages of the present invention will become apparent from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] By way of example only, preferred embodiments of the invention will be described more fully hereinafter with reference to the accompanying figures, wherein:

[0026] Figure 1 illustrates image representations of example data captured from a depth sensor;

[0027] Figure 2 illustrates a three dimensional representation generated from depth data captured from a three dimensional sensor such as that illustrated in figure 1;

[0028] Figure 3 illustrates a diagrammatic view of a vehicle carrying a fragmented load being captured by a sensor;

[0029] Figure 4 illustrates example data captured from the sensor of figure 3;

[0030] Figure 5 illustrates a three dimensional representation generated from three dimensional data captured of a mining bucket;

[0031] Figure 6 illustrates a mask generated from the data of figure 5;

[0032] Figure 7 illustrates a three dimensional representation generated from data of figure 5;

[0033] Figure 8 illustrates a fragmentation identification from the data of figure 5 constrained by the mask of figure 6;

[0034] Figure 9 illustrates the fragmentation identification of figure 8 with regions identified as being fines removed;

[0035] Figure 10 is a bar graph showing estimated fragment sizes from the fragmentation identification data of figure 9;

[0036] Figure 11 illustrates a three dimensional representation generated from three dimensional data captured of another mining bucket;

[0037] Figure 12 is a bar graph showing estimated fragment sizes from the data of figure 11;

[0038] Figure 13 illustrates a three dimensional representation of another example captured from a depth sensor;

[0039] Figure 14 illustrates the three dimensional representation during processing; and

[0040] Figure 15 illustrates a fragmentation identification from the data of figures 13 and 14.

DETAILED DESCRIPTION OF THE DRAWINGS

[0041] Figure 1 shows four separate image representations 10a, 10b, 10c, and 10d of example data captured from a depth sensor such as a two dimensional scanning LIDAR sensor. It should be appreciated that the specific sensor and its operating parameters can be selected and/or configured to suit operating conditions and requirements. As can be seen in the image representations, sample materials have been scanned by the sensor and are shown two dimensionally as shaded line drawings. The source data includes depth data ascertained by the sensor as the materials pass through its capture window.

[0042] Figure 2 illustrates a three dimensional representation in the form of a two dimensional bitmap 20 with each pixel representing depth. As the captured depth data from the sensor may not map directly to each pixel, some pixels may represent measured depth and other pixels may represent estimated depth. In such cases the depth data is preferably of sufficient resolution such that the majority of pixels represent measured depth rather than estimated depth.

[0043] Figure 3 illustrates a diagrammatic view of a vehicle 100 carrying a fragmented load 110 being captured by a LIDAR sensor 120, which may be mounted to a support structure such as a frame, gantry, or scaffold (for example). In underground applications the LIDAR sensor 120 may be mounted to the ceiling of a tunnel, or the like. In use, the LIDAR sensor 120 preferably scans at least the fragmented load 110 at a high speed (e.g. in a preferred form at a frequency of between around 100Hz and 500Hz) as the vehicle 100 drives past at a speed of between approximately 8km/hr and 40km/hr.

[0044] The scan frequency depends on various factors including, for example, the distance the LIDAR sensor 120 is mounted above the vehicle 100. In aboveground operations the LIDAR sensor 120 may be mounted approximately 4 to 10 metres above the vehicle 100 and in underground operations the LIDAR sensor 120 may be mounted approximately 1 to 4 metres above the vehicle. Combining the LIDAR sensor 120 distance and scan frequency with the speed of the vehicle 100 determines a step size between scanlines, which in turn effectively determines the resolution of the depth data. A greater distance, lower scan frequency, and/or faster vehicle will result in a lower resolution compared to a shorter distance, higher scan frequency, and/or slower vehicle scenario. The resolution needs to be sufficient to meet minimum fragmentation size requirements for that particular site and/or materials.

[0045] For example, with a sensor that scans 200 lines per second mounted approximately 4 metres above the fragmented load 110, a vehicle 100 travelling at 8km/hr will result in a step between scanlines of approximately 11mm, with steps between points in each scan of between approximately 5 and 6mm. With such a resolution it is estimated that individual fragments of around 55mm to 110mm and above can be detected. For contrast, if the same vehicle 100 is travelling at 40km/hr then the step between scanlines is approximately 55mm. Assuming the steps between points in each scan remain the same, it is estimated that individual fragments of around 500mm and above could be detected, with 250mm fragments possibly being detectable depending on orientation (e.g. whether a fragment lies across or in line with the scanlines).
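
The arithmetic of the preceding example can be reproduced with a short calculation, sketched below in Python for illustration only: the step between scanlines is simply the vehicle speed divided by the scan rate.

    def scanline_step_mm(speed_km_h: float, scan_rate_hz: float) -> float:
        # Distance travelled between successive profiles, in millimetres.
        return speed_km_h / 3.6 / scan_rate_hz * 1000.0

    print(scanline_step_mm(8, 200))   # ~11.1mm, matching the 8km/hr example above
    print(scanline_step_mm(40, 200))  # ~55.6mm, matching the 40km/hr example above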

[0046] This depth data captured by the LIDAR sensor 120 is then converted into a bitmap image 130. Figure 4 illustrates a close up view of the bitmap image 130, shown in figure 3, for a vehicle 100 carrying a fragmented load 110.

[0047] Figure 5 illustrates a bitmap image 230 of another fragmented load 210 being carried by a vehicle 200 (only partially visible). Figure 6 illustrates a mask generated from the data of figure 5, separating a region identified to contain the vehicle 202 from a region identified to contain the fragmented load 212, being the region of particular interest. Figure 7 illustrates a three dimensional representation generated from the data of figure 5 with rocks 214 emphasised. Figure 8 illustrates a fragmentation identification from the data of figure 5 constrained by the mask of figure 6. Fragments 216 may be determined by identifying dividing ridges and/or troughs in the bitmap image. Figure 9 illustrates the fragmentation identification of figure 8 with regions identified as being fines 218 excluded, such that the fragments 216 are representative of rocks 214 contained in the fragmented load 210. The fragmentation identification preferably includes an array of particle objects. Each particle object preferably comprises attributes such as, for example, one or more of sieve size, area, and centre (e.g. x, y, z co-ordinates). The fragments 216 can then be classified by size, such as into predetermined size classes that may match processing requirements (e.g. sieve size), and a fragmentation profile outputted. Figure 10 illustrates a fragmentation profile in the form of a particle size distribution (PSD) shown as a bar graph 240 having fragment sieve size class in mm along the x-axis and number of rocks identified along the y-axis.
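
By way of illustration only, one plausible implementation of the ridge/trough based fragment identification and PSD binning described above is a watershed-style segmentation of the depth image, sketched below in Python using scikit-image; the specification does not name a particular segmentation algorithm, and the function names, marker strategy, and fines threshold are assumptions for illustration.

    import numpy as np
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def fragment_psd(depth: np.ndarray, load_mask: np.ndarray,
                     mm_per_px: float, bins_mm: list,
                     min_area_px: int = 4) -> np.ndarray:
        # Seed one marker per local high point (a fragment top) within the
        # masked region of interest.
        peaks = peak_local_max(depth, min_distance=3,
                               labels=load_mask.astype(int))
        markers = np.zeros(depth.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flood "downhill" from the tops so the dividing ridges/troughs
        # between rocks become the fragment boundaries.
        labels = watershed(-depth, markers, mask=load_mask)
        # Convert each fragment's pixel area to an equivalent sieve size
        # (diameter of the circle with the same area), skipping tiny regions
        # treated as fines.
        sizes_mm = []
        for region in range(1, labels.max() + 1):
            area_px = int(np.count_nonzero(labels == region))
            if area_px < min_area_px:
                continue
            sizes_mm.append(2.0 * np.sqrt(area_px / np.pi) * mm_per_px)
        counts, _ = np.histogram(sizes_mm, bins=bins_mm)
        return counts

The returned counts per size class correspond to the bars of a PSD bar graph such as that of figure 10.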

[0048] Figure 11 illustrates a bitmap image 330 of another fragmented load 310 being carried by a vehicle 300 (only partially visible), and figure 12 is a fragmentation profile in the form of a bar graph 340 having fragment sieve size class in mm along the x-axis and number of rocks identified along the y-axis. It should be appreciated that the fragmentation profile need not be in the form of a bar graph, and may simply comprise fragmentation information including a PSD or metric(s) indicative of rock fragmentation size, for example.

[0049] Figures 13 to 15 illustrate another example of a fragmented load 410. The fragmented load 410 includes a particularly large object 412. As can be seen in figure 15, the large object has been identified as a large fragment. Identification of some medium sized objects 414 can also be observed.

[0050] In addition to fragmentation, a volume of the fragmented load 110 can be measured. The fragmented load 110 can be compared to a load carrying capacity of a load carrying portion 112 of the vehicle 100. The load carrying capacity of the load carrying portion of the vehicle can be measured by driving the vehicle 100 past the sensor 120 when empty (without any load). Alternatively, a predetermined, previously measured load carrying capacity may be retrieved from a database (which may be local or remote, or even transmitted from the vehicle). If scanning the vehicle when empty, the processor preferably segments the load carrying portion 112 into regions to prevent the load carrying portion 112 being identified as fines or fragmented material.
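
By way of illustration only, the volume comparison described above reduces to integrating the per-pixel height difference between the empty and loaded scans, as in the Python sketch below; alignment of the two depth images on a common grid is assumed to have been performed already.

    import numpy as np

    def load_volume_m3(empty_depth: np.ndarray, loaded_depth: np.ndarray,
                       px_area_m2: float) -> float:
        # With the sensor above the vehicle, the loaded surface is closer to
        # the sensor than the empty tray floor, so material height per pixel
        # is the empty depth minus the loaded depth (clipped at zero to
        # suppress noise).
        height = np.clip(empty_depth - loaded_depth, 0.0, None)
        return float(height.sum() * px_area_m2)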

[0051] In a preferred embodiment, during setup various vehicles 100 are scanned empty with their load carrying capacities being stored in a database. In such situations, the vehicle 100 only needs to pass the sensor 120 when loaded. Vehicle analysis software can then compare a scanned volume of materials to the previously measured load carrying capacity for that vehicle 100 from the database. The load carrying capacity could be a designated volume or a last measured volume. The load carrying capacity may be in the form of a subtracted depth data set which can be aligned and compared with captured depth data of the load.

[0052] Advantageously, the processing of the materials being carried by the vehicle can be adjusted depending upon the fragmentation profile. Furthermore, the fragmentation information can be used as feedback for mining operations, helping to ensure optimum blasting. With accurate depth data from a LIDAR sensor, depth data of the materials can be obtained quickly during normal transportation of the materials by a vehicle such as a load haul vehicle. Because the materials can be analysed during normal transportation, there is no delay caused by slowing down or stopping the materials to perform a fragmentation analysis. Furthermore, because the analysis occurs whilst the materials are being transported by a vehicle, before the materials reach a conveyor for example, there is reduced delay in making an assessment after a blast, improving both blasting control feedback and further processing options. Additionally, the volume of the load can be measured using the same sensor. Knowing the volume can be helpful for both operational and downstream processes.

[0053] Although the invention is described with respect to a preferred mining application, it should be appreciated that the invention could be utilised in relation to other fields where determining object size during transportation may be desirable such as, for example, in agriculture, manufacturing, transportation, packaging, etc.

[0054] In this specification, adjectives such as first and second, left and right, top and bottom, and the like may be used solely to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. Where the context permits, reference to an integer or a component or step (or the like) is not to be interpreted as being limited to only one of that integer, component, or step, but rather could be one or more of that integer, component, or step etc.

[0055] The above description of various embodiments of the present invention is provided for purposes of description to one of ordinary skill in the related art. It is not intended to be exhaustive or to limit the invention to a single disclosed embodiment. As mentioned above, numerous alternatives and variations to the present invention will be apparent to those skilled in the art in light of the above teaching. Accordingly, while some alternative embodiments have been discussed specifically, other embodiments will be apparent or relatively easily developed by those of ordinary skill in the art. The invention is intended to embrace all alternatives, modifications, and variations of the present invention that have been discussed herein, and other embodiments that fall within the spirit and scope of the above described invention.

[0056] As used herein, an element or operation recited in the singular and preceded with the word "a" or "an" should be understood as not excluding plural elements or operations, unless such exclusion is explicitly recited. Furthermore, references to "one embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

[0057] In this specification, the terms ‘comprises’, ‘comprising’, ‘includes’, ‘including’, or similar terms are intended to mean a non-exclusive inclusion, such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.