

Title:
MAPPING OF MINING EXCAVATIONS
Document Type and Number:
WIPO Patent Application WO/2013/170348
Kind Code:
A1
Abstract:
An apparatus for installation on a vehicle suitable for mining excavation, including at least a first and a second camera configured to capture digital images of at least a portion of the mining excavation; a data processor in communication with the first camera and the second camera; the data processor being operative for generating a digital three-dimensional (3D) representation of the portion of the mining excavation. The apparatus further controls at least one operation of the vehicle.

Inventors:
STEELE RODERICK MARK (CA)
Application Number:
PCT/CA2013/000307
Publication Date:
November 21, 2013
Filing Date:
March 28, 2013
Assignee:
TESMAN INC (CA)
International Classes:
E21C35/00; B60R11/04; E21C35/08; G06T17/00; G06T17/20; H04N7/18; H04N13/239
Foreign References:
US6296317B1 (2001-10-02)
CA2657161A1 (2009-09-04)
Attorney, Agent or Firm:
NORTON ROSE FULBRIGHT CANADA LLP/S.E.N.C.R.L., S.R.L. et al. (1 Place Ville Marie, Montréal, Québec H3B 1R1, CA)
Claims:
WHAT IS CLAIMED IS:

1. An apparatus for installation on a vehicle, the apparatus being useful for mapping a mining excavation and also controlling at least one operation of the vehicle, the apparatus comprising:

a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of the digital images captured by the first camera and the second camera;

generate signals representative of a digital 3D representation of the portion of the mining excavation based on the captured digital images; and generate signals useful in the operation of the vehicle based on at least one of the captured digital images.

2. The apparatus as defined in claim 1, wherein at least one of the first field of view and the second field of view includes a portion of the vehicle and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the vehicle from the digital 3D representation of the portion of the mining excavation.

3. The apparatus as defined in claim 2, wherein the portion of the vehicle includes a movable implement and the data processor is responsive to machine-readable instructions causing the data processor to generate signals useful in the operation of the movable implement.

4. The apparatus as defined in claim 3, wherein the movable implement includes a drill boom.

5. The apparatus as defined in any one of claims 1-4, wherein the digital 3D representation of the portion of the mining excavation comprises a 3D mesh.

6. The apparatus as defined in claim 5, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.

7. The apparatus as defined in any one of claims 1-6, wherein the generation of signals representative of the 3D digital representation of the portion of the mining excavation and the generation of signals useful in the operation of the vehicle are conducted individually by the data processor.

8. The apparatus as defined in claim 1, wherein:

the first camera and the second camera are configured to capture low-resolution digital images; and

the data processor is responsive to machine-readable instructions causing the data processor to generate a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images.

9. The apparatus as defined in claim 8, wherein:

at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution digital image of the common portion of the first field of view and the second field of view; and

the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.

10. The apparatus as defined in any one of claims 1-9, wherein the signals representative of the digital 3D representation are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.

11. An apparatus for installation on a vehicle, the apparatus being useful for mapping a mining excavation, the apparatus comprising: a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion, at least one of the first field of view and the second field of view being configured to include a portion of the vehicle; and

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of the digital images captured by the first camera and the second camera; and

generate signals representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in the at least one of the first field of view and the second field of view.

12. The apparatus as defined in claim 11, wherein:

the first camera and the second camera are configured to capture low-resolution digital images; and

the data processor is responsive to machine-readable instructions causing the data processor to generate a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images.

13. The apparatus as defined in claim 12, wherein:

at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution digital image of the common portion of the first field of view and the second field of view; and

the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.

14. The apparatus as defined in any one of claims 11-13, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals useful in the operation of the vehicle based on at least one of the captured digital images.

15. The apparatus as defined in any one of claims 11-14, wherein the digital 3D representation of the portion of the mining excavation comprises a 3D mesh.

16. The apparatus as defined in claim 15, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.

17. An apparatus for installation on a vehicle, the apparatus being useful for mapping a mining excavation, the apparatus comprising:

a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of low-resolution digital images captured by the first camera and the second camera;

generate signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images; receive signals representative of a high-resolution image captured by at least one of the first camera, the second camera and a third camera, the high-resolution image being of the common portion of the first field of view and the second field of view; and

transform the high-resolution digital image according to the 3D mesh.

18. The apparatus as defined in claim 17, wherein at least one of the first field of view and the second field of view is configured to include a portion of the vehicle and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the vehicle from the 3D mesh.

19. The apparatus as defined in any one of claims 17 and 18, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals useful in the operation of the vehicle based on at least one of the captured digital images.

20. The apparatus as defined in any one of claims 17 and 18, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals useful in the operation of a drilling implement of the vehicle based on at least one of the captured digital images.

21. The apparatus as defined in any one of claims 17-20, wherein the signals representative of the 3D mesh are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.

22. A vehicle comprising the apparatus as defined in any one of claims 1-21.

23. A vehicle for conducting drilling in an underground environment, the vehicle comprising the apparatus as defined in any one of claims 1-21.

24. A vehicle for conducting drilling in an underground environment, the vehicle comprising:

a drilling implement;

a first camera and a second camera configured to capture digital images of at least a portion of the underground environment, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the underground environment based on the captured digital images.

25. The vehicle as defined in claim 24, wherein at least one of the first field of view and the second field of view includes at least a portion of the drilling implement.

26. The vehicle as defined in claim 25, wherein the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the movable drilling implement from the digital representation of the portion of the underground environment.

27. The vehicle as defined in any one of claims 24-26, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals useful in the operation of the drilling implement based on the captured digital images.

28. The vehicle as defined in claim 27, wherein the generation of signals representative of a digital 3D representation of the portion of the underground environment and the generation of signals useful in the operation of the drilling implement are conducted individually by the data processor.

29. The vehicle as defined in any one of claims 24-28, wherein the digital 3D representation of the portion of the underground environment comprises a 3D mesh.

30. The vehicle as defined in any one of claims 24-28, wherein:

the first camera and the second camera are configured to capture low-resolution digital images;

the data processor is responsive to machine-readable instructions causing the data processor to generate a digital 3D mesh of at least a portion of the underground environment based on the low-resolution digital images; at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution image of the common portion of the first field of view and the second field of view; and

the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.

31. The vehicle as defined in any one of claims 24-30, wherein the signals representative of the digital 3D representation are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.

32. The vehicle as defined in any one of claims 24-31, comprising one or more standard operating lights, wherein the cameras are configured so that the common portion of the first field of view and the second field of view is illuminated by the one or more standard operating lights.

33. A method for mapping a mining excavation and also controlling at least one operation of a vehicle, the method performed by a data processor and comprising: receiving signals representative of at least two digital images of at least a common portion of the mining excavation;

generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images; and

generating signals useful in the at least one operation of the vehicle based on signals representative of at least one of the digital images.

34. The method as defined in claim 33, wherein at least one of the digital images includes a portion of the vehicle and the digital 3D representation of the portion of the mining excavation excludes the portion of the vehicle.

35. The method as defined in claim 34, wherein the portion of the vehicle includes a drill boom and the signals useful in the at least one operation of the vehicle are useful in controlling the drill boom.

36. The method as defined in any one of claims 33-35, wherein the digital 3D representation of the portion of the mining excavation comprises a 3D mesh.

37. The method as defined in claim 36, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.

38. The method as defined in any one of claims 33-37, wherein the generation of signals representative of the 3D digital representation of the portion of the mining excavation and the generation of signals useful in the at least one operation of the vehicle are conducted individually.

39. The method as defined in claim 33, wherein:

the at least two digital images are low-resolution digital images; and the digital 3D representation of the portion of the mining excavation comprises a 3D mesh based on the low-resolution digital images.

40. The method as defined in claim 39, comprising:

receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and

transforming the high-resolution digital image according to the 3D mesh.

41. The method as defined in any one of claims 33-40, wherein the signals representative of the digital 3D representation are generated based on stereo matching of the at least two digital images.

42. A method for mapping a mining excavation, the method performed by a data processor mounted to a vehicle, the method comprising: receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of the vehicle; and

generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle.

43. The method as defined in claim 42, wherein:

the at least two digital images are low-resolution digital images; and the digital 3D representation of the portion of the mining excavation comprises a 3D mesh based on the low-resolution digital images.

44. The method as defined in claim 43, comprising:

receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and

transforming the high-resolution digital image according to the 3D mesh.

45. The method as defined in any one of claims 42-44, comprising generating signals useful in at least one operation of the vehicle based on signals representative of at least one of the digital images.

46. The method as defined in claim 42, wherein the digital 3D representation of the portion of the mining excavation comprises a 3D mesh.

47. The method as defined in claim 46, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.

48. A method for mapping a mining excavation, the method performed by a data processor mounted to a vehicle, the method comprising:

receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation; generating signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images;

receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and

transforming the high-resolution digital image according to the 3D mesh.

49. The method as defined in claim 48, wherein at least one of the digital images includes a portion of the vehicle and the 3D mesh excludes the portion of the vehicle.

50. The method as defined in any one of claims 48 and 49, comprising generating signals useful in the operation of the vehicle based on at least one of the captured digital images.

51. The method as defined in any one of claims 48-50, comprising generating signals useful in the operation of a drilling implement of the vehicle based on at least one of the captured digital images.

52. The method as defined in any one of claims 48-51, wherein the signals representative of the 3D mesh are generated based on stereo matching of the at least two low-resolution digital images.

Description:
MAPPING OF MINING EXCAVATIONS

TECHNICAL FIELD

[0001] The disclosure relates generally to underground mining operations, and more particularly to mapping of mining excavations.

BACKGROUND OF THE ART

[0002] The creation of photo-realistic three-dimensional (3D) models of observed scenes has been an active research topic for years. Such 3D models can be useful for both visualization and measurements in various applications. Existing methods typically require specialized equipment including high-resolution cameras, camera mounts and customized lighting that must be deployed and used on site by trained personnel. Existing methods can also require significant computing time and power. Accordingly, existing methods used to create such models are typically conducted under controlled environmental conditions (e.g., lighting) and can be relatively difficult and expensive to conduct in underground environments.

[0003] Improvement is therefore desirable.

SUMMARY

[0004] The disclosure describes apparatus and methods for mapping mining excavations. In some examples, the apparatus disclosed herein may be suitable for installation on a vehicle and some of the methods disclosed herein may be conducted onboard such vehicle. For example, the apparatus and methods disclosed herein may be suitable for generating three-dimensional (3D) digital representations of mining excavations (including tunnels) and may be integrated in mining vehicles including those suitable for underground operations such as drilling machines (e.g., jumbo drills).

[0005] In one aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus may be useful for mapping a mining excavation and also controlling at least one operation of the vehicle. The apparatus comprises: a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion; a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of the digital images captured by the first camera and the second camera;

generate signals representative of a digital 3D representation of the portion of the mining excavation based on the captured digital images; and generate signals useful in the operation of the vehicle based on at least one of the captured digital images.

[0006] In another aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus may be useful for mapping a mining excavation. The apparatus comprises:

a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion, at least one of the first field of view and the second field of view being configured to include a portion of the vehicle; and

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of the digital images captured by the first camera and the second camera; and

generate signals representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in the at least one of the first field of view and the second field of view.

[0007] In another aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus is useful for mapping a mining excavation. The apparatus comprises:

a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of low-resolution digital images captured by the first camera and the second camera;

generate signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images; receive signals representative of a high-resolution image captured by at least one of the first camera, the second camera and a third camera, the high-resolution image being of the common portion of the first field of view and the second field of view; and

transform the high-resolution digital image according to the 3D mesh.

[0008] In another aspect, the disclosure describes a vehicle for conducting drilling in an underground environment. The vehicle comprises:

a drilling implement;

a first camera and a second camera configured to capture digital images of at least a portion of the underground environment, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:

receive signals representative of images captured by the first camera and the second camera; and

generate signals representative of a digital 3D representation of the portion of the underground environment based on the captured digital images.

[0009] In another aspect, the disclosure describes a method for mapping a mining excavation and also controlling at least one operation of a vehicle. The method may be performed by a data processor and comprises:

receiving signals representative of at least two digital images of at least a common portion of the mining excavation;

generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images; and

generating signals useful in the at least one operation of the vehicle based on signals representative of at least one of the digital images.

[0010] In another aspect, the disclosure describes a method for mapping a mining excavation. The method may be performed by a data processor mounted to a vehicle. The method comprises:

receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of the vehicle; and

generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle.

[0011] In another aspect, the disclosure describes a method for mapping a mining excavation. The method may be performed by a data processor mounted to a vehicle. The method comprises:

receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation;

generating signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images;

receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and

transforming the high-resolution digital image according to the 3D mesh.

[0012] In another aspect, the disclosure describes vehicles including drilling machines comprising apparatus disclosed herein. In a further aspect, the disclosure describes such vehicles onboard which methods disclosed herein may be conducted.

[0013] Further details of these and other aspects of the subject matter of this application will be apparent from the detailed description and drawings included below.

DESCRIPTION OF THE DRAWINGS

Reference is now made to the accompanying drawings, in which:

[0014] FIG. 1 shows a schematic representation of an apparatus for mapping of a mining excavation according to one embodiment;

[0015] FIG. 2 shows a schematic representation of the apparatus of FIG. 1 incorporated in a vehicle;

[0016] FIG. 3 shows a more detailed schematic representation of the apparatus of FIG. 1 ;

[0017] FIG. 4 shows a schematic side elevation view of a vehicle to which the apparatus of FIG. 1 may be mounted;

[0018] FIG. 5 shows a photograph of the vehicle of FIG. 4;

[0019] FIG. 6 shows a linear image taken from a camera of the apparatus of FIG. 1;

[0020] FIG. 7 shows a visual representation of a 3D mesh generated using the apparatus of FIG. 1 ;

[0021] FIG. 8 shows a visual representation of a transformed image generated using the linear image of FIG. 6, the 3D mesh of FIG. 7 and the apparatus of FIG. 1 ;

[0022] FIG. 9 shows a flow chart illustrating a method for mapping mining excavations;

[0023] FIG. 10 shows a flow chart illustrating a method for associating camera calibration parameters to a linear image;

[0024] FIG. 11 shows a flow chart illustrating a method for deskewing a linear image based on camera calibration parameters;

[0025] FIG. 12 shows a flow chart illustrating a method for applying a Gaussian blur to an image;

[0026] FIG. 13 shows a flow chart illustrating a method for producing a merged disparity image;

[0027] FIG. 14 shows a flow chart illustrating a method for producing a 3D mesh and transforming an image based on the 3D mesh;

[0028] FIG. 15 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation and generating signals useful in the operation of a vehicle;

[0029] FIG. 16 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation based on digital images and excluding a portion of a vehicle captured in the digital images; and

[0030] FIG. 17 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation based on low-resolution and high- resolution digital images.

DETAILED DESCRIPTION

[0031] Aspects of various embodiments are described through reference to the drawings.

[0032] Although terms such as "maximize", "minimize" and "optimize" may be used in the present disclosure, it should be understood that such terms may be used to refer to improvements, tuning and refinements which may not be strictly limited to maximal, minimal or optimal.

[0033] In some example embodiments, the present disclosure describes apparatus and methods for image/motion capture in an underground environment, or other harsh environments, such as where there may be poor lighting, high vibrations, dust and mud, limited power, limited space and rough handling of equipment. In particular, the disclosed apparatus and methods may involve the extrapolation, from video, still images or other electronic representations, of data regarding the position of one or more subjects in the images. By capturing images of subjects from two or more known vantage points, this data may be extrapolated into three-dimensional (3D) data (e.g., x, y and z co-ordinates).

[0034] FIG. 1 is a schematic representation of an exemplary apparatus 10 that may be used for mapping of mining excavations such as underground environments including tunnels, for example. As explained further below, apparatus 10 may also be useful in controlling at least one operation of a vehicle to which apparatus 10 may be mounted.

[0035] Apparatus 10 may comprise one or more digital cameras 12 and data processing device(s) 14. For example, two or more cameras 12 (including multiple pairs of cameras 12) may be required so that two or more digital images of a portion of a mining excavation to be mapped may be acquired from different locations (vantage points) and stereo matching may be performed (e.g., stereophotogrammetry). Digital camera(s) 12 and data processing device(s) 14 may be coupled to permit digital images captured by camera(s) 12 to be received, stored and/or processed by data processing device(s) 14 in accordance with methods described herein. Digital camera(s) 12 and data processing device(s) 14 may also be configured to provide a live view of the portion of the mining excavation. Apparatus 10 may be configured to generate output(s) 16 which may, for example, be useful in generating digital 3D representations of existing mining excavations (e.g., tunnels) for mining operations. For example, apparatus 10 may be useful in generating three-dimensional (3D) geometric models of underground tunnels and/or 3D transformed images (e.g., 3D textured maps) useful in geological exploration and monitoring.
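
For illustration, the following is a minimal C++ sketch (not from the patent) of how a 3D point may be recovered once a pixel has been matched in two images: for a rectified camera pair, the horizontal disparity yields depth by triangulation. The focal length, baseline and principal point values used are hypothetical.

#include <cstdio>

struct Point3D { double x, y, z; };

// Standard stereo triangulation for a rectified pair: depth Z = f * B / d,
// where d is the horizontal disparity between the left and right views.
Point3D triangulate(double xL, double yL, double xR,
                    double f, double baseline,
                    double cx, double cy) {
    double d = xL - xR;               // disparity in pixels
    double z = f * baseline / d;      // depth along the optical axis
    double x = (xL - cx) * z / f;     // lateral offset
    double y = (yL - cy) * z / f;     // vertical offset
    return {x, y, z};
}

int main() {
    // A point seen at (400, 260) in the left image and (385, 260) in the
    // right image, with f = 500 px, baseline = 0.5 m, centre (320, 240).
    Point3D p = triangulate(400, 260, 385, 500.0, 0.5, 320.0, 240.0);
    std::printf("x=%.2f y=%.2f z=%.2f (m)\n", p.x, p.y, p.z);
    return 0;
}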

[0036] FIG. 2 shows that apparatus 10 may be mounted to or incorporated in a stationary or mobile piece of equipment such as, for example, vehicle 18. Vehicle 18 may be suitable for traveling in a tunnel of a mine and may be configured to perform one or more mining-related tasks such as drilling. For example, vehicle 18 may comprise one or more drilling or other type(s) of implements related to mining operations. Vehicle 18 may be configured for use in vertical and/or horizontal excavations and/or tunnels under shaft sinking galloways, for example.

[0037] Data processing device(s) 14 may, for example, include a relatively low-power, portable and small-footprint computer such as a Mac™ Mini. The use of a low-power and portable system may be suitable for an underground environment, because of the limited space and power available. The use of a low-power and portable system may also be suitable for incorporation into vehicle 18. Other conventional or other types of data processing device(s) 14 may also be suitable.

[0038] Apparatus 10 may comprise one or more input devices 17 such as a keyboard, mouse, touchpad, touch screen, switches, buttons and/or other type of input device(s) suitable for permitting data processing device(s) 14 to receive input from an operator. Apparatus 10 may comprise one or more display(s) 19 for displaying a graphic user interface with responsive objects for receiving input from an operator of apparatus 10. Display(s) 19 may also display information about the status/operation of apparatus 10 and/or the status/operation of vehicle 18. For example, display(s) 19 may comprise a touch screen for receiving input from the operator. The graphic user interface shown on display(s) 19 may be used to start and/or control one or more operations of apparatus 10 and/or one or more operations of vehicle 18. For example, the graphic user interface may be used to set appropriate settings for camera(s) 12 such as, for example, exposure settings, shutter timing(s), gain(s) and alignment settings.

[0039] Camera(s) 12 may comprise relatively low-power YUV (i.e., black and white) or color (RGB) digital cameras. Camera(s) 12 may have a relatively low pixel density and small size (e.g., about 1 cubic inch); however, it should be understood that other types of camera(s) 12 may be suitable. Camera(s) 12 may have relatively low pixel resolution, as a trade-off for lower processing times. For example, camera(s) 12 may have a resolution of 640x480 pixels or lower, and may have a power consumption of about 2W at 12VDC. For example, camera(s) 12 may include one or more Bonsai™ Fire-i™ digital cameras sold under the trade name Unibrain™. Other suitable cameras may be used, and the power consumption and pixel resolution of the camera(s) 12 may be different for different applications and requirements. For example, the resolution described above may be suitable for motion capture and photography of a subject at a range of up to 30 feet. Higher resolutions, such as up to 2448x2048, may be used for motion capture and/or still photography of a subject at a farther range, such as up to 140 feet. For example, camera(s) 12 may be configured to capture relatively low-resolution images (e.g., 640x480 pixels or lower) and/or high-resolution images (e.g., 1024x600 pixels or higher). Alternatively, one or more of camera(s) 12 may be configured to capture low-resolution digital images and one or more of camera(s) 12 may be configured to capture digital images of higher resolution. Outputs 16 may be stored within data processing device(s) 14 onboard vehicle 18 and/or exported from vehicle 18.

[0040] FIG. 3 shows a more detailed schematic representation of apparatus 10. For example, data processing device 14 may comprise one or more data processors 20. Data processor 20 may comprise one or more digital computer(s) or other data processors. Data processing device(s) 14 may also comprise memory(ies) 22 and memory data devices or register(s) 24. Memory(ies) 22 may comprise any storage means (e.g., devices) suitable for retrievably storing machine-readable instructions executable by processor(s) 20. Memory(ies) 22 may be non-volatile. For example, memory(ies) 22 may include erasable programmable read only memory (EPROM) and/or flash memory. Such machine-readable instructions may cause processor(s) 20 to: receive signals 26 representative of digital images captured by camera(s) 12; generate signals 16a representative of a digital 3D representation of the portion of a mining excavation based on the captured digital images; and generate signals 16b useful in the operation of vehicle 18 based on at least one of the captured digital images. Memory(ies) 22 may also comprise any data storage devices suitable for storing data received and/or generated by processor(s) 20, preferably retrievably. For example, memory(ies) 22 may comprise one or more of any or all of erasable programmable read only memory(ies) (EPROM), flash memory(ies) or other electromagnetic media suitable for storing electronic data signals in volatile or non-volatile, non-transient form.

[0041] Data processing device(s) 14 may be configured to perform two or more functions. For example, while data processing device(s) 14 may be configured to: (1) generate signals 16a representative of a digital 3D representation of the portion of a mining excavation based on the captured digital images; and (2) generate signals 16b useful in the operation of vehicle 18 based on at least one of the captured digital images, the generation of signals 16a and 16b may be conducted simultaneously or individually (i.e., separately). For example, in order to reduce the amount of processing time required from data processing device(s) 14, it may be desired to generate signals 16a and 16b individually instead of simultaneously. For example, apparatus 10 may be configured to receive input from an operator that is indicative of which of signals 16a and 16b are to be generated at a particular time. For example, such input may be provided by the operator via display(s) 19 or any other suitable input device(s) 17 that may be coupled to data processing device(s) 14. The reduced processing time required from data processing device(s) 14 may facilitate the integration of data processing device(s) 14 on vehicle 18 and may permit the generation of signals 16a and 16b onboard vehicle 18 and substantially in real-time. Accordingly, outputs representative of signals 16a and/or 16b may be presented to an operator of vehicle 18 via display(s) 19.
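
As a hypothetical C++ illustration of this mode selection (the names, types and structure are not from the patent), a processing cycle might compute only the output requested by the operator:

#include <vector>

struct Frame { std::vector<unsigned char> pixels; };
struct Mesh {};      // stands in for the digital 3D representation (16a)
struct BoomPose {};  // stands in for vehicle-control data (16b)

// Stub processing stages; real implementations would do the actual work.
Mesh generate3DRepresentation(const Frame&, const Frame&) { return {}; }
BoomPose trackBoomTargets(const Frame&, const Frame&) { return {}; }
void emitMappingSignals(const Mesh&) {}      // signals 16a
void emitControlSignals(const BoomPose&) {}  // signals 16b

enum class OutputMode { MappingOnly, ControlOnly, Both };

void processFrames(const Frame& left, const Frame& right, OutputMode mode) {
    // Computing only one of the two outputs per cycle reduces the
    // processing load, as described above.
    if (mode != OutputMode::ControlOnly)
        emitMappingSignals(generate3DRepresentation(left, right));
    if (mode != OutputMode::MappingOnly)
        emitControlSignals(trackBoomTargets(left, right));
}

int main() {
    Frame l, r;
    processFrames(l, r, OutputMode::MappingOnly);  // mapping pass only
    return 0;
}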

[0042] FIG. 4 shows a schematic side elevation view of an exemplary vehicle 18 to which apparatus 10 may be mounted. Vehicle 18 may be configured to conduct specific mining-related operations within a mining excavation and accordingly may comprise one or more implements (e.g., tools) for conducting such operations. For example, vehicle 18 may be configured to conduct drilling and may comprise one or more movable drill booms 28 comprising respective drills that may extend in front of and/or hang below vehicle 18. Such a vehicle 18 may also be referred to as a mobile drilling machine, also known as a "jumbo drill". Boom(s) 28 may be maneuvered to position and orient the drills for creating blast holes in the rock in a tunnel or other type of mining excavation. The positioning and orientation of booms 28 may directly affect the accuracy of the positioning of the blast holes. Most conventional jumbo drills typically rely on the judgment of the operator to position/orient boom(s) 28 and drill the blast holes based on visual inspection of boom(s) 28 from cab 30 of vehicle 18. This may result in inconsistent placement and orientation of blast holes. As explained further below, apparatus 10 may assist an operator of vehicle 18 in the movement and positioning of boom(s) 28. For example, apparatus 10 may be useful in assisting an operator of vehicle 18 in accordance with the teachings of PCT application No. PCT/CA2011/001105, filed September 30, 2011 and titled SYSTEMS AND METHODS FOR MOTION CAPTURE IN AN UNDERGROUND ENVIRONMENT, the entire disclosure of which is incorporated herein by reference.

[0043] Vehicle 18 may comprise one or more housings 32 inside which data processing device(s) 14 and power source(s) 34 may be housed. Power source(s) 34 may serve to power data processing device(s) 14, camera(s) 12 and/or display(s) 19. Camera(s) 12 may be mounted to a front side of vehicle 18 and may be positioned and configured such that at least a portion of vehicle 18, such as drilling boom(s) 28 for example, may be within the field(s) of view of camera(s) 12. In cases where multiple cameras 12 are used, the fields of view of two or more of such cameras 12 may have a common portion which may be used for stereo matching. Camera(s) 12 may each have a wide-angle lens providing a field of view of, for example, 180 degrees and may provide wide coverage and adequate view of light reflections. For example, a common portion of a mining excavation and/or a common portion of boom(s) 28 may be within the field of view of a plurality of cameras 12. Camera(s) 12 may be disposed in suitable camera housing(s) 35 in order to protect camera(s) 12 from hazards such as falling rock. In other examples, camera(s) 12 themselves may be relatively robust and resistant to damage, and camera housing(s) 35 may not be necessary. Display(s) 19 may be disposed so as to be visible to an operator inside cab 30. One or more light targets 36 may be disposed on boom(s) 28 and may be used to track the position/movement of boom(s) 28 by apparatus 10.

[0044] FIG. 5 shows a photograph of a front side of the jumbo drill (vehicle 18) of FIG. 4, in which apparatus 10 may be integrated. The front side of vehicle 18 shows that, for example, three (or more) cameras 12 may be disposed and oriented towards a portion of the mining excavation (e.g., tunnel) ahead of vehicle 18. Alternatively, cameras 12 may be positioned on one or more other sides of vehicle 18 to provide visibility of portions of the mining excavation in various directions relative to vehicle 18. As shown, cameras 12 may be positioned along a leading edge of cab 30 of vehicle 18, just under the roof. In other configurations, for example depending on the layout of cab 30, cameras 12 may be positioned at other locations on vehicle 18 such as, for example, above or in front of cab 30. Cameras 12 may also be positioned at other suitable locations on vehicle 18. Cameras 12 may communicate with data processing device(s) 14 through wired or wireless communications. The cameras 12 may have at least partially overlapping fields of view. One or more lights 37 may be used to illuminate the portion of the mining excavation to be mapped during the acquisition of images. The lights may comprise one or more lights 37 provided on vehicle 18, one or more lights provided inside the mining cavity and/or ambient lights.

[0045] Lights 37 may comprise standard lights (e.g., headlights, work lights) that are typically (i.e., by default) provided on vehicle 18. In some examples, apparatus 10 may not require any additional custom/special lighting for operation. For example, lights 37 and camera(s) 12 may generally face the same direction and the illumination provided by lights 37 may in some applications be sufficient for the operation of apparatus 10. In other examples additional lighting may be used to supplement lights 37 if required or desired.

[0046] During operation, apparatus 10 may be used for the generation of signals that may be useful in the operation of at least one aspect of vehicle 18. For example, apparatus 10 may be useful in controlling the movement and position/orientation of drill boom(s) 28 of vehicle 18 shown in FIG. 4. Apparatus 10 may be used to track movement(s) of drill boom(s) 28 via light targets 36. Two or more cameras 12 may capture digital images including light targets 36 and display(s) 19 may provide feedback to an operator of vehicle 18. For example, two or more cameras 12 may capture digital images of targets 36 from different positions (i.e., vantage points) and stereo matching may then be conducted by data processing device(s) 14 to determine the position of targets 36 and thereby determine the position and orientation of drill boom(s) 28.

[0047] An exemplary method for assisting an operator in the operation of at least one aspect of vehicle 18 may include: capturing one or more digital images from at least two cameras 12 where the cameras 12 are positioned at different reference locations to capture image data of at least two light targets 36; determining any spots corresponding to light targets 36 in the digital images by blurring the image data to remove background noise and applying a set of criteria based on predetermined characteristics of light targets 36; calculating three-dimensional (3D) locations of the light targets 36 using stereo matching; and providing feedback to an operator relating to the position of light targets 36 (and consequently drill boom(s) 28).
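
A simplified C++ sketch of the spot-finding step just described (illustrative only; the threshold and area criteria are hypothetical): the image is blurred to suppress background noise, thresholded on brightness, and connected blobs are kept only if their size matches what is expected of a light target 36. The resulting per-camera centroids could then be stereo matched (e.g., by triangulation as sketched earlier) to obtain 3D locations.

#include <cstdint>
#include <cstdio>
#include <vector>

struct Spot { double x, y; int area; };  // centroid and size of one blob

// 3x3 box blur: suppresses pixel-level background noise before detection.
static std::vector<uint8_t> boxBlur(const std::vector<uint8_t>& img,
                                    int w, int h) {
    std::vector<uint8_t> out(img.size(), 0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += img[(y + dy) * w + (x + dx)];
            out[y * w + x] = static_cast<uint8_t>(sum / 9);
        }
    return out;
}

// Finds bright connected blobs and keeps those whose area matches the
// expected size of a light target (criteria values are illustrative).
std::vector<Spot> findSpots(const std::vector<uint8_t>& raw, int w, int h,
                            uint8_t thresh, int minArea, int maxArea) {
    std::vector<uint8_t> img = boxBlur(raw, w, h);
    std::vector<Spot> spots;
    std::vector<int> stack;
    for (int start = 0; start < w * h; ++start) {
        if (img[start] < thresh) continue;
        double sx = 0, sy = 0;
        int area = 0;
        img[start] = 0;                 // mark visited
        stack.push_back(start);
        while (!stack.empty()) {        // flood fill one blob
            int p = stack.back(); stack.pop_back();
            int px = p % w, py = p / w;
            sx += px; sy += py; ++area;
            auto visit = [&](int n) {
                if (img[n] >= thresh) { img[n] = 0; stack.push_back(n); }
            };
            if (px > 0)     visit(p - 1);
            if (px < w - 1) visit(p + 1);
            if (py > 0)     visit(p - w);
            if (py < h - 1) visit(p + w);
        }
        if (area >= minArea && area <= maxArea)
            spots.push_back({sx / area, sy / area, area});
    }
    return spots;
}

int main() {
    // 8x8 test image with one bright 2x2 blob on a dark background.
    int w = 8, h = 8;
    std::vector<uint8_t> img(w * h, 10);
    img[3 * w + 3] = img[3 * w + 4] = img[4 * w + 3] = img[4 * w + 4] = 250;
    std::printf("%zu spot(s)\n", findSpots(img, w, h, 100, 1, 16).size());
    return 0;
}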

[0048] Apparatus 10 may also be used for the generation of a digital 3D representation (e.g., digital map) of at least a portion of a mining excavation such as an underground tunnel. As mentioned above, apparatus 10 may comprise at least one pair of cameras 12 positioned at known positions relative to each other and each having a field of view that is at least partially common (i.e., at least partially overlapping) so that the pair of cameras 12 can be used to acquire images that can be used as a basis for stereo matching. For example, as shown in FIG. 5, apparatus 10 may comprise three cameras 12. The use of more than two cameras 12 may provide redundancy in avoiding blind spots. For example, any two of the three cameras 12 may be used as a pair to acquire stereo images. Accordingly, for a total of three cameras 12 (e.g., left, right and center), three separate pairs of cameras 12 may be available. A blind spot may, for example, include any portion of vehicle 18 or any other object that does not form part of the mining cavity to be mapped. For example, boom(s) 28 may be disposed within the field of view of one or more cameras 12 and obstruct the view of the mining excavation by one or more of cameras 12. Accordingly, depending on the position of boom(s) 28, different combinations of cameras 12 may be used to acquire stereo images suitable for mapping the tunnel ahead of vehicle 18 and with minimal obstruction. A pair of cameras 12 may be selected to minimize obstruction from boom(s) 28. Alternatively, one or more additional pairs of cameras 12 may be used to capture portions of the mining cavity that may have been obstructed when photographed using a first pair of cameras 12. Accordingly, different pairs of cameras from different vantage points may be used to capture digital images of portions of the mining cavity and reduce blind spots caused by obstructions.

[0049] One or more pairs of cameras 12 may be used to capture digital images of the mining excavation (e.g., underground tunnel) in stereo and apparatus 10 may use such images to generate a 3D mesh that may be useful in the geometric modeling of the mining excavation. In addition or alternatively, apparatus 10 may be used to generate digital images that have been transformed according to such 3D mesh to provide a 3D textured map and assist geological exploration and monitoring and geotechnical ground support design.

[0050] In one example, a first camera 12 and a second camera 12 may be configured to capture digital images of at least a portion of the mining excavation. First camera 12 may have a first field of view and second camera 12 may have a second field of view. The first field of view and the second field of view may have a common portion. At least one of the first field of view and the second field of view may be configured to include a portion of vehicle 18 such as boom(s) 28. Data processor(s) 20 may be in communication with first camera 12 and second camera 12. Data processor(s) 20 may be responsive to machine-readable instructions causing data processor(s) 20 to: (1 ) receive signals 26 representative of digital images captured by the first camera 12 and the second camera 12; and (2) generate signals 16a representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in the at least one first field of view and the second field of view.

[0051] FIG. 6 shows an example of a linear (e.g., 2D) digital image 38 taken using one of cameras 12 on vehicle 18. Linear image 38 shows a portion of a tunnel ahead of vehicle 18. Boom(s) 28 and/or other portion(s) of vehicle 18 may also be visible in linear image(s) 38.

[0052] FIG. 7 shows an example of 3D mesh(es) 40 (e.g., depth map) of the portion of tunnel shown in linear image(s) 38. 3D mesh(es) 40 may be generated based on 3D information extracted from at least two linear images 38 of the same portion of tunnel taken from two different cameras 12 at different locations (i.e., from different vantage points) according to the methods described below. 3D mesh(es) 40 may be in a format (e.g., DXF) suitable for importing into a computer aided design (CAD) system and suitable for geometric modeling of the portion(s) of tunnel.
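
A minimal C++ sketch (not from the patent) of exporting a triangle mesh as DXF 3DFACE entities, the kind of CAD-importable format mentioned above; the bare ENTITIES-only layout and the layer name are illustrative assumptions.

#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri  { int a, b, c; };   // indices into the vertex list

// Writes triangles as 3DFACE entities; a triangle repeats its third
// vertex as the fourth corner (group codes 10/20/30 .. 13/23/33).
bool writeDxf(const char* path, const std::vector<Vec3>& v,
              const std::vector<Tri>& tris) {
    FILE* f = std::fopen(path, "w");
    if (!f) return false;
    std::fprintf(f, "0\nSECTION\n2\nENTITIES\n");
    for (const Tri& t : tris) {
        const Vec3 p[3] = {v[t.a], v[t.b], v[t.c]};
        std::fprintf(f, "0\n3DFACE\n8\nMESH\n");   // layer "MESH" (assumed)
        for (int i = 0; i < 4; ++i) {
            const Vec3& q = p[i < 3 ? i : 2];      // 4th corner = 3rd
            std::fprintf(f, "1%d\n%f\n2%d\n%f\n3%d\n%f\n",
                         i, q.x, i, q.y, i, q.z);
        }
    }
    std::fprintf(f, "0\nENDSEC\n0\nEOF\n");
    std::fclose(f);
    return true;
}

int main() {
    std::vector<Vec3> v = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    std::vector<Tri> t = {{0, 1, 2}};
    return writeDxf("mesh.dxf", v, t) ? 0 : 1;
}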

[0053] FIG. 8 shows an example of a transformed image 42 (e.g., 3D textured map) generated based on 3D information extracted from at least two linear images 38 taken in stereo. Transformed image(s) 42 may comprise a re-positioning of individual pixels of linear image(s) 38 based on the 3D information extracted from the two linear images 38 taken in stereo. Specifically, pixels of linear image(s) 38 may have been repositioned at their respective 3D positions in a digital 3D environment to produce transformed image(s) 42, also known as texturized images. Accordingly, transformed image(s) 42 may comprise features shown in linear image(s) 38 positioned in a 3D environment that is representative of their actual positions inside the portion of tunnel. Such transformed image(s) 42 may be useful for geological exploration and monitoring and geotechnical ground support design. For example, based on the colors, shades, differences in brightness, and/or other features of transformed image(s) 42, the ore (e.g., geological structure) visible on the internal surface of the portion of tunnel mapped may also be visible at corresponding 3D locations in the transformed image(s) 42 and, similarly, rock structures existing on the internal surface of the portion of the tunnel mapped may also be visible at corresponding 3D locations in the transformed image(s) 42.
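
For illustration, a hypothetical C++ sketch of this repositioning: each pixel with a recovered depth is back-projected to its 3D position and keeps its colour, producing a texturized point set. The intrinsics (f, cx, cy) and the data layout are assumptions, not taken from the patent.

#include <cstdint>
#include <cstdio>
#include <vector>

struct ColoredPoint { double x, y, z; std::uint8_t r, g, b; };

// Back-projects each pixel with a known depth; pixels with no stereo
// match (depth 0) are skipped, leaving holes in the texturized set.
std::vector<ColoredPoint> texturize(const std::vector<std::uint8_t>& rgb,  // 3 bytes/pixel
                                    const std::vector<float>& depth,       // metres, 0 = unknown
                                    int w, int h,
                                    double f, double cx, double cy) {
    std::vector<ColoredPoint> pts;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float z = depth[y * w + x];
            if (z <= 0.0f) continue;
            size_t i = 3 * static_cast<size_t>(y * w + x);
            pts.push_back({(x - cx) * z / f,      // back-projection
                           (y - cy) * z / f,
                           z, rgb[i], rgb[i + 1], rgb[i + 2]});
        }
    return pts;
}

int main() {
    int w = 2, h = 1;
    std::vector<std::uint8_t> rgb = {255, 0, 0, 0, 255, 0};
    std::vector<float> depth = {5.0f, 0.0f};      // second pixel unmatched
    std::printf("%zu point(s)\n", texturize(rgb, depth, w, h, 500.0, 1.0, 0.5).size());
    return 0;
}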

[0054] FIG. 9 is a flowchart illustrating exemplary method(s) 800 that may be performed using apparatus 10 to generate a digital 3D representation of a portion of mining excavation. For example, method 800 may comprise: acquiring at least one digital linear image 38 of the portion of mining excavation to be mapped from at least two cameras 12 at different locations (i.e. taken in stereo from different vantage points) on vehicle 18 (see block 802); based on the two linear images 38, extracting 3D information of the portion of mining excavation using stereo matching (see block 804); based on the 3D information, generating 3D mesh(es) 40 representative of the geometry(ies) of the portion of mining excavation shown in the linear images 38 (see block 806); and repositioning the pixels of at least one of the linear image(s) 38 according to the 3D information and/or the 3D mesh(es) 40 (e.g. projecting the pixels of the linear image(s) 38 onto 3D mesh(es) 40) to produce transformed image(s) 42 (see block 808). The linear images 38 used may be of the same (e.g., low or high) resolution. Alternatively, linear images 38 used to generate 3D mesh(es) 40 may be of low-resolution and linear image(s) 38 used in block 808 may be different linear image(s) 38 and may be of higher resolution. In the interest of reducing computing time required from data processing device(s) 14, it may be desired in some applications to generate 3D mesh(es) 40 using image(s) 38 of lower resolution.
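
A C++ skeleton of the control flow of method 800 (hypothetical names, stub bodies); only the ordering of blocks 802-808 described above is meant to be faithful:

#include <cstdio>

// Stand-in types; real implementations would hold image/geometry data.
struct Image {};
struct PointCloud {};
struct Mesh3D {};
struct TexturedModel {};

// Stubs standing in for the processing stages of method 800.
Image captureFrom(int /*cameraId*/) { return {}; }                      // block 802
PointCloud stereoMatch(const Image&, const Image&) { return {}; }       // block 804
Mesh3D buildMesh(const PointCloud&) { return {}; }                      // block 806
TexturedModel projectPixels(const Image&, const Mesh3D&) { return {}; } // block 808

TexturedModel mapSection() {
    Image left = captureFrom(0);    // stereo pair from two vantage points
    Image right = captureFrom(1);
    PointCloud cloud = stereoMatch(left, right);  // extract 3D information
    Mesh3D mesh = buildMesh(cloud);               // geometry of the excavation
    // Optionally a different (e.g., higher-resolution) image is draped
    // over the mesh to produce the transformed image, per the description.
    Image hiRes = captureFrom(2);
    return projectPixels(hiRes, mesh);
}

int main() {
    mapSection();
    std::puts("section mapped (stubs)");
    return 0;
}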

[0055] Depending on the types of cameras 12 and geometry of the mining excavation, apparatus 10 may be used to map a portion of mining excavation of up to about 140 feet ahead of vehicle 18. Different portions of mining excavation may be mapped separately and sequentially as vehicle 18 advances through the mining excavation, and the separately acquired maps may later be assembled in software designed for viewing and editing the generated 3D sections to produce a map of the entire mining excavation or at least of a larger portion of mining excavation that is of interest. A suitable viewer may be used to allow a user to digitally navigate through the mining excavation and view the inside of the mining excavation in any direction using display 19 inside cab 30 or on another display (not shown) remote from vehicle 18. The viewer may also allow for the examination of transformed image(s) 42 against the walls of the portion of mining excavation that is mapped. The mapping of portions of mining excavation may be conducted in relation to one or more known reference points to permit the digital assembly of the portions of mining excavation that are mapped. For example, one or more pre-established survey points located in the tunnel and/or mine may serve as one or more common reference points for the purpose of assembling the 3D information, mesh(es) 40 and/or transformed image(s) 42 relative to each other in a digital 3D environment. The viewer may also be used to extrapolate the existence of some of the features of geology outlines or geotechnical rock structures and connect these features in adjacent 3D sections of the tunnel for inclusion in CAD mine models later.
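
A short C++ sketch of how a locally mapped section might be placed into a common mine frame (the rotation/translation pose is an assumption, not the patent's method): if the pose of the vehicle relative to a pre-established survey point is known as a rotation R and translation t, each local point is transformed into mine coordinates.

#include <cstdio>

struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

// p_mine = R * p_local + t
Vec3 toMineFrame(const Vec3& p, const Mat3& R, const Vec3& t) {
    return {R.m[0][0]*p.x + R.m[0][1]*p.y + R.m[0][2]*p.z + t.x,
            R.m[1][0]*p.x + R.m[1][1]*p.y + R.m[1][2]*p.z + t.y,
            R.m[2][0]*p.x + R.m[2][1]*p.y + R.m[2][2]*p.z + t.z};
}

int main() {
    Mat3 I = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};  // identity rotation
    Vec3 t = {100.0, 50.0, -20.0};                 // survey-point offset
    Vec3 p = toMineFrame({1.0, 2.0, 3.0}, I, t);
    std::printf("(%.1f, %.1f, %.1f)\n", p.x, p.y, p.z);
    return 0;
}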

[0056] The acquisition of at least two linear digital images 38 may be started by first making the proper settings to cameras 12 as explained above and calibrating cameras 12. For example, it may be necessary or desired that cameras 12 be properly aligned (e.g., at least two of cameras 12 in the same direction for the purpose of stereo imaging) prior to the image capture. Aperture settings for cameras 12 may, in some applications, be selected to maximize light but simultaneously allow for a low shutter speed to reduce the effects of vibration on the image(s) captured. The adjustment of camera settings may permit cameras 12 to pick up the reflective lighting of objects in the camera's view (face, walls, back, floor, equipment), and, using the known positions of cameras 12 and calibration settings, may be used to triangulate (e.g., stereo match) pixel locations for all surfaces as described further below. The wide view allows for shadows to be cast to the different cameras, which can then be used to determine structural and geological features. One or more of camera(s) 12 may be sensitive to visible light and/or to light in the infrared and/or gamma range depending on the application. Additional cameras 12 may be provided to provide additional coverage of the mining excavation (e.g., tunnel) to avoid or reduce the number and/or size of blind spots.

[0057] Once at least two linear images 38 of a portion of mining excavation of interest have been captured from different cameras 12 at different known locations on vehicle 18, the linear images 38 (i.e., stereo images) are processed to extract 3D information representative of the actual geometry of the mining excavation. For example, 3D information may be extracted by using a disparity map generated in a process of stereo matching of linear images 38. It may also be necessary or desirable that a deskewing operation be performed on the linear images 38 prior to stereo matching. The deskewing operation may use stored calibration settings/parameters of cameras 12.
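
For illustration, a minimal C++ sketch (not from the patent) of turning such a disparity map into per-pixel depth for a rectified pair, using the relation Z = f * B / d; the focal length and baseline values below are assumed to come from the stored calibration.

#include <cstdio>
#include <vector>

// Zero or negative disparities are treated as "no match" (depth 0).
std::vector<float> disparityToDepth(const std::vector<float>& disp,
                                    float f, float baseline) {
    std::vector<float> depth(disp.size(), 0.0f);
    for (size_t i = 0; i < disp.size(); ++i)
        if (disp[i] > 0.0f)
            depth[i] = f * baseline / disp[i];   // Z = f * B / d
    return depth;
}

int main() {
    std::vector<float> disp = {15.0f, 0.0f, 7.5f};
    std::vector<float> depth = disparityToDepth(disp, 500.0f, 0.5f);
    std::printf("%.2f %.2f %.2f (m)\n", depth[0], depth[1], depth[2]);
    return 0;
}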

[0058] Using the 3D information extracted based on linear images 38 and raw data from the linear images 38, one or more transformed images 42 may be generated by shifting the pixels of the linear images 38 to their corresponding digital locations in space (corresponding to their actual locations along the walls of the tunnel). As mentioned above, the transformed images 42 may be useful in geological exploration and monitoring.

[0059] Alternatively or in addition, the 3D information extracted from linear images 38 may be used to generate 3D mesh(es) 40 representative of the geometry of the portion of mining excavation. 3D mesh(es) 40 may, for example, be in a digital format such as Drawing Exchange Format (DXF) suitable for importing into a Computer Aided Design (CAD) system.

[0060] Apparatus 10 may also be configured to remove or omit portions of linear images 38 that are of no interest from being included into mesh 40 and/or transformed image(s) 42. For example, vehicle 18 may comprise booms 28 or other implements or equipment related to mining operations that may be captured in linear images 38 but that may not be part of the geometry of the mining excavation and/or that may be of no geological relevance. Accordingly, suitable filtering may be applied during processing in order to omit such features from 3D mesh(es) 40 and/or transformed image(s) 42. Consequently, mesh(es) 40 and transformed image(s) 42 may include one or more holes 44 (e.g., blind spots) where booms 28 and/or other features may have been omitted. The omission or filtering out of known and irrelevant features captured by cameras 12 may be done by simply omitting 3D information and/or pixels that are at distances or regions corresponding to those of the known and irrelevant features. For example, if the position of each boom 28 is known, the corresponding area(s) of linear images 38 may then be ignored for the purpose of generating mesh(es) 40 and/or transformed image(s) 42 and hence form holes 44 as shown in FIGS. 7 and 8.
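
A hypothetical C++ sketch of this filtering: 3D points falling inside a known exclusion region (such as a volume around a boom whose position is known) are dropped before meshing, leaving holes 44. The axis-aligned box test is an illustrative stand-in for whatever region model is actually used.

#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Box  { Vec3 lo, hi; };   // axis-aligned exclusion region

bool inside(const Vec3& p, const Box& b) {
    return p.x >= b.lo.x && p.x <= b.hi.x &&
           p.y >= b.lo.y && p.y <= b.hi.y &&
           p.z >= b.lo.z && p.z <= b.hi.z;
}

// Drops points inside any exclusion region (e.g., around a boom),
// so that the mesh/transformed image forms a hole there instead.
std::vector<Vec3> excludeRegions(const std::vector<Vec3>& pts,
                                 const std::vector<Box>& exclusions) {
    std::vector<Vec3> kept;
    for (const Vec3& p : pts) {
        bool drop = false;
        for (const Box& b : exclusions)
            if (inside(p, b)) { drop = true; break; }
        if (!drop) kept.push_back(p);
    }
    return kept;
}

int main() {
    std::vector<Vec3> pts = {{0, 0, 5}, {0, 0, 12}};
    std::vector<Box> excl = {{{-1, -1, 10}, {1, 1, 14}}};  // around a boom
    std::printf("%zu of %zu points kept\n",
                excludeRegions(pts, excl).size(), pts.size());
    return 0;
}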

[0061] The relevant information that may be missing due to holes 44 may subsequently be obtained by repeating the above process(es) using a different combination of cameras 12 positioned at different locations on vehicle 18 and having visibility of portion(s) of the mining excavation that were hidden by booms 28 in the first pair of linear images 38. Alternatively, the missing information may subsequently be obtained using the same two cameras 12 but after having moved (e.g., advanced) vehicle 18 and/or repositioned booms 28 in order to provide visibility of the missing portions of the mining excavation.

[0062] The exemplary portions of computer program code below represent detailed embodiments of various steps of processing that may be executed either by data processing device(s) 14 or by some other data processing means external to apparatus 10. The portions of computer program code are presented for illustrative purposes only and are written in a combination of C++ and Objective C. The portions of computer program code below may be suitable for execution on a Mac™ Mini. One of ordinary skill in the art will appreciate that other programming language(s) and/or other algorithms may also be suitable.

[0063] FIG. 10 shows a flow chart representative of a function performed based on the exemplary portion of code below. The exemplary portion of code below may be used for reading calibration data of cameras 12 and for populating variables that may be used later to extract 3D information from a disparity image (e.g., stereo matching) created from the at least two stereo linear images 38. For example, the portion of code below may associate the calibration data of cameras 12 with the respective linear images 38 that have been acquired. Accordingly, inputs to the function may include an image container (e.g., data structure) and camera calibration parameters. An output of the function may include a rectified image container.

- (TMBMPFile*)rectifiedForCalibration:(TMCalibrationParameters*)params
                             inWindow:(CGRect)window
                          outputWidth:(NSInteger)outputWidth {
    // scale the output height to preserve the aspect ratio of the view window
    const NSInteger outputHeight = (CGFloat)outputWidth *
        window.size.height / window.size.width;
    TMBMPFile *ret = [[TMBMPFile alloc] initWithWidth:outputWidth
                                               height:outputHeight];
    bmppixel *tdata = (bmppixel*)ret.data;
    // fill the output buffer with the deskewed (calibrated) image
    [self calibratedImageInto:tdata
                        width:outputWidth
                       height:outputHeight
               forCalibration:params
                       window:window];
    return [ret autorelease];
}

[0064] FIG. 11 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may represent a function used for deskewing the at least two linear images 38 acquired. For example, it may be desirable to deskew each linear image 38 acquired in order to counteract any "fish-eye" effect introduced by the optics (e.g., wide-angle lens(es)) of cameras 12 and to produce "flat" images that can be rectified and compared on a common plane. Deskewing may also compensate for a difference between the actual and optical centres of the acquired linear images 38. The function below may take in a calibrated image (all images captured from cameras 12 are calibrated) and then apply the calibration values to deskew the image in order to give a true image (fish-eye effect removed). Accordingly, inputs to the function may include a container to hold the image information, the width and height of a block to be calibrated, calibration parameters and a "world" container holding image information. An output of the function may include a filled image container (of a deskewed image).

- (void)calibratedImageInto:(bmppixel*)imageDestination
                      width:(NSInteger)blockWidth
                     height:(NSInteger)blockHeight
             forCalibration:(TMCalibrationParameters*)params
                     window:(CGRect)viewWindow {
    const bmppixel black = {255,20,20,20},
                   white = {255,235,235,235};
    const CGFloat fwidth = (CGFloat)self.width,
                  fheight = (CGFloat)self.height,
                  fblockWidth = (CGFloat)blockWidth,
                  fblockHeight = (CGFloat)blockHeight;
    for (int i=0; i<blockWidth; ++i)
        for (int j=0; j<blockHeight; ++j) {
            // the half is because we actually want to consider the centre
            // of the pixel, not the minx, miny corner
            const CGFloat tanAlpha = viewWindow.origin.x +
                    viewWindow.size.width * ((CGFloat)i+0.5)/fblockWidth,
                tanGamma = viewWindow.origin.y +
                    viewWindow.size.height * ((CGFloat)j+0.5)/fblockHeight;
            // map the undistorted viewing angle back to a source pixel
            const NSPoint pixelPoint = [params
                convertTanToPixel:NSMakePoint(tanAlpha, tanGamma)];
            if (pixelPoint.x >= 0 && pixelPoint.x <= fwidth &&
                pixelPoint.y >= 0 && pixelPoint.y <= fheight) {
                // user is still on the original frame
                imageDestination[i + blockWidth * j] =
                    [self generalPixelAtX:pixelPoint.x
                                        Y:pixelPoint.y];
            } else {
                // checkerboard to let the user know that they have
                // strayed off the original frame
                if (i % 20 >= 10)
                    if (j % 20 >= 10)
                        imageDestination[i + blockWidth * j] = black;
                    else
                        imageDestination[i + blockWidth * j] = white;
                else
                    if (j % 20 >= 10)
                        imageDestination[i + blockWidth * j] = white;
                    else
                        imageDestination[i + blockWidth * j] = black;
            }
        }
}

[0065] FIG. 12 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for applying a Gaussian blur to the deskewed images and saving the blurred images as rectified images in preparation for stereo matching. The application of the Gaussian blur may effectively remove jagged edges in the images and provide a smoothing effect. The application of the Gaussian blur may be desirable and beneficial to subsequent stereo matching. Accordingly, an input to the function may include a radius of the blur to apply and an output of the function may include image data that has been blurred (smoothed).

- (void)gaussianBlur:(const NSInteger)radius {
    const NSInteger kernelR = radius * 2,
                    kernelW = 2 * kernelR + 1;
    CGFloat fradius = (CGFloat)radius,
            *kernel = malloc(kernelW*kernelW*sizeof(CGFloat));
    double sum = 0;
    // build the Gaussian kernel
    for (int i=0; i<kernelW; ++i)
        for (int j=0; j<kernelW; ++j) {
            const int dx = i - kernelR,
                      dy = j - kernelR,
                      r2 = dx*dx + dy*dy;
            const double val = (CGFloat)r2/fradius/fradius,
                         eval = exp(-val);
            kernel[i+j*kernelW] = eval;
            sum += eval;
        }
    // normalize the kernel so that its weights sum to 1
    for (int i=0; i<kernelW; ++i)
        for (int j=0; j<kernelW; ++j)
            kernel[i+j*kernelW] /= sum;
    // convolve from a copy of the source so that already-blurred
    // pixels are not re-read during the pass
    unsigned char *source = malloc(width * height);
    memcpy(source, data, width * height);
    for (NSInteger i=kernelR; i<width-kernelR-1; ++i)
        for (NSInteger j=kernelR; j<height-kernelR-1; ++j) {
            CGFloat asum = 0;
            for (NSInteger ii=-kernelR; ii<=kernelR; ++ii)
                for (NSInteger jj=-kernelR; jj<=kernelR; ++jj)
                    asum += (CGFloat)source[ii+i + width * (jj + j)] *
                            kernel[ii+kernelR + (jj+kernelR) * kernelW];
            data[i + width * j] = (unsigned char)asum;
        }
    free(source);
    free(kernel);
}

[0066] The rectified images may then be subjected to a stereo matching process to create a disparity image.

[0067] FIG. 13 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for stereo matching of the rectified images. This process may take the two rectified (e.g., deskewed and blurred) images and compare areas of the two images to find commonalities between them. Once the commonalities are identified, the stereo matching process may create a new (i.e., disparity) image out of the two rectified images by shifting data based on the identified commonalities and merging them into the new disparity image.
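The specific stereo matching algorithm may vary. For purposes of illustration only, a naive block-matching approach is sketched below in C++; the window size, search range and sum-of-absolute-differences cost are illustrative assumptions and are not a description of the particular matcher used by apparatus 10.

#include <cstdint>
#include <cstdlib>
#include <limits>
#include <vector>

// Naive block-matching sketch: for each pixel of the left rectified
// image, slide a window along the same row of the right image and
// record the horizontal shift (disparity) with the lowest
// sum-of-absolute-differences. Illustrative only.
std::vector<uint8_t> blockMatch(const std::vector<uint8_t>& left,
                                const std::vector<uint8_t>& right,
                                int width, int height,
                                int window = 5, int maxDisparity = 64) {
    std::vector<uint8_t> disparity(width * height, 0);
    const int r = window / 2;
    for (int j = r; j < height - r; ++j)
        for (int i = r + maxDisparity; i < width - r; ++i) {
            long best = std::numeric_limits<long>::max();
            int bestD = 0;
            for (int d = 0; d <= maxDisparity; ++d) {
                long sad = 0; // sum of absolute differences over the window
                for (int jj = -r; jj <= r; ++jj)
                    for (int ii = -r; ii <= r; ++ii)
                        sad += std::abs(
                            (int)left[(i + ii) + width * (j + jj)] -
                            (int)right[(i + ii - d) + width * (j + jj)]);
                if (sad < best) { best = sad; bestD = d; }
            }
            disparity[i + width * j] = (uint8_t)bestD;
        }
    return disparity;
}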

[0068] In the event where more than one pair (e.g., three pairs) of cameras 12 is used, a separate disparity image may be created for each pair of cameras 12 and the separate disparity images may then be compared to each other. For example, the separate disparity images obtained based on the multiple pairs of cameras 12 may subsequently be merged into a single disparity image. The process of merging the separate disparity images may include looping through the disparity images and matching pixels between the disparity images on a one-by-one basis. For example, the pixel values from a first disparity image may be compared to those of a second disparity image and, in the event where there is a value in the first disparity image and not in the second disparity image, the value of the pixel in the first disparity image may be used, or vice versa. However, if a pixel value exists in both disparity images but there is a discrepancy between the two values, then the average value may be used in the merged disparity image. Accordingly, inputs to the function may include two or more disparity images and an output of the function may include a merged disparity image.

if ([stereoParameters.disparityUse isEqualToString:@"merged"]) // merges disparity files
{
    int leftVal = 0, rightVal = 0, index = 0;
    for (int i = 0; i < dispfileLC.width; ++i)
    {
        for (int j = 0; j < dispfileLC.height; ++j) {
            leftVal = (int)dispfileLC.data[i + dispfileLC.width * j];
            rightVal = (int)dispfileCR.data[i + dispfileCR.width * j];
            index = i + dispfileLC.width * j;
            // checking the pixel values for blank (black) and a value:
            // if one disparity image is blank and the other is not, the one that is not blank is used;
            // otherwise, if both are blank, the left-most image value is used (blank);
            // and finally, if both contain a value, the average value of the pixels is used
            if (leftVal == 0 && rightVal != 0)
                pgmFile.data[index] = dispfileCR.data[index];
            else if (leftVal != 0 && rightVal == 0)
                pgmFile.data[index] = dispfileLC.data[index];
            else if (leftVal == 0 && rightVal == 0)
                pgmFile.data[index] = dispfileLC.data[index];
            else
                pgmFile.data[index] = (unsigned char)((int)((leftVal + rightVal) / 2));
        }
    }
}

[0069] FIG. 14 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for extracting 3D information from the disparity image(s) created above. This may be done by looping through the disparity image(s) and using the calibration information to calculate the 3D location of each pixel. Once the 3D location of each pixel has been determined, mesh(es) 40 may be created according to a desired tolerance. For example, mesh(es) 40 may comprise a triangular mesh produced based on the newly calculated 3D points of each pixel. A suitable tolerance may be specified to provide a relatively smooth representation of the internal surface of the mining excavation. For example, a grid interval (e.g., longitudinal slice of tunnel) of around 50 inches and 8 segments (e.g., triangular elements) per interval may be suitable, but a finer or coarser tolerance may be used as needed depending on the application and on the technical capabilities of the equipment used (e.g., resolution of cameras 12). Based on the 3D information of each pixel of the disparity image, transformed image(s) 42 (e.g., 3D textured map) may then be produced by repositioning each pixel of linear image(s) 38 in a digital 3D environment at its correct position (i.e., digitally repositioning each pixel at its correct position against the inside wall of the tunnel).
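For purposes of illustration only, one simple way of imposing such a tolerance is to decimate the disparity grid to a coarser interval before meshing, as sketched below; the nearest-neighbour sampling and the names used are illustrative assumptions, and in practice the step sizes would be derived from the desired physical interval and the camera resolution.

#include <cstdint>
#include <vector>

// Simplified sketch of coarsening a disparity grid before meshing so
// that the resulting mesh follows a chosen tolerance; illustrative only.
std::vector<uint8_t> decimate(const std::vector<uint8_t>& disparity,
                              int width, int height,
                              int stepX, int stepY,
                              int& outWidth, int& outHeight) {
    outWidth = width / stepX;
    outHeight = height / stepY;
    std::vector<uint8_t> coarse(outWidth * outHeight, 0);
    for (int j = 0; j < outHeight; ++j)
        for (int i = 0; i < outWidth; ++i)
            coarse[i + outWidth * j] =
                disparity[i * stepX + width * (j * stepY)]; // nearest sample
    return coarse;
}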

[0070] Accordingly, inputs to the function below may include a disparity image (PGM file), a digital image (BMP file) (e.g., low-resolution or high-resolution, color) of the mining excavation and calibration results for the disparity image. An output of the function may include a 3D mesh 40 with a colored texture mapped onto 3D mesh 40 (PLY file).

- (id)initWithPGMFile:(TMPGMFile*)pgmFile
          calibration:(TMRectifiedCalibration*)calib
          meshColours:(TMBMPFile*)colourfile {
    self = [self init];
    if (self) {
        const NSInteger width = pgmFile.width,
                        height = pgmFile.height;
        const unsigned char *data = pgmFile.data;
        bmppixel *colourData = colourfile.data;
        if (colourfile)
            colouredVertices = YES;
        else
            colouredVertices = NO;
        // construct array of vertices
        for (NSInteger j=0; j<height; ++j) {
            for (NSInteger i=0; i<width; ++i) {
                NSInteger disparity = data[i + width * j];
                if (disparity) {
                    // convert the pixel location and its disparity into a 3D point;
                    // 28.75 is a constant relating disparity to depth for this camera setup
                    const CGFloat tanAlpha = calib.viewport.size.width
                            * (CGFloat)i / (CGFloat)width + calib.viewport.origin.x,
                        tanGamma = calib.viewport.size.height
                            * (CGFloat)j / (CGFloat)height + calib.viewport.origin.y,
                        dtanAlpha = calib.viewport.size.width
                            * (CGFloat)disparity / (CGFloat)width,
                        z = 28.75 / dtanAlpha,
                        x = tanAlpha * z,
                        y = tanGamma * z;
                    TMPLYVertex *vert = nil;
                    if (colouredVertices) {
                        const bmppixel pix = colourData[i + width*j];
                        vert = [[TMPLYVertex alloc] initWithX:x Y:y Z:z
                                                            R:pix.r G:pix.g B:pix.b];
                    } else {
                        vert = [[TMPLYVertex alloc] initWithX:x Y:y Z:z];
                    }
                    [vertices addObject:vert];
                    [vert release];
                } else {
                    // no disparity data for this pixel; keep a placeholder
                    [vertices addObject:[NSNull null]];
                }
            }
        }
        // construct triangles
        for (NSInteger i=0; i<width-1; ++i)
            for (NSInteger j=0; j<height-1; ++j) {
                NSInteger bl = i + j*width,
                          br = i+1 + j*width,
                          tr = i+1 + (j+1)*width,
                          tl = i + (j+1)*width;
                TMPLYVertex *vbl = [vertices objectAtIndex:bl],
                            *vbr = [vertices objectAtIndex:br],
                            *vtr = [vertices objectAtIndex:tr],
                            *vtl = [vertices objectAtIndex:tl];
                // clockwise oriented triangles; skip any triangle with a missing corner
                if (![vbl isEqual:[NSNull null]]
                    && ![vbr isEqual:[NSNull null]]
                    && ![vtl isEqual:[NSNull null]]) {
                    TMPLYTriangle *tri = [[TMPLYTriangle alloc]
                        initWithVertexA:br B:bl C:tl];
                    [triangles addObject:tri];
                    [tri release];
                }
                if (![vtr isEqual:[NSNull null]]
                    && ![vbr isEqual:[NSNull null]]
                    && ![vtl isEqual:[NSNull null]]) {
                    TMPLYTriangle *tri = [[TMPLYTriangle alloc]
                        initWithVertexA:tl B:tr C:br];
                    [triangles addObject:tri];
                    [tri release];
                }
            }
    }
    return self;
}
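For purposes of illustration only, the general shape of the resulting ASCII PLY file may be sketched as follows; the vertex and triangle structures below are illustrative stand-ins and not the TMPLYVertex and TMPLYTriangle classes used above.

#include <cstdio>
#include <vector>

// Simplified sketch of writing a coloured triangle mesh as an ASCII
// PLY file; illustrative only.
struct Vertex { float x, y, z; unsigned char r, g, b; };
struct Triangle { int a, b, c; }; // indices into the vertex list

void writePly(const char* path,
              const std::vector<Vertex>& verts,
              const std::vector<Triangle>& tris) {
    FILE* f = fopen(path, "w");
    if (!f) return;
    // header declares the vertex and face layout
    fprintf(f, "ply\nformat ascii 1.0\n");
    fprintf(f, "element vertex %zu\n", verts.size());
    fprintf(f, "property float x\nproperty float y\nproperty float z\n");
    fprintf(f, "property uchar red\nproperty uchar green\nproperty uchar blue\n");
    fprintf(f, "element face %zu\n", tris.size());
    fprintf(f, "property list uchar int vertex_indices\n");
    fprintf(f, "end_header\n");
    for (const Vertex& v : verts)
        fprintf(f, "%f %f %f %u %u %u\n", v.x, v.y, v.z, v.r, v.g, v.b);
    for (const Triangle& t : tris)
        fprintf(f, "3 %d %d %d\n", t.a, t.b, t.c);
    fclose(f);
}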

[0071] FIG. 15 shows a flow chart illustrating method 1500 for mapping a mining excavation and also controlling at least one operation of vehicle 18. Method 1500 may be conducted by apparatus 10 and may, for example, include:

receiving signals representative of at least two digital images of at least a common portion of the mining excavation (see block 1502);

generating signals 16a (see FIG. 3) representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images (see block 1504); and

generating signals 16b (see FIG. 3) useful in the at least one operation of vehicle 18 based on signals representative of at least one of the digital images (see block 1506).

[0072] As explained above, the generation of signals representative of the 3D digital representation of the portion of the mining excavation (block 1504) and the generation of signals useful in the at least one operation of vehicle 18 (block 1506) may be conducted individually. At least one of the digital images may include a portion of vehicle 18, such as boom(s) 28, and the digital 3D representation of the portion of the mining excavation may exclude that portion of vehicle 18. Also, the signals 16b useful in the at least one operation of vehicle 18 may be useful in controlling boom(s) 28. In method 1500, the digital 3D representation of the portion of the mining excavation may comprise 3D mesh(es) 40. Alternatively or in addition, the digital 3D representation of the portion of the mining excavation may comprise at least one of the digital images transformed according to 3D mesh(es) 40 to form transformed image 42.

[0073] As mentioned above, the at least two digital images may be low-resolution digital images and the digital 3D representation of the portion of the mining excavation may comprise 3D mesh(es) 40 based on the low-resolution digital images. The image used to produce transformed image 42 may be one of the two low-resolution images or may be a separate high-resolution image obtained from one of cameras 12. In any case, the digital 3D representation may be generated based on stereo matching of at least two digital images as described above.
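For purposes of illustration only, where the colour image has a higher resolution than the disparity grid, the correspondence between a mesh vertex and its colour sample may be as simple as a scale factor, as sketched below; the nearest-neighbour sampling and the names used are illustrative assumptions.

#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Simplified sketch of sampling a high-resolution colour image at the
// location of a low-resolution disparity pixel, so that a mesh built
// from low-resolution stereo pairs can carry high-resolution colour.
Rgb sampleHighRes(const std::vector<Rgb>& highRes,
                  int highWidth, int highHeight,
                  int i, int j, int lowWidth, int lowHeight) {
    const int hi = i * highWidth / lowWidth;    // scale column index
    const int hj = j * highHeight / lowHeight;  // scale row index
    return highRes[hi + highWidth * hj];
}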

[0074] FIG. 16 shows a flow chart illustrating method 1600 for generating a digital 3D representation of a mining excavation based on digital images and excluding a portion of vehicle 18 captured in the digital images. Method 1600 may, for example, include:

receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of vehicle 18 (see block 1602); and

generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation excluding the portion of vehicle 18 (see block 1604).

[0075] FIG. 17 shows a flow chart illustrating method 1700 for generating a digital 3D representation of a mining excavation based on low-resolution and high- resolution digital images. Method 1700 may, for example, include:

receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation (see block 1702);

generating signals representative of digital 3D mesh(es) 40 of at least a portion of the mining excavation based on the low-resolution digital images (see block 1704);

receiving signals representative of a high-resolution digital image of the common portion of the mining excavation (see block 1706); and

transforming the high-resolution digital image according to the 3D mesh (see block 1708).

[0076] Methods 1500, 1600 and 1700 may be performed by apparatus 10 in accordance with and in combination with the various aspects of the present disclosure.

[0077] The above apparatus and methods may be used in underground mining or other applications. For example, the disclosed apparatus and methods may be used for: orientating and setting up production drills to help improve accuracy and consistency in achieving the planned drill layout; tracking haulage trucks and/or loaders (also known as scoops and scoop trams), and/or their components (e.g., dump boxes, load and dump buckets), during transit and/or operation; tracking the position and movement of components such as chutes, skips and/or load and dump pockets in the shaft process; tracking robotic machinery to provide positional data useable by the machinery to move itself to specific locations and/or orientations; and tracking the motion of the booms of a jumbo drilling unit for purposes other than drilling accuracy, for example to help ensure that the booms do not conflict with one another or to help optimize the positioning of the booms for the work being completed (e.g., bolting, shotcreting, screening, or material handling).

[0078] The above description is meant to be exemplary only, and one skilled in the relevant arts will recognize that changes may be made to the embodiments described without departing from the scope of the invention disclosed. For example, the blocks and/or operations in the flowcharts and drawings described herein are for purposes of example only. There may be many variations to these blocks and/or operations without departing from the teachings of the present disclosure. For instance, the blocks may be performed in a differing order, or blocks may be added, deleted, or modified. The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. Also, one skilled in the relevant arts will appreciate that while the apparatus and devices disclosed and shown herein may comprise a specific number of elements/components, the apparatus and devices could be modified to include additional or fewer of such elements/components. For example, while any of the elements/components disclosed may be referenced as being singular, it is understood that the embodiments disclosed herein could be modified to include a plurality of such elements/components. The present disclosure is also intended to cover and embrace all suitable changes in technology. Modifications which fall within the scope of the present invention will be apparent to those skilled in the art, in light of a review of this disclosure, and such modifications are intended to fall within the appended claims.