Title:
AERIAL VEHICLES, METHODS OF IMAGING A TUNNEL AND METHODS OF IMAGING A SHAFT
Document Type and Number:
WIPO Patent Application WO/2019/190398
Kind Code:
A1
Abstract:
According to various embodiments, there is provided an aerial vehicle. The aerial vehicle includes: an airframe comprising a central member defining a longitudinal axis; a gimbal coupled to the central member; a camera mounted on the gimbal to face a direction at least substantially orthogonal to the longitudinal axis; wherein the gimbal is rotatable about the longitudinal axis to spin the camera around the longitudinal axis; and a propulsion means configured to propel the aerial vehicle, wherein the propulsion means is offset from the camera along the longitudinal axis.

Inventors:
FOONG SHAOHUI (SG)
KYI HLA WIN (SG)
TAN CHEE HOW (SG)
BIN SHAIFUL DANIAL SUFIYAN (SG)
SOE WIN LUKE THURA (SG)
LIM HOCK BENG (SG)
YEUNG SAI-KIT (SG)
Application Number:
PCT/SG2019/050167
Publication Date:
October 03, 2019
Filing Date:
March 26, 2019
Assignee:
UNIV SINGAPORE TECHNOLOGY & DESIGN (SG)
International Classes:
B64D47/08; B64C39/02; G05D1/10
Domestic Patent References:
WO2017132990A1 2017-08-10
Foreign References:
KR101707865B1 2017-02-27
CN106442570A 2017-02-22
CN106441286A 2017-02-22
KR101752998B1 2017-07-19
Attorney, Agent or Firm:
VIERING, JENTSCHURA & PARTNER LLP (SG)
Claims:
CLAIMS

1. An aerial vehicle comprising:

an airframe comprising a central member defining a longitudinal axis;

a gimbal coupled to the central member;

a camera mounted on the gimbal to face a direction at least substantially orthogonal to the longitudinal axis;

wherein the gimbal is rotatable about the longitudinal axis to spin the camera around the longitudinal axis; and

a propulsion means configured to propel the aerial vehicle, wherein the propulsion means is offset from the camera along the longitudinal axis.

2. The aerial vehicle of claim 1, wherein the airframe further comprises a plurality of arms coupled to the central member, wherein each arm of the plurality of arms is offset from the camera along the longitudinal axis.

3. The aerial vehicle of claim 2, wherein each arm has a first end and a second end opposing the first end, wherein the first end is coupled to the central member and wherein the second end is separated from the central member.

4. The aerial vehicle of claim 3, wherein a distance between the first end and the camera along the longitudinal axis is smaller than a distance between the second end and the camera along the longitudinal axis.

5. The aerial vehicle of any one of claims 2 to 4, wherein the propulsion means comprises a respective rotor coupled to each arm, wherein each rotor is offset from the camera along the longitudinal axis.

6. The aerial vehicle of claim 5, wherein each rotor comprises a propeller arranged to spin about a propeller axis, wherein the propeller axis is at least substantially orthogonal to the longitudinal axis.

7. The aerial vehicle of any one of claims 2 to 6, wherein the airframe comprises four arms arranged symmetrically about the longitudinal axis, wherein the propulsion means comprises four rotors.

8. The aerial vehicle of any one of claims 2 to 7, wherein each arm is a straight elongated structure.

9. The aerial vehicle of any one of claims 1 to 8, wherein the central member is elongated along the longitudinal axis.

10. The aerial vehicle of any one of claims 1 to 9, wherein the gimbal is rotatable through 360° about the longitudinal axis.

11. The aerial vehicle of any one of claims 1 to 10, wherein the camera is fixed with respect to the gimbal.

12. The aerial vehicle of any one of claims 1 to 11, further comprising a processor configured to control a rotation speed of the gimbal based on a velocity of the aerial vehicle.

13. The aerial vehicle of any one of claims 1 to 12, further comprising:

a range sensor coupled to the camera, wherein the range sensor is configured to measure a distance between the camera and a nearest surface from the camera.

14. The aerial vehicle of any one of claims 1 to 13, wherein the central member comprises a cavity, wherein the gimbal is housed inside the cavity.

15. The aerial vehicle of claim 14, wherein the central member comprises a transparent window at least substantially longitudinally aligned with the camera.

16. The aerial vehicle of any one of claims 1 to 15, wherein the gimbal is configured to rotate continuously as the aerial vehicle is moving at least substantially along the longitudinal axis, such that the camera captures a spiral panoramic image.

17. The aerial vehicle of any one of claims 1 to 16, further comprising: a plurality of range sensors, each range sensor mounted on a respective position on the airframe and configured to measure a distance between the respective position and a nearest surface from the respective position;

a memory storing geometrical information about an enclosed space;

a processor configured to determine a planar position of the aerial vehicle in the enclosed space based on measurements from the plurality of range sensors and further based on the geometrical information.

18. A method of imaging a tunnel, the method comprising:

flying an aerial vehicle along a lengthwise direction of the tunnel;

wherein the aerial vehicle comprises an airframe defining a longitudinal axis, a gimbal coupled to the airframe and a camera mounted on the gimbal to face a direction at least substantially orthogonal to the longitudinal axis;

wherein the longitudinal axis is at least substantially parallel to the lengthwise direction when the aerial vehicle is in flight;

rotating the gimbal while the aerial vehicle is in flight such that the camera revolves around the longitudinal axis to capture a spiral panoramic image; and

reconstructing a virtual three-dimensional model of the tunnel based on the spiral panoramic image.

19. The method of claim 18, further comprising:

obtaining measurements from a range sensor coupled to the camera;

reconstructing the virtual three-dimensional model further based on the measurements.

20. A method of imaging a shaft, the method comprising:

flying an aerial vehicle along a depthwise direction of the shaft;

wherein the aerial vehicle comprises an airframe defining a longitudinal axis, and a camera mounted on the airframe to face a direction at least substantially orthogonal to the longitudinal axis;

wherein the longitudinal axis is at least substantially parallel to the depthwise direction when the aerial vehicle is in flight;

rotating the aerial vehicle about the longitudinal axis while the aerial vehicle is in flight such that the camera revolves around the longitudinal axis to capture a spiral panoramic image; and reconstructing a virtual three-dimensional model of the shaft based on the spiral panoramic image.

21. A method of locating an aerial vehicle in an enclosed space, the method comprising: storing geometrical information about the enclosed space in a memory;

measuring a plurality of distances using a plurality of range sensors, each range sensor mounted on a respective position on the airframe;

wherein each measured distance is a distance between the respective position and a nearest surface of the enclosed space from the respective position; and

determining a planar position of the aerial vehicle based on the measured distances and the geometrical information.

Description:
AERIAL VEHICLES, METHODS OF IMAGING A TUNNEL AND

METHODS OF IMAGING A SHAFT

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of Singapore Patent Application number 10201802492Y filed 26 March 2018, the entire contents of which are incorporated herein by reference for all purposes.

TECHNICAL FIELD

[0002] Various embodiments relate to aerial vehicles, methods of imaging a tunnel and methods of imaging a shaft.

BACKGROUND

[0003] Some enclosed infrastructures, such as train tunnels, sewage tunnels and other underground networks, require an infrastructure surveillance system that generates minimum disturbance to its surroundings while capturing as much data as possible. Many infrastructure surveillance systems today are ground-based, making them vulnerable to debris and liquid on the floor of the infrastructure as they operate. Conventional robots may not be ideal as infrastructure surveillance systems. For example, unmanned ground vehicles (UGV) are unable to traverse sewage tunnels filled with silts, sewerage or debris. Unmanned surface vessels (USV) can only work in tunnels that are partially filled with a liquid of highly diluted consistency. Pipeline and tunnel robots can only work in small to medium diameter pipes and may require complex hoisting and winching mechanisms for deployment and retrieval.

SUMMARY

[0004] According to various embodiments, there may be provided an aerial vehicle including: an airframe including a central member defining a longitudinal axis; a gimbal coupled to the central member; a camera mounted on the gimbal to face a direction at least substantially orthogonal to the longitudinal axis; wherein the gimbal is rotatable about the longitudinal axis to spin the camera around the longitudinal axis; and a propulsion means configured to propel the aerial vehicle, wherein the propulsion means is offset from the camera along the longitudinal axis.

[0005] According to various embodiments, there may be provided a method of imaging a shaft, the method including: flying an aerial vehicle along a depthwise direction of the shaft; wherein the aerial vehicle includes an airframe defining a longitudinal axis, and a camera mounted on the airframe to face a direction at least substantially orthogonal to the longitudinal axis; wherein the longitudinal axis is at least substantially parallel to the depthwise direction when the aerial vehicle is in flight; rotating the aerial vehicle about the longitudinal axis while the aerial vehicle is in flight such that the camera revolves around the longitudinal axis to capture a spiral panoramic image; and reconstructing a virtual three-dimensional model of the shaft based on the spiral panoramic image.

[0006] According to various embodiments, there may be provided a method of locating an aerial vehicle in an enclosed space, the method including: storing geometrical information about the enclosed space in a memory; measuring a plurality of distances using a plurality of range sensors, each range sensor mounted on a respective position on the airframe; wherein each measured distance is a distance between the respective position and a nearest surface of the enclosed space from the respective position; and determining a planar position of the aerial vehicle based on the measured distances and the geometrical information.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments are described with reference to the following drawings, in which:

[0008] FIG. 1 shows a simplified diagram of an aerial vehicle according to various embodiments.

[0009] FIG. 2 shows a simplified diagram of an aerial vehicle according to various embodiments.

[0010] FIGS. 3A and 3B show simplified illustrations of a method of imaging a tunnel.

[0011] FIGS. 4A and 4B show simplified illustrations of a method of imaging a shaft.

[0012] FIG. 5 illustrates an implementation of the aerial vehicle according to various embodiments.

[0013] FIG. 6 shows a diagram of a revolving camera system according to various embodiments.

[0014] FIG. 7 shows a diagram that illustrates a method of mapping a tunnel according to various embodiments.

[0015] FIG. 8 shows a planar cross-section with two parallel lines which is the geometry of a tunnel reduced to a 2D planar form.

[0016] FIG. 9 shows a circular cross-section which is the geometry of a vertical shaft reduced to a 2D form.

[0017] FIG. 10 shows a visual illustration of the optimization problem formulated in 2D, where the sensor placements and the spatial constraints are illustrated in a 2D plane.

[0018] FIGS. 11A-11D show possible, but not limiting, configurations of the sparse array sensors according to various embodiments.

[0019] FIG. 12 shows graphs that illustrate the results of the numerical simulation.

[0020] FIG. 13 shows a photo of a prototype of the aerial vehicle used in the experiments.

[0021] FIG. 14 shows a graph that plots the result of a first experiment.

[0022] FIG. 15 shows a graph that plots the result of a second experiment.

[0023] FIG. 16 shows the results of a third experiment.

[0024] FIG. 17 shows the results of a fourth experiment.

[0025] FIG. 18 shows a graph that plots the absolute Euclidean error r throughout the experimental flight.

DESCRIPTION

[0026] Embodiments described below in context of the aerial vehicles are analogously valid for the respective methods, and vice versa. Furthermore, it will be understood that the embodiments described below may be combined, for example, a part of one embodiment may be combined with a part of another embodiment.

[0027] It will be understood that any property described herein for a specific aerial vehicle may also hold for any aerial vehicle described herein. It will be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it will be understood that for any aerial vehicle or method described herein, not necessarily all the components or steps described must be enclosed in the device or method, but only some (but not all) components or steps may be enclosed.

[0028] It should be understood that the terms "on", "over", "top", "bottom", "down", "side", "back", "left", "right", "front", "lateral", "up", etc., when used in the following description, are used for convenience and to aid understanding of relative positions or directions, and are not intended to limit the orientation of any device, or structure or any part of any device or structure. In addition, the singular terms "a", "an", and "the" include plural references unless context clearly indicates otherwise. Similarly, the word "or" is intended to include "and" unless the context clearly indicates otherwise.

[0029] The term "coupled" (or "connected") herein may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling and indirect coupling (in other words: coupling without direct contact) may be provided.

[0030] In order that the invention may be readily understood and put into practical effect, various embodiments will now be described by way of examples and not limitations, and with reference to the figures.

[0031] According to various embodiments, an aerial vehicle may be provided for inspecting covered infrastructures. The aerial vehicle may be equipped with a revolving camera system. The revolving camera system may capture panoramic images of the immediate surroundings as the aerial vehicle moves forward. Unlike a conventional 360° camera with a wide-angle lens, the revolving camera system may capture images of entire lateral surfaces with high fidelity and minimal optical distortion, while using only a single camera. The images captured by the revolving camera system may be stitched together to virtually reconstruct the infrastructure for detection of damages such as cracks and deterioration. The aerial vehicle may be an unmanned aerial vehicle (UAV) that may autonomously carry out visual inspection of the covered infrastructures. The aerial vehicle may have a minimal set of simple navigation sensors, so that the weight, battery and computational power required by the navigation sensors are low. The navigation sensors may include a sparse array of time-of-flight (ToF) rangefinders that are arranged in an optimal fashion to localize in different environments, such as horizontal tunnels and vertical shafts. The aerial vehicle may be powered by lithium-ion batteries that have higher energy density than traditional lithium-polymer batteries, to enhance the endurance of the aerial vehicle.

[0032] One possible application of the aerial vehicles according to various embodiments may be to inspect underground sewerage infrastructure, like the Deep Tunnel Sewerage System (DTSS) in Singapore, which is large and extensive. The tunnels in the DTSS are protected with specially-designed Corrosion Protection Lining and periodic inspections are required. The environment inside these tunnels, which extend more than 30 m underground, is hazardous, and human access is difficult and dangerous. The aerial vehicle may be able to access the tunnels of about 3 to 6 m in diameter via the vertical direct access shafts of about 3 to 5 m in diameter, without requiring winch and hoisting systems like conventional pipeline robots. Being agile and versatile enough to traverse 3D space, the aerial vehicle may be able to enter the tunnels even in the presence of sewerage, silts, debris and unknown obstacles in a fully operational sewerage system.

[0033] FIG. 1 shows a simplified diagram of an aerial vehicle 100 according to various embodiments. The aerial vehicle 100 may be a rotary aerial vehicle, such as a multicopter, for example, a quadcopter. The airframe of the aerial vehicle 100 may include a central member 104 and a plurality of arms 106 that extend out of the central member 104. To facilitate description of the orientations and positions of the aerial vehicle components, the central frame 104 is referred to as defining a body frame. The body frame has a longitudinal (X) axis 120, a lateral (Y) axis 130 and a height (Z) axis 140. The X axis 120, the Y axis 130 and the height axis 140 extend perpendicular to one another and may intersect at a centre of gravity of the aerial vehicle 100. The height axis 140 is not shown in FIG. 1 as it is perpendicular to the plane of the drawing sheet.

[0034] The aerial vehicle may include rotors 108 as the propulsion means. At least one rotor may be coupled to each respective arm 106. The aerial vehicle 100 may include a camera system 102 coupled to the central member 104. The camera system 102 may include a camera 110. The camera 110 may be fixed in position and orientation with respect to the camera system 102. The camera system 102 may be configured to rotate 150 about the X axis 120, for example using a rotatable gimbal of the camera system 102. As the camera system 102 rotates, the camera 110 may revolve around the X axis 120. The camera system 102 may be an internal camera system. The internal camera system may be housed within a cavity in the central frame 104. The central frame 104 may include an at least substantially transparent window aligned, for example longitudinally aligned, with the camera 110 so that the camera 110 may receive light from outside of the central frame 104. Alternatively, the camera system 102 may be an external camera system, mounted outside of the central frame. The external camera system may rotate around the central frame 104. For example, the camera system 102 may include a ring-shaped gimbal that receives the central frame in a centre of the gimbal, for example, in a concentric manner.

[0035] FIG. 2 shows a simplified diagram of an aerial vehicle 200 according to various embodiments. The aerial vehicle 200 may be different from the aerial vehicle 100, in that it may be a fixed wing aerial vehicle. As such, the airframe of the aerial vehicle 200 may include a central member 104 that is a fuselage, and a pair of wings 206. Similar to the description with respect to the aerial vehicle 100, the central frame 104 is referred to as defining a body frame, and the body frame has a longitudinal (X) axis 120, a lateral (Y) axis 130 and a height (Z) axis 140. The pair of wings 206 may be coupled at wing roots, to opposing lateral sides of the fuselage. In other words, a straight line joining the wing tips of the pair of wings 206 may be at least substantially parallel to the Y axis 130. The fuselage may include a nose 204 and a tail 208. A straight line joining the nose 204 and the tail 208 may be at least substantially parallel to the X axis 120. Similar to the aerial vehicle 100, the aerial vehicle 200 may also include the camera system 102 which may be configured to rotate 150 about the X axis 120, so as to spin the camera 110 about the X axis 120.

[0036] According to various embodiments, the aerial vehicles 100 or 200 may be unmanned aerial vehicles (UAV), which may also be referred herein as aerial robots. The aerial vehicles 100 or 200 may include a datalink configured to receive and transmit data between the aerial vehicle and a ground control station. The aerial vehicles may be designed and optimized to accommodate the revolving camera system and achieve high endurance. The aerial vehicles may feature a fine-tuned propulsion system which allows for maximum cooling of its sensors and payloads. The aerial vehicles may be manually piloted semi-autonomously using long range radio and video transmission. In cases where manual control is not necessary or not possible, the aerial vehicles may fly autonomously using pre-planned flight paths. The aerial vehicles may include a memory storing geometrical information about an enclosed space. Alternatively, the geometrical information about the enclosed space may be stored external to the aerial vehicle. A processor, either onboard the aerial vehicle or external to the aerial vehicle, may determine a planar position of the aerial vehicle in the enclosed space based on measurements from range sensors on the aerial vehicle and further based on the geometrical information.

[0037] FIGS. 3A and 3B show simplified illustrations of a method of imaging a tunnel. The method may include sending an aerial vehicle 300 into the tunnel 330. For simplicity, the aerial vehicle 300 is not fully illustrated and instead is only represented by its central frame 104 in these figures. The aerial vehicle 300 may be any one of the aerial vehicle 100 or 200.

[0038] FIG. 3A illustrates the method with a side perspective view 300A of a tunnel 330. For clarity in the figure, the Y axis 130 and the Z axis 140 are shown outside of the aerial vehicle 300, although it should be understood that these axes are defined with respect to the central frame 104. The aerial vehicle 300 may travel along a path 340. The path 340 may be at least substantially parallel to a lengthwise direction of the tunnel 330, such that the longitudinal axis 120 may be at least substantially parallel to the lengthwise direction of the tunnel 330 when the aerial vehicle 300 is in flight. If the tunnel 330 has a symmetrical geometry, the aerial vehicle 300 may ideally fly along a centre of the tunnel 330. As the aerial vehicle 300 travels along the path 340, the gimbal of the camera system 102 may rotate about the X axis, causing the camera 110 to revolve around the X axis. The camera 110 may capture images at a regular rate, as the gimbal rotates at least substantially continuously and as the aerial vehicle 300 travels. The rotation movement by the gimbal and the translational movement by the aerial vehicle 300 may cause the camera 110 to capture a spiral panoramic image of the interior of the tunnel 330. The camera system 102 may include a ranging sensor that may measure a distance between the camera 110 and a nearest surface. The ranging sensor may emit an electromagnetic (EM) wave and receive the EM wave that bounces off the nearest surface of the tunnel 330. The ranging sensor may determine the distance between the camera 110 and the nearest surface of the tunnel 330 based on the time taken between emitting the EM wave and receiving the reflected EM wave. The aerial vehicle 300 may transfer the spiral panoramic image and the output of the ranging sensor to a processor. The processor, which may be onboard the aerial vehicle 300, or external to the aerial vehicle 300, may convert the spiral panoramic image into a three-dimensional visual model, with the aid of the ranging sensor measurements.
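
The time-of-flight principle described above reduces to a one-line relation between the round-trip time of the EM wave and the range. A minimal illustrative sketch, not taken from the patent (the function name and example values are assumptions):

```python
C = 299_792_458.0  # speed of light (m/s)

def tof_to_range(round_trip_s: float) -> float:
    """One-way distance to the nearest surface from the round-trip time of
    the emitted EM wave: the wave travels out and back, hence the factor 2."""
    return C * round_trip_s / 2.0

# Example: a 20 ns round trip corresponds to roughly 3 m of range.
print(tof_to_range(20e-9))  # ~3.0
```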

[0039] FIG. 3B illustrates the method of FIG. 3A, with a widthwise cross-sectional view 300B of the tunnel 330. While the figures depict the tunnel 330 as being cuboid in shape, it should be understood that the method is applicable for tunnels of other shapes and geometry.

[0040] FIGS. 4A and 4B show simplified illustrations of a method of imaging a shaft. The method may include sending an aerial vehicle 400 into the shaft 440. For simplicity, the aerial vehicle 400 is not fully illustrated and instead is only represented by its central frame 104 in these figures. The shaft 440 may be different from the tunnel 330 in its orientation, in that the shaft 440 is at least substantially vertical whereas the tunnel is at least substantially horizontal. In other words, flying out of an underground shaft 440 may require the aerial vehicle 400 to travel in a direction opposite to the pull of gravity. The aerial vehicle 400 may be any one of the aerial vehicle 100 or 200. Alternatively, the aerial vehicle 400 may be different from any one of the aerial vehicle 100 or 200, in that its camera system 102 may exclude a gimbal. The camera 110 may be fixed in position and/or orientation with respect to the central frame 104 of the aerial vehicle 400, either mechanically or as controlled by a motor of the gimbal.

[0041] FIG. 4A illustrates the method with a side perspective view 400A of a shaft 440. For clarity in the figure, the X axis 120 and the Y axis 130 are shown outside of the aerial vehicle 400, although it should be understood that these axes are defined with respect to the central frame 104. The aerial vehicle 400 may travel along a path 430. The path 430 may be at least substantially parallel to a depthwise direction of the shaft 440, such that the longitudinal axis 120 may be at least substantially perpendicular to the depthwise direction of the shaft 440 when the aerial vehicle 400 is travelling along the shaft 440. If the shaft 440 has a symmetrical geometry, the aerial vehicle 400 may ideally fly along a centre of the shaft 440. As the aerial vehicle 400 travels along the path 430, the aerial vehicle 400 may also spin 450 about the Z axis 140, causing the camera 110 to revolve around the Z axis 140. The camera 110 may capture images at a regular rate as the aerial vehicle 400 rotates 450 and travels along the shaft 440. The rotation movement and translational movement by the aerial vehicle 400 may cause the camera 110 to capture a spiral panoramic image of the interior of the shaft 440. The camera system 102 may include a ranging sensor that may measure a distance between the camera 110 and a nearest surface. The ranging sensor may emit an electromagnetic (EM) wave and receive the EM wave that bounces off the nearest surface of the shaft 440. The ranging sensor may determine the distance between the camera 110 and the nearest surface of the shaft 440 based on the time taken between emitting the EM wave and receiving the reflected EM wave. The aerial vehicle 400 may transfer the spiral panoramic image and the output of the ranging sensor to a processor. The processor, which may be onboard the aerial vehicle 400 or external to the aerial vehicle 400, may convert the spiral panoramic image into a three-dimensional visual model, with the aid of the ranging sensor measurements.

[0042] FIG. 4B illustrates the method of FIG. 4A, with a top view 400B of the shaft 440. While the figures depict the shaft 440 as being cylindrical in shape, it should be understood that the method is applicable for shafts of other shapes and geometry.

[0043] According to various embodiments, the aerial vehicle 400 may fly in a spiral path around the depthwise axis of the shaft 440, instead of spinning about the Z axis 140. This may be especially applicable if the aerial vehicle 400 is a fixed wing aerial vehicle, like the aerial vehicle 200.

[0044] FIG. 5 illustrates an implementation of the aerial vehicle 100 according to various embodiments. The airframe may include a central member 104, also referred herein as the backbone of the airframe. The central member 104 may be a single carbon fiber tube. The central member 104 may be connected to four arms 106, to form a dual Y-shaped airframe. The central member 104 may be an elongated structure, for example a hollow tube, for example a carbon fiber tube. The central member 104 may be elongated along the longitudinal axis 120. Each arm 106 may also be a straight elongated structure, fabricated out of a similar material as the central member 104. The arms 106 may be arranged symmetrically about the longitudinal axis 120. Each arm 106 may have a first end connected to the central member 104, and may have an opposing second end terminated with a propeller guard 510. The second end may be separated from the central member 104. A distance between the first end and the camera 110 along the longitudinal axis 120 may be smaller than a distance between the second end and the camera along the longitudinal axis 120. The propulsion means of the aerial vehicle 100 may include at least one rotor 108 coupled to each arm 106. Each rotor 108 may include a propeller blade 532 and a motor 530, for example a brushless motor, for spinning the propeller blade 532. The propeller blade 532, also referred herein as a propeller, may be arranged to spin about a propeller axis that is at least substantially orthogonal to the longitudinal axis 120. The central member 104 may be hollow for carrying payloads, electrical wiring and processors. All electrical wiring of the aerial vehicle 100 may be routed through the central member 104 to prevent occlusion of the field-of-view (FOV) of the camera 110. A controller 516 may be arranged in the central member 104. The controller 516 may be a flight controller that is configured to process sensor readings and to control the flight of the aerial vehicle 100. The controller 516 may also control the data communications of the aerial vehicle 100, for example via the telemetry link 516 and a video link 508. The telemetry link 516 may transmit telemetry data to, and receive telemetry data from, a computing device that is external to the aerial vehicle 100. The video link 508 may transmit payload data, including imagery captured by the camera system 102, to the computing device that is external to the aerial vehicle 100. The video link 508 may optionally be built into the camera system 102. The central member 104 may also house the camera system 102. The camera system 102 may include a camera 110 which may be a high definition camera. The camera system 102 may further include light emitting diodes to provide illumination for the camera 110 to record images in dark environments. The camera system 102 may also include a range sensor, which may be a time-of-flight (ToF) sensor. The range sensor of the camera system 102 may provide distance measurements for correlating its captured images of a covered infrastructure with the geometry of the covered infrastructure. The camera system 102 may include a rotary actuator, also referred herein as a gimbal. The rotary actuator may be coupled to the camera 110, so as to spin the camera 110. The rotary actuator may be rotatable through 360° about the longitudinal axis 120. The central member 104 may be at least partially transparent. The central member may at least partially surround the camera system 102 with a panoramic glass enclosure 512 such that the camera 110 may always face the glass enclosure 512 even as it rotates through 360°. The glass enclosure 512 may be at least substantially transparent so that the camera 110 may capture images outside of the central member 104. The aerial vehicle 100 may include an array of range sensors 502. The range sensors may be coupled to external surfaces of the airframe, for example, at the front, rear, and sides of the airframe. The range sensors 502, which may be infrared ToF sensors, may provide distance measurements to the controller 516. The distance measurements may be measurements of a distance between the camera 110 and a nearest surface from the camera 110. The controller 516 may determine the location and orientation of the aerial vehicle 100 based on the measurements from the range sensors 502. The airframe may be designed to prevent occlusion of the FOV of the camera 110 by the propellers 532 and other functional on-board components. The arms 106, the propeller guards 510 and the propellers 532 may be offset from the camera 110, in particular along the longitudinal axis. The arms 106, the propeller guards 510 and the propellers 532 may be positioned such that they do not obstruct the field-of-view of the camera 110. The aerial vehicle 100 may include lithium-ion batteries with high energy density to achieve long endurance.

[0045] Optionally, the aerial vehicle 100 may include an optical flow sensor 514 to aid in obstacle avoidance and localization within the covered infrastructure. The aerial vehicle 100 may also include a front operator camera 522, for an operator of the aerial vehicle 100 to see where the aerial vehicle 100 is heading during flight. The aerial vehicle 100 may include a fine-tuned propulsion system which allows for maximum cooling of both the motors and the electronic speed controllers. The aerial vehicle 100 may be manually piloted semi-autonomously using long range radio and video transmission systems. In cases where manual control is not necessary or not possible, the aerial vehicle may make autonomous flights using pre-planned paths which may be stored in the controller 516.

[0046] In addition to the above-mentioned sensors and components, the UAV can also carry an array of environmental sensors that measure the immediate environmental conditions, such as temperature and pressure. The array of environmental sensors may also include an integrated hazardous gas sensor 520 that detects or measures the concentration of specific gases. If a dangerous operating environment is detected, the UAV can be programmed to return home in a low-power state.

[0047] FIG. 6 shows a diagram of a revolving camera system 600 according to various embodiments. The revolving camera system 600 may include, or may be part of, the camera system 102. The revolving camera system 600 may include a camera 110 and a mechanical rotating gimbal 620. The gimbal 620 may be rotatable through a full revolution. The camera 110 may be affixed to the gimbal 620. A ranging sensor, referred herein as ToF sensor 602, may be coupled to the gimbal 620, in close proximity to the camera 110. The gimbal 620 may include a motor 606 and a set of gearing 610. The motor 606 may operate to rotate a first gearing wheel 610a of the set of gearing, and the first gearing wheel 610a may interlock with a second gearing wheel 610b such that rotation of the first gearing wheel 610a causes the second gearing wheel 610b to also rotate. The second gearing wheel 610b may be larger in diameter, as compared to the first gearing wheel 610a. The gimbal 620 may include an opening, for example, through its centre of gravity. The gimbal 620 may be mounted to the central member 104 by having the opening receive the central member 104. Electrical wirings of the revolving camera system 600 may be arranged within the hollow core of the central member 104 and within the opening. The revolving camera system 600 may include its own battery 604 that may independently power its camera 110, motor 606 and ToF sensor 602. The revolving camera system 600 may further include an image transmission unit 608. The image transmission unit 608 may transfer imagery or video captured by the camera 110 to a processor onboard the aerial vehicle via the electrical wiring 612. Additionally, or alternatively, the image transmission unit 608 may include a wireless transmitter to transmit the imagery or video to a computing device external to the aerial vehicle. The revolving camera system 600 may enable high-resolution imaging with minimal optical distortion. The images captured by the camera 110 may be stitched offline for the visual inspection of the imaged surface. The processor on the aerial vehicle may control the rotation speed of the gimbal 620 based on a velocity of the aerial vehicle. The gimbal 620 may spin the camera 110 in a controlled manner, taking images of the inner surface of the covered infrastructure as the aerial vehicle moves forward. The result may be a sequence of spiral panoramic images. These images may be stored locally in the revolving camera system 600, or may be stored in a storage unit in the aerial vehicle. These images may be transmitted to a ground station. The revolving camera system 600 may include multiple cameras 110 depending on the translational speed required for the aerial vehicle, the rotational speed of the gimbal 620, and the overall quality of the captured image. The aerial vehicle may also include more than one revolving camera system 600.
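
The statement that the rotation speed of the gimbal 620 is controlled based on the velocity of the aerial vehicle can be made concrete: for adjacent turns of the spiral to overlap, the vehicle may advance at most one image footprint per gimbal revolution. A minimal sketch under that assumption (field of view, wall distance and overlap fraction are illustrative inputs, not values from the patent):

```python
import math

def min_gimbal_rate(v: float, wall_distance: float,
                    fov_deg: float, overlap: float = 0.2) -> float:
    """Minimum gimbal rotation rate (rad/s) so that consecutive spiral
    strips overlap by the requested fraction.

    v             -- forward speed of the vehicle (m/s)
    wall_distance -- camera-to-wall range from the ToF sensor (m)
    fov_deg       -- camera field of view along the tunnel axis (degrees)
    overlap       -- desired fractional overlap between adjacent strips
    """
    # Axial footprint of one image on the wall.
    footprint = 2.0 * wall_distance * math.tan(math.radians(fov_deg) / 2.0)
    # The vehicle may advance at most (1 - overlap) * footprint per revolution.
    max_advance = (1.0 - overlap) * footprint
    return 2.0 * math.pi * v / max_advance

# Example: 0.5 m/s forward, 2 m to the wall, 60 deg FOV, 20 % overlap.
print(min_gimbal_rate(0.5, 2.0, 60.0))  # ~1.7 rad/s
```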

[0048] FIG. 7 shows a diagram 700 that illustrates a method of mapping a tunnel according to various embodiments. As the aerial vehicle 400 travels through the tunnel 330, its camera 110 may take successive images of the interior of the tunnel 330 as the gimbal rotates with a known fixed angle interval. Using the known aerial vehicle translational speed, gimbal rotational speed, and distance from camera to wall, a panoramic image may be stitched. Sub-diagrams 702, 704 and 706 represent sequential time instances of the aerial vehicle 400 imaging the tunnel 330 as the aerial vehicle 400 travels through the tunnel 330 along the X-axis of the tunnel 330. In sub-diagram 702, the camera 110 of the aerial vehicle 400 images an area q1 712. Area q1 712 may be part of an inner wall of the tunnel 330. The size of area q1 712 may be determined by at least one of the angular position of the camera 110, the distance measured from the ranging sensor of the camera system 102, the flight velocity v of the aerial vehicle 400 and the rotational speed u of the camera 110. Next, in sub-diagram 704, the camera 110 has rotated to another position and may image an area q2 714. Next, in sub-diagram 706, the camera 110 has rotated to another position and may image an area q3 716. The camera system 102 may transmit each of the images of areas 712, 714 and 716 to a processor that may be onboard or in a ground control station. The processor may stitch the images together to form a spiral panoramic strip 720 and may further "roll" or construct a three-dimensional model 722 based on the spiral panoramic strip 720. In forming the three-dimensional model, the processor may rely on the readings from the ranging sensor of the camera system 102, the flight velocity v of the aerial vehicle 400, and the rotational speed u of the camera 110, to correlate each imaged area to a particular position in the tunnel 330.
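
Correlating each imaged area with a position in the tunnel, as described above, is simple bookkeeping from the flight velocity v and the camera rotation speed u. A small illustrative sketch, assuming both are constant over the pass (names are hypothetical):

```python
import math

def frame_pose(t: float, v: float, u: float):
    """Axial position (m) and camera angle (rad) of the frame captured at
    time t, assuming constant forward speed v and constant rotation speed u."""
    x = v * t                          # advance along the tunnel axis
    theta = (u * t) % (2.0 * math.pi)  # angular position around the axis
    return x, theta

# Frames taken at 10 Hz during a 2 s pass at 0.5 m/s with u = 2 rad/s.
poses = [frame_pose(i / 10.0, 0.5, 2.0) for i in range(20)]
```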

[0049] The main challenges of autonomously navigating tunnel environments using aerial robots are the problem of localization in pitch-black, GPS-denied environments, and the development of an energy-efficient aerial platform and sensing methodology to perform extended hours of inspection in long tunnels.

[0050] According to various embodiments, an aerial vehicle may employ a sparse sensing system for obstacle avoidance and localization in tunnels and shafts, to address the abovementioned challenges. The sparse sensing system may be lightweight and energy efficient so that the aerial vehicle may have a high payload capacity and may have a long endurance. The sparse sensing system may require prior knowledge of the tunnel geometry and may perform well in tunnel environments that are relatively featureless, especially so under poor illumination.

[0051] The sparse sensing system may include an array of ranging sensors (for example: ToF sensors) mounted on the aerial vehicle. Depending on the environment, there may be an optimal sensor configuration that enables localization with the lowest degree of errors. The optimal configuration may be mathematically formulated as a spatial optimization problem with constraints such as preventing occlusion of the rotating camera's field-of-view and ensuring feasibility of mechanical implementation on the aerial vehicle.

[0052] In the following, the design optimization of the sparse sensing system is described in detail.

[0053] The localization approach employed on the aerial vehicle may rely on knowledge of the geometry of the tunnel, and may be formulated using a parametric representation of the known geometry. The position of the robot may be estimated based on the sparse array of rangefinders. Analytically, there exist sub-optimal placements of the sensors in the geometrical blind spots of the environment. For instance, assuming the front of the robot is aligned with the longitudinal axis of the tunnel and all sensors are placed pointing directly in front of and behind the robot, pose estimation may not be possible when the robot is rotated in the yaw axis such that the sensors are not within range of the tunnel walls. A design optimization is formulated to search for an optimal spatial configuration of the sensors, in both tunnels and shafts, that results in low-error tracking of the robot pose. To tackle the large search space, a genetic algorithm (GA) is used to solve the optimization problem.

[0054] Notation

[0055] Frames are denoted with italic fonts, e.g. $A$, with unit vectors $\hat{x}_A$, $\hat{y}_A$, $\hat{z}_A$ and origin $O_A$. The local frame, $L$, may be defined with $\hat{x}_L$ parallel to the longitudinal axis of the tunnel, $\hat{z}_L$ defined parallel to the gravity vector, and $\hat{y}_L$ such that $\hat{x}_L \times \hat{y}_L = \hat{z}_L$. $B$ may be a body-fixed frame with $\hat{x}_B$ pointing to the front of the robot, $\hat{z}_B$ down, and $\hat{y}_B$ such that $\hat{x}_B \times \hat{y}_B = \hat{z}_B$. The origin $O_B$ is attached to the geometric centre of the robot. A rotation matrix ${}^L R_B$ transforms a point in frame $B$ into $L$. The full state of the craft is defined in the local frame by the position and attitude of the body frame $B$ in $L$.

[0056] Environmental assumptions

[0057] Tunnel environments may refer to structured environments with high reflection symmetry, a characteristic prevalent in man-made structures, e.g. canals, penstocks, sewerage tunnels, shafts, etc. These structures are uni-axial with a known cross-sectional parametric representation, and visually degraded with poor illumination. In particular, the following description focuses on navigating a horizontal tunnel with a rectangular cross-section and vertically descending a cylindrical shaft with a circular cross-section, although it should be understood that the aerial vehicle may not be limited to navigating tunnels and shafts of such geometries. Due to the lack of salient geometric landmarks and the symmetry of the environment, there are inherent "blind spots" that prohibit the reliable estimation of the position of the robot along the tunnel axis solely from the sparse array of rangefinders.

[0058] Perception Algorithm

[0059] Based on prior knowledge of the geometry of the environment, the perception algorithm translates the range input from the sparse sensing array into reduced-order pose estimates in the tunnel. It involves first finding a suitable cross-section of the tunnel that maximizes the number of robot states that can be estimated, and subsequently deriving a parametric representation for the chosen cross-section of the tunnel. For horizontal tunnels, the reflection symmetry of the tunnel about the local $x$-$z$ plane makes it possible to estimate only 2 DOF: the lateral position offset, $y_L$, and the yaw, $\phi_L$, of the robot. For traversing vertical shafts, the axial symmetry about $z_L$ allows for only 2 DOF pose estimates: the lateral position displacements, $x_L$ and $y_L$. In both cases, the position along the longitudinal axis of the tunnel, i.e. $x_L$ in horizontal tunnels and $z_L$ in the vertical shaft, cannot be estimated from the sparse sensing array, and is left controlled by the operator. The remaining states are estimated with information from the on-board IMU and a downward-pointing rangefinder. The sparse sensing array is populated with rangefinders that have physical range limitations; the minimum and maximum ranges are denoted by $r_{min}$ and $r_{max}$ respectively. These limits can either be retrieved from manufacturer technical specifications or empirically determined. The range measurement from a sensor is along the $x_S$-axis, where $S$ is the sensor frame with its origin attached to the body of the sensor. The individual measurements are rotated to the body frame, $B$. The resulting point cloud of the range measurements from the sparse sensing array in $B$ is denoted as $\mathbf{p}_B \in \mathbb{R}^{M \times 1}$, where $M$ is the number of sensors in the array. Using a linear least squares approach, the point cloud, $\mathbf{p}_B$, is used to estimate the parameters of the parametric representation of the tunnel. The parameter fitting procedure for tunnel environments is discussed first, followed by vertical shaft environments. Unless otherwise stated, the calculations that follow are performed in frame $B$.
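
The rotation of the individual range measurements into the body frame $B$ is a small rigid-body transform. A minimal 2D sketch (the mounting positions and orientations are assumed inputs, not values from the patent):

```python
import numpy as np

def ranges_to_pointcloud(ranges, mount_pos, mount_ang):
    """Project raw range readings into the body frame B (2D case).

    ranges    -- (M,) range of each sensor along its own x_S-axis
    mount_pos -- (M, 2) sensor origins in B
    mount_ang -- (M,) sensor orientations in B (rad)
    Returns an (M, 2) point cloud in B; readings outside [r_min, r_max]
    should be filtered out beforehand.
    """
    ranges = np.asarray(ranges, dtype=float)
    mount_ang = np.asarray(mount_ang, dtype=float)
    directions = np.stack([np.cos(mount_ang), np.sin(mount_ang)], axis=1)
    return np.asarray(mount_pos, dtype=float) + ranges[:, None] * directions
```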

[0060] Pose Estimation in Tunnel Environments

[0061] Most man-made tunnels have local sections with right rectangular prism geometry, bounded by vertically parallel walls on two sides, a water body on the bottom, and a ceiling on the top. In addition, all angles are right angles. This justifies the reduction of the geometry to a 2D planar form, formed by intersecting the geometry with a local $x_L$-$y_L$ plane.

[0062] FIG. 8 shows a planar cross-section 800 with two parallel lines, which is the geometry of a tunnel reduced to a 2D planar form. The Hesse normal form is used to parametrically represent each line as

$$x \cos \alpha_0 + y \sin \alpha_0 = \rho_0 \qquad (2)$$

where $\rho_0$ is the perpendicular distance from $O_B$ to the line, $\alpha_0$ is the slope of the line in a bounded interval $(-\pi, \pi]$, given by the angle the line makes with $\hat{x}_B$, and $x$ and $y$ are coordinates of the feasible set that falls on the line. The feasible set contains the range measurements from the sparse sensing array, where $p_i$ is the measurement of the $i$-th rangefinder projected onto the local $x$-$y$ plane.

[0063] The distance and slope parameters can be determined from the linear regression formulation shown below,

$$A \mathbf{f} = \mathbf{p}_y \qquad (4)$$

and

$$\mathbf{f} = (A^T W A)^{-1} A^T W \mathbf{p}_y \qquad (5)$$

where $\mathbf{p}_x^l$ and $\mathbf{p}_y^l$ are column vectors of the $x$ and $y$ components of the points that fall on the left tunnel wall, $\mathbf{p}_x^r$ and $\mathbf{p}_y^r$ are those of the points that fall on the right tunnel wall, $\mathbf{p}_y$ stacks $\mathbf{p}_y^l$ and $\mathbf{p}_y^r$, $\rho_0^l$ and $\rho_0^r$ are the perpendicular distances to the left and right walls respectively, $\alpha$ is the slope of the line segments, $\mathbf{f}$ collects the line parameters, and $W$ is a diagonal matrix with weights on the diagonal, with $\lambda$ determined from an optimization algorithm that will be discussed in subsequent paragraphs.
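
For illustration, a minimal numerical sketch of the weighted fit in (4)-(5), using a slope-intercept parameterization of the two parallel walls; the exact regressor matrix is my assumption, since the text leaves $A$ unspecified:

```python
import numpy as np

def fit_parallel_walls(pl, pr, weights=None):
    """Weighted least-squares fit of two parallel lines y = a*x + c to the
    left (pl) and right (pr) wall points, each an (N, 2) array in frame B.

    Returns the common slope a and the two intercepts (cl, cr)."""
    n_l, n_r = len(pl), len(pr)
    A = np.zeros((n_l + n_r, 3))
    A[:n_l, 0], A[:n_l, 1] = pl[:, 0], 1.0   # left-wall rows:  [x, 1, 0]
    A[n_l:, 0], A[n_l:, 2] = pr[:, 0], 1.0   # right-wall rows: [x, 0, 1]
    b = np.concatenate([pl[:, 1], pr[:, 1]])
    w = np.ones(n_l + n_r) if weights is None else np.asarray(weights)
    sw = np.sqrt(w)[:, None]                 # weighted least squares
    f, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * b, rcond=None)
    a, cl, cr = f
    return a, cl, cr

# The yaw follows from the slope, e.g. -arctan(a), and the wall separation
# from the intercepts, |cl - cr| / sqrt(1 + a*a).
```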

[0064] In the above formulation, the tunnel walls are assumed to be parallel, which is highly accurate for canal environments that exhibit reflection symmetry about the local $x$-$z$ plane. The partial pose of the robot in the tunnel is then given by

$$y_L = \frac{\rho_0^r - \rho_0^l}{2} \qquad (6)$$

and

$$\phi_L = -\alpha_0 \qquad (7)$$

and $w$ is the estimated width of the tunnel, given by $w = \rho_0^l + \rho_0^r$.

[0065] However, to construct the $A$ matrix in (4), there is a need to determine which rangefinders in the sensing array measured the left and the right tunnel walls, given by $\mathbf{p}^l$ and $\mathbf{p}^r$ respectively. Equation (2) can be rewritten as

$$p(\theta) = \frac{\rho_0}{\cos(\theta - \alpha_0)} \qquad (8)$$

where $p(\theta)$ is the expected measurement of a sensor placed with orientation $\theta$, given the perpendicular distance, $\rho_0$, from the origin and slope, $\alpha_0$.

[0066] Equation (2) does not accurately reflect the physical sensor model, as the rangefinders have limited range and do not output negative range. A more accurate representation would be

$$\hat{p}(\theta) = \begin{cases} \rho_0 / \cos(\theta - \alpha_0), & r_{min} \le \rho_0 / \cos(\theta - \alpha_0) \le r_{max} \\ 0, & \text{otherwise} \end{cases} \qquad (9)$$

[0067] An over-complete dictionary, $D$, may be defined with the $i$-th column, $d_i$, given by

$$d_i = \hat{p}(\boldsymbol{\theta};\, \rho_i, \alpha_i) \qquad (10)$$

where $\boldsymbol{\theta}$ is a column vector of sensor orientations from $(-\pi, \pi]$ with discretization $\Delta\theta$, and $\alpha_i \in (-\pi, \pi]$ with discrete interval $\Delta\alpha$.

[0068] The set of points from the sparse sensing array that falls on the left or right tunnel surface can be found by solving the following optimization problem

$$\min_{\mathbf{s}} \; \| W (\mathbf{p} - D \mathbf{s}) \|_2^2 \quad \text{subject to} \quad \| \mathbf{s} \|_0 \le N \qquad (11)$$

where $W$ is a diagonal matrix with weights $w_i$ on the diagonal, with $\lambda$ from the optimization output, and $\mathbf{s}$ is an $N$-sparse column vector with at most $N$ non-zero elements.

[0069] The minimization problem defined in (11) is a known NP-hard problem. However, such problems are well explored in the compressive sensing literature, and various approximation methods are widely used. The method of matching pursuit (MP) may be used to find an $N$-sparse approximation to the problem. The solution is denoted as $\mathbf{s}^*$. Then, the points that fall on the same tunnel wall are given by the index set, $S$, of the non-zero entries of $D\mathbf{s}^*$, i.e.

$$S = \{ i : (D\mathbf{s}^*)_i \neq 0 \} \qquad (12)$$
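
As an illustration of (9)-(12), the sketch below builds the clipped-range dictionary and runs a basic matching pursuit; the grid resolutions, range limits and function names are assumptions rather than values from the text:

```python
import numpy as np

def expected_ranges(thetas, rho, alpha, r_min=0.1, r_max=4.0):
    """Expected measurements of sensors at orientations thetas (rad) for a
    wall in Hesse form (rho, alpha); out-of-range readings become 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        p = rho / np.cos(np.asarray(thetas) - alpha)
    return np.where((p >= r_min) & (p <= r_max), p, 0.0)

def build_dictionary(thetas, rhos, alphas):
    """Over-complete dictionary D: one column per hypothesized (rho, alpha)."""
    cols = [expected_ranges(thetas, r, a) for r in rhos for a in alphas]
    return np.stack(cols, axis=1)

def matching_pursuit(D, p, n_atoms=1):
    """Greedy N-sparse approximation of the measurement vector p over D."""
    residual = np.asarray(p, dtype=float).copy()
    s = np.zeros(D.shape[1])
    norms = np.linalg.norm(D, axis=0)
    norms[norms == 0.0] = 1.0
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual) / norms))
        coef = (D[:, k] @ residual) / norms[k] ** 2
        s[k] += coef
        residual -= coef * D[:, k]
    return s

# The index set S of points on one wall is then the support of D @ s.
```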

[0070] Let $\mathbf{p}_S$ denote the submatrix of $\mathbf{p}$ containing only the elements with index in $S$, and $\mathbf{p}_{\bar{S}}$ the set of entries of $\mathbf{p}$ that are not in the set $S$. At this point, the points are segmented into two sets, $\mathbf{p}_S$ and $\mathbf{p}_{\bar{S}}$, but it is still not known which set belongs to the left or the right tunnel wall. The notation defined earlier may be used to resolve this final issue. Let $\mathbf{p}^l = \mathbf{p}_{\bar{S}}$ and $\mathbf{p}^r = \mathbf{p}_S$, and using the linear regression formulation in (4)-(5), the resultant $\mathbf{f}$ vector in (5) should place the left wall on the left of the robot and the right wall on the right. If the converse is true, then $\mathbf{p}^l = \mathbf{p}_S$ and $\mathbf{p}^r = \mathbf{p}_{\bar{S}}$ instead.

[0071] Pose Estimation in Vertical Shaft Environments

[0072] As vertical shafts are cylindrical, the problem reduces to a 2D form by intersecting the geometry with a local $x$-$y$ plane.

[0073] FIG. 9 shows a circular cross-section 900, which is the geometry of a vertical shaft reduced to a 2D form. The parametric representation of the circle is given as

$$(x - x_0)^2 + (y - y_0)^2 = r_0^2 \qquad (13)$$

where $(x_0, y_0)$ is the centre of the circle in frame $B$, and $r_0$ is the radius of the circle.

[0074] Similarly, the parameters can be estimated by a least squares formulation to fit the sensor measurements to the parametric representation. Expanding (13) gives the linear system

$$2 x_0 x_i + 2 y_0 y_i + c = x_i^2 + y_i^2, \qquad c = r_0^2 - x_0^2 - y_0^2 \qquad (14)$$

which is solved for $[x_0, y_0, c]^T$ in the least squares sense,

$$[x_0, y_0, c]^T = (A^T A)^{-1} A^T \mathbf{b} \qquad (15)$$

where the $i$-th row of $A$ is $[2x_i, 2y_i, 1]$ and the $i$-th entry of $\mathbf{b}$ is $x_i^2 + y_i^2$.

[0075] Then, the local position of the robot within the vertical shaft is

$$[x_L, y_L]^T = -[x_0, y_0]^T \qquad (16)$$

[0076] The radius of the shaft can also be determined as

$$r_0 = \sqrt{c + x_0^2 + y_0^2} \qquad (17)$$
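
A minimal sketch of the shaft fit in (14)-(17), using the linearization described above (a standard Kasa-style circle fit; the function name and solver choice are mine):

```python
import numpy as np

def fit_circle(points):
    """Linear least-squares circle fit to an (N, 2) point cloud in frame B.

    Expands (x - x0)^2 + (y - y0)^2 = r0^2 into the linear system
    2*x0*x + 2*y0*y + c = x^2 + y^2 with c = r0^2 - x0^2 - y0^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones(len(points))])
    b = x ** 2 + y ** 2
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r0 = float(np.sqrt(c + x0 ** 2 + y0 ** 2))
    return x0, y0, r0

# In frame B the shaft centre sits at (x0, y0), so the robot's planar
# offset from the shaft axis is (-x0, -y0).
```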

[0077] Design Optimization of Sparse Sensing Array

[0078] FIGS. 8 and 9 illustrate the planar position estimation for an aerial vehicle that is traversing a cylindrical shaft vertically and a right rectangular prism tunnel horizontally. A planar configuration may be sufficient to estimate the planar position ($x$ and $y$) of the aerial vehicle in the environment. In this scenario, the sensor inputs are used to fit a circle and a pair of parallel lines for shaft and tunnel navigation respectively, to estimate the position of the aerial vehicle in the environment. However, these geometries can be generic depending on the geometry of the tunnel, and are not constrained to cylindrical shafts and right rectangular prism tunnels. For a robust estimation of 3D position ($x$, $y$ and $z$) in 3D space, it is necessary for the sensors to be placed in more than one plane. The algorithm for estimating the position of the aerial vehicle is similar to the 2D case: the measurements from the sensor input are used to fit a parametric representation of the known geometry of the tunnel.

[0079] FIG. 10 shows a visual illustration 1000 of the optimization problem formulated in 2D, where the sensor placements and the spatial constraints are illustrated in a 2D plane. The optimization variables are illustrated in the body frame of the aerial robot. The algorithm optimizes the spatial position and orientation of $m$ sensors mounted on the right half plane of the craft. The infeasible regions for placement of the sensors due to the camera FOV 1002 and mechanical placement 1004 are shown. There are a total of $m_0$ sensors on the robot. The optimization changes the mounting positions $d_B$ and mounting orientations $\theta_B$ for all the $m_0$ sensors. It is assumed that the sensors are mounted symmetrically on the robot, with $\hat{x}_B$ as the line of symmetry. This assumption is valid due to the reflection symmetry of tunnel environments. Hence, the optimization explores a $(3 \times m)$-dimensional space (where $m = m_0/2$) for the placement of the sensors on the right-hand plane of the robot. The optimal sensor configuration is optimized for a certain range of tunnel parameters $\gamma$, i.e. $\gamma$ corresponds to $w$, $\alpha$ of a tunnel or $r_0$ of a shaft. For a sampled tunnel parameter $\gamma_i$, sensor readings from a set of robot poses $\eta_j$ within the tunnel are also simulated. The sensor noise is simulated as a random Gaussian distribution centered at the noise-free range measurement. To evaluate the performance of a sensor configuration, the root mean square (rms) error, $E$, is used. $E$ is defined as

$$E = \sqrt{\frac{1}{N_\gamma N_\eta} \sum_{i} \sum_{j} \| \hat{\mathbf{f}}_{i,j} - \mathbf{f}^*_{i,j} \|_2^2} \qquad (18)$$

where $\hat{\mathbf{f}}$ denotes the estimated position of the robot from (6) and (16) calculated from the simulated sensors, and $\mathbf{f}^*$ is the ground truth position of the robot. The subscript $i,j$ denotes that the pose was estimated from simulated sensor readings with tunnel parameter $\gamma_i$ and robot pose $\eta_j$. In this formulation, the errors across all the robot states are equally weighted, i.e. 1 rad of error in yaw estimation is equivalent to 1 m of displacement error in the tunnel.
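
Evaluating (18) amounts to simulating noisy sensor readings over the sampled tunnel parameters and poses and scoring a candidate configuration by its rms pose error. A schematic sketch in which the simulator and estimator are placeholders to be supplied:

```python
import numpy as np

def rms_error(estimate_pose, simulate_ranges, gammas, etas,
              sigma=0.02, seed=0):
    """Root-mean-square pose error E of one sensor configuration.

    estimate_pose   -- maps a vector of (noisy) ranges to an estimated pose
    simulate_ranges -- maps (gamma, eta) to noise-free sensor ranges
    gammas          -- sampled tunnel parameters (e.g. width or radius)
    etas            -- sampled ground-truth robot poses
    sigma           -- standard deviation of the Gaussian sensor noise
    """
    rng = np.random.default_rng(seed)
    sq_errs = []
    for gamma in gammas:
        for eta in etas:
            ranges = simulate_ranges(gamma, eta)
            noisy = ranges + rng.normal(0.0, sigma, size=ranges.shape)
            est = np.asarray(estimate_pose(noisy))
            sq_errs.append(np.sum((est - np.asarray(eta)) ** 2))
    return float(np.sqrt(np.mean(sq_errs)))
```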

[0080] Then, the optimal sensor placement can be obtained by minimizing the error function $E$, or

$$\min_{d_B,\, \theta_B,\, \lambda} \; E + g(d_B) \qquad (19)$$

where $g$ is a logarithmic barrier function that penalizes the placement of the sensors near the constraint bounds. The design optimization also solves for the optimal $\lambda$, a parameter of the radial weighting function used in the weighted least squares, and in the minimization problem in (11).

[0081] There are physical constraints on the robot that limit the feasible region for the placement of the sensors. In the case of aerial robots designed for visual inspection of the entire tunnel surface, the sensors cannot be placed within the field of view (FOV) of the camera, and the sensors need to be reasonably close to the mechanical components of the robot for ease of mounting and electrical wiring. These physical bounds can be described mathematically by a generic user-defined function. For simplicity, the infeasible region due to the camera FOV is defined as a rectangle, and the infeasible region due to mechanical placement is defined as the area outside the bounding box with its corners fixed on the centre of each propulsion system. The infeasible regions are also illustrated in FIG. 10.
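
The two infeasible regions described above reduce to simple box tests on a candidate mounting position. A sketch with the corner coordinates as hypothetical inputs:

```python
def placement_feasible(d, fov_rect, arm_box):
    """Check a sensor mounting position d = (x, y) in frame B against both
    constraints: it must lie outside the camera-FOV rectangle and inside
    the bounding box spanned by the propulsion units.

    fov_rect, arm_box -- ((xmin, ymin), (xmax, ymax)) corner pairs."""
    (fx0, fy0), (fx1, fy1) = fov_rect
    (ax0, ay0), (ax1, ay1) = arm_box
    in_fov = fx0 <= d[0] <= fx1 and fy0 <= d[1] <= fy1
    in_box = ax0 <= d[0] <= ax1 and ay0 <= d[1] <= ay1
    return (not in_fov) and in_box
```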

[0082] FIGS. 11A-11D show possible, but not limiting, configurations of the sparse array sensors according to various embodiments.

[0083] FIG. 11A shows a top view 1100A of a planar configuration of the sparse array sensors for vertical navigation along a cylindrical shaft. In the planar configuration, the ToF sensors may be mounted on the top-down plane. The ToF sensors may be arranged to detect the lateral side walls in vertical ascending and descending flights within covered infrastructure.

[0084] FIG. 11B shows a depthwise cross-sectional view 1100B of the cylindrical shaft with an aerial vehicle fitted with the planar configuration of the sparse array sensors.

[0085] FIG. 11C shows a front cross-sectional view 1100C of a tunnel with an aerial vehicle fitted with the front/rear configuration of sparse array sensors. In the front/rear configuration, the ToF sensors may be mounted on the front-back plane. The front and rear configurations may feature ToF sensors that detect the cross-section of the covered infrastructure in forward and reverse flight.

[0086] FIG. 11D shows a side cross-sectional view 1100D of a tunnel with an aerial vehicle fitted with the front/rear configuration of sparse array sensors.

[0087] The ToF sensors for the planar and front/rear configurations may overlap to reduce the total number of ToF sensors required. The ToF sensors may be complemented by optical flow sensors that are used for localization along the tunnel axis, and may also be complemented with a downward-pointing laser altimeter for measuring the altitude of the UAV within the covered infrastructure. The downward-pointing laser altimeter may or may not be part of the rear configuration.

[0088] Optimization using Genetic Algorithm

[0089] A genetic algorithm (GA) is a nature-inspired evolutionary algorithm that uses mutation, crossover and selection to yield high-quality solutions for large optimization problems. In this case, the design optimization presented involves a large combinatorial search over a $(3 \times m)$-dimensional space for the optimal sensor configuration that minimizes the rms error $E$. The GA tackles the large search space by discretizing the space into nodes, where each node is a possible location to mount a sensor. During the search, each candidate solution is coded into a gene, which contains the optimization variables $d_B$, $\theta_B$ and $\lambda$. The rms error $E$ is directly used to evaluate the fitness of a particular gene. In each generation, a group of elites with the best fitness is guaranteed to survive to the next generation. The remaining genes are used to breed the next generation of candidate solutions, known as children, through crossover and mutation. The algorithm is allowed to evolve over many generations until the average change in the best fitness of the population stalls over a user-defined number of generations. The gene with the best fitness in the final generation is the optimal solution to the optimization problem.
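
For illustration, a bare-bones real-coded genetic algorithm with elitism, uniform crossover and Gaussian mutation, in the spirit of the procedure described; the population size, rates and operators are assumptions:

```python
import numpy as np

def genetic_minimize(fitness, bounds, pop=60, elites=4, mut_rate=0.1,
                     mut_scale=0.05, gens=300, seed=0):
    """Minimize fitness(gene) over a box; a gene encodes (d_B, theta_B, lambda).

    bounds -- (n, 2) array of [low, high] per optimization variable."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    genes = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in genes])
        genes = genes[np.argsort(scores)]                     # best first
        children = [genes[i].copy() for i in range(elites)]   # elitism
        while len(children) < pop:
            a, b = genes[rng.integers(elites, pop // 2, size=2)]
            mask = rng.random(len(lo)) < 0.5                  # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(len(lo)) < mut_rate           # Gaussian mutation
            child = child + mutate * rng.normal(0.0, mut_scale * (hi - lo))
            children.append(np.clip(child, lo, hi))
        genes = np.array(children)
    scores = np.array([fitness(g) for g in genes])
    return genes[int(np.argmin(scores))]
```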

[0090] Numerical Results

[0091] A numerical simulation was conducted and the rms error E was studied for the m = 2, 3 and 4 configurations, corresponding to layouts of 4, 6 and 8 sensors on the robot. The optimal configuration from the GA and the parameters used for the numerical simulation are shown in Table I.

[0092] In all cases, the best and mean penalty values and the average distance between individuals converge, indicating that an optimal solution is found.

[0093] FIG. 12 shows graphs 1200A and 1200B that illustrate the results of the numerical simulation. The graph 1200A includes a vertical axis 1202 indicating average distance, and a horizontal axis 1204 indicating the number of generations. The graph 1200A shows that the average distance between individuals converges within 180 generations for the m = 3 optimization. The graph 1200B includes a vertical axis 1206 indicating penalty value, and a horizontal axis 1204 indicating the number of generations. The graph 1200B shows that the mean penalty value and the best penalty value converge within 180 generations for the m = 3 optimization. The best penalty value stalled after 180 generations. The best and mean penalty values of the final population are 0.00763 and 0.0488 respectively.

[0094] The optimal configurations output from the GA have good localization performance, with an rms error E of at most 0.06 m (the E for the M2 configuration). The rms error improves as the sensor count increases: the M3 and M4 configurations have lower E. Increasing m from 2 to 3 results in a significant two-fold reduction of the rms error, from 0.058 m to 0.03 m. A further increase to m = 4 reduces the rms error again, but the reduction is less significant.

[0095] There may be an intuitive explanation for the GA result for M2. The M2 configuration has the sensors pointing to the immediate left and right of the robot. This ensures that the range measurements of the rangefinders remain within range throughout the simulated yaw range of ±60° at various poses within the tunnel. The GA results for M3 and M4 are more difficult to explain intuitively. The results from the GA were compared to sensor configurations that were placed heuristically. These heuristic configurations always result in a suboptimal E. Inspired by M2, the M2' configuration has one sensor pointing 25° forward and the other pointing at 0°, to the immediate right. M2' has a high E of 1.189 m and a reduced yaw range of ±30°. The M4' configuration has sensors pointing at 25° intervals; the resultant E is 0.804 m, with a reduced yaw range of ±40°, which is worse than M4. M6' has an E of 1.51 m and a reduced yaw range of ±40°. Placing the sensors in the sparse sensing array through heuristics or trial and error is thus a challenging task. On average, the rms error of the GA configurations is 36 times better than that of the suboptimal configurations, and the GA is shown to be extremely effective in producing an optimal sensor configuration with minimal E.

[0096] Experimental results

[0097] A series of four experiments was performed to evaluate the performance of the optimal sensing configuration with the proposed pose estimation to autonomously navigate tunnel environments. In these experiments, when the robot was flight tested autonomously, the pilot only commanded the position of the robot along the longitudinal axis of the tunnel, i.e. along the x-direction in horizontal tunnel environments and the z-direction in vertical shaft environments. The remaining DOF were controlled by the on-board controller. The experiments in the horizontal tunnel are discussed first, followed by those performed in the vertical shaft.

[0098] Prototype platform

[0099] FIG. 13 shows a photo of a prototype 1300 of the aerial vehicle used in the experiments. The prototype 1300 may be representative of the aerial vehicle according to various embodiments. The prototype 1300 includes the rotating camera system 102, the lean sensing system, a lithium-ion battery, and a fine-tuned powertrain integrated on the aerial vehicle platform. Data was collected from experiments done in actual tunnels and shafts. The endurance of the prototype was also tested in a lab environment.

[00100] The visual inspection system, also referred to herein as the camera system 102, may be independent of the aerial platform, supplied with its own battery and microcontroller. As such, all electrical wiring for power and digital signals runs through a central 25 mm diameter carbon-fibre rod that the visual inspection system rotates about. The carbon-fibre rod may be the central member 104. The arms 106 of the prototype 1300 may also be fabricated out of carbon-fibre composite. The rotor 108, or propulsion system, may include a DJI E1200 powertrain. The flight controller 516 may be a Pixhawk 2.1 flight controller with a built-in IMU, and an Intel Edison as the companion computer. The range sensors 502 may include a lightweight array of six TeraRanger One rangefinders, weighing 10 g each. The sensor configuration is a hybrid of the M4 and M6 configurations from the GA result of the numerical simulation. The hybrid allows for redundancy in the sensing system. For example, if either of the front two rangefinders were to fail, the algorithm can fall back to an algorithm that uses only four rangefinders to continue the navigation mission. The prototype weighs 5 kg, inclusive of the 21000-mAh 6S 10C Li-ion battery.
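
By way of illustration only, a minimal Python sketch of this fall-back selection follows, assuming hypothetical rangefinder identifiers and treating a reading of None as a detected failure; the actual failure-detection logic is not detailed in this description.

    def select_estimator(readings):
        """readings: dict mapping rangefinder id -> range in metres, or None on failure.
        Returns the estimator mode and the list of usable sensors."""
        front = ("front_left", "front_right")   # hypothetical ids for the front pair
        healthy = [s for s, r in readings.items() if r is not None]
        if any(readings.get(s) is None for s in front):
            # Degrade gracefully: drop the front pair and run the four-sensor (M4) estimator.
            return "M4", [s for s in healthy if s not in front]
        return "M6", healthy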

[00101] Table II shows the breakdown of the weights of the various subsystems. The sparse sensing array and additional companion computer contribute merely 2.8% of the total weight.

[00102] Table III shows that, compared to similar UAVs documented in the academic literature, the sparse sensing system is on average 5 times lighter and consumes 12 times less power. The lean and low-power sensing system, as a result, enables the prototype to achieve 35 minutes of autonomous flight. The total weight of the proposed system is at most a quarter of that of conventional systems, with at least a ten-fold reduction in power consumption. These savings directly translate into improved flight endurance of the proposed platform.

[00103] System architecture

[00104] The proposed perception algorithm described in the earlier sections is implemented on an Intel Edison computer running a Debian-based Linux distribution. The Edison runs the Robot Operating System (ROS) middleware for low-latency data acquisition, inter-hardware communication, and high-level processing tasks. It interfaces to the sparse planar sensing array, a downward-pointing altimeter, and the IMU, and translates these sensory inputs into partial local position estimates within the tunnel. These estimates are fed into the EKF for full state estimation of the robot. A Pixhawk 2.1 autopilot running the PX4 flight stack outputs low-level commands for the control of the SWIRL platform, based on the pilot input and the state estimates of the robot.
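
As a simplified, illustrative stand-in for the EKF fusion described above (not the actual flight code), the following one-dimensional linear Kalman filter corrects an IMU-driven prediction with a position estimate derived from the range sensors. All matrices and noise values are tuning placeholders.

    import numpy as np

    dt = 0.02                               # 50 Hz update rate, assumed
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
    B = np.array([[0.5 * dt ** 2], [dt]])   # maps IMU acceleration into the state
    H = np.array([[1.0, 0.0]])              # range-derived position measurement model
    Q = np.eye(2) * 1e-4                    # process noise (placeholder tuning)
    R = np.array([[0.02]])                  # measurement noise (placeholder tuning)

    x = np.zeros((2, 1))                    # state estimate
    P = np.eye(2)                           # state covariance

    def predict(a_imu):
        """Propagate the state with the latest IMU acceleration."""
        global x, P
        x = F @ x + B * a_imu
        P = F @ P @ F.T + Q

    def update(pos_meas):
        """Correct the state with a position derived from the sparse range array."""
        global x, P
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[pos_meas]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

    predict(a_imu=0.1)      # example: propagate with a sample acceleration
    update(pos_meas=0.05)   # example: correct with a sample position estimate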

[00105] Autonomous flight in tunnels

[00106] The first and second experiments were carried out in horizontal tunnels. In these experiments, the pilot manually controls the robot to take off and fly to an initial pose that is roughly aligned to the tunnel axis, i.e. with a heading of zero relative to the tunnel. At this point, the pilot toggles to autonomous mode. In this mode, the pilot only retains manual control of the acceleration along the tunnel axis, the x-direction.
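
As a purely hypothetical illustration of this command split (the field names below are not the actual flight-stack interface), an autonomous-mode setpoint could be assembled as follows; the analogous split applies in the vertical shaft, with the z-axis in place of the x-axis.

    def autonomous_setpoint(pilot_accel_x, altitude_hold):
        """Pilot keeps only the along-tunnel acceleration; the on-board controller
        regulates every remaining degree of freedom toward the centreline."""
        return {
            "accel_x": pilot_accel_x,   # pilot-commanded, along the tunnel axis
            "y": 0.0,                   # hold the lateral centreline
            "z": altitude_hold,         # hold the commanded altitude
            "yaw": 0.0,                 # stay aligned with the tunnel axis
        }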

[00107] FIG. 14 shows a graph 1400 that plots the result of a first experiment. The first experiment was performed in an indoor mock-up tunnel. The indoor mock-up environment is a corridor approximately 10 m long, 4.2 m wide, and 2.6 m high. The robot flew autonomously for a period of 46 seconds, covering a total distance of 20 m (10 m forward and 10 m back to the initial starting point). The graph 1400 includes a vertical axis 1402 that indicates position in metres, and a horizontal axis 1404 that indicates autonomous flight time in seconds. The rms position error of the robot was 0.1 m, and the maximum deviation from the centreline was 0.3 m. The fluctuation of the estimates is partially attributed to the presence of protruding aluminium fire extinguisher cabinets on the left side, and permanent structures on the right of the corridor.

[00108] FIG. 15 shows a graph 1500 that plots the result of a second experiment. The second experiment was carried out in a covered section of the Eu Tong Sen Canal, mimicking the poor illumination of the DTSS main tunnel. The tunnel section is approximately 45 m long, 6 m wide and 2 m high (measured from the water surface to the top surface). The robot flew autonomously for a period of 36 seconds, travelling a horizontal distance of 45 m. The graph 1500 includes a vertical axis 1502 that indicates position in metres, and a horizontal axis 1504 that indicates autonomous flight time in seconds. The graph 1500 shows a sudden spike in the position estimates at around 28 s into the autonomous flight due to an unexpected opening on the left side of the canal. The rms position error of the robot was 0.13 m, and the maximum deviation from the centreline was 0.41 m. These errors are close to those of the experiment in the mock-up environment.

[00109] Autonomous flight in shafts

[00110] The goal of the third and fourth experiments was to evaluate the autonomous flight performance in vertical shaft environments. Similar to the previous experiments, the pilot manually controls the robot to take off and fly to an initial position that is roughly at the centre of the shaft. At this point, the pilot toggles to autonomous mode. In this mode, the pilot only retains manual control of the acceleration along the shaft axis, the z-direction.

[00111] FIG. 16 shows the results of a third experiment. The third experiment was performed in an indoor mock-up of a vertical shaft. The indoor mock-up environment was assembled from a curtain hung on a circular ring, creating a cylindrical environment similar to the vertical access shaft of the DTSS main tunnel. The mock-up shaft is approximately 2.4 m high and 2.8 m in diameter. The robot flew autonomously for about 4.5 minutes, travelling a total vertical distance of 32.3 m (21 repetitions of climbing about 0.75 m up and descending 0.75 m down from the initial altitude of 1 m). The rms position error of the robot was 0.11 m, and the maximum deviation from the centreline was 0.32 m.

[00112] FIG. 17 shows the results of a fourth experiment. The fourth experiment was carried out in a vertical access shaft to the main tunnel of the DTSS. The vertical shaft is approximately 45 m long, with a diameter of about 5 m. In this experiment, the robot was inserted into the shaft through a 1 m manhole opening at ground level. A safety tether was also attached from a winching system at the manhole opening to SWIRL, for the insertion of the platform into the 5 m diameter section of the vertical shaft and for emergency retrieval when necessary. The robot was flown autonomously for about 4.5 minutes, and the total vertical distance travelled was approximately 8 m. The opening of the manhole of the vertical access shaft results in the venting of the highly pressurized DTSS tunnel. As a result, there was a large volume of high-velocity air escaping from this manhole. The measured wind speed at the manhole opening was up to 16 m/s. The constant wind updraft in the shaft caused the robot to oscillate visibly during the autonomous flight, coupled with some effects of turbulence from flying in an enclosed environment. Compared to the experiment in the indoor mock-up shaft, the combined effects of the destabilizing wind updraft and the turbulent environment resulted in a significantly higher rms position error of 0.53 m, and the maximum deviation from the centre of the shaft was recorded at 1.44 m.

[00113] Extended flight in shaft environments

[00114] Lastly, to evaluate the endurance of the system, the prototype 1300 was tested in an indoor mock-up of the vertical shaft. In this experiment, the robot was commanded to hover at a predetermined height at the centre of the shaft. The prototype 1300 achieved a total flight time of 35 minutes and 41 seconds.

[00115] FIG. 18 shows a graph 1800 that plots the absolute Euclidean error r throughout the flight. The rms error and maximum error during the autonomous flight were 0.16 m and 0.46 m respectively.

While embodiments of the invention have been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced. It will be appreciated that common numerals, used in the relevant drawings, refer to components that serve a similar or the same purpose.

[00116] It will be appreciated by a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[00117] It is understood that the specific order or hierarchy of blocks in the processes / flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes / flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[00118] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words "module," "mechanism," "element," "device," and the like may not be a substitute for the word "means." As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for."