Title:
METHOD, DEVICE, AND SYSTEM FOR MONITORING AN UNDERGROUND ASSET
Document Type and Number:
WIPO Patent Application WO/2022/192965
Kind Code:
A1
Abstract:
A monitoring apparatus for monitoring for threats to underground assets within a geographic area, comprising: a controller interfaced with an image capture device, the controller configured to: control the image capture device to capture images of an imaging area associated with the geographic area; process the captured images using a pretrained image analyser stored in a memory of the controller configured to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determine a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generate an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus, and related methods and systems.

Inventors:
BARTHELEMY JOHAN (AU)
Application Number:
PCT/AU2022/050246
Publication Date:
September 22, 2022
Filing Date:
March 18, 2022
Assignee:
FUTURE FUELS CRC LTD (AU)
International Classes:
G06K9/00; E02F9/24; G06N20/00; G06Q50/00; G06V20/00; G08B23/00
Foreign References:
US20180307912A12018-10-25
US20100013627A12010-01-21
KR20160038595A2016-04-07
US20150364029A12015-12-17
US20150339911A12015-11-26
Attorney, Agent or Firm:
GRIFFITH HACK (AU)
Claims:

1. An image analyser for implementation with a monitoring apparatus for monitoring for threats to underground assets within a geographic area, said monitoring apparatus comprising a controller interfaced with an image capture device and comprising a memory, said controller configured to implement the image analyser, the image analyser configured to: receive images of an imaging area captured by the image capture device, wherein the imaging area is associated with the geographic area; process the captured images to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determine a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generate an instruction to the controller to generate an alert indicative that the threshold threat requirement is met, wherein the image analyser utilises, at least in part, a pretrained machine learning algorithm to identify the one or more objects and the threat factor.

2. An image analysis method for implementation by a monitoring apparatus for monitoring for threats to underground assets within a geographic area, comprising the steps of: receiving images of an imaging area captured by an image capture device of the monitoring apparatus, wherein the imaging area is associated with the geographic area; processing, utilising at least in part a pretrained machine learning algorithm, the captured images to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determining a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generating an instruction to generate an alert indicative that the threshold threat requirement is met.

3. A controller of a monitoring apparatus for monitoring for threats to underground assets within a geographic area, the controller comprising a memory having stored an image analyser according to claim 1, wherein the controller is configured to: control an interfaced image capture device to capture images of an imaging area associated with the geographic area; process the captured images using the image analyser; and in response to generation of the instruction to generate an alert, generate an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

4. A monitoring apparatus for monitoring for threats to underground assets within a geographic area, comprising: a controller interfaced with an image capture device, the controller configured to: control the image capture device to capture images of an imaging area associated with the geographic area; process the captured images using a pretrained image analyser stored in a memory of the controller configured to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determine a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generate an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

5. A monitoring apparatus as claimed in claim 4, wherein the image capture device is a camera.

6. A monitoring apparatus as claimed in claim 4 or claim 5, further comprising a network interface operably interfaced with the controller, and wherein the controller is further configured to: communicate, via the network interface, with a network; and send, via said network, generated alerts to a processing server.

7. A monitoring apparatus as claimed in claim 6, wherein the network interface is adapted to communicate with a base station via a low-power wide-area network such as LoRaWAN™.

8. A monitoring apparatus as claimed in claim 6 or claim 7, wherein the network interface is adapted to communicate with a satellite constellation in order to access the processing server.

9. A monitoring apparatus as claimed in claim 6 or claim 7, wherein the network interface is adapted to communicate with a mobile broadband network.

10. A monitoring apparatus as claimed in any one of claims 6 to 9, wherein the network interface is adapted to communicate with one or more other monitoring apparatuses.

11. A monitoring apparatus as claimed in any one of claims 4 to 10, wherein the threat factor corresponds to the identification of any object as a threat.

12. A monitoring apparatus as claimed in any one of claims 4 to 10, wherein the threat factor accounts for the threat profile or profiles of identified object(s).

13. A monitoring apparatus as claimed in any one of claims 4 to 12, wherein the data structure of the alert further comprises data indicating a particular value of the determined threat factor.

14. A monitoring apparatus as claimed in any one of claims 4 to 13, wherein the controller is further configured to cease generation of further alerts after generation of a particular alert until a reset condition is satisfied.

15. A monitoring apparatus as claimed in claim 14, comprising one or more of the following reset conditions: the identification of a previously unidentified threat; the elapse of a predefined time since the most recent alert; and the determination of the identified threat or threats having exited the image area.

16. A monitoring apparatus as claimed in any one of claims 4 to 15, wherein the controller is further configured to: require an identified object or objects to be present for a predefined period of time before determining a threat level meeting the threshold.

17. A monitoring apparatus as claimed in any one of claims 4 to 16, wherein the image analyser is trained using a plurality of training images each comprising at least one object associated with a threat and annotated indicating the presence of said object or objects.

18. A monitoring apparatus as claimed in claim 17, wherein the image analyser is also trained using a plurality of additional training images each not comprising an object associated with a threat and annotated indicating that no threat is present.

19. A monitoring apparatus as claimed in any one of claims 4 to 18, wherein the image analyser is configured to process captured images in real-time or near real-time.

20. A monitoring apparatus as claimed in any one of claims 4 to 19, wherein the image analyser utilises a machine learning algorithm, such as a deep convolutional neural network (DCNN).

21. A monitoring apparatus as claimed in claim 20, wherein, after or during training and testing, the DCNN of the image analyser is modified by reducing a precision utilised during operation of the image analyser when identifying objects, thereby reducing required processing power with respect to a non-modified DCNN.

22. A monitoring apparatus as claimed in claim 21, wherein the DCNN is modified by one or more of the following: a) utilising a mixture of unmodified floating-point precision, such as FP32, and a reduced floating-point precision, such as FP16, during training and testing; b) utilising Quantisation Aware Training (QAT) during training and testing to enable conversion to an integer precision, such as INT8; and c) performing, after training and testing, quantisation to convert to an integer precision, such as INT8.

23. A monitoring apparatus as claimed in claim 21 or claim 22, wherein the DCNN is modified by pruning, after training and testing, the DCNN to remove nodes with a relatively small contribution to identifying threats.

24. A monitoring apparatus as claimed in any one of claims 4 to 23, wherein the underground asset is a pipeline.

25. A monitoring apparatus as claimed in any one of claims 4 to 24, further comprising a power supply comprising a power generator, such as a solar panel, and/or a power storage, such as a battery.

26. A monitoring apparatus as claimed in any one of claims 4 to 25, wherein an object’s threat profile is based on a capacity of the object to damage the underground asset.

27. A monitoring apparatus as claimed in any one of claims 4 to 26, wherein the controller is configured for operation in at least an active state and a sleep state, such that, when operating in the sleep state, a total power usage of the monitoring apparatus is lower than when operating in the active state.

28. A monitoring apparatus as claimed in claim 27, wherein the controller is configured to enter the active state from the sleep state in response to an elapse of a predefined sleep period since entering the sleep state and/or wherein the controller is configured to enter the active state from the sleep state in response to a signal generated by an interfaced environment sensor indicative of the presence of an object within the imaging area.

29. A monitoring apparatus as claimed in any one of claims 4 to 28, wherein the controller is updateable via transfer of a replacement image analyser program into its memory, such that the controller implements the replacement image analyser program after the transfer when implementing the image analyser.

30. A monitoring system comprising a plurality of monitoring apparatuses each as claimed in any one of claims 4 to 29, wherein the monitoring apparatuses are arranged over a geographic area such that each monitoring apparatus images and thereby monitors an imaging region associated with the geographic area, preferably wherein each imaging region is substantially non-overlapping with any other imaging region, and preferably wherein each imaging region includes a portion of the underground asset.

31. A monitoring system as claimed in claim 30, wherein at least one of the plurality of monitoring apparatuses is affixed to a marker post comprising warning indicia indicating the presence of the underground asset.

32. A method for monitoring for threats to underground assets within a geographic area, comprising the steps of: controlling an image capture device to capture images of an imaging area associated with the geographic area; processing, by a controller interfaced with the image capture device, the captured images using a pretrained image analyser stored in a memory of the controller configured to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determining a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generating an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

33. A method as claimed in claim 32, wherein the image capture device is a camera.

34. A method as claimed in claim 32 or claim 33, further comprising the steps of: communicating, via network interface, with a network; and sending, via said network, generated alerts to a processing server.

35. A method as claimed in claim 34, wherein the network interface is adapted to communicate with a base station to enable communication with the network via a low-power wide-area network such as LoRaWAN™.

36. A method as claimed in claim 34 or claim 35, wherein the network interface is adapted to communicate with a satellite constellation in order to access the processing server.

37. A method as claimed in any one of claims 34 to 36, wherein the network interface is adapted to communicate with a mobile broadband network.

38. A method as claimed in any one of claims 34 to 37, wherein the network interface is adapted to communicate with one or more other network interfaces.

39. A method as claimed in any one of claims 32 to 38, wherein the threat factor corresponds to the identification of any object as a threat.

40. A method as claimed in any one of claims 32 to 38, wherein the threat factor accounts for the threat profile or profiles of identified object(s).

41. A method as claimed in any one of claims 32 to 40, wherein the data structure of the alert further comprises data indicating a particular value of the determined threat factor.

42. A method as claimed in any one of claims 32 to 41, further comprising the step of: ceasing generation of further alerts after generation of a particular alert until a reset condition is satisfied.

43. A method as claimed in claim 42, wherein the reset condition corresponds to one or more of: the identification of a previously unidentified threat; the elapse of a predefined time since the most recent alert; and the determination of the identified threat or threats having exited the image area.

44. A method as claimed in any one of claims 32 to 43, wherein an identified object or objects are required to be present for a predefined period of time before determining a threat level meeting the threshold.

45. A method as claimed in any one of claims 32 to 44, wherein the image analyser is trained using a plurality of training images each comprising at least one object associated with a threat and annotated indicating the presence of said object or objects.

46. A method as claimed in claim 45, wherein the image analyser is also trained using a plurality of additional training images each not comprising an object associated with a threat and annotated indicating that no threat is present.

47. A method as claimed in any one of claims 32 to 46, further comprising the step of: training the image analyser.

48. A method as claimed in any one of claims 32 to 47, wherein the image analyser is configured to process captured images in real-time or near real-time.

49. A method as claimed in any one of claims 32 to 48, wherein the image analyser utilises a machine learning algorithm, such as a deep convolutional neural network (DCNN).

50. A method as claimed in claim 49, wherein after or during training and testing, the DCNN of the image analyser is modified by reducing a precision utilised during operation of the image analyser when identifying objects, thereby reducing required processing power with respect to a non-modified DCNN.

51. A method as claimed in claim 50, wherein the DCNN is modified by one or more of the following: a) utilising a mixture of unmodified floating-point precision, such as FP32, and a reduced floating-point precision, such as FP16, during training and testing; b) utilising Quantisation Aware Training (QAT) during training and testing to enable conversion to an integer precision, such as INT8; and c) performing, after training and testing, quantisation to convert to an integer precision, such as INT8.

52. A method as claimed in claim 50 or claim 51, wherein the DCNN is modified by pruning, after training and testing, the DCNN to remove nodes with a relatively small contribution to identifying threats.

53. A method as claimed in any one of claims 32 to 52, wherein the method is implemented by monitoring apparatus configured for monitoring an area in which the underground asset, such as a pipeline, is located.

54. A method as claimed in any one of claims 32 to 53, wherein an object’s threat profile is based on a capacity of the object to damage the underground asset.

55. A method as claimed in any one of claims 32 to 54, wherein the controller is configured for operation in at least an active state and a sleep state, such that, when operating in the sleep state, a total power usage of the monitoring apparatus is lower than when operating in the active state.

56. A method as claimed in claim 55, further comprising the steps of: after entering the sleep state, the controller determining that a predefined sleep period has elapsed since entering the sleep state; and in response, the controller entering the active state.

57. A method as claimed in claim 55 or claim 56, further comprising the steps of: after entering the sleep state, the controller identifying the presence of a signal generated by an interfaced environment sensor indicative of the presence of an object within the imaging area; and in response, the controller entering the active state.

58. A method as claimed in any one of claims 32 to 57, further comprising the step of: updating the controller via transfer of a replacement image analyser program into its memory, such that the controller implements the replacement image analyser program after the transfer when implementing the image analyser.

59. A computer program comprising program code configured to cause a controller of a monitoring apparatus to implement the method of claim 2.

60. A computer readable storage medium comprising the computer program of claim 59.

61. A marker post located in a geographic area associated with an underground asset to which a monitoring device of any one of claims 4 to 29 is affixed, wherein the marker post further comprises warning indicia indicating the presence of the underground asset.

62. A method comprising the step of affixing a monitoring device as claimed in any one of claims 4 to 29 to a marker post located in a geographic area associated with an underground asset, wherein the marker post further comprises warning indicia indicating the presence of the underground asset.

63. A monitoring device as claimed in any one of claims 4 to 29, comprising adjustable attachment means for attaching said device to a structure, such as to enable positioning of the monitoring device for an optimal field-of-view at a location.

Description:
METHOD, DEVICE, AND SYSTEM FOR MONITORING AN UNDERGROUND ASSET

Related Applications

The present application claims convention priority to Australian patent application no. 2021900811 (filed on 19 March 2021), Australian patent application no. 2021221541 (filed on 24 August 2021), and Canadian patent application no. 3135629 (filed on 25 October 2021). The entire content of each convention application is incorporated herein by reference.

Field of the Technology

The disclosure generally relates to a method, a device, and a system for monitoring an area.

Background

External interference threats arising from third party activities pose a significant risk to underground assets such as high-pressure transmission pipelines. For example, third party drilling or excavation work risks damaging these pipelines. The third party may discover the pipeline only once it has damaged the pipeline.

Australian Standard AS/NZS 2885 (Pipelines - Gas and Liquid Petroleum) outlines an approach for the prevention, detection and control of external interference threats to a pipeline, applicable in Australia and New Zealand. Similar standards apply in a number of other jurisdictions. Even with many preventive measures in place, best practice pipeline operators still record instances of encroachments by third parties. For example, although warning signs are provided regularly along a pipeline, these are not always heeded by third parties. High quality detection strategies are vital for the few occasions where preventative controls fail, in order to avoid potentially catastrophic consequences (e.g. pipeline damage or even rupture). A particular concern applies to gas and liquid petroleum, as leakage of the piped material can lead to significant environmental damage.

Under AS/NZS 2885, pipeline licensees are required to complete a safety management study as a major preventative control against external threats. The safety management study identifies and assesses all threats associated with the pipeline, puts controls in place to appropriately manage the identified threats, monitors the effectiveness of those controls, and identifies new threats as they arise. Other preventative measures include community and stakeholder awareness programs, such as landholder liaison visits and pipeline awareness sessions. Licensees are expected to develop and maintain community and stakeholder registers so that all affected, or potentially affected, stakeholders understand the consequences of interfering with high pressure pipelines.

The Standard outlines that preventative controls should be enforced in conjunction with detection controls, as neither in isolation provides sufficient management of external interference. The main detection method is patrolling of the asset, both from the air and on the ground, specifically to monitor for third party activities that have proved, or will prove, threatening to the pipeline. As threats can only be detected shortly before or as they are unfolding, traditional patrolling methods providing only periodic detection are limited in effectiveness. This, combined with increasing easement activity near pipelines due to population growth and urban expansion, suggests that current patrolling methods may be too strained to provide adequate protection from threats. The limitations around pipeline patrolling are not unique to Australia; this is a global issue.

Other industries, such as the mining industry, have been able to implement sophisticated drone technology to perform more frequent, less invasive, value-adding surveillance techniques to monitor sites and detect threats. Drone and satellite technologies used for the detection of external interference encounter practical issues (e.g. commercially available drones are not all-weather, and satellite photogrammetry is limited by cloud cover), and where such monitoring is technically feasible, it is often not economically feasible given the large areas over which pipeline assets are deployed.

Similar concerns may apply to other underground assets, such as underground electrical or data cabling. For example, a relatively recent development has been the National Broadband Network (NBN) in Australia, which has involved the laying of new cabling (primarily optical fibre cabling) that can be difficult to repair when damaged. Additionally, discrete assets (i.e. not extending over a significant area like a pipeline or cabling) which may be located underground may also be at risk of interference by third parties. In one example, such a discrete asset (or plural discrete assets) may be located at a temporary site, such as a building or other development site, or may be located at the fringe of an urban area.

US 2018/0307912 A1 describes a video security system and method for monitoring active environments that detects a security-relevant breach of a virtual perimeter and can track a virtual perimeter breaching object to detect risk-relevant behaviour of persons and objects such as loitering and parking, and provides fast and accurate alerts. However, the described system is not suitable for low power or poor power stability environments — for example, where a connection to the national grid is not feasible — and low network bandwidth availability environments, which are common to remote pipelines and other underground assets. It is also common that a nominally non-remote location cannot provide the network bandwidth and power supply requirements of the described system, for example, where a required positioning for a suitable field-of-view of a site means electrical wiring and/or high bandwidth data communications are unavailable or difficult to provide.

Summary of the Invention

According to an aspect of the present invention, there is provided an image analyser for implementation with a monitoring apparatus for monitoring for threats to underground assets within a geographic area, said monitoring apparatus comprising a controller interfaced with an image capture device and comprising a memory, said controller configured to implement the image analyser, the image analyser configured to: receive images of an imaging area captured by the image capture device, wherein the imaging area is associated with the geographic area; process the captured images to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determine a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generate an instruction to the controller to generate an alert indicative that the threshold threat requirement is met, wherein the image analyser utilises, at least in part, a pretrained machine learning algorithm to identify the one or more objects and the threat factor.

According to another aspect of the present invention, there is provided an image analysis method for implementation by a monitoring apparatus for monitoring for threats to underground assets within a geographic area, comprising the steps of: receiving images of an imaging area captured by an image capture device of the monitoring apparatus, wherein the imaging area is associated with the geographic area; processing, utilising at least in part a pretrained machine learning algorithm, the captured images to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determining a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generating an instruction to generate an alert indicative that the threshold threat requirement is met.
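In outline, the claimed analysis method takes detections produced by the pretrained model, maps them to a threat factor via the threat profiles of the identified objects, and emits an alert instruction when the threshold threat requirement is met. The sketch below illustrates that flow only; the object labels, profile weights, threshold value, and apparatus identifier are all hypothetical and are not taken from the specification, and a real apparatus would obtain the detections from the machine learning model rather than as ready-made inputs.

```python
from dataclasses import dataclass

# Hypothetical threat profiles: object classes mapped to a weight reflecting
# their capacity to damage the asset (labels and weights are illustrative).
THREAT_PROFILES = {"excavator": 1.0, "drill_rig": 0.9, "truck": 0.4, "person": 0.1}
THRESHOLD = 0.5  # threshold threat requirement (illustrative value)

@dataclass
class Detection:
    label: str        # object class reported by the pretrained model
    confidence: float # model confidence for this detection

def threat_factor(detections):
    """Combine per-object threat profiles into a single threat factor."""
    return max((THREAT_PROFILES.get(d.label, 0.0) * d.confidence
                for d in detections), default=0.0)

def analyse(detections, apparatus_id):
    """Return an alert instruction when the threshold threat requirement is met."""
    factor = threat_factor(detections)
    if factor >= THRESHOLD:
        # The alert carries data identifying the monitoring apparatus,
        # as the claims describe.
        return {"apparatus_id": apparatus_id, "threat_factor": factor}
    return None
```

Note that taking the maximum over detections is just one possible reading of "an analysis of the identified one or more objects"; a sum or other aggregation would fit the claim language equally well.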

According to another aspect of the present invention, there is provided a computer program comprising program code configured to cause a controller of a monitoring apparatus to implement the method of the previous aspect.

According to another aspect of the present invention, there is provided a computer readable storage medium comprising the computer program of the previous embodiment.

According to another aspect of the present invention, there is provided a controller of a monitoring apparatus for monitoring for threats to underground assets within a geographic area, the controller comprising a memory having stored an image analyser according to a previous embodiment, wherein the controller is configured to: control an interfaced image capture device to capture images of an imaging area associated with the geographic area; process the captured images using the image analyser; and in response to generation of the instruction to generate an alert, generate an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

According to another aspect of the present invention, there is provided a monitoring apparatus for monitoring for threats to underground assets within a geographic area, comprising: a controller interfaced with an image capture device, the controller configured to: control the image capture device to capture images of an imaging area associated with the geographic area; process the captured images using a pretrained image analyser stored in a memory of the controller configured to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determine a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generate an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

Typically, the image capture device is a camera. The monitoring apparatus may further comprise a network interface operably interfaced with the controller, and the controller may be further configured to: communicate, via the network interface, with a network; and send, via said network, generated alerts to a processing server. Optionally, the network interface is adapted to communicate with a base station via a low-power wide-area network such as LoRaWAN™. Optionally, the network interface is adapted to communicate with a satellite constellation in order to access the processing server. Optionally, the network interface is adapted to communicate with a mobile broadband network. In an embodiment, the network interface is adapted to communicate with one or more other monitoring apparatuses.
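Because the alert is described as a small data structure sent over a low-power wide-area link such as LoRaWAN™, a compact fixed-size payload is a natural fit. The encoding below is purely illustrative: the field layout, widths, and scaling are assumptions, not taken from the specification.

```python
import struct
import time

def encode_alert(apparatus_id: int, threat_factor: float) -> bytes:
    """Pack an alert into a small fixed-size payload suitable for a
    bandwidth-constrained link (field layout is hypothetical):
    uint16 apparatus id, uint32 unix timestamp, uint8 threat factor (0-255)."""
    return struct.pack(">HIB", apparatus_id, int(time.time()),
                       min(255, int(threat_factor * 255)))

def decode_alert(payload: bytes) -> dict:
    """Unpack the 7-byte payload back into its fields at the processing server."""
    apparatus_id, ts, tf = struct.unpack(">HIB", payload)
    return {"apparatus_id": apparatus_id, "timestamp": ts,
            "threat_factor": tf / 255}
```

A 7-byte payload of this shape would sit comfortably within typical LoRaWAN payload limits, which is one reason alert data (rather than raw imagery) is the sensible thing to transmit from such a device.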

The threat factor may correspond to the identification of any object as a threat. Alternatively, the threat factor may account for the threat profile or profiles of identified object(s). Optionally, the data structure of the alert further comprises data indicating a particular value of the determined threat factor.

The controller may be further configured to cease generation of further alerts after generation of a particular alert until a reset condition is satisfied. Optionally, one or more of the following reset conditions correspond to: the identification of a previously unidentified threat; the elapse of a predefined time since the most recent alert; and the determination of the identified threat or threats having exited the image area. The controller may be further configured to: require an identified object or objects to be present for a predefined period of time before determining a threat level meeting the threshold. The image analyser may be trained using a plurality of training images each comprising at least one object associated with a threat and annotated indicating the presence of said object or objects. The image analyser may also be trained using a plurality of additional training images each not comprising an object associated with a threat and annotated indicating that no threat is present.
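The alert-suppression behaviour described above (ceasing further alerts until a reset condition is satisfied, and requiring an object to be present for a dwell period before it counts towards the threshold) can be sketched as a small gate. The class below is one illustrative reading of those conditions; the timings are arbitrary and the interface is hypothetical.

```python
class AlertGate:
    """Illustrative alert gating: suppress repeat alerts until a reset
    condition holds (new threat, timeout, or threat exits the image area)."""

    def __init__(self, reset_seconds=600, dwell_seconds=5):
        self.reset_seconds = reset_seconds  # reset: predefined time elapsed
        self.dwell_seconds = dwell_seconds  # required presence before alerting
        self.last_alert = None              # time of most recent alert
        self.seen = set()                   # threats already alerted on
        self.first_seen = {}                # label -> first time observed

    def should_alert(self, labels, now):
        # Require each object to dwell before it can trigger an alert.
        persistent = set()
        for label in labels:
            self.first_seen.setdefault(label, now)
            if now - self.first_seen[label] >= self.dwell_seconds:
                persistent.add(label)
        # Reset condition: threats that exited the image area are forgotten.
        for label in list(self.first_seen):
            if label not in labels:
                del self.first_seen[label]
                self.seen.discard(label)
        # Reset conditions: a previously unidentified threat, or timeout.
        new_threat = bool(persistent - self.seen)
        timed_out = (self.last_alert is None or
                     now - self.last_alert >= self.reset_seconds)
        if persistent and (new_threat or timed_out):
            self.seen |= persistent
            self.last_alert = now
            return True
        return False
```

Passing `now` in explicitly (rather than reading a clock) keeps the logic testable and matches how a duty-cycled controller would evaluate conditions on wake.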

The image analyser may be configured to process captured images in real-time or near real-time. The image analyser may utilise a machine learning algorithm, such as a deep convolutional neural network (DCNN). Optionally, after or during training and testing, the DCNN of the image analyser is modified by reducing a precision utilised during operation of the image analyser when identifying objects, thereby reducing required processing power with respect to a non-modified DCNN. Optionally, the DCNN is modified by one or more of the following: a) utilising a mixture of unmodified floating-point precision, such as FP32, and a reduced floating-point precision, such as FP16, during training and testing; b) utilising Quantisation Aware Training (QAT) during training and testing to enable conversion to an integer precision, such as INT8; and c) performing, after training and testing, quantisation to convert to an integer precision, such as INT8. Optionally, the DCNN is modified by pruning, after training and testing, the DCNN to remove nodes with a relatively small contribution to identifying threats.
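Option (c) above, post-training quantisation to an integer precision such as INT8, can be illustrated with a toy per-tensor scheme. This is a deliberately simplified sketch, not the claimed method: production toolchains typically use per-channel scales, zero-points, and calibration data, none of which are shown here.

```python
def quantise_int8(weights):
    """Toy post-training quantisation: map floating-point weights to INT8
    using a single symmetric per-tensor scale. Assumes a non-zero tensor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantise(q, scale):
    """Recover approximate floating-point values from the INT8 codes."""
    return [v * scale for v in q]
```

The point of the technique in this context is that INT8 arithmetic roughly quarters the memory footprint relative to FP32 and runs efficiently on low-power edge hardware, at the cost of a small, bounded rounding error per weight.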

The monitoring apparatus may be suitable for monitoring for threats to a pipeline.

The monitoring apparatus may comprise a power supply comprising a power generator, such as a solar panel, and/or a power storage, such as a battery.

An object’s threat profile may be based on a capacity of the object to damage the underground asset.

The controller may be configured for operation in at least an active state and a sleep state, such that, when operating in the sleep state, a total power usage of the monitoring apparatus is lower than when operating in the active state. Optionally, the controller is configured to enter the active state from the sleep state in response to an elapse of a predefined sleep period since entering the sleep state. Optionally, the controller is configured to enter the active state from the sleep state in response to a signal generated by an interfaced environment sensor indicative of the presence of an object within the imaging area.
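By way of non-limiting illustration, the active/sleep wake logic can be sketched as follows. The class and parameter names are illustrative assumptions; the environment sensor is modelled as a callable returning a boolean:

```python
import time

class DutyCycleController:
    """Illustrative sketch of a controller alternating between sleep and active states.

    Wakes when a predefined sleep period elapses or when an interfaced
    environment sensor (a callable returning True/False) signals an object.
    """
    def __init__(self, sleep_period_s, sensor_triggered=lambda: False):
        self.sleep_period_s = sleep_period_s
        self.sensor_triggered = sensor_triggered
        self.state = "sleep"
        self._slept_at = time.monotonic()

    def enter_sleep(self):
        self.state = "sleep"
        self._slept_at = time.monotonic()

    def tick(self):
        """Evaluate the wake conditions; returns the current state."""
        if self.state == "sleep":
            timed_out = time.monotonic() - self._slept_at >= self.sleep_period_s
            if timed_out or self.sensor_triggered():
                self.state = "active"
        return self.state

# A sensor signal wakes the controller before the sleep period elapses.
woken = DutyCycleController(1000, sensor_triggered=lambda: True)
asleep = DutyCycleController(1000)
```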

The controller may be updateable via transfer of a replacement image analyser program into its memory, such that the controller implements the replacement image analyser program after the transfer when implementing the image analyser. The monitoring device may comprise adjustable attachment means for attaching said device to a structure, such as to enable positioning of the monitoring device for an optimal field-of-view at a location.

According to another aspect of the present invention, there is provided a marker post located in a geographic area associated with an underground asset to which a monitoring device of the previous embodiment is affixed, wherein the marker post further comprises warning indicia indicating the presence of the underground asset.

According to another aspect of the present invention, there is provided a method comprising the step of affixing a monitoring device according to the previous embodiment to a marker post located in a geographic area associated with an underground asset, wherein the marker post further comprises warning indicia indicating the presence of the underground asset.

According to yet another aspect of the present invention, there is provided a remote monitoring system comprising a plurality of monitoring apparatuses each according to the above embodiment, wherein the monitoring apparatuses are arranged over a geographic area such that each monitoring apparatus images and thereby monitors an imaging region within the geographic area, preferably wherein each imaging region is substantially non-overlapping with any other imaging region.

At least one of the plurality of monitoring apparatuses may be affixed to a marker post comprising warning indicia indicating the presence of the underground asset.

According to still yet another aspect of the present invention, there is provided a method for monitoring for threats to underground assets within a geographic area, comprising the steps of: controlling an image capture device to capture images of an imaging area associated with the geographic area; processing, by a controller interfaced with the image capture device, the captured images using a pretrained image analyser stored in a memory of the controller configured to identify one or more objects, when present within the imaging area, within the captured images associated with a threat profile; determining a threat factor based on an analysis of the identified one or more objects; and in response to determining that the threat factor meets a threshold threat requirement, generating an alert, wherein the alert comprises a data structure storing data identifying the monitoring apparatus.

Typically, the image capture device is a camera. The method may further comprise the steps of: communicating, via network interface, with a network; and sending, via said network, generated alerts to a processing server. Optionally, the network interface is adapted to communicate with a base station to enable communication with the network via a low-power wide-area network such as LoRaWAN™. Optionally, the network interface is adapted to communicate with a satellite constellation in order to access the processing server. Optionally, the network interface is adapted to communicate with a mobile broadband network. Optionally, the network interface is adapted to communicate with one or more other network interfaces.

The threat factor may correspond to the identification of any object as a threat. Alternatively, the threat factor may account for the threat profile or profiles of identified object(s). Optionally, the data structure of the alert further comprises data indicating a particular value of the determined threat factor.

The method may further comprise the step of: ceasing generation of further alerts after generation of a particular alert until a reset condition is satisfied. Optionally, the reset condition corresponds to one or more of: the identification of a previously unidentified threat; the elapse of a predefined time since the most recent alert; and the determination that the identified threat or threats have exited the imaging area. Optionally, an identified object or objects are required to be present for a predefined period of time before determining a threat level meeting the threshold.

The image analyser may be trained using a plurality of training images each comprising at least one object associated with a threat and annotated indicating the presence of said object or objects. The image analyser may also be trained using a plurality of additional training images each not comprising an object associated with a threat and annotated indicating that no threat is present. The method may further comprise the step of: training the image analyser.

The image analyser may be configured to process captured images in real-time or near real-time. The image analyser may utilise a machine learning algorithm, such as a deep convolutional neural network (DCNN). Optionally, after or during training and testing, the DCNN of the image analyser is modified by reducing a precision utilised during operation of the image analyser when identifying objects, thereby reducing required processing power with respect to a non-modified DCNN. Optionally, the DCNN is modified by one or more of the following: a) utilising a mixture of unmodified floating-point precision, such as FP32, and a reduced floating-point precision, such as FP16, during training and testing; b) utilising Quantisation Aware Training (QAT) during training and testing to enable conversion to an integer precision, such as INT8; and c) performing, after training and testing, quantisation to convert to an integer precision, such as INT8. Optionally, the DCNN is modified by pruning, after training and testing, the DCNN to remove nodes with a relatively small contribution to identifying threats.

The method may be implemented by a monitoring apparatus configured for monitoring an area in which the underground asset, such as a pipeline, is located. The monitoring apparatus may be according to the first described embodiment above.

An object’s threat profile may be based on a capacity of the object to damage the underground asset. The controller may be configured for operation in at least an active state and a sleep state, such that, when operating in the sleep state, a total power usage of the monitoring apparatus is lower than when operating in the active state. Optionally, the method further comprises the steps of: after entering the sleep state, the controller determining that a predefined sleep period has elapsed since entering the sleep state; and in response, the controller entering the active state. Optionally, the method further comprises the steps of: after entering the sleep state, the controller identifying the presence of a signal generated by an interfaced environment sensor indicative of the presence of an object within the imaging area; and in response, the controller entering the active state.

The method may further comprise the step of: updating the controller via transfer of a replacement image analyser program into its memory, such that the controller implements the replacement image analyser program after the transfer when implementing the image analyser.

As used herein, the word “comprise” or variations such as “comprises” or “comprising” is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments described.

Brief Description of the Drawings

One or more embodiments, incorporating (individually and/or in combination) all aspects of the invention, will now be described by way of example only with reference to the accompanying drawings in which:

Figures 1A and 1B show a remote monitoring system according to embodiments;

Figure 2 shows a representation of the electronic features of a monitoring apparatus;

Figure 3A shows a controller;

Figure 3B shows a controller including separate processors;

Figure 4A and Figure 4B illustrate deployment of a monitoring apparatus;

Figure 5 shows a method implemented by a monitoring apparatus;

Figure 6 shows a method of training an image analyser;

Figure 7 shows a variation of the method of Figure 6; and

Figure 8 shows the controller further interfaced with an environment sensor, according to an embodiment.

Description of Embodiments

Figures 1A and 1B show variations of a remote monitoring system 10 according to an embodiment. The system 10 includes one or more monitoring apparatuses 11 configured for data communication with a processing server 12. In the figures, the data communication is shown as via a network 15. In both figures, an area in which monitoring is required (monitored area 90) comprises imaging areas 91. The monitored area 90 comprises an underground asset (or, assets) — the system 10 is suitable for monitoring threats to the underground asset due to, for example, third party activities above the assets. In the embodiments herein described, the underground asset is a pipeline 92 (shown as a black strip through monitored area 90 in Figures 1A and 1B).

For the purposes of this disclosure, a numerical reference may refer to a general feature, for example monitoring apparatuses 11. Where necessary to refer to specific instances of said feature, a lowercase suffix is appended — for example: monitoring apparatus 11a, monitoring apparatuses 11a and 11b, and monitoring apparatuses 11a–11c. Therefore, “monitoring apparatus 11” refers to monitoring apparatuses 11 in a general sense whereas specific monitoring apparatus 11a may be described differently to specific monitoring apparatus 11b.

Referring back to Figures 1A and 1B, each imaging area 91 is associated with a monitoring apparatus 11 — that is, the imaging area 91 is defined by a geographic area imageable by the monitoring apparatus 11. As shown, imaging area 91a is associated with monitoring apparatus 11a, imaging area 91b is associated with monitoring apparatus 11b, and imaging area 91c is associated with monitoring apparatus 11c. Although the imaging areas 91 are shown as essentially contiguous with one another and essentially coinciding with the entirety of the monitored area 90, more generally, parts of the monitored area 90 may not be associated with an imaging area 91 — for example, certain parts of the monitored area 90 may be at sufficiently low risk of damage that monitoring is not required. Similarly, there may be substantial gaps between imaging areas 91 or, in fact, overlap.

Figure 2 shows a representation of electronic elements of a monitoring apparatus 11 according to an embodiment. The monitoring apparatus 11 includes an optical capture device 21, a power supply 22, a wireless network interface 23, and a controller 24. Herein, the optical capture device 21 is assumed to be a camera, and the term is used interchangeably (i.e. camera 21). The camera 21 can be monochromatic or colour or can optionally also, or alternatively, be configured for imaging parts of the electromagnetic spectrum outside of that visible to humans (e.g. infrared and/or ultraviolet). The camera 21 is interfaced with, and controllable by, the controller 24 and is configured for providing image data to the controller 24. The power supply 22 is configured for powering the electrical components of the monitoring apparatus 11, including the camera 21, the wireless network interface 23, and the controller 24. The wireless network interface 23 is interfaced with, and controllable by, the controller 24 and is configured for data communication with a network 15, with other of the monitoring apparatuses 11, and/or with satellites (discussed below) to thereby enable the controller 24 to communicate with the processing server 12. The controller 24 is configured for causing the monitoring apparatus 11 to implement the processing functionality herein described.

In an embodiment, the monitoring apparatus 11 further comprises an input/output (I/O) port 25 interfaced with the controller 24. The I/O port 25 typically comprises a display-out port for interfacing with an electronic display (not shown), which can be separate to or part of the monitoring apparatus 11. The I/O port 25 also typically comprises a local input port for interfacing with input means such as a keyboard, mouse, and/or touch interface (which can be a touch screen electronic display). One or more input means can be permanently provided with the monitoring apparatus 11 and/or one or more input means can be connectable to the I/O port 25 when required.

The power supply 22 can comprise electrical generation elements 22a and/or electrical storage elements 22b. For example, a solar power generator can be provided as an electrical generation element 22a for generating electrical power and a battery can be provided as an electrical storage element 22b for storing excess of said generated electrical power for use when required. Other generating 22a or storage 22b means can be provided: for example, a heat-based electrical generator or a capacitor. Generally, the power supply 22 should be configured to provide sufficient electrical power for the functions described herein. In an embodiment, one or more monitoring apparatuses 11 (when deployed) are powered, either additionally or alternatively, via an external power supply such as a national grid — this can be preferred where such power supply is locally available.

The wireless network interface 23 is configured according to one or more wireless communication standards suitable for the location of the monitoring apparatus 11. The wireless network interface 23 can therefore be operable to interface with a mobile data network, for example, as provided under 2G, 3G, 4G, or 5G mobile networks as defined by the International Telecommunications Union. Other suitable wireless data communication protocols can include LoRaWAN (as defined by the LoRa Alliance) or other low-power wide-area networks (LPWAN), the wireless protocol IEEE 802.11 ah (known as Wi-Fi HaLow), and other long range wireless networks.

It is also expected embodiments can be provided in which mesh-network technologies can be utilised — for example, enabling various of the monitoring apparatuses 11 to communicate with one another, where a subset (e.g. one) of the monitoring apparatuses 11 acts as a node for connection to a suitable data network 15. In this way, all communications between individual monitoring apparatuses 11 and the network 15 occur via the subset of monitoring apparatuses 11. The subset can be created in an ad-hoc manner and the particular mesh topology can be spontaneously organised, as according to known techniques.

According to an embodiment, the wireless network interface 23 is configured for data communication with a satellite internet constellation.

As a general principle, the wireless standard utilised should provide sufficient bandwidth for the particular embodiment — this can depend on, for example, an intended geographic area of deployment (i.e. the monitored area 90). In some cases, for example, a suitable pre-existing mobile broadband network is present and can be preferred. Alternative options such as the satellite constellation can be preferred, for example, when a mobile broadband network is unavailable or not sufficiently reliable. Mesh topologies may be useful where a subset of monitoring apparatuses 11 are in range of an efficient network 15 and are able to undertake data communication with other monitoring apparatuses 11 (which can include “hops” between intervening monitoring apparatuses 11). This latter case assumes monitoring apparatuses 11 are deployed in close enough proximity to reliably form the mesh network.

Although the specific embodiments described herein utilise wireless data communications, an alternative embodiment utilises wired data communications (in whole or in part). For example, if practical, an individual monitoring apparatus 11 can be connected via a wired connection to the network 15. In another example, two monitoring apparatuses 11 are in wired data communication with one another, with one of these devices 11 configured for wireless communication (similar to the mesh topology already described).

Referring back to Figure 1A, the monitoring apparatuses 11 are shown in direct communication with the network 15. This representation corresponds to the monitoring apparatuses 11 communicating with a pre-existing data provider directly (e.g. in the case of communication with a mobile data network). In Figure 1B, the monitoring apparatuses 11 are shown in communication with a base station 16, provided as a component of the system 10, which is itself in communication with the network 15. This arrangement may suit a LPWAN arrangement, such as one utilising LoRaWAN. The base station 16 can communicate with the network 15 using means discussed herein, such as a mobile data network or a wired connection. It is envisaged that an implementation using a mesh network topology can utilise a base station 16.

Some or all of the components described with reference to Figure 2 are located within a housing (not shown). Generally, the housing is designed to be sufficiently robust for the intended deployment of the monitoring apparatus 11. In an embodiment, the monitoring apparatus 11 is configured for infrequent servicing — for example, where an electrical power generator 22a is provided, the monitoring apparatus 11 may only require servicing annually (or after even longer periods of time). The housing can be required to meet certain industry standards, such as those specified by the Ingress Protection Code (IP Code). In an embodiment, the housing requires protection from rain and dust and can be required to meet IP53, although IP65, IP66, or IP67 can be preferred. It can be preferred that the housing meets a higher standard than is usually required, to account for unexpected events such as floods.

The housing is configured to house the controller 24 (typically in its entirety). The housing will usually house a portion of the wireless network interface 23 — for example, all parts of the interface 23 except a part or all of one or more antennae, which can extend from a surface of the housing. The housing will also house at least a portion of the power supply 22 — in an embodiment, only solar panels and connecting wires extend outside the housing (here, the solar panels can be permanently or removably mounted to the housing). In an embodiment, solar panels are provided on a controllable connector, also located (at least in part) outside of the housing, which can be rotated in one, two, or three dimensions to enable the solar panels to track the sun for improved power supply (not shown). The camera 21 can be mounted to an outside surface of the housing and interfaced via cabling extending through a surface of the housing with the controller 24. Alternatively, only a portion of the camera 21 (e.g. comprising its lens) extends outside of the housing, with an internal portion within the housing. It can be preferred that the housing includes seals at any point in which componentry extends through a surface of the housing. Advantageously, providing the camera 21 on or partially within the housing can provide for a relatively simple and sturdy setup. In another embodiment, the camera 21 is positionable separately to the housing, with an electrical coupling provided into the housing. Advantageously, a separately positionable camera 21 can enable greater versatility in deployment of the monitoring apparatus 11, in which only the relatively light camera 21 need be placed at a specific location for monitoring the relevant imaging area 91.

The housing (or camera 21 if positionable independently of the housing) can be adapted to attach to an existing structure in relation to an imaging region 91. For example, pipelines 92 are usually marked by spaced apart marker posts comprising warning indicia, such as signage on signposts, indicating the presence of the pipeline 92 — the housing or camera 21 can be adapted to attach to such existing marker posts. The housing can comprise attachment means for enabling attachment to an external structure, which may have a consistent profile (e.g. where all marker posts have the same shape) or may have a variable profile. In the latter case, the attachment means may preferably have one or more adjustable features to enable attachment to a relatively wide range of different structures.

Figure 3A shows a generalised controller 24. The controller 24 comprises one or more processors 30 interfaced with a memory 31. The memory 31 typically comprises volatile and non-volatile memories. The controller 24 is operably interfaced with the wireless network interface 23. The controller 24 can take different form factors depending on the particular design requirements and can be contained within a single physical package or distributed amongst several different physical packages with communication links therebetween.

Figure 3B shows a particular arrangement of a controller 24 according to an embodiment. A central processing unit (CPU) 30a is provided interfaced with memory 31. Additionally, an image analyser processor 30b is provided also interfaced with memory 31 (alternatively, the image analyser processor 30b can be interfaced with a physically or logically separate memory 31 (not shown)). The image analyser processor 30b can comprise an embedded graphics processing unit (GPU). The image analyser processor 30b is configured for specialised processing of received image data to identify threats (described in more detail below). Information determined by the image analyser processor 30b can be communicated to the central processing unit 30a, which is configured to undertake functionality not specifically implemented by the image analyser processor 30b. In one particular configuration, an Nvidia Jetson Nano 4GB is provided as the controller 24.

Referring back to Figure 2, in an embodiment, the monitoring apparatus 11 further comprises a local data port 26. This can be configured to enable a separate computing device to interface with the controller 24 on an as-needed basis, for example, to provide instructions to the controller 24, such as to update the memory 31 (e.g. in order to provide a software update) and/or read from the memory 31. The local data port 26 can provide, for example, USB, Ethernet, and/or Bluetooth connectivity.

Referring to Figures 4A and 4B, a monitoring apparatus 11 is shown deployed. It is located in relation to a monitored area 90 and its camera 21 is arranged to capture images of imaging area 91. It should be understood that the term “images” is used herein in a broad sense, unless context indicates otherwise — that is, the images can be captured continuously as video data or intermittently or periodically as still images. The imaging area 91 as shown is merely illustrative. Additionally, it is envisaged that the camera 21 can actually comprise multiple cameras 21 or a suitable lens such as to capture image data over a much wider area. For example, cameras 21 can be provided for an effectively continuous 360-degree view around the monitoring apparatus 11.

Generally, the camera(s) 21 should have sufficient resolution to obtain images of the imaging area 91 with sufficient detail to enable identification of threats present within the imaging area 91 with respect to the pipeline 92. For example, a single camera 21 can capture video at a resolution between Full HD (i.e. 1080p or 1920 × 1080 pixels) and 4K (e.g. 3840 × 2160 pixels). Higher resolutions generally require more processing power, and therefore, the resolution should be selected to account for the processing capabilities of the controller 24 — for example, testing can be performed to ensure that the controller 24 is capable of threat identification within required timeframes. Additionally, the camera(s) should cover a sufficiently wide field-of-view (FOV) to include the pipeline 92 within the imaging area 91 — for example, between 90° and 136°.

In both figures, a pipeline 92 is shown extending through the imaging area 91. The pipeline 92 is located underground (or at least, in portions within the imaging area 91 it is underground) and therefore at risk of unintended damage. Although the present embodiments are described with reference to a pipeline 92, it should be understood that the system 10 can be suitable for other installations in which components are at risk of damage by some, but not all, moveable objects that can come into the path of the components — in particular, where said components are located underground. In Figure 4B, an object 93 is present which is not present in Figure 4A (i.e. it is moveable such that it can enter the imaging area 91) — this object 93 is assessed by the monitoring apparatus 11 to determine a threat profile associated with the object 93. Generally, an object’s threat profile is based on a capacity of the object 93 to damage the underground asset (pipeline 92).

Generally, a monitoring apparatus 11 according to the embodiments described herein is configured for capturing image data of its imaging area 91 and to analyse the captured image data in order to identify one or more objects 93 within the image data. These objects 93 are then assessed in order to determine a threat factor being an assessment by the monitoring apparatus 11 of the risk, due to the presence of the identified objects 93, of damage to the pipeline 92. If the threat factor meets certain predefined risk rules, an alert is generated — the alert is therefore indicative of a threat being the presence of object(s) within the imaging area 91 which are associated with a sufficient risk of damage to the pipeline 92. For example, digging equipment can correspond to high risk objects 93 whereas cars, people, and animals can correspond to low (or no) risk objects 93. Depending on the embodiment, intermediate risk objects 93 can be detected — for example, a heavy truck can have a medium risk of damage in certain cases and therefore constitute an intermediate threat.

Figure 5 shows a method implemented by a monitoring apparatus 11 according to an embodiment. The controller 24 of the monitoring apparatus 11 operates camera 21 to capture images (periodically, intermittently, or otherwise as configured) of the imaging area, at step 100, which are provided as image data to the controller 24 for processing.

At step 101, the controller 24 is configured to apply a pretrained machine learning algorithm to analyse the image data to identify objects 93 within the image data associated with a threat profile selected from a profile set of one or more predefined threat profiles. For convenience, the pretrained machine learning algorithm is referred to herein as an “image analyser” and is discussed in more detail below. The output of the image analyser is then assessed by the controller 24 to determine the threat factor based on the threat profiles of the one or more identified objects 93, at step 102.

A check is then made as to whether the threat factor indicates a threshold threat has been met, at check step 103 — in a case where the threshold threat is met, the controller 24 then generates an alert indicative of the threshold threat being met, at step 104. Otherwise, in a case where the threshold threat is not met, the controller 24 returns to step 100.

Whether or not an alert is generated at step 104, the controller 24 can be configured to make a record of the identified objects and, optionally, their determined threat profile and/or a record of the determined threat factor, typically timestamped. Said records can be kept until the memory 31 of the controller 24 is full (or, at least, a portion of the memory 31 set aside for said records is full) or until a predefined time has elapsed, at which point records can be deleted (typically on a first in, first deleted basis) — records can be removed singly or in batches.
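By way of non-limiting illustration, such a bounded, first-in-first-deleted record store can be sketched as follows; the class name and record fields are illustrative assumptions:

```python
from collections import deque
from datetime import datetime, timezone

class DetectionLog:
    """Illustrative sketch of a bounded, first-in-first-deleted record store."""
    def __init__(self, max_records):
        # deque with maxlen silently evicts the oldest entry when full
        self._records = deque(maxlen=max_records)

    def record(self, objects, threat_factor):
        self._records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "objects": objects,
            "threat_factor": threat_factor,
        })

    def __len__(self):
        return len(self._records)

log = DetectionLog(max_records=2)
log.record(["digger"], 0.9)
log.record(["car"], 0.1)
log.record(["drill"], 0.8)   # evicts the oldest ("digger") record
```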

A generated alert is communicated to the processing server 12 at step 105. The alert is communicated via the wireless network interface 23 to the network 15 according to the relevant protocol. According to an embodiment, the alert comprises a data structure including a timestamp and an identifier associated with the particular monitoring apparatus 11 — this in itself can be sufficient to register at the processing server 12 that a threat has been detected and to identify the location (via the identifier). For implementations reliant on very low bandwidth and/or high cost communications, such a minimal data size of the alert can be advantageous.
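By way of non-limiting illustration, such a minimal fixed-size alert payload can be sketched as follows. The field layout (a 4-byte device identifier followed by an 8-byte Unix timestamp) is an assumption for illustration, not a defined protocol:

```python
import struct
import time

# Big-endian: 4-byte unsigned device id, 8-byte unsigned timestamp = 12 bytes
ALERT_FORMAT = ">IQ"

def encode_alert(device_id, timestamp=None):
    """Pack a minimal alert into a fixed 12-byte payload for low-bandwidth links."""
    ts = int(timestamp if timestamp is not None else time.time())
    return struct.pack(ALERT_FORMAT, device_id, ts)

def decode_alert(payload):
    """Unpack a payload produced by encode_alert at the processing server."""
    device_id, ts = struct.unpack(ALERT_FORMAT, payload)
    return {"device_id": device_id, "timestamp": ts}

payload = encode_alert(device_id=42, timestamp=1700000000)
```

A payload of this size comfortably fits within typical LPWAN frame limits, which is the motivation for the minimal data structure described above.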

In embodiments in which the wireless communication has a sufficient bandwidth, auxiliary information can be incorporated into the alert. The auxiliary information can include, for example, one or more still images captured by the camera 21 contemporaneously with the threat identification (e.g. the image that led to the threat detection). In an implementation, a series of images captured over a period of time after the threat identification are communicated — for example, images separated by a time period such as 1 minute, 5 minutes, 10 minutes, or any other suitable time. Advantageously, the series of images enables an operator to determine if the identified threats are a) present and b) moving in a manner consistent with potential damage — thus, a threat is only identified if the operator agrees with the monitoring apparatus 11. One or more still images captured by the camera 21 contemporaneously with the threat identification can also, or instead, be stored in memory 31, preferably a non-volatile memory such as a FLASH or magnetic memory. However, communicated and/or stored images can risk identifying persons or businesses within the image and may not be suitable where such identification contravenes local privacy laws. It is envisaged that an embodiment can be provided, where privacy concerns exist, in which the controller 24 has sufficient processing capability to undertake, for example, person or facial identification using known algorithms and to blur, remove, or otherwise obscure identified persons or faces (respectively). It is also envisaged that the receiving processing server 12 may automatically perform blurring, removal, or other obscuring of received images, when the controller 24 is not provided such functionality.

Additionally, an indication of the determined threat factor can be communicated — this can be applicable where the threat factor can take one of multiple values, despite each of said multiple values meeting the threshold. A level of confidence of the image analyser that it has correctly identified the threat can be communicated — this can be a metric generated by the image analyser when analysing images.

According to an embodiment, if an alert is generated at step 104, the controller 24 can be configured to not generate a further alert until a reset condition is satisfied. The reset condition can be an elapse of an amount of time, for example, 24 hours. The reset condition can also, or instead, be the determination by the image analyser that the objects 93 associated with the threat determination have exited the imaging area 91. The reset condition can also, or instead, correspond to the entrance into the imaging area 91 of a new threat — in this way, only one alert is generated per threat, but multiple threats can lead to multiple alerts.
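By way of non-limiting illustration, the alert-suppression behaviour with a time-based reset and a per-threat reset can be sketched as follows; the class and parameter names are illustrative assumptions:

```python
import time

class AlertGate:
    """Illustrative sketch of alert suppression until a reset condition holds.

    Reset conditions sketched: elapse of a hold-off period since the last
    alert, or the entrance of a previously unseen threat (one alert per
    distinct threat).
    """
    def __init__(self, holdoff_s):
        self.holdoff_s = holdoff_s
        self._last_alert = None
        self._seen = set()

    def should_alert(self, threat_id, now=None):
        now = time.monotonic() if now is None else now
        new_threat = threat_id not in self._seen
        timed_out = self._last_alert is None or now - self._last_alert >= self.holdoff_s
        if new_threat or timed_out:
            self._seen.add(threat_id)
            self._last_alert = now
            return True
        return False

gate = AlertGate(holdoff_s=3600)
first = gate.should_alert("digger-1", now=0)    # new threat: alert
repeat = gate.should_alert("digger-1", now=10)  # same threat, within hold-off: suppressed
other = gate.should_alert("truck-2", now=20)    # different threat: alert
```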

According to an embodiment, the determination of a threat profile is dependent on a length of time in which a particular object 93 is present within the imaging area 91; advantageously, this may mitigate against threats being identified which are not in fact operating (e.g. excavating or drilling) within the imaging area 91. In this embodiment, the image analyser is further configured to identify instances of the same object 93 being present in a sequence of frames. Therefore, the image analyser is configured to tag an identified object 93 and determine in later frames that the object 93 remains present. A predefined time period can be utilised by the controller 24 when assessing the threat posed by the object. That is, the threat profile may not indicate a threat until the object has been present for the predefined time.

According to an embodiment, the threat factor is at least in part based on identification of a class of the, or each, object 93 identified by the image analyser. That is, the image analyser provides information about a class of each object 93. The class defines a type for the object, for example, classes can include digger, drill, jackhammer, etc. Different classes can be associated with different levels of risk to the pipeline 92, and therefore, the calculated threat factor will depend on the determined classes of the identified objects 93.
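
By way of illustration only, the class-dependent threat factor can be sketched as follows (the classes, risk weights, and the maximum-risk combination rule are assumptions):

```python
# Illustrative risk weights per object class; real values would be assigned by
# the pipeline operator based on the risk each class poses to the pipeline 92.
CLASS_RISK = {"digger": 3, "drill": 2, "jackhammer": 2, "car": 0}

def threat_factor(detected_classes):
    """Combine per-class risk into a single threat factor (a maximum rule is
    assumed here; a sum or weighted rule could equally be used)."""
    return max((CLASS_RISK.get(c, 0) for c in detected_classes), default=0)

factor = threat_factor(["car", "digger"])  # the digger dominates the factor
```

Unknown classes default to zero risk in this sketch, so only recognised threat classes contribute to the factor.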

Referring back to step 101, a suitable image analyser must be provided in the memory 31 of the controller 24 for execution by the controller 24. A pretrained machine learning algorithm is considered to be particularly suitable, for example, a deep convolutional neural network (DCNN). An example engine for running the DCNN is NVIDIA TensorRT. The image analyser is pretrained with training data configured specifically to enable identification of particular objects associated with a higher threat.

In one embodiment, a plurality of training images each including at least one object 93 that can be viewed as a potential threat to the pipeline 92 are provided. Each training image is annotated such as to identify the relevant object(s) 93 associated with a potential threat. Images can also be resized where necessary. Preferably, each object 93 will be associated with a plurality of training images, for example, at least 1000 (although, other numbers can be used as a minimum depending on the particular requirements for the particular application). An object 93 as used herein can indicate a particular class of similar items — for example, a number of different models of diggers can be imaged and labelled as being a “digger” without reference to model. The training images for a particular object 93 should cover various different views of the object 93, for example one or more of: different times of the day, different weather conditions, different seasons, different lighting conditions, different scales of the object, different colours of the object, different points of view of the object, etc. In an embodiment, the training images are simply annotated indicating the presence of a threat, without specifying a class; this embodiment can be useful where it is not required to distinguish between different threat levels (i.e. it is assumed that all provided objects 93 are of equal threat). The training images can be augmented by randomly applying transformations to the annotated images — for example, selected from one or more of: flip, translation, addition of noise, inverting or otherwise modifying colours, etc.
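
By way of illustration only, the random augmentation step can be sketched as follows (a minimal sketch operating on a greyscale image represented as a list of rows; the transformations and their parameters are assumptions):

```python
import random

def augment(image, rng):
    """Randomly apply one of the transformations mentioned above (flip,
    translation, or additive noise) to a greyscale image given as a list of
    rows of 0-255 pixel values."""
    choice = rng.choice(["flip", "translate", "noise"])
    if choice == "flip":
        # Horizontal flip: reverse each row.
        return [row[::-1] for row in image]
    if choice == "translate":
        # Shift right by one pixel, padding the left edge with black.
        return [[0] + row[:-1] for row in image]
    # Additive noise, clipped to the valid 0-255 range.
    return [[min(255, max(0, p + rng.randint(-10, 10))) for p in row]
            for row in image]

rng = random.Random(0)
augmented = augment([[10, 20], [30, 40]], rng)
```

In practice each annotated image would pass through several such randomly selected transformations to multiply the effective size of the training set.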

In an embodiment, the annotations both identify that there is an object 93 present within the training image associated with a threat and identify the location of said object 93 (for example, using a bounding box technique). Only objects 93 identified with a certain minimum threat are labelled (this may be a judgement call by a user training the image analyser and/or based on historical instances of classes of objects damaging pipelines 92). Although other features may be present in at least some of the training images, if these features are not an object 93 associated with a threat to the pipeline 92, they are not annotated.

In an embodiment, a plurality of additional training images in which an object 93 is not present is provided. These training images are annotated to identify the absence of objects (i.e. threats). Advantageously, this set enables the image analyser to be trained to identify frames that are free from threats. It should be noted that these additional training images will comprise identifiable features; however, these are not considered to be potential threats and are therefore not referred to as “objects”.

Figure 6 shows a method for training the image analyser according to an embodiment. The training images are randomly assigned to a training set, a test set, and a validation set, at step 200. The training images can be assigned according to a ratio in which more images are assigned to the training set than either of the other sets; for example, a ratio of 70:15:15 (training:test:validation) can be employed (i.e. for every 70 training images which are assigned to the training set, 15 training images are assigned to the test set and 15 training images are assigned to the validation set). It can be preferable to ensure that each set comprises exactly or roughly the same ratio of threat labelled training images to no-threat labelled training images.
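
By way of illustration only, the 70:15:15 random assignment can be sketched as follows (a minimal sketch; for the stratification preference noted above, the same routine can be applied separately to the threat-labelled and no-threat-labelled images):

```python
import random

def split_dataset(images, rng, ratio=(70, 15, 15)):
    """Shuffle and partition annotated images into training, test, and
    validation sets according to the 70:15:15 ratio described above."""
    items = list(images)
    rng.shuffle(items)                       # random assignment, step 200
    total = sum(ratio)
    n_train = len(items) * ratio[0] // total
    n_test = len(items) * ratio[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])

rng = random.Random(42)
train, test, val = split_dataset(range(100), rng)
```

With 100 images this yields 70 training, 15 test, and 15 validation images, with no image appearing in more than one set.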

At step 201, the training set and the validation set are utilised in training one or more deep neural networks, depending on the embodiment. The training set is utilised for learning — that is, the, or each, deep neural network processes the training set to learn to distinguish training images comprising threats to those without. The validation set is utilised as a test of the deep neural networks as they learn — it provides guidance as to tuning parameters of the deep neural networks, such as the number of hidden units. Generally, the training of the one or more deep neural networks follows known techniques.

At step 202, the now trained one or more deep neural networks are tested against the test set — therefore, the test set provides an independent test of the one or more deep neural networks (that is, independent with respect to the validation set).

In an embodiment, the DCNN is modified to enable a mixed use of 16-bit floating point (FP16) data and 32-bit floating point (FP32) data when training and testing the DCNN (whereas the unmodified DCNN may only utilise FP32 data). More generally, FP32 is an example of an unmodified floating-point precision for which the DCNN is designed and FP16 is an example of a reduced floating-point precision with respect to the unmodified precision. In this implementation, the DCNN model is converted to use FP16 data where possible; e.g., FP32 data is still utilised for master parameters of the DCNN to accumulate per-iteration parameter updates, while FP16 can be utilised elsewhere in the training regime. Additionally, loss scaling can be utilised to preserve small gradient values (i.e. to prevent a gradient becoming flat, thus making the optimisation intractable). Advantageously, such a modification may reduce the processing requirements of the deployed image analyser.
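
By way of illustration only, the loss-scaling scheme can be sketched as follows (a minimal stdlib sketch in which FP16 is emulated by rounding the significand to 11 bits; the scale factor and learning rate are assumptions):

```python
import math

def to_fp16(x):
    """Round x to roughly FP16 precision (an 11-bit significand), as a
    stdlib stand-in for real half-precision arithmetic."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e, with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2048) / 2048, e)

SCALE = 1024.0                            # loss scale preserving small gradients

def scaled_grad_step(master_w, grad_fp32, lr=0.01):
    """One update: the gradient is scaled up, cast to FP16, unscaled, and
    applied to the FP32 master parameter, mirroring the scheme above."""
    g16 = to_fp16(grad_fp32 * SCALE)      # small values survive the FP16 cast
    return master_w - lr * (g16 / SCALE)

w = scaled_grad_step(1.0, 1e-6)           # master weight updated by ~1e-8
```

The master weight stays in full precision so that many tiny updates accumulate correctly, while the bulk of the arithmetic can run at the reduced precision.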

In an alternative or complementary embodiment, Quantisation Aware Training (QAT) is utilised during training in order to reduce the precision of the resulting image analyser to an integer format such as 8-bit integer (INT8). In QAT, a quantisation error is considered when training the model of the DCNN. The DCNN is modified to simulate the lower precision behaviour of INT8 (while being trained at, for example, FP32 or a mixture of FP32 and FP16). This introduces quantisation errors as part of a training loss, which the training optimiser tries to minimise during the training. QAT may advantageously help in modelling the quantisation errors during training and thereby mitigating a negative effect on the accuracy of the model when converted to INT8 for deployment (i.e. as the image analyser). It should be noted that not all layer types of a DCNN are amenable to quantisation, which may be preferably accounted for when training the DCNN.
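
By way of illustration only, the fake-quantisation step at the heart of QAT can be sketched as follows (the quantisation scale is an assumption):

```python
def fake_quantise(x, scale=0.05):
    """Quantise-dequantise a value to simulate INT8 behaviour while training
    continues in floating point ('fake quantisation'); the scale is illustrative."""
    q = max(-128, min(127, round(x / scale)))   # clamp to the INT8 range
    return q * scale

def quantisation_error(x, scale=0.05):
    """The error term that QAT folds into the training loss for the
    optimiser to minimise."""
    return abs(x - fake_quantise(x, scale))

err = quantisation_error(1.234)   # the rounding error seen during training
```

Because the forward pass sees these rounded values, the optimiser learns parameters that remain accurate once the model is genuinely converted to INT8 for deployment.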

Figure 7 shows an extension to the method of Figure 6, applicable to an embodiment. Steps 200-202 are equivalent to those of Figure 6. Additional step 203 constitutes a post training modification to the trained DCNN in order to reduce processing resource requirements while retaining sufficient accuracy to identify threats. An advantage may be that an image analyser modified accordingly may be more suitable for real-time monitoring by the controller 24, which may have limited processing capability.

In an embodiment, step 203 comprises reducing the precision of calculations and/or data utilised by the image analyser when undertaking image analysis. The training of the DCNN can be undertaken at full precision — e.g. using 32-bit floating point (FP32) data or a mixed FP32 and FP16 mode, as discussed. Subsequent to training and testing, post-training quantisation is applied to the trained DCNN to reduce the precision to an integer format — here, for example, the DCNN model is scaled and calibrated using an entropy calibrator in order to minimise the information loss. For example, the post-training quantisation can be configured to result in an image analyser suitable for operation at INT8 precision. An advantage of this embodiment may be reduced processing requirements of the controller 24 while retaining sufficient accuracy in threat identification.
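
By way of illustration only, post-training INT8 calibration can be sketched as follows (a simple maximum-based calibration is shown as a stand-in; an entropy calibrator instead selects the clipping range that minimises the information loss, as described above):

```python
def calibrate_scale(activations):
    """Symmetric max calibration: map the largest observed activation
    magnitude onto the INT8 limit of 127."""
    return max(abs(a) for a in activations) / 127.0

def quantise(x, scale):
    """Quantise a calibrated activation to an INT8 value."""
    return max(-128, min(127, round(x / scale)))

scale = calibrate_scale([-3.2, 0.1, 2.54])   # illustrative activation samples
q = quantise(2.54, scale)
```

The calibration data would in practice be a representative batch of captured images run through the trained DCNN, so that the chosen scale reflects real activation distributions.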

In an embodiment, step 203 comprises a step of pruning the trained and tested DCNN (whether the DCNN is configured at FP32, mixed FP32 and FP16, FP16, INT8, or any other precision). Elements (nodes) of the DCNN are pruned — that is, removed — if they meet a threshold metric indicating a relatively small contribution to the threat identification computations. For example, all the nodes whose contribution is less than a threshold can be removed from the network. However, care must be taken — if the threshold is set too high, then too many nodes will be removed and the effectiveness of the image analyser is reduced.
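
By way of illustration only, threshold-based pruning can be sketched as follows (the node contributions and threshold values are assumptions):

```python
def prune(contributions, threshold):
    """Keep only the nodes whose contribution metric meets the threshold;
    the rest are removed from the network."""
    return {node for node, c in contributions.items() if c >= threshold}

nodes = {"n1": 0.9, "n2": 0.01, "n3": 0.4}
kept = prune(nodes, 0.05)        # n2 contributes too little and is removed
too_few = prune(nodes, 0.5)      # an over-high threshold removes too much
```

The second call illustrates the caution noted above: at a threshold of 0.5 only one node survives, which would degrade the image analyser's effectiveness.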

In embodiments in which the image analyser corresponds to a reduced precision with respect to the initial training data, further testing may be required to ensure sufficiently accurate threat identification is retained after precision reduction.

Generally, reduced processing requirements due to the modification to the image analyser may advantageously enable implementation of the controller 24 with lower power processor(s) 30 when compared to that required for an unmodified image analyser. Lower power processor(s) may provide an advantage in reduced electrical power requirements of the monitoring apparatuses 11, which may beneficially enable use of the monitoring apparatuses 11 in remote locations (e.g. where power supply is via a solar power generator rather than the national electrical grid) while retaining sufficient capability for image analysis (e.g. “real-time” analysis). Therefore, such embodiments may be particularly suited for remote pipeline 92 monitoring, where stable external power supplies are not present.

The processing server 12 is configured to receive alerts and optionally record the content and timing of the alert. Typically, the processing server 12 is also configured to produce a user notification regarding the alert and its content. For example, the notification can be communicated to a user device such as a smartphone or laptop computer. The user device is configured to present the notification (or a notification derived therefrom) to the user, who can then choose to action it (e.g. by arranging for a technician to visit the imaging area 91 associated with the monitoring apparatus 11 that generated the alert).

In an embodiment, the controller 24 is configured to change between an active state and a low-power state (“sleep state”). In the active state, the controller 24 is configured to perform actions related to identifying threats, as described herein; that is, the active state is a fully operational state in respect of object identification and analysis. In the sleep state, the controller 24 is configured to halt functionality related to object identification and threat detection, for example, by reducing or halting power supply to the camera 21. The wireless network interface 23 can be halted, or alternatively, the wireless network interface 23 can be operated in a low-power mode in which it is still configured to receive communications; in this latter case, the wireless network interface 23 can be configured to provide an interrupt signal on receipt of certain data via wireless communication to cause the controller 24 to return to its active state. This can enable, for example, a system operator at the processing server 12 to cause the controller 24 to switch from the sleep state to the active state.

An advantage of providing a sleep state may be that the total power consumption of the monitoring apparatus 11 over time is reduced, without substantially affecting the consistency of threat identification. The controller 24 is typically therefore configured to enter the active state sufficiently often to ensure any threat is identified if present in the imaging area 91. Such an embodiment may be particularly advantageous for remote locating of the monitoring apparatus 11 where a stable external electrical supply is not present (e.g. where power is supplied via a solar power generator).

In an embodiment, the controller 24 is configured to enter the active state after a predetermined period of time has elapsed while in the sleep state (“sleep period”). For example, the sleep period can be 1 minute, 5 minutes, 10 minutes, or longer, depending on requirements. The sleep period can be set by a user (e.g. via the I/O port 25, via interfacing a user device with the local data port 26, and/or via a command communicated via the network 15). For example, if there is a certain minimum setup time for threats (i.e. no potential threat can cause damage to the pipeline 92 before the minimum setup time), the sleep period can be set to be less than this setup time.

Referring to Figure 8, in an embodiment, the controller 24 is further interfaced with an environment sensor 27. The environment sensor 27 is configured to produce a signal in response to certain environmental stimuli within the imaging area 91 that can be indicative of the presence of an object associated with a threat. The signal is configured to cause the controller 24 to enter the active state. It can be generally preferred that the environment sensor 27 is operable such that the total electrical power requirement of the monitoring apparatus 11 remains lower when the controller 24 is in the sleep state when compared to the active state. The environment sensor 27 can comprise one suitable detector or a plurality of detectors.

In one example, the environment sensor 27 comprises a motion detector configured to determine movement within the imaging area 91. For example, the motion detector can be a passive infrared (PIR) detector. In another example (which can be combined with the previous example), the environment sensor 27 comprises a microphone configured to detect noise levels above a threshold, the threshold with respect to the environment’s usual ambient sound. The microphone can comprise its own circuitry configured to generate the signal for the controller 24 only when the noise levels meet the threshold, or alternatively, the controller 24 is configured to monitor the microphone signal to determine when the noise levels meet the threshold (assuming that the controller 24 is enabled to undertake such monitoring when operating in a low power mode). Thus, the microphone can detect noise levels that can be indicative of machinery within the imaging area 91. In a particular implementation, the threshold is adjusted as the ambient sound changes (effectively meaning that the threshold remains constant with respect to a current ambient sound volume) — therefore, effectively, the signal to the controller 24 is based on relatively quick changes in environmental volume (e.g. within a 10-minute period), which correlates with machinery being turned on.
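
By way of illustration only, the adjusting threshold can be sketched as follows (an exponential moving average is shown as one way to track ambient volume; the smoothing factor, margin, and sound levels are assumptions):

```python
ALPHA = 0.05    # smoothing factor; smaller values track ambient more slowly
MARGIN = 10.0   # level above ambient required to trigger, in dB

def update(ambient, sample):
    """Track ambient volume with an exponential moving average and flag
    samples that jump well above it, as when machinery starts up nearby."""
    triggered = sample > ambient + MARGIN
    # The threshold follows the ambient level, so slow environmental drift
    # does not trigger the signal but a quick change does.
    ambient = (1 - ALPHA) * ambient + ALPHA * sample
    return ambient, triggered

ambient = 40.0                         # illustrative quiet baseline, in dB
ambient, hit = update(ambient, 65.0)   # a sudden loud noise triggers
```

Because the baseline adapts, only relatively quick changes in environmental volume (e.g. within a 10-minute period) produce the signal to the controller 24, as described above.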

In this embodiment, it can be preferred that a sleep period is also utilised — therefore, the controller 24 is configured to enter the active state if either the sleep period elapses or a signal is generated by the environment sensor 27.
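
By way of illustration only, the combined wake condition can be sketched as follows (the sleep period value is an assumption):

```python
import time

SLEEP_PERIOD = 300.0   # seconds; an assumed 5-minute sleep period

def should_wake(slept_since, sensor_signal, now=None):
    """Wake if either the sleep period has elapsed or the environment
    sensor 27 has raised a signal: the combined condition described above."""
    now = time.monotonic() if now is None else now
    return sensor_signal or (now - slept_since) >= SLEEP_PERIOD

asleep = should_wake(slept_since=0.0, sensor_signal=False, now=100.0)
```

Either condition alone suffices to return the controller 24 to its active state.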

The functionality of the controller 24 can be specified by a computer program configured to cause the processor 30 of the controller 24 to undertake said functions. In particular, the image analyser can be produced (e.g. including use of the method of Figure 6 or Figure 7) separately to the controller 24 — e.g. at a training computer (not shown). Once trained and tested, and generally ready for use with remote devices 11, the image analyser can be transferred to the memory 31 of each remote device 11 — thus, the image analyser itself can be defined by an image analyser program.

The image analyser program can be stored on a computer readable storage medium, such as a FLASH memory (e.g. an SD card), a magnetic memory, an optical memory, or any other suitable non-volatile storage. The computer readable storage medium can then be interfaced with a remote device 11 (e.g. locally via its local data port 26 or remotely via its wireless network interface 23) to cause the image analyser program to be copied into the memory 31 of the remote device 11.

In an embodiment, a remote device 11 already set up to monitor an imaging area 91 can be updated with a new image analyser by copying a new image analyser program into its memory 31. For example, the new image analyser can correspond to additional training of an existing image analyser using newly acquired training images. In another example (which can be complementary to the previous example), the new image analyser utilises a different machine learning algorithm. Therefore, according to this embodiment, the remote device(s) 11 can advantageously be updated for improved operation.

In an embodiment, one or more remote devices 11 can be configured for storing selected captured images in their memory 31 for later retrieval by a user. The selected captured images are preferably fewer in number than all captured images; for example, a selected captured image can correspond to every nth captured image (where ‘n’ is a predefined number) or the first captured image after a certain predefined elapsed time since a previously stored selected captured image. The remote devices 11 can be configured to also favour captured images in which an object has been identified (whether determined to be a threat or not), for example, by storing and keeping selected images in a ratio of images comprising identified objects to images not comprising identified objects (as it can be expected that there are many more captured images in which no object is identified). For example, the ratio (object identified to no object identified) can be 1:1, 10:1, 100:1, 1000:1, or any other suitable ratio. An advantage of this embodiment can be that the selected captured images may be utilised as training images for further training of the image analyser or creation of an entirely new image analyser, as the selected captured images are directly applicable to the use-environment of the remote devices 11. Thus, the selected captured images can be intermittently moved from the memories 31 of the remote devices 11, for example, to a training computer where utilised.
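
By way of illustration only, the selection logic can be sketched as follows (the value of ‘n’ and the favouring ratio are assumptions):

```python
def select_for_storage(frame_index, has_object, n=100, object_ratio=10):
    """Decide whether to keep a captured frame: every nth frame is stored,
    and frames containing an identified object are stored more eagerly
    (here, object_ratio times more often)."""
    if has_object:
        # Frames with identified objects are kept at a higher rate.
        return frame_index % (n // object_ratio) == 0
    # Otherwise only every nth frame is kept.
    return frame_index % n == 0

keep = select_for_storage(100, has_object=False)   # the nth frame is kept
```

Frames selected this way accumulate slowly in memory 31 and are biased towards object-bearing images, matching the training-data use described above.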

An advantage of embodiments herein described may be that the threat factor is determined by a controller 24 of a monitoring apparatus 11 rather than at a separate processing site (e.g. at a server such as processing server 12). Therefore, alerts can be issued over the network 15 at a relatively low bandwidth requirement when compared to the communication of captured images over a network 15. Therefore, relatively low-bandwidth communication protocols or high data cost protocols can be utilised while still providing the ability to signal an alert to the processing server 12. Low-bandwidth protocols are typically utilised in the remote geographic areas for which embodiments of the present invention are expected to be useful. In addition, embodiments herein described may allow for the redeployment of labour currently dedicated to pipeline patrol activities. For example, in Australia in 2021, a single technician, with tools and vehicle, dedicated to pipeline patrol may cost approximately A$200k p.a. The frequency of encroachments by objects which pose a threat to the pipeline can be as low as 0.3 incursions per month. Embodiments may provide an advantage of reducing the instances of technicians visiting a particular location; that is, the technician can respond to alerts generated by monitoring apparatuses 11 rather than regularly patrolling the monitored area 90. A benefit may be that the described embodiments can provide effective 24-hour observation whereas known techniques, such as pipeline patrol (which may include using drone technology), are intermittent and may miss particular instances of damage; in particular, a benefit may be that such 24-hour observation is achieved while only requiring, in certain embodiments, minimal data bandwidth.

Additionally, a benefit of embodiments may be a reduced probability of significant physical damage (which can lead to significant commercial damage) to the structure being monitored. According to some existing techniques described herein, a particular section of pipeline 92 is only checked intermittently (i.e. not continuously). It is possible for relatively minor (at the time) damage to occur to the pipeline 92 which may be undetected simply because the particular section was not monitored when the damage actually occurred. This minor, at the time, damage can lead to catastrophic damage later on — for example, damage that weakens the pipeline 92 may, over time, lead to structural failure of the pipeline 92. This can lead to significant environmental damage in the vicinity of the pipeline 92 (e.g. due to leakage of the material being piped) and may disrupt downstream provision of the piped material (e.g. disrupting oil or gas supply to a region). Therefore, the embodiments may provide an advantage in eliminating, or at least substantially reducing, instances of damage to the pipeline 92 being missed.

As should be understood, portions of the underground asset can be aboveground (e.g. a pipeline 92 can have portions above and portions below ground), and the monitoring apparatuses 11 described herein can be preferably also suitable for monitoring the aboveground portions — thus, where a plurality of monitoring apparatuses 11 are arranged to monitor different imaging areas 91 associated with the underground asset, it is understood some imaging areas 91 can partially, or entirely, comprise aboveground portions of the asset.

An advantage of certain embodiments herein described may be that the monitoring apparatus 11 is provided as a unitary physical device; that is, each physical component is either housed within or attached to a housing. In such embodiments, installation may advantageously be of reduced difficulty or complexity, and/or may be relatively quick. Further modifications can be made without departing from the spirit and scope of the specification.

For example, certain functionality of the controller 24 herein described can be implemented by one of a group of remote devices 11 (the “processing device”), wherein the group is defined by said member remote devices 11 being in data communication with at least the processing device (e.g. via wireless network interface 23, although alternatively a specifically provided wired data connection can be utilised). Alternatively, the certain functionality can be implemented in a separate processing device to all of the remote devices 11 of the group — in this case, this separate processing device is part of the group also and is in data communication with the remote devices 11 of the group. In one particular example, the certain functionality comprises the image analyser function — therefore, each remote device 11 is configured to provide image data to the processing device (in the case of a remote device 11 corresponding to the processing device, this is simply via reference to the image data in memory 31). In this case, the remote devices 11 not corresponding to the processing device can utilise a lower power processor as they are not required to implement an image analyser — this may be advantageous in cases where a data connection can be created between the remote devices 11 and the processing device of sufficient bandwidth to enable transfer of image data, as the total power consumption averaged over all members of the group may be lower than a case where each remote device 11 of the group implements an image analyser.

Although embodiments herein are described primarily with reference to monitoring of underground assets (such as a pipeline 92) at remote locations where a stable external power supply is not present, it is anticipated that certain embodiments may also be suitable for use in nominally non-remote locations, for example, at a construction site where an imaging area 91 is associated with the site but the field-of-view requirement for monitoring means that the monitoring apparatus 11 must be located away from an external power supply. This may be particularly relevant within a new development without an extensive electric supply grid (e.g. on the fringes of an urban area), although it is also expected that certain well-developed locations, such as inner-city areas, may benefit from use of embodiments herein described. Therefore, the self-powered monitoring apparatus 11 can provide monitoring at a location having an optimal field-of-view as described herein without requiring external electrical supply. In such cases, the housing may preferably be provided with adjustable attachment means.

Advantageously, certain embodiments may be portable and attachment means are provided which enable attachment and detachment from structures as needed.

In an embodiment, the controller 24 is interfaced with a locator module suitable for determining a location of the monitoring apparatus 11. For example, the locator module comprises a global navigation satellite system (GNSS) unit such as a Global Positioning System (GPS) receiver. In another example, the locator module utilises triangulation, for example, via reference to received cellular transmitted signals. The location as determined can be represented as location data, which may be sent with the alert (along with, or alternatively to, the identifier), thereby enabling identification of the source of the alert. This may be particularly suitable where it is expected that the monitoring apparatus 11 is moved relatively often (e.g. when portable).




 