

Title:
DETECTION METHOD AND APPARATUS
Document Type and Number:
WIPO Patent Application WO/2014/096756
Kind Code:
A1
Abstract:
The present invention relates to a method and associated apparatus for detecting and identifying acoustic events comprising simultaneous collection of acoustic data from a plurality of acoustic sensors, the combination of the acoustic sensor data to form a plurality of beams, the processing of the beam data for each respective beam so as to produce a signature for each beam, the tagging of each beam signature with the associated bearing data and the combination of substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and also combining a plurality of the combined outputs from consecutive time intervals to indicate any acoustic event(s). The method of the invention is particularly useful for sonar applications and, consequently, the invention also relates to sonar systems which combine outputs from a plurality of acoustic sensors.

Inventors:
JOHNSTON JAMES (GB)
Application Number:
PCT/GB2013/000551
Publication Date:
June 26, 2014
Filing Date:
December 18, 2013
Assignee:
SECR DEFENCE (GB)
International Classes:
G01S3/801; G01S3/84
Foreign References:
GB 2390160 A (2003-12-31)
GB 2484196 A (2012-04-04)
Other References:
D. HANSON: "Auditory-Based Time-Frequency Representations and Feature Extraction Techniques for Sonar Processing", PROCEEDINGS OF ACOUSTICS 2008, 24 November 2008 (2008-11-24), Geelong, Australia, XP055056778, Retrieved from the Internet [retrieved on 20130315]
PRESTON J R: "Using Triplet Arrays for Broadband Reverberation Analysis and Inversions", IEEE JOURNAL OF OCEANIC ENGINEERING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 32, no. 4, 1 October 2007 (2007-10-01), pages 879 - 896, XP011203630, ISSN: 0364-9059
DE THEIJE P A M: "STARE, a sonar data post-processing and visualisation software package", OCEANS 2005 - EUROPE BREST, FRANCE 20-23 JUNE 2005, PISCATAWAY, NJ, USA,IEEE, US, vol. 1, 20 June 2005 (2005-06-20), pages 481 - 488, XP010838535, ISBN: 978-0-7803-9103-1
Attorney, Agent or Firm:
FARNSWORTH, Alastair Graham (Poplar 2 #2214, MOD Abbey Wood, Bristol BS34 8JH, GB)
Claims:
Claims

1. A method for the detection and identification of an acoustic event comprising; simultaneously collecting acoustic data from a plurality of acoustic sensors; combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data;

processing the beam data for each respective beam so as to produce a signature for each beam;

tagging each beam signature with the associated bearing data;

combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and

combining a plurality of the combined outputs from consecutive time intervals to indicate any acoustic event(s).

2. A method for the detection and identification of an acoustic event comprising; simultaneously collecting acoustic data from a plurality of acoustic sensors; combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data;

tagging each beam with the associated bearing data;

processing the beam data for each respective beam so as to produce a signature for each beam;

combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and

combining a plurality of the combined outputs from consecutive time intervals to indicate any acoustic event(s).

3. A method according to claim 1 or 2 in which the acoustic data is collected from an array of acoustic sensors.

4. A method according to any of claims 1 to 3 wherein the acoustic sensor is a passive acoustic sensor.

5. A method according to claim 4 wherein the acoustic sensor is a hydrophone, a microphone, a geophone or an ionophone.

6. A method according to any of the preceding claims wherein the beam data is processed using DEMON processing.

7. A method according to any of the preceding claims wherein each beam signature is tagged using a particular colour to indicate a particular bearing.

8. A method according to claim 7 wherein a different colour is used to indicate each bearing across a substantial part of a sonar array.

9. A method according to claim 7 wherein a different colour is used to indicate each bearing across a portion of a sonar array.

10. A method according to any of claims 1 to 6 wherein each beam signature is tagged using a particular symbol to indicate a particular bearing.

11. A method according to any of claims 1 to 6 wherein the combined output for a particular time interval is populated from beam signatures obtained from acoustic sensors located across a substantial part of an acoustic array.

12. An apparatus for the detection and identification of an acoustic event comprising:

A plurality of acoustic sensors;

means for simultaneously collecting acoustic data from the acoustic sensors; processing means for combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data; producing a signature for each beam and tagging each beam or beam signature with the associated bearing data;

means for combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and combining a plurality of the combined outputs from consecutive time intervals, to produce output data sets, and

display means for displaying the output data sets.

13. An apparatus according to claim 12 which comprises an array of acoustic sensors.

14. An apparatus according to either of claims 12 or 13 wherein the acoustic sensor is a passive acoustic sensor.

15. An apparatus according to claim 14 wherein the acoustic sensor is a hydrophone, a microphone, a geophone or an ionophone.

16. An apparatus according to any of claims 12 to 15 wherein the means for collecting audio data is an analogue to digital converter.

17. An apparatus according to any of claims 12 to 16 wherein the signature for each beam is produced using DEMON processing.

18. An apparatus according to any of claims 12 to 17 wherein each beam signature is tagged using a particular colour to indicate a particular bearing.

19. An apparatus according to claim 18 wherein a different colour is used to indicate each bearing across substantially all of a sonar array.

20. An apparatus according to claim 19 wherein a different colour is used to indicate each bearing across a portion of a sonar array.

21. An apparatus according to any of claims 12 to 17 wherein each beam signature is tagged using a particular symbol to indicate a particular bearing.

22. An apparatus according to any of the preceding claims wherein the display displays an output data set populated from beam data obtained from acoustic sensors located across a substantial part of an acoustic array.

Description:
Detection Method and Apparatus

The present invention relates to detection systems, in particular acoustic detection systems and a detection method combining multiple acoustic inputs from acoustic sensors to provide improved detection performance. The method of the invention is particularly useful for sonar applications and, consequently, the invention also relates to sonar systems which combine outputs from a plurality of acoustic sensors.

In many types of acoustic detection system, an acoustic event is presented or displayed visually to an operator who is then responsible for detecting, tracking and identifying the event using this visual information. Whilst detection of some events may be readily determined from one or more visual images using various digital processing techniques, it is often the case that visual analysis of an acoustic event is not sufficient on its own, especially if the event is transient. Such events are usefully interrogated further by an auditory output, and operators typically rely on listening to identify the source of the event. Thus, many acoustic detection systems rely on auditory and visual analysis of sensor outputs. Whilst this demonstrates the excellent ability of the human visual and auditory system to detect and identify transient sounds in the presence of noise, it nevertheless has the disadvantages that it is time consuming and requires highly skilled and trained personnel.

This is particularly true in the field of sonar, where acoustic detection may be facilitated by large numbers of acoustic sensors. For example, submarine sonar systems usually comprise a number of different sonar arrays which, in theory, can be arranged in any orientation and on any part of the submarine. A typical submarine will have a number of sonar arrays with hydrophone elements ranging from a single hydrophone, to line arrays, to complex arrays of many hundreds or even thousands of hydrophone elements. In a submarine these arrays are often arranged to form one or more of a Bow Array (BA) and/or Flank Array (FA), and some submarines may also tow hydrophones behind the submarine to form a Towed Array (TA). Each signal from each of the hydrophones can be used individually, or combined with one or more other hydrophone signals, to form a beam having bearing data and/or depression/elevation data; the signals, individually or as beams, can be tagged, such as by colour coding, to indicate the bearing and/or elevation of an acoustic event in relation to the hydrophone. Collectively, these large numbers of acoustic detectors produce a large amount of data for processing. Submarine sonar systems typically collect vastly more data than operators are able to analyse in real time; whilst many loud and/or close sound events, such as a casing rattle or an explosion, might be very readily identified, many other types of sound are routinely only identified with post-event analysis.

Current sonar systems typically present the acoustic data to the operator in a number of different visual display formats. These include Broadband displays and Lofar displays.

In the case of a Broadband display, the data from the hydrophones in an array is presented to the operator as a visual display of broadband energy intensity versus direction and time. Visual detection of a broadband acoustic event can be determined by a line marking, above the background noise, on a display of time against bearing; a typical example of a broadband passive sonar display is shown in Figure 1.
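By way of a purely illustrative sketch (not taken from the specification), the Broadband display of Figure 1 can be thought of as block-averaged beam energy plotted against bearing and time. The Python fragment below uses hypothetical function and parameter names and assumes the beamformed time series are already available:

    import numpy as np

    def broadband_intensity(beams, fs, block_s=1.0):
        """Broadband energy per beam per time block.

        beams : (n_beams, n_samples) array of beamformed time series
                (a hypothetical data layout, assumed for illustration).
        fs    : sample rate in Hz.
        Returns an (n_blocks, n_beams) array in dB, suitable for a
        bearing-versus-time waterfall such as the display of Figure 1.
        """
        n_beams, n_samples = beams.shape
        block = int(block_s * fs)
        n_blocks = n_samples // block
        trimmed = beams[:, :n_blocks * block].reshape(n_beams, n_blocks, block)
        # Mean-square energy in each block, expressed relative to the quietest cell.
        energy = (trimmed ** 2).mean(axis=2).T        # shape (n_blocks, n_beams)
        energy = np.maximum(energy, 1e-12)            # avoid log of zero
        return 10.0 * np.log10(energy / energy.min())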

Lofar displays are spectral displays collected over time, showing the temporal evolution of the spectral content of a signal. They generally employ high levels of temporal integration with large Fast Fourier Transform (FFT) sizes in order to extract accurate spectral content from low signal levels. These are typically displayed as waterfall spectrograms.
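Again purely as an illustrative sketch, and not a description of any particular sonar system, a LOFAR-style waterfall can be approximated by long windowed FFTs with incoherent averaging; the helper below and its parameter names are assumptions made for the example:

    import numpy as np

    def lofar_waterfall(x, fs, n_fft=8192, n_avg=8):
        """Minimal LOFAR-style spectrogram: long FFTs with incoherent averaging.

        x     : single-beam (or single-hydrophone) time series.
        fs    : sample rate in Hz.
        n_fft : FFT length; large values trade time resolution for the fine
                spectral resolution that Lofar displays rely on.
        n_avg : number of consecutive spectra averaged per displayed line.
        Returns (freqs, rows), where each row is one line of the waterfall.
        """
        window = np.hanning(n_fft)
        n_frames = len(x) // n_fft
        spectra = []
        for i in range(n_frames):
            frame = x[i * n_fft:(i + 1) * n_fft] * window
            spectra.append(np.abs(np.fft.rfft(frame)) ** 2)
        spectra = np.array(spectra)
        rows = [spectra[i:i + n_avg].mean(axis=0)
                for i in range(0, len(spectra) - n_avg + 1, n_avg)]
        freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
        return freqs, np.array(rows)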

Existing sonar systems require a sonar operator to analyse each of the signals from the various hydrophones, individually or in beams, by training a bearing cursor on a Broadband display across the hydrophones or beams in a sonar array. After recognising an acoustic event, such as another sea borne vehicle, on the Broadband display at a particular bearing, the operator will then interrogate the event further to identify it. This interrogation can be done by various processing techniques including using an aural beam, known as 'listening down the bearing' or by applying digital processing such as DEMON processing to that signal.

DEMON (DEModulation Of Noise / Detection of Envelope Modulation On Noise) processing is a demodulation technique which is used in sonar systems to measure signal amplitude fluctuation rates and to determine periodic repeats. DEMON processing is typically used to measure such things as the number of blades and blade rates of ship/boat propellers. DEMON processing can be processor intensive and requires substantial processing capacity to generate a DEMON signature. DEMON signals are typically presented in a Frequency / Time waterfall style display. A typical DEMON display is shown at Figure 2.
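As an illustrative sketch only, a simplified DEMON-style signature can be obtained by band-passing a beam, applying square-law envelope detection and taking the spectrum of the envelope, so that periodic amplitude modulation (such as a propeller blade rate) appears as lines. The exact processing chain used in practice is not prescribed by the specification; the function and parameter names below are hypothetical:

    import numpy as np

    def demon_signature(beam, fs, band=(1000.0, 5000.0), n_fft=4096):
        """Simplified DEMON-style envelope spectrum for one beam."""
        # Crude band-pass via FFT masking (a stand-in for a proper filter).
        spectrum = np.fft.rfft(beam)
        freqs = np.fft.rfftfreq(len(beam), d=1.0 / fs)
        spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
        carrier = np.fft.irfft(spectrum, n=len(beam))

        envelope = carrier ** 2          # square-law envelope detector
        envelope -= envelope.mean()      # remove the DC component

        # Spectrum of the envelope: modulation lines appear at the blade
        # rate and its harmonics (a practical system would also decimate).
        seg = envelope[:n_fft] * np.hanning(min(n_fft, len(envelope)))
        demon = np.abs(np.fft.rfft(seg, n=n_fft)) ** 2
        mod_freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
        return mod_freqs, demon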

The use of these further processing techniques, such as DEMON, requires time to build up a recognisable acoustic event signature for analysis purposes. This is generally therefore performed in parallel with the operator continuing to investigate other elements of the acoustic detector array.

To provide effective all round surveillance about the submarine the operator needs to consecutively analyse each of the hydrophones and/or beams around the submarine and at the same time use the further processing techniques to identify any detected acoustic event. This places a large burden on the operator and may cause the workload of the operator to increase to a point which is unmanageable or ineffective thus resulting in a delay in detecting, identifying and classifying events or even events being missed or lost. This is most likely to be the case for weak, fast moving or close range events and also multiple events.

Ideally, such analysis would be replaced by, or at the very least complemented by, automatic digital processing of the data collected by the acoustic sensors, to reduce the burden on the operator and create the potential for complete real-time monitoring of acoustic events across all or a substantial part of an acoustic detector array.

Accordingly, the present inventors have created a method and apparatus which provides the operator with a combined 'view' across a substantial part of an acoustic array by simultaneously combining outputs from multiple acoustic detectors tagged to show the bearing of any acoustic event in relation to the detector. This negates the current need for the operator to consecutively analyse each of the hydrophones and/or beams around the submarine. In doing so the present invention allows the operator to simultaneously see all acoustic events across all or at least a substantial part of an acoustic array and thus allows them to concentrate on those events rather than constantly scanning across an array. This reduces the workload of the operator or the number of operators required, which may be of considerable value where space is restricted, e.g. in a submarine. The invention also reduces the time taken to detect, track and subsequently identify acoustic events. Additionally, the present invention presents the operator with an output which is more constant than a Broadband display or audible monitoring; as a result, faint, obscured or fleeting acoustic events which might not be marked on a Broadband display, or might be missed audibly when scanning across an acoustic array, are presented more constantly, enabling early detection and reducing the likelihood that an event goes undetected.

Improvements in digital processing mean that the invention has the potential to operate in real time and thus provide an operator with an improved display.

Accordingly, in a first aspect the present invention provides a method for the detection and identification of an acoustic event comprising;

simultaneously collecting acoustic data from a plurality of acoustic sensors; combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data;

processing the beam data for each respective beam so as to produce a signature for each beam;

tagging each beam signature with the associated bearing data;

combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and

combining a plurality of the combined outputs from consecutive time intervals to indicate any acoustic event(s).

In a further aspect there is provided a method for the detection and identification of an acoustic event comprising;

simultaneously collecting acoustic data from a plurality of acoustic sensors; combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data;

tagging each beam with the associated bearing data;

processing the beam data for each respective beam so as to produce a signature for each beam;

combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and combining a plurality of the combined outputs from consecutive time intervals to indicate any acoustic event(s).

The method is suitable for collecting and processing data obtained from a plurality of individual acoustic sensors but is equally well suited to collecting and processing audio data which has been collected from an array of acoustic sensors.

The method is suitable for both passive and active sound detection, although a particular advantage of the invention is the ability to process large volumes of sound data in "listening mode" i.e. passive detection. Preferably, therefore, the method utilises acoustic data collected from a passive acoustic sensor.

Such acoustic sensors are well known in the art and, consequently, the method is useful for any application in which passive sound detection is required e.g. in sonar or ground monitoring applications or in monitoring levels of noise pollution. Thus the acoustic data may be collected from acoustic sensors such as a hydrophone, a microphone, a geophone or an ionophone.

The method of the invention is particularly useful in sonar applications, i.e. wherein the acoustic sensors are hydrophones. The method may be applied in real time on each source of data, and thus has the potential for real-time or near real-time processing of sonar data. This is particularly beneficial as it can provide the sonar operator with a very rapid visual representation of the collected audio data. It will be well understood by the skilled person that the data obtained from such acoustic arrays can be subjected to techniques such as beam forming, as is standard in the art, to establish directionality of the data from the acoustic sensor array. The acoustic data from individual acoustic sensors will comprise data defining the acoustic event and that data will be associated with specific time interval data indicating the time the acoustic data was received. By combining the acoustic data from more than one acoustic sensor to form beams the bearing of the acoustic event from the acoustic sensor, the bearing data, can be established.

Methods for processing the acoustic data for each of the respective beams (the beam data) so as to produce a signature for each beam are well known in the art; however, the use of DEMON processing to process the beam data is particularly useful as this provides linear traces at specific frequencies for individual acoustic events. These linear traces are clearly distinguishable to the operator and the periodic repeat of the trace can be used to help categorise the acoustic event.

Methods for the tagging of the processed data will be apparent to those skilled in the art but, conveniently, different colours, or symbols, may be applied to the processed data to indicate the bearing data. The data may also be tagged by varying the image intensity of a display on which the data is displayed.

In the current method each beam signature can advantageously be tagged using a particular colour to indicate the relevant bearing for that beam signature. The colours may, advantageously, be used to indicate each bearing across a substantial part of an acoustic array and can vary across the colour spectrum, such as from red to violet as the bearing changes across the sonar array. For increased accuracy in bearing the colour can be used to indicate each bearing across a portion of an acoustic array with the colour spectrum being repeated in each portion of the acoustic array.
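Purely as an illustration of one possible tagging scheme (the specification does not prescribe a particular palette), a bearing can be mapped to a colour sweeping from red towards violet, with the sweep optionally repeated within each portion of the array for finer bearing discrimination:

    import colorsys

    def bearing_to_rgb(bearing_deg, sector_deg=360.0):
        """Tag a bearing with a colour, sweeping red -> violet across a sector.

        sector_deg = 360 gives one colour sweep over the whole array; a
        smaller value repeats the sweep in each portion of the array.
        Hypothetical helper, written only to illustrate the tagging idea.
        """
        fraction = (bearing_deg % sector_deg) / sector_deg
        hue = fraction * 0.83          # 0.0 (red) .. ~0.83 (violet) in HSV hue
        return colorsys.hsv_to_rgb(hue, 1.0, 1.0)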

Once processed and tagged the tagged beam signatures from a particular time interval can be combined to provide a combined output for a particular time interval and a plurality of the combined outputs from consecutive time intervals are then combined to indicate any acoustic event(s).

The combined outputs from consecutive time intervals are advantageously combined in a continuous fashion so as to provide an output to the operator which varies with time and further advantageously the combined outputs are populated from beam signatures obtained from acoustic sensors located across a substantial part or the whole of an acoustic array. Such an output will thus allow an operator to monitor all acoustic events across a substantial part, or the whole, of an acoustic array simultaneously.
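As a non-limiting sketch of how the tagged signatures might be combined, the fragment below overlays the per-beam signatures for one time interval into a single colour-coded row and stacks consecutive rows into a waterfall; the data layout and function names are assumptions made for illustration:

    import numpy as np

    def combine_interval(signatures, colours):
        """Overlay the tagged beam signatures from one time interval.

        signatures : (n_beams, n_bins) array of per-beam signatures.
        colours    : list of n_beams RGB tuples, one per beam bearing.
        Returns an (n_bins, 3) RGB row: each bin takes the colour of the
        strongest beam at that bin, scaled by its level.
        """
        signatures = np.asarray(signatures, dtype=float)
        colours = np.asarray(colours, dtype=float)
        levels = signatures / (signatures.max() + 1e-12)
        winner = levels.argmax(axis=0)                 # loudest beam per bin
        return colours[winner] * levels.max(axis=0)[:, None]

    def waterfall(rows):
        """Stack consecutive combined outputs into a bearing-coloured waterfall."""
        return np.stack(rows, axis=0)                  # (n_intervals, n_bins, 3)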

The method may advantageously be performed in real time as the data is collected or may be performed on the data at a later time. This allows the data to be used for real time monitoring and also for deferred data analysis. In the case of a submarine such deferred data analysis may be done after the submarine has left an area of interest or indeed after it has returned to port. The method may be used on board a vessel such as a submarine which collects the data or may be used 'on shore' remotely to the vessel that collects the data.

In a second aspect, the present invention also provides an apparatus for the detection and identification of an acoustic event comprising:

A plurality of acoustic sensors;

means for simultaneously collecting acoustic data from the acoustic sensors; processing means for combining the acoustic sensor data to form a plurality of beams, each beam having associated beam data, bearing data and time interval data; producing a signature for each beam and tagging each beam signature with the associated bearing data;

means for combining substantially all the tagged beam signatures from a particular time interval to provide a combined output for a particular time interval, and combining a plurality of the combined outputs from consecutive time intervals, to produce output data sets, and

display means for displaying the output data sets.

Conveniently the apparatus comprises an array of acoustic sensors, which may be formed in any format as is required or as is standard in the relevant application. Arrays of sensors are known in the art and may be arranged in any format, such as line arrays, conventional matrix arrays or complex patterns and arrangements which maximise the collection of data from a particular location or direction.

The acoustic sensor may be any type which is capable of detecting sound, as are well known in the art. Preferred sensor types include, but are not limited to, hydrophones, microphones, geophones and ionophones.

A particularly preferred acoustic sensor is a hydrophone, which finds common use in sonar applications. Sonar hydrophone systems range from single hydrophones to line arrays to complicated arrays of particular shape which may be used on the surface of vessels or trailed behind the vessel. Thus, a particularly preferred application of the apparatus of the invention is as a sonar system, and even more preferred a sonar system for use in submarines. The skilled person will understand, however, that the same apparatus may be readily adapted for any listening activity including, for example, monitoring undersea activity such as oil exploration or, through the use of a geophone, listening to ground activity and detecting transient or unusual seismic activity, which may be useful in the early detection of earthquakes or the monitoring of earthquake fault lines. Broadband passive acoustic sensors, such as broadband hydrophone arrays, which operate over the 1 to 10 kHz frequency range are well known in the art, and the theory whereby such sensors collect audio data is well known.

After collection of the audio data, the data is processed to ensure it is provided in digital form for further analysis. Accordingly the means for collecting audio data in the apparatus includes an analogue to digital converter (ADC). The ADC may be a separate component within the apparatus or may be an integral part of the acoustic sensor. Once collected and converted to digital form, the data is then processed, combined and tagged using the processes discussed above. Conveniently, the processing means may be a standard microcomputer programmed to perform the processing on the data in parallel and then combine the output data sets to provide a visual output. The processing means may be connected to the collecting means, either directly or wirelessly, however alternatively the processing means may be located separately to the collecting means with the collecting means storing the data for later processing by the processing means.

In a preferred embodiment the computer is programmed to run the processing in real time, on the data collected from a substantial proportion of the individual sensors, or may be programmed to process data from any particular sensor or groups of sensors.

The display means may be any type which is capable of displaying the data sets, as are well known in the art. The display means will advantageously be located in proximity to the acoustic sensors and preferably on the platform, such as a submarine, which is collecting the data. The display may however be located remotely to the other elements of the apparatus.

The invention will now be described by way of example, with reference to the Figures in which:

Figure 1 is a typical broadband passive sonar display showing the bearing of a number of contacts over time alongside the forward bearing of the sonar marked as 'Ahead'.

Figure 2 is a typical DEMON display showing a signature for a single beam of acoustic data and a periodic repeat for an acoustic event indicated by numbers 1 to 12.

Figure 3 shows the use of colour coding of the DEMON display to discriminate between the bearings on a substantial part of an acoustic sonar array and shows two contacts are present. The colour coding key shows the colours used to tag the data with 1 indicating green, 2 indicating blue, 3 indicating indigo, 4 indicating purple, 5 indicating pink, 6 indicating orange and 7 indicating yellow.

Figure 4 is a schematic showing a possible concept for an apparatus according to the present invention.

An acoustic array as used in a submarine typically includes thousands of individual passive acoustic detectors in the form of hydrophones. Individual outputs from these hydrophones are simultaneously obtained and specific groups of outputs are combined with appropriate weights and time delays to form a beam which indicates an acoustic event in a particular direction from several hydrophones. By changing the weights and time delays a beam can also be formed in a different direction. This process is repeated tens or hundreds of times to form a set of beams which displays the signals from multiple directions.
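A minimal, illustrative delay-and-sum beamformer for a line array is sketched below; the whole-sample delays and the function signature are simplifications assumed for the example and are not taken from the specification. Repeating the call for a set of steering angles forms the set of beams described above:

    import numpy as np

    def delay_and_sum(elements, fs, positions_m, bearing_deg, c=1500.0,
                      weights=None):
        """Form one beam from a line of hydrophones by weighted delay-and-sum.

        elements    : (n_elements, n_samples) array of hydrophone time series.
        positions_m : element positions along the array axis, in metres.
        bearing_deg : steering angle relative to broadside.
        c           : assumed sound speed in water (m/s).
        Delays are applied as whole-sample shifts for simplicity; a practical
        beamformer would interpolate or steer in the frequency domain.
        """
        n_elements, n_samples = elements.shape
        weights = np.ones(n_elements) if weights is None else np.asarray(weights)
        delays_s = np.asarray(positions_m) * np.sin(np.radians(bearing_deg)) / c
        delays_n = np.round(delays_s * fs).astype(int)
        delays_n -= delays_n.min()                 # keep all shifts non-negative
        beam = np.zeros(n_samples)
        for w, d, x in zip(weights, delays_n, elements):
            beam[: n_samples - d] += w * x[d:]     # shift, weight and sum
        return beam / weights.sum()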

Typically such beams are displayed on a broadband display as shown in figure 1. Acoustic events are indicated by line markings above background noise, as indicated by contact 1 and contact 2 in figure 1.

The beams resulting from the above process have beam data relating to the acoustic event, bearing data (as well as depression/elevation data) and time data relating to the point in time at which the data was received by the hydrophones. All or a substantial part of the beam data is then individually further processed using DEMON processing to produce a representative signature for each beam. Figure 2 shows an example of a signature for a particular beam formed by the DEMON process. The signature of an acoustic event can be seen in the figure as the repeating vertical lines indicated by numbers 1 to 12. The signature shows a periodic repeat which indicates an acoustic event and can be used to aid in its identification.

Each beam is then tagged using a colour coding to show a bearing for that beam. Figure 3 shows one example of colour coding across a substantial part of the array. The colours are shown in the colour coding key and vary from green through to yellow as shown by the numbers 1 through 7, each colour being used as a tag for a particular bearing.

All or substantially all of the beams for a particular time interval are then combined to form an output for that time interval, and consecutive time intervals are then combined to form a combined output as shown in figure 3. This output, with the tagging indicating bearing, provides the user with a 'view' of acoustic events across a substantial part of an array. The signatures for specific acoustic events are indicated as contact 1 and contact 2 and can be recognised by their regular periodic repeat. The change in bearing for contact 2 with time can be seen by the change in tagging colour from blue at (a) to pink at (b), whilst contact 1 can be seen to be on a constant bearing as the colour does not change with time.

To give greater bearing accuracy this colour coding can be repeated in smaller portions of the array, with the variation of colour being repeated in each portion of the array.

A schematic drawing of an example of an apparatus according to the current invention is shown in figure 4. The acoustic sensors, in the form of a hydrophone array, individually collect data by converting sound from an acoustic event into electrical signals. An analogue to digital (A/D) converter converts the electrical signals to digital data. The data is passed to a beamformer to provide a set of tagged beams. Each tagged beam is passed through a DEMON processor to provide tagged DEMON data. Alternatively, as shown in figure 4b, the beams can be processed before being tagged. The tagged DEMON data is combined and passed to a display to show directional colour-coded DEMON data. The display will typically be an LCD screen.
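Finally, as an illustrative tie-up of the figure 4 pipeline under the same assumptions, the fragment below chains the helpers sketched earlier (delay_and_sum, demon_signature, bearing_to_rgb and combine_interval, none of which are names used in the patent) for one time interval of digitised array data:

    def process_block(samples, fs, positions_m, bearings_deg):
        """One pass of the figure 4 pipeline for a single time interval.

        samples : (n_hydrophones, n_samples) digitised array data (post-A/D).
        Relies on the illustrative helpers defined in the sketches above;
        the whole chain is a sketch, not the patented implementation.
        """
        signatures, colours = [], []
        for bearing in bearings_deg:
            beam = delay_and_sum(samples, fs, positions_m, bearing)   # beamformer
            _, demon = demon_signature(beam, fs)                      # DEMON processor
            signatures.append(demon)                                  # beam signature
            colours.append(bearing_to_rgb(bearing))                   # bearing tag
        return combine_interval(signatures, colours)                  # combined output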