

Title:
ANALYSING A VEHICLE SURFACE
Document Type and Number:
WIPO Patent Application WO/2021/078666
Kind Code:
A1
Abstract:
Disclosed is a method of analysing a surface of an aircraft. The method comprises receiving image data representing a first image of a surface region of an aircraft, the image data captured by a first aircraft-mounted image sensor of a plurality of aircraft-mounted image sensors, the first image sensor having a field of view comprising said surface region. The image data is processed to identify the presence of any unexpected surface features in the first image of the surface region; and responsive to determining the presence of an unexpected surface feature, generating an output indicating the presence of the unexpected surface feature.

Inventors:
TULLOCH WILLIAM (GB)
Application Number:
PCT/EP2020/079313
Publication Date:
April 29, 2021
Filing Date:
October 19, 2020
Assignee:
AIRBUS OPERATIONS LTD (GB)
International Classes:
G06T7/00
Domestic Patent References:
WO2018229709A1, 2018-12-20
Foreign References:
US20090212976A1, 2009-08-27
EP3553741A1, 2019-10-16
Other References:
MALEKZADEH, Touba, et al.: "Aircraft Fuselage Defect Detection using Deep Neural Networks", arXiv.org, Cornell University Library, 26 December 2017 (2017-12-26), XP081771021
MIRANDA, Julien, et al.: "Machine learning approaches for defect classification on aircraft fuselage images acquired by an UAV", Proceedings of SPIE, vol. 11172, 16 July 2019 (2019-07-16), pages 1117208, XP060124882, ISSN 0277-786X, ISBN 978-1-5106-3927-0, DOI: 10.1117/12.2520567
WANG, Congqing, et al.: "The Aircraft Skin Crack Inspection Based on Different-Source Sensors and Support Vector Machines", Journal of Nondestructive Evaluation, vol. 35, no. 3, 11 August 2016 (2016-08-11), pages 1-8, XP036028492, ISSN 0195-9298, DOI: 10.1007/s10921-016-0359-3
Attorney, Agent or Firm:
HUCKER, Nerys (GB)
Claims:
CLAIMS:

1. A method of analysing a surface of an aircraft, comprising: receiving image data representing a first image of a surface region of an aircraft, the image data captured by a first aircraft-mounted image sensor of a plurality of aircraft-mounted image sensors, the first image sensor having a field of view comprising said surface region; processing the image data to identify the presence of any unexpected surface features in the first image of the surface region; and responsive to determining the presence of an unexpected surface feature, generating an output indicating the presence of the unexpected surface feature.

2. A method according to claim 1, wherein processing the first image data comprises using a trained machine classifier to identify the presence of any unexpected surface features in the image of the surface region.

3. A method according to claim 2, wherein the trained machine classifier has been trained using images of surface regions comprising no unexpected surface features.

4. A method according to claim 2 or claim 3, wherein the trained machine classifier is trained to identify expected surface features of the surface region of the aircraft.

5. A method according to claim 4, wherein the trained machine classifier identifies the presence of any unexpected surface features by reference to expected surface features.

6. A method according to any one of claims 2 to 5, wherein the trained machine classifier is trained to identify the presence of any unexpected surface features by recognising one or more of discontinuities in the surface region and/or an abrupt change in brightness, colour or tone of the surface region.

7. A method according to any one of the preceding claims, comprising processing image data representing a second image of the surface region of the aircraft captured by a second aircraft-mounted image sensor of the plurality of aircraft-mounted image sensors to identify the presence of the unexpected surface feature in the second image of the surface region, the second image sensor mounted on the aircraft spaced from the first image sensor and having a field of view that overlaps with the field of view of the first image sensor and comprises the surface region.

8. A method according to claim 7, wherein processing the image data representing a second image of the surface region of the aircraft is triggered by the output indicating the presence of the unexpected surface feature.

9. A method according to claim 7 or claim 8, wherein the output indicating the presence of the unexpected surface feature triggers the capture of the second image by the second image sensor.

10. A method according to claim 9, wherein the second image sensor has a first mode that captures images at a first image resolution and the output indicating the presence of the unexpected surface feature triggers the second image sensor to capture the second image in a second mode at a second image resolution that is higher than the first image resolution.

11. A method according to any one of claims 7 to 10, comprising, responsive to determining the presence of the unexpected surface feature in the second image, generating an output confirming the presence of the unexpected surface feature.

12. A method according to claim 11, comprising triangulating the location of the confirmed unexpected surface feature on the surface region by using the first image and the second image.

13. A method according to any one of the preceding claims, comprising mapping the unexpected surface feature to a location on a digital model of the aircraft.

14. A method according to any one of the preceding claims, comprising triggering capture of at least the first image in response to a detected condition.

15. A method according to claim 14, wherein the detected condition comprises the aircraft having an altitude that is below a threshold altitude and/or the aircraft entering a take-off or landing phase of flight.

16. A method according to claim 15, wherein the detected condition comprises a sensor sensing a sound or vibration characteristic.

17. A method according to claim 16, wherein the sound or vibration characteristic is consistent with an impact between the aircraft and an object.

18. A method according to any one of the preceding claims, wherein at least the first image sensor comprises a 360° camera.

19. A method according to any one of the preceding claims, wherein an unexpected surface feature comprises at least one of surface corrosion and/or impact damage.

20. A method according to any preceding claim, wherein the image data is captured in at least one of the visible, infrared, near-infrared, ultraviolet, or x-ray wavelengths.

21. A method according to any one of the preceding claims, comprising generating an alarm in response to at least one of an output indicating the presence of the unexpected surface feature and/or an output confirming the presence of the unexpected surface feature.

22. A system comprising a processor and at least a first aircraft mounted image sensor, the system arranged to perform the method of any one of claims 1 to 21.

23. A system according to claim 22, comprising an array comprising a plurality of aircraft mounted image sensors, including at least a first aircraft mounted image sensor and a second aircraft mounted image sensor.

24. A system according to claim 22 or claim 23, comprising a sound or vibration sensor.

25. An aircraft comprising a system according to any one of claims 22 to 24.

26. A machine-readable storage medium comprising instructions executable by a processor to implement the method according to any one of claims 1 to 21.

Description:
ANALYSING A VEHICLE SURFACE

TECHNICAL FIELD

[0001] The present invention relates to analysing a surface of a vehicle. Particularly, although not exclusively, the present invention relates to the detection of surface damage to an aircraft.

BACKGROUND

[0002] During flight or while taxiing, an aircraft is at risk of structural damage due to impact with objects such as birds, unmanned aerial vehicles such as drones, ground structures, or other vehicles. While such impacts are often benign and may not damage the aircraft, any structural damage that does occur poses potential safety issues.

SUMMARY

[0003] A first aspect of the present invention provides a method of analysing a surface of an aircraft, comprising: receiving image data representing a first image of a surface region of an aircraft, the image data captured by a first aircraft-mounted image sensor of a plurality of aircraft-mounted image sensors, the first image sensor having a field of view comprising said surface region; processing the image data to identify the presence of any unexpected surface features in the first image of the surface region; and responsive to determining the presence of an unexpected surface feature, generating an output indicating the presence of the unexpected surface feature. Such a method provides a warning to the flight crew, pilot and/or ground crew that surface damage may be present on the surface of the aircraft.

[0004] Optionally, the method may process the first image data using a trained machine classifier to identify the presence of any unexpected surface features in the image of the surface region.

[0005] Optionally, the trained machine classifier has been trained using images of surface regions comprising no unexpected surface features. Conveniently, libraries of images exist and can be used to train such a machine classifier.

[0006] Optionally, the trained machine classifier is trained to identify expected surface features of the surface region of the aircraft.

[0007] Optionally, the trained machine classifier identifies the presence of any unexpected surface features by reference to expected surface features. The machine classifier will ‘know’ what an expected surface of an aircraft should look like and therefore be able to determine if there are unexpected features.

[0008] Optionally, the trained machine classifier is trained to identify the presence of any unexpected surface features by recognising one or more of discontinuities in the surface region and/or an abrupt change in brightness, colour or tone of the surface region. The image sensor will be able to capture such changes and the presence of an unexpected surface feature will be determined.

[0009] Optionally, the method comprises processing image data representing a second image of the surface region of the aircraft captured by a second aircraft-mounted image sensor of the plurality of aircraft-mounted image sensors to identify the presence of the unexpected surface feature in the second image of the surface region. The second image sensor may be mounted on the aircraft spaced from the first image sensor, with a field of view that overlaps with the field of view of the first image sensor and comprises the surface region. The second image provides an alternative view of the unexpected surface feature detected by the first image sensor. This may improve the determination of the presence of an unexpected surface feature.

[0010] Optionally, processing the image data representing a second image of the surface region of the aircraft is triggered by the output indicating the presence of the unexpected surface feature.

[0011] Optionally, the output indicating the presence of the unexpected surface feature triggers the capture of the second image by the second image sensor. In the event that the output is generated, the second image sensor with a second field of view may perform a confirmation step.

[0012] Optionally, the second image sensor has a first mode that captures images at a first image resolution and the output indicating the presence of the unexpected surface feature triggers the second image sensor to capture the second image in a second mode at a second image resolution that is higher than the first image resolution. The second image sensor may already be capturing low resolution images and is triggered to capture improved image data. This may provide more detailed data for processing.

[0013] Optionally, responsive to determining the presence of the unexpected surface feature in the second image, generating an output confirming the presence of the unexpected surface feature. An output may indicate to the cabin crew and/or pilot the presence of the unexpected surface feature. In addition, or alternatively, the output may be communicated to a ground crew for assessment.

[0014] Optionally, the location of the confirmed unexpected surface feature is triangulated on the surface region by using the first image and the second image. This may provide an accurate and precise location of any unexpected surface feature.

[0015] Optionally, comprising mapping the unexpected surface feature to a location on a digital model of the aircraft. The digital model of the aircraft may comprise the positions of expected surface features and therefore improve the location of the unexpected surface features.

[0016] Optionally, triggering capture of at least the first image in response to a detected condition where the detected condition comprises the aircraft having an altitude that is below a threshold altitude and/or the aircraft entering a take-off or landing phase of flight. These times may be identified as ‘high-risk’ and so the probability of detecting unexpected surface damage is increased. The method may provide an additional set of data in these high-risk phases of a flight.

[0017] Optionally, the detected condition comprises a sensor sensing a sound or vibration characteristic. Such a sound or vibration characteristic may be consistent with an impact between the aircraft and an object and therefore improve the data available to the crew and pilot in the event of such an impact.

[0018] Optionally, at least the first image sensor comprises a 360° camera. This provides a wide field of view, and may reduce the number of image sensors required to have a field of view encompassing the entire surface of the aircraft. Moreover, there may be a greater number of overlapping fields of view between image sensors, and consequently an increased chance that two or more sensors can capture images containing unexpected surface damage.

[0019] Optionally, an unexpected surface feature comprises at least one of surface corrosion and/or impact damage.

[0020] Optionally, the image data is captured in at least one of the visible, infrared, near-infrared, ultraviolet, or x-ray wavelengths. Each part of the electromagnetic spectrum will capture different features of the surface, and as such unexpected surface features not visible to the human eye may be captured and identified.

[0021] Optionally, an alarm is generated in response to at least one of an output indicating the presence of the unexpected surface feature and/or an output confirming the presence of the unexpected surface feature. The alarm will inform the cabin crew and/or pilot and/or ground crew and provide them with an improved set of data to inform their response.

[0022] A second aspect of the present invention provides a system comprising a processor and at least a first aircraft mounted image sensor, the system arranged to perform the method of the first aspect of the present invention.

[0023] Optionally, the system comprises a sound and/or vibration sensor. Such a sound or vibration sensor can detect an event, such as an impact, and trigger the image sensors to record and identify the presence of unexpected features. The unexpected features may be indicative of surface damage on the aircraft.

[0024] A third aspect of the present invention provides an aircraft comprising the system of the second aspect of the present invention.

[0025] A fourth aspect of the present invention provides a machine-readable storage medium comprising instructions executable by a processor to implement the method according to the first aspect of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

[0027] Figure 1A is a schematic diagram of three views of an aircraft, according to an example, including an array of image sensors;

[0028] Figure 1B is a plan view of the aircraft of Figure 1A showing an area covered by image sensors mounted on the aircraft, according to an example;

[0029] Figure 2A is a schematic diagram of a part of an aircraft wing, according to an example;

[0030] Figure 2B is a schematic diagram of the part of the aircraft wing including surface damage, according to an example;

[0031] Figure 3 is a flowchart, according to an example; and

[0032] Figure 4 is a digital map of a part of an aircraft wing corresponding to the wing in Figure 2A, according to an example.

DETAILED DESCRIPTION

[0033] Figure 1A illustrates an example of an aircraft 102 when viewed from three different positions, A, B and C. Position A is a plan view of the aircraft 102, position B is a front elevation of the aircraft 102, and position C is a side elevation of the aircraft 102. The figures show image sensors mounted at various positions on the aircraft. In the example shown, an image sensor 104, 107 is mounted at a position on the leading edge of each horizontal stabiliser and another image sensor 106 is mounted at a position at the top of the vertical stabiliser. Another image sensor 108, 116 is mounted on the leading edge of each wing tip. An image sensor 110, 118 is mounted on the underside of each wing and an image sensor 112, 119 is mounted atop each engine. There is also an image sensor 114 mounted on the underside of the fuselage toward the cockpit. The image sensor 114 may be mounted on the centre line, or at least near to the centre line, of the aircraft. An additional image sensor 128 is mounted on the top of the fuselage above the cabin area. The multiple image sensors may be referred to herein collectively as an array of image sensors.

[0034] As used herein, an image sensor is any kind of device that is able to capture an image. The device may operate in colour or monochrome, and may operate in the visible or near-infrared (or infrared) regions of the electromagnetic spectrum. Such a device typically captures and stores images digitally, and is controllable to communicate captured image data to a local or remote processor for image processing. Known image sensors, for example in digital cameras that are adapted for use in adverse (i.e. in-flight) conditions, are suitable for use in examples herein.

[0035] The views A, B, and C in Figure 1A also illustrate examples of the fields of view of some of the image sensors. For example, a field of view 120 of image sensor 106 can be seen in the three views A, B, and C. A field of view 122 and a field of view 124 of image sensors 108 and 116 respectively can also be seen. Of course, the other image sensors not illustrated will have corresponding fields of view. In practice, a range of the field of view of each image sensor extends further than is depicted in the views. For example, the range of an image sensor may be at least as far as the furthest part of the aircraft that is within an angular field of view of a respective image sensor, to provide an overall field of view. Figure 1B, for example, depicts a resultant field of view 126 that encompasses the entire area of the aircraft 102.

[0036] In some examples, the fields of view of the image sensors may be directed towards high risk surfaces and areas of the aircraft 102. For example, it may be beneficial to direct the field of view of an image sensor toward the leading edges of the aircraft and/or the underside of the fuselage and wings.

[0037] At least some of the image sensors may have a wide or a panoramic field of view, for example greater than 160° horizontally and/or greater than 75° vertically. What each image sensor can see for any given field of view is of course dictated by where the image sensor is mounted on the aircraft and in which direction it is directed. At least one of the image sensors may have a 360° field of view horizontally and 90° or greater vertically. Image sensors may be fixed, for example as applied in catadioptric cameras, and derive their wide fields of view from fixed elements, such as lenses or mirrors. Other image sensors may be movable, such as rotatable, to achieve their fields of view. In any case the image sensors may be interconnected and be in communication with one another either directly or via a central system. Connectivity may use a wireless protocol, for example an Internet of Things (IoT) protocol such as Bluetooth, WiFi, ZigBee, MQTT, CoAP, DDS, NFC, Cellular, AMQP, RFID, Z-Wave, EnOcean and the like.
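
By way of illustration only, the following minimal sketch shows how one sensor node might announce a capture event to its peers over MQTT, one of the IoT protocols listed above. It assumes Python with the paho-mqtt client library and a reachable on-board broker; the broker address and topic layout are hypothetical, not part of this disclosure.

import json
import paho.mqtt.client as mqtt

BROKER = "broker.aircraft.local"          # hypothetical on-board broker
TOPIC = "sensors/centre-fuselage/events"  # hypothetical topic layout

def on_message(client, userdata, msg):
    """Example handler: another sensor node reacting to a capture event."""
    event = json.loads(msg.payload)
    print("peer sensor reported:", event)

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x additionally takes a CallbackAPIVersion
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_start()

# A sensor node announcing that it has captured a frame of interest.
client.publish(TOPIC, json.dumps({"sensor": "s106", "frame_id": 42}))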

[0038] In some examples, the image sensors mounted along the centre of the aircraft 102 may be image sensors with a 360° field of view, and the image sensors mounted on the wing tips need not be. The image sensors mounted on the wing tips may only need to have a field of view such that all surfaces, or at least the high risk surfaces and areas of the aircraft 102, are within their field of view. The advantage of using a 360° image sensor for the centrally mounted cameras is that the image sensor is not required to move or scan in order to have a field of view that encompasses the whole aircraft 102. The image sensor could be fixed, thus reducing moving parts.

[0039] In other examples, many or all image sensors mounted on the aircraft may be 360° cameras having a view of the external area of the aircraft 102 as well as the entire surface of the aircraft 102. Accordingly, certain of the image sensors may in addition serve to provide images of the areas around and/or away from the aircraft.

[0040] According to examples, the fields of view of various image sensors mounted on the aircraft 102 overlap. For example, the field of view 122 of the image sensor 108 overlaps with the field of view 120 of the image sensor 106. Put another way, according to examples, each external part of the aircraft 102 can be imaged by at least two image sensors; or, each external part of the aircraft 102 falls within the field of view of at least two image sensors.

[0041] The plurality of image sensors may be controlled by one or more processors (not pictured). The processor, or processors, may be mounted within the fuselage of the aircraft 102. In some examples, each image sensor may include a co-located processor that performs at least some control and/or image processing. For example, such an image sensor may be an IoT device, in the sense that it has a degree of processing autonomy. In other examples, the image sensors may be controlled centrally. The image sensors may be powered by local power connections taken from the aircraft power network. Control signals and image data may be communicated to and from image sensors via wired or wireless connections. In some examples, the processor(s) may be arranged to process images and/or video captured by the plurality of image sensors to identify external aerial vehicle candidates, such as UAVs.

[0042] Figure 2A is an illustration of an example captured image 200 from one of the image sensors mounted on aircraft 102. A part of the starboard wing surface 202 is depicted in the captured image 200 and may comprise different surface features. A leading edge 208 of this part of the wing may be considered to be a high risk surface of the aircraft 102. In any case, such surface features, which may be referred to as ‘expected surface features’, may include, but are not limited to, trailing edges 204, seams 206, leading edges 208, and no step lines (not pictured). While the example in Figure 2A shows a wing surface, other surfaces may be similarly captured from a variety of orientations and locations. More than one image may be captured by other image sensors that have overlapping views of the same surface.

[0043] Figure 2B is an example illustration of another captured image 210 from the same image sensor as used for Figure 2A. The surface and surface features represented in the captured image 210 are the same as in Figure 2A, with like reference numerals representing like features. However, in this captured image 210, there is a surface defect 212 present on the leading edge of the wing 202. Some surface defects, which may be referred to as ‘unexpected surface features’, may be, but are not limited to, surface corrosion, impact damage, and other forms of material wear.

[0044] The images may be processed by the previously mentioned processors to identify any unexpected features to detect potential surface damage to the aircraft. The unexpected surface features in an image may be identified using a trained machine classifier. For example, the images may be processed with reference to actual images of the surface regions captured by the aircraft-mounted image sensors. The images may be images of surface regions comprising no unexpected features (that is, before any unexpected surface features were detected). The images may form a corpus of images which are applied in a known way to train a machine classifier to ‘understand’ what undamaged surfaces look like and to identify expected surface features. In addition, or alternatively, the corpus of images may comprise a library of aircraft images rather than images of the actual aircraft. In any event, the images may include images captured in different operating conditions and under different lighting conditions. In other examples, the corpus of images may comprise both actual images of an aircraft and library images.
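
By way of illustration only, the following minimal sketch shows one way the "train on undamaged surfaces only" approach described above could be realised, using a one-class anomaly detector. The choice of Python with scikit-learn, the patch-based featurisation, and the stand-in training data are assumptions for illustration, not part of this disclosure.

import numpy as np
from sklearn.ensemble import IsolationForest

PATCH = 32  # patch edge length in pixels (assumed)

def patches(img: np.ndarray) -> np.ndarray:
    """Split a grayscale image into non-overlapping PATCH x PATCH patches,
    flattened into feature vectors."""
    h, w = img.shape
    out = [
        img[y:y + PATCH, x:x + PATCH].ravel()
        for y in range(0, h - PATCH + 1, PATCH)
        for x in range(0, w - PATCH + 1, PATCH)
    ]
    return np.asarray(out, dtype=np.float32)

# Train only on images of undamaged surface regions, as in [0044].
undamaged = [np.random.rand(256, 256) for _ in range(8)]  # stand-in corpus
X_train = np.vstack([patches(im) for im in undamaged])
clf = IsolationForest(random_state=0).fit(X_train)

# At inference, patches scored as outliers (-1) are candidate
# 'unexpected surface features'.
test = np.random.rand(256, 256)
labels = clf.predict(patches(test))
print("candidate anomaly patches:", int((labels == -1).sum()))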

[0045] Such images may be used to train the classifier in their raw form, or the images may be pre-processed using known feature extraction techniques to identify expected surface features, such as edges, corners, ridges, etc.
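
For illustration, known feature extraction of the kind mentioned above (edges, corners) might be performed with off-the-shelf routines such as those in OpenCV; the file name, the fallback stand-in image, and the thresholds below are assumptions for the sketch only.

import cv2
import numpy as np

img = cv2.imread("wing_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is None:
    img = (np.random.rand(240, 320) * 255).astype(np.uint8)  # stand-in image

edges = cv2.Canny(img, 50, 150)                           # edge map (seams, lines)
corners = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)   # corner response map
keypoints = np.argwhere(corners > 0.01 * corners.max())   # strong corner candidates

print(int(edges.sum() / 255), "edge pixels,", len(keypoints), "corner candidates")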

[0046] Training under different operating conditions may for example comprise when an aircraft is on the ground and when the aircraft is in flight, to account for differences in aircraft form, especially, for example, the up-curvature of wings when in flight. The images, in relation to at least some aircraft surface regions, may comprise different versions captured from alternative locations or angles by different image sensors. [0047] Training under different lighting conditions may for example include different positions of the sun relative to the aircraft, cloudy conditions, and when the aircraft is illuminated during hours of darkness by fixed or flashing lights.

[0048] In effect, according to examples, the machine classifier is trained to ‘understand’ what an undamaged surface looks like and which features are expected to be present. In this case, the corpus of images of undamaged surfaces would tend to include smooth, continuous surfaces, clear/straight lines and smooth shade, colour or tone gradients. In an alternative approach to training, a machine classifier may be trained using a corpus of images of damaged aircraft surfaces, such as impact damage or other forms of surface damage. From an image processing perspective, such damage would tend to be classified by discontinuities in surfaces, abrupt changes in brightness, colour or tone, and the like, as opposed to smooth surfaces representing undamaged surfaces.
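
For illustration, the alternative training approach described above, using a corpus of damaged surfaces alongside undamaged ones, reduces to an ordinary two-class classification problem. The sketch below uses Python with scikit-learn and synthetic stand-in feature vectors; none of it is part of this disclosure.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in feature vectors; in practice these would be (pre-processed)
# patches drawn from the two corpora described above.
undamaged = rng.normal(0.0, 1.0, size=(200, 64))
damaged = rng.normal(1.5, 1.0, size=(200, 64))  # shifted: abrupt changes

X = np.vstack([undamaged, damaged])
y = np.array([0] * 200 + [1] * 200)  # 1 = damaged
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))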

[0049] Figure 3 is a flow chart of an example method 300 of analysing a surface of an aircraft. At block 302 image data is captured by a first aircraft-mounted image sensor, which is one of plural image sensors in an array of image sensors. The image data represents a first image of a surface area or region of the aircraft. At block 304, the captured image is processed to identify surface features using the above described methods. At block 306, it is determined if there are any unexpected surface features present in the image. If it is determined that unexpected surface features are present in the image, the method generates an output indicating the presence of the unexpected feature at block 308. This output may indicate the presence of surface damage to the pilot, cabin crew, or ground crew. In some relatively simple examples the method may end at this point, or, if no unexpected surface features are detected at block 306, then the method returns to the start.

[0050] In other examples, the method does not end at block 308 and continues to perform a confirmation step. At block 310, the method detects that an output has been generated. In response, image data representing a second image of the surface region of the aircraft captured by a second image sensor is processed in block 312 to identify the presence of the unexpected surface feature in the second image of the surface region. The second image sensor is mounted on the aircraft and is spaced from the first image sensor to have a field of view that overlaps with the field of view of the first image sensor. The second image sensor may be triggered by the output at block 308 to capture an image. The second image sensor may capture image data normally using a first mode having a first image resolution or, if triggered, may capture image data in a second, higher resolution mode, to facilitate finer image analysis. If an unexpected surface feature is detected at block 314, an output indicating and indeed confirming the presence of the unexpected feature is generated at block 316. If there is no unexpected surface feature detected the method returns to the start at block 302.
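
For illustration, the control flow of method 300 together with the optional confirmation step might be sketched as follows; the placeholder decision rule and stand-in sensor callables are assumptions, not part of this disclosure.

from typing import Callable, Optional
import numpy as np

def detect(image: np.ndarray) -> bool:
    """Stand-in for the trained classifier of blocks 304-306."""
    return bool(image.std() > 0.25)  # placeholder decision rule

def analyse_surface(capture_first: Callable[[], np.ndarray],
                    capture_second: Callable[[str], np.ndarray]) -> Optional[str]:
    """One pass of method 300 with the optional confirmation step."""
    first = capture_first()                       # block 302
    if not detect(first):                         # blocks 304-306
        return None
    # Block 308: the output triggers the second sensor, which switches
    # from its routine low-resolution mode to a high-resolution mode.
    second = capture_second("high_resolution")    # blocks 310-312
    if detect(second):                            # block 314
        return "unexpected surface feature confirmed"   # block 316
    return "unconfirmed; re-capture recommended"

# Stand-in sensors for illustration.
result = analyse_surface(
    capture_first=lambda: np.random.rand(240, 320),
    capture_second=lambda mode: np.random.rand(960, 1280),
)
print(result)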

[0051] In another example, if no surface feature is detected at block 314, then further image data representing a further image of the surface region of the aircraft is captured and processed. This further confirmation step may be performed by repeating the steps 302 to 306 with the first image sensor, or step 312 with the second image sensor. If the first and/or second image sensors are triggered to re-capture image data, the image sensor may use a second operating mode to obtain a different set of data for processing. For example, the second operating mode may capture images at a second resolution, with a different wavelength, or using a different optical magnification. The further image data may also be captured by a third aircraft-mounted image sensor (if one is available) of the plurality of aircraft-mounted image sensors having a different field of view of the surface region. In other examples, the second sensor may already be capturing images, but in low resolution to reduce storage requirements and processing power. The trigger caused by detection of an output indicating the presence of an unexpected surface feature may cause the second image sensor to begin higher resolution scanning. Alternatively, the damage may occur much closer to the second sensor than to the first, in which case the second sensor may detect it first and trigger the first sensor to capture improved data (e.g. by using a higher resolution mode, a longer focal length, or a lower-noise mode such as a lower ISO or slower shutter speed). In other examples, both image sensors may detect the damage at roughly the same time, in which case there would be little or no need for a trigger to initiate a second image capture event.

[0052] Processing the image to identify the surface features may be simplified by assuming the location of the image sensors that capture the image data. For example, this could include calibrating the image sensors so that their exact positions and fields of view are known.

[0053] In a variant to this process, in which a machine classifier has been trained using raw images that have not been pre-processed, captured images are not pre-processed either and the comparison is conducted using the raw images.

[0054] In other examples, image data from plural image sensors, the positions of which are known relative to one another and/or to the respective surfaces of the aircraft, may be used to triangulate the location of an unexpected surface feature so that its location relative to the image sensors on the respective surface can be determined more accurately. Knowledge of the surface location may then be used to map the location of the surface feature to a digital model of the aircraft. As previously mentioned, such an approach benefits from knowledge of the positions of image sensors, determined, for example, by calibration. Triangulation in this way provides further location information of suspected surface damage on an aircraft to facilitate a more accurate indication to crew and ground crew.
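
For illustration, given calibrated projection matrices for two image sensors, the triangulation described above could be performed with a standard routine such as OpenCV's triangulatePoints; all numeric values below are placeholders standing in for real calibration data, not part of this disclosure.

import numpy as np
import cv2

# Projection matrices for two calibrated, aircraft-mounted sensors
# (placeholder intrinsics and extrinsics; real values come from
# calibration, as noted in the text).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # sensor 1 at origin
R2 = cv2.Rodrigues(np.array([[0.0], [0.3], [0.0]]))[0]     # sensor 2 rotated...
t2 = np.array([[-2.0], [0.0], [0.0]])                      # ...and 2 m away
P2 = K @ np.hstack([R2, t2])

# Pixel coordinates of the same suspected surface feature in each image.
pt1 = np.array([[352.0], [260.0]])
pt2 = np.array([[310.0], [258.0]])

Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
X = (Xh[:3] / Xh[3]).ravel()                   # 3-D point in sensor-1 frame
print("feature location (m, sensor-1 frame):", X)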

[0055] Some examples may not operate the image sensors at all times during the flight, for instance to save energy and/or reduce the processing overhead. The image sensors may, for example, be controlled to operate during perceived relatively high-risk phases of the flight, such as during one or more of: take-off, landing, flight at or below 5,000 ft, and taxiing on the ground. Control of the image sensors in this way may be via the processor(s) in response to signals from an avionics system of the respective aircraft and/or in response to altimeters and the like. Additionally, or alternatively, the image sensors may be activated at any time during the flight in response to a suspected impact. Such an event may be determined by other aircraft-mounted sensors, such as vibration sensors, sound sensors or motion detectors, or by an unexpected change in fuel consumption. A detected vibration characteristic may be consistent with an impact between the aircraft and an object. In some examples, the image sensors may be activated in response to detection of a foreign object near to the aircraft while on the ground or in flight. Detection of the foreign object may be by the aircraft itself, using aircraft-mounted detectors such as radar or proximity detectors, by ground-based systems, such as radar systems at an airport, or by detection systems on other aircraft or satellites. In some examples, such off-aircraft systems may trigger image sensors to activate on all aircraft in the vicinity of a detected foreign object.
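
For illustration, the activation logic described above might combine flight-phase, altitude and impact cues as in the following sketch; the threshold values and field names are assumptions, not part of this disclosure.

from dataclasses import dataclass

THRESHOLD_ALTITUDE_FT = 5000  # from the example high-risk phases above

@dataclass
class AircraftState:
    altitude_ft: float
    phase: str            # e.g. "taxi", "take-off", "cruise", "landing"
    vibration_g: float    # peak accelerometer reading
    impact_threshold_g: float = 2.0  # assumed impact signature level

def sensors_should_activate(state: AircraftState) -> bool:
    """Activate imaging during high-risk phases or on a suspected impact."""
    high_risk_phase = (
        state.phase in ("taxi", "take-off", "landing")
        or state.altitude_ft <= THRESHOLD_ALTITUDE_FT
    )
    suspected_impact = state.vibration_g >= state.impact_threshold_g
    return high_risk_phase or suspected_impact

print(sensors_should_activate(AircraftState(31000, "cruise", 2.4)))  # True: impact cue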

[0056] As previously mentioned, the one or more image sensors may record images and/or videos at any wavelength. For example, the image sensors may be visible, infrared, near-infrared, ultraviolet, broadband, or x-ray image sensors. Beneficially, this allows different parts of the electromagnetic spectrum to be used when imaging a surface, and thus captures different features that may not be visible to the human eye. One example would be the use of infrared image sensors to build up an expected digital map of the infrared profile of the aircraft. Images which contain features that differ from the expected digital map will be flagged. This could indicate internal structural damage, for example resulting in so-called ‘hot spots’ or ‘cold spots’ on the surface of the aircraft, that are not visible in the visible part of the electromagnetic spectrum.
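
For illustration, flagging deviations from an expected infrared profile can be as simple as a per-pixel comparison, as in the sketch below; the tolerance value and stand-in data are assumptions for illustration only.

import numpy as np

def flag_thermal_anomalies(ir_frame: np.ndarray,
                           expected_map: np.ndarray,
                           tolerance_c: float = 5.0) -> np.ndarray:
    """Return a boolean mask of pixels deviating from the expected IR
    profile by more than tolerance_c (candidate hot/cold spots)."""
    return np.abs(ir_frame - expected_map) > tolerance_c

expected = np.full((120, 160), 20.0)      # stand-in expected profile (deg C)
frame = expected.copy()
frame[40:44, 60:66] += 12.0               # injected 'hot spot' for the demo
mask = flag_thermal_anomalies(frame, expected)
print("anomalous pixels:", int(mask.sum()))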

[0057] Over time, as images of damage are captured by the plurality of image sensors, training of the machine classifiers can be augmented and/or continued to include training to identify damage and not just determine the differences compared to undamaged surfaces. For example, the determination that there is an unexpected surface feature in the image captured by the first image sensor (blocks 302-308) may be a false positive due to ambient conditions, such as cloud cover, reflections or other lighting conditions, affecting the captured image. In any case, these images may still be used to train the classifier.

[0058] Figure 4 is an example illustration of a digital map 400 of a region or part of the surface of the aircraft, for example the leading edge of a wing, which is more prone to impact damage. The digital map 400 comprises features of the wing surface 202 identified from the captured images. Such identified and expected surface features that are included in the digital map 400 may include, but are not limited to, trailing edges 404, seams 406, leading edges 408, and no step lines (not pictured). The digital map may include information about surface contours and expected shapes of surfaces.
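
For illustration, a digital map of the kind shown in Figure 4 might be represented as a collection of named expected surface features with associated geometry, as in the following sketch; the structure and coordinate convention are assumptions, not part of this disclosure.

from dataclasses import dataclass, field

@dataclass
class SurfaceFeature:
    kind: str       # e.g. "trailing_edge", "seam", "no_step_line"
    polyline: list  # feature geometry in model coordinates (m)

@dataclass
class DigitalMap:
    region: str                            # e.g. "starboard wing leading edge"
    features: list = field(default_factory=list)

    def add(self, kind: str, polyline: list) -> None:
        self.features.append(SurfaceFeature(kind, polyline))

wing_map = DigitalMap("starboard wing leading edge")
wing_map.add("seam", [(0.0, 1.2), (3.5, 1.2)])
wing_map.add("leading_edge", [(0.0, 0.0), (4.0, 0.1)])
print(len(wing_map.features), "expected features recorded")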

[0059] A digital map according to Figure 4 may be generated in various different ways. For example, the map may be generated by a processing system by reference to actual images of the surface regions captured by the aircraft-mounted image sensors.

[0060] In any case, according to examples, a resulting map comprises a digital representation of the surfaces of an aircraft and comprises the location and form of expected surface features thereon.

[0061] Comparing a captured image to the digital map may comprise inputting the captured image data to the trained machine classifier to determine if surface damage is detected. The trained classifier may identify surface damage from the captured image either by identifying that the captured image includes an unexpected surface feature, when the classifier is trained using a corpus of undamaged aircraft surfaces, or by identifying surface damage directly, when the classifier is trained using a corpus of images of surface damage.

[0062] The examples described in the foregoing use an array of image sensors to determine surface damage to an aircraft. However, it will be appreciated that the methods described can be used for other applications. One example may be the use of the array of image sensors and respective processors to determine the presence of an unmanned aerial vehicle in the vicinity of the aircraft 102.

[0063] It is noted that the term “or” as used herein is to be interpreted to mean “and/or”, unless expressly stated otherwise. Throughout, the term aircraft has been used, but it should be appreciated that any other vehicle could be used, such as cars, lorries, UAVs, satellites, ships, and the like.