

Title:
UNMANNED AERIAL VEHICLE
Document Type and Number:
WIPO Patent Application WO/2019/076758
Kind Code:
A1
Abstract:
The present invention relates to unmanned aerial vehicle for agricultural field assessment. It is described to fly (210) an unmanned aerial vehicle to a location in a field containing a crop. A camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body of the unmanned aerial vehicle. The vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle. The body of the unmanned aerial vehicle is positioned (220) in a sub- stantially stationary aspect above the crop at the location such that the camera is at a first posi- tion above the crop. The unmanned aerial vehicle is controlled (230) to fly vertically at the loca- tion such that the camera is at a second position below that of the first position. The camera acquires (240) at least one image relating to the crop when the camera is between the first posi- tion and the second position.

Inventors:
PETERS OLE (DE)
HOFFMANN HOLGER (DE)
Application Number:
PCT/EP2018/077899
Publication Date:
April 25, 2019
Filing Date:
October 12, 2018
Assignee:
BASF SE (DE)
International Classes:
B64C39/02; A01B1/00; B64D47/08; G06F17/40; G06K1/00
Domestic Patent References:
WO2016123201A12016-08-04
WO2017041304A12017-03-16
Foreign References:
EP2772814A22014-09-03
Other References:
CÓRCOLES JUAN I ET AL: "Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle", BIOSYSTEMS ENGINEERING, ACADEMIC PRESS, UK, vol. 115, no. 1, 17 March 2013 (2013-03-17), pages 31-42, XP028544816, ISSN: 1537-5110, DOI: 10.1016/J.BIOSYSTEMSENG.2013.02.002
SUGIURA R ET AL: "Remote-sensing Technology for Vegetation Monitoring using an Unmanned Helicopter", BIOSYSTEMS ENGINEERING, ACADEMIC PRESS, UK, vol. 90, no. 4, 1 April 2005 (2005-04-01), pages 369-379, XP004820063, ISSN: 1537-5110, DOI: 10.1016/J.BIOSYSTEMSENG.2004.12.011
N.J.J. BREDA: "Ground-based measurements of the leaf area index: a review of methods, instruments and current controversies", JOURNAL OF EXPERIMENTAL BOTANY, vol. 54, no. 392, 2003, pages 2403-2417, XP055034924, DOI: 10.1093/jxb/erg263
Attorney, Agent or Firm:
BASF IP ASSOCIATION (DE)
Claims:

1. An unmanned aerial vehicle (10) for agricultural field assessment, comprising:

a control unit (20); and

a camera (30);

wherein the camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body (40) of the unmanned aerial vehicle, and wherein the vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle; wherein the control unit is configured to fly the unmanned aerial vehicle to a location in the field containing the crop; wherein the control unit is configured to position the body of the unmanned aerial vehicle in a substantially stationary aspect above the crop at the location such that the camera is at a first position above the crop; wherein the control unit is configured to control the unmanned aerial vehicle to fly vertically at the location such that the camera is at a second position below that of the first position; and wherein the control unit is configured to control the camera to acquire at least one image relating to the crop when the camera is between the first position and the second position.

2. Unmanned aerial vehicle according to claim 1, wherein the at least one image relating to the crop comprises at least one image acquired when the camera is within the canopy of the crop.

3. Unmanned aerial vehicle according to any of claims 1-2, wherein the at least one image comprises a plurality of images and wherein the control unit is configured to control the camera to acquire the plurality of images at a corresponding plurality of different positions between the first position and the second position.

4. Unmanned aerial vehicle according to any of claims 1-3, wherein the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the second position.

5. Unmanned aerial vehicle according to claim 4, wherein the second position comprises the ground.

6. Unmanned aerial vehicle according to any of claims 1-5, wherein the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the first position.

7. Unmanned aerial vehicle according to any of claims 1-6, wherein a processing unit (50) is configured to analyse the at least one image to determine a leaf area index for the crop.

8. Unmanned aerial vehicle according to any of claims 1-7, wherein a processing unit (60) is configured to analyse the at least one image to determine at least one weed, and/or determine at least one disease, and/or determine at least one pest, and/or determine at least one insect, and/or determine at least one nutritional deficiency.

9. Unmanned aerial vehicle according to any of claims 7-8, wherein the unmanned aerial vehicle comprises the processing unit (50) and/or the processing unit (60).

10. Unmanned aerial vehicle according to any of claims 1-8, wherein the unmanned aerial vehicle comprises at least one leg (70) attached to the body of the unmanned aerial vehicle, and wherein the camera is mounted on a leg of the at least one leg.

11. Unmanned aerial vehicle according to claim 10, wherein vertical flight to position the camera at the second position comprises the control unit landing the unmanned aerial vehicle on the at least one leg at the location.

12. Unmanned aerial vehicle according to any of claims 9-11, wherein the camera is configured to acquire at least one image relating to the field, and wherein the processing unit is configured to analyse the at least one image relating to the field to determine the location in the field.

13. Unmanned aerial vehicle according to any of claims 1-12, wherein the unmanned aerial vehicle comprises location determining means (90).

14. A method (200) for agricultural field assessment, comprising:

a) flying (210) an unmanned aerial vehicle to a location in a field containing a crop, wherein a camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body of the unmanned aerial vehicle, and wherein the vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle;

b) positioning (220) a body of the unmanned aerial vehicle in a substantially stationary aspect above the crop at the location such that the camera is at a first position above the crop;

c) controlling (230) the unmanned aerial vehicle to fly vertically at the location such that the camera is at a second position below that of the first position; and

d) acquiring (240) by the camera at least one image relating to the crop when the camera is between the first position and the second position.

15. A computer program element for controlling an unmanned aerial vehicle according to any of claims 1 to 13, which when executed by a processor is configured to carry out the method of claim 14.

Description:
UNMANNED AERIAL VEHICLE

FIELD OF THE INVENTION

The present invention relates to an unmanned aerial vehicle for agricultural field assessment, and to a method for agricultural field assessment, as well as to a computer program element.

BACKGROUND OF THE INVENTION

The general background of this invention is the assessment of a field status in terms of weeds, diseases and pests, as well as the assessment of ecophysiology through, for example, the determination of a leaf area index (LAI). Presently, remote sensing and unmanned aerial vehicles such as drones do not acquire imagery at the required resolution and quality to perform the required image diagnostics. Additionally, it is very time consuming for a farmer to enter a field and acquire the necessary image data.

SUMMARY OF THE INVENTION

It would be advantageous to have improved means for agricultural field assessment.

The object of the present invention is solved with the subject matter of the independent claims, wherein further embodiments are incorporated in the dependent claims. It should be noted that the following described aspects and examples of the invention apply also for the unmanned aerial vehicle for agricultural field assessment, the method for agricultural field assessment, and for the computer program element.

According to a first aspect, there is provided an unmanned aerial vehicle for agricultural field assessment, comprising:

a control unit; and

- a camera.

The camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body of the unmanned aerial vehicle. The vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle. The control unit is configured to fly the unmanned aerial vehicle to a location in the field containing the crop. The control unit is configured also to position the body of the unmanned aerial vehicle in a substantially stationary aspect above the crop at the location such that the camera is at a first position above the crop. The control unit is configured to control the unmanned aerial vehicle to fly vertically at the location such that the camera is at a second position below that of the first position. The control unit is configured also to control the camera to acquire at least one image relating to the crop when the camera is between the first position and the second position.

In other words, an unmanned aerial vehicle (UAV) such as a drone flies to a part of a field, and flies vertically at that location such that a camera fixed below the body of the UAV is moved toward the crop and can be lowered into the crop through the UAV descending; because the camera is located sufficiently below the body of the UAV, the body of the UAV can be maintained at all times above the crop. Whilst being lowered and/or raised, including when the camera is not being moved via movement of the UAV, the camera can take pictures relating to the crop. These pictures can then be appropriately analysed to determine if there are weeds, diseases, or pests at that part of the field. Also, the pictures can provide information on the leaf canopy area and height, and in this way a leaf area index (LAI) for the crop at that location can be determined.
The image data, which can then be analysed, is acquired automatically in a reproducible manner, enabling results such as the LAI to be accurately determined at that location and compared with values calculated at other parts (or locations) in the crop. Furthermore, by lowering the camera into the crop in this manner, images can be acquired at various heights, including at ground level and above the canopy. Again with respect to LAI, non-randomness in the canopy, such as leaves sitting one on top of the other, which leads to a reduction in the determined LAI, can be mitigated because data can be acquired at various heights. Also, all parts of the plant can be interrogated from the top of the plant all the way down to ground level, ensuring that a disease or pest affecting any part of the plant is not missed. Also, a weed can be determined with better accuracy, because imagery can be acquired at different heights associated with different parts of the weed, ensuring that image recognition analyses can operate optimally in that it is more likely to obtain a fit between acquired data and reference imagery associated with that weed.
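The descend-and-capture sequence described above can be sketched as a simple control loop. This is an illustrative sketch only, not the patent's implementation; the class, method, and parameter names (SurveyController, capture_descent, camera_offset_m, etc.) are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SurveyController:
    """Illustrative control-unit sketch (all names invented)."""
    camera_offset_m: float   # vertical separation between camera and UAV body
    crop_height_m: float     # average vertical height of the crop plants
    images: list = field(default_factory=list)

    def capture_descent(self, hover_body_altitude_m: float, step_m: float):
        """Descend vertically in steps, recording a capture at each step.

        Because camera_offset_m exceeds the crop height, the body stays
        above the canopy even when the camera reaches the ground.
        """
        assert self.camera_offset_m > self.crop_height_m
        body_alt = hover_body_altitude_m
        while body_alt >= self.camera_offset_m:  # camera still at/above ground
            camera_alt = body_alt - self.camera_offset_m
            self.images.append(camera_alt)       # stand-in for a real frame
            body_alt -= step_m
        return self.images

uav = SurveyController(camera_offset_m=1.0, crop_height_m=0.8)
heights = uav.capture_descent(hover_body_altitude_m=2.0, step_m=0.5)
```

Here `heights` records the camera heights above ground at which frames were taken, spanning the first position (above the canopy) down to ground level.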

The UAV can acquire data around a field, for example in a square-like 20 m x 20 m pattern, or determine itself from image processing where to position itself to acquire data, or it could be directed to a location by a user.

Thus, in addition to images being acquired that can be used to determine LAIs, the images can be used to determine weeds and insect damage, enabling remedial action to be taken. In this way, this data can be acquired more quickly and with greater accuracy than with present techniques requiring a human operator to enter a field and manually acquire the required relevant data.

In an example, the at least one image relating to the crop comprises at least one image acquired when the camera is within the canopy of the crop.

In this way, a disease and/or insect damage can be more effectively detected on the basis of image processing of acquired images, and weeds can be determined and identified more accurately. Also, by acquiring images within the canopy, a leaf area index (LAI) can be determined from the acquired imagery.

In an example, the at least one image comprises a plurality of images, and the control unit is configured to control the camera to acquire the plurality of images at a corresponding plurality of different positions between the first position and the second position. Thus, by acquiring images at different heights, localised disease and/or insect damage, for example within particular plants of the crop, can be detected from such imagery. Also, a leaf area index can be more accurately determined because it can be based on more than one image from within the crop at different heights, mitigating effects such as leaf overlap that can otherwise lead to an underestimation of a LAI value.

In an example, the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the second position.

In other words, imagery can be acquired when the UAV has descended as much as it can, such that the camera is as close to the ground as it can be.

In an example, the second position comprises the ground.

In other words, the camera attached to the UAV touches the ground because the UAV has flown vertically downwards. In other words, the camera is fixed to a lowest or one of the lowest points of the UAV, and the UAV in effect lands on the ground and in doing so the camera is brought into contact with the ground. Thus, imagery can be acquired at all locations relating to a crop, including above the crop and within the crop, and imagery can also be acquired from the ground position. In this way, not only can imagery be acquired at all points within a crop, from above the crop all the way to the ground, but by acquiring imagery from the ground a reference height for this image and all the other images can be determined with reference to the ground. To put this another way, the height of all images above the ground can be determined. Thus, the height of the crop can be determined, as can the height at which diseases and/or insect damage occur. Also, the height of images acquired to be used for a LAI measurement can be determined, providing for a more accurate LAI value determination. Additionally, images at all heights of the crop can be acquired, providing for the ability to acquire more accurate LAI values and to ensure that all areas of plants are imaged, meaning that diseases, insect damage etc. are less likely to be missed.
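The ground-referencing idea above can be sketched in a few lines: if each capture is logged with the UAV's cumulative descent distance, the capture made at ground contact fixes the zero reference, and every image's height above ground follows directly. The function and variable names here are hypothetical.

```python
# Illustrative sketch (names invented): each image is logged with the UAV's
# cumulative descent distance; the capture made at ground contact fixes the
# zero reference, so every image's height above ground follows directly.
def heights_above_ground(descent_at_capture_m, descent_at_ground_contact_m):
    return [descent_at_ground_contact_m - d for d in descent_at_capture_m]

# captures at descent distances 0, 0.5, 1.0 and 1.8 m; ground touched at 1.8 m
h = heights_above_ground([0.0, 0.5, 1.0, 1.8], 1.8)
```

The first value is then the camera's starting height above ground, an upper bound on the crop height if the first capture was made just above the canopy.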

In an example, the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the first position.

In this way, imagery from above the crop can be acquired to provide a reference value used in determining a LAI value. Reference values for LAI measurements can also be acquired when the camera has been moved away from the first position as the UAV descends. The image from above the crop can also be used in determining if there are weeds, diseases, pests and insects and/or insect damage to vegetation.

In an example, a processing unit is configured to analyse the at least one image to determine a leaf area index for the crop.

In an example, a processing unit is configured to analyse the at least one image to determine at least one weed, and/or determine at least one disease, and/or determine at least one pest, and/or determine at least one insect, and/or determine at least one nutritional deficiency.

In an example, the unmanned aerial vehicle comprises the processing unit configured to determine the leaf area index and/or the processing unit configured to determine the at least one weed, disease, pest, insect and/or nutritional deficiency.

In an example, the unmanned aerial vehicle comprises at least one leg attached to the body of the unmanned aerial vehicle, and wherein the camera is mounted on a leg of the at least one leg. In an example, vertical flight to position the camera at the second position comprises the control unit landing the unmanned aerial vehicle on the at least one leg at the location.

In this manner, imagery of the crop can be acquired when the UAV has stopped, is feathering, or has otherwise reduced the downdraught, and thus the leaves are not blown about by the airstream, and the imagery can be used to more accurately determine diseases, weeds, insect damage, LAI values etc.

In an example, the camera is configured to acquire at least one image relating to the field, and wherein the processing unit is configured to analyse the at least one image relating to the field to determine the location in the field.

In this manner, the UAV can acquire imagery of the field and determine the locations or location to which it should fly, descend and acquire imagery. In this way, the UAV can operate in a completely autonomous manner.

In an example, the unmanned aerial vehicle comprises location determining means.

According to a second aspect, there is provided a method for agricultural field assessment, comprising:

a) flying an unmanned aerial vehicle to a location in a field containing a crop, wherein a camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body of the unmanned aerial vehicle, and wherein the vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle;

b) positioning the body of the unmanned aerial vehicle in a substantially stationary aspect above the crop at the location such that the camera is at a first position above the crop;

c) controlling the unmanned aerial vehicle to fly vertically at the location such that the camera is at a second position below that of the first position;

d) acquiring by the camera at least one image relating to the crop when the camera is between the first position and the second position.

According to another aspect, there is provided a computer program element for controlling the UAV of the first aspect, which when executed by a processor is configured to carry out the method of the second aspect. Advantageously, the benefits provided by any of the above aspects equally apply to all of the other aspects and vice versa.

The above aspects and examples will become apparent from and be elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments will be described in the following with reference to the drawings:

Fig. 1 shows a schematic set up of an example of an unmanned aerial vehicle for agricultural field assessment;

Fig. 2 shows a method for agricultural field assessment;

Fig. 3 shows a schematic representation of locations in a field;

Fig. 4 shows a schematic representation of a detailed example of the unmanned aerial vehicle of Fig. 1;

Fig. 5 shows a schematic representation of a detailed example of the unmanned aerial vehicle of Fig. 1;

Fig. 6 shows a schematic representation of a detailed example of the unmanned aerial vehicle of Fig. 1.

DETAILED DESCRIPTION OF EMBODIMENTS

Fig. 1 shows an example of an unmanned aerial vehicle 10 for agricultural field assessment, where dashed boxes represent optional features. The unmanned aerial vehicle 10 comprises a control unit 20 and a camera 30. The camera 30 is mounted on the unmanned aerial vehicle at a location vertically separated from a body 40 of the unmanned aerial vehicle. The vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle. Thus, the camera 30 is mounted below the bottom of the UAV's body 40 by a distance greater than an average height of plants of the crop. The control unit 20 is configured to fly the unmanned aerial vehicle to a location in the field containing the crop. The control unit 20 is configured also to position the body 40 of the unmanned aerial vehicle 10 in a substantially stationary aspect above the crop at the location such that the camera 30 is at a first position above the crop. The control unit 20 is configured also to control the unmanned aerial vehicle 10 to fly vertically at the location such that the camera 30 is at a second position below that of the first position. The control unit 20 is configured to control the camera 30 to acquire at least one image relating to the crop when the camera is between the first position and the second position.

In an example, the camera is a 360-degree all-around camera. In an example, the camera comprises a fisheye optical sensor, usable to enable a leaf area index to be calculated from the imagery. In an example, the camera acquires image data at a number of different angles, enabling canopy light interception to be determined at those different angles, from which a LAI can be computed. In an example, the camera comprises a normal imaging sensor usable to image foliage at a resolution that enables image processing to determine a disease, weed, insect damage, or an insect itself. The normal imaging sensor can also be used to determine an LAI.
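One standard way to compute a LAI from canopy light interception of the kind just described is inversion of the Beer-Lambert extinction relation (reviewed in the Breda reference cited in this description). This is a textbook method offered as illustration, not necessarily the exact computation used here; the extinction coefficient k is an assumed value.

```python
import math

def lai_from_gap_fraction(gap_fraction: float, k: float = 0.5) -> float:
    """Invert the Beer-Lambert light-extinction relation P = exp(-k * LAI),
    giving LAI = -ln(P) / k.

    gap_fraction: fraction of sky visible through the canopy in an
                  upward-looking (e.g. fisheye) image.
    k:            canopy extinction coefficient (assumed value; in practice
                  it depends on leaf angle distribution and view angle).
    """
    return -math.log(gap_fraction) / k

# e.g. 25 % of sky pixels visible through the canopy
lai = lai_from_gap_fraction(0.25, k=0.5)
```

Measuring the gap fraction at several zenith angles, as a fisheye sensor allows, lets the angle-dependence of k be averaged out.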

In an example, the camera comprises both a fisheye optical sensor and a normal imaging sensor, and in this way the camera is optimized for determining a LAI and for determining weeds, diseases, insect damage etc. at the same time.

In an example, the camera is configured to acquire data below 500 nm; in this way, LAIs can be determined from the images more accurately.

In an example, the camera is configured to operate over the visible wavelength range. In an example, the camera is configured to operate in the near infrared range. In an example, the camera is monochromatic. In an example, the camera is configured to acquire colour information such as RGB. In an example, the camera is configured to acquire hyperspectral information. In this way, the analysis of the imagery to automatically detect diseases, pests, soil nutrients, yield factors (kernel size, number of spikes, numbers of ears of corn), weeds, insect damage and insects can be improved.

In an example, the control unit is configured to determine the distance the camera has moved away from the first position; for example, the control unit can determine how far the UAV has flown in moving such that the camera moves from the first position towards the second position. This can be via an inertial sensing system that detects movement, or through the use of, for example, radar or ultrasound systems detecting a height above the ground.

According to an example, the at least one image relating to the crop comprises at least one image acquired when the camera is within the canopy of the crop.

According to an example, the at least one image comprises a plurality of images. The control unit is configured to control the camera to acquire the plurality of images at a corresponding plurality of different positions between the first position and the second position.

According to an example, the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the second position.

According to an example, the second position comprises the ground.

According to an example, the control unit is configured to control the camera to acquire one or more images of the at least one image relating to the crop when the camera is in the first position.

According to an example, a processing unit 50 is configured to analyse the at least one image to determine a leaf area index for the crop.

Information on the determination of leaf area index can be found, for example, in N.J.J. Breda, "Ground-based measurements of the leaf area index: a review of methods, instruments and current controversies", Journal of Experimental Botany, Vol. 54, No. 392, pages 2403-2417 (2003), and from the following website: www.licor.com/env/products/leaf_area.

According to an example, a processing unit 60 is configured to analyse the at least one image to determine at least one weed, and/or determine at least one disease, and/or determine at least one pest, and/or determine at least one insect, and/or determine at least one nutritional deficiency.

In an example, the processing unit 50 is the processing unit 60.

In an example, the processing unit is configured to analyse the at least one image to determine at least one type of weed, and/or determine at least one type of disease, and/or determine at least one type of pest, and/or determine at least one type of insect, and/or determine at least one type of nutritional deficiency.

Thus, an unmanned aerial vehicle such as a drone can fly around a field and descend at a location, and a camera acquires images of plants of the crop; on the basis of image processing of those images a determination can be made that there are weeds, and what type of weed. The same applies for determining that there are pests, diseases, insects, nutritional deficiencies etc.

In an example, analysis of the at least one image comprises utilisation of a machine learning algorithm. In an example, the machine learning algorithm comprises a decision tree algorithm.

In an example, the machine learning algorithm comprises an artificial neural network.

In an example, the machine learning algorithm has been taught on the basis of a plurality of images. In an example, the machine learning algorithm has been taught on the basis of a plurality of images containing imagery of at least one type of weed, and/or at least one type of plant suffering from one or more diseases, and/or at least one type of plant suffering from insect infestation from one or more types of insect, and/or at least one type of insect (when the imagery has sufficient resolution), and/or at least one type of plant suffering from one or more pests, and/or at least one type of plant suffering from one or more types of nutritional deficiency.

The imagery acquired by the camera 30 is at a resolution that enables one type of weed to be differentiated from another type of weed. The imagery can be at a resolution that enables pest or insect infested crops to be determined, either from the imagery of the crop itself or from acquisition of, for example, the insects themselves. The drone can have a Global Positioning System (GPS), and this enables the location of acquired imagery to be determined. The drone can also have inertial navigation systems, based for example on laser gyroscopes. The inertial navigation systems can function alone, without a GPS, to determine the position of the drone where imagery was acquired, by determining movement away from a known location or a number of known locations, such as a charging station. The camera passes the acquired imagery to the processing unit. Image analysis software operates on the processing unit. The image analysis software can use feature extraction, such as edge detection, and object detection analysis that can, for example, identify structures in and around the field such as buildings, roads, fences, hedges, etc.
Thus, on the basis of known locations of such objects, the processing unit can patch the acquired imagery together to in effect create a synthetic representation of the environment that can in effect be overlaid over a geographical map of the environment. Thus, the geographical location of each image can be determined, and there need not be associated GPS and/or inertial navigation based information associated with acquired imagery. In other words, an image based location system can be used to locate the drone 10. However, if there is GPS and/or inertial navigation information available, then such image analysis, which can place specific images at specific geographical locations only on the basis of the imagery, is not required; although, if GPS and/or inertial navigation based information is available, then such image analysis can be used to augment the geographical location associated with an image.

The processing unit therefore runs image processing software that can be part of the image processing that determines vegetation location on the basis of feature extraction, if that is used. This software comprises a machine learning analyser. Images of specific weeds are acquired, with information also relating to the size of weeds being used. Information relating to a geographical location in the world where such a weed is to be found, and information relating to a time of year when that weed is to be found, including when in flower etc., can be tagged with the imagery. The names of the weeds can also be tagged with the imagery of the weeds. The machine learning analyser, which can be based on an artificial neural network or a decision tree analyser, is then trained on this ground truth acquired imagery.
In this way, when a new image of vegetation is presented to the analyser, where such an image can have an associated time stamp such as time of year and a geographical location such as Germany or South Africa tagged to it, the analyser determines the specific type of weed that is in the image through a comparison of imagery of a weed found in the new image with imagery of different weeds it has been trained on, where the size of weeds, and where and when they grow can also be taken into account. The specific location of that weed type on the ground within the environment, and its size, can therefore be determined.
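The comparison of a new image against trained reference imagery can be illustrated with a toy nearest-neighbour stand-in. The actual analyser described above is a neural network or decision tree trained on ground-truth imagery; the feature vectors and weed names below are entirely invented for illustration.

```python
import math

# Toy nearest-neighbour stand-in for the trained analyser described above.
# Each training example pairs an invented 2-D feature vector (standing in
# for colour/shape descriptors extracted from imagery) with a label.
TRAINING = [
    ((0.2, 0.9), "chickweed"),
    ((0.8, 0.3), "thistle"),
    ((0.5, 0.5), "crop plant"),
]

def classify(features):
    """Return the label of the closest training example (Euclidean distance)."""
    return min(TRAINING, key=lambda example: math.dist(features, example[0]))[1]

label = classify((0.75, 0.25))  # features extracted from a new image
```

In practice the feature space is far higher-dimensional, and metadata such as time of year and geographical region would be additional inputs, as the text notes.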

The processing unit has access to a database containing different weed types. This database has been compiled from experimentally determined data. The image processing software, using the machine learning algorithm, has also been taught to recognize insects, plants infested with insects, plants suffering from pests, and plants that are suffering from nutritional deficiencies. This is done in the same manner as discussed above, through training based on previously acquired imagery.

According to an example, the unmanned aerial vehicle comprises the processing unit 50 and/or the processing unit 60.

In an example, the control unit and the processing unit 50 and/or processing unit 60 are the same unit.

According to an example, the unmanned aerial vehicle comprises at least one leg 70 attached to the body of the unmanned aerial vehicle, and wherein the camera is mounted on a leg of the at least one leg.

In an example, the camera is mounted on the leg at a distal end of the leg, opposite the end that is mounted to the body of the unmanned aerial vehicle.

According to an example, vertical flight to position the camera at the second position comprises the control unit landing the unmanned aerial vehicle on the at least one leg at the location.

In an example, the at least one leg comprises three legs.

In an example, the leg(s) can be made from lightweight carbon sticks. In an example, the leg(s) can be 1 m long, or other lengths enabling the body of the UAV to be above the canopy of the crop when the UAV lands. Legs of different lengths could be used with respect to different crops being interrogated.

According to an example, the camera is configured to acquire at least one image relating to the field. The processing unit is configured to analyse the at least one image relating to the field to determine the location in the field.

In an example, the UAV can acquire imagery and analyse that imagery to determine a regular grid, of for example 20 m by 20 m, and acquire imagery at locations associated with such a grid.
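A minimal sketch of generating such a regular grid of acquisition locations over a rectangular field (field dimensions and spacing below are illustrative):

```python
def sampling_grid(width_m, height_m, spacing_m=20.0):
    """Return (x, y) waypoints on a regular grid over a rectangular field.

    Coordinates are metres from one field corner; spacing_m is the grid
    pitch (e.g. 20 m by 20 m as in the example above).
    """
    nx = int(width_m // spacing_m) + 1
    ny = int(height_m // spacing_m) + 1
    return [(i * spacing_m, j * spacing_m) for j in range(ny) for i in range(nx)]

grid = sampling_grid(100, 60, 20.0)
print(len(grid))  # 6 x 4 = 24 waypoints
```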

In an example, the UAV can acquire imagery and analyse it to determine areas that could, for example, be suffering from a disease or insect damage, or where there could be a weed. The UAV can then fly to such a location and acquire imagery that can be analysed to provide an accurate determination as to whether there is a disease, insect damage or a weed.

According to an example, the unmanned aerial vehicle comprises location determining means 90. In an example, the location determining means is configured to provide the control unit with at least one location associated with the camera when the at least one image relating to the crop was acquired.

The location can be a geographical location, with respect to a precise location on the ground, or can be a location on the ground that is referenced to another position or positions on the ground, such as a boundary of a field or the location of a drone docking station or charging station. In other words, an absolute geographical location can be utilized or a location on the ground that need not be known in absolute terms, but that is referenced to a known location can be used.

In an example, the location is an absolute geographical location.

In an example, the location is a location that is determined with reference to a known location or locations.

In other words, an image can be associated with a specific location on the ground without knowing its precise geographical position: by knowing where an image was acquired with respect to known position(s) on the ground, the acquisition location can be logged. Absolute GPS-derived locations of where the UAV has acquired imagery of a crop could be provided, and/or the locations of where imagery was acquired relative to a known position such as a field boundary or the position of a charging station for the UAV could be provided, which again enables the farmer to determine the exact positions where imagery was acquired, because the absolute position of the field boundary or charging station is known.
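The conversion from a location referenced to a known point (such as the charging station) to an absolute geographical location can be sketched as follows; the flat-earth approximation used here is an assumption that is adequate at field scale, not a method prescribed by the disclosure:

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius in metres

def offset_to_latlon(ref_lat, ref_lon, east_m, north_m):
    """Convert a metre offset from a known reference point (e.g. the UAV
    charging station) into an absolute latitude/longitude, using a local
    flat-earth approximation (adequate over a single field)."""
    dlat = math.degrees(north_m / EARTH_R)
    dlon = math.degrees(east_m / (EARTH_R * math.cos(math.radians(ref_lat))))
    return ref_lat + dlat, ref_lon + dlon

# 100 m east and 50 m north of a reference point at 52 N, 10 E.
lat, lon = offset_to_latlon(52.0, 10.0, east_m=100.0, north_m=50.0)
```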

In an example, a GPS unit (92) is used to determine, and/or is used in determining, the location, such as the location of the camera when specific images were acquired.

In an example, an inertial navigation unit (94) is used alone, or in combination with a GPS unit, to determine the location, such as the location of the camera when specific images were acquired. Thus, for example, the inertial navigation unit, comprising for example one or more laser gyroscopes, is calibrated or zeroed at a known location (such as a drone docking or charging station), and as it moves with the at least one camera, the movement away from that known location in x, y and z coordinates can be determined, from which the location of the at least one camera when images were acquired can be determined.

Fig. 2 shows a method 200 for agricultural field assessment in its basic steps, where dashed boxes represent optional steps. The method 200 comprises:

in a flying step 210, also referred to as step c), flying an unmanned aerial vehicle to a location in a field containing a crop, wherein a camera is mounted on the unmanned aerial vehicle at a location vertically separated from a body of the unmanned aerial vehicle, and wherein the vertical separation between the camera and the body is greater than an average vertical height of plants of a crop in a field to be interrogated by the unmanned aerial vehicle;

in a positioning step 220, also referred to as step d), positioning the body of the unmanned aerial vehicle in a substantially stationary aspect above the crop at the location such that the camera is at a first position above the crop;

in a controlling step 230, also referred to as step e), controlling the unmanned aerial vehicle to fly vertically at the location such that the camera is at a second position below that of the first position;

in an acquiring step 240, also referred to as step f), acquiring by the camera at least one image relating to the crop when the camera is between the first position and the second position.
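Steps c) to f) can be sketched as a control sequence. The UAV and camera interfaces below (fly_to, hover, descend, landed, acquire) are invented stub names for illustration, not a real flight-control API:

```python
# Sketch of method steps c) to f) against a hypothetical flight-control
# interface; the stub classes stand in for the real UAV and camera.
class StubUAV:
    def __init__(self, start_height_m=2.0):
        self.height = start_height_m
    def fly_to(self, location): pass                       # step c)
    def hover(self): pass                                  # step d)
    def descend(self, step_m):
        self.height = max(0.0, self.height - step_m)
    def landed(self):
        return self.height == 0.0

class StubCamera:
    def acquire(self): return "image"

def assess_location(uav, camera, location, descent_step_m=0.5):
    uav.fly_to(location)             # step c): fly to the determined location
    uav.hover()                      # step d): body stationary, camera at first position
    images = [camera.acquire()]      # step f): image at the first position
    while not uav.landed():          # step e): vertical flight towards the second position
        uav.descend(descent_step_m)
        images.append(camera.acquire())  # step f): images between the two positions
    return images

imgs = assess_location(StubUAV(2.0), StubCamera(), location=(10, 20))
print(len(imgs))  # 1 image at the first position + 4 during descent = 5
```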

In an example, a control unit of the unmanned aerial vehicle is configured to control the UAV to carry out step c).

In an example, a control unit of the unmanned aerial vehicle is configured to control the UAV to carry out step d).

In an example, a control unit of the unmanned aerial vehicle is configured to control the UAV to carry out step e).

In an example, a control unit of the unmanned aerial vehicle is configured to control the UAV to carry out step f).

In an example step f) comprises acquiring 242 one or more images of the at least one image when the camera is within the canopy of the crop. In an example, in step f) the at least one image comprises a plurality of images and wherein step f) comprises acquiring 244 the plurality of images at a corresponding plurality of different positions between the first position and the second position.

In an example, step f) comprises acquiring 246 one or more images of the at least one image relating to the crop when the camera is in the second position.

In an example, the second position comprises the ground.

In an example, step f) comprises acquiring 248 one or more images of the at least one image relating to the crop when the camera is in the first position.

In an example, the method comprises step a), acquiring 250 at least one image relating to the field with the camera.

In an example, a control unit of the unmanned aerial vehicle is configured to control the UAV to carry out step a).

In an example, following step a) the method comprises step b), analysing 260 by a processing unit the at least one image relating to the field to determine the location in the field.

In an example, the UAV comprises the processing unit. In an example, the processing unit is the control unit of the UAV.

In an example, the processing unit is external to the UAV and step b) comprises transmitting 262 by a transmitter of the UAV the at least one image to the processing unit, and transmitting 264 the determined location from the processing unit to the UAV by a transmitter associated with the processing unit, to be used by the UAV in carrying out step c).

In an example, the method comprises step g) analysing 270 by a processing unit the at least one image to determine a leaf area index for the crop.

In an example, the UAV comprises the processing unit. In an example, the processing unit is the control unit of the UAV.

In an example, the processing unit is external to the UAV and step g) comprises transmitting 272 by a transmitter of the UAV the at least one image to the processing unit.

In an example, the method comprises step h), analysing 280 by a processing unit the at least one image to determine at least one weed, and/or at least one disease, and/or at least one pest, and/or at least one insect, and/or at least one nutritional deficiency. In an example, the UAV comprises the processing unit. In an example, the processing unit is the control unit of the UAV. In an example, the processing unit is external to the UAV and step h) comprises transmitting 282 by a transmitter of the UAV the at least one image to the processing unit.

In an example, step e) comprises landing 232 the unmanned aerial vehicle on at least one leg of the unmanned aerial vehicle at the location.

In an example, the camera is mounted on the end of a leg of the at least one leg.

In an example, of the method the unmanned aerial vehicle comprises location determining means.

The unmanned aerial vehicle for agricultural field assessment and the method for agricultural field assessment are now described with respect to Figs. 3-6, relating to an embodiment of the UAV that has comprehensive functionality, not all of which is essential.

Fig. 3 shows a schematic representation of a rectangular field with a crop (not shown), with Fig. 4 showing the UAV flying over the crop and acquiring imagery. What is shown is a grid of solid dots indicating locations to which the UAV will fly, position itself in a substantially stationary aspect above that location, fly vertically to lower its camera and acquire imagery. The schematic "map" shown in Fig. 3 has been generated by the UAV itself, through processing of acquired imagery, where that imagery is being acquired in Fig. 4. This processing used edge detection to determine the boundaries of the field, which is not necessary if the information relating to the positions of the field boundaries has been uploaded to the UAV, for example as a series of GPS coordinates. This particular UAV lands in order to acquire imagery (as shown in Fig. 6), but also acquires imagery during the landing process (as shown in Fig. 5). However, the UAV need not land, but can fly to a location and fly vertically downwards to acquire imagery without landing (as shown in Fig. 5), then fly vertically upwards and proceed to another location to acquire imagery, which could involve landing if necessary but need not do so.

The control (processing) unit of the UAV that controls its flight and the camera also processes the imagery. The processing unit determines, on the basis of the acquired imagery, the grid of where to land. The grid in this example has been determined to be 20 m x 20 m. The processing unit carries out image processing of acquired imagery, and has determined that a part of the field had a crop that was not normal. The crop was not the same colour as the rest of the crop, and the plants at this part of the field were slightly stunted. Therefore, the processing unit determined that the UAV should land at an increased fidelity level, acquiring imagery at a grid spacing of 5 m x 5 m over this part of the field. The UAV can first fly over the field and determine the landing positions, or can start in one part of the field and gradually fly over and land at an appropriate level of fidelity based on the image processing. The grid of where to land can, however, be determined through remote sensing, for example from satellite imagery or from imagery acquired from another UAV or drone, or a farmer can input to the UAV the fidelity of a grid where image data should be acquired (20 m x 20 m, or 15 m x 15 m, or 20 m x 15 m, etc.). As shown in Figs. 4-6, the UAV actually has two cameras: one attached to the bottom of a leg, and one attached to the leg some distance above the bottom. Both cameras are, however, a distance down the leg away from the bottom of the body of the UAV, such that in acquiring imagery the bottom of the UAV is always expected to be above the top of the plants of the crop, in order that the UAV does not become entangled in the crop, even if it lands. The UAV need not have two cameras, and can operate with one camera that can be at the bottom of a leg, or can be, for example, as shown for the other camera that is not at the bottom of a leg.
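The adaptive fidelity described above (a coarse 20 m grid refined to a 5 m grid over the abnormal part of the field) can be sketched as follows, with illustrative coordinates:

```python
def refine_grid(coarse_points, anomalous, coarse_m=20.0, fine_m=5.0):
    """Replace each anomalous coarse grid cell with a finer sub-grid.

    coarse_points: (x, y) landing points on the coarse grid;
    anomalous: the subset flagged by image processing (e.g. discoloured,
    stunted crop) that warrants higher-fidelity sampling.
    """
    n = int(coarse_m // fine_m)  # fine points per axis within one coarse cell
    refined = []
    for (x, y) in coarse_points:
        if (x, y) in anomalous:
            refined.extend((x + i * fine_m, y + j * fine_m)
                           for j in range(n) for i in range(n))
        else:
            refined.append((x, y))
    return refined

pts = refine_grid([(0, 0), (20, 0)], anomalous={(20, 0)})
print(len(pts))  # 1 normal point + 16 fine points = 17
```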

Figs. 5 and 6 show that the UAV has flown to one of the locations, and flown vertically downwards to position its camera within the canopy of the crop and acquire images. Indeed, images were acquired above the crop too, and the UAV continued to descend and acquire images until it landed at that location; it continued to acquire imagery when the camera was on the ground. The UAV has landed on three carbon fibre legs, of which only two are shown. The legs are 1 m long, which enables the body of the UAV to sit above the canopy of the crop. The housing of the cameras has a structure that is designed to be "snag free", such that it can be lowered into the crop without snagging on the plants. Thus it has minimal corners, and the top has a slight roof-like structure with an apex in the centre, such that when being raised within the crop the camera will not snag. A communication and control cable is attached to the camera(s). Each camera has an inertial sensor to detect movement. Thus, as the UAV descends, the movement of the cameras can be determined as they acquire imagery, and when the UAV lands, the processing unit can determine the distance of the camera above the ground when imagery was acquired. Also, an ultrasonic sensor is attached to the bottom of the camera mounted at the bottom of the leg, and this is used to determine a height above the ground. A laser sensor or radar sensor can also be used for this purpose. However, no such ultrasonic, laser or radar sensor is essential, and imagery can be acquired above and within the crop canopy without needing to know the height at which the images were acquired. Knowing the height can nevertheless help ensure that all the crop is covered, and can help identify a weed, because the height of the weed can be taken into account when determining its identity: a weed that is taller than a specific type of weed can be determined not to be that specific weed, for example.
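The use of measured height to rule out candidate weed types can be sketched as follows; the species names and height ranges below are invented placeholders, not data from the disclosure:

```python
# Hypothetical typical height ranges (metres) for candidate weed types;
# the text notes that a weed taller than a given species' typical height
# can be determined not to be that species.
HEIGHT_RANGE = {"chamomile": (0.1, 0.6), "thistle": (0.3, 1.5)}

def filter_by_height(candidates, measured_height_m):
    """Keep only candidates whose typical height range admits the
    measured height (derived from the ultrasonic/inertial sensors)."""
    return [c for c in candidates
            if HEIGHT_RANGE[c][0] <= measured_height_m <= HEIGHT_RANGE[c][1]]

print(filter_by_height(["chamomile", "thistle"], 1.0))  # ['thistle']
```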

The UAV has a GPS, enabling the position of the UAV to be logged in association with imagery acquired at a location. As discussed above, the UAV also has inertial navigation sensors, based on laser gyros, in one of the cameras, which are used to augment the accuracy of the GPS-derived locations. The inertial navigation sensors are zeroed when the UAV is located at its docking/charging station, and the relative movement from that location can be determined. The inertial navigation sensors need not be in the camera. The UAV can, however, have just the GPS or the inertial navigation system, and indeed can process imagery to render a synthetic landscape from which its position can be determined without recourse to a GPS or inertial navigation system. The camera has several image sensors.

The camera has an image sensor that can focus at 3-50 m, and this is used to acquire the imagery of the field discussed above with respect to Figs. 3-4 for determining where to land.

The camera also has an upward-looking "fisheye" sensor. This sensor is housed in the top of the camera and acquires imagery useable to determine a LAI. The sensor need not be housed in the top of the camera, and can be in the side of the camera. Indeed, there can be more than one imaging sensor acquiring the imagery. The sensor acquires imagery substantially over 360 degrees (centred around the vertical) and over a number of angles to be used for determining an LAI. When the sensor(s) are located in the side of the camera, a number of different sensors are used to acquire this imagery. The number of angles can be 3, 4, 5, 6, 7, etc. In Figs. 5-6, for simplicity, the camera is shown acquiring imagery over 3 sets of angles. But the camera is actually acquiring imagery over 5 sets of angles and over substantially 360 degrees, as discussed above, and as such the sets of angles shown in Figs. 5-6 are in effect rotated about the vertical to provide a series of solid angles at different angles to the vertical over which imagery is acquired. By having more than one camera, one camera could be optimized for acquiring imagery for determination of a LAI and one camera used for acquiring imagery to detect weeds, insects, pests, diseases, etc. The sensor used to acquire the "LAI" imagery acquires imagery at wavelengths less than 500 nm, because vegetation has minimal transmittance over these wavelengths and this imagery is best suited for the determination of LAIs. However, the sensors can operate at different wavelengths. The camera acquires imagery before it enters the crop canopy and acquires imagery at different heights, including on the ground, and from this imagery LAIs for the crop at that location can be determined. Not all of this imagery needs to be acquired.
Reference has been made above to a document and website with respect to calculation of an LAI, and the skilled person can refer to this or other state of the art material relating to LAI determination in order to process the acquired imagery.
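One standard way to derive an LAI from such multi-angle canopy imagery is a Beer-Lambert gap-fraction inversion. The sketch below assumes a spherical leaf-angle distribution (extinction term G = 0.5) and is one estimator from the state of the art, not the specific calculation referenced above:

```python
import math

def lai_from_gap_fraction(gap_fractions):
    """Estimate LAI from gap fractions measured at several zenith angles.

    gap_fractions: dict mapping zenith angle (degrees) -> fraction of sky
    visible through the canopy in the (sub-500 nm) imagery. Uses the
    Beer-Lambert inversion LAI = -cos(theta) * ln(P(theta)) / G with a
    spherical leaf-angle distribution (G = 0.5), averaged over the
    measured angles.
    """
    G = 0.5
    estimates = [-math.cos(math.radians(t)) * math.log(p) / G
                 for t, p in gap_fractions.items()]
    return sum(estimates) / len(estimates)

# Gap fractions at three illustrative zenith angles; denser canopies
# (smaller gap fractions) yield larger LAI estimates.
lai = lai_from_gap_fraction({0.0: 0.2, 30.0: 0.15, 60.0: 0.1})
```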

The camera also has 4 sideways-looking, "normal" imaging sensors, where in Figs. 5-6 only one is shown, imaging vegetation to the left of the camera for one camera and to the right of the camera for the other camera. The sensors are angularly spaced at 90-degree intervals around the camera, and enable the crops all around the camera to be imaged. However, there need only be one, two or three sensors. The sensors can focus at relatively short distances of 5 cm - 100 cm. The sensors acquire high-resolution imagery of the vegetation, which also enables insects to be imaged and then identified. The sensors operate over the visible wavelength range and into the near infrared, and provide hyperspectral imagery in that data over different wavelength ranges can be differentiated from each other; in this manner, over the visible range, the sensors are in effect providing RGB data. This image data, over this (defined) wavelength range and at this resolution, is then suitable for processing by an image processing algorithm to determine if the plant is a weed, and/or if the crop has a disease, pest or insect damage, and what insects are present. In the UAV shown in Figs. 5-6, the image processing is carried out by the processor of the UAV that is also controlling its flight and the cameras. In this way, a fully autonomous system is provided. When the UAV flies back to a docking station for charging, the data relating to the field is downloaded and made available to the farmer.
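One common use of co-registered visible and near-infrared data of this kind is a vegetation index such as the NDVI; the sketch below is illustrative and not an index mandated by the disclosure:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index per pixel from the
    near-infrared and red bands of the hyperspectral imagery. Healthy
    vegetation reflects strongly in NIR and absorbs red, so stressed or
    diseased plants tend towards lower values."""
    return [(n - r) / (n + r) if (n + r) else 0.0 for n, r in zip(nir, red)]

# Two illustrative pixels: the first healthy, the second stressed.
print(ndvi([0.6, 0.3], [0.1, 0.25]))  # first value high, second low
```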

However, the UAV can transmit the analysed data to the farmer in real time, such that at the position where the drone has just landed the farmer is provided with information, such as LAI, and whether there are weeds, insect damage, diseases, pests immediately for that location. The UAV can also transmit the acquired imagery to a remote processor, that carries out the image analysis to determine LAIs and whether there are weeds, insect damage, diseases, pests, and in this way the UAV does not have to be as sophisticated and is less expensive and less power hungry, and the on-board processing and power consumption is reduced, although power is used through data transmission. In the above detailed example, three sets of imaging sensors of the camera are described: i) for imaging the field; ii) for acquiring image data useable to determine a LAI and iii) for acquiring image data useable to determine if there are weeds, disease, pests, insects, insect damage. However, the same sensor can be used for ii) and iii), and indeed if required the same sensor can be used for i), ii) and iii), for example when a variable focus capability is employed.

Image processing to enable analysis to determine a weed type

A specific example of how an image is processed, and determined to be suitable for image processing in order that a type of weed can be determined is now described:

1. A digital image - in particular a coloured image - of a weed is captured.

2. Areas with a predefined colour and texture within the digital image are contoured within a boundary contour. Typically, one may expect one contoured area from one weed plant. However, there may also be more than one contoured area, from different, potentially unconnected leaves, from two weed plants, or the like. Such a detection or determining process detects the boundaries of green areas of the digital image. During this process, at least one contoured area - e.g., one or more leaves, as well as one or more weed plants - may be built, comprising pixels relating to the weed within a boundary contour. However, it may also be possible that the digital image has captured more than one leaf and/or the stem. Consequently, more than one contoured area may be determined.

3. Determining if the boundary contour covers a large enough area, and determining a sharpness (e.g. degree of focus) of the image data within the boundary contour. This firstly ensures that there will be sufficient image data upon which a determination can be made as to the type of weed, and secondly ensures that a minimum quality of the digital image is satisfied in order that the type of weed can be determined.

4. If both criteria in 3) are satisfied, the digital image, and specifically the image data within the boundary contour, is sent to the processing unit for image analysis by the artificial neural network to determine the type of weed, as described above.

In another exemplary embodiment, a computer program or computer program element is provided that is characterized by being configured to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
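The quality gate of step 3 above, and the hand-off decision of step 4, can be sketched as follows; the area and sharpness thresholds are illustrative assumptions, and the variance of the Laplacian is one common focus measure:

```python
def quality_gate(mask_area_px, laplacian_variance,
                 min_area_px=5000, min_sharpness=100.0):
    """Only forward the contoured region to the neural network if it is
    both large enough (sufficient image data for a weed-type decision)
    and sharp enough (variance-of-Laplacian focus measure above an
    illustrative threshold)."""
    return mask_area_px >= min_area_px and laplacian_variance >= min_sharpness

print(quality_gate(8000, 150.0))  # True: send to the classifier
print(quality_gate(8000, 20.0))   # False: too blurred, discard
```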

The computer program element might therefore be stored on a computer unit, which might also be part of an embodiment. This computing unit may be configured to perform or induce performing of the steps of the method described above. Moreover, it may be configured to operate the components of the above described apparatus and/or system. The computing unit can be configured to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method according to one of the preceding embodiments.

This exemplary embodiment of the invention covers both a computer program that uses the invention from the beginning, and a computer program that, by means of an update, turns an existing program into a program that uses the invention.

Further on, the computer program element might be able to provide all necessary steps to fulfill the procedure of an exemplary embodiment of the method as described above.

According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, USB stick or the like, is presented, wherein the computer readable medium has stored on it the computer program element described in the preceding section.

A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.

However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.

It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to the device type claims.

However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter also any combination between features relating to different subject matters is considered to be disclosed with this application. However, all features can be combined providing synergetic effects that are more than the simple summation of the features.

While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing a claimed invention, from a study of the drawings, the disclosure, and the dependent claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.