

Title:
AUTOMATED SYSTEM FOR CROP POLLINATION
Document Type and Number:
WIPO Patent Application WO/2020/251477
Kind Code:
A1
Abstract:
Described is an automated system for crop pollination. The system includes one or more micro aerial vehicles (MAV) comprising a pollen applicator for collecting pollen from a pollen source and depositing it at a target. The system includes an image capture system (ICS) positioned separately from the MAV to capture images of a field of view including the MAV, the pollen source and the target, and a path computation system (PCS). The PCS is for computing a desired path, based on the location of the MAV and of the target in the images captured by the ICS, from the MAV to the target, and transmitting the desired path to the MAV for the MAV to navigate, using the desired path, to deposit pollen at the target.

Inventors:
JADHAV SIDDHARTH SUNIL (SG)
Application Number:
PCT/SG2020/050335
Publication Date:
December 17, 2020
Filing Date:
June 15, 2020
Assignee:
NAT UNIV SINGAPORE (SG)
International Classes:
A01H1/02; B64C39/02; B64D1/08; G05D1/10; G06K9/62; G06T7/00
Foreign References:
US20180065749A1 (2018-03-08)
US20100299016A1 (2010-11-25)
EP3376332A1 (2018-09-19)
CN109792951A (2019-05-24)
KR20190062872A (2019-06-07)
US20160260207A1 (2016-09-08)
Attorney, Agent or Firm:
DAVIES COLLISON CAVE ASIA PTE. LTD. (SG)
Claims:
Claims

1. An automated system for crop pollination, comprising:

at least one micro aerial vehicle (MAV) comprising a pollen applicator for collecting pollen from a pollen source and depositing it at a target, the pollen source and target being located in a cropped space;

an image capture system (ICS) positioned separately from the MAV to capture images of a field of view including the MAV, the pollen source and the target; and

a path computation system (PCS) for:

computing a desired path, based on the location of the MAV and of the target in the images captured by the ICS, from the MAV to the target; and

transmitting the desired path to the MAV for the MAV to navigate, using the desired path, to deposit pollen at the target.

2. The automated system of claim 1, wherein the ICS comprises a plurality of image capture devices positioned to have visibility of the MAV throughout the cropped space, the field of view of the ICS comprising the fields of view of the image capture devices.

3. The automated system of claim 1, wherein computing a desired path comprises computing a path for the MAV from the pollen source to the target.

4. The automated system of claim 3, wherein computing a desired path comprises computing a path for the MAV to the pollen source.

5. The automated system of claim 1, further comprising an identification module for receiving the images from the ICS and lighting data describing lighting conditions in the cropped space, and applying a perception algorithm to the images to identify a location of the pollen source and target in the cropped space.

6. The automated system of claim 5, wherein a neural network within the perception algorithm is trained from a plurality of temporally spaced target identification images captured by the ICS in varied lighting conditions, to identify features corresponding to pollen sources and targets in the target identification images.

7. The automated system of claim 6, wherein by identifying features corresponding to pollen sources and targets, the identification module identifies a pose of each of the pollen sources and targets in the images.

8. The automated system of claim 7, wherein the neural network in the perception algorithm is further trained from the plurality of temporally spaced target identification images to identify features corresponding to a pose of the MAV in the target identification images.

9. The automated system of claim 8, wherein the PCS maps the pose of the MAV and the pollen sources and targets.

10. The automated system of claim 9, wherein the perception algorithm further identifies features corresponding to one or more obstacles in the cropped space, and the PCS computes the desired path to avoid the one or more obstacles.

11. The automated system of claim 1, wherein the target is one of a plurality of targets, and computing a desired path comprises computing a path between the target and another target of the plurality of targets.

12. The automated system of claim 11, wherein the pollen source is one of one or more pollen sources in the cropped space, the PCS being configured to identify the location of each of the one or more pollen sources and of each target, and to compute a desired path between the one or more pollen sources and the targets.

13. The automated system of claim 1, wherein the PCS is configured to: receive a feedback signal from one of the MAV and ICS; and

determine from the feedback signal whether contact has been made between the pollen applicator and target.

14. The automated system of claim 13, wherein the PCS is further configured to transmit a revised path to the MAV to reposition the MAV if the PCS determines contact has not been made between the pollen applicator and target.

15. The automated system of claim 13, wherein the feedback signal comprises one of:

one or more images from the ICS from which the PCS can visually detect contact between the pollen applicator and target; and

force feedback from the pollen applicator on the MAV, or confirmation by the MAV of contact between the pollen applicator and target, by which contact between the pollen applicator and target can be inferred.

16. The automated system of claim 13, wherein the MAV comprises at least one image capture device and the feedback signal comprises one or more images from the at least one image capture device of the MAV.

17. The automated system of claim 1, wherein the MAV comprises a beacon and a location of the MAV can be determined from a location of the beacon.

18. The automated system of claim 17, wherein the location of the MAV is determined by identifying the beacon in the images.

19. The automated system of claim 17, wherein the location of the MAV is determined by triangulating a signal from the beacon, through image capture devices positioned in the cropped space.

20. The automated system of claim 2, wherein at least one of the image capture devices is a stereocamera that captures stereo images comprising depth data for perceiving a depth of objects.

21. The automated system of claim 20, wherein each of the image capture devices is a stereocamera for capturing stereo images.

22. A method for automated crop pollination, comprising:

providing at least one micro aerial vehicle (MAV) comprising a pollen applicator for collecting pollen from a pollen source and depositing it at a target, the pollen source and target being located in a cropped space;

capturing, using an image capture system (ICS) positioned separately from the at least one MAV, images of a field of view including the MAV, the pollen source and the target; and

computing, using a path computation system (PCS), a desired path, based on the location of the MAV and of the target in the images captured by the ICS, from the MAV to the target; and

transmitting, using the PCS, the desired path to the MAV for the MAV to navigate, using the desired path, to deposit pollen at the target.

23. The method of claim 22, wherein computing a desired path comprises computing a path for the MAV from the pollen source to the target.

24. The method of claim 23, wherein computing a desired path comprises computing a path for the MAV to the pollen source.

25. The method of claim 22, further comprising receiving, at an identification module, the images from the ICS and lighting data describing lighting conditions in the cropped space, and applying a perception algorithm to the images to identify a location of the pollen source and target in the cropped space.

26. The method of claim 25, wherein applying a perception algorithm comprises pre-training a neural network within the perception algorithm using a plurality of temporally spaced target identification images captured by the ICS in varied lighting conditions, to identify features corresponding to pollen sources and targets in the target identification images.

27. The method of claim 26, wherein the identification module identifies features corresponding to pollen sources and targets, by identifying a pose of each of the pollen source and target in the images.

28. The method of claim 27, further comprising mapping, at the PCS, the pose of the MAVs and the targets within the cropped space.

29. The method of claim 28, further comprising identifying, using the neural network, features corresponding to one or more obstacles in the cropped space, and computing, at the PCS, the desired path to avoid the one or more obstacles.

Description:
AUTOMATED SYSTEM FOR CROP POLLINATION

Technical Field

The present invention relates to an automated system for crop pollination. In particular, the system controls a micro aerial vehicle for automated pollination of a crop.

Background

With the destruction and simplification of habitats, natural pollinators are rapidly declining in number. By contrast, the human population, which relies on those natural pollinators, is increasing. This has driven the need to complement natural pollinators.

Recently, robotic pollinators have been developed. Robotic pollinators are wheel-mounted or track-mounted devices that traverse along rows of a crop and pollinate flowers of the crop. High precision movements are required to pollinate a flower. Given that a robotic pollinator is heavy and has the potential to apply more force than a natural pollinator, it is critical that the robotic pollinator is steady during pollination. For this reason, crops that are pollinated by robotic pollinators are typically indoors.

Each automatic pollinating robot has a maximum operational height, above which it may become unstable or which it simply cannot reach. This hinders the use of robotic pollinators for vertical farms.

Robotic pollination systems typically use cloud computing systems and internet servers to process images and to monitor and control the robotic pollinator. As a consequence, crops that are automatically pollinated tend to be in areas with stable internet and data connections. This reduces the flexibility of such systems to operate in remote areas, where larger crops can be produced. Instead, those systems are often located in or close to urban environments where the cost of floor space is considerably higher.

It would be useful to provide an automated system for crop pollination that avoids or ameliorates at least one of the aforementioned drawbacks of the prior art, or at least provides a useful alternative.

Summary

This invention automates pollination of crops, such as strawberry flowers and other flowering plants, in indoor vertical farms and other cropped spaces. In embodiments, the system comprises one or more micro aerial vehicles (MAVs), an external array of cameras for localization of the MAVs and targets (i.e. flowers), and a mobile ground station for real-time computation and path planning of the MAVs. The MAVs are equipped with a payload, such as a vibrating bristled brush, for pollination. In a vertical farm, the MAVs fly to the target flowers with the help of vision-based navigation and perform pollination. This invention will be of great use to vertical farming companies, since the use of bees indoors is challenging - bees cannot navigate and find flowers easily without the UV spectrum, which is abundantly available in sunlight. UV light cannot be artificially replicated indoors without significant cost and without posing a health risk to workers.

Described herein is an automated system for crop pollination, comprising: at least one micro aerial vehicle (MAV) comprising a pollen applicator for collecting pollen from a pollen source and depositing it at a target, the pollen source and target being located in a cropped space;

an image capture system (ICS) positioned separately from the MAV to capture images of a field of view including the MAV, the pollen source and the target; and

a path computation system (PCS) for: computing a desired path, based on the location of the MAV and of the target in the images captured by the ICS, from the MAV to the target; and

transmitting the desired path to the MAV for the MAV to navigate, using the desired path, to deposit pollen at the target.

The ICS may comprise a plurality of image capture devices positioned to have visibility of the MAV throughout the cropped space, the field of view of the ICS comprising the fields of view of the image capture devices.

Computing a desired path may comprise computing a path for the MAV from the pollen source to the target. Computing a desired path may comprise computing a path for the MAV to the pollen source.

The automated system may further comprise an identification module for receiving the images from the ICS and lighting data describing lighting conditions in the cropped space, and applying a perception algorithm to the images to identify a location of the pollen source and target in the cropped space. A neural network within the perception algorithm may be trained from a plurality of temporally spaced target identification images captured by the ICS in varied lighting conditions, to identify features corresponding to pollen sources and targets in the target identification images. By identifying features corresponding to pollen sources and targets, the identification module may identify a pose of each of the pollen sources and targets in the images. The neural network in the perception algorithm may be further trained from the plurality of temporally spaced target identification images to identify features corresponding to a pose of the MAV in the target identification images. The PCS may map the pose of the MAV and of each of the detected pollen sources and targets (flowers). The pose comprises a location of the relevant entity - MAV, pollen source or target - in 3-dimensional (3D) space, and may comprise all or a combination of (x,y,z) coordinates of the relevant coordinate space, roll angle, pitch angle and yaw angle. The perception algorithm may further identify features corresponding to one or more obstacles in the cropped space, and the PCS can then compute the desired path to avoid the one or more obstacles.

The target may be one of a plurality of targets, and computing a desired path may then comprise computing a path between the target and another target of the plurality of targets. Similarly, the pollen source may be one of one or more pollen sources in the cropped space, the PCS being configured to identify the location of each of the one or more pollen sources and of each target, and to compute a desired path between the one or more pollen sources and the targets.

The PCS may be configured to:

receive a feedback signal from one of the MAV and ICS; and

determine from the feedback signal whether contact has been made between the pollen applicator and target. The PCS may further be configured to transmit a revised path to the MAV to reposition the MAV if the PCS determines contact has not been made between the pollen applicator and target. The feedback signal may be one (or more) of:

one or more images from the ICS from which the PCS can visually detect contact between the pollen applicator and target; and

force feedback from the pollen applicator on the MAV, or confirmation by the MAV of contact between the pollen applicator and target, by which contact between the pollen applicator and target can be inferred. The MAV may comprise at least one image capture device and the feedback signal comprises one or more images from the at least one image capture device of the MAV.

The MAV may comprise a beacon and a location of the MAV can be determined from a location of the beacon. The location of the MAV may be determined by identifying the beacon in the images. The location of the MAV may be determined by triangulating a signal from the beacon, through receivers positioned in the cropped space. At least one of the image capture devices may be a stereocamera that captures stereo/3D images comprising depth data for perceiving a depth of objects. In other embodiments, each of the image capture devices captures stereovision images.

Also disclosed herein is a method for automated crop pollination, comprising: providing at least one micro aerial vehicle (MAV) comprising a pollen applicator for collecting pollen from a pollen source and depositing it at a target, the pollen source and target being located in a cropped space;

capturing, using an image capture system (ICS) positioned separately from the at least one MAV, images of a field of view including the MAV, the pollen source and the target; and

computing, using a path computation system (PCS), a desired path, based on the location of the MAV and of the target in the images captured by the ICS, from the MAV to the target; and

transmitting, using the PCS, the desired path to the MAV for the MAV to navigate, using the desired path, to deposit pollen at the target.

Computing a desired path may comprise computing a path for the MAV from the pollen source to the target. Computing a desired path may comprise computing a path for the MAV to the pollen source.

The method may further comprise receiving, at an identification module, the images from the ICS and lighting data describing lighting conditions in the cropped space, and applying a perception algorithm to the images to identify a location of the pollen source and target in the cropped space. Applying a perception algorithm may comprise pre-training a neural network within the perception algorithm using a plurality of temporally spaced target identification images captured by the ICS in varied lighting conditions, to identify features corresponding to pollen sources and targets in the target identification images. The identification module may identify features corresponding to pollen sources and targets, by identifying a pose of each of the pollen source and target in the images. The method may further comprise mapping, at the PCS, the pose of the MAV and identified targets within the cropped space. The method may further comprise identifying, using the perception algorithm, features corresponding to one or more obstacles in the cropped space, and computing, at the PCS, the desired path to avoid the one or more obstacles.

The PCS may be described as a ground station. The ground station may be mobile. The ground station may comprise the identification module and/or the ICS.

The ICS may be described as an array of cameras or other image capture devices. The cameras may transmit captured images to a central processor for onward transmission to the PCS. The central processor may instead form part of the PCS.

Advantageously, embodiments of the present invention use MAVs with cameras. MAVs (generally considered to be less than 30 cm in size in any dimension), being capable of precise control, are able to access flowers - e.g. to pollinate them. Moreover, when compared with robotic arms, the use of MAVs for pollination significantly reduces the bill of materials, as well as the cost of operation, yet the agility of MAVs increases the speed of operation.

Advantageously, embodiments of the present invention use an external array of cameras. In such embodiments, the system will create a 3D point cloud of the growing/cropping space. As a result, the pose of every single flower can be accurately tracked to ensure an MAV can pollinate a high fraction of the flowers.

Brief description of the drawings

Embodiments of the present invention will now be described, by way of non-limiting example, with reference to the drawings, in which: Figure 1 illustrates a method of automatically pollinating a crop;

Figure 2 is a schematic of a computer system for implementing the method of Figure 1; and

Figure 3 provides a representation of 3D stereo-reconstruction for detection and localization of target flowers.

Detailed description

The present disclosure proposes a fully autonomous method and system for pollination of horticultural crops. This solution is especially critical in agricultural environments where natural pollinators are either scarce or absent, such as indoor vertical farms. The technology may also be relevant in the seed industry. In the production of hybrid seeds, pollen is currently manually dispersed from one variety to another.

Figure 1 illustrates a method 100 for automated crop pollination in accordance with present teachings. The method broadly includes:

102: providing at least one MAV;

104: capturing images of a field of view, including the MAV(s), pollen source and target;

106: computing a desired path based on the images; and

108: transmitting the desired path to the MAV(s).

Step 102 involves providing one or more MAVs. In general, it is envisaged that multiple MAVs will be used to increase the throughput of pollination. However, for the purpose of illustration, the description below will be made with reference to a single MAV. Each MAV may take the form of a 4 or 8 propeller aerial drone, or any other configuration that enables hovering and fine control of the flight path. Each MAV includes a pollen applicator, such as a vibrating brush, for collecting pollen from a pollen source (generally a flower) and depositing it at a target (also generally a flower). The pollen source and target are located in a cropped space through which the MAV navigates.

In general, the cropped space will be an indoor, vertical setup with crops growing on vertical columns or horizontal shelves. It is envisaged that the same method and system can be used to pollinate outdoor cropped spaces, as well as indoor spaces that do not use vertical columns or horizontal shelves.

Under step 104, images are captured. Those images are captured using an image capture system (ICS). The ICS is positioned separately from the MAV to capture images of a field of view including the MAV, the pollen source and the target. This enables the system to determine the location of the MAV relative to other objects in the field of view, such as obstacles (e.g. structural components of the building in which the cropped space is located), targets and pollen sources.

The information in the images captured by the ICS is used by a path computation system (PCS) to compute the desired path per step 106, based on the location of the MAV and of the target in those images. The desired path is the path from the MAV to the target, in instances where the MAV already carries pollen to deliver to the target. The MAV may start navigating that path from the pollen source, or the desired path may also include a path between the MAV and the pollen source to pick up the pollen prior to delivery. The desired path may further include the path between the MAV and multiple pollen sources and/or targets.

Once the desired path has been computed, the PCS transmits the desired path to the MAV, for the MAV to navigate to deposit pollen at the target - step 108. The MAV then autonomously navigates between its current position and its various destinations, those being the target or targets, and pollen source or pollen sources.

Notably, changes in the environment of the cropping space can affect the ability of the PCS to identify the MAV, and any pollen sources and targets. Such variation in the environment may take the form of changes in lighting conditions or high humidity. To that end, the system also includes an identification module that, per step 110, receives the images from the ICS and environmental data describing environmental conditions in the cropped space - e.g. lighting data describing lighting conditions in the cropped space. The identification module then analyses the images to identify the location of the pollen source and target in the cropped space, and may also identify the location of the MAV 201. Without knowing the location of at least the MAV 201 and the target, it would be difficult if not impossible to compute a path between the two.

The identification module may be able to locate the pollen source and target by applying a perception algorithm to the images received from the ICS, per step 112. This may occur prior to step 102 or 104, or after step 104 in instances where the images captured by the ICS are also to be used to train a neural network which is a part of the perception algorithm. For the neural network to be effective, it must be trained (or "pre-trained" in the sense that it needs to be trained prior to practical application) to identify the pollen source and target. That training involves the identification module receiving a number of target identification images from the ICS, ideally being temporally spaced and/or captured in varied lighting conditions. The neural network can then be trained, using these annotated images, to identify features corresponding to pollen sources and targets in the target identification images. Those features may include colour, shape or any other features of the pollen source and/or target by which it can be differentiated from its surroundings. In an example, the identification module may identify a pose of each of the pollen source and target in the images. The pose is the position, as well as the orientation of the flower from which the anther or stigma is accessible - e.g. the position of the flower in 3D space (i.e. (x,y,z) coordinates), where the orientation of the flower, or the direction it is facing, is specified in terms of pitch angle, roll angle and yaw angle relative to the reference coordinates of the 3D space.

Critically, existing robotic pollination systems are often limited to using shaker end effectors to pollinate self-pollinating flowers. This is because flowers hang at different angles from their stems and are thus difficult to pollinate using a robotic arm that is insufficiently articulated to access the flowers from all angles. Since the present invention employs an MAV that can freely move vertically, variations in the height of the flowers do not affect the ability of the MAV to pollinate. Similarly, due to the size of MAVs, they are often able to navigate to the side of the flower from which the anther is accessible, for pollen sources that are flowers (rather than a fixed vessel or source in which pollen is provided), or to the stigma of target flowers. This largely avoids the need for excessive articulation such as would otherwise be required in robotic arms.
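
By way of illustration only (the patent does not prescribe a data structure, and Python is chosen here only because the description later refers to a C++/Python implementation), a pose as described above can be represented by six values - three for position and three for orientation:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Pose of a flower, pollen source or MAV: a position in the cropped space's
    3D coordinate system plus an orientation given as roll, pitch and yaw (radians)."""
    x: float
    y: float
    z: float
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

# Purely illustrative values: a target flower 1.2 m along x, 0.4 m along y,
# at 1.8 m height, tilted roughly 30 degrees downward about the pitch axis.
target_flower = Pose(x=1.2, y=0.4, z=1.8, pitch=-0.52)
```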

Therefore, ideally, embodiments of the present invention will perform vision-based navigation using cameras mounted externally, and not on the MAVs 201. The system then calibrates the entire field of view, which includes the targets as well as the MAV 201. Performing graphics-intensive computation on-board a MAV 201 is not possible with state-of-the-art processors. This is therefore performed by the PCS 204 as, or as part of, a ground station (i.e. the system of components mounted externally of the MAV 201, which may in some instances include the ICS 202).

The ICS 202, with its array of cameras, covers a certain 3D volume of space in the vertical farm, corresponding to a particular section (cropped space) of the indoor growing environment. Each camera may have or contribute to stereovision for depth perception. Deep neural networks will be trained to recognize the flowers, in the lighting conditions of a vertical farm, in various poses. After detecting the relevant features, classifying their pose, and recognizing their location in the images, the pixels of relevant objects (e.g. pollen sources, targets, MAVs 201 and obstacles) will then be mapped onto 3D co-ordinates using 3D reconstruction. This will be implemented in C++/Python using OpenCV. A point cloud of the field of view will be generated in real-time. The MAVs 201 have a beacon, such as an LED, for identification by the algorithm. Once both the targets and the MAV 201 are identified in each frame of the images, their location is determined in real-time in the calibrated space by the ground station using a suitable camera model. With the objects localized (i.e. their locations having been identified), these co-ordinates are used to navigate the MAVs 201 towards the target flowers.
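
As a minimal sketch of the 3D reconstruction step (using OpenCV, as named above, but not reproducing the patent's actual code), two calibrated views of the same detection can be triangulated into a point in the calibrated space. The intrinsics, baseline and pixel coordinates below are illustrative assumptions standing in for the calibrated camera array and the detections from the trained network:

```python
import numpy as np
import cv2

# Assumed calibration: identical intrinsics K for both cameras, with the right
# camera offset by a 0.2 m baseline along x. In practice these values come from
# calibrating the external camera array.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def pixels_to_3d(pts_left, pts_right):
    """Triangulate corresponding pixel detections (Nx2 arrays, one row per object
    such as a flower or an MAV beacon) into Nx3 points in the calibrated space."""
    pl = np.asarray(pts_left, dtype=np.float64).T    # 2xN, as triangulatePoints expects
    pr = np.asarray(pts_right, dtype=np.float64).T
    homog = cv2.triangulatePoints(P_left, P_right, pl, pr)  # 4xN homogeneous points
    return (homog[:3] / homog[3]).T                          # normalize to Euclidean Nx3

# Illustrative detections of the same flower in the left and right images:
flower_xyz = pixels_to_3d([[412.0, 233.5]], [[398.2, 231.9]])
```

Repeating this over all detected flower, MAV and obstacle pixels yields the real-time point cloud referred to above.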

Once the pose of pollen sources and/or targets has been determined, the PCS can map the location and pose onto a three-dimensional (3D) coordinate map of the cropped space, per step 114. The neural network may also identify features corresponding to one or more obstacles in the cropped space per step 116, with the locations of those features, or of the obstacles themselves, being received by the PCS. The PCS can then compute the desired path to avoid those obstacles.

As a result of the method 100, a MAV can be controlled, or can control itself, through the cropped space between pollen sources and targets, while avoiding obstacles. Particularly in embodiments where pollen sources, targets, and their pose and/or other features are identified in images captured by the ICS, human intervention in the pollination process can be largely avoided.

The autonomous method may be employed, for example, on a computer system 200, a block diagram of which is shown in Figure 2. The computer system 200 will typically be a desktop computer or laptop. However, the computer system 200 may instead be a mobile computer device such as a smart phone, a personal data assistant (PDA), a palm-top computer, or a multimedia Internet enabled cellular telephone.

As shown, the computer system 200 includes the following components in electronic communication via a bus 212:

(a) MAV(s) 201;

(b) ICS 202;

(c) PCS 204;

(d) identification module 206;

(e) a display 208;

(f) non-volatile (non-transitory) memory 210;

(g) random access memory ("RAM") 214;

(h) N processing components embodied in processor module 216;

(i) a transceiver component 218 that includes N transceivers; and

(j) user controls 220.

Although the components depicted in Figure 2 represent physical components, Figure 2 is not intended to be a hardware diagram. Thus, many of the components depicted in Figure 2 may be realized by common constructs or distributed among additional physical components. Moreover, it is certainly contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to Figure 2.

The three main subsystems described herein in detail are the MAVs 201, an array of cameras forming the ICS 202, and a ground station comprising the PCS 204. In other embodiments, the PCS is the ground station or the ground station forms part of the PCS - as mentioned above, the division between modules and components can be adapted to suit a particular application, provided the functions specified for each module are achieved. The ground station (or PCS 204) also supplies power to the MAVs 201, along with receiving data from the ICS 202 as well as the MAVs 201, and sending control signals to the MAV 201. The ground station also houses a microprocessor, a graphics processing unit (GPU) and a power supply. In totality, the microprocessor and GPU, supported by the power supply, run three different processes or algorithms in sequence to compute the following:

i) identification and localization of the target and the MAVs 201 in real-time - performed by the PCS 204 based on images captured by, or information from, the ICS 202;

ii) path planning for MAVs - performed by the PCS 204 after identifying and localising the target and the MAVs 201; and

iii) feedback control to ensure that the payload (i.e. the pollen applicator with pollen thereon) contacts the reproductive parts (stigma) of the target flower and disperses pollen.

Thus, the task of automated pollination is performed in 3 steps: localization of targets and MAVs 201, path planning for the MAVs 201, and pollination using the requisite payload.
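
Purely as an illustrative sketch, the three steps above can be expressed as a single ground-station cycle. The function names and interfaces below are hypothetical stand-ins for the modules described here, not names used by the patent:

```python
def pollination_cycle(localize, plan_paths, navigate_and_pollinate):
    """One cycle of automated pollination on the ground station.
    localize() -> (mav_pose, flower_poses, obstacles)         # step i: localization
    plan_paths(mav_pose, flower_poses, obstacles) -> [path]   # step ii: path planning
    navigate_and_pollinate(path) -> bool                      # step iii: feedback-controlled pollination
    All three callables are hypothetical stand-ins for the subsystems described above."""
    mav_pose, flowers, obstacles = localize()
    pollinated = 0
    for path in plan_paths(mav_pose, flowers, obstacles):
        if navigate_and_pollinate(path):
            pollinated += 1   # flag this flower as pollinated on the target flower map
    return pollinated, len(flowers)
```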

As stated with reference to Figure 1, the MAV(s) act as end effectors that perform the task of pollination. Each MAV 201 includes a pollen applicator for collecting pollen from a pollen source and depositing it at a target located in the cropped space. Each MAV 201 may also comprise a beacon, the location of which is readily detectable, e.g. by being readily identifiable in images captured by the ICS 202, such that the location of the MAV 201 can be determined by reference to the location of the beacon.
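
One possible way (not specified in the patent) to pick out an LED beacon in an ICS 202 frame is a simple colour threshold and centroid computation in OpenCV; the HSV range below is an assumed value for a green LED:

```python
import numpy as np
import cv2

def locate_beacon(frame_bgr, hsv_lo=(40, 80, 200), hsv_hi=(90, 255, 255)):
    """Return the (u, v) pixel centroid of the largest blob inside the assumed
    green-LED HSV range, or None if no beacon-like blob is visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

The resulting pixel coordinates, found in two or more cameras of the ICS 202, could then be triangulated into 3D in the same way as sketched for flowers above.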

The MAVs carry the requisite payload - i.e. the pollen applicator - that distributes the pollen from the anther to the stigma of, for example, the strawberry flower. This may be a vibrating bristled brush or other applicator. It does so by gently physically contacting the reproductive parts of the flower. Each MAV is controlled by a flight-controller, which also has an inertial measurement unit (IMU) to ensure fine movement and control is achieved.

The ICS 202, as shown in Figure 2, is a system of cameras that calibrates the field of view (of each camera), collects images to localize the MAV and the target, and transmits that information to the ground station. The ICS 202 is positioned separately from the MAV 201 to capture images of the field of view that includes the MAV 201, the pollen source and the target. To that end, the ICS 202 includes an array of cameras positioned, either individually or in combination, to have visibility of the MAV throughout the cropped space. In this instance, the field of view of the ICS 202 will be the combined fields of view of the image capture devices.

Each camera of the ICS 202 is an image capture device. In this sense, the image capture devices are essentially sensors that could be any desired sensor, such as optical, infrared (IR) or thermal cameras. The number and the technical specifications of the cameras will vary depending on the field of view of the stack of crops (the stacks forming a vertical garden) in the cropped space, the required depth of perception, and the resolution of the image required for identification of targets by the algorithms implemented by the identification module 206. The array of cameras is positioned such that it has both the target flowers and the MAVs 201 in its field of view at all times.

The ICS 202 may also capture environmental conditions such as lighting conditions.

The PCS 204 computes the desired path from the MAV 201 to the target, based on the location of the MAV 201 and of the target in the images captured by the ICS 202. The PCS 204 then transmits the desired path via transceivers 218 to the MAV 201, so that the MAV 201 can navigate using that path to deposit pollen at the target.

The computation of the position of the target, and of the path of the MAV to the targets, is performed on the processors on the ground station using the data from the cameras (ICS 202), and not on-board the MAVs 201. This allows for high-fidelity identification and classification of targets with complex features, since sufficient computational capacity is available off-board the MAVs 201.

Once the ground station has determined the location of the target flowers and the MAV 201, it runs a path planning algorithm. The algorithm plans the optimal path (e.g. shortest distance or least time), specifies the state parameters (position and velocity) of the MAVs 201 at a sufficient temporal resolution along the path to enable the fine control needed to facilitate pollination without damage to target flowers, and communicates this information to the flight controller of the MAV 201 via a tether (wired, if the MAV receives data when stationary, or wireless if the MAV receives data in-flight). The localization algorithm (previous step) is also trained to identify the spatial orientation of the flower - i.e. its pose. Based on the direction of its growth, a path is planned using graph-search based algorithms such as grass-fire, Dijkstra and A-star. The path planning algorithm for the MAVs 201 minimizes the flight time to reach the target while taking a smooth and stable trajectory, and also accounts for the orientation of the flower.

The ground station identifies each of the target flowers in the 3D space (cropped space), and labels them. A path is then planned by the algorithm such that the MAVs 201 visit each of the target flowers in a sequential or otherwise logical fashion. On reaching the target, an algorithm takes over to perform pollination. Once pollination is performed, the location of the flower is flagged (e.g. the PCS 204 or ground station updates a target flower map to distinguish those targets that have been pollinated from those targets that are yet to be pollinated), and the MAV 201 follows the path to the next target flower. On flagging all of the identified flowers in the field of view, the MAVs 201 return to a home location in the calibrated 3D space (cropped space), and one cycle of the automated pollination is completed. The cameras are then exposed to a new stack of crops, and the steps are repeated.
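
As an illustrative sketch only (the description names grass-fire, Dijkstra and A-star but publishes no code), A-star over a voxel occupancy grid of the cropped space might look like the following. The grid resolution, bounds and the occupied set are assumptions, with obstacles supplied by the perception step:

```python
import heapq
import itertools

def astar_3d(occupied, dims, start, goal):
    """A-star over a 3D voxel grid. `occupied` is a set of blocked (x, y, z) voxels
    (obstacles from the perception step), `dims` is the grid size, and start/goal
    are voxel indices. Returns the voxel path, or None if the goal is unreachable."""
    def h(a):  # Manhattan-distance heuristic to the goal, in voxels (admissible here)
        return abs(a[0] - goal[0]) + abs(a[1] - goal[1]) + abs(a[2] - goal[2])

    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    tie = itertools.count()                     # tie-breaker so heap entries stay comparable
    open_heap = [(h(start), next(tie), start, None)]
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                            # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:             # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dx, dy, dz in steps:
            nxt = (node[0] + dx, node[1] + dy, node[2] + dz)
            if nxt in occupied or not all(0 <= nxt[i] < dims[i] for i in range(3)):
                continue
            g = best_g[node] + 1
            if g < best_g.get(nxt, float("inf")):
                best_g[nxt] = g
                heapq.heappush(open_heap, (g + h(nxt), next(tie), nxt, node))
    return None

# Illustrative use: plan around a single blocked voxel in a 10x10x10 grid.
route = astar_3d({(1, 0, 0)}, (10, 10, 10), start=(0, 0, 0), goal=(3, 0, 0))
```

In practice the raw voxel path would still be smoothed and time-parameterized into the position and velocity state parameters mentioned above.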

On navigating to the target flower, the MAV 201 performs pollination using the payload - vibrating bristled brush or other pollen applicator. To disperse pollen, the payload gently touches the reproductive parts of the flower. However, path planning involves uncertainty, and so does the localization of the MAVs 201. To ensure that the payload for pollination precisely contacts the reproductive parts of the flower, or is sufficiently close to achieve pollination, a feedback algorithm takes over.

The feedback algorithm may take many forms. In one embodiment, the feedback algorithm takes over the control of the MAV 201 and runs a feedback loop using the live images from cameras on-board the MAV 201. This method is called visual servoing. The camera or cameras of the MAV 201 are in close proximity to the target flower, and also have the position of the payload in the field of view. The reproductive parts of the flower are located in the centre, and may have a distinct colour depending on the type of plant being pollinated. The algorithm stored on-board the MAV 201 is pre-trained to identify when the payload and the reproductive parts of the flower are in contact, based on real-time visual data. Once the payload has completed the task, the path planning algorithm takes over and the MAVs 201 fly to their respective next target flowers and repeat the process.
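
A highly simplified sketch of such a visual-servoing loop is given below. The on-board flower detector and the velocity command interface are injected as hypothetical callables (neither is defined in the patent), and the gains and tolerances are illustrative:

```python
def visual_servo(detect_stigma_px, send_velocity, applicator_px=(320.0, 360.0),
                 tol_px=4.0, gain=0.002, max_steps=200):
    """Nudge the MAV until the detected stigma centroid lies within tol_px of the
    applicator tip's (assumed, fixed) pixel position in the on-board camera view.
    detect_stigma_px() -> (u, v) or None; send_velocity(v_lateral, v_vertical)
    issues a small corrective velocity command. Returns True once aligned."""
    for _ in range(max_steps):
        target = detect_stigma_px()
        if target is None:
            send_velocity(0.0, 0.0)               # flower lost from view: hold position
            continue
        err_u = target[0] - applicator_px[0]
        err_v = target[1] - applicator_px[1]
        if (err_u ** 2 + err_v ** 2) ** 0.5 <= tol_px:
            send_velocity(0.0, 0.0)               # aligned: contact can be attempted
            return True
        send_velocity(gain * err_u, gain * err_v)  # proportional correction toward the stigma
    return False
```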

The PCS 204 may instead, or in addition, be configured to receive a feedback signal from one or both of the MAV 201 and the ICS 202. That feedback signal (received by transceiver 218) can be used to determine whether contact has been made between the pollen applicator of the MAV 201 and the target. The feedback signal may be visual confirmation by the MAV 201 that it has contacted the target flower using the pollen applicator. In other embodiments, the feedback signal may be generated when, for example, the MAV 201 detects a force on the pollen applicator corresponding to contact with the target - e.g. force feedback from which contact between the pollen applicator and target can be inferred. Alternatively, the feedback signal may be generated by the ICS 202, e.g. in the form of image(s), from which the PCS 204 visually detects that the MAV 201 has reached the target or has made contact with the target.
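
For the force-feedback variant, contact might be inferred along the following lines; this is a sketch only, and the threshold and debounce count are assumed values, not figures from the patent:

```python
def contact_made(force_samples_newtons, threshold=0.05, min_consecutive=3):
    """Infer applicator-flower contact from a stream of force readings: require the
    (illustrative) 0.05 N threshold to be exceeded in several consecutive samples,
    so that a single noisy spike is not reported to the PCS as contact."""
    run = 0
    for f in force_samples_newtons:
        run = run + 1 if f >= threshold else 0
        if run >= min_consecutive:
            return True
    return False
```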

If the PCS 204 detects that contact has not been made between the pollen applicator and target - e.g. by comparing a depth or location of the MAV 201 with a depth or location of the target - it may then transmit a revised path to the MAV 201 to facilitate that contact. Alternatively, if the PCS 204 detects that contact has been made between the pollen applicator and target, it may transmit a revised path to the MAV 201 by which it can pollinate the next target.

The display 208 generally operates to provide a presentation of content, such as the path calculated by the PCS for each MAV in the cropped space, the location of pollen sources and targets, the number or proportion of targets that have been visited and so forth. It may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro-projector and OLED displays).

In general, the non-volatile data storage 210 (also referred to as non-volatile memory) functions to store (e.g., persistently store) data and executable code, such as the perception algorithm code used by the identification module, the path planning code used by the PCS, and so forth. The executable code in this instance comprises instructions enabling the system 200 to perform the methods disclosed herein, such as that described with reference to Figure 1.

In some embodiments, for example, the non-volatile memory 210 includes bootloader code, modem software, operating system code, file system code, and code to facilitate the implementation of components that are well known to those of ordinary skill in the art and that, for simplicity, are neither depicted nor described.

In many implementations, the non-volatile memory 210 is realized by flash memory (e.g., NAND or ONENAND memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory 210, the executable code in the non-volatile memory 210 is typically loaded into RAM 214 and executed by one or more of the N processing components 216.

The N processing components 216 in connection with RAM 214 generally operate to execute the instructions stored in non-volatile memory 210. As one of ordinary skill in the art will appreciate, the N processing components 216 may include a video processor, modem processor, DSP, graphics processing unit, and other processing components. The N processing components 216 may form a central processing unit (CPU), which executes operations in series. However, to ensure all the necessary features in the cropped space are identified, and the paths for the MAVs can be computed or updated rapidly, it can be desirable to use a graphics processing unit (GPU). Whereas a CPU would need to perform the actions using serial processing, a GPU can provide multiple processing threads to identify features or compute path segments of the desired path in parallel.

The transceiver component 218 includes N transceiver chains, which may be used for communicating with external devices via wireless networks. Each of the N transceiver chains may represent a transceiver associated with a particular communication scheme. For example, each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a CDMA network, a GPRS network, a UMTS network), and other types of communication networks, such as WiFi and radio.

Reference numeral 224 indicates that the computer system 200 may include physical buttons, as well as virtual buttons such as those that would be displayed on display 208. Moreover, the computer system 200 may communicate with other computer systems or data sources over network 226.

It should be recognized that Figure 2 is merely exemplary and that the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted as, one or more instructions or code encoded on a non- transitory computer-readable medium 210. Non-transitory computer-readable medium 210 includes both computer storage medium and communication medium including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer, such as a USB drive, solid state hard drive or hard disk.

The MAVs 201 may communicate with the rest of the system 200 via the transceiver component 218, or over network 226. Similarly, apps 222 may be installed on a mobile device of an operator of the cropped space, to facilitate provision of control commands to the MAVs or to view information identified by the identification module. The system 200 itself may also communicate with other crop pollination systems, or with one or more further systems or remote servers, via port 224 - e.g. to aggregate data such as pollination rate (the proportion of total targets pollinated, or the number of targets pollinated).

In embodiments where the MAV 201 itself includes one or more image capture devices, the feedback signal mentioned above may include images from those image capture devices. Moreover, one or more, and potentially all, of the image capture devices of the ICS 202, and potentially the image capture devices of the MAV 201, may capture stereovision images comprising depth data for perceiving a depth of objects in the stereovision images.

Such an embodiment is shown in Figure 3. Two cameras, represented by their respective coordinate systems 300 (for the left camera) and 302 (for the right camera), are shown. The same function can be achieved in some embodiments using a single stereocamera - given that a stereocamera comprises multiple lenses. In practice, a cropped space will contain more than two cameras or stereocameras, to ensure the full field of view is captured.

The cameras each cover a certain 3D volume of space in the vertical farm - the space having a (typically fixed) world coordinate system 304 - which covers a particular section (cropped space 306) of the indoor growing environment. Deep neural networks are trained, using labelled target training images, to recognize the flowers (pollen sources and targets) 308, in the prevailing lighting conditions of a vertical farm, in various poses. The relevant features are detected in images captured by the cameras (i.e. features indicating that the pixels in an image are indicative of a flower), and their pose is classified (i.e. the (x, y, z) coordinates of the flower in the world coordinate system 304 are derived from the coordinate systems 300, 302 of the cameras, the location of the flower 308 in the images captured by the cameras, and the vector offset 310 between, or relative positions of, the cameras). The pixels of the relevant objects are then mapped onto 3D co-ordinates 304 using 3D reconstruction. The same process applies to determining the location of pollen sources, pollen targets, MAVs 201 and obstacles such as structural features or parts of plants that are not flowers.
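
For reference, the standard rectified-stereo relations underlying this kind of reconstruction (not equations reproduced from the patent) are, for focal length f, principal point (c_x, c_y), baseline b corresponding to the offset 310, and disparity d = u_L - u_R between the flower's pixel columns in the two views:

```latex
% Depth from disparity, back-projection into the left camera frame 300,
% and the transform into the world coordinate system 304.
Z = \frac{f\,b}{d}, \qquad
X = \frac{(u_L - c_x)\,Z}{f}, \qquad
Y = \frac{(v_L - c_y)\,Z}{f}, \qquad
\mathbf{X}_{304} = R \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \mathbf{t}
```

where (R, t) are the extrinsics relating the left camera's coordinate system 300 to the world coordinate system 304.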

The image processing steps may be completed by the ICS 202, on the ground. As such, the MAV 201 does not need to house computer resources sufficient for image analysis. Similarly, after the pollen sources, targets and MAV 201 are located in the images and in 3D space 304, the PCS 204 then plans a path from the MAV to the relevant one or ones of the pollen source(s) and target(s).

The communication between the MAVs 201, ICS 202 and PCS 204 can be managed using wireless connections without the requirement for a global or local network.

Embodiments of the present invention therefore provide a standalone system that does not involve the use of computer networks, either local or global. By contrast, prior art systems may rely on cloud computing and the internet for their operation, which may be unavailable in remote locations.

The PCS may be described as a ground station. The ground station may be mobile. The ground station may comprise the identification module and/or the ICS. The ICS may be described as an array of cameras or other image capture devices. The cameras may transmit captured images to a central processor for onward transmission to the PCS. The central processor may instead form part of the PCS.

It will be appreciated that many further modifications and permutations of various aspects of the described embodiments are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.

Throughout this specification and the claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.

The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.