

Title:
METHOD FOR CONTROLLING A GROUP OF DRONES
Document Type and Number:
WIPO Patent Application WO/2023/285935
Kind Code:
A1
Abstract:
A method of controlling a group of drones (1) comprises the steps of: a) providing a group of drones; b) sending a primary control instruction to the drones in the formation, representative of at least one of a trajectory and/or an action to be performed; c) estimating, for the first drone (A), a motion state of the drone, the motion state being defined by a plurality of motion parameters comprising at least position, speed and attitude; d) identifying, by the second drone (B), a control state for the first drone, the control state being defined by a plurality of control parameters comprising at least position, speed and attitude; e) comparing the motion state and the control state associated with the first drone (A) and generating, based on the comparison of the motion and control parameters of the corresponding states: a state confirmation signal (Scon) if the difference between the motion and control parameters falls within a first preset range of values; or a state correction signal (Scor) if the difference between the motion and control parameters falls outside the first preset range of values; f) correcting the motion state of the first drone (A) based on the correction signal (Scor); g) repeating the steps c)-f) until a confirmation signal (Scon) is generated; h) after the generation of a confirmation signal (Scon), generating a control signal (Scorn) for the second drone based on the motion state of the first drone and the control instruction; i) performing a motion maneuver for the second drone based on the control signal (Scorn); l) cyclically and iteratively repeating the steps c)-i) for each drone of the formation from the lead drone (T) to the tail drone (C) based on the control instruction.

Inventors:
MENSI ALFREDO (IT)
Application Number:
PCT/IB2022/056340
Publication Date:
January 19, 2023
Filing Date:
July 08, 2022
Assignee:
X ENDER S R L (IT)
International Classes:
B64C39/02; G05D1/10
Foreign References:
CN109213200A2019-01-15
GB2476149A2011-06-15
CN107121986A2017-09-01
KR20180076997A2018-07-06
Other References:
YUAN WEN ET AL: "Multi-UAVs formation flight control based on leader-follower pattern", 2017 36TH CHINESE CONTROL CONFERENCE (CCC), TECHNICAL COMMITTEE ON CONTROL THEORY, CAA, 26 July 2017 (2017-07-26), pages 1276 - 1281, XP033148907, DOI: 10.23919/CHICC.2017.8027526
BIAO WANG ET AL: "Formation flight of unmanned rotorcraft based on robust and perfect tracking approach", AMERICAN CONTROL CONFERENCE (ACC), 2012, IEEE, 27 June 2012 (2012-06-27), pages 3284 - 3290, XP032244483, ISBN: 978-1-4577-1095-7, DOI: 10.1109/ACC.2012.6315049
Attorney, Agent or Firm:
CERVINO, Stefano Matteo et al. (IT)
Claims:
CLAIMS

1. A method of controlling a group of drones (1), the method (1) comprising the steps of: a) providing a group of drones configured to move in formation one after the other from a lead drone (T) to a tail drone (C) and comprising at least a first drone (A) and a second drone (B) wherein the second drone (B) follows the first drone (A) along the formation from the lead drone to the tail drone;

b) sending a primary control instruction to the drones in the formation, representative of at least one of a trajectory and/or an action to be performed; c) estimating, for the first drone (A), a drone-specific motion state, the motion state being defined by a plurality of motion parameters comprising at least position, speed and attitude; d) identifying, by the second drone (B), a control state for the first drone, the control state being defined by a plurality of control parameters comprising at least position, speed and attitude; e) comparing the motion state and the control state associated with the first drone (A) and generating, based on the comparison between the motion and control parameters of the respective states:

- a state confirmation signal (Scon) if the difference between the motion and control parameters falls within a first preset range of values; or

- a state correction signal (Scor) if the difference between the motion and control parameters falls outside the first preset range of values; f) correcting the motion state of the first drone (A) based on the correction signal (Scor); g) repeating the steps c)-f) until a confirmation signal (Scon) is generated; h) after the generation of a confirmation signal (Scon), generating a control signal (Scorn) for the second drone based on the motion state of the first drone and on the control instruction; i) performing a motion maneuver for the second drone based on the control signal (Scorn);

l) cyclically and iteratively repeating the steps c)-i) for each drone of the formation from the lead drone (T) to the tail drone (C) based on the control instruction.

2. The method (1) as claimed in claim 1, wherein the method comprises a step of storing the motion states of each drone and the corresponding state confirmation signals (Scon) and the control signals (Scorn) in a log (R) shared by and distributed among the drones of the formation.

3. The method (1) as claimed in claim 2, wherein the step of storing includes the steps of storing:

- the motion states by the corresponding first drones (A);

- the state confirmation signals (Scon) by the corresponding second drones (B);

- the control signals (Scorn) by the corresponding first drones (A) and/or second drones (B).

4. The method (1) as claimed in claim 2 or 3, wherein the step of storing comprises, for each drone, the step of encrypting, by an encryption key associated with the corresponding drone, the motion states, the corresponding state confirmation signals (Scon) and the control signals (Scorn).

5. The method (1) as claimed in any of claims 2 to 4, wherein the step of storing comprises a step, carried out between the step h) and the step i) at each cycle, of validating, by one or more drones in the group, the motion state of each drone, the corresponding state confirmation signal (Scon) and the control signals (Scorn) stored based on the control instruction before carrying out the step i), the step of validating including comparing the motion state and a corresponding preset motion state associated with the control instruction and defined by preset motion parameters comprising at least position, speed and attitude to generate:

- a state modification signal (Smod) if the difference between the motion and preset motion parameters differs from a second preset range of values; or

- a successful validation signal (Scv) to proceed with the next steps if the difference between the motion and preset motion parameters falls within the second preset range of values.

6. The method (1) as claimed in claim 5, wherein the step of validating includes correcting the motion state of one of the drones in the group based on the modification signal and repeating the steps c) to i) until the successful validation signal is generated to carry out the step i).

7. The method (1) as claimed in any of claims 1 to 6, wherein the motion state and the control state are representative of at least one of a position of the first drone relative to a reference system based on the control instruction and a maneuver of the first drone (A) associated with an action in progress.

8. The method (1) as claimed in any of claims 1 to 7, wherein the method (1) comprises a step of identifying the lead drone (T) based on the control instruction, the step of identifying the lead drone comprising the steps of:

- defining a target to be reached and/or an action to be performed based on the control instruction;

- acquiring environmental data for each drone;

- comparing the acquired environmental data with reference environmental signals based on the control instruction;

- assigning the function of the lead drone to a drone of the formation based on the comparison and on a target recognition acknowledgement by the remaining drones.

9. The method (1) as claimed in any of claims 1 to 8, wherein the method (1) comprises a step, carried out before step c), of starting each drone of the formation from the lead drone (T) to the tail drone (C), the step of starting comprising the steps of:

- estimating a starting position for the first drone (A);

- moving the first drone (A) a first distance from the initial position along a first forward direction of movement and a second distance along a second height direction perpendicular to the first forward direction of movement;

- stopping the first drone (A) in a second position based on the first and second distance;

- repeating the moving and stopping steps performed by the first drone for the second drone (B);

- repeating the steps for each drone from the lead drone (T) to the tail drone (C).

10. A drone control system configured to carry out the method as claimed in any of claims 1 to 9, the system comprising:

- a central processing unit configured to generate and send a control instruction,

- a group of drones comprising a plurality of drones in signal communication with one another, at least one of them being in signal communication with the central processing unit, each drone comprising:

- a data processing unit and a set of sensors in signal communication with the central processing unit, which are configured to estimate the motion state, identify the control state, manage the control instructions and generate control, confirmation and correction signals,

- motion-imparting means, designed to be controlled by the data processing unit configured to move the corresponding drone along trajectories and/or to perform maneuvers associated with the actions to be performed;

- interaction means configured to interact with an external environment and to be controlled by the data processing unit, configured based on the control instructions and the signals sent and generated to/from the corresponding drone.

Description:
“Method for controlling a group of drones”

DESCRIPTION

Technical field

The present invention relates to a method of controlling a group of drones.

Preferably, the method focuses on blockchain control and ambient intelligence logic for controlling drones. Specifically, the method relates to the field of controlling heavy drones for rescue and first-aid purposes such as firefighting in remote and hard-to-reach areas. It should be noted that the method of the present invention can be used on groups of drones that can be ground vehicles, watercraft and aircraft.

Background art

Drone control methods are known in the art, which coordinate the movement of a group of drones with a master-slave logic. Specifically, a single drone is known to be controlled with instructions sent by a user/operator, the aforementioned logic consequently controlling the rest of the drones in the group, so that they will move in similar manners. Certain control methods are disclosed, for example, in CN 109213200 A, GB 2476149 A, CN 107121986 A, and KR 20180076997 A. Additional prior art methods are also disclosed in “Multi-UAVs formation flight control based on leader-follower pattern” by Yuan Wen et al. and in “Formation flight of unmanned rotorcraft based on robust and perfect tracking approach” by Biao Wang et al.

Problems of the prior art

The control methods of the prior art suffer from a number of drawbacks associated with the difficulty of coordinating the drones in the group and with remote autonomous control. The prior art master-slave logics used by the known methods do not afford efficient and versatile control of the drones in the group, and are not easily able to remotely manage the actions to be taken by the drones when subject to harsh environmental conditions, such as the heat released from the flames of a fire or winds.

In addition, the known control methods are performed either inside each drone or completely outside, resulting in inefficiencies in both cases. That is, in the former case, the action of the first drone (aircraft/object) is unrepeatable and is not always optimized, and in the latter case the controlled drone (aircraft/object) is exposed to “external hacker” attacks, with the decision-making process being further based on the perception of an external viewer.

Object of the invention

The object of the present invention is to provide a method of controlling a group of drones that can obviate the above discussed drawbacks of the prior art.

In particular, it is an object of the present invention to provide a method of controlling a group of drones that can correct and control in an autonomous and distributed manner the movements of each drone of the group of drones.

In addition, a further object of the present invention is to provide an ambient intelligence-based control method and a shared control logic for the group of drones.

The aforementioned technical purpose and objects are substantially fulfilled by a method of controlling a group of drones that comprises the technical features as disclosed in one or more of the accompanying claims.

Benefits of the invention

Advantageously, the method of the present invention affords efficient, versatile, accurate and scalable control of groups of drones (and/or aircraft and/or vehicles) having the same paths and actions to be performed (targets and tasks).

Advantageously, the method of the present invention affords combined self-synchronized operation of groups of drones, for example, up to forty drones (such a group can be expanded by combining multiple coherent groups of drones).

Advantageously, the method of the present invention affords direct control and/or the planning of the trajectory/action to be performed.

Advantageously, the method of the present invention affords efficient control of a group of drones in isolated areas, under harsh conditions and specifically subject to multiple possible interferences.

Advantageously, the method of the present invention allows the group of drones to act as a single object (from the point of view of an external viewer) with “independent appendices” so that the group of drones can self-correct local and general errors and trajectories.

BRIEF DESCRIPTION OF THE FIGURES

Further features and advantages of the present invention will result more clearly from the illustrative, non-limiting description of a preferred, non-exclusive embodiment of a method of controlling a group of drones as shown in the accompanying drawings:

- Figure 1 shows a block diagram of the method of controlling a group of drones according to one embodiment of the present invention;

- Figure 2 shows a schematic view of the group of drones controlled with the method according to one embodiment of the present invention;

- Figure 3 shows a schematic view of the connection of the nodes (drones) as displayed in the shared and distributed log according to an embodiment of the present invention of the method for controlling a group of drones;

- Figure 4 shows a schematic view of the step of validating according to one embodiment of the present invention;

- Figure 5 shows a schematic view of an encryption process for sending signals from a drone n to a drone n+1 according to one embodiment of the present invention;

- Figure 6 shows a schematic view of an encryption process for sending signals from a drone n to a drone n-1 according to one embodiment of the present invention;

- Figure 7 shows a schematic view of an encryption process for sending signals from drones to a shared and distributed log and vice versa according to one embodiment of the present invention.

DETAILED DESCRIPTION

Even when not expressly stated, the individual features as described with reference to the particular embodiments shall be intended as auxiliary to and/or interchangeable with other features described with reference to other exemplary embodiments.

The present invention relates to a method of controlling a group of drones, generally designated by numeral 1 in Figure 1.

The method is particularly suitable for controlling a group of drones, preferably heavy drones. The term drone refers to a mechanized vehicle/aircraft/object that can be managed by automation. Specifically, a drone refers to a vehicle/aircraft/object characterized by the lack of a pilot, which can be remotely controlled by an operator and/or in an automated manner using special controls.

According to the present invention, drones are aircraft equipped with motion-imparting means, preferably a plurality of propellers, which move them with respect to a reference frame and/or a drone and/or a target, and also equipped with means of interaction with the environment. Nevertheless, it should be noted that the method of the present invention can also be used with groups of drones comprising ground vehicles, watercraft, spacecraft and aircraft of various types.

Preferably, the method 1 is used with a group of autonomous (heavy) drones designed for firefighting in isolated areas with reduced data and signal coverage, without limitation to further applications such as surveillance and/or target search and/or recovery. The method 1 of the present invention is configured to allow a drone to directly control the next drone or be passively controlled thereby, which allows replication of an action to be taken (task) and tracking of a geographic trajectory (target) with converging paths on the same routes. The method 1 of the present invention comprises a series of steps as described below and schematically shown in the block diagram of Figure 1.

The method 1 comprises a step a) of providing a group of drones 10 (each drone of the group of drones being indicated as a circle in Figure 2) configured to move in formation one after the other from a lead drone T to a tail drone C. The group of drones comprises at least a first drone A and a second drone B, wherein the second drone B follows the first drone A along the formation from the lead drone to the tail drone, and preferably follows it directly. That is, the first drone A is in a position that precedes the second drone in the ordered formation, and the drone in the formation that is next to the first drone is the second drone B. As used herein, the first and second drones are generally considered to be any two drones in the group of drones 10 arranged as stated above.

It should be noted that the steps as described below for controlling the drones apply to the whole group of drones preferably from the lead drone to the tail drone iteratively at successive times. It should be noted that drones between the lead drone T and the tail drone C may identify anomalies, as explained below, and transfer them to the entire group, to perform corrective maneuvers for the entire group and/or for themselves.

Therefore, if more than two drones are provided, for example, at the first cycle the lead drone is identified as the first drone A and the drone directly next to it is identified as the second drone B. At the next cycle of steps, the drone that was identified in the previous cycle as the second drone is now identified as the first drone, and the drone directly next to it is identified as the second drone, and so on throughout the cycles, up to the tail drone C, from which the cycle starts again. Preferably, the cycles on the group of drones 10 are repeated for each time interval so that each drone is tracked as it moves.
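By way of a non-limiting illustration, the cycling of the first/second drone roles described above may be sketched as follows; the function name and list representation are hypothetical and not part of the claims.

```python
def formation_pairs(drones, num_cycles):
    """Return the (first drone A, second drone B) pair for each cycle.

    `drones` is ordered from the lead drone T to the tail drone C. At each
    cycle the pair advances by one position along the formation; after the
    pair reaches the tail drone, the cycle restarts from the lead drone.
    """
    n = len(drones)
    pairs = []
    for cycle in range(num_cycles):
        i = cycle % (n - 1)          # index of the first drone A for this cycle
        pairs.append((drones[i], drones[i + 1]))
    return pairs
```

For a formation T, D2, D3, C this yields the pairs (T, D2), (D2, D3), (D3, C) and then restarts from (T, D2).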

According to a preferred embodiment, the method includes encrypting the transmission of signals between drones with an encryption process, preferably of the type used in cryptocurrency transactions. Specifically, Figures 5 and 6 show how a drone sends information to the next drone (from drone n to drone n+1) and how a drone sends information to the previous drone (from drone n to drone n-1), respectively. This encryption process, as explained below, is used to transfer a signal from the first drone A to the second drone B (Figure 5), for example in step c), and to transfer signals from the second drone B to the first drone A (Figure 6), for example in steps d) and/or e).

Specifically, the step of encrypting signals between drones uses asymmetric encryption, where each drone has a public key visible to all other drones and a private key unknown to the other drones and configured to decrypt an encryption performed using the corresponding public key. In addition, each drone is configured to generate a hash (from hash encryption) to recognize the sending action that has been performed (unique drone signature), to encrypt it using its private key, and to send it to the next drone along with the encrypted signal. This encrypted hash is shared with the next drone, which confirms the unique signature of the sending drone by decrypting the hash with that drone's public key. In the example of Figure 5, the drone 2 sends a signal to the drone 3 by encrypting the signal with the public key of the drone 3, and validates the operation using its own private key. In addition, in order to confirm that the signal has been properly sent, the drone 3 reproduces the hash of the drone 2, decrypts the received hash with the public key of the drone 2 and compares them. If the two hashes match, the drone 3 confirms that the signal has been properly sent and that it was signed with the private key of the drone 2.

Figure 6 shows the same encryption process, performed by the drone next to the previous one, with the same asymmetric encryption modes.
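The sign-and-verify exchange of Figures 5 and 6 can be illustrated with a deliberately insecure, textbook-RSA sketch in Python; the tiny key values, helper names and hash reduction below are illustrative assumptions only, not the encryption actually specified by the method.

```python
import hashlib

def make_toy_keypair(p=61, q=53, e=17):
    """Generate a tiny textbook-RSA key pair (illustrative, insecure)."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)              # modular inverse of e (Python 3.8+)
    return (e, n), (d, n)            # (public key, private key)

def message_hash(message: bytes, modulus: int) -> int:
    # Reduce the SHA-256 digest modulo n so the toy keys can sign it.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % modulus

def sign(message: bytes, private_key) -> int:
    """The sending drone encrypts the hash with its private key (signature)."""
    d, n = private_key
    return pow(message_hash(message, n), d, n)

def verify(message: bytes, signature: int, public_key) -> bool:
    """The receiving drone reproduces the hash and compares it with the
    signature decrypted using the sender's public key."""
    e, n = public_key
    return pow(signature, e, n) == message_hash(message, n)
```

Here the drone 2 would call sign() with its own private key, and the drone 3 would call verify() with the public key of the drone 2, mirroring the comparison of hashes described above.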

The method 1 comprises a step b) of sending a control instruction to drones in formation, representative of at least one of a trajectory and/or an action to be performed. Specifically, the step b) includes sending a control instruction to at least one drone of the group of drones 10 to track a trajectory to a target which may be a place and/or entity and/or to perform an action, for example, collecting a certain amount of water after identifying a pool or looking for a target (missing person or fire). Specifically, the control instruction shall include parameterized instructions compatible with modifiable parameters of the respective drones, such as position, speed, attitude or actuation of interaction means for water collection or object/person recovery.

The step b) is conducted by means of a central processing unit configured to send the control instruction to at least one drone of the group, so that it will be later shared with the other drones. Specifically, the step b) includes generating a control instruction based on an event identified, for example, as a fire (or based on an action to be performed, such as monitoring an area or recovering missing persons) and sending that instruction to the group of drones.

It should be noted that, as explained below, the control instruction comprises both information about where to send the drones and the corresponding actions to be performed, and instructions for start-up of each drone from a home (rest) position to a second (start) position. These start-up instructions allow drones to be actuated from a ground parking condition (in a base/yard) in the initial position to a hovering condition, in a second position, and then to move to a location and/or perform a given action.
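A minimal sketch of such a start-up sequence, from ground positions to hovering start positions, is given below; the axis conventions, units and function name are assumptions made for illustration only.

```python
def start_up_positions(initial_positions, forward_distance, height_distance):
    """Move each drone from its ground (home) position to a hovering start
    position: a first distance along the forward direction of movement and
    a second distance along the perpendicular height direction.

    Positions are (x, y, z) tuples; the axis layout (x forward, z up) is an
    assumption made for illustration, not mandated by the method.
    """
    hover_positions = []
    for x, y, z in initial_positions:    # from the lead drone to the tail drone
        hover_positions.append((x + forward_distance, y, z + height_distance))
    return hover_positions
```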

According to a preferred embodiment, management algorithms, preferably based on neural networks and machine learning, generate the control instruction to be sent to the drones for them to operate. This control instruction comprises an optimized trajectory to track and instructions to perform maneuvers associated with actions to be performed. More preferably, each drone includes basic predetermined instructions (basic programming within each drone) that are stored in a relevant data processing unit in each drone. Thus, once the control instruction has been sent, each drone is instructed about the actions to be performed and the location to be reached by actuating the predetermined instructions according to the control instruction. For example, each drone may be equipped with a water collection protocol for collecting water from a pool, which is configured to change the parameters of the drone to perform a pool approaching maneuver (by changing speed and attitude) and a subsequent moving maneuver for the interaction means to collect water. It should be noted that, as explained below, each drone is also equipped with sensors that can sense environmental data and, by the management algorithms residing in the data processing unit, can adapt the maneuvers, including those associated with predetermined instructions, to the collected environmental data and to the environment in which it is acting.

Therefore, preferably each drone comprises its own management algorithms, preferably based on neural networks and machine learning, which are configured to optimize the path/trajectory to be followed and the corresponding actions to be performed based on a small amount of information provided by the control instruction such as, for example, location and type of event and information available from the environment. Specifically, each drone may generate secondary control instructions to be shared with the other drones in the group 10 and/or for managing the parameters of the drone after receiving the control instruction of step b). In other words, the group of drones 10 may consider the control instruction as a combination of the control instruction sent to the drones in step b) and the secondary control instructions generated by management algorithms on each drone. These secondary control instructions may comprise correction maneuver signals based on the environmental data collected by the drones.

Therefore, each trained management algorithm (residing in the central processing unit and/or in each data processing unit) can generate control instructions for the drones, optimized for performing maneuvers associated with the actions to be performed and/or for identifying and later routing them along trajectories to the defined location. Advantageously, the management algorithms can also manage and act upon parameter correction due to environmental conditions (e.g. winds, fire-generated heat, etc.) that each drone can detect.

The method comprises a step c) of estimating, for the first drone A, a drone-specific motion state. The motion state is defined by a plurality of motion parameters including at least position, speed and attitude, preferably also parameters associated with interaction means of the drone. Preferably, the step c) is conducted using the data processing unit and the set of sensors associated with each drone.
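As a hypothetical data-structure sketch (the method does not prescribe any particular representation), the motion state could be held as:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionState:
    """Motion parameters named by the method; the vector types, units and
    the optional interaction field are illustrative choices."""
    position: Tuple[float, float, float]   # x, y, z in a shared reference frame
    speed: Tuple[float, float, float]      # velocity components
    attitude: Tuple[float, float, float]   # roll, pitch, yaw (radians)
    interaction: Tuple[float, ...] = ()    # optional interaction-means parameters
```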

It shall be noted that the motion state, as used in the present invention, refers to the position of a drone relative to a reference frame and/or the maneuver that the drone is performing, either associated with a change of direction or with an action that the first drone A is performing (e.g. a maneuver may be understood as the particular attitude of the drone to collect water from a water source and the relative position of the water collection means).

Preferably, the step c) includes estimating the motion state of the first drone A at preset time intervals so that the drone may be tracked in each of its motion states.

According to a preferred embodiment, each motion state is estimated using Kalman filters and/or Kalman-Bayesian-like filters.
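As an illustration of the kind of filtering mentioned here, a single scalar Kalman update step might look as follows; the noise variances are assumed values, and a real implementation would operate on the full position/speed/attitude state.

```python
def kalman_update(x_est, p_est, z, r=1.0, q=0.01):
    """One scalar Kalman filter step: predict, then correct with measurement z.

    x_est, p_est -- prior estimate of a motion parameter and its variance
    z            -- new sensor measurement of that parameter
    r, q         -- measurement and process noise variances (assumed values)
    """
    p_pred = p_est + q                 # predict: uncertainty grows by process noise
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)    # correct toward the measurement
    p_new = (1.0 - k) * p_pred         # updated uncertainty
    return x_new, p_new
```

Fed with repeated measurements of the same quantity, the estimate converges toward the measured value while the variance settles to a small steady state.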

According to a preferred embodiment, the step c) includes sending the motion state to the second drone B preferably in the form of an Sm signal (as shown in Figure 2 by the arrow from the first drone A to the second drone B).

The method comprises a step d) of identifying, by the second drone B, a control state for the first drone A. The control state is defined by a plurality of control parameters comprising at least position, speed and attitude, and preferably also parameters associated with interaction means of the drone.

As used in the present invention, a control state refers to the actual position of the first drone A relative to a reference frame and/or the actual maneuver that the drone is performing, either associated with a change of direction or with an action that the first drone A is performing (e.g. a maneuver may be understood as the particular attitude of the drone to collect water from a water source and the relative position of the water collection means), as viewed and identified by the second drone B.

Specifically, the control state corresponds to the actual state in which the first drone A is.

More in detail, the motion state Sm is an estimate of a position and/or a maneuver associated with an action being performed by the first drone A, whereas the control state is the identification by the second drone of the position and/or maneuver of an action being performed by the first drone and actually in progress. Preferably, the control state perceived by the second drone B for the first drone A is a feedback of the motion state Sm perceived by the first drone A relative to itself.

According to a preferred embodiment, the step d) includes sending the control state to the first drone A in the form of a signal Sc (as shown in Figure 2 by the arrow from the second drone B to the first drone A). It shall be noted that the communication between the drones in the group of drones for sending and receiving signals takes place at specific signal frequencies.

According to a preferred embodiment, the step d) is conducted using the data processing unit and the corresponding set of sensors associated with the second drone B. Specifically, the control state is identified by the set of sensors comprising at least one of camera sensors, lidars, radars and position sensors (e.g. GPS). Thus, the step d) includes identifying the control state for the first drone A. Preferably, a control state is identified by tracking the predetermined time intervals in which the motion states Sm are estimated. By this arrangement, the actual state in which the first drone A is may be assessed at time intervals so that it can be corrected and/or imitated for the subsequent drones.

It shall be noted that the motion state Sm and the control state are representative of at least one of a position of the first drone A relative to a reference system based on the control instruction and a maneuver of the first drone A, associated with an action in progress.

According to the present invention, the signal associated with the motion state Sm transmitted during the step c) to the second drone B and the signal associated with the control state Sc transmitted as a feedback during the step d) to the first drone A along with the maneuver confirmation signals Scon and the state correction signals Scor define a sequential log. Preferably, the sequential log (represented with curved arrows between drones in Figure 2) allows communication between the drones in the group to generate a control chain according to blockchain rules, with each drone representing a node in the chain.
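The sequential log can be pictured as a hash-chained list of records, in the spirit of the blockchain rules mentioned above; the record fields below are hypothetical, not taken from the patent text.

```python
import hashlib
import json

def append_entry(log, entry):
    """Append a record that embeds the hash of the previous record, so that
    any later tampering with an earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"entry": entry, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({"entry": entry, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log
```

Each drone (node) would append its motion states and signals in turn, and any node can re-hash the chain to check its integrity.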

The method comprises a step e) of comparing the motion state Sm and the control state associated with the first drone A and generating, based on the comparison of the motion and control parameters of the corresponding states, a maneuver confirmation signal Scon if the difference between the motion and control parameters falls within a first preset range of values (as shown in Figure 2 by the arrow from the first drone A to the second drone B). However, if the difference between the motion and control parameters falls outside the first preset range of values, the step e) includes generating a state correction signal Scor (as shown in Figure 2 by the arrow from the second drone B to the first drone A). It shall be noted that the comparison step e) for anomaly detection may be conducted using a Kalman-Bayesian-like filter. This filter is used to estimate the error between the motion state and the control state, as if the motion state were the data of a first sensor and the control state were the data of a second sensor of the drone B detecting the same quantities.

Specifically, the step e) includes comparing, using management algorithms, the different parameters associated with each of the states of the first drone to identify whether the motion state Sm estimated by the first drone A matches the control state identified by the second drone B as well as the actual state of the first drone A. By this arrangement, the motion state Sm of the first drone A may be corrected based on the control state or, if no correction has to be made on the motion state Sm of the first drone A, a signal may be generated, as further explained below, to control the second drone B, which is to be considered as the first drone A at the next cycle.

It shall be noted that the comparisons between states are made between states associated with the same preset time interval. Preferably, the step e) includes sending a motion state signal Sm representative of the motion state from the first drone A to the second drone B, where the comparison and generation of a state confirmation signal Scon or a state correction signal Scor are made by the corresponding data processing unit.

Specifically, the first drone A directly controls the second drone B and is passively controlled thereby, which allows replication of a task action (action to be performed) and reaching of a geographic target with trajectories converging on the same routes.

Advantageously, the method 1 introduces the concept of chained master-slave with ambient control logic, decentralized through a shared “unchangeable” data structure in the group of drones 10.

The method comprises a step f) of correcting the motion state Sm of the first drone A based on the correction signal Scor. Specifically, the correction signal comprises correction instructions on motion parameters for changing the motion state Sm of the first drone A. Thus, the first drone A takes corrective actions by the first data processing unit on the motion-imparting and/or interaction means based on the correction signal Scor and changes the motion state Sm.

The method comprises a step g) of repeating the steps c)-f) until a confirmation signal Scon is generated. Specifically, the step g) includes performing a feedback action on the correction signal Scor until a state confirmation signal Scon is obtained. This affords real-time correction for each drone of the group of drones 10.

The method comprises a step h), conducted after the generation of a confirmation signal Scon, of generating a control signal Scorn for the second drone B based on the motion state Sm of the first drone A and on the control instruction. Specifically, the step h) includes generating a control signal Scorn for the second drone B based on the trajectory and/or maneuver that the first drone A has performed. In detail, a control signal Scorn for the second drone B is generated at a time that follows the time of the motion state Sm with which a state confirmation signal Scon is associated. Preferably, the estimation of the motion state Sm, the identification of any corrections by the control state and the control instruction are processed in step h) by the corresponding management algorithm to obtain a control signal Scorn for the second drone B. It shall be noted that the control signal Scorn comprises parameters to be changed for the second drone B (such as position, speed, attitude and/or for the interaction means) to follow the trajectory followed by the first drone A and/or emulate a maneuver associated with a performed task associated with the first drone A.

In detail, the step h) allows the second drone B to be controlled by means of the first drone A so that it will follow the trajectory and/or perform a maneuver of the first drone A. The control signal Scorn is generated within the time interval tB+sA/vB.

It shall be noted that the terms of the interval are defined by the type of data that is detected, certain data transmitted as signals being provided continuously (e.g. the detected speed), other data, such as the perception of detected events (e.g. winds or fires) being transmitted when detected and/or at intervals defined before the mission.

Preferably, the steps c), d), e) and h) are conducted according to a calculation process based on a pseudo-Bayesian structure. The method comprises a step i) of performing a motion maneuver for the second drone B based on the control signal. Specifically, once the trajectory and/or the action to be performed has been identified and expressed as a control signal Scorn, the control signal Scorn is sent to the processing unit of the second drone B in signal communication with the motion-imparting and interaction means. This control signal Scorn, representative of parameters to be changed to follow the trajectory and/or perform a maneuver associated with an action, allows the processing unit to act on the motion-imparting and interaction means to change the parameters and follow the trajectory and/or the maneuver.

The method comprises a step l) of cyclically repeating the steps c) to i) for each drone of the formation according to the control instruction. Specifically, at the next cycle the second drone B will act as the first drone A and the drone next to the new first drone A will act as the second drone B. These cycles are repeated until the tail drone C is reached as a second drone B and are then restarted from the lead drone T as the first drone A. Preferably, the steps c)-l) are conducted for each time interval from the lead drone T to the tail drone C.
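The cyclic repetition of step l) can be sketched as a loop over consecutive (A, B) pairs from the lead drone T to the tail drone C. The function names below are hypothetical stand-ins for steps c) through i); they are not defined by the patent.

```python
# Minimal sketch of step l): each drone acts as "first drone A" for its
# successor; the inner loop is the feedback of steps c)-g).
def run_control_cycle(formation, estimate, identify, compare, correct, maneuver):
    """formation: list of drones ordered from lead T to tail C.
    estimate/identify/compare/correct/maneuver: callables standing in for
    steps c), d), e), f) and h)-i) of the method."""
    for a, b in zip(formation, formation[1:]):    # (A, B) pairs along the chain
        while True:
            sm = estimate(a)                      # step c): motion state of A
            sc = identify(b, a)                   # step d): control state for A
            signal, corr = compare(sm, sc)        # step e)
            if signal == "Scon":
                break                             # state confirmed
            correct(a, corr)                      # step f), then repeat (step g)
        maneuver(b, sm)                           # steps h)-i): control B from A
```

At the next overall cycle the caller would restart from the lead drone, matching the patent's description of restarting from the lead drone T once the tail drone C has been reached.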

Advantageously, the above-discussed repetition of the steps for all the drones in the group of drones 10 provides curves on both the actual trajectory of the group and the accumulated changes of trajectory of the drones, thereby enabling correction calculations across the group. In addition, this repetition of the steps enables the group of drones to perform a coordinated and correct maneuver to perform a given action and to follow a given trajectory for both each drone of the group and the entire group of drones.

According to a preferred embodiment, the method comprises a step of storing, in a log R (as shown in Figure 2 with straight lines converging at the center of the formation of the group of drones) shared and distributed by the drones of the formation, the motion states of each drone, the associated state confirmation signals Scon for each estimated motion state, and the generated control signals Scorn. Storing, as explained below, takes place according to the thematic areas associated with the control instructions and/or the corrective maneuvers. Specifically, the information in the shared and distributed log R is divided into thematic areas such as motion parameters (attitude, speed and position), trajectory instructions and environmental perceptions.

With a shared and distributed log R, all the drones in the group of drones share the motion and control states of each drone and any corrective maneuvers generated in step e) by any of the drones, and as explained below, any additional corrective maneuvers based on the environment and/or active control by the management algorithms and/or the basic programming of each drone.

Specifically, a shared and distributed log R refers to a database similar to a blockchain, that can be accessed in real time by each drone of the formation, in which the above data is stored in real time to efficiently and effectively control the formation. In addition, the log not only stores, but allows each instruction to be executed and/or any calculated corrective maneuver to be validated and/or contested by each drone (node). It shall be noted that the shared and distributed log R, in combination with the neural network-based management algorithms, enables the definition of an ambient intelligent decision-making process that is able to control the drones in executing the control instruction. In other words, the method affords a combination of artificial intelligence and blockchain algorithms for efficient control of a group of drones.
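The blockchain-like behavior of the shared and distributed log R can be illustrated with a minimal hash chain. This sketch shows only the tamper-evidence ("unchangeable") property; it does not model consensus, validation by nodes, or distribution across drones, and all names are assumptions for illustration.

```python
import hashlib
import json

class SharedLog:
    """Append-only log where each entry is chained to the previous one
    by a SHA-256 hash, so any later tampering is detectable."""

    def __init__(self):
        self.blocks = []  # each block: {"data", "prev", "hash"}

    def append(self, data: dict) -> dict:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"data": data, "prev": prev}, sort_keys=True)
        block = {"data": data, "prev": prev,
                 "hash": hashlib.sha256(payload.encode()).hexdigest()}
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute the chain; any modified block breaks verification."""
        prev = "0" * 64
        for b in self.blocks:
            payload = json.dumps({"data": b["data"], "prev": prev}, sort_keys=True)
            if b["prev"] != prev or b["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = b["hash"]
        return True
```

In the method described above, the stored entries would be the motion states, state confirmation signals Scon and control signals Scorn, grouped by thematic area.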

More in detail, this step of storing occurs iteratively for each drone after obtaining a state confirmation signal Scon for a certain motion state of a drone and following the corresponding generation of the control signal Scorn for the next drone (second drone B). In other words, the sequential log, defined by the signals exchanged between the first drone A and the second drone B (motion state signal, control state signal, state confirmation signal and state correction signal), is shared with the distributed and shared log R so that the sequential log associated with the successive drones may be distributed with the rest of the drones in the group. In practice, there will be a process of writing on the log in sequential form and a process of writing in universal form. It shall be noted that if a state correction signal Scor is generated, the step of storing will be stopped until generation of a state confirmation signal Scon, so that a correct motion state can be stored and hence shared.

In other words, the first drone A and the second drone B, and then sequentially and iteratively all the drones in the group of drones 10, once the motion state has been estimated and confirmed after generation of the state confirmation signal Scon, will store the motion state of the first drone A, the control state identified by the second drone B and the state confirmation signal Scon for the stored motion state on the shared and distributed log R. Then, after generation of the control signal Scorn, the latter is also stored in the shared and distributed log R. If a state correction signal Scor is to be generated, then the step of storing enters a standby mode until the motion state Sm is corrected and the state confirmation signal Scon is later generated. In detail, as shown in Figure 2, the method affords the definition of a blockchain defined by drones (nodes) and the corresponding shared and distributed log R as well as by the sequential log. Clockwise arrows indicate the sequential interaction between a first drone A and a second drone B (in Figure 2 these drones are all arranged, for example, in an orderly fashion and per thematic area), and anti-clockwise arrows indicate the feedback signals. The arrows that come together at the center are the union of processing and the resulting signals/controls in the log shared by all nodes.

Figure 3 shows a schematic example of the connection of the nodes (drones) of the shared and distributed log R according to the present invention. This connection is represented as the intersection points of a Venn diagram. Each node can be understood as "first drone A" and "second drone B" depending on the subject under consideration, e.g. the control instruction and/or a corrective maneuver (motion, perception, task, etc.). Each drone A will interact with the drone B for a given subject, and there will be mutual intersection areas (the unintersected areas are those indicated by "node 1", "node 2", ...); the central area is common to all and is therefore the intersection of all thematic shares of the individual drones (shared and distributed log R). The first drone A and second drone B may be distributed across the formation without being necessarily adjacent. That is, there may be two or more A-B sets for each node: for example, the drone 1 corresponds to a first drone A for motion relative to the drone 2, which is the second drone B, but the drone 1 may be the first drone A for fire perception and the drone 20 may be the second drone B for such perception; similarly, the drone 2 might be the first drone A for the perception of abnormal wind or disturbances of a particular signal, and the drone 1 might be the second drone B for the same issue. The outcome of the interaction is shared in the common area. This facilitates interaction between drones and control thereof based on the information that each drone receives, processes, shares and, as explained below, validates.

According to a preferred embodiment, the step of storing includes storing motion states using the relevant first drones A, the corresponding state confirmation signals Scon (and preferably the control state) using the relevant second drones B and the control signal Scorn using the relevant first drones A and/or second drones B.

Thus, each drone has a general knowledge of the state of the group of drones 10 (and can approve it and correct individual actions). Preferably, the step of storing comprises, for each drone, the step of encrypting, by an encryption associated with the corresponding drone, the motion states, the corresponding state confirmation signals and the control signals. More preferably, the shared and distributed log R uses algorithms to manage splitting of the stored signals into thematic areas such as motion parameters, perception, alarms, task actions. Specifically, the shared and distributed log R is configured to define thematic areas depending on the control instruction (e.g. how to move the drones, where to go, what action to perform) or the corrective maneuvers. Here, the encryption performed by each drone is not only important for the uniqueness of the signal between the nodes (drones) and for identification of each of them, but also for identification of the proper thematic area of the shared and distributed log R. In detail, the encryption step includes encrypting each thematic area of the log R to uniquely identify the thematic area. Thus, the signal/command processed for each node may then be extrapolated using node-specific encryption or a node-specific frequency to identify the signal from the thematic log.

This encryption step may also be used for the validation step as described below.
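The thematic keying described above can be sketched with a stdlib-only stand-in: the patent prescribes asymmetric (public/private key) encryption, and HMAC tagging is substituted here purely for illustration of how a theme-specific key can identify the thematic area of a signal in the log R. The keys and theme names below are invented.

```python
import hashlib
import hmac

# Theme-specific keys (invented for illustration; the patent uses
# asymmetric key pairs per thematic area).
THEME_KEYS = {"motion": b"key-motion",
              "perception": b"key-perception",
              "task": b"key-task"}

def tag_signal(theme: str, signal: bytes) -> bytes:
    """Tag a signal with its theme-specific key (stand-in for encryption)."""
    return hmac.new(THEME_KEYS[theme], signal, hashlib.sha256).digest()

def identify_theme(signal: bytes, tag: bytes):
    """Recover the thematic area by checking the tag against each theme key;
    returns None if the signal does not belong to any known thematic area."""
    for theme, key in THEME_KEYS.items():
        expected = hmac.new(key, signal, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            return theme
    return None
```

A tagged signal can thus be routed to the proper thematic area of the log, mirroring the identification role that the patent assigns to the per-theme encryption.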

According to a preferred embodiment as shown in Figure 7, the encryption step includes receiving signals from drones that exchange signals according to the above described process (Figures 5 and 6). The diagram of Figure 7 shows the steps of the encryption process for transmitting signals to the shared and distributed log R. In particular, Figure 7 shows how the resulting signal and processing of the two preceding diagrams, Figures 5 and 6, are transmitted to the shared log (virtual sharing node for sharing the trajectories to be followed and/or the maneuvers associated with the action to be performed). For simplicity, reference is made to a thematic area identified in the log R considering, as already mentioned, that the log is divided into thematic areas such as speed/motion, state, perceptions, tasks etc. For each thematic area of the shared and distributed log R the following occurs in an independent and overlapping manner. The signal transmitted to the log is also encrypted with its theme-specific key and using asymmetric encryption (public and private key, hash). Then, the information (the action/perception) is stored, translated into the frequency of the individual nodes, re-encrypted (so that it can be considered as uniquely and unequivocally directed to a specific node from a specific log) and transmitted to the nodes as a signal/control. Again, as shown in the figure, an additional asymmetric encryption occurs. Finally, when the new action is performed or a state is validated (which will lead to a node/drone processing an action), the cycle is repeated, resuming from the sequential transmission.

This ensures the integrity and uniqueness of the motion states of the group of drones and the associated calculations, as well as security against hacking and external control attempts. In addition, encryption ensures that the motion states of each group of drones can be deemed to be intact and unambiguous.

It should be noted that the above-described control steps c)-i) on the first drone A and the second drone B, as well as the step of storing on a shared and distributed log R, are repeated cyclically and iteratively (for each identifiable thematic area) on the entire group for each time interval to ensure proper control of the entire group of drones 10. Thus, the feedback and correction logic on the shared and distributed log R affords implementation of corrections (and hence actual controls) by the lead drone T to the tail drone C and/or by intermediate drones when the need for corrective maneuvers is identified. This operation may be better understood by comparison with the naturally-occurring control implemented by an earthworm or a centipede, which adapt the movement of the central part of their body from both ends. The shared and distributed log R both affords uniqueness of data and controls and avoids the presence of a single master-slave unidirectional relationship: each drone of the group is both master and slave.

According to a preferred embodiment, the step of storing comprises a step, conducted between the step h) and the step i) at each cycle, of validating, by one or more drones in the group, the motion state of each drone, the relevant feedback signal Scon and the control signals Scorn stored according to the control instruction, before executing the step i). In particular, the validation step can monitor through the cycles the proper execution of the control instruction by checking the motion states based on environmental changes and/or the mismatch with predetermined motion states.

Preferably, the validation step includes comparing the motion state and a corresponding default motion state associated with the control instruction to make sure that each drone is executing the control instruction. It should be noted that the default motion state is provided by the control instruction and is defined by default motion parameters (such as position, speed and attitude and/or parameters for the interaction means). This default motion state is representative of a motion state that the first drone should assume based on the control instruction over a given time interval. Specifically, the parameters associated with the motion states are compared to the default motion state.

After comparison, the validation step includes generating

- a state modification signal Smod if the difference between the motion parameters and the default motion parameters differs from a second preset range of values; or

- a successful validation signal Scv to proceed with the next steps if the difference between the motion and preset motion parameters falls within the second preset range of values. Specifically, the generation of a modification signal Smod requires the cycle to enter a stand-by mode until the successful validation signal Scv is generated.

Then, the validation step includes correcting the motion state of one of the drones in the group according to the modification signal comprising the parameters to be modified to obtain the default motion state, thus avoiding the execution of the step i) until the motion state of the first drone A has been corrected. In particular, the validation step affords further control of the group of drones in executing the control instruction. Finally, the validation step includes repeating the steps up to step i) until the successful validation signal has been generated to perform the step i).
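The validation step above can be sketched as a comparison of the motion parameters with the default motion parameters against a second preset range, yielding either a successful validation signal Scv or a state modification signal Smod. The parameter names and range values below are illustrative assumptions.

```python
# Second preset range of values: one tolerance per motion parameter (assumed).
SECOND_RANGE = {"position": 1.0, "speed": 0.5, "attitude": 0.1}

def validate(motion_params: dict, default_params: dict):
    """Compare the motion state with the default motion state from the
    control instruction; return ("Scv", None) on success, or
    ("Smod", corrections) with the parameters to be modified."""
    corrections = {}
    for name, value in motion_params.items():
        if abs(value - default_params[name]) > SECOND_RANGE[name]:
            # Amount to apply to reach the default motion state.
            corrections[name] = default_params[name] - value
    if corrections:
        return ("Smod", corrections)  # cycle stands by until correction
    return ("Scv", None)              # proceed with step i)
```

As in the patent, a generated Smod would suspend the cycle (step i) is not executed) until the motion state is corrected and a Scv is produced.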

According to a preferred embodiment, the method includes a step of acquiring environmental data using the sensors of each drone and generating corrective maneuver signals Sman by acting on the motion parameters of each drone based on such environmental data as well as on the environment itself. Specifically, the present step includes generating corrective maneuvers using management algorithms to properly follow the trajectory and/or perform a maneuver that has been set according to the control instruction and/or the default instructions and/or the control signal. In detail, the step of generating corrective maneuver signals includes processing environmental data, recognizing any environmental data-dependent trajectory and/or maneuver alterations by comparison between the motion state and the predetermined motion state and taking countermeasures in the form of corrective maneuvers to correct these alterations. More in detail, this step allows the group of drones 10 to be managed based on environmental data with the same logic as described above, concerning the control of the first drone A and the second drone B throughout the formation, the steps of storing and validating. In other words, the present step affords control by optimization of proper execution of the control instruction and association of any alterations in the motion state from the default motion state in the environment and to act accordingly with corrective maneuvers. It shall be noted that each drone in the group of drones may be identified as a node in an interaction chain that requires validation in order to allow control and management of the other nodes. This affords more secure and efficient control of the group of drones 10.

According to a preferred embodiment, the validation step, as schematically shown in Figure 4, includes, once the first drone A and the second drone B have shared the signals with the shared and distributed log R, transmitting the shared information with each drone in the group of drones so that each of them can sequentially or universally validate, execute, reject and possibly correct the signal. In the example as shown in Figure 4, with the transmission of the signal and the subsequent verification by validation an anomaly has been identified, which led to the generation of a modification signal Smod and/or a corrective maneuver signal Sman which might otherwise be a successful validation signal Scv.

According to a preferred embodiment, the method 1 comprises a step of identifying the lead drone T based on the control instruction. Specifically, with the step of identifying the lead drone, the method can enable target control by the drones in the group of drones 10. In detail, if the control instruction includes the instruction to identify a target (e.g. fire-fighting, identification of people in distress or water resources), such target may be identified by any one of the drones in the group that will share this information with the shared and distributed log R. Thus, this drone acts as the lead drone T and becomes the first drone A for the next drone to reach the target if the other drones acknowledge the observation in the log R by validation. Preferably, the step of identifying the lead drone includes the step of defining a target to reach and/or an action to be performed based on the control instruction as previously anticipated. Then, the step of identifying the lead drone comprises the step of acquiring environmental data for each drone and comparing the acquired environmental data with reference environmental signals based on the control instruction. Specifically, the steps of acquiring and comparing are conducted using the management algorithm which affords analysis of acquired images or environmental data such as temperature, noise, humidity or air composition. Finally, the step of identifying the lead drone includes assigning the function of a lead drone to a drone of the formation based on the comparison and on a target recognition acknowledgement by the remaining drones.

It should be noted that the step of identifying the lead drone is a perception specified for a drone in the log R and shared by all the drones subscribing to the log R. The same applies to "environmental" perception for detection of atmospheric, navigation (GPS), infrared data, etc. These environmental data perceptions provide increasing approximation of detection data to the actual dimension being analyzed and also reveal unambiguously the presence of sudden changes in working conditions. Therefore, with these steps, the method also affords "on-mission" data collection (i.e. during the execution of the control instruction). Such "on-mission" data relates to optimal execution of the final task that, as mentioned above, is easily managed by neural network- and machine learning-based management algorithms. Specifically, the first drone A (which is the first by position in the formation or by target detection) is the first to perform the maneuver associated with the action to be performed (releasing water on the fire) or to follow the trajectory. For simplicity, a maneuver is analyzed without excluding the application of the same process to following a trajectory. Once the control instruction has been received, this maneuver is performed in accordance with default instructions (i.e. in accordance with an initial programming) based on the environmental conditions under which the maneuver is to be performed. The maneuver shall be performed for each drone as described above, to control each drone from the lead drone to the tail drone. This maneuver, based on environmental data acquired at each time point, is corrected in real time by communication between drones. The subsequent drones will attempt to further correct any errors by approximating the most efficient task execution results.
Thus, each drone “contributes” to group perception as well as to corrections or maneuvering guidance through the use of management algorithms (including classification algorithms, detection, and Artificial Intelligence decision-making processes). For example, in a fire-extinguishing mission (the same being also applicable to the collection of water from water resources or to agricultural tasks), the drones, during their extinguishing maneuvers, account for the heat released by the flames to manage the temperature that might damage drones and batteries as well as proper release of water. With the steps of the method, the group of drones can efficiently follow a trajectory and/or perform an action.

According to a preferred embodiment, the method 1 comprises a step, carried out before step c), of starting each drone of the formation from the lead drone T to the tail drone C, preferably based on the control instruction. The step of starting includes the step of estimating, for the first drone, an initial position, preferably associated with a geographic location. Then, the step includes moving the first drone A a first distance from the initial position along a first forward direction of movement and a second distance along a second vertical direction perpendicular to the first forward direction of movement. Specifically, each drone from the lead drone T will move along the first forward direction of movement parallel to the ground and along the second vertical direction perpendicular to the ground.

It shall be noted that each drone is associated with a padding area which surrounds the drone. Specifically, the padding area is defined by a fictitious (virtual) three-dimensional volume that encloses the drone. The volume has a spherical and/or polyhedral shape characterized by diameter and/or side length values.

In detail, each drone is configured to control other drones outside the padding area, and to have a full control inside the padding area over the motion state and thus over the motion parameters managed by the data processing unit, based on the control instruction and the signals received. According to a preferred embodiment, the padding area is spherical and characterized by the diameter of the sphere. Specifically, the step of moving includes moving the first drone a first distance equal to the value of the diameter of the padding area and a second distance equal to the value of the diameter of the padding area.

Then, once these steps of moving have been completed, the step of starting comprises a step of stopping the first drone in a second position based on the first and second distance. Specifically, the first drone moves from its initial position to a hovering position. The step comprises the step of repeating for the second drone the steps of moving and stopping, so that the second drone B will move from the rest configuration to the stop configuration.

Finally, the step of starting includes sequentially repeating the steps of moving and stopping for the subsequent drones according to the previously described first-and- second drone logic from the lead drone T to the tail drone C.

It shall be noted that the step of starting is completed when all the drones in the group have moved from the initial position to the second position. The steps following the step b) may then be initiated to move the drones along a trajectory and/or to perform a maneuver associated with an action to be performed.
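Assuming a spherical padding area, the step of starting can be sketched as follows: each drone moves one padding diameter along the forward direction and one padding diameter along the vertical direction from its initial position, then hovers in the resulting second position. The coordinate convention and function name are illustrative assumptions.

```python
def start_formation(initial_positions, padding_diameter):
    """initial_positions: list of (x, y, z) tuples ordered from the lead
    drone T to the tail drone C; x is the forward direction (parallel to
    the ground) and z the vertical direction. Returns the hovering
    (second) positions, computed sequentially for each drone."""
    hover_positions = []
    for x, y, z in initial_positions:
        # First distance along the forward direction, second distance
        # along the vertical direction, both equal to the padding diameter.
        hover_positions.append((x + padding_diameter, y, z + padding_diameter))
    return hover_positions
```

In the method, these movements would be performed one drone at a time, each drone stopping in its hovering position before the next one starts.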

The invention also relates to a drone control system configured to implement the method for controlling a group of drones as described above.

The system comprises a central processing unit configured to generate and send a control instruction, preferably using a management algorithm residing within the central processing unit. Preferably, the central processing unit is located at an operating site that may be associated with a yard and a track for starting drones.

The system further comprises a group of drones comprising a plurality of drones in signal communication with one another, at least one of them being in signal communication with the central processing unit. Specifically, each drone comprises a data processing unit in signal communication with the central processing unit and preferably comprising a management algorithm of the above-described type.

Each drone also comprises a set of sensors associated with the data processing unit of the drone. This data processing unit, along with the set of sensors, are configured to estimate the motion state, identify the control state and preferably acquire environmental data for corrective maneuvers.

According to a preferred embodiment, each data processing unit is configured to receive the control instructions, generate the above signals and manage the motion parameters based on the control instructions by acting on the motion-imparting and/or interaction means. Specifically, each data processing unit is configured, based on the instructions and signals received, to follow the trajectory of the drone and/or to perform a drone maneuver associated with an action to be performed by adjusting the parameters of position, speed, attitude and preferably the interaction means.

It shall be noted that the trajectory and/or the maneuver are then performed by the remaining drones according to the above-described steps.

According to a preferred embodiment, each data processing unit together with the set of sensors is also configured to acquire environmental data and process corrective maneuvers using the management algorithm.

Preferably, the data processing unit is further configured to conduct the step of storing and validating to improve control over the group of drones.

Each drone comprises motion-imparting means, designed to be controlled by the data processing unit and configured to move the corresponding drone along trajectories and/or to perform maneuvers associated with the actions to be performed. As used in the present invention, the motion-imparting means are propellers and associated control devices for adjustment of the different parameters. Specifically, for each drone, the data processing unit is in signal communication with the motion-imparting means to act on them based on instructions and signals. Preferably, each drone also comprises interaction means that can be controlled by the data processing unit based on the actions to be performed. Such interaction means are configured to interact with the environment based on the control instruction and/or the signals sent/generated to/from the drone. Such interaction means may include automated water collection and release devices and/or transport devices (such as a rope or a stretcher).