

Title:
SYSTEMS AND METHODS FOR CONFIGURING A SWARM OF DRONES
Document Type and Number:
WIPO Patent Application WO/2022/261678
Kind Code:
A1
Abstract:
A central command system may determine a mission plan for cooperative execution by a swarm of drones comprising one or more sensors to capture data in accordance with the mission plan. The mission plan may specify requirements for fault tolerance or parallelism and a redundancy structure for the swarm. The mission plan may be transmitted to a remote drone swarm controller device that determines a swarm configuration based on the mission plan and available drones. The controller may transmit instructions regarding the swarm configuration to dispatch a resilient swarm of drones. During execution of the mission plan, drones in the resilient swarm may be monitored by other drones in the swarm, by the remote drone swarm controller, and/or by the central command system. The redundancy structure provides for failover options for one or more drones in the resilient swarm.

Inventors:
ARKSEY MATTHEW LAURENCE (US)
BLAAUW DEON (US)
HAHN LUCAS THOMAS (US)
MCQUEEN JOHN GORDON (US)
NAKAJIMA SATOSHI (US)
SHEFNER GUY DAVID BYRON (US)
TONG RICHARD CHIA TSING (US)
Application Number:
PCT/US2022/072890
Publication Date:
December 15, 2022
Filing Date:
June 10, 2022
Assignee:
NETDRONES INC (US)
International Classes:
G05D1/10; B64C39/02; G01C21/20; G05D1/00; G07C5/00; G08G5/00
Foreign References:
US20180188724A1 (2018-07-05)
US20190324456A1 (2019-10-24)
US9858822B1 (2018-01-02)
US20180327091A1 (2018-11-15)
US20200233411A1 (2020-07-23)
US20180157255A1 (2018-06-07)
Attorney, Agent or Firm:
CHOI, Hogene et al. (US)
Claims:
CLAIMS

1. A method comprising, by one or more server computing devices of a central command system: determining a mission plan for resilient execution by a swarm of drones, wherein one or more of the drones in the swarm comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the swarm corresponding to the requirements; transmitting instructions representing the mission plan to a remote swarm controller device; receiving mission-related data captured by the swarm; and displaying results of the mission plan.

2. The method of claim 1, further comprising: determining a swarm configuration that specifies a number of drones in the swarm, specific types of drones to include in the swarm, or specific roles for drones in the swarm.

3. The method of claims 1 or 2, further comprising: receiving a request from the remote swarm controller device for replacement drones or an updated mission plan; generating, based on the redundancy structure, a determination whether to dispatch one or more replacement drones, generate an updated mission plan, or both; and transmitting, based on the determination, a response to the request from the remote swarm controller device.

4. The method of claim 3, further comprising: assessing completion status of the mission plan based on the received mission-related data, wherein the generating the determination is further based on the completion status.

5. The method of any of claims 1 to 4, further comprising: detecting a position of each of one or more drones in the swarm; calculating a correction for one or more of the drones in the swarm; and transmitting, to the remote swarm controller device, information based on the calculated correction to adjust a timing or position of one or more of the drones in the swarm.

6. One or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by one or more server computing devices to perform the steps of any of claims 1 to 5.

7. A system comprising one or more server computing devices, each of the server computing devices comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of any of claims 1 to 5.

8. A method comprising, by a drone swarm controller device: receiving, from a central command system, a mission plan for resilient execution by a swarm of drones, wherein one or more of the drones in the swarm comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the swarm corresponding to the requirements; determining a swarm configuration based on the mission plan and a set of available drones and capabilities of the available drones, wherein the swarm configuration includes a number of the available drones in accordance with the specified redundancy structure; transmitting instructions corresponding to the swarm configuration and information related to the mission plan to dispatch a plurality of the available drones as a resilient swarm; after dispatch, receiving status information from the drones in the resilient swarm during execution of the mission plan; receiving sensor data captured by one or more sensors of the resilient swarm; and transmitting, to the central command system, mission-related data generated based on the sensor data or the status information.

9. The method of claim 8, further comprising: transmitting information to synchronize the drones in the resilient swarm.

10. The method of claims 8 or 9, further comprising: merging the mission-related data captured by each of the drones in the resilient swarm into a single dataset.

11. The method of any of claims 8 to 10, further comprising: detecting a fault involving one or more of the drones in the resilient swarm; and either: transmitting reconfiguration instructions to the drones in the resilient swarm; transmitting instructions to one or more of the available drones not included in the resilient swarm to join the resilient swarm; or transmitting a request to a central command system for replacement drones or an updated mission plan.

12. One or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by a drone swarm controller device to perform the steps of any of claims 8 to 11.

13. A drone swarm controller device comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of any of claims 8 to 11.

14. A method comprising, by a plurality of drones in a resilient swarm: receiving, from a drone swarm controller device, dispatch instructions corresponding to a swarm configuration and information related to a mission plan for resilient execution by the drones, wherein one or more of the drones comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the resilient swarm corresponding to the requirements; determining, by each of the drones and based on the dispatch instructions, a role of the drone in the resilient swarm based on the swarm configuration and the specified redundancy structure; commencing, by the resilient swarm, execution of the mission plan; and during execution of the mission plan: sensing, by each of the drones, a relative position of the drone with respect to other drones in the resilient swarm; transmitting, by each of the drones in the resilient swarm and to the drone swarm controller device, status information regarding the drone; capturing, by one or more sensors of one or more of the drones in the resilient swarm, sensor data; and transmitting, by the resilient swarm, mission-related data generated based on the sensor data or the status information.

15. The method of claim 14, further comprising: exchanging, with the other drones in the resilient swarm, information to calibrate the drones in the resilient swarm.

16. The method of claims 14 or 15, further comprising: detecting that another drone in the resilient swarm is absent or out of position; and in response to detecting, transmitting a notification to the drone swarm controller device identifying the other drone.

17. The method of any of claims 14 to 16, further comprising: receiving, by the resilient swarm and from the drone swarm controller device, correction information of one or more of the drones in the resilient swarm; and spatiotemporally adjusting one or more of the drones in the resilient swarm.

18. The method of any of claims 14 to 17, wherein the mission-related data is transmitted from the resilient swarm directly to the central command system or to the drone swarm controller device.

19. One or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by a drone swarm controller device to perform the steps of any of claims 14 to 18.

20. A drone swarm controller device comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of any of claims 14 to 18.

Description:
SYSTEMS AND METHODS FOR CONFIGURING A SWARM OF DRONES

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to U.S. Provisional Patent Application No. 63/209,392, filed June 11, 2021, the content of which is hereby incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] This disclosure generally relates to drone systems.

INTRODUCTION

[0003] Over the last 50 years, there have been successive waves of billions of end-user computing devices, driven by tremendous cost declines in semiconductor hardware. In the 1980s, microprocessors made computation essentially free and ubiquitous. This enabled a new generation of software that led to the “Personal Computer (PC) revolution.”

[0004] In the 2000s, the semiconductor revolution arrived for communications technology, connecting devices worldwide. Free computation and communications enabled a new generation of software that led to the “Mobile revolution.”

[0005] In the 2010s, the integration of low-cost imaging chips made cameras and other sensors essentially free and ubiquitous. Free computation, communications, and sensing enabled a new generation of Internet of Things (IoT) software, which ushered in a revolution in Smart Home, Office, and Factory technology. IoT refers to physical objects with sensors, processing ability, software, and other technologies configured to connect and exchange data with other devices and systems over the Internet or other communications networks.

[0006] Now, in the 2020s, the next revolution has arrived. Motors and batteries driven by hardware technology have made motion essentially free. However, there is still a need for a new generation of software that takes advantage of free computation, communications, sensing, and now motion.

[0007] The history of innovation has followed a cycle over the last 40 years. As seen, for example, with reference to FIG. 4, cycle 400 begins with a hardware innovation that makes some formerly expensive part of the ecosystem effectively “free,” leading to the personal computer (PC) revolution. That is, the cost of computation in the 1980s dropped so much that use of the PC became ubiquitous. The semiconductor revolution in turn begot the mobile revolution. This enabled the software revolution that exploited that power, ushering in the IoT era. Finally, the end-user revolution arrived as first millions and then billions of devices became available to everyone.

[0008] Innovation then proceeded to the next component: free motors, which encompass everything that can move and everything that can be controlled. The final stage of cycle 400 is the set of revolutions now taking place to enable the “drone revolution.”

[0009] Some of the issues faced by drones today are that they are expensive and manually operated (or programmed), so completing their missions takes significant time. As an example, today there are approximately 300,000 Part 107 FAA-certified drone pilots, and a typical flying drone mission takes 1-5 days to a) plan the mission(s), b) fly the required paths, c) integrate the captured images into a 3D model and clean up artifacts, d) prepare the 3D models for use and/or sale, and e) ensure that the appropriate regulatory and privacy rules were followed in taking the images. The main problems across the entire workflow of flying drones to gather images are that each stage is slow and requires significant manual intervention; this significantly limits the volume and quality of imaging that can be done today.

SUMMARY OF PARTICULAR EMBODIMENTS

[0010] A central command system may determine a mission plan for resilient execution by a swarm of drones comprising one or more sensors to capture data in accordance with the mission plan. The mission plan may specify requirements for fault tolerance or parallelism and a redundancy structure for the swarm. The mission plan may be transmitted to a remote drone swarm controller device that determines a swarm configuration based on the mission plan and available drones. The controller may transmit instructions regarding the swarm configuration to dispatch a resilient swarm of drones. During execution of the mission plan, drones in the resilient swarm may be monitored by other drones in the swarm, by the remote drone swarm controller, and/or by the central command system. The redundancy structure provides for failover options for one or more drones in the resilient swarm.

[0011] Particular embodiments include a central command method comprising, by one or more server computing devices of a central command system: determining a mission plan for resilient execution by a swarm of drones, wherein one or more of the drones in the swarm comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the swarm corresponding to the requirements; transmitting instructions representing the mission plan to a remote swarm controller device; receiving mission-related data captured by the swarm; and displaying results of the mission plan. Particular embodiments of the central command method may further comprise determining a swarm configuration that specifies a number of drones in the swarm, specific types of drones to include in the swarm, or specific roles for drones in the swarm. Particular embodiments of the central command method may further comprise receiving a request from the remote swarm controller device for replacement drones or an updated mission plan; generating, based on the redundancy structure, a determination whether to dispatch one or more replacement drones, generate an updated mission plan, or both; and transmitting, based on the determination, a response to the request from the remote swarm controller device. Particular embodiments of the central command method may further comprise assessing completion status of the mission plan based on the received mission-related data, wherein the generating the determination is further based on the completion status. Particular embodiments of the central command method may further comprise detecting a position of each of one or more drones in the swarm; calculating a correction for one or more of the drones in the swarm; and transmitting, to the remote swarm controller device, information based on the calculated correction to adjust a timing or position of one or more of the drones in the swarm. Particular embodiments may provide one or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by one or more server computing devices to perform the steps of the central command method. Particular embodiments may provide a system comprising one or more server computing devices, each of the server computing devices comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of the central command method.

[0012] Particular embodiments include a swarm controller method comprising, by a drone swarm controller device: receiving, from a central command system, a mission plan for resilient execution by a swarm of drones, wherein one or more of the drones in the swarm comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the swarm corresponding to the requirements; determining a swarm configuration based on the mission plan and a set of available drones and capabilities of the available drones, wherein the swarm configuration includes a number of the available drones in accordance with the specified redundancy structure; transmitting instructions corresponding to the swarm configuration and information related to the mission plan to dispatch a plurality of the available drones as a resilient swarm; after dispatch, receiving status information from the drones in the resilient swarm during execution of the mission plan; receiving sensor data captured by one or more sensors of the resilient swarm; and transmitting, to the central command system, mission-related data generated based on the sensor data or the status information. Particular embodiments of the swarm controller method may further comprise transmitting information to synchronize the drones in the resilient swarm. Particular embodiments of the swarm controller method may further comprise merging the mission-related data captured by each of the drones in the resilient swarm into a single dataset. Particular embodiments of the swarm controller method may further comprise detecting a fault involving one or more of the drones in the resilient swarm; and either: transmitting reconfiguration instructions to the drones in the resilient swarm; transmitting instructions to one or more of the available drones not included in the resilient swarm to join the resilient swarm; or transmitting a request to a central command system for replacement drones or an updated mission plan. Particular embodiments may provide one or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by a drone swarm controller device to perform the steps of the swarm controller method. Particular embodiments may provide a drone swarm controller device comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of the swarm controller method.

[0013] Particular embodiments include a mission execution method comprising, by a plurality of drones in a resilient swarm: receiving, from a drone swarm controller device, dispatch instructions corresponding to a swarm configuration and information related to a mission plan for resilient execution by the drones, wherein one or more of the drones comprises one or more sensors to capture data in accordance with the mission plan, and wherein the mission plan specifies (1) requirements for fault tolerance or parallelism and (2) a redundancy structure for the resilient swarm corresponding to the requirements; determining, by each of the drones and based on the dispatch instructions, a role of the drone in the resilient swarm based on the swarm configuration and the specified redundancy structure; commencing, by the resilient swarm, execution of the mission plan; and during execution of the mission plan: sensing, by each of the drones, a relative position of the drone with respect to other drones in the resilient swarm; transmitting, by each of the drones in the resilient swarm and to the drone swarm controller device, status information regarding the drone; capturing, by one or more sensors of one or more of the drones in the resilient swarm, sensor data; and transmitting, by the resilient swarm, mission-related data generated based on the sensor data or the status information. Particular embodiments of the mission execution method may further comprise exchanging, with the other drones in the resilient swarm, information to calibrate the drones in the resilient swarm. Particular embodiments of the mission execution method may further comprise detecting that another drone in the resilient swarm is absent or out of position; and in response to detecting, transmitting a notification to the drone swarm controller device identifying the other drone. Particular embodiments of the mission execution method may further comprise receiving, by the resilient swarm and from the drone swarm controller device, correction information of one or more of the drones in the resilient swarm; and spatiotemporally adjusting one or more of the drones in the resilient swarm. The mission-related data may be transmitted from the resilient swarm directly to the central command system or to the drone swarm controller device. Particular embodiments may provide one or more computer-readable non-transitory storage media embodying software comprising instructions operable when executed by a drone swarm controller device to perform the steps of the mission execution method. Particular embodiments may provide a drone swarm controller device comprising one or more processors and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to perform the steps of the mission execution method.

[0014] The embodiments disclosed above are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG. 1 is an illustrative schematic of resilient swarm types, in accordance with various embodiments.

[0016] FIG. 2 is an illustrative schematic of resilient swarm hardware and software architecture, in accordance with various embodiments.

[0017] FIG. 3 is an example drone application workflow, in accordance with various embodiments.

[0018] FIG. 4 is an illustrative schematic of the cycle of innovation over the last 40 years, in accordance with various embodiments.

[0019] FIG. 5 is an illustrative diagram describing how a drone can be controlled, in accordance with various embodiments.

[0020] FIG. 6 is a schematic diagram of a drone, in accordance with various embodiments.

[0021] FIG. 7 is an illustrative diagram of an example hive architecture, in accordance with various embodiments.

[0022] FIG. 8A is an illustrative diagram of an overall system 800 including one or more drone pods 810, in accordance with various embodiments.

[0023] FIG. 8B is an illustrative diagram of an overview of the system, in accordance with various embodiments.

[0024] FIG. 9 is a schematic diagram of drone software architecture, in accordance with various embodiments.

[0025] FIG. 10 is a schematic diagram of a hive software architecture, in accordance with various embodiments.

[0026] FIG. 11 is a schematic diagram of a cloud software architecture, in accordance with various embodiments.

[0027] FIG. 12 is a schematic diagram of an example workflow, in accordance with various embodiments.

[0028] FIG. 13 illustrates an example computer system.

[0029] FIG. 14 is a representative comparison between a traditional single-drone configuration and a swarm of drones.

[0030] FIG. 15 illustrates representative small, medium, and large swarms of drones.

[0031] FIG. 16 is a schematic abstraction of a swarm of drones as a network of sensors.

DETAILED DESCRIPTION

[0032] In all example embodiments described herein, appropriate options, features, and system components may be provided to enable collection, storage, transmission, information security measures (e.g., encryption, authentication/authorization mechanisms), anonymization, pseudonymization, isolation, and aggregation of information in compliance with applicable laws, regulations, and rules. In all example embodiments described herein, appropriate options, features, and system components may be provided to enable protection of privacy for a specific individual, including by way of example and not limitation, generating a report regarding what personal information is being or has been collected and how it is being or will be used, enabling deletion or erasure of any personal information collected, and/or enabling control over the purpose for which any personal information collected is used.

[0033] In particular embodiments, the unit of operation is not a single multi-function drone but a resilient swarm of ultralight small drones that operate as a unit to emulate the capabilities of much heavier drones, which may have less range and lower reliability.

[0034] A resilient swarm emulates complex drones more cheaply and more flexibly. Drones in a resilient swarm complete missions as a group, much as traditional computer systems improve scalability by internally having multiple units while appearing externally as a single logical entity. The drones in the resilient swarm maintain a fixed location relative to each other while in motion by automatically sensing the relative position of each drone as a continually updated 6DOF pose through localization techniques such as ORB3 or SuperPoint or other similar techniques. This relative orientation may change dynamically during each phase of the mission. The resilient swarm thus emulates a single system with a collection of components. From the higher-level software point of view, path planning for a resilient swarm may be done as a unit. If, as an example, the planner needs high-resolution images forming a complete 360-degree surround view, this may be carried out by four drones, each with a single 90-degree field-of-view camera: one drone pointed forward, another pointed backward, a third pointed to the left, and the fourth pointed to the right. This would normally require a large, heavy drone with four cameras; alternatively, a single drone would have to take four times as long to complete four passes to create a surround-360 image and still could not take a simultaneous picture, but the same mission can be accomplished with four small drones. Small drones are cheaper to build, safer because they weigh less, and more flexible because the resilient swarm can be broken up when it is no longer of use and then reconfigured.
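
As an illustration of treating coverage as a property of the swarm rather than of any one drone, the following minimal sketch (hypothetical names and structure, not taken from the application) assigns evenly spaced camera headings so that N drones with a given field of view tile a full 360-degree surround:

```python
# Minimal sketch (hypothetical, not from the application): assigning headings
# so that N drones with a given camera field of view tile a 360-degree surround.

def surround_headings(num_drones: int, fov_deg: float) -> list:
    """Return one camera heading per drone, evenly spaced around 360 degrees.

    Raises if the combined fields of view cannot cover the full circle.
    """
    if num_drones * fov_deg < 360:
        raise ValueError("not enough combined field of view for 360 coverage")
    step = 360 / num_drones
    return [i * step for i in range(num_drones)]

# Four drones with 90-degree cameras: headings 0 (forward), 90 (right),
# 180 (backward), and 270 (left) capture the surround simultaneously.
print(surround_headings(4, 90.0))  # [0.0, 90.0, 180.0, 270.0]
```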

[0035] A resilient swarm may also be used to improve reliability. When a single traditional drone fails, the entire mission fails, or a new drone must be found and dispatched to complete the mission. A resilient swarm simplifies the higher-level path planning system by automatically initiating failover and fault tolerance. In a non-resilient swarm, multiple drones may be deployed simultaneously to take different parts of a mission. In the example above where four drones emulate a 360-degree surround camera, the process would be significantly faster than having a single drone repeat the operation four times, yet it would also be vulnerable to mission failure if any single member drone of the swarm encountered a problem. A Level 1 resilient swarm configuration would have two drones taking the same imaging at the same time. This ensures redundancy: if a single drone fails, the other drone can continue the mission. A Level 2 resilient swarm configuration would have a single drone ready to take over if any one drone fails. A Level 3 resilient swarm configuration would have two drones ready to take over. From the point of view of the planning software and operator, a resilient swarm can automatically hide faults and problems from the system. The resilient swarm also allows hot spares and rebuilding of resilient swarms. For example, if a camera drone fails, another drone may launch from the hive or be detached from another resilient swarm, taking some time to move into place and replace the failed drone. The resilient swarm is then rebuilt when the replacement arrives and properly configures itself into the swarm.

[0036] A resilient swarm can have specific characteristics. This simplifies path planning for operators or autonomous planning systems and increases flexibility by emulating specialized functions. As an example, a camera resilient swarm may have multiple cameras, but there may be other specialized resilient swarms, such as a lighting resilient swarm that provides illumination, or a communications resilient swarm in which the drones act as a moving access point providing a communications relay.

[0037] A resilient swarm may be a hybrid resilient swarm, which is heterogeneous, so that a single resilient swarm may have different drones with different capabilities and may emulate a large, expensive, special-purpose drone. The resilient swarm configuration controller may, as an example, specify that it needs cameras covering the front 180 degrees of flight, 360-degree sound announcements and sensing, and illumination in the front 90 degrees of flight. The resulting resilient swarm of nine different drones may emulate a much more expensive, rarely used complex drone. For example, three camera drones in a Level 2 configuration cover 180 degrees, with one of the camera drones constantly available to take over if necessary. The swarm may also have a number of sound drones with appropriate speakers and directional microphones to provide 360-degree surround sound notification. Finally, the same resilient swarm may have a lighting drone that provides the front coverage.

[0038] FIG. 14 illustrates a representative comparison between a traditional single-drone configuration and a swarm of drones. Compared with a traditional system 14100, this novel system 14200 uses many more small, lightweight drones and can scale to thousands of drones per site. It is much safer because the drones are light, typically less than 250 grams, and cheaper because they are smaller and have a smaller battery or fuel tank (14220 vs. 14120). To compensate for the smaller size and reduced power available, the less capable, single-purpose drones work in resilient swarms 14220 to form the equivalent of a single large multi-purpose drone. They may communicate through novel broadband networking for LAN and WAN 14240 rather than the traditional narrowband point-to-point radios 14140 used with traditional drones. Drones may also offload their software processes or functions to one or more hives 14240 with heavier batteries; the hives require less power to operate because they are ground-based, floating, or fixed. And the processing of information is in real time (14260) rather than in batch mode (14160).

[0039] The parallel execution enabled by the additional scale of this invention means missions complete faster, with horizontal scaling by adding more drones, hives, and cloud resources without requiring changes in software and workflow. A small mission 15100 may require a small number of drones, for example, 4 drones 15120 that are controlled by a small hive with 4 slots in a “pizza box” 15140, with a small cloud 15160 of computation to assist it. Such a system, with a 9-minute range per drone, could scan the equivalent of a single drone with a 36-minute flight time, but the entire mission would take 9 minutes, saving 4x mission time and lowering cost for operators. Each drone would also be significantly lighter; in particular, drones under 250 grams may be subject to fewer regulatory requirements because they have less kinetic energy and are safer. Finally, a 4-drone system could tolerate up to 3 failures and still complete its mission, while a 1-drone system would be vulnerable to a single fault in the drone.

[0040] FIG. 15 illustrates representative small, medium, and large swarms of drones. A medium-sized mission 15200, for example 16 drones 15220, would require a larger hive to launch, manage, and land the drones; that hive might be the size of a movable cart or cabinet 15240 and would require a medium-sized installation in the cloud 15260. Without changing the software and application workflow, this scales significantly: 16 drones with a 9-minute flight could complete the equivalent of a more than 2-hour mission for a single large drone. The 16x savings in time saves money but also allows much more simultaneous scanning, so that, as an example, a multi-drone mission is much less vulnerable to weather changes, changes in sunlight, or other ambient conditions, and is much more fault-tolerant than a single-drone mission.

[0041] Finally, a large-sized mission 15300 with 64 drones 15320 would not require any change in the software or workflow. It might require a larger hive 15340, perhaps carried on a motorized truck, with more computation than a small or medium-sized system. This in turn would require more cloud resources 15360, provided simply by increasing the number of processors and the storage. This system of 64 drones, each with a 9-minute flight time, could complete the equivalent of a more than 9-hour mission by a single drone. By completing a day-long mission in 9 minutes, much more frequent scans are possible at much lower human operating cost.

[0042] FIG. 16 is a schematic abstraction of a swarm of drones as a network of sensors. The largest technical benefit of the overall system is that it is easy to compose and configure and can be integrated into any enterprise application: at a high level it abstracts a network of sensors 16100 that may be anywhere in the world, that can be given any path to move in or live in, and that may then produce a series of objects 16500, which may be 3D and may have automatically recognized attributes. When integrating into an enterprise or existing application, the system abstracts away the application workflow 16200 required to plan 16220, pilot 16240, and process 16260 the missions. The invention provides the software 16300 needed to run a complex mission of many drones, hives, and cloud resources through an orchestration and management system. The invention also provides a new hardware layer 16400 of drones that may be in resilient swarms, hives for computation and housing of drones, and cloud resources connected by a 3D-LAN and 3D-WAN.

Types of Resilient Swarm Configurations

[0043] Drones may be put into resilient swarm configurations based on the types of payload that they carry. Each drone has a capability list, since some can be multifunctional, and this allows different drones to perform different roles. The advantage is that it makes each drone lighter and smaller, which is key to safety. The architecture favors many small drones that are under the FAA and other regulatory limits over large drones, which pose greater safety hazards.

[0044] Each drone type is categorized in the system by its basic characteristics, including power, size, and the specific payload it may carry. As new types of payloads are created, different drone types are marked in a database with different characteristics in an extensible schema such as a YAML document or equivalent. Some of the drone types that may be defined are listed below; for illustration, a sketch of what such a schema might look like follows the list.
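
For illustration only, the sketch below shows one possible form of such an extensible YAML drone-type database and how it could be loaded; the field names are hypothetical, and the example assumes the third-party PyYAML package is installed:

```python
# Minimal sketch of an extensible drone-type schema as a YAML document
# (hypothetical fields; assumes the PyYAML package is installed).
import yaml

DRONE_TYPES_YAML = """
camera_quad:
  power_watts: 45
  size_grams: 240
  payloads: [camera]
  fov_deg: 90
lighting_hopper:
  power_watts: 60
  size_grams: 248
  payloads: [light]
  flight_mode: fly-and-hop
"""

def load_drone_types(text: str) -> dict:
    """Parse the drone-type database; unknown keys pass through, so the
    schema stays extensible as new payload types are created."""
    return yaml.safe_load(text)

types = load_drone_types(DRONE_TYPES_YAML)
print(types["camera_quad"]["payloads"])  # ['camera']
```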

[0045] Camera drones. These drones are for optical imaging and typically include fixed arrays of cameras, usually based on mobile phone technology. This makes them small and light, and the system “steers” by taking images across all of the cameras and then building a single view from that stitching. Typically, the arrays of lenses will be positioned for stereo vision on each side and the bottom so that the system can calculate distances by parallax and from frames taken in motion.

[0046] Microphone drones. These pick up the ambient sounds and can have microphone arrays to allow positioning and estimation of the distance of sounds. When used with other microphone drones, they can triangulate the source of all sounds in a given area.

[0047] Speaker drones. These drones send acoustic signals and can have a phased array of speakers to allow directed sending of sounds using phased-array techniques. When a cluster of speaker drones operates together, they can create a point source of sound so that each person in a crowd can hear a distinct message.

[0048] Communication drones. These drones function as mobile access points, so that if the system determines that coverage is needed, say behind a building, a resilient swarm of these drones can provide complete coverage. This type of drone could be a fixed-wing glider drone that circles above the target site. Or it could be a “fly-and-hop” drone that, to conserve power, will fly to a high location and land, providing temporary coverage.

[0049] Lighting drones. These drones are designed to provide illumination for shadow areas. Like communication drones, these will often be of the fly-and-hop variety.

[0050] Improved Lidar with laser transmitter drones and laser receiver drones. The transmitter drones provide transmission only, supplying laser illumination for specific receiver drones. They can be much higher power than the Lidar systems typically found in cell phones. The receiving end, however, can use low-power Lidar modules available in camera systems. This allows a small number of laser transmitters to cover a particular object with a “bath” of laser points, while the “laser receiver” drones, typically integrated with the camera drones, can more precisely calculate distances.

[0051] IR or Heat drones. These are specific lightweight drones with IR sensors for detecting fire, people, and other heat sources.

[0052] Control point drones. In fog or bad weather, these can function as lighthouses and fixed points for images. While all drones will have visible markings, these have higher-powered lighting that flashes in specific patterns. Every drone can recognize these patterns and position itself based on them. These will often be “fly and hop” drones that provide “lighthouses” and also function as control points. For instance, they could be deployed to the edges of rooftops so that photogrammetry and other 3D model conversions can be very precise.

[0053] “Drop and Forget” drones. These drones can be used as survey markers or to position static cameras, lighting, and other artifacts that need to be semi-permanent.

[0054] Rogue capture drones. In circumstances where drones or other flying vehicles are not allowed, the system can deploy net drones. This is a set of drones with netting, fishing lines, or other mechanisms designed to capture and ensnare drones. These can then fly to a safe location or deploy their safety system to land the drones and their catch safely.

[0055] High altitude or floating drones. These drones may be suspended by balloon or fly at high altitude and are used for nadir (overhead) shots to augment the local shots taken by quadcopters and similar drones.

[0056] Hives themselves. Hives are in essence just heavy drones and can be treated as a resilient swarm. A hive is a “drone” with an additional battery, and the ability to host drones and recharge them.

Composability: Different types of drones in a single resilient swarm and hierarchies of resilient swarms

[0057] In its simplest form, a resilient swarm configuration is homogeneous: all the drones in the resilient swarm configuration are the same. However, to compose many simple drones into a single “logical” drone with multi-mission characteristics, different drones may be composed into a single resilient swarm. As an example, if a camera, light, and loudspeaker are needed, then a heterogeneous resilient swarm configuration may include a camera drone, a drone with a high-power light, and a drone with a speaker on it.

[0058] And since a single drone is a degenerate swarm, the system lets users recursively compose swarms of swarms that may be homogeneous or heterogeneous, letting users tune the reliability and speed that they need. As an example, for a multipurpose system, four cameras may be needed with one spare, two communications drones may be needed for fault tolerance, and two drop-and-forget drones may be in the array to leave markers at specific locations at the survey site to increase positional accuracy and note important locations.

[0059] Also, a specific drone may have multiple characteristics; a single drone may, for example, have cameras, speakers, and lights, and may belong to resilient swarm configurations of any of those types.

[0060] The system can divide an array into one or more physical arrays that handle similar workloads. This is the mechanism that allows fault tolerance. An array may have, for example, spare drones or drones that can interleave and share work.

Resilient Swarm Configuration Types

[0061] FIG. 1 is an illustrative schematic 100 of resilient swarm types, in accordance with various embodiments. The hardware may be configured in many different resilient swarm types.

[0062] A resilient swarm can have a “spare,” a unit that is not dedicated to the immediate task but is in standby. When a drone fails or unexpectedly runs short on battery (because of unexpected conditions like heavy winds), then 1 to N drones assigned as spares can take over. A spare may be “hot,” that is, flying with the resilient swarm and able to take over immediately, or it may be “warm,” that is, available and marked as ready to lift off from a nearby hive. Readiness may be characterized by the time it takes for a drone to lift off. As an example, an “Alert-5” drone would be available within five minutes of the drone being required. A drone may also be a “cold” spare and require long periods to be prepared to assist. For instance, a cold drone may be in recharge mode and may need 30 minutes or an hour, or it may need to be delivered to the mission site from another location. The degree of coldness may also be expressed in time to fly. This categorization provides the planning software using the resilient swarm with information to modify plans based on spare availability.
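
A minimal sketch of how this readiness categorization might be represented; the names and the warm/cold thresholds are hypothetical (the text itself fixes only the five-minute “Alert-5” example):

```python
# Sketch (hypothetical names and thresholds): categorizing spares by
# time-to-ready so planning software can modify plans based on availability.
from dataclasses import dataclass

@dataclass
class Spare:
    drone_id: str
    seconds_to_ready: float  # 0 for a hot spare flying with the swarm

    @property
    def readiness(self) -> str:
        if self.seconds_to_ready == 0:
            return "hot"      # flying with the swarm, takes over immediately
        if self.seconds_to_ready <= 5 * 60:
            return "warm"     # e.g., "Alert-5": lifts off within five minutes
        return "cold"         # recharging or remote; may need 30-60+ minutes

spares = [Spare("d1", 0), Spare("d2", 240), Spare("d3", 3600)]
print([(s.drone_id, s.readiness) for s in spares])
# [('d1', 'hot'), ('d2', 'warm'), ('d3', 'cold')]
```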

[0063] Example swarm configuration types may include non-resilient swarm configuration 150 and Level 1 resilient swarm configuration 190. A non-resilient swarm configuration 150 only has drones that are unique, so if one fails, some mission function is lost. As an example, if two drones are assigned in a resilient swarm to take a forward and an aft position in a non-resilient swarm configuration, then if either fails, 180 degrees of the imaging is lost. A Level 1 resilient swarm configuration 190 completely duplicates each function; as an example, two drones may take the same forward image to ensure that if one drone fails, the image is still available in real time.

[0064] Resilient swarm redundancy can be more sophisticated, using the Level 2 resilient swarm configuration 160 with overlapping redundancy: the system can assign, say, 5 drones to the task of 4 with overlap 160. This means that if one drone fails, the other 4 can continue. The mission will not fail as long as fewer drones fail than the number of “parity” drones. So, for instance, a Level 3 resilient swarm configuration has 2 extra drones, and only if 3 drones fail will the mission have to be rerun. A drone configuration like this may, for example, require an array “rebuild.” That is, if a single camera drone fails in, as an example, a Level 2 resilient swarm configuration with 3 camera drones that needs a 120-degree forward field of view, and an individual drone is capable of a 60-degree field of view, then the configuration may be three drones 30 degrees apart, so the forward image is fully redundant. If the central drone fails, the Level 2 resilient swarm configuration continues, as the entire field of view is still captured. But if either the left or right drone fails, the system goes into a resilient swarm rebuild: the array stops, and the drones reconfigure themselves to have non-overlapping fields of view, so that one drone images the 60 degrees on the left and the other takes the 60 degrees on the right.

[0065] In general, from a Level 3 resilient swarm configuration, which has two levels of redundancy, up through higher levels with 3, 4, 5, etc. levels of redundancy, the resilient swarm configuration arranges itself for maximum possible overlap when the fields of view of the payloads are limited, which is true for most imaging systems. In general, to cover D degrees with M drones that each have E degrees of field of view, the minimum number of drones needed for non-resilient swarm coverage is D/E rounded up to the nearest integer. Any excess drones are the spare capacity S = M - roundup(D/E), and these are allocated to duplicate coverage in a tunable manner. The resilient swarm controller device can specify which drones should have 100% duplication, so that if, as an example, 360-degree coverage for imaging is desired but the critical zone is in the forward 120 degrees of a resilient swarm, then the resilient swarm spares will be allocated to duplicate imaging from those positions. If no specification is provided, the system assumes the forward path of the array needs the most image coverage and will bias duplication in that direction; that is, it will take the spare drones and spread them in the forward direction. Alternatively, the resilient swarm may be configured to provide equal coverage, so the S spare drones may be spread equally around the entire coverage area.

[0066] In the degenerate case, the swarm can be a single drone, but even in that case the RAID-like algorithms function normally as a non-resilient swarm configuration with one member 170.
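
A worked sketch of this spare-capacity arithmetic, with hypothetical function names; the forward-bias allocation shown is one plausible reading of the default behavior described above:

```python
# Sketch of the spare-capacity arithmetic: to cover D degrees with drones that
# each have E degrees of field of view, ceil(D/E) drones give non-redundant
# coverage, and the S = M - ceil(D/E) excess drones duplicate coverage.
import math

def spare_capacity(total_drones: int, coverage_deg: float, fov_deg: float) -> int:
    minimum = math.ceil(coverage_deg / fov_deg)
    if total_drones < minimum:
        raise ValueError("too few drones for even non-redundant coverage")
    return total_drones - minimum

def allocate_spares(spares: int, bias_deg: float = 0.0, spread_deg: float = 120.0):
    """Place spare drones' headings inside the critical zone centered on
    bias_deg (forward = 0 by default), spread evenly across spread_deg."""
    if spares == 1:
        return [bias_deg]
    step = spread_deg / (spares - 1)
    start = bias_deg - spread_deg / 2
    return [(start + i * step) % 360 for i in range(spares)]

# 360-degree coverage with 60-degree cameras needs 6 drones; with M = 8,
# S = 2 spares can be biased toward the forward 120-degree critical zone.
print(spare_capacity(8, 360, 60))  # 2
print(allocate_spares(2))          # [300.0, 60.0] -> duplication biased forward
```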

Resilient Swarm Controller Software Location

[0067] FIG. 2 is an illustrative schematic of resilient swarm hardware and software architecture, in accordance with various embodiments. To enable this functionality, in the architecture 200, each resilient swarm (for example, 250 and 260) has a resilient swarm controller device 210. The resilient swarm controller device 210 may be implemented in different ways. It may be implemented as a peer controller 212: a drone in the array runs controller software that gathers the location of each drone from pose estimation done by each drone, and this peer information is sent to the dedicated controller software running on a single drone.

[0068] For additional redundancy, a dedicated drone control array 214 may be used, in which multiple drones share and interchange this information over the network so that if any drone goes down, the controller can still function. This is done with a multi-master database scheme in which asynchronous updates are sent between the various databases to provide the position and status of each drone. This adds redundancy to the system at the cost of additional network communications overhead and processing overhead for these drones. In some embodiments, a virtual array 216 may also be used.
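
The application does not specify how the multi-master updates are reconciled; one simple assumption is a last-write-wins merge keyed on each record's timestamp, sketched below with hypothetical field names:

```python
# Sketch (an assumption; the application does not specify the merge rule):
# reconciling asynchronous status updates from a multi-master database
# with a last-write-wins policy keyed on each record's timestamp.

def merge_status(local: dict, remote: dict) -> dict:
    """Each dict maps drone_id -> {"t": timestamp, "pos": ..., "ok": ...};
    keep whichever record for a drone carries the newer timestamp."""
    merged = dict(local)
    for drone_id, record in remote.items():
        if drone_id not in merged or record["t"] > merged[drone_id]["t"]:
            merged[drone_id] = record
    return merged

a = {"d1": {"t": 10, "pos": (0, 0, 5), "ok": True}}
b = {"d1": {"t": 12, "pos": (1, 0, 5), "ok": True},
     "d2": {"t": 11, "pos": (4, 2, 5), "ok": False}}
print(merge_status(a, b))  # d1 takes the t=12 record; d2 is added
```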

[0069] Finally, the planning system 220 may itself control this function, so that drones are managed as an array but individual paths are sent for each drone. This virtual controller increases the workload on the planning system and will not function if the planning system cannot communicate with the resilient swarm, but it does reduce the compute overhead for the drones.

Resilient Swarm Controller Configuration Modules

[0070] When a resilient swarm is created, the Initialization software 220 may have configurations loaded as part of the resilient swarm mission planning 222. The array planner may take input from the overall mission plan for all the drones in the system, define the number and type of sensors or other systems that are required, and generate a list of the individual drones that may be required. The Array Configure 226 will then take available drones and create arrays based on the fault tolerance and parallelism requirements from the planning system. These systems may then go through detailed path planning 228 that provides additional data for resilient swarms and specifies what particular paths drones may take as exceptions to the default rules. The resilient swarm Configuration 220 may dynamically change a resilient swarm configuration while drones are in flight and send updates to the resilient swarm Pilot system 230 for reconfiguration in flight.
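
A minimal sketch of the selection step that Array Configure 226 might perform, under the assumption (not stated in the application) that a Level N configuration means N extra drones beyond the base requirement; all names are hypothetical:

```python
# Sketch (hypothetical) of the Array Configure step: pick available drones of
# the required payload type, honoring the redundancy level requested by the
# planner (assumed here: Level N => N extra drones beyond the base count).

def configure_array(available: list, payload: str,
                    base_count: int, redundancy_level: int) -> list:
    needed = base_count + redundancy_level
    matching = [d["id"] for d in available if payload in d["payloads"]]
    if len(matching) < needed:
        raise RuntimeError(f"need {needed} {payload} drones, "
                           f"only {len(matching)} available")
    return matching[:needed]

fleet = [{"id": f"d{i}", "payloads": ["camera"]} for i in range(6)]
# Four camera positions plus one spare (Level 1) -> five drones dispatched.
print(configure_array(fleet, "camera", base_count=4, redundancy_level=1))
```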

[0071] The result of the Configuration 220 is then fed to the run-time resilient swarm Piloting system 230 that manages the systems in flight. These configurations may be transmitted from the resilient swarm Configuration 220 over a network dynamically or be statically uploaded. They may then be loaded into the Initialize and Configure system 231, which may send to each drone its array location and the resilient swarm controller to which it is bound and from which it takes detailed planning and location instructions.

[0072] The Drone Self Correct and Merge process 232 may set drones in an array to take images that include the other drones in the photos, which can be reviewed in real time at the hive or processed in the cloud. Because the drones operate as a unit and their images form a unit, they can cooperate to provide higher accuracy; in addition to individual imaging from all the drones in the array, the Self Correct and Merge may take all the sensor data and fuse it into a single entity. As an example, the images from 4 drones may be fused into a single 3D model of the object under observation. This can be used to validate that the image is correct by looking for feature points and matching them together using algorithms such as:

• Structure from motion (SfM): a photogrammetric range imaging technique for estimating three-dimensional structures from two-dimensional image sequences that may be coupled with local motion signals; or

• Multi-View Stereo (MVS) approaches built on feature algorithms such as the scale-invariant feature transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), or SuperPoint.
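
As a concrete illustration of ORB-based feature matching, one of the techniques named above, the following sketch uses OpenCV; the synthetic image pair merely stands in for overlapping frames captured by two drones in the array:

```python
# Minimal sketch of cross-drone feature matching with ORB (OpenCV); the
# synthetic images are stand-ins for overlapping frames from two drones.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img1 = (rng.random((240, 320)) > 0.5).astype(np.uint8) * 255  # stand-in frame
img2 = np.roll(img1, shift=8, axis=1)  # same scene, shifted as if from a peer

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching with cross-checking keeps only
# mutually best matches: the feature-point pairs used to validate the merge.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matched feature points across the two drone frames")
```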

[0073] The Drone Self Correct and Merge 232 may examine the images to see if any camera shows an image with incorrect colors, that is, colors significantly different from the average of the resilient swarm or from a known photo swath being imaged; if so, that drone camera can be marked as degraded while the mission continues. Similarly, if the position location is inaccurate, it can be corrected by observing the error relative to the other drones in the array.
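
The text leaves the color test abstract; the sketch below makes one simple assumption, flagging a camera whose mean frame color deviates strongly from the swarm-wide median (names and threshold are hypothetical):

```python
# Sketch (an assumed realization of the color test): flag a camera whose mean
# frame color lies far from the swarm-wide median, which is robust to a
# single bad camera skewing the reference.
import numpy as np

def degraded_cameras(mean_rgb_per_drone: dict, threshold: float = 30.0) -> list:
    """mean_rgb_per_drone maps drone_id -> mean (R, G, B) of its current frame.
    A camera is marked degraded when its mean color is farther than
    `threshold` (Euclidean distance in RGB) from the swarm-wide median."""
    colors = np.array(list(mean_rgb_per_drone.values()), dtype=float)
    swarm_median = np.median(colors, axis=0)
    return [drone_id for drone_id, rgb in mean_rgb_per_drone.items()
            if np.linalg.norm(np.asarray(rgb, dtype=float) - swarm_median)
            > threshold]

frames = {"d1": (120, 118, 121), "d2": (122, 119, 120), "d3": (200, 90, 60)}
print(degraded_cameras(frames))  # ['d3']: marked degraded, mission continues
```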

[0074] When a drone does fail, the drones in the resilient swarm may together determine the failure point. This may be a consensus system in which the votes of the drones in the array manage fail-over and fault tolerance. This significantly simplifies the electronics and power requirements of each drone and puts the computation at a station that is not consuming flight power.

[0075] When a member of a resilient swarm has a fault, it may send information to the Sensing and Recovery process 236. Alternatively, if a specific drone fails without notification, this process may rely on a keep-alive system: if a drone does not report in after a specific period, or when a specific part of the mission plan should be complete, the drone may be marked out of service. Other systems, such as the planning system 220, may detect failures through other cameras or through loss of signal. Finally, resilient swarm members may detect a drone event such as a crash or a loss of position for a drone. When the Sensing system determines a drone is out of service, it may initiate a Recovery process. It will look up the resilient swarm type in the Configuration system 231 and then initiate the proper sequence. If the swarm is in a non-resilient swarm configuration without any recovery options, then the array may signal a general failure to the Planning System 220, and the planning system may send a new mission, a return-to-home command, or other signals such as self-destruct or land immediately, depending on the mission parameters.
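
A minimal sketch of the keep-alive rule, with a hypothetical data layout and timeout:

```python
# Sketch of the keep-alive rule (hypothetical structure): mark a drone out of
# service if it has not reported within its allowed interval, so the Recovery
# process can be initiated.
import time

def out_of_service(last_report: dict, timeout_s: float, now=None) -> list:
    """last_report maps drone_id -> epoch seconds of its most recent status
    message; any drone silent for longer than timeout_s is marked out."""
    now = time.time() if now is None else now
    return [drone_id for drone_id, t in last_report.items()
            if now - t > timeout_s]

reports = {"d1": 1000.0, "d2": 1058.0, "d3": 1059.5}
print(out_of_service(reports, timeout_s=30.0, now=1060.0))  # ['d1']
```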

[0076] If the resilient swarm has a recovery option, the options are listed in the configuration database kept by the Sense and Recovery 236. A Level 1 resilient swarm configuration with full redundancy will notify the planning system of the failure, continue the mission, and re-mark itself as a Level 1 resilient swarm configuration with a failure. The planning system may have cold spare drones and/or may reinitialize the array 231. For multiple-fault-tolerant systems such as Level 2 or Level 3 resilient swarm configurations or higher, the Recovery system has a set of procedures stored for each array type; for a heterogeneous or hierarchical array, these are read recursively from the recovery database. In this case, the Recovery system will reinitialize the system and initiate a rebuild, which reruns the last part of the mission plan set in 231. The Recovery system may rerun any part of the mission to recover and provide the correct data to the system.

[0077] The resilient swarm may have a global shutter module 234, which may coordinate all the sensors in the resilient swarm to produce time-locked sensing data. This allows precise modeling and simultaneous data collection. The Global Shutter may set the clocks of each drone precisely, or it may asynchronously broadcast an image-start signal, match the resulting imaging by time stamp, and provide a single time-synchronized feed of images and time-coherent 3D models and object detections.
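
A sketch of the timestamp-matching behavior of the Global Shutter module, under an assumed data layout: frames from all drones are bucketed to broadcast ticks, and only complete, time-locked sets are kept:

```python
# Sketch (assumed data layout) of Global Shutter timestamp matching: group
# frames from all drones into time-locked sets whose timestamps fall within
# a tolerance of the broadcast start times.

def time_locked_sets(frames: dict, tick_s: float = 0.5, tol_s: float = 0.05) -> dict:
    """frames maps drone_id -> list of (timestamp, frame_id). Frames are
    bucketed to the nearest tick; a bucket is kept only if every drone
    contributed a frame, yielding a single time-synchronized feed."""
    buckets = {}
    for drone_id, seq in frames.items():
        for ts, frame_id in seq:
            tick = round(ts / tick_s) * tick_s
            if abs(ts - tick) <= tol_s:
                buckets.setdefault(tick, {})[drone_id] = frame_id
    return {t: b for t, b in sorted(buckets.items()) if len(b) == len(frames)}

frames = {"d1": [(0.49, "a0"), (1.01, "a1")],
          "d2": [(0.51, "b0"), (1.02, "b1")]}
print(time_locked_sets(frames))  # two complete sets, at ticks 0.5 and 1.0
```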

Workflow

[0078] FIG. 3 is an example drone application workflow, in accordance with various embodiments. The Application Workflow 300 involves individual drones 3100, the resilient swarm controller 3200, and one or more central command systems (e.g., hive(s) and/or cloud) 3300. The workflow starts with a Mission Plan 3320 that specifies the requirements and the required structure for arrays of drones; this is then given to the Optimizer 3340 to determine the optimal number and types of drones.

[0079] This is then sent to the Controller 3200 and the Configure Array system 3220, which initiates Configure Members 3120 for each drone so that the drones position themselves correctly in the array. The drones may run Calibrate 3140 to ensure they are in the correct location and may notify the Controller if time, position, pose, or other data synchronization is needed, triggering Correction 3240 for the resilient swarm. As an example, if the controller has additional data from a GPS clock, it may correct the time on each member, or if it has additional sensors, it may correct the position of each member. The controller may also get a Global Correction 3350 from the central command system, the Hive, or the Cloud based on external data such as additional cameras or other drone swarms.

[0080] The drones will then lock their formations together and execute the mission plan 3180, ensuring that as they navigate, the resilient swarm remains in the proper relative configuration.

[0081] The drones' Sense Modules 3190 provide imaging information and may send their sensing data to the controller, which may Calibrate Sensor 3260 data so that the feed and images are correct. The data may then be forwarded to the hive and placed in a central Image Store 3360. If the system is to provide a single sensor-fused image, the controller may Merge Sensing 3280 and provide that merged image to the Image Store 3360.

[0082] After sensing, the drones run the Next Step Module 3192, which may be a movement of the drones to a new place. Each drone maintains its relative positioning as it navigates to the point. If the next step is a reforming of the array, the Configure Module 3120 rearranges the individual drones into a new configuration, and the drones then continue to recalibrate and execute.

[0083] If there is a fault in a drone, it may initiate a Fault Notification 3195 that may be sent to Fault Detection 3290. If the drone is unable to send a notification, the Fault Detection may use data from other drones or other hives, or may detect that a drone is nonoperational if it does not report back after a specific time or at a specific location.

[0084] Fault Detection may initiate a Reconfiguration 3290, which may send reconfiguration information for one or more drones to the Configure Members 3210 module; the drones will then Calibrate 3140 and Form-Execute 3180 with the new configuration. If the controller cannot repair the problem with the drones currently in use, it may send a message to the Hive or cloud Cold Spare 3380 module, which may initiate an updated mission plan with additional drones flying in to replace members of a resilient swarm. This may cause a new optimization of drone members in Optimizer 3340 and restart the drone system. A resilient swarm may wait in Configure Array 3220 until all members are in range and available in Configure Members 3120.

[0085] As mentioned above, problems with current drone hardware architectures result in drones that are complex in design, fabrication, and operation, and that are error-prone in navigation. As an example, as seen with reference to FIG. 5, system 500 depicts traditional control of a drone 510. Although a single instance of drone 510 is included within FIG. 5, multiple instances of drone 510 may be included within system 500. For simplicity, a single instance of drone 510 is depicted. Drone 510 may include a receiver 512 to communicate over a radio link 550 with an operator 531 via a manual controller 520. Drone 510 may also include a video transmitter 514 to facilitate a low-resolution video downlink 540 to a display device 530. When drone 510 lands, a storage device may be removed or may be connected to a laptop to download data, such as high-resolution sensor data, for further processing. For positioning, a global positioning system (GPS) may be used along with one or more cameras for imaging and obstacle detection.

[0086] Drones are relatively expensive, with typical enterprise drones costing on the order of $3K-$100K. They are typically deployed in small numbers due to the cost and the need to manually control each drone. Furthermore, because there are relatively few drones, they typically need to have a wide variety of sensors and be general purpose in nature.

[0087] Drones have to carry everything they need. Drones carry all the sensors needed for their operations, as well as the compute and storage devices to locate and navigate through their environment. Drones are limited with respect to how much hardware computation they can perform by the amount of compute and storage they can carry, and they must move with the limited power available. As an example, a typical smaller drone may weigh 900 g, may have a flight time of 30 minutes, and may need 100 W of power to stay in flight. The limited power budget means that a drone cannot carry a 1 kg GPU system that draws 250-500 W, for example, so drone-borne systems have significant computational limitations that ground-based systems do not.
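The trade-off can be checked with back-of-envelope arithmetic using the example figures above (all values are the paragraph's assumptions, not measurements):

```python
# A 100 W drone flying for 30 minutes consumes about 50 Wh, so adding a
# 250-500 W GPU would multiply the power draw by 3.5-6x and cut flight time
# accordingly.

hover_power_w = 100.0
flight_time_h = 0.5
battery_wh = hover_power_w * flight_time_h          # ~50 Wh usable energy

for gpu_w in (250.0, 500.0):
    total_w = hover_power_w + gpu_w
    # Ignores the extra lift needed for the GPU's ~1 kg mass, which would
    # raise hover power further and shorten flight time even more.
    print(f"GPU {gpu_w:.0f} W -> flight time ~{battery_wh / total_w * 60:.0f} min")
# -> roughly 9 min with a 250 W GPU and 5 min with a 500 W GPU
```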

[0088] As a result, given the power and weight limitations, a drone's computational hardware is very limited and cannot run the most advanced artificial intelligence (AI) and other processing. The small size also requires more skill from programmers writing software for the drone and results in difficulty testing and validating the software. Furthermore, when more capability is needed, the drones get heavier and more power hungry. If more AI is needed to recognize objects, a bigger processor is needed. If more detailed scans are needed, a heavier, bigger camera is used. A scan, which may be referred to herein interchangeably as a site scan or survey, refers to a drone that moves in a controlled manner through a 3D environment and, using a collection of red, green, blue (RGB), RF, LIDAR, ultrasonic, and acoustic sensors, maps out the area in 3D space, contextually placing detected objects in the 3D scene. And more weight means more power and therefore bigger motors, more batteries, or other fuel. As weight increases, drones have more kinetic energy and higher rotor velocities. If a longer mission time is required, the drones have to carry heavier batteries, which creates a negative cycle where heavier drones need even more battery and have less flight time. Finally, because these single drones have to run standalone, they need to carry significant compute power for object recognition and must be capable of complete operation, including return-to-home and other failure systems, to prevent large heavy drones from damaging property.

[0089] Drones may rely on GPS for positioning. In some embodiments, drone 510 may use GPS or optical flow to maintain position. In some embodiments, drone 510 may include an inertial measurement unit (IMU) and a compass to provide direction. The use of GPS makes drones difficult to use in GPS-denied areas, such as close to buildings or inside buildings. As a result, most drones have to fly far from imaging targets, which increases their weight and the computational requirements for the sensor systems.

[0090] Drone communications are generally local, subject to interference, point-to-point, and relatively slow. Drones today are commonly controlled by dedicated radio links, do not share or communicate with each other, and use traditional wireless technology. If too many drones are used in the same area, there may not be enough communication channels to send control signals or to receive telemetry or video from them. Drone communications today are also synchronous (real-time) and must be reliable. In order to operate a large number of drones in various conditions, a large number of frequencies must be available to prevent interference, and communications can be lost if a drone loses line-of-sight to the controllers. Drones may also use satellite or cellular communications, but this requires large and power-intensive transmitters to maintain signals.

[0091] Drones are generally not very fault tolerant. For instance, when deployed, if the drone has an issue, such as a failed motor, a fault with a camera, or a lower than expected charge, the drone will have to be recalled, a new drone found, and the mission re-run. If a backup drone is not available, the entire mission may be scrubbed, and rescheduling of operators and others may be required. For mission-critical procedures, this can be life-threatening if a mission is not run on time. Even if multiple drones are available, losing a drone can require manual re-planning of the mission so that the remaining drones take over the parts of the mission that would otherwise go unperformed. Further still, if on-site operators are involved, communications between them can be required, losing mission time as they work with each other to re-plan the mission manually.

[0092] Problems further exist with respect to drone software systems: the software deployed at the edge may be limited, non-uniform, procedural, and not AI-ready. The edge refers to computing devices placed between the system and the remote cloud, also often referred to as a local cloud solution. A local cloud refers to a dedicated cloud service that is managed by a third party and runs on-premises. A local cloud includes provisioning of physical infrastructure, software, and full lifecycle management by the third-party service provider. Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. Drone programming today is very difficult because of the hardware limitations listed above, and to date most of the effort has been focused on flight control: keeping the drones flying and managing their limited power.

[0093] The limited software available at the edge means that it can be time consuming to write applications for drones, and the most advanced algorithms (e.g., which today may include artificial intelligence (AI) methods) generally require large amounts of processing power and storage, and therefore cannot "fit" onto a drone. This results in less accurate and more error-prone methods having to be used. Therefore, instead of modern software, more traditional and error-prone procedural programming techniques are used to fit the hardware constraints, and these techniques fail to solve the complex multivariable problems typically encountered in computer vision.

[0094] The non-uniform programming and procedural focus refers to the fact that the programming environment on a drone is radically different from the programming of base stations and cloud computing systems. Thus, it can be difficult to move software between the hardware of the drone and other elements of the system, such as system 200. These environments do not offer a single uniform compute environment that allows moving software building blocks to the best place for speed and reliability.

[0095] The non-uniform user interface means that systems having dedicated planning software that runs on a PC, software that runs on a smartphone for piloting, and then separate cloud software run through a browser significantly complicate the user experience and make it difficult to maintain the software. Additionally, simulating and testing such a complex environment can be extremely challenging.

[0096] With respect to drones not being vision-first and 3D-mapping focused, drones may operate in environments where GPS does not work, and for processing reasons their internal representations are primarily 2D. For example, because of the limited compute resources inside drones, a full 3D representation is hard to manage inside a drone, and the large-scale requirements of full vision mean that simplistic systems are used for object detection, object recognition, and 3D world navigation.

[0097] Current drones also may have workflow application problems, such as not being integrated, being 2D, being GPS-dependent, and performing batch processing. Drones have followed the traditional flight model taken from manned aviation. With modern systems, an operator has to manage many complex tasks, from de-conflicting flights and monitoring battery status to stepping through a series of image processing steps and working with very different hardware in the drone, base station, and cloud. Modern systems treat each task as a different software stack that must be learned, and the operator may then hand-integrate them to make the system work.

[0098] Another issue faced by modern drones is that they generally plan with waypoints using 2D maps. Today, flight planning is done with satellite imagery and uses a series of waypoints with instructions to the drone in terms of its mission. Waypoints rely on GPS information and can be unreliable at close distances, where GPS is not dependable. Obtaining optimal reliability requires heavy, power-consuming, and expensive additions, such as GPS real-time kinematics (RTK), which can still fail in urban areas where multipath reflections are prominent. An RTK refers to a GPS receiver capable of performing "real-time kinematics," whereby normal signals received from current global navigation satellite systems (GNSS) are combined with a correction stream to achieve positional accuracy (e.g., less than 10 cm accuracy, 1 cm accuracy, 0.1 cm accuracy, or other accuracy measurement errors). Due to the inherent limitations of current GNSS technology, operators often perform manual missions and have to frequently take over manual control during automated missions, making these missions impractical and unsafe. In addition, mission planning does not take into account atmospheric effects and lighting conditions, all of which influence imaging quality, including, but not limited to, which is not to suggest that other lists are limiting, shadows, obstructions, fog, or glare.

[0099] Modern drones may also have piloting problems. As human operators oversee operations, there is a limit to the number of drones they can fly (typically a 1:1 operator-to-drone ratio is required). Many of the steps require manual intervention; for example, checks and plans made away from the site do not take into account the actual weather, shadows, and other conditions present, requiring manual intervention by drone operators. During imaging missions, low-resolution images are returned in real time, and the success of a mission may only be determined after the high-resolution data is brought back and downloaded or physically extracted from the drone.

[0100] Modern drones may also process data slowly and not in real-time. During missions, it may not be clear whether the sensors are capturing the correct data. The time to convert images into usable data, such as 3D data, and the size of that data may mean it takes time (e.g., on the order of days or weeks) to prepare the resulting images for output and proper conversion. Additionally, the data for a single simple structure can require over 200 GB or 30 million data points, which is difficult to store and manage in the field while a drone is operating. This means that missions must be re-flown if the capture is inadequate or if additional data is required. Having a real-time means to examine the 3D structure would dramatically increase productivity.

[0101] There may also be mission-level small-scale operation problems, such as with accuracy, speed, cost, and dependability. For instance, the accuracy of the drone may be hindered because sensor processing is done after the fact. Processing is batch oriented, performed after a mission has been completed and the results are known; therefore problems are missed or rescans are required. Additionally, because drones generally use GPS for location, they need to fly high above structures to ensure good satellite contact, which reduces image accuracy or requires much higher quality sensors.

[0102] Speed and cost can also be problematic given the small scale of an operator coming to a single site to scan, typically with a single drone. Because the operator manually controls individual drones, a typical scan of a building can take hours to fly and then days to process the resulting sensor data. Modern drones are not suitable for scanning major buildings in minutes, and many applications (such as continuous 24/7 scanning) are impractical.

[0103] Modern drones also have dependability issues, such as reliability, availability, maintainability, and safety. As drones are few in number, expensive, and have many sensors, there are many failure points that can cause a mission to be scrubbed. At the software level, scans are open-loop systems without real-time feedback: only after a survey is completed can it be determined whether the imaging was properly done, and then the entire mission has to be rerun, further reducing reliability. With a small number of complex drones, if a drone goes down, the entire mission may be scrubbed. This drives expenses because multiple redundant systems are required. For instance, increasing the number of motors from a quad-copter to a hexa-copter makes the drone more robust to mechanical failure. More complex drones also have more parts, which means more parts that can fail, particularly in harsh outdoor conditions. Furthermore, safety (and/or regulatory) issues can arise, as heavier drones inherently have more kinetic energy and typically higher terminal velocities. As an example, under current Federal Aviation Administration (FAA) regulations, Category 1 drones, which weigh less than 250 g, are barely regulated at all, while a Category 2 drone must impart less than 12 foot-pounds of energy when it strikes something. In well under one second of free fall, a typical 0.9 kg drone reaches the FAA's 12 foot-pound limit (a worked check of this figure follows below), so even small drones can be dangerous. Heavier drones are more capable, but since they can easily weigh 10-20 kg, a crash would be a very significant event. Drones are regulated because they move in the real world, so ensuring compliance for both vehicles and operators is very important. In some cases, the vehicles may be electric vehicles (EVs), which include terrestrial, aquatic, submersible, or airborne vehicles. Oftentimes simply making a drone very light or small will simplify regulations, as does limiting whether a drone flies beyond visual line of sight (BVLOS).

[0104] Problems may also exist with modern drones in terms of fleet-wide large-scale operation, such as with scalability, interoperability, and security. Manual control and operation do not scale. The United States has over 5.6 million commercial buildings, 10 million industrial buildings, 1 million public buildings, 2.3 million multifamily buildings, 91 million single-family homes, and 900 thousand bridges. These 100M+ structures need between 1-12 surveys every year. However, in the US today there are only 300 thousand operators, so automation must increase significantly to support the requirements for all these structures. To scale to these levels, large improvements enabling large-scale systems are needed.
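The free-fall figure above can be verified with the standard kinetic-energy formula KE = ½mv²; the sketch below assumes free fall from rest with no drag (1 foot-pound is approximately 1.356 J).

```python
# Worked check of the FAA energy figures, under stated assumptions
# (0.9 kg drone, free fall from rest, no aerodynamic drag).

G = 9.81           # gravitational acceleration, m/s^2
FT_LB_IN_J = 1.3558

def kinetic_energy_ft_lb(mass_kg, fall_time_s):
    v = G * fall_time_s                   # free-fall velocity after t seconds
    return 0.5 * mass_kg * v ** 2 / FT_LB_IN_J

print(kinetic_energy_ft_lb(0.9, 0.61))   # ~12 ft-lb: the Category 2 limit
print(kinetic_energy_ft_lb(0.9, 1.0))    # ~32 ft-lb after a full second
```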

[0105] In terms of scalability, the ability to control thousands or even millions of drones is required for continuous imaging of the world in near real-time. Modern systems require far too many operators to make this possible. This scale level is limited because of testability and manageability problems. Testability and whole-system simulation problems refer to the difficulty of continuously integrating and continuously delivering (CI/CD) large-scale drone systems to ensure the entire system works properly. Current systems lack a general simulation environment for all parts of the system from edge to cloud. In terms of manageability, a single drone is relatively easy to track, but knowing the status of millions of drones, as well as handling logging and other events, is difficult with current systems. In addition to the core scaling issues, modern drone systems are designed as standalone systems that typically have issues with interoperability and integration. For instance, many drone systems do not work with other hardware and existing drones, so an upgrade requires many hardware changes. Furthermore, because each drone runs separately without broadband communications with the rest of an enterprise, they work in batch mode. The drones do not directly integrate in real-time with enterprise databases or virtual communications systems. Further still, bad actors are a constant part of drone operations, and current systems have security holes and lack privacy. For instance, most drone systems use proprietary protocols, and thus it can be difficult to assess how secure they are and whether they have flaws that can be exploited. Losing control of drones or any part of the system would have significant impacts, as these drones could be repurposed for malicious acts. Furthermore, when data is captured, it belongs to a single entity and is difficult to share. There is no general marketplace or way to reuse survey information, or even to determine what surveys and information have already been collected. Moreover, most drones today work as "islands" and do not share information or cooperate in a dynamic way. Drones that could cooperate would work faster and more accurately, thereby overcoming the aforementioned technical problems. In terms of privacy, drones can be intrusive, and the ability to collect surveillance data from drones magnifies these problems even more. Modern drone systems do not provide for preventing unlawful sensing from taking place, or for locking information so that only those with the right to see data can access it.

[0106] As with PCs, smartphones, and IoT, a dramatic cost reduction in drones is happening, which can change the way drones are used. A system that works for a drone costing $5,000 is very different from a system that uses drones costing $5-$500. A system that relies on a small companion computer is very different from a system with 10x more compute per drone and 100-1000x more computation in closely connected hives. This system may employ a multi-tier vision-focused hardware architecture that runs distributed software systems, enabling an integrated real-time application workflow that dramatically improves the accuracy, speed, and cost of drone missions and scales to thousands and even millions of networked drones working together.

[0107] FIG. 6 describes an example of a drone 600, in accordance with various embodiments. In some embodiments, drone 600 may be the same or similar to drone 510 included within system 500. The local communications network architecture of drone 600 may be a 3D dynamic local area network, instead of a static point-to-point network. A plurality of drones, such as multiple instances of drone 600, may act as nodes within the dynamic 3D wireless network to constantly communicate and map the radio frequency (RF) environment, dynamically creating an RF spectrum map used to optimize flight trajectories to find the most efficient balance between flight time, network coverage, bandwidth, and communication latency.
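One hypothetical way such an RF spectrum map could feed trajectory optimization is sketched below; the grid cells, weights, and scoring function are invented for illustration and are not the method of the disclosure.

```python
# Hypothetical sketch: given a coarse RF spectrum map, prefer trajectories
# that pass through high-bandwidth cells while penalizing path length.

def path_score(path, rf_map, bandwidth_weight=0.7, distance_weight=0.3):
    """Higher is better: reward measured bandwidth, penalize path length."""
    bandwidth = sum(rf_map.get(cell, 0.0) for cell in path) / len(path)
    return bandwidth_weight * bandwidth - distance_weight * len(path)

rf_map = {(0, 0): 10.0, (0, 1): 80.0, (1, 1): 90.0, (2, 1): 85.0, (2, 0): 15.0}
candidates = [
    [(0, 0), (1, 0), (2, 0)],                   # short but low-bandwidth route
    [(0, 0), (0, 1), (1, 1), (2, 1), (2, 0)],   # longer, through a "hotspot"
]
best = max(candidates, key=lambda p: path_score(p, rf_map))
print(best)  # the hotspot route wins under these illustrative weights
```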

[0108] Computing capability is provided on a base station, also referred to herein interchangeably as a hive, to augment drones. A hive, as defined herein, refers to a physical structure (stationary or mobile) providing processing, communication, and storage capabilities for data gathered by a variety of sensors, including, but not limited to, drones, smart phones, wearables, and ground-based sensors. Doing so moves heavy and power-hungry machine learning and 3D world modeling away from drones. In some embodiments, multiple hives may work together to create a large-scale local computing cluster with high-speed communications to many drones. Furthermore, instead of putting GPS receivers on drones, some embodiments may place GPS receivers on the ground and may use RTK GPS or other positioning in combination with visual positioning so that each drone can precisely position itself. In some embodiments, local compute may connect to the highest tier with a 3D wide area network. Hives may constantly map the radio frequencies and bandwidth available to them and can reposition to better locations to provide high bandwidth (e.g., 1 gigabit/second) to the cloud systems. In some embodiments, the cloud computing environment may be more closely integrated with the remote hardware at the mission site. This allows direct processing of real-time data and enables real-time collaboration while drones are in flight, thereby significantly shortening the time to analyze problems in the field.

[0109] In some embodiments, the hardware architecture implemented by the drone systems described herein may dramatically increase AI computation by distributing it properly across drones, mobile hives, and the cloud. The hardware may be "vision-first" because cameras are extremely low cost and, when mated with advanced AI, provide high accuracy even in GPS-denied areas. This can allow for closer imaging in unique areas with low-cost hardware. In some embodiments, the drone's design focuses on being ultra-lightweight. For instance, drone 600 can be single purpose rather than multi-purpose (although in some cases, a given drone may have two (or more) purposes). Flight time becomes a trade-off with size, and more drone computation capability (e.g., ten times that of conventional systems) can make for shorter flights by having many more drones. For example, a single drone with a 30 minute mission time may be replaced with 10 smaller drones with a 6 minute flight time and 10x more compute power in motion. Such a system would have (10 drones x 6 minutes) 60 minutes of effective flight time and 100 times (10 drones x 10x faster) the computation in motion. Within a single drone, the real-time mission-critical flight controls may be isolated into a separate processor that may have a security gateway processor connected with a general-purpose computer. This isolates flight control and allows a rich environment for path planning, obstacle detection, and other programs that run above the low-level flight control.

[0110] The hardware architecture of the drone system described herein may use a large network (e.g., 10 or more, 1,000 or more, 100,000 or more, 1,000,000 or more, etc.) of small lightweight drones with separation of concerns that improves reliability and simplicity of programming. At the lowest level, instead of individual complex drones, the system may use inexpensive, single-purpose drones that are simpler and low-cost.

[0111] To make these light drones as reliable and capable as larger systems, the drones may work together in a resilient swarm so that if a drone fails, another drone can automatically take its place. A resilient swarm may integrate multiple drones into one or more logical units for the purposes of operational redundancy, performance improvement, or both. These resilient swarms may be further cost-reduced by being specialized in function. Some embodiments may include some drones carrying one component and other drones carrying another, different component. For example, some drones may carry cameras while other drones may carry lights or communication relays. This allows multi-function missions without the weight and cost of a large multi-purpose drone, for example one that would carry cameras, lights, and communication relays. Missions generally may be performed by individual drones or by the novel concept of a resilient swarm. This logical layer makes it much easier to design planning systems. Instead of planning the routes of large quantities (e.g., 1 or more, 1,000 or more, 100,000 or more, 1,000,000 or more, etc.) of drones in a single optimizer, the system may provide redundancy at the virtual hardware level with resilient swarms and then plan missions treating a set of drones in a resilient swarm as a logical whole, assuming fault tolerance is high in the array.
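A minimal sketch of the "logical whole" idea follows, assuming a hypothetical role-based swarm API in which spares are promoted on failure; none of these names come from the disclosure.

```python
# Sketch: a resilient swarm as one logical unit whose roles stay filled by
# promoting spares when a member fails.

class ResilientSwarm:
    def __init__(self, members, spares):
        self.members = dict(members)      # role -> drone id
        self.spares = list(spares)        # idle drones ready to take a role

    def report_failure(self, failed_id):
        for role, drone_id in self.members.items():
            if drone_id == failed_id and self.spares:
                self.members[role] = self.spares.pop(0)  # automatic takeover
                return self.members[role]
        return None  # no spare left: escalate to the planner for cold spares

swarm = ResilientSwarm({"camera": "d1", "light": "d2"}, spares=["d3"])
print(swarm.report_failure("d1"))  # d3 is promoted into the camera role
```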

[0112] The use of resilient swarms enables a hierarchy of virtualization from resilient swarms to drone Kubernetes to drone no-code. In this hierarchy, resilient swarms may be assigned units of work such as scanning a specific object, providing uniform lighting of an object, or providing a communications network. Therefore, the system may be split into several parts that are much easier to program. On top of the resilient swarm may be drone pods. Drone pods, which may also be referred to herein interchangeably as "pods" or "hives," are units of work managed by an orchestration system. The orchestration system may be configured to assign each software process to a container virtual environment. This allows a drone process to run on one or more drone pods where it would otherwise run on a single drone, in a resilient swarm, or in a hive or a cloud. Some embodiments may include performing process migration in a drone fleet by separating the software process from the actual hardware. Finally, at the application workflow level, a no-code intent system may be used in addition to a procedural programming system. With this system, a desired set of actions, such as imaging an object, can be specified, and the actual software process can be automatically generated through an optimizer, which may be AI-based or a classic linear optimizer.
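The no-code intent idea can be sketched as a declarative desired state plus a reconciling control loop, in the spirit of Kubernetes; the intent schema and the dispatch_pod() stub below are assumptions for illustration.

```python
# Hedged sketch of intent-driven orchestration: declare the desired state and
# let a reconciler close the gap between desired and observed.

desired = {"scan_pods": 3}                 # intent: "image this object"
observed = {"scan_pods": 1}                # what is currently flying

def dispatch_pod(index):
    print(f"dispatching scan pod {index}")  # stand-in for real orchestration

def reconcile(desired, observed):
    """One pass of the control loop: create whatever is missing."""
    missing = desired["scan_pods"] - observed["scan_pods"]
    for i in range(missing):
        dispatch_pod(observed["scan_pods"] + i + 1)
    observed["scan_pods"] = max(desired["scan_pods"], observed["scan_pods"])

reconcile(desired, observed)  # dispatches pods 2 and 3
```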

[0113] The drones, such as a resilient swarm, may be launched, landed, and recharged from one or more drone hives. More than just a storage system, these hives may have a large compute cluster with AI accelerators used to provide even more compute capability to the drones. A typical graphical processing unit (GPU) accelerator may have more processing power (e.g., 10 times, 100 times, 1,000 times, etc.) than that carried on a single drone. From the hives, the drones may be sent instructions, object recognition may be performed, and new paths may be created. Hives may also be movable and may themselves be treated as resilient swarms and pods.

[0114] As an example, hives may launch a series of drones for recovery by other hives that are downwind, so that drones may float in the elements using less power. Hives may also act as partners to drones so the drones are not required to carry heavy and power-hungry components, such as RTK GPS systems. The hives may relay critical information, such as location, to the drones, making them more capable without adding to the drones' weight or power consumption needs. Hives may also act as beacons with passive and active markers, or rulers in the visual or RF ranges, to provide positioning data for the drones. Computation can be moved from drones to hives or vice versa, so that if a computationally expensive process has to be run, the task may be sent to a hive, thereby lowering the drone's power requirements.

[0115] Drones and hives may themselves act as positioning and communication relay points. For example, when a drone is imaging, it may intentionally include other drones, hives, or other markers in the image to provide positioning aids and feature points with known locations and characteristics.

[0116] To enable so many drones and hives to work together, the system may use a radio-frequency-aware and agile local network, such as a 3D-LAN, that may be connected by a high-speed network. A 3D-LAN refers to a local area network (LAN) set up and administered in 3D space, where each node in the LAN corresponds to an entity capable of traversing 3D space through three degrees of freedom, six degrees of freedom, or some combination thereof. Drones may communicate with a 3D-LAN configured to anticipate data needs and dynamically change drone paths to fly into high-bandwidth "hotspots" created in the sky. A LAN refers to a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus, or office building. By contrast, a wide area network (WAN) not only covers a larger geographic distance, but also generally involves leased telecommunication circuits. The 3D-LAN may use standard internet protocols (IP), such as mesh networking, multi-cast, video streaming, or quality of service (QoS), so that modern facilities are all available to the system. The system is designed to tolerate communications failures or outages: if communications are lost, the drones can continue to function until communications are restored, which improves drone availability.

[0117] Cloud computing may back the base stations via high-speed wide-area networking (3D-WAN) that extends the 3D-LAN concept to wide area communications. A 3D-WAN refers to a wide area network (WAN) set up and administered in 3D space, where each node in the network is an entity capable of traversing 3D space through three degrees of freedom, six degrees of freedom, or some combination thereof. The 3D-WAN may analyze the RF environment and position hives and drones in "bucket brigades" to ensure high bandwidth. As a result, drones have orders of magnitude more processing power and storage compared with standalone drones.

[0118] The software architecture may be fully distributed, uniform, and dynamic. The software architecture may isolate the detailed flight controls into a small contained module, and may use commonly available high-performance general processors, such as those deployed in smartphones, to run the same compute containers and networking protocols at the drone edge, on local computers in the hives, and in the cloud.

[0119] The system may allow dynamic migration of functions such that if a connection is lost, the drones can continue to fly. Furthermore, when there is high bandwidth, functions can be offloaded into the hive or into the cloud. Instead of separate environments, some embodiments include the software providing the entire interface within a browser and separating components into micro-services. Doing so can simplify the user experience and can make programming and testing easier. Furthermore, because the system is uniform, there may be a single uniform simulation environment that emulates the drones, the drone network, the hives, and/or the cloud together as a single system, significantly improving software productivity. Furthermore, the software may be vision-first and 3D-model centric.
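A hedged sketch of one possible placement rule for such dynamic migration follows; the bandwidth thresholds and tier names are assumptions, not values from the disclosure.

```python
# Illustrative placement rule for migrating containerized functions between
# drone, hive, and cloud based on measured link quality.

def place_task(lan_mbps, wan_mbps, heavy_compute):
    """Pick where a containerized task should run right now."""
    if heavy_compute and wan_mbps >= 100:
        return "cloud"                    # offload all the way when WAN is fast
    if lan_mbps >= 50:
        return "hive"                     # good local link: use the hive cluster
    return "drone"                        # degraded links: keep it on-board

print(place_task(lan_mbps=200, wan_mbps=400, heavy_compute=True))   # cloud
print(place_task(lan_mbps=80,  wan_mbps=5,   heavy_compute=True))   # hive
print(place_task(lan_mbps=2,   wan_mbps=1,   heavy_compute=False))  # drone
```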

[0120] The workflow may be built as an integrated whole that has a single vision-built representation of the 3D world used in planning, piloting, and processing. Planning may be done with accurate 3D models of the world and without waypoints. Instead, the planning may be done using intentions, piloting may be done with pods of drones that accomplish missions rather than with individual waypoint planning, and hardware failures may automatically be compensated for with dynamic planning. Processing can be done in real time, enabling real-time collaboration. The system can build 3D models with sensor data and can allow virtual cloud collaboration and dynamic changes in drone operation depending on what is found, all in real-time.

[0121] The aforementioned technical benefits may be obtained at small scale, where the systems can be more accurate, faster, cheaper, and more dependable. Furthermore, the technical solutions described herein may allow for missions that are faster than those performed by a single manually-flown drone. For example, in 9 minutes, a 64-drone network can accomplish the same amount of scanning as a single drone scanning for 9 hours. The system may allow real-time creation of 3D models at planet-scale and real-time virtual collaboration while drones are in flight, thereby facilitating real-time consultation and improving decision-making cycle times. Each drone may also be simpler and have fewer components than traditional drones, can be highly available with automatic failover when drones fail or run short of power, can be more maintainable with simpler drones capable of returning home in the event of failures, and can improve safety with lower-mass drones and less powerful motors.

[0122] And at the large scale, the drones can be deployed in the millions (or at different scales). A single integrated system may be formed with direct oversight of large fleets of drones from the cloud. Furthermore, automatic scaling may be enabled so that if a mission is taking too long, new networks of drones can be automatically assigned and moved as needed. Still further, with an integrated data storage system, the system can accumulate information for AI machine learning and training to improve, amongst other things, path planning, piloting, and processing of data.

[0123] Some technologies described herein may include, but are not limited to, which is not to suggest that other listings are limiting, robotic system technologies such as uncrewed air vehicles (UAVs), simultaneous localization and mapping (SLAM), robotic operating systems, motor controls, battery management, or other technologies. UAVs, referred to herein interchangeably as drones, refer to aircraft without any human pilot, crew, or passengers on board. UAVs are a component of an unmanned aircraft system, which adds a ground-based controller and a system of communications with the UAV. SLAM technologies refer to the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.

[0124] Machine learning and artificial intelligence techniques may include, but are not limited to, neural networks, visual transformers, backpropagation, convolutional neural networks (CNNs), and deep learning, which may be implemented using software packages such as TensorFlow or PyTorch.

[0125] Some machine learning and AI techniques may include computer vision technologies, such as photogrammetry, 3D model reconstruction, and neural radiance fields. A neural radiance field (NeRF) is a fully-connected neural network that can generate novel views of complex 3D scenes based on a partial set of 2D images. It is trained to use a rendering loss to reproduce input views of a scene.

[0126] Some cloud computing technologies may include micro-services, Docker and other container technologies, Kubernetes and other orchestration systems, cloud services, data caching and synchronization, and other technologies.

[0127] Some mobile technologies include Android architecture and AI acceleration. Networking technologies may include mesh networking, multi-user MIMO (MU-MIMO) and IP networking, 5G networking, and Wi-Fi. MU-MIMO refers to a set of multiple-input and multiple-output technologies for multipath wireless communication, where multiple users or terminals, each radioing over one or more antennas, are capable of communicating with one another.

[0128] Hardware concepts may include redundant array technology such as that used in RAID for storage, fault-tolerant system technologies, and multi-system interconnects.

[0129] Programming environments and technologies may include intentional programming, JavaScript, no-code technologies, or other technologies.

[0130] Web development technologies may include browser-based computation, WebAssembly, and web technologies.

[0131] Enterprise applications may include geographic information system (GIS) databases, 3D database and building information systems, and enterprise integration technologies, such as XML-based connectors. GIS refers to a type of database having geographic data combined with software tools for managing, analyzing, and visualizing that data.

[0132] Blockchain technologies may include non-fungible tokens (NFTs), distributed ledgers, and off-chain data storage. An NFT refers to a financial security consisting of digital data stored in a blockchain, a form of distributed ledger. The ownership of an NFT is recorded in the blockchain and can be transferred by the owner, allowing NFTs to be sold and/or traded.

[0133] Described herein are hardware and techniques for arrays of drones, drone pods, local area networks, hives, wide area networks, and cloud integration.

[0134] Described herein are drones and techniques for operating drones that can have extremely powerful replaceable companion computers, which can compensate for relatively short flight times by using a large number of drones working together. Such drones are inherently vision-first with 360 x 360 vision systems and AI accelerators. The drones can use advances from the smartphone industry and other industries to form "flying phones" rather than a drone with a companion processor. Instead of multifunction drones, each drone may be lightweight with a simple mission, such as imaging, providing lights for better imaging, or relaying communications. The drones may be designed to be lightweight and may have kinetic reduction systems such as parachutes or passive encasement to slow them down. These drones may also be designed with large slow-moving propellers that enable quieter and safer operation.

[0135] Some embodiments include the basic unit of work being not a single complex drone but a resilient swarm of simpler lightweight drones, each having a single purpose (or minimal number of purposes) to further lower weight and power requirements. These arrays may be configured to allow any number of drones to fail while the resilient swarm remains capable of continuing its mission. Having a reconfigurable resilient swarm can considerably reduce the burden on planning, piloting, and processing systems. Thinking of drones as arrays allows an expensive custom-built drone with every possible sensor and capability built in at the factory (where most of the components are not used on missions and a large power source is required to carry the extra weight) to be replaced with a resilient swarm that can be tailored and assembled dynamically for every mission based on the mission's requirements. Further still, each component may be an independent drone, which can be lighter, simpler, more reliable, and less expensive than current drones. With a resilient swarm, deadweight in the system can be minimized. For example, if a narrow view is needed, a small number of camera drones can be used. As another example, if the drones are needed for a short mission, drones with small batteries rather than large batteries can be configured.

[0136] A resilient swarm may operate faster than a single-camera drone: multiple cameras on multiple drones may be used to take images simultaneously, allowing for faster interpolation and higher accuracy than conventional techniques. The resilient swarm may allow hot-spare drones to automatically move to become part of the array in the event of a failure. An array of redundant drones, such as a resilient swarm, can also reduce regulatory hurdles, as lighter drones have less kinetic energy and improved reliability (for example, if a drone fails, other drones can take over).

[0137] There can be multiple types of resilient swarms, including multiple drones or resilient swarms doing different tasks, which further simplifies the drones' scheduling and lowers the cost and power needs of the drones. For instance, there can be a resilient swarm of imaging drones, an array of lighting drones to eliminate shadows, an array of communication relay drones if the LAN or WAN coverage is poor, or other array types.

[0138] Within an individual resilient swarm there may be many different types of drones. This can increase management flexibility, so that if there are different requirements for sensing and direction, different types of drones, such as a lighting drone, three camera drones, and/or communications drones, can work in the same system to make the logical "drone" created by the resilient swarm even more flexible.

[0139] In some embodiments, resilient swarms can be managed locally, so each drone positions itself relative to the rest of the drones and they collectively agree on their state. Arrays may also be managed by one or more controller drones, which may include multiple drones configured to direct the movement and position of subsidiary drones. Furthermore, a resilient swarm may be virtual, such that each drone works independently but the piloting software emulates the resilient swarm. These implementations can be flexible depending on how much compute power is available in the physical drones. For example, a resilient swarm may have a great deal of computational power, so that each drone can maintain the state of all other drones and they manage themselves as peers. As another example, if a few drones have a great deal of compute, then those drones can manage the entire array, with secondary drones taking path information. Further still, if the drones do not have enough compute power, the array planning can be done in a remote controller, such as one or more hives, offloading the individual drone computations to the hive.

[0140] Some embodiments include treating drones not as individual sensors, but as a single camera system managed with a single global 3D shutter. When an object needs to be imaged, multiple drones may fly either in a resilient swarm or as individual drones that are synchronized with precise timing to take an image (or images) from multiple vantage points at the same time (e.g., within 1 second of one another, within 0.1 seconds of one another, within 0.01 seconds of one another, and the like). This single global shutter for an image can allow a precise 3D model to be created without the artifacts that occur when photos are stitched together over time.

[0141] Some embodiments include local communications systems that are dynamic 3D local area networks instead of passive point-to-point networks. The drones and hives can constantly re-compute the RF environment, mapping hot spots and measuring communications needs, and all nodes can automatically move to locations as needed to maximize bandwidth. This reliable system enables smaller drones to replace larger, more complex ones.

[0142] Some embodiments include the computing being in local compute clusters in hives instead of in individual drones. This moves heavy and power-intensive machine learning and 3D world modeling away from the drones. The overall system is capable of operating with compute on the ground and sensing in the drones. Multiple hives can work together creating a large-scale local compute cluster with high speed communications to many drones. Furthermore, instead of putting GPS receivers on drones, they can be placed on the ground, and can use RTK GPS or other positioning in combination with the visual position so that each drone can precisely position itself.

[0143] The local compute can be connected to the highest tier with a 3D WAN. The hives may constantly map the radio frequencies and bandwidth available to them and can reposition hives to better locations to ensure high bandwidth to the cloud systems.

[0144] Furthermore, the cloud computing environment can be closely integrated with the remote hardware at the mission site. This may allow direct processing of real-time data and can enable real-time collaboration while the drones are in flight, which can significantly shorten the time to analyze problems.

[0145] Some embodiments include software that is novel in structure at the edge, implements a uniform programming environment, and is vision and AI first. The on-drone systems may be containerized and may use standard IP protocols. Software running on the drone can automatically migrate from the hive to a drone or from one drone to another. Some embodiments include a general compute node rather than specialized flight software.

[0146] Uniform programming models exist across the drone, hive, and cloud, so that a single development effort can apply to any of the hardware environments, which enables much simpler simulation and testing. Drones may be programmed with intentions rather than waypoints and procedures. Drones or resilient swarms (which improve redundancy and simplify programming) may be managed as pods of functions. Orchestrated together, the drones may operate as one to perform a mission rather than following a series of waypoints, thereby simplifying the programming into generating the intentions, or desired state, of the system and then having an orchestrator separately work to maintain that state.

[0147] The fundamental unit of work may include a set of points to be imaged rather than a specific plan. This separates the actual path planning details from the points that must be imaged and allows automatic failover and replacement as needed.
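A minimal sketch of that unit of work follows, under the assumption of a simple round-robin assignment; it shows how failover needs only a reassignment of unimaged points rather than new waypoints (the data shapes are hypothetical):

```python
# Sketch of "a set of points to be imaged" as the unit of work: points are
# claimed by drones and silently reassigned on failure.

def assign_points(points, drones):
    """Round-robin the outstanding image points across healthy drones."""
    return {p: drones[i % len(drones)] for i, p in enumerate(points)}

points = ["p1", "p2", "p3", "p4", "p5", "p6"]
assignment = assign_points(points, ["d1", "d2", "d3"])

# d2 fails mid-mission: re-plan only its unimaged points, no waypoint edits.
remaining = [p for p, d in assignment.items() if d == "d2"]
assignment.update(assign_points(remaining, ["d1", "d3"]))
print(assignment)
```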

[0148] The software facilities may be vision first and 3D mapping focused. The native primitives in the system include building a 3D view of the world in the hive that is shared with all connected drones and then augmented by additional recognition and enhancement that can be done in the non-local cloud.

[0149] Some embodiments include a user interface that is browser-based. A browser-based user interface can separate the presentation of the data from the infrastructure in the hive or the cloud that is processing the data.

[0150] Some embodiments include the application workflow being significantly enhanced, as it merges together the traditional manned-aviation view of drone operations and automates it. Planning may be done with a 3D map rather than just 2D satellite imagery and may use realistic 3D visual effects from computer graphics engines so that planning takes into account shadows, glare, and other real-world effects. Plans may be created assuming multiple drones acting together in a pod to accomplish missions (so waypoints are not used for individual drones, for example), but a "no code" approach can also be used for a drone pod that has a set collection of locations to image.

[0151] Piloting does not have to be an extension of manual piloting. A 3D map can be built using visual cues from all the drones in the system, need not be dependent on GPS, and can be shared amongst all drones. The drones can find each other in images and find known objects, such as hives, to position themselves. For example, the drones know the exact size and physical features of each other, which aids in positioning. Failures and problems with drones can be handled automatically, as is re-planning of drone flights.

[0152] Some embodiments include processing of images in real-time using high speed transmission afforded by the resilient swarm and caching in the 3D-LAN. Reconstruction of 3D images may use NeRF and/or other AI techniques to generate 3D models and perform object recognition. The results can be immediately shared with the operator. In some embodiments, the cloud system can assist in this and also in the real-time transmission of models to remote users. The cloud system may also allow both remote conferencing and changes in flight plan.

[0153] Some of the technical benefits of the embodiments described herein relate to the hardware, software, application workflow, and the resulting small-scale per-mission results. Additional technical benefits are described with respect to improvements at the overall large-scale fleet size.

[0154] The hardware enabled by motors, particularly motors used for drones, is still in its infancy. There are several large areas where this will have massive impacts.

[0155] The first area where a massive impact will be felt from developments in motors is flying drones, also referred to herein interchangeably as Unmanned Aerial Vehicles (UAVs). There are many different kinds of drones. As defined herein, a drone refers to an autonomous agent capable of navigating through physical environments (e.g., through air, on a solid surface, on a liquid surface, or through water) without human intervention. The large categories include Vertical Take-off and Landing (VTOL) devices that have one or more motors. VTOL devices with one or two motors are generally referred to as helicopters. If the VTOL device includes four motors, it can be referred to as a quad-copter. A quad-copter can also be referred to as a quad-rotor, which refers to a specific implementation of a drone including four brushless DC motors and propellers. In addition, VTOL devices having six motors are referred to as hexa-copters, and those including eight motors are referred to as octo-copters. Some VTOL devices may include even more motors, with the one, four, six, or eight motors of the helicopter, quad-copter, hexa-copter, or octo-copter, respectively, driving one or more rotors.

[0156] Some additional types of UAVs include fixed-wing UAVs. Fixed-wing UAVs can travel greater distances than VTOL devices; however, they are generally unable to hover in a given location for long. Fixed-wing UAVs generally have greater range and efficiency than VTOL devices. UAVs may also include lighter-than-air craft that use balloons or blimp structures filled with helium, hydrogen, or other gasses. In addition, there are hybrids of VTOL devices (e.g., rotating-wing craft that can take off vertically and fly using a wing or rotors) and fixed-wing UAVs. Such hybrid devices may be powered by electricity using batteries, internal combustion, hydrogen, nuclear, or other power sources, or combinations thereof.

[0157] Another category of drone includes Wheeled Drones or Autonomous Vehicles (AVs). The most common form is self-driving vehicles (e.g., cars, trucks, etc.). Unlike flying drones, AVs work in a 2D space, such as on roads. AVs may include delivery drones that run on sidewalks, as well as tractors and wheeled and tracked vehicles that can move off-road.

[0158] Another category of drone includes Walking Drones or Robots. Walking drones are typically biped or quadruped, with legs that allow increased maneuverability.

[0159] Still another category of drone includes Swimming Drones, Sailing Drones, or Submersible Drones, which may be referred to interchangeably as "underwater autonomous vehicles." Underwater autonomous vehicles are typically deployed underwater or float on the water. Underwater autonomous vehicles may use hydrofoils to lift their bodies out of the water and to submerge. Underwater autonomous vehicles may use propellers, water jets, rockets, or other forms of propulsion.

[0160] Yet still another category of drone includes hybrid drones. Hybrid drones may have a combination of characteristics. For example, a hybrid drone may include an amphibious drone having wheels or tracks as well as being able to float, such as by using the motion of the tracks, specific propellers, or jets.

[0161] Today, flying drones are commonly controlled by humans via remote control through a wireless link. There is typically a 1:1 ratio of operator to drone, as mandated by current FAA regulations. Small UAVs (sUAVs) are regulated by various Federal Aviation Administration (FAA) regulations, with Part 107 (and its successors) being the most commonly used for commercial operations and regulating the piloting of drones. As specified in Part 107, sUAVs are classified as those UAVs having a weight of less than 55 lbs (25 kg).

[0162] A drone's typical operation is focused on some common applications, as detailed below.

[0163] One application of a drone may include surveying. Surveying may relate to capturing images of buildings, homes, factories, facilities, agricultural fields, public spaces, or other geographical locations. VTOL devices are commonly used for surveying because they are typically smaller and less expensive than manned aircraft and can obtain images having more features. Surveys are generally designed for precise physical measurements to provide a long-term record of structures, such as buildings or fields. Surveying is generally performed infrequently and added to archives, for example to record an exact "as-built" structure versus what is in design documents.

[0164] Another application of a drone may include inspecting or performing inspections. Structures like bridges, industrial plants, commercial buildings, towers, wind turbines, solar plants, or roofs need regular inspection to ensure they are working properly and not subject to failure. Each structure has specific needs for inspections. For example, inspection of bridges may include determining whether a given bridge has any cracks or failures in its structure, or detecting the amount of rust present. With industrial plants, inspections include determining whether unusual noises are present and whether abnormal smoke or steam is visible. And with commercial buildings, inspections may include determining whether the building has any cracks, leaks, standing water, or other potentially abnormal characteristics. Finally, cell towers and radio towers may require other specialized inspection. There are many specific requirements for different industrial structures, which call for a general system for identifying specific assets; flight planning also differs dramatically across these different structures.

[0165] Another application of a drone may include reconstructing three-dimensional (3D) models, Digital Twins, or 4D Reconstructions. 4D, as described herein, refers to a three-dimensional model with a time-varying component showing visual and structural changes as a function of time. With enough images, a 3D model of a site can be built through the techniques of photogrammetry. Long term, a "time-stamped digital twin" of at first single sites and, in the limit, the entire planet can be constructed to determine what is happening at any location at any given time. This process is generally referred to as performing a 4D reconstruction.

[0166] Still another application of a drone may include security and monitoring. Flying drones, in particular, are well suited to perform roving security where having a large number of fixed cameras is too expensive or impractical. These roving patrols allow security for large pipelines, large industrial assets, or other locations.

[0167] Still another application of a drone may include photography. The photography may include capturing images for sale or lease, or for film and other media. For example, drones may be used to capture images of property to be sold or exhibited. As another example, drones may be used for capturing images and/or video for artistic purposes.

[0168] Yet another application of a drone is delivery of items. Short- and long-range delivery may be performed via fixed-wing drones.

[0169] Yet still another application of a drone is drone taxiing. Companies may use drones to taxi humans or other living creatures from location to location.

[0170] Some of the technical benefits of the hardware described herein may be noticeable at the drone level. For instance, drones may be produced that are cheaper, safer, and disposable compared with complex drones. For example, a 249 gram drone is not considered dangerous by the FAA and therefore has substantially fewer requirements. Furthermore, adding automatic airbags and other technologies can facilitate drones capable of floating rather than falling in flight.

[0171] At the hardware level of multiple drones in an array, the embodiments described herein can improve the drones' reliability and speed of operation. For example, two 249 gram drones with two cameras will image an area twice as fast and are not subject to FAA regulations. Additionally, if one drone fails, the mission is not scrubbed but can continue, with one or more of the remaining drones taking over the missed points. Having a high-performance 3D LAN provides orders of magnitude more bandwidth to the drones, which allows them to offload more processing to the hives, thereby further reducing weight. Additionally, knowing when high bandwidth will be available, through the 3D RF imaging of the LAN, enables rapid download and transmission of information at high fidelity. At the hardware level of drones with hives, the system described herein has a substantially better overall 3D map of the area being surveyed and can, in real time, determine what areas have not yet been surveyed or need additional surveys. The compute power in the hive resides in something with significantly more weight and power capacity, so larger processing and storage systems, as well as GPS location technology, can be on the ground, resulting in better accuracy, faster processing, and faster sharing of information across all the drones. Furthermore, at the hardware level, having a broadband network that is monitored and maintained in real time can bring a much closer connection to the cloud, further providing more computation capabilities for better mapping and object recognition from data-center scale resources.

[0172] At the software level, some embodiments described herein may use lightweight drones with high-performance computers to homogenize the computing environment. This may allow more code to be shared between the drone, the hive, and the cloud, further speeding development. At the software level, some embodiments may use orchestration to separate the intent of the operation, such as scanning, from the actual implementation, which makes programming and managing many drones easier by dividing the problem between the intent to scan and the physical resources through orchestration. Therefore, the application programmer does not need to worry about how something is done, just what must be done. Also at the software level, some embodiments include separating the computation into micro-services that run in virtual containers. Running the micro-services in virtual containers may allow the system to automatically migrate processes in containers between the drones, hive, and cloud, which may allow the system to dynamically adjust where the computation happens depending on the local network between the drone and hives and the wide area network between the hives and the cloud. For example, if the LAN is high quality, more of the computation can happen in the hive, reducing the load on the drones and giving them longer flight times. At the software level, some embodiments may use browser technology, such as WebAssembly and 3D GL, which may allow the system to put the user interface into a browser. Putting the user interface into a browser can significantly reduce the learning curve for users and can also simplify the development process. The user interface can call the micro-services in the system, bringing drones fully into the modern internet world. Still further, in some embodiments, the software architecture may be vision-first and 3D mapping focused, which can simplify the application workflow significantly. For example, instead of dealing with low level primitives, the system processes camera and other sensor data into a rich 3D world, thereby making it easier for the application workflow developer.
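
By way of illustration only, the following is a minimal sketch, not part of the specification, of how such an orchestrator might decide whether a containerized micro-service runs on the drone, the hive, or the cloud based on measured link quality. The function name and bandwidth thresholds are assumptions chosen for illustration.

    # Hypothetical placement policy: pick where a micro-service container
    # runs based on measured link bandwidth (Mbps). Thresholds are
    # illustrative assumptions, not values from the specification.
    def place_container(lan_mbps: float, wan_mbps: float) -> str:
        """Return 'drone', 'hive', or 'cloud' for a vision workload."""
        if lan_mbps < 10:    # weak drone-hive link: keep the work onboard
            return "drone"
        if wan_mbps < 50:    # good LAN but weak backhaul: offload to hive
            return "hive"
        return "cloud"       # strong links end to end: use the cloud

    # A high-quality LAN shifts computation off the drone, which can
    # extend its flight time as described above.
    print(place_container(lan_mbps=120.0, wan_mbps=20.0))  # -> hive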

[0173] At the workflow level, some embodiments include simpler mission planning because the guesswork of trying to do 3D planning with 2D satellite maps is eliminated. Further still, because the system can use realistic 3D gaming engine modeling of light, wind, or other physical effects, the planner can take these important effects into account when determining where and how to image an object. Rather than specifying what to do, the system can be configured to describe the intention or the desired state. This "no-code" approach enables users, who are better at describing the state desired (e.g., "I want a survey to 1 cm precision"), to avoid manually specifying many waypoints. However, this technique requires significant engineering effort to make work even on a single drone. At the workflow level, some embodiments further include actually piloting the mission based on 3D models and drone pods to significantly reduce operator workload. For example, it may be easier to monitor the status of many drones by seeing where they are in a 3D map of the mission site. Also, operators can examine the real-time data feed of the drones in flight rather than waiting until after the mission is over. In the processing step of the workflow, some embodiments include real-time capability such that problems in scanning can be detected while the mission is in progress. Furthermore, because the system uses advanced 3D compression techniques, the latency associated with transmitting and viewing these models in real time can be reduced.

[0174] Taken together, these hardware, software, and application workflow improvements considerably improve the accuracy, speed, and cost of a drone mission. Additionally, these improvements can significantly increase the dependability of the system, with improvements in reliability, availability, maintainability, and safety. For instance, separating the flight, safety, and applications hardware and systems can significantly improve reliability. These can also be assigned dedicated communications so that a problem in the application layer, such as mission planning, has minimal impact on flight and safety systems. Also, by using smaller, less complex drones, there are fewer items subject to failure. In particular, redundant arrays of drones of different types and base stations can increase the overall reliability of the system because each of them has a simpler single function. Multi-role systems of N components can have N² more ways to interact and fail. The availability may be improved because of the ability to have "failover" drones and overlapping coverage, as compared to systems including a small number of complex drones. In the latter case, if a drone goes down, then the entire mission may have to be scrubbed. However, with the use of failover drones and overlapping coverage, an operator can tune the requirements to tolerate any number of drones failing. The maintainability may be improved because more complex drones, which have more parts that can fail (particularly in the harsh outdoor conditions where drones are used), are not needed or can be replaced with smaller, simpler drones with fewer parts capable of failing. The safety of the system can also be improved because the sizes and kinetic energies of the drones are less than those of more complex and larger drones. For example, although the system can be used with existing "heavy" small unmanned aerial vehicles (sUAVs), ultralight drones, such as those in the sub 250 gram category, have inherently less kinetic energy. Therefore, the terminal velocity due to wind resistance or auto rotation may be comparatively low. With these techniques a small drone can have essentially zero kinetic energy and may "float" when/if it fails.
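
To make the failover benefit concrete, the following is a hedged sketch, assuming independent per-drone failure probabilities, of how mission availability improves when spare drones are dispatched. The function name and the 5% per-drone failure rate are illustrative assumptions.

    from math import comb

    def mission_success_prob(needed: int, spares: int, p_fail: float) -> float:
        """P(mission succeeds): at most `spares` of the needed+spares
        dispatched drones fail, assuming independent failures."""
        n = needed + spares
        return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
                   for k in range(spares + 1))

    # Example: 3 drones required, 2 failover spares, 5% chance each fails.
    print(round(mission_success_prob(3, 2, 0.05), 4))  # 0.9988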

[0175] Furthermore, from a large-scale, fleet-wide view, for achieving Internet scale, some embodiments described herein employ an overall system that can significantly increase manageability, interoperability, programmability, and scalability, as well as improving privacy and security. With respect to manageability, some embodiments use a hierarchy of base stations and drones, simplifying management. For example, instead of thousands of individual parameters, grouping the drones into arrays and networks allows a group to be managed and controlled at that level. For instance, a task to image can be assigned to a resilient swarm, and the system issuing the directive does not have to manage the drone failures and re-pathing that are required when obstacles are encountered. With respect to interoperability, the system may allow any drone to be integrated into operations. For example, the technical solutions can be software for an existing fleet and its drone management software. With respect to programmability, some embodiments include simulating the system using conventional systems. For instance, because the key data components may include a 3D world model and a layered model, any level can be simulated. As an example, a drone mission can be run against a 3D world model, instructions can be generated to simulate hives traversing that world, and then drones can be deployed in the simulation. Individual modules can be easily tested just by simulating the systems above and below. In addition, by using a declarative language, complex path planning can be handled at the lowest level, thereby considerably simplifying a programmer's workload. Some cases include the system being trained using real-world missions. For example, machine learning can be applied by taking similar missions and seeing what actions actually work. With respect to scalability, some embodiments include a system that is designed from the ground up to work with thousands of drones in a single mission and to handle such a mission with the same infrastructure that works across millions of drones through the layering-of-concerns approach. That is, each level can take a declarative description of the state the higher level desires, as illustrated in the sketch below. For example, the mission planner just says "Image this house at 1 cm resolution with 10 drones," which means that the actual path planner is decoupled and can be replaced and managed separately. In other words, the complexity of the system can scale linearly. With respect to security and privacy, some embodiments include privacy being built into the system (for example, from how it images to how it reads system data), inherently making the system more secure. For example, having the ability to revoke 3D object access enables the system to respect privacy laws, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). For instance, if an individual is recognized, that individual may control the access rights via an NFT that is their identity across the entire system. They can also revoke access as needed.
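
The sketch below illustrates, under assumptions of the editor's own choosing (all class and field names are hypothetical), how a declarative directive such as "Image this house at 1 cm resolution with 10 drones" can be decoupled from whichever path planner expands it, so that planners can be replaced and managed separately.

    from dataclasses import dataclass

    @dataclass
    class Directive:
        """Declarative description of the desired state, not the steps."""
        target: str
        resolution_cm: float
        drones: int

    class PathPlanner:
        """Any planner implementing plan() can be swapped in separately."""
        def plan(self, d: Directive) -> list[str]:
            raise NotImplementedError

    class GridPlanner(PathPlanner):
        def plan(self, d: Directive) -> list[str]:
            # Toy expansion of the declared intent into per-drone tasks.
            return [f"drone-{i}: scan {d.target} at {d.resolution_cm} cm"
                    for i in range(d.drones)]

    # "Image this house at 1 cm resolution with 10 drones"
    tasks = GridPlanner().plan(Directive("house", 1.0, 10))
    print(tasks[0])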

[0176] The system described herein may be simpler for end-users to operate and use correctly. For instance, with respect to simplicity, having a layered system can enable different specialists to work on different parts of the system. As an example, a mission planner does not need to worry about the maintenance functions or the possibility of a drone not being ready. Instead, when a drone is out of service, the mission planner may specify the resources required and the resource scheduler allocates the actual hardware. Also, because the main interface for data output can be a 3D world model, some embodiments include a single uniform way to read the data and write into it. Further still, connecting different surveys through NFTs and a distributed ledger may enable users to use an easy-to-understand market rather than complex business negotiations to purchase data.

[0177] Rather than specifying what to do, some embodiments enable the intention or the desired state to be described. This process can be referred to as "no-code," as it allows users to describe the state wanted (e.g., "I want a survey to 1 cm precision") rather than specifying many waypoints. In some embodiments, humans can declare what they want, and computers can run an optimization to move from the current state to the desired state. This minimizes the amount of error-prone procedural logic in the system, reducing bugs and other problems.
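
As a toy illustration of this desired-state reconciliation, the sketch below, not taken from the specification, declares a target survey precision and iterates toward it; the assumption that each extra scanning pass halves the residual error is purely illustrative.

    # Hypothetical sketch of desired-state reconciliation: the user
    # declares a goal and the optimizer moves the current state toward
    # it. The halving-per-pass error model is an illustrative assumption.
    def reconcile(desired_cm: float, measured_cm: float) -> tuple:
        passes = 0
        while measured_cm > desired_cm:
            measured_cm *= 0.5   # one refinement pass (toy model)
            passes += 1
        return passes, measured_cm

    passes, achieved = reconcile(desired_cm=1.0, measured_cm=4.0)
    print(passes, achieved)  # 2 extra passes reach the 1.0 cm goal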

[0178] Additionally, with every mission run there is an opportunity to improve the system by automatically learning from previous missions. Everything from RF models to path planning can have a continuous feedback loop between the actual execution in the physical world and the simulated world.

Architecture

[0179] The overall system architecture, which may include multiple drones, multiple base stations (hives), and the cloud, is described in detail below.

[0180] FIG. 6 is a schematic diagram of a drone 600, in accordance with various embodiments. A single drone, such as drone 600, may include a flight computer 6000, a safety computer 6100, and a companion computer 6200. Flight computer 6000 may comprise a battery 6410, an IMU 610, rotors 620 (one or more, two or more, four or more, etc.), radios 650, a frame and landing system 660, a charge system 6420, a GPS 630, motors 640, electronic speed control (ESC) circuitry 645, markers 670, cameras 695, a flight processor 690, one or more lights 697, and a LAN 6310. Safety computer 6100 may comprise an active safety system 6520, a destruct system 6530, a safety processor 6105, one or more USBs 6295, and a LAN 6320. Companion computer 6200 may comprise a companion processor 6202, an AI accelerator 6205, cameras 6210, a GPS 6220, IMU(s) 6230, storage 6240, an ultra-wide band (UWB) radio 6230, lights 6250, audio 6260, a user interface 6270, a WAN 6280, and a LAN 6290. Although a single instance of each of the aforementioned components is depicted within FIG. 6, persons of ordinary skill in the art will recognize that multiple instances of some or all of the components may be present within drone 600, and a single instance of each component is depicted to avoid obfuscating features.

[0181] Companion computers, as described herein, refer to one or more computing units capable of additional perception, communication, and sensing, which can be connected to the main flight-critical autopilot through a wired or wireless connection and may be networked.

[0182] In some embodiments, flight computer 6000 may be configured to perform low-level control of the actual operation of drone 600. IMU 610 may be configured to provide compass, accelerometer, or other movement data. GPS 630 may be configured to report location. Radios 650 may refer to a telemetry radio, which may be configured to provide movement instructions or instructions for flight. In some embodiments, motors 640 may be actuated to power rotors 620, using ESC circuitry 645 to keep speed consistent. Drone 600 may further include frame and landing system 660, as well as various visual markers 670, such as QR codes or calibrated length measures. In some embodiments, drone 600 may include one or more sensors, such as cameras 695 or other sensors such as Lidar, and may have navigation lights 697. Lights 697 may be configured to flash patterns to signal. Drone 600 may further be connected to other systems with a flight LAN 6310 or serial bus.

[0183] Safety computer 6100 may be configured to isolate the mission-critical flight hardware from companion computer 6200. Safety computer 6100 may be configured to monitor data requests and flows, and can restrict companion computer 6200 in the event it is determined that companion computer 6200 is not functioning properly. Safety computer 6100 may include active safety devices 6520, such as parachutes or inflating balloons designed to lower the drone's speed in case of failure, which may encase rotating or sharp parts. Further still, safety computer 6100 may include a destruct system 6530 configured to turn drone 600 into components that have less kinetic energy or fewer sharp objects in the event of failures.

[0184] Companion computer 6200 may include AI accelerators or GPUs 6205 to improve neural network performance, a sensor payload 6210 of one or more cameras, lights, or other sensors configured to provide imaging of the surrounding environment, an IMU 6230 and GPS 6220 configured to provide base location information, compass, accelerometer, or other data relating to a physical location of drone 600, and storage 6240 for storing programs and data. Companion computer 6200 may also include a user interface mechanism 6270, such as a touchpad and screen, and may be configured to communicate with other modules on drone 600 through a LAN 6320, USB 6295, or other communications systems. In some embodiments, drone 600 may include an ultra-wide band radio 6230 and/or BT 6280 for proximity sensing and local communications. Drone 600 may also include a wLAN 6290 implementing technology that is cellular, Wi-Fi, or other protocol based, or combinations thereof. These antennas may both send information and passively record signal strengths as the drone travels, enabling mapping of the 3D environment.
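
As a minimal sketch of how such passively recorded signal strengths might accumulate into a 3D RF map, the following groups samples into a coarse voxel grid keyed by position. The 5 m voxel size and all names are illustrative assumptions, not values from the specification.

    from collections import defaultdict

    # Hypothetical sketch: accumulate signal-strength samples into a
    # coarse voxel grid as the drone travels, yielding a simple 3D RF map.
    VOXEL_M = 5.0
    rf_map = defaultdict(list)

    def record_sample(x, y, z, rssi_dbm):
        voxel = (int(x // VOXEL_M), int(y // VOXEL_M), int(z // VOXEL_M))
        rf_map[voxel].append(rssi_dbm)

    def voxel_strength(voxel):
        samples = rf_map[voxel]
        return sum(samples) / len(samples)

    record_sample(12.0, 3.0, 20.0, -61.0)
    record_sample(13.5, 4.0, 21.0, -63.0)
    print(voxel_strength((2, 0, 4)))  # -62.0 dBm average in that voxel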

[0185] In some embodiments, drone 600 may also include charging circuitry 6420 for batteries 6410, and attachments for active safety 6520, such as airbags, balloons, and/or parachutes, as well as self-destruct hardware 6530. Drone 600 may include a lighting system 6250 for identification and visibility and an audio system 6260, which may include one or more speakers, one or more microphones, or other audio input/output components. Some embodiments include USB 6295, or other serial connections, to attach additional peripheral devices.

[0186] FIG. 7 is an illustrative diagram of an example hive architecture 700, in accordance with various embodiments. Hive architecture 700 may include (drone) hive 7100 as well as one or more landing or takeoff systems. In some embodiments, the landing/takeoff systems may be used with passive visual lighting, active lighting, or RF beacons to allow drones to localize landing site 7160, and may include a recharge system 7150. Recharge system 7150 may be configured to automatically connect to a drone (e.g., drone 600) when the drone lands, and may be further configured to charge the drone upon connecting to it.

[0187] Hive architecture 700 may include one or more markers 7180, which may be static optical markers or lit in a pattern. In some embodiments, hive architecture 700 may also include multiple lights 7190 allowing blink patterns to aid in localization. In addition, hive 700 may include multiple RF beacons 7230, such as Bluetooth LE or UWB, that provide precise localization on the various parts of hive 700. In some embodiments, hive 700 may have markers, such as QR codes, to calibrate measurement rulers as part of any of these systems.

[0188] In some embodiments, hive 700 may include one or more LAN access points 7250 that may cooperate to produce MU-MIMO, thereby allowing multiple streams to multiple drones simultaneously. In some embodiments, hive 700 may include one or more wide area network links 7270 to the cloud, such as a satellite or cellular modem. Hive 700 may include one or more GPS receivers 7290 that may be high precision using differential GPS or RTK. These antenna systems may be attached or separate, or placed on separate positioning mounts 7210 that may themselves be autonomous and movable (to obtain the best signal). In some embodiments, hive 700 may include AI Computer 7110, which may include a disk array for storage 7120. Hive 700 may also include a power supply 7130 for the hive itself and to recharge drones, such as via recharge port 7150. In some embodiments, hive 700 may include network switch and routing hardware 7140 to connect these components. Network switch and routing hardware 7140 may be intelligent and capable of handling multiple connections upstream to the cloud or other drones and to individual drones. Hive 700 may also include recharging for the drones and for the onboard power systems. In some embodiments, hive 700 may be autonomous and may include an input to a drive interface 7200 of a vehicle to enable movement of the entire hive.

[0189] FIG. 8A is an illustrative diagram of an overall system 800 including one or more drone pods 810, in accordance with various embodiments. A drone pod 810 may include a resilient swarm of one or more drones, such as drones 820, 830, 840, 850, etc., which may operate as a unit and perform missions together for planning purposes. Each drone pod 810 may have different functions, so that drones 820, 830, 840, and 850 may communicate through a Drone LAN 8300 created by one or more Hive LAN Access Points 8100. System 800 may include one or more hives 8200 that may be connected to one or more drone wide area networks 8300. Drone pod 810 may include cameras, lighting devices to illuminate shadows, LIDAR, radar, audio, or other sensing devices. These pods may connect to one another via WAN 8300 and to cloud services 8400. Cloud services 8400 may be hosted remotely and may include services such as compute services 8410, storage services 8420, networking and routing services 8420, or other services.

[0190] FIG. 8B is an illustrative diagram of an overview of the system 8500, in accordance with various embodiments. The system uses ultralightweight and inexpensive microdrones that can be dynamically configured and reconfigured. They may come together as a unit to form a resilient swarm that provides fault tolerance and temporary multi-mission capabilities and is flown like a single drone. See 85100.

[0191] The drone-aware local communications network architecture, the drone LAN, is a 3D dynamic local area network, or 3D-LAN, instead of a static point-to-point network. This allows the drones, acting as nodes within the dynamic 3D wireless network, to constantly communicate and map the RF environment, dynamically creating an RF spectrum map that is used to optimize the flight trajectories in order to find the most efficient balance between flight time, network coverage, bandwidth, and communication latency. See 85200.
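
One way to picture this balance is as a weighted trajectory cost, sketched below under the editor's own assumptions; the weights, units, and candidate values are illustrative only and not drawn from the specification.

    # Hypothetical sketch: score candidate trajectories by a weighted sum
    # of flight time, coverage, bandwidth, and latency taken from the RF
    # spectrum map. All weights and values are illustrative assumptions.
    def trajectory_cost(time_s, coverage, bandwidth_mbps, latency_ms):
        return (1.0 * time_s            # shorter flights preferred
                - 50.0 * coverage       # reward fraction of area covered
                - 0.5 * bandwidth_mbps  # reward available bandwidth
                + 2.0 * latency_ms)     # penalize communication latency

    candidates = {
        "direct":   trajectory_cost(300, 0.70, 40, 25),
        "rf_aware": trajectory_cost(340, 0.72, 90, 12),
    }
    print(min(candidates, key=candidates.get))  # -> rf_aware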

[0192] Computing is performed on the base station (hive) to augment the drones. This moves heavy and power-hungry machine learning and 3D world modeling away from the drones. Multiple hives can work together, creating a large-scale local compute cluster with high speed communications to many drones. Instead of putting GPS receivers on drones, they are placed on the ground and can use RTK GPS or other positioning in combination with the visual position so that each drone can precisely position itself. See 85300.

[0193] This local compute is in turn connected to the highest tier with a 3D wide area network, or 3D-WAN, that is a drone-aware dWAN. The hives constantly map the radio frequencies and bandwidth available to them and can reposition to better locations to ensure high bandwidth to the cloud systems. See 85400.

[0194] Finally, the cloud computing environment is much more closely integrated with the remote hardware at the mission site. This allows direct processing of real-time data and enables real-time collaboration while the drones are in flight, significantly shortening the time to analyze problems in the field. See 85500.

[0195] FIG. 9 is a schematic diagram of drone software architecture 900, in accordance with various embodiments. Hardware architecture 800 may run these software blocks.

[0196] Drone software architecture 900 may include one or more instances of a flight computer 9200, which may be configured to control low level systems. Flight computer 9200 may be configured to operate one or more modules. For instance, flight computer 9200 may include a camera processing system 9240. Flight computer 9200 may include a telemetry system 9220 configured to record the IMU, motor state, and/or other flight data, such as apparent wind. Flight computer 9200 may include a flight control module 9230, which may be configured to convert flight commands stored in command module 9210 into detailed operating commands for flight control actuators 9250 and motor control software modules 9260.

[0197] In some embodiments, a drone may include one or more instances of a safety computer 9400, which can be configured to monitor commands. Safety computer 9400 may include a safety control system 9410 configured to monitor the state of the drone and initiate emergency actions, such as returning to home, self-destructing, depowering the system, or firing a parachute or other safety systems. Safety computer 9400 may include a security agent 9420 that can be configured to validate commands going to command system 9210 to ensure commands are valid and from valid software in the system. Communications with flight computer 9200 may pass through one or more proxy modules 9430 that can be configured to monitor, and potentially block, network transfers between computers.

[0198] In some embodiments, the drone may include one or more instances of a companion computer 9300 that may have its own software modules configured to monitor IMU information from IMU interface 9510 and telemetry information from the telemetry modules 9220, or one or more flight cameras 9240 via camera interfaces 9530. In some embodiments, the drone may have one or more GPS interfaces 9520 and one or more sensor interfaces. In some embodiments, the drone may include one or more sensor fusion systems 9310 that can be configured to process, clean, and merge the information. This module may send information to pose estimator system 9320 to estimate the six-degrees-of-freedom (6DoF) pose of the drone to determine its position in space, both relative and absolute, in the world.

Six degrees of freedom refers to the freedom of movement of a rigid body in three-dimensional space (translation along three perpendicular axes plus rotation about those three axes). A pose refers to a position and attitude, where the position may be presented in either local or global coordinate system reference frames. One or more instances of pose estimator software modules may feed one or more local mapping modules 9330, which may be configured to determine the location of objects in the world and determine if there are obstacles and the navigable path for a specific drone. One or more local mapping modules 9330 may be configured to connect with one or more local atlas modules 9340, which may be configured to determine the position of the drone in global space and may determine if the drone has made a loop and has arrived at an already seen location. This information may be sent to one or more local motion planner systems 9350 that can take into account the current position, the current state, and the desired mission plan to find the optimal path for the drone. The local motion planner system 9350 and sensor fusion system 9310 may forward images, pose, and other information to the network cache 9360 for transmission to one or more hives through hive radio system 9540.
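
For concreteness, the following is a minimal sketch of a 6DoF pose record as just defined, with a frame field distinguishing local and global reference frames. The class and field names are illustrative assumptions, not the patent's data model.

    from dataclasses import dataclass

    @dataclass
    class Pose6DoF:
        """Position (m) plus attitude (rad): six degrees of freedom."""
        x: float                  # translation along three axes
        y: float
        z: float
        roll: float               # rotation about the same three axes
        pitch: float
        yaw: float
        frame: str = "local"      # "local" or "global" reference frame

    p = Pose6DoF(10.0, -4.2, 30.5, 0.0, 0.05, 1.57, frame="local")
    print(p.yaw)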

[0199] In some embodiments, local motion planner system 9350 may be configured to receive updates over hive radio 9540 from the hive providing additional information, such as additions to local mapping module 9330 or local atlas 9340, based on other drones or other hive information. Local motion planner 9350 may keep track of its relative position to other drones to maintain a resilient swarm and ensure it is a set distance from other drones in the array. There may be a local Array Controller 9980 to perform additional management, such as fault recovery and configuration, that may work on the drone or other drones in the array.

[0200] As the drone travels, it may collect RF signal strength data from the hive radio 9540, and other sensor information directly, such as IMU information from IMU interface 9510, GPS information from GPS interface 9520, image data from camera interface 9530 or cameras 9240, or telemetry data from telemetry system 9220. This information may be sent to hive-wide planning.

[0201] Drone Time Sync module 9370 may synchronize its clock from various sources, including timing information from GPS interface 9520, and may receive time corrections from the hives over Hive Radio 9540. Drone Time Sync module 9370 may provide services and precise time stamps for all sensor systems and transmission. Drone Time Sync module 9370 may also be used by the planning modules to more precisely determine mission and travel times.

[0202] The drone software processes can be kept in containers and the modules may be kept in companion computer 9300, and they may be managed by orchestrator 9920, which may be configured to determine which modules to run on which components of companion computer 9300. Orchestrator 9920 may communicate with the hive to migrate pods from the companion to the hive if there is too much work to be done. The overall management of companion computer 9300 can be handled in manager 9940, which may include a user interface, statuses, logging, and other information. Integrator 9960 may provide an interface to external software that may run on the drone and provides integration interfaces for foreign software that may extend the function of the system.
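
As a hedged sketch of the time synchronization role described in paragraph [0201], the following local clock applies an offset refined by corrections received from a hive; the class, method names, and correction value are illustrative assumptions.

    import time

    # Hypothetical sketch: a drone clock whose offset is seeded by GPS
    # timing and nudged by corrections received over the hive radio.
    class DroneClock:
        def __init__(self):
            self.offset_s = 0.0              # correction vs. local clock

        def apply_correction(self, delta_s):
            """Apply a time correction received from a hive."""
            self.offset_s += delta_s

        def now(self):
            """Synchronized timestamp for sensor data and transmissions."""
            return time.time() + self.offset_s

    clock = DroneClock()
    clock.apply_correction(0.0031)           # hive says we are 3.1 ms behind
    timestamp = clock.now()                  # stamp a sensor reading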

[0203] FIG. 10 is a schematic diagram of a hive software architecture 1000, in accordance with various embodiments. Hive software architecture 1000, which may also be referred to herein interchangeably as a hive 1000, may include software modules that include a base system that schedules other software modules across the actual hive hardware. Hive 1000 may include a drone manager 10100 configured to provide services for the drones under the hive's control. This may include an orchestra module 10110 configured to manage what containers are running on the drone hardware and what hardware pods of the drones are available under the hive's control. The hive may include a Pose Estimator 10120 that can be configured to monitor the drones and provide an estimate capable of being used to correct a position of some or all of the drones. Object detectors 10140 may provide additional object detection that augments the drone onboard systems. Global Planner 10160 may manage the overall flights of all the drones. Calibration module 10180 may be configured to correct for distortions and sensor errors in individual drones or resilient swarms. Fault monitor and planner 10190 may receive notifications of faults and may initiate changes needed in planning. Drones that do not use an onboard controller to manage them as arrays may use Array Controller 10195, which may be configured to provide corrections to the onboard resilient swarm Controllers 9980 for configuration, positioning, and/or fault recovery.

[0204] Internal systems, such as hive systems 10200, may be configured to manage the hive. Hive systems 10200 may include a Hive Planner 10210 that can handle the routing of the hive and coordination with other hives. Local RF models 10260 may manage the local RF environment and can provide a 3D map and time-based prediction (4D mapping) of the RF environment locally and the RF environment characteristics for WAN communications. This may be used by Drone Manager 10100 to plan flights based on data download and upload requirements. Hive Planner 10210 may be configured to move the hive or synchronize with other system modules through Data Sync 10290. Weather model 10220 may be configured to model wind, light, or other dynamic conditions over time and may predict future conditions for use by planning modules throughout the system. The objects in the mission may be kept in a 4D Model system 10280 that stores the 3D geometry and models the changes in time. Mission Store 10240 may manage the mission information, such as paths and desired goals for the missions. Time Manager 10230 may provide a precise clock for all systems, including all drones and other hives, to allow precise timing for every sensor and other measurements being taken.

[0205] Cloud Manager 10300 in the hive may be configured to manage communications to the cloud. Cloud Manager 10300 may include a WAN Manager 10320 to optimize and schedule transmissions to the cloud, a cloud caching system 10340 that may store and forward information to the cloud from drones or other hives, and an Operator and Integration interface 10360 that can provide status and management facilities from the cloud or a local operator.

[0206] Hive 1000 may also include a base set of modules 10900 configured to provide common services to all modules. This may include an Orchestra 10920 to manage which software modules to isolate into containers and run on which hardware pods. Modules 10900 may also include a manager 10940 that monitors the overall system performance, handles real-time events, and logs critical data. Modules 10900 may also include an Integration system 10960 that allows foreign software to interact with the system through a set of connectors or other facilities.

[0207] FIG. 11 is a schematic diagram of a cloud software architecture 1100, in accordance with various embodiments. In some embodiments, cloud software architecture 1100 may include software modules, such as one or more instances of a real-time mission system 11200 for running live missions. Real-time mission system 11200 may include one or more mission goal modules 11210 that may be configured to track the intent of the mission, including, for example, locations to be scanned and a quality of the scans required. It may include one or more 3D model builders and object recognizers 11220 that can take the real-time data from hives or other sources and perform 3D object reconstruction using techniques including, but not limited to, NeRF, photogrammetry, or other techniques. The system may also include a real-time 3D RF model 11230 of the site to aid in real-time planning of missions and include communications bandwidth in mission re-planning 11250. These modules may communicate with the hives over the WAN Communications interface 11240.

[0208] In some embodiments, cloud software architecture 1100 may include a sensor processing pipeline 11300. Sensor processing pipeline 11300 may include a fine-grained 3D model 11330, which may run in batch mode and can provide higher resolution to the real-time 3D model builders and object recognizers. Cloud software architecture 1100 may include a high-resolution, non-real-time object recognition and characterization system 11340, and one or more 4D model integrators 11350 configured to take information from a specific mission and integrate it into a planet-wide, multi-mission derived 4D database model 11420.

[0209] In some embodiments, cloud software architecture 1100 may include one or more AI training systems 11360 that take the log, path, and other information, such as real images, and may be configured to train the object recognition model 11340 and model builder 11330.

[0210] In some embodiments, cloud software architecture 1100 may include one or more simulation systems 11500, which may be configured to aid in testing, planning, and other system functions. This software service may include one or more graphic simulators 11510 that may be very high fidelity in the imaging, including real-life effects, and may include one or more motion simulators 11520 that can model the physics of the drones in motion. In some embodiments, the service may have one or more simulators that integrate these two models. Simulation system 11500 may include a test pipeline 11530, which can allow any module to be replaced in software to provide simulation of any software component in the cloud, hive, or drones. Simulation system 11500 may include an RF simulator 11540 that may be configured to model the real-world RF effects of specific sites based on radio data and other information in 4D global model 11420.

Workflow

[0211] FIG. 12 is a schematic diagram of an example workflow 1200, in accordance with various embodiments. Workflow 1200 may be applied to a drone mission and may be performed across one or more drones 12100 and one or more hives 12200, with one or more cloud services 12300 with connections to a local operator 12265, and remote operation services 12400 that may be performed away from a mission site.

[0212] Workflow 1200 describes an example flow of how information moves and how the system works. An operator, who may be a human or another computer module, may provide new mission data 12410 to remote operations services 12400. For example, modules such as fleet status, training, and test module 12390 may run modeling of missions for resource planning to determine future fleet requirements based on prior missions. Or, if new software or systems are in place, the specification may include a list of modules with versions and times to be run in order to run virtual missions or test the system.

[0213] In some embodiments, a new mission specification may be entered into the new mission module 12410. A declarative document may be created that specifies the mission goals, such as the resolution and type of sensors to be used in a scan; the objects or areas of interest; what dynamic, moving, or unexpected objects are being searched for if not a fixed presence; and the resources available for the goals, such as one or more drones 12100, which may be grouped into one or more pods with one or more unique characteristics. The number and type of hives 12200 available for the drones and their capabilities may vary. The document may include the frequency and repetition rate desired. And it may include stored procedures, conditionals, and other constructs to specify events and how the mission should change. The specification may also include observability constraints; for example, it may specify a noise limit for an area to ensure drones are not disruptive, or it may specify visibility parameters, such as that flight paths should not be viewable from particular locations or that the drone should be very small to prevent observation assuming a given visual acuity or zoom level. The specification may also include areas that are not to be scanned or that have different privacy requirements that may inhibit certain sensors from imaging sensitive objects. These objects may be stationary or dynamic with parameters, and may be detectable using object recognition 12360, which may include face recognition or object types such as underage children. An illustrative document appears below.

[0214] The objects may be converted by mission program module 12420 into a mission document 12460 that describes the intention of the mission and is stored in a mission database 12450. The operator may load missions from mission database 12450 and extend, modify, or write new missions or link to missions.
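
Purely by way of illustration, a declarative mission document of the kind described in paragraph [0213] might resemble the following; every field name and value here is an assumption for the example, not the patent's schema.

    # Hypothetical declarative mission document (illustrative only).
    mission_document = {
        "goals": {"resolution_cm": 1.0, "sensors": ["rgb", "lidar"]},
        "areas_of_interest": ["bridge-deck", "north-pylon"],
        "resources": {"drone_pods": 2, "hives": 1},
        "repeat": {"frequency": "weekly"},
        "observability": {
            "max_noise_dba": 55,                  # noise limit for the area
            "not_visible_from": ["visitor-center"],
        },
        "privacy": {"no_scan_zones": ["residential-lot-7"]},
    }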

[0215] The cloud planning system may take the current 3D models over time as a 4D model 12310. This may include previous data from a similar time and date, as well as weather, wind, sun locations, and other data. The 4D model may synthesize, interpolate, or otherwise estimate the model of the world for the mission at a given time and weather condition. The model may also include RF signal models and audio signals that appear in the system. It may also include an acoustic propagation model to determine how far audio will travel. It may also include markers or other precisely located physical attributes, such as a bridge, that act as positioning points available visually, by radio, or by other characteristics, such as broadcast towers or cellular towers with known RF characteristics.

[0216] In the cloud, simulation system 12320 may start with these parameters. Simulation system 12320 may consist of one or more specialized simulators connected together. As an example, a detailed physics engine may model the wind, weather, and drone movement in great detail, and this may feed locations to a detailed visual simulator that has high-fidelity imaging. The simulation may insert hardware failures, weather condition changes, and other unexpected events to test the simulation. The simulation may also include a radio frequency model to determine what frequencies are available for communications. It may also model the RF environment and provide data on expected transmission bandwidth and interference given parameters for antennas, such as transmit power or beam forming. It may also model interference from other sources, such as existing radio transmitters. The simulation may also model the storage available for sensor, imaging, and other uses, as well as the predicted power used by each device for missions. The simulation may also model the expected GPS or other positioning signals available, including the expected multipath and other losses. The simulation may also include an audio model that may include drone noise, the audio signals expected to be found, and parameters for recording audio, such as anomalous engine or other noises.

[0217] Optimizer 12330 may then analyze and segment the mission into a number of segments that are allocated to physical drone pods, running on one or more hives, to complete. The optimizer may manage the power and beamforming of the APs and other RF devices and may optimize the positioning of various antennas, such as for GPS, the drone LAN (dLAN), which refers to a LAN implemented using a swarm of homogeneous, heterogeneous, or mixed autonomous or semi-autonomous multi-modal drones, or the dWAN. A swarm may be referred to herein interchangeably as a drone cluster or drone swarm. The optimizer may run a series of simulations and models with different mission document 12460 parameters and may optimize the mission based on a loss function, which may include mission time, power required to complete the mission, redundancy required, and probability of mission failure or success (see the sketch below). The optimizer may add multiple hives, for example, if it determines wind or path lengths are too great, so the optimizer may launch drones upwind and they may land downwind at a waiting hive. Or the optimizer may change hive path plans so that a hive may launch drones at one location and recover them at another. The optimizer may also determine fixed locations where drones may land and image to optimize battery life or other parameters. As an example, if a flight is repetitive, the optimizer may land a drone and put it into a sleep mode away from hives to minimize travel time and power usage, and then have it "pop up" as necessary to take images and land again. Or, if a suitable point is found for imaging, the optimizer may fly one or more drones to fixed locations to image in order to reduce flight times. The optimizer may analyze expected communication needs versus the available backhaul capacity. The optimizer may allocate drones and hives to be part of a temporary backhaul network from drones to hives and also from hives to the cloud.
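
The sketch below illustrates such a loss function over simulated mission variants; the weights and the variant numbers are illustrative assumptions, not values specified by the patent.

    # Hypothetical sketch of the optimizer's loss: score each simulated
    # variant on mission time, power, redundancy, and failure probability.
    def mission_loss(time_min, power_wh, spare_drones, p_failure):
        return (1.0 * time_min
                + 0.05 * power_wh
                + 10.0 * spare_drones      # each spare adds a fixed cost
                + 1000.0 * p_failure)      # failure is heavily penalized

    variants = [
        {"time_min": 40, "power_wh": 300, "spare_drones": 1, "p_failure": 0.04},
        {"time_min": 46, "power_wh": 330, "spare_drones": 2, "p_failure": 0.01},
    ]
    best = min(variants, key=lambda v: mission_loss(**v))
    print(best["spare_drones"])  # the extra spare wins on lower risk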

[0218] Scheduler 12340 may determine what drones and other hardware, such as hives, are available for the mission. The scheduler may bid for drone resources from the internal pool and any externally sourced drones owned by third parties. The scheduler may merge existing missions that are to be run in the mission database 12450 and may calculate new virtual missions that may include multiple missions, so that a single physical mission run may include multiple desired scans. For example, if two buildings are physically close together, the scheduler may run both missions simultaneously with drones already in flight. Or, if the RF environment prevents one hive from doing an upload, it may merge missions so that hives across missions may pass data to the cloud or other locations. It may place these packed missions into mission database 12450 and run the mission planner repetitively to optimize the resource requirements one or more times.

[0219] Cloud system 12300 may run the merged missions through the simulation system 12320 to ensure correctness and monitor the reliability of the network through fault injection. This may occur repetitively or may follow a sequence of events.

[0220] The allocated hives and drones for a mission may then be scheduled by scheduler 12340, which may schedule hierarchically, allocating hives in pods or groups and then allocating pods of drones. Scheduler 12340 may provide each hive with a specific part of the mission plan 12345 to complete the goals with the resources available. Mission plan 12345 may compensate for possible hardware failures by providing additional hives, drones, or other hardware to provide redundancy and fault tolerance. For example, if three drones in a pod are required for a mission, the mission plan may allocate two extra drones to tolerate up to two drones failing or not having enough power.

[0221] Hives 12200 may then be piloted by humans with instructions included in mission plan 12345, or may pilot autonomously, and start arrival process 12210. In arrival process 12210, hives 12200 may position themselves in multiple locations.

[0222] In deploy and calibrate module 12220, the hives may be configured to move to the correct locations and turn on their positioning and identification systems. The hive may do high-precision positioning based on previous survey markers, differential GPS, RTK, and the cellular network, acting as the base point for drone flights. This may augment the GPS or other visual positioning that may be carried on drones 12100. The calibration may use existing passive markers on the hives or other equipment, or may activate active lights and RF markers 12230 to provide additional positioning information for the drones.

[0223] Drones 12100 may, upon hive arrival, begin their preflight process 12105 to check their systems and ensure they can operate safely.

[0224] Drones 12100 may then enter their launch and calibrate module 12110 and perform basic checks to ensure each drone is operating nominally. Drones 12100 may image known objects to ensure that optical and other systems are working properly. They may, for example, orbit their launching hive and observe the visual and RF markers 12230 to calibrate their visual systems, and may correct optical flaws such as a dirty lens or estimate visibility issues such as fog or rain. They may also calibrate their RF systems and ensure that there are no unanticipated network problems. Prior to takeoff, the hive (with accelerometers) may determine that the system is stationary and level. Once stationary and level, the hive places the drones in calibration mode, where they zero out biases on the accelerometers and rate gyroscopes. The drones take off, may keep calibrated markers placed on the hive or previously deployed markers in view to get an initial 6DoF pose, and do a compass calibration by slowly yawing at a speed of less than 40 degrees per second while calibrating for local magnetic disturbances.
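
A minimal sketch of these preflight checks follows, assuming illustrative tolerances everywhere except the 40 degrees-per-second yaw limit stated above; the function names are hypothetical.

    # Hypothetical sketch of the preflight calibration sequence: confirm
    # the platform is stationary and level, estimate rate-gyro biases
    # while still, and keep the compass-calibration yaw rate under the
    # 40 deg/s limit. Tolerances other than that limit are illustrative.
    MAX_YAW_RATE_DPS = 40.0

    def is_stationary_and_level(accel_g, tol=0.02):
        ax, ay, az = accel_g                 # gravity should read ~1 g on z
        return abs(ax) < tol and abs(ay) < tol and abs(az - 1.0) < tol

    def estimate_gyro_biases(samples):
        """Average rate-gyro readings captured while stationary."""
        n = len(samples)
        return tuple(sum(axis) / n for axis in zip(*samples))

    def compass_yaw_rate(sweep_deg, duration_s):
        return sweep_deg / duration_s

    assert is_stationary_and_level((0.004, -0.003, 1.006))
    biases = estimate_gyro_biases([(0.01, -0.02, 0.00), (0.03, -0.04, 0.02)])
    assert compass_yaw_rate(360.0, 10.0) < MAX_YAW_RATE_DPS  # 36 deg/s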

[0225] Drones 12100 in formup module 12120 may operate in arrays, so failures at the array level are handled there; if there are problems, the resilient swarms compensate with additional flights if possible or, more likely, by using spare drones. The drones may use visual markers and active markers on one or more hives to provide additional localization, and this may be used to ensure the separation between drones is adequate and localization is accurate.

[0226] The drones in the mission executor 12130 may load the mission parameters from the mission store 12235. The mission execution may have drones that "hop" from one hive to another to reach the final site for imaging. Drones may be assigned to different tasks depending on their capabilities, either singly or in a pod. For example, some may stay to act as overall controllers of the scene. Drones or hives may localize points and keep other drones in view to provide an overall view of the objects, the drones, and the hives. Other drones may be allocated to communications backhaul tasks if there are dark regions that lack coverage or there is a need for additional bandwidth.

[0227] The privacy check system 12140 may detect existing markers or identify locations that have attached NFTs in the 4D map. The drones mainly examine the privacy rights information kept on those NFTs, which may have distributed ledgers or other data restricting access for different kinds of imaging. The privacy check may terminate imaging or may cryptographically seal the data the drones capture for authorized entities only.

[0228] As the drones fly, they can activate image and drop module 12150. They may image the scene with various sensors, they may sample the RF environment, and/or they may sample the weather and atmospheric conditions for clouds. They may, if in the correct location, drop artifacts such as position markers, both passive and active visual or RF markers, to note exact positions. These markers may be permanent and allow more efficient rescans later. The drones may themselves become markers and land at specific locations, providing camera, optical, and RF markers as well as input to other systems.

[0229] Drones may drop NFT markers as appropriate to mark the survey space and serve as physical and virtual tags recording that the data was acquired. If there are existing markers, the survey information will be added to those tokens. These act as the equivalent of a physical "door hanger" saying that a property was just surveyed and leaving the name of the surveyor. This provides a distributed ledger of all parties that have surveyed an area.

[0230] Drones may transmit their imaging and other data to one or more hive data stores 12240 that may cache this information and may delay transmission until the drone, the mission plan, or the hive determines there is available bandwidth. Drones may alter course to fly into RF hotspots to optimize the transmission time and bandwidth across the entire system.

[0231] Hive data stores 12240 may transmit any or all of the raw sensor data to cloud 3D model 12350, depending on communications bandwidth.

[0232] Drones may use a 3D map update 12160 to reconstruct the 3D world around them. They may flag unexpected obstacles in their path for object avoidance. They may automatically recognize objects and place them into a 3D world that may include the immediate world around them, and they may also place themselves into an atlas of previously loaded worlds to locate themselves in the larger world.

[0233] One or more hives may, individually or working together, use the hive 3D model 12250 and may construct an RF map and a 3D world map that may be an enhancement of the models loaded from cloud 3D model 12350. As various features are mapped, the hives may perform real-time SLAM to create a picture of the world and determine precise locations and corrections available to the hive, because the hive has more inputs and more computation power. These corrections may be synchronized back to the drone 3D map update 12160.

[0234] Cloud 3D model 12350 may also generate precise information given the great computational power available in a cloud service and synchronize and correlate these differences with the hive 3D model. The onsite hives may use their dynamic backhaul network to relay information to the cloud.

[0235] The hive may update its RF and weather models 12260 based on information from other hives and drones to place into the 3D model.

[0236] The hive may use the object detection system 12270 to identify additional objects in the scene.

[0237] The drones may, together or individually, generate drone events 12170 for different problems and notifications. For example, a drone may encounter physical problems in the environment, such as an unexpected obstacle, or failures such as power loss or an unexpectedly low battery, and these may result in the drone event initiating various actions, such as return to home, stop traveling, or land immediately.

[0238] Drone event 12170 may send these events to one or more hive multi-drone planner modules 12280 to reschedule and change missions. The hive drone event may schedule additional paths to be run, modify the path plans, launch additional drones through preflight 12105, or reassign existing hives and drones with new plans as needed to ensure a good mission. The planner may handle a wide variety of events. It may have drones return to hives for recharge, it may reallocate drones that are temporarily unavailable back to new drone pods, and it may direct one or more drones to recharge at one or more different hives.

[0239] Local path planner 12180 may obtain drone events 12170 and complete its own mission plan for individual drones, and can operate without hive communications.

[0240] MultiDrone planner 12280 may send its path plans to drone local path planner 12180 when it determines that a better plan is available, that a global change is needed, or that it has a better plan because it has more precise positioning.

[0241] As the local path planner 12180 receives these events, it may return to formup 12120 if new formations are required, and then can execute one or more mission modules 12130, 12140, 12150, 12160, and 12170 as appropriate to the missions.

[0242] When the local path planner 12180 determines the mission is complete, the drones begin their landing 12190 sequence. The drones may return to the launching hive, the hive may move to a different location based on wind and weather patterns, or the drones may return to a different hive as determined by Global Mission Planner 12280.

[0243] Hives 12200 may use the Next Site module 12290 to determine where they go next. The hives may return to the pool, and the hives and drones may be split and sent to different sites or powered down awaiting new work.

[0244] The Next Site module 12290 may send the logs, which may include the actual mission paths and the new world model, to the Fleet Status, Test and Training module 12390. The actual and predicted paths and resulting data are sent to the cloud so that they may be used to train recognition, simulation, and other systems to more closely match what was actually found.

[0245] Cloud 12300 may process the data in the New World model 12395. The 3D world data may be integrated into the global model 12310, the new objects found may be used to train the system, the simulator 12320 can be corrected for errors in prediction, and optimizer 12330 can be corrected for errors in the paths planned.

[0246] An onsite operator may use one or more Operator Terminals 12265 to view the Hive 3D Model 12250 and assess the mission and flight. Operator Input 12275 may provide input into multi-drone planner 12280 that may modify the mission, such as taking manual control of drones, ordering an early stop, or adding new mission parameters.

[0247] Remote operators may access the live drone data from the Cloud 3D Model 12350 and may initiate a virtual reality (VR) conference 12430 to allow multiple remote operators the ability to see and communicate through the remote terminals 12440. They may communicate with onsite Operator Terminals 12265 and may order changes or additions to the mission plan as needed to meet their requirements.

[0248] Cloud 3D Model 12350 may integrate with other systems, passing appropriate data through the Enterprise Integration 12460, which may connect with systems such as construction building information systems. Enterprise Integration 12460 may allow automatic mission planning by injecting new missions 12410 autonomously into the system and may provide additional information in VR conference system 12430.

[0249] Some embodiments, such as those where a single mission is to be flown at a single location by a few drones, may include 4D World 12310 being loaded with the location data while a single hive is in the Simulation 12320. Hive 12200 may have a human driver to take it to a new site. Then one or more drones 12100 (e.g., three drones) may begin Preflight 12105 and then fly the missions without interruption. The drones can provide image information in Image and Drop 12150 and update the Hive Data Store 12240, which may subsequently update the Hive 3D Model 12250 and the Cloud 3D Model 12350. Cloud 12300 may then provide that data to Model Integration and Training system 12370 to update 4D world model 12310. Drones 12100 may land with Land 12190, and then Next Site module 12290 can route them to the next site for additional survey.

[0250] An example use case may relate to planning a city-wide survey system that feeds multiple enterprises and can use drones from multiple sources. In this case, a number of different operators may manually or automatically, through Integration 12370, create a large set of missions, each with different privacy settings, as may be monitored in the drone Privacy Check 12140 at imaging time and filtered out or encrypted in the model with Privacy Control 12380. The Mission Planner 12420 may aggregate available hives 12200 and drones 12100 from many different owners and operators into a single Mission Database 12450. Optimizer 12330 can then simulate 12320 the entire multi-mission, multi-site model and determine the best plan, as may be measured by lower cost, meeting required accuracy, and speed, weighted by the different missions' requirements. Scheduler 12340 may dispatch multiple hives along distinct routes as specified in Hive Plan 12205. As each hive arrives 12210, it will deploy and calibrate 12220 and then launch and calibrate 12110 drones as needed, in various sequences and in parallel, to minimize the potential for collisions. Finally, the drones may form up into pods to accomplish their missions as specified by the Mission Executor 12130.

[0251] Each hive can manage a set of drones and reroute drones as appropriate in the MultiDrone Planner 12280. As drones complete missions and begin Landing 12190, the hives may asynchronously land the appropriate drones and move to the Next Site 12290. The various missions may finish at different times, and Cloud 3D Model 12350 can receive updates asynchronously. Enterprise Integration 12460 can be notified, and Privacy Control 12380 can ensure that the data rights are observed so only images destined for one client are shown to them and are invisible to others in the multi-tenant system.

[0252] FIG. 13 illustrates an example computer system 1300. In particular embodiments, one or more computer systems 1300 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 1300 provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 1300 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 1300. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate.

[0253] This disclosure contemplates any suitable number of computer systems 1300. This disclosure contemplates computer system 1300 taking any suitable physical form. As an example and not by way of limitation, computer system 1300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. Where appropriate, computer system 1300 may include one or more computer systems 1300; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1300 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 1300 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1300 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.

[0254] In particular embodiments, computer system 1300 includes a processor 1302, memory 1304, storage 1306, an input/output (I/O) interface 1308, a communication interface 1310, and a bus 1312. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.

[0255] In particular embodiments, processor 1302 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 1302 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1304, or storage 1306; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1304, or storage 1306. In particular embodiments, processor 1302 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 1302 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 1302 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 1304 or storage 1306, and the instruction caches may speed up retrieval of those instructions by processor 1302. Data in the data caches may be copies of data in memory 1304 or storage 1306 for instructions executing at processor 1302 to operate on; the results of previous instructions executed at processor 1302 for access by subsequent instructions executing at processor 1302 or for writing to memory 1304 or storage 1306; or other suitable data. The data caches may speed up read or write operations by processor 1302. The TLBs may speed up virtual-address translation for processor 1302. In particular embodiments, processor 1302 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 1302 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 1302 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 1302. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.

[0256] In particular embodiments, memory 1304 includes main memory for storing instructions for processor 1302 to execute or data for processor 1302 to operate on. As an example and not by way of limitation, computer system 1300 may load instructions from storage 1306 or another source (such as, for example, another computer system 1300) to memory 1304. Processor 1302 may then load the instructions from memory 1304 to an internal register or internal cache. To execute the instructions, processor 1302 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 1302 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 1302 may then write one or more of those results to memory 1304. In particular embodiments, processor 1302 executes only instructions in one or more internal registers or internal caches or in memory 1304 (as opposed to storage 1306 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 1304 (as opposed to storage 1306 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1302 to memory 1304. Bus 1312 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 1302 and memory 1304 and facilitate accesses to memory 1304 requested by processor 1302. In particular embodiments, memory 1304 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. This RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 1304 may include one or more memories 1304, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.

[0257] In particular embodiments, storage 1306 includes mass storage for data or instructions. As an example and not by way of limitation, storage 1306 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 1306 may include removable or non-removable (or fixed) media, where appropriate. Storage 1306 may be internal or external to computer system 1300, where appropriate. In particular embodiments, storage 1306 is non-volatile, solid-state memory. In particular embodiments, storage 1306 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 1306 taking any suitable physical form. Storage 1306 may include one or more storage control units facilitating communication between processor 1302 and storage 1306, where appropriate. Where appropriate, storage 1306 may include one or more storages 1306. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.

[0258] In particular embodiments, I/O interface 1308 includes hardware, software, or both, providing one or more interfaces for communication between computer system 1300 and one or more I/O devices. Computer system 1300 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 1300. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1308 for them. Where appropriate, I/O interface 1308 may include one or more device or software drivers enabling processor 1302 to drive one or more of these I/O devices. I/O interface 1308 may include one or more I/O interfaces 1308, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.

[0259] In particular embodiments, communication interface 1310 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 1300 and one or more other computer systems 1300 or one or more networks. As an example and not by way of limitation, communication interface 1310 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 1310 for it. As an example and not by way of limitation, computer system 1300 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 1300 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN or ultra-wideband WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 1300 may include any suitable communication interface 1310 for any of these networks, where appropriate. Communication interface 1310 may include one or more communication interfaces 1310, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.

[0260] In particular embodiments, bus 1312 includes hardware, software, or both coupling components of computer system 1300 to each other. As an example and not by way of limitation, bus 1312 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 1312 may include one or more buses 1312, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.

[0261] Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.

[0262] Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.

[0263] The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, any reference herein to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.




 