

Title:
AERIAL WATER CANNON
Document Type and Number:
WIPO Patent Application WO/2022/212065
Kind Code:
A1
Abstract:
Disclosed embodiments are related to fire extinguishing mechanisms and equipment, and in particular, to water cannons, deluge guns, and/or fire monitors that may be used for aerial firefighting among other purposes. Other embodiments are described and/or claimed.

Inventors:
BAXTER JEFFREY (US)
SASSER JOEL (US)
MCKAY AARON (US)
Application Number:
PCT/US2022/020659
Publication Date:
October 06, 2022
Filing Date:
March 16, 2022
Assignee:
ERICKSON INCORPORATED (US)
International Classes:
A62C3/02; A62C37/38; G06Q50/10
Domestic Patent References:
WO2006083558A2, 2006-08-10
Foreign References:
US20090078434A1, 2009-03-26
US20130199806A1, 2013-08-08
KR20040083198A, 2004-10-01
US20190100311A1, 2019-04-04
Attorney, Agent or Firm:
STRAUSS, Ryan, N. et al. (US)
Claims:
CLAIMS

1. A water cannon as shown and described herein.

2. A computing system configurable to control a water cannon as shown and described herein.

3. An aerial asset comprising: a water cannon system communicatively coupled with a control unit, the control unit arranged to control the water cannon system.

Description:
AERIAL WATER CANNON

CROSS REFERENCE TO RELATED APPLICATION

[1] The present application claims priority to U.S. Provisional Patent Application No. 61/161,910, filed on March 16, 2021, the disclosure of which is hereby incorporated by reference.

FIELD

[2] The present disclosure generally relates to the fields of water cannons, deluge guns, and/or fire monitors, and in particular to water cannons, deluge guns, and/or fire monitors that may be used for aerial firefighting.

BACKGROUND

[3] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

[4] Wildland fires affect human health and safety, communities, infrastructure, watersheds, soils, recreation and tourism, timber and non-timber products, cultural resources, biodiversity, and a host of other ecosystem services. The most acutely felt impact is loss of public and responder lives, highlighted by tragic events such as the Camp Fire in Northern California's Butte County (see e.g., Baldassari, "Camp Fire death toll grows to 29, matching 1933 blaze as state's deadliest: Dangerous conditions, strong winds fan Butte County blaze", East Bay Times (11 Nov. 2018), available at: https://www.eastbaytimes.com/2018/11/11/crews-continue-to-battle-strong-winds-in-deadly-camp-fire/). Exposure to smoke and other air pollutants can lead to further morbidity and mortality. Population growth, land-use change, expanded development of the wildland-urban interface, increased stress on ecosystems, and lengthened fire seasons due to climate change all contribute to heightened global concerns over the impacts of fire.

[5] Today, wildland fires are contained or extinguished using ground-based firefighters and aerial firefighting. Aerial firefighting refers to the use of aircraft in support of ground resources to combat wildfires. Aerial firefighting aircraft are equipped with mechanisms to deliver fire extinguishing material by air (such aircraft are referred to as "fire-fighting aircraft"). The aerial firefighting aircraft deliver extinguishants by means of containers (e.g., bambi buckets or belly tanks) by dropping extinguishants from the aircraft, by means of extinguishant cannons by spraying extinguishants from the aircraft, and/or by launching fire extinguishing projectiles (e.g., fire extinguishing bombs). Aerial firefighting in urban environments usually involves spraying extinguishants using extinguishant cannons. These fire extinguishing means are deployed directly into (or onto) the fire or in an area around the wildland fire for containment purposes.

BRIEF DESCRIPTION OF THE DRAWINGS

[6] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

[7] Figure 1 shows an example water cannon during ground testing according to various embodiments.

[8] Figure 2 shows an example electric drive system according to various embodiments.

[9] Figure 3 shows example high-power electrical motors and drives according to various embodiments.

[10] Figure 4 shows an example close-up of a high-power electrical drive unit according to various embodiments.

[11] Figure 5 shows an example control computer according to various embodiments.

[12] Figure 6 shows example components of a water cannon system according to various embodiments.

[13] Figure 7 shows an example aerial asset with an aerial water cannon according to various embodiments.

[14] Figure s1 illustrates an arrangement suitable for practicing various embodiments of the present disclosure.

[15] Figure s2 illustrates an example computing system suitable for practicing various aspects of the present disclosure in accordance with various embodiments.

DETAILED DESCRIPTION

1. EXAMPLE WATER CANNON EMBODIMENTS

[16] Embodiments described herein are related to fire extinguishing mechanisms and equipment, and in particular, to improved water cannons, deluge guns, and/or fire monitors that may be used for aerial firefighting among other purposes. Water cannons, deluge guns, and/or fire monitors are aimable and controllable devices capable of producing a high-capacity jet of water or other material, which are often used for firefighting purposes. The terms "water cannon", "deluge gun", "fire monitor", "deck gun", and/or the like may be used interchangeably throughout the present disclosure even though these terms may be related to different concepts. Water cannons shoot a high-velocity stream of water or other fire extinguishant material. Typically, a water cannon can deliver a large volume of water, often over dozens of meters. In addition to being used for firefighting applications, water cannons may be used for large vehicle washing, riot control, and mining, among other applications/use cases.

[17] Hitting a ground target with an aerial water cannon is a difficult task for a pilot to do unassisted. The present disclosure provides improved water cannon embodiments to provide pilots and/or other water cannon operators with more accurate and precise targeting capabilities in comparison with existing water cannon technologies.

[18] Conventional water cannons exist in a variety of forms, including the Simplex airbus cannon (e.g., SkyCannon®), the current S-64 Air Crane Cannon, and the Russian concept cannon to be mounted on a Kamov-32 helicopter. All of these conventional water cannons are manually steered and produce a relatively weak and ineffective water stream. Conventional water cannons are slow, inaccurate, and non-automatable.

[19] Conventional water cannons have short-range water streams, no ability to point themselves without human intervention, and do not provide targeting support. Conventional water cannon implementations are either bolted to the airframe in a fixed position, or are pointed by an additional crew member. This drives up the crew workload in an already challenging flight environment such as aerial firefighting. Conventional water cannons have such a relatively short range that the aircraft is required to fly close to the dangerous environment, putting the crew and/or passengers at increased risk of injury or death. The fixed-position style water cannon does not allow the aircraft to be flown into the wind (for greater engine safety and performance) while firing the cannon because the entire aircraft has to be pointed in the direction the cannon is fired.

[20] The water cannon of the embodiments discussed herein is more powerful than existing water cannons, increasing the effective range of the water stream. Increased range makes it easier to stand-off from aviation obstacles and still have an effective impact on, for example, fire suppression.

[21] In some embodiments, the water cannon is powered by electricity, instead of by an auxiliary power unit, hydraulic power unit, or other means, as is the case with conventional water cannons. This provides precise control of the water cannon’s fluid power, and thus, improves/increases the reach of the water cannon’s water stream. This is also significantly less complex and lighter than a comparable hydraulic drive used for existing water cannons.

[22] In some embodiments, the water cannon includes stability augmentation, allowing the water cannon itself to compensate for motions of the aircraft (e.g., a helicopter or the like), which reduces pilot workload.
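
As a non-authoritative illustration of the stability augmentation concept, the sketch below combines a simple proportional pointing loop with feedforward cancellation of measured airframe rates. The gain, axis mapping (yaw to azimuth, pitch to elevation), and function names are assumptions for illustration, not the disclosed implementation.

```python
def stabilized_gimbal_rates(cmd_az, cmd_el, meas_az, meas_el,
                            body_yaw_rate, body_pitch_rate, kp=2.0):
    """Gimbal rate commands (rad/s) that track the operator's pointing
    command while cancelling airframe rotation.

    Angles in radians; the yaw->azimuth / pitch->elevation mapping is a
    simplifying assumption valid for a roughly level aircraft.
    """
    az_rate = kp * (cmd_az - meas_az) - body_yaw_rate    # cancel yaw motion
    el_rate = kp * (cmd_el - meas_el) - body_pitch_rate  # cancel pitch motion
    return az_rate, el_rate
```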

[23] In some embodiments, the water cannon includes a targeting system for the water stream, which reduces pilot workload. This targeting tool aids in tracking hot portions of a fire, or hot portions of an electrical distribution system in need of aerial-based water cleaning, or any other thermal target. In some embodiments, the targeting system may include infrared cameras and/or other sensors.
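
One hedged sketch of how an IR-camera targeting aid might nominate a thermal target: locate the hottest pixel in a thermal frame and convert its image position into angular offsets for the cannon gimbal. The field-of-view parameters and sign conventions are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def hottest_target_offset(thermal_frame, fov_h_rad, fov_v_rad):
    """Angular offsets (azimuth, elevation) in radians from the camera
    boresight to the hottest pixel of a 2-D thermal frame (numpy array)."""
    rows, cols = thermal_frame.shape
    r, c = np.unravel_index(np.argmax(thermal_frame), thermal_frame.shape)
    az = (c - cols / 2) / cols * fov_h_rad   # right of center is positive
    el = (rows / 2 - r) / rows * fov_v_rad   # above center is positive
    return az, el
```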

[24] The water cannon produces a relatively high flowrate in comparison with the flowrates produced by existing water cannons. The increased flowrate yields rapid turnaround times, increasing overall effectiveness. In some embodiments, the flowrate produced by the water cannon is about 700 gallons per minute (gpm), whereas the flowrate of conventional water cannons is about 300 gpm.
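
For a rough sense of scale, the snippet below estimates the hydraulic power implied by the quoted 700 gpm flowrate. The 10 bar pump pressure rise is an assumed figure for illustration only; the disclosure does not state an operating pressure.

```python
GPM_TO_M3S = 6.309e-5            # 1 US gallon per minute in m^3/s

flow_gpm = 700                   # flowrate quoted above
q = flow_gpm * GPM_TO_M3S        # ~0.044 m^3/s
delta_p = 10e5                   # assumed 10 bar pressure rise, in Pa
hydraulic_power_w = q * delta_p  # P = Q * delta_p
print(f"{hydraulic_power_w / 1e3:.1f} kW")  # ~44 kW of hydraulic power
```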

[25] According to various embodiments, a high-powered drive system pumps a relatively large amount of water and generates a water stream that covers relatively long distances. This provides a positive effect on fire suppression (and other applications). If sufficient standoff distance cannot be maintained (as with conventional water cannons), then the helicopter can fan the flames and make the fire incident worse. The water cannon discussed herein overcomes this issue, while adding stability control and thermal targeting to reduce pilot workload and increase future opportunities to automate processes like power line washing, coating structures in fire retardant gels, and/or the like. Furthermore, using the water cannon embodiments discussed herein not only improves extinguishant precision and accuracy, but also enables the use of autonomous fire-fighting mechanisms, such as robots, drones, UAVs, and the like.

[26] The high-performance aerial water cannon embodiments discussed herein can pinpoint firefighting responses such that helicopters can become effective at suppressing fires in high-rises, skyscrapers, and/or other buildings in urban or suburban environments, as well as at precision wildland fire response. Additionally, the water cannon embodiments can enable power line washing, building fire retardant gel coating, and disaster cleanup.

[27] Figure 1 shows an example water cannon 100 in a ground testing environment according to various embodiments. In this example, the water cannon 100 comprises a water feed line, a 2-axis gimbaling water cannon (also referred to as a "monitor"), and CAN bus pressure transducers. In some implementations, the water feed line may be a 4" diameter feed line.

[28] Figure 2 shows an example electric drive system 200 according to various embodiments. In some embodiments, the electric drive system 200 may be used for a water cannon such as water cannon 100 of Figure 1. In this example, the electric drive system 200 comprises a water storage tank, a high flow rate impeller pump, water flow piping, one or more manual valves, one or more remote controlled valves, one or more pressure transducers, one or more flow meters, and a capacitor bank (e.g., the gray box with orange wires in Figure 2) that links a generator system with a pump system. The generator extracts energy from the helicopter, and the pump uses that energy, after it is stabilized by the capacitor bank, to pump or otherwise control the flow of water.

[29] In some implementations, the water flow piping and manual valves comprise 4 inch (4") diameter water flow piping and/or 4" manual valves. In some implementations, the water storage tank is a 2650 gallon water storage tank. In some implementations, the one or more remote controlled valves, the one or more pressure transducers, and/or the one or more flow meters are used for ground testing only.

[30] Figure 3 shows example high-power electrical motors and drives 300 according to various embodiments. The high-power electrical motors and drives 300 may be used in the electric drive system 200 of Figure 2.

[31] The high-power electrical motors and drives 300 include an electrical generator 302 that has black hoses attached thereto (see the top right portion of Figure 3 within the dashed ellipses). The electrical generator 302 is to be attached to the main transmission of the aerial asset (e.g., helicopter) and acts as a power source for the water cannon.

[32] The high-power electrical motors and drives 300 include motor and generator controllers 304 with high-voltage orange cable protruding from both ends. In this example, the electrical motors and drives 300 are enclosed within an aluminum housing(s). The high-power electrical motors and drives 300 also include a water pump 306 (blue) and an electric motor 308 that drives the pump 306.

[33] Figure 4 shows an example high-power electrical drive unit 400 according to various embodiments. The electrical drive unit 400 comprises a motor drive and generator control unit, and high voltage power cables for the three phases of the generator. In some implementations, the motor drive and generator control unit is a Cascadia PM 150 motor drive and generator control unit.

[34] Figure 5 shows an example control computer system 500 according to various embodiments. The control computer system 500 may be part of the capacitor bank shown by Figure 2. In this example, the control computer system 500 comprises an industrial PC running proprietary software to control the pumps, targeting, system safety states, and pilot interfaces. The control computer system 500 is also enclosed in a suitable housing and mounted to the capacitor bank. The control computer system 500 is configurable or operable to communicate with and control all the elements of the water cannon system of Figures 1-4 using a suitable bus or interconnect, such as a CAN bus or the like. In some embodiments, the computer system 500 may have the same or similar components as platform s200 of Figure s2.
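
Since the disclosure states only that the control computer commands the system over a CAN bus or similar interconnect, the following is a minimal hedged sketch using the open-source python-can library. The arbitration ID, payload layout, and valve-command semantics are hypothetical; the actual message set is proprietary.

```python
import can

VALVE_CMD_ID = 0x120  # hypothetical CAN arbitration ID for valve commands

def set_valve_opening(bus: can.BusABC, percent_open: int) -> None:
    """Send a single-byte valve-position command (0-100 %)."""
    msg = can.Message(arbitration_id=VALVE_CMD_ID,
                      data=[max(0, min(100, percent_open))],
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # SocketCAN channel name is an assumption for a Linux-based industrial PC.
    with can.interface.Bus(channel="can0", interface="socketcan") as bus:
        set_valve_opening(bus, 75)  # open a remote-controlled valve to 75 %
```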

[35] Figure 6 shows example components of a water cannon system 600 according to various embodiments.

[36] Figure 7 shows an example aerial asset 700 with an aerial water cannon system 702 according to various embodiments. In this example implementation, the aerial asset 700 is an S-64 Air Crane® helicopter provided by Erickson, Inc.® with a 2650 gallon (10031 liter) water storage capacity and 72 gallon (272 liter) foam extinguishant capacity. The system weight is approximately 600 pounds (272 kg), and the system has a hover refill time of less than 60 seconds. The water cannon system 702 has a 70 meter effective range, targets fire with an IR camera system, and comprises a thermal mapping system with data downlink for ground crews and auxiliary resources. The water cannon system 702 also includes a gimbal camera system for automatic target ranging and tracking, as well as auto stabilization with auto sweep. The water cannon system 702 generates a water/foam stream of approximately 800 gallons/minute (3,028 liters/minute) and has an available electrical power of 75,000 watts.

2. EXAMPLE INCIDENT COMMAND SYSTEMS AND ARRANGEMENTS

[37] Figure s1 shows an example Incident Command System (ICS) s100 suitable for practicing the various wildland fire suppression embodiments of the present disclosure. In this example, the environment s100 is an area where a wildland fire s180 (also referred to as "wildfire s180" or "rural fire s180") has ignited. The wildland fire s180 may be an unplanned, unwanted, uncontrolled fire in an area of combustible vegetation and/or other fuels. Depending on the type of biome and/or vegetation present in the environment s100, the wildland fire s180 can also be classified more specifically as a forest fire, brush fire, bushfire, desert fire, grass fire, hill fire, peat fire, prairie fire, vegetation fire, or veld fire.

[38] As shown by Figure s1, the ICS s100 includes client devices s110 (also referred to as "user devices s110", "client systems s110", "cloud devices s110", or the like), auxiliary resources s115, aerial systems s120, satellite systems s130, Internet of Things (IoT) devices s140 (also referred to as "sensors s140" or the like), command center s135, and service provider(s) s155. For purposes of the present disclosure, each of the systems/devices depicted by Figure s1 may be collectively referred to as "ICS s100," "wildland fire fighting systems s100," "systems s100," or the like. Further, in alternate embodiments, the ICS s100 may have more or fewer of the various types of devices/systems.

[39] The various aerial assets s120 (including aerial assets s120a, s120b, s120c, and s120i) are used for combating wildland fires s180. The aerial assets s120 include flying objects, such as aircraft including fixed-wing aircraft and rotorcraft/rotary-wing aircraft, some of which may be unmanned aerial vehicles (UAVs) or drones (e.g., aerial asset s120c). Examples of fixed-wing aircraft may include smokejumper transportation planes, air tactical platforms, Single Engine Airtankers (SEATs), large airtankers, and large transport aircraft. Examples of rotorcraft/rotary-wing aircraft include helicopters, gyrodynes, and/or the like. According to various embodiments, any of the aerial assets s120 may be equipped with the water cannon system of the embodiments discussed herein, such as the water cannon system of Figures 1-7.

[40] Other types of aircraft may additionally or alternatively be used in other embodiments. Some or all of the aerial assets s120 may be configured or fitted with means to deploy fire extinguishants such as water, water enhancers, borates, ammonium phosphates, polybrominated diphenyl ethers (PBDEs), fire retardant gels and/or foams (see e.g., National Fire Protection Association (NFPA), "Standard on Foam Chemicals for Fires in Class A Fuels", NFPA Standard 1150 (2017)), and/or the like. The fire extinguishants are generally released in a single drop or one or more trails, the size of which is determined by the wind, volume, speed, and altitude of the airtanker. Additionally or alternatively, some or all of the aerial assets s120 may be equipped to deploy smokejumpers and/or rappellers in and around the fire s180. Furthermore, some or all of the aerial assets s120 may implement aerial computing systems, such as, for example, flight management systems (FMS), inertial navigation systems (INS), electronic flight instruments, cockpit display systems (CDS), head-up display (HUD) systems, Integrated Modular Avionics (IMA) systems, and/or the like.

[41] In the example of Figure s1, aerial asset s120i is an Incident Awareness and Assessment (IAA) aircraft (also referred to herein as "IAA aircraft s120i" or "IAA system s120i") available from government agencies and/or the private sector to provide support to wildland fire operations. The IAA aircraft s120i flies over the fire s180 and collects sensor data s107 for mapping the fire s180. In this regard, the IAA aircraft s120i is equipped with one or more sensor systems for three main wildland fire operations: detection (e.g., using sensor data s107 to detect and map locations of new fires s180); large fire perimeter mapping (e.g., using sensor data s107 to map a heat perimeter of large fires s180); and tactical IAA (e.g., using sensor data s107 to provide near real time situational awareness, spot fire detection, overwatch of ground operations, and mapping of the heat perimeter of smaller fires or active portions of large fires).

[42] As examples, the sensor systems used by IAA aircraft s120i include active infrared (IR) systems, thermal imaging systems (also referred to as "thermal camera systems", "thermographic camera systems", and the like), light detection and ranging (LiDAR) sensor systems, and/or other like sensor systems such as those discussed herein. LiDAR is a remote-sensing technique that uses laser light to densely sample the surface of the earth. The LiDAR sensor system illuminates a target (e.g., desired geographic features) with a laser and measures a return/reflection timing of the light to the sensor, where differences in laser return times and wavelengths are used to make digital 3D representations of the target. LiDAR systems may produce mass point datasets that can be visualized and analyzed as (or using) a terrain dataset. In some implementations, the LiDAR systems may be used to generate digital elevation models (DEMs), digital surface models (DSMs), and/or other 3D visual representations of terrain data for the mapping/geographic GUI. Additionally or alternatively, other technologies may be used to create terrain data such as, for example, sound navigation and ranging (SONAR) and/or photogrammetric techniques.
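
The range measurement underlying LiDAR follows directly from the round-trip pulse timing described above; a minimal worked example (illustrative only, not taken from the disclosure):

```python
# Time-of-flight ranging: the pulse travels out and back, so range = c*t/2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_time_s: float) -> float:
    """Range to the reflecting surface from the round-trip pulse time."""
    return C * round_trip_time_s / 2.0

print(lidar_range_m(667e-9))  # a ~667 ns echo corresponds to ~100 m
```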

[43] The active IR systems include an IR light source and an IR camera(s) (or IR detector(s)), where the IR light source uses short-wave IR (SWIR) light/energy to illuminate an area of interest and the IR camera generates an image based on the IR energy that is reflected back to the IR camera. The IR camera may be a Near IR (NIR) silicon sensor, a SWIR camera (e.g., transparent silicon manufactured with gallium arsenide (GaAs)), and/or any other like IR camera/sensor.

[44] Aerial thermal imaging systems (TIS) also include an IR light source and IR camera(s) (or IR detector(s)), which collect thermal imaging data. TISs use mid-wavelength IR (MWIR) energy or long-wavelength IR (LWIR) energy to passively sense differences in heat, which is usually used to see through smoke billowing from the fires s180. These heat signatures are then used for generating thermal imaging. The IR camera(s) of the TIS converts the IR energy into false color (including grayscale) visual images. The image is representative of the temperature of the objects in the image. In a white-hot grayscale IR image, dark gray represents colder temperatures, and light gray represents hotter temperatures. Because TISs operate in longer infrared wavelength regions than IR imaging systems (and visible light cameras), they do not capture reflected light, and therefore, are not affected by oncoming headlights, smoke, haze, dust, etc. This is because smoke particles strongly absorb visible light but only partially absorb IR radiation, which allows the IR radiation to pass through smoke and be captured by the TISs. The result of measurements taken with a TIS may be referred to as "thermal imaging data", a "thermogram", or the like. Each point of a resulting image or thermogram is displayed according to the intensity of thermal radiation corresponding to the point of the scanned object. In some implementations, the TIS may be a MWIR camera, LWIR camera, Far-Infrared (FIR) camera, a forward-looking infrared (FLIR) camera system, and/or any other like IR and/or thermographic camera/sensor system. Additionally or alternatively, the TIS may include radiometric or non-radiometric thermal camera(s).

[45] In some implementations, the active IR and/or thermal imaging systems may include line scanner systems, step-stare imaging systems, and/or rotational gimbal mounted electro-optical (EO)/IR camera systems. Line scanner systems record the temperature profile across the entire cross-surface and plot it onto a false color image as a function of time. Line scanners and step-stare systems can quickly scan and map large fires s180 and are usually used when fires s180 are actively burning with open flames. EO/IR imaging balls are usually used to provide overwatch of a specific area and are more sensitive to detecting smoldering heat sources. The step-stare systems rapidly build a large area coverage image mosaic from smaller images captured by a narrow-field camera and properly tiling the resulting images. EO/IR camera systems include both visible light and IR sensors, usually in a single package, mounted on a gimbal for stabilized imaging and precision pointing in various directions.

[46] Additionally, the aerial asset s120i includes a communication system configured to communicate with other aerial assets s120, client devices s110 (also referred to as "user devices s110", "client systems s110", "cloud devices s110", or the like), auxiliary resources s115, aerial systems s120, satellite systems s130, command center s135 (e.g., service providers, cloud computing system, etc.), and/or Internet of Things (IoT) devices s140 (also referred to as "sensors s140" or the like) via respective data links s105.

[47] In some implementations, the aerial asset s120i performs imaging of the wildland fire s180 based on the collected sensor data s107. In other implementations, the aerial asset s120i provides the sensor data s107 to the command center s135 and/or service provider s155 for image processing. In either implementation, the imaging may involve generating IR images of an area of interest (AOI) using an IR imaging system, passively sensing the differences in heat using a TIC system, using visible light cameras to capture images of the AOI, and/or using other sensors to identify/determine various aspects of the wildland fire s180 and/or the area surrounding the wildland fire s180.

[48] In some implementations, the aerial asset s120i also performs various mapping and flight path optimizations according to the various embodiments discussed herein. In these implementations, the aerial asset s120i may generate a set of rank-ordered candidate drop zones based on various input data and optimization criteria, and provides at least the highest-ranking drop zones from among the set of rank-ordered candidate drop zones to fire manager aerial asset s120a over a direct data link s105 in the form of, for example, map files, vector files, or the like. The highest-ranking drop zones from among the set of rank-ordered candidate drop zones may be referred to herein as "optimized drop zones" or the like. In some implementations, the rank-ordered candidate drop zones or the optimized drop zones may be provided to the manned fire-fighting aerial asset s120b and/or unmanned fire-fighting aerial asset s120c via respective direct data links s105 (not shown by Figure s1).
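
The disclosure does not specify the ranking criteria, so the following is only a hedged sketch of rank-ordering candidate drop zones: hypothetical zones are scored on fire intensity, obstacle risk, and flight distance, with criteria names and weights chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class DropZone:
    zone_id: str
    fire_intensity: float   # normalized 0..1, e.g. from thermal mapping
    obstacle_risk: float    # normalized 0..1, e.g. terrain/wires (assumed)
    distance_km: float      # flight distance from current position

def score(z: DropZone, w_heat=0.6, w_risk=0.3, w_dist=0.1) -> float:
    # Higher fire intensity raises priority; risk and distance lower it.
    return (w_heat * z.fire_intensity
            - w_risk * z.obstacle_risk
            - w_dist * z.distance_km / 10.0)

def rank_zones(zones):
    """Return zones sorted best-first; the head of the list is the
    'optimized drop zone' in the sense used above."""
    return sorted(zones, key=score, reverse=True)
```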

[49] In other implementations, the aerial asset s120i provides the sensor data s107 and/or imaging data to the command center s135 and/or service provider s155 for performing the various mapping and flight path optimizations of the embodiments discussed herein. In these implementations, a satellite-style connectivity between aerial asset s120i and command center s135 may take place via the satellite system s130 (not shown by Figure s1) for transfer of the sensor data s107 and/or imaging data. Furthermore, the satellite-style connectivity and/or direct data links s105 may be used to provide the candidate drop zones to the aerial assets s120 in the form of, for example, map files, vector files, or the like.

[50] The fire manager within or operating aerial asset s120a (also referred to as the "fire boss," "spotter," "Air Tactical Group Supervisor," "supervisor," or the like) orbits the wildland fire s180 at a higher altitude than the other aerial assets s120 to coordinate the efforts of smokejumpers (not shown by Figure s1), manned aerial assets s120b, unmanned aerial assets s120c, auxiliary resources s115, and/or ground personnel (e.g., operating user devices s110). In particular, the fire manager aerial asset s120a coordinates with various fire-fighting pilots (e.g., operating manned fire-fighting aerial assets s120b) for the deployment of fire extinguishants using the set of candidate drop zones.

[51] The aerial assets s120b and s120c may be air tankers, water bombers, slurry bombers, and the like, which are fixed-wing aircraft equipped to drop fire retardants or suppressants. Examples include Single Engine Air Tankers (SEATs) such as the Air Tractor AT-802, the Canadair® CL-215 and CL-415, and the Soviet Antonov An-2 biplane; medium-sized modified aircraft (e.g., Modular Airborne FireFighting Systems (MAFFS)) such as the Grumman S-2 Tracker, Conair Firecat, the Douglas DC-4 and DC-7, the Lockheed® C-130 Hercules, the Lockheed® P-2 Neptune, and the Lockheed® P-3 Orion; supertankers, such as the Boeing® 747 supertanker; and/or the like. Additionally or alternatively, the aerial assets s120b and s120c may be helicopters fitted with tanks (referred to as "helitankers") or may carry buckets (e.g., bambi buckets, and/or the like). Examples of these helicopters include Bell 204/205 and 212 helicopters produced by Bell Textron Inc., the Boeing® Vertol, the Sikorsky S-64 Aircrane® helitanker, the Sikorsky® S-70, the Erickson S-64 Air Crane® helicopter, the Mil Mi-26 ("Halo"), and/or the like. Usually, one or more of the aerial assets s120b, s120c act as lead planes that fly at lower altitudes ahead of the airtankers to mark the trajectory for the extinguishant drop and ensure overall safety for both ground-based and aerial firefighters. However, in some implementations, the set of candidate drop zones may alleviate the need for lead planes, thereby improving firefighting pilot safety.

[52] The client devices s110 can be implemented as any suitable computing system or other data processing apparatus usable by users to access content/services provided by the various wildland fire fighting systems s100. In this example, the client devices s110 may be mobile devices such as mobile phones (e.g., a "smartphone"), tablet computers, portable media players, wearable computing devices, handheld transceivers (also referred to as "walkie-talkies"), head-mounted display (HMD) devices, an Android Team Awareness Kit (ATAK), Mounted Computing Environment (MCE) systems/devices such as a Mounted Mission Command (MMC) transceiver and Mounted Android Tactical Assault Kit (MTAK), and/or the like, that may be operated by individual wildland fire suppression crew members while on the ground. However, other types of client devices s110 may be used in other embodiments, such as desktop computers, workstations, laptops, or other computing devices/systems capable of interfacing directly or indirectly with network infrastructure or other wildland fire fighting systems s100.

[53] The auxiliary resources s115 include one or more motorized vehicles equipped with controls used for driving and/or wildland fire suppression applications. The motors of the vehicles may include any devices/apparatuses that convert one form of energy into mechanical energy, including internal combustion engines (ICE), compression combustion engines (CCE), electric motors, hybrids (e.g., including an ICE/CCE and electric motor(s)), hydrogen fuel cells, and the like. Examples of such vehicles may include brush trucks, bulldozers, fire engines (e.g., type I engines), water tenders (type II engines), and/or wildland fire engines (type III engines), rescue vehicles, skidder units, vehicles retrofitted with booster pumps, transport vehicles, and the like. These vehicles may also include embedded devices that monitor and control various subsystems of the vehicles, and which allow the auxiliary resources s115 to receive fire suppression instructions from the fire manager. Examples of the vehicle systems may be considered synonymous to, and may include, any type of computer device used to control one or more systems of a vehicle, such as an electronic/engine control unit, electronic/engine control module, embedded system, microcontroller, control module, engine management system (EMS), onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), in-vehicle infotainment system, and/or the like. In some implementations, a client device s110 may be employed as a vehicle system, for example, a smartphone or tablet computer communicatively coupled with a vehicle via USB, Bluetooth®, or the like.

[54] The satellite system s130 provides navigation or geo-spatial positioning services, communication services, observation or surveillance services, weather monitoring services, and/or the like, to other systems s100. The satellite system s130 may include one or more terrestrial stations and one or more satellites (e.g., in low Earth orbit, geostationary orbit, geosynchronous orbit, polar orbit, etc.) that communicate with one another via respective links (not shown by Figure s1). In some embodiments, the satellite system s130 may collect imaging data that may be used to supplement the sensor data s107 and/or imaging data collected by aerial asset s120i. For example, the satellite system s130 may include high accuracy Earth observing satellites that can detect a fire within about 10 minutes and are capable of transmitting that information to firefighters (e.g., operating client devices s110), the fire manager, and/or command center s135. Additionally or alternatively, the satellite system s130 may provide geographic mapping services and/or spatial analysis services including wildland fire tracking services such as, for example, NASA's Earth Observing System (EOS) satellites including Moderate Resolution Imaging Spectroradiometer (MODIS) instrumentation (e.g., Terra EOS and Aqua EOS). In these embodiments, the satellite system s130 imaging may replace or supplement the thermal imaging provided by the IAA system s120i.

[55] The command center s135 may be any place that is used to provide centralized command for some purpose, such as coordinating the overall incident strategy and specific tactical actions, as well as the tactical direction of resources. The command center s135 may be, or include, an Incident Command Post (ICP), Incident Command System (ICS), dispatch center, and/or some other combination of facilities, equipment, personnel, and communications operating within a common organizational structure. The command center s135 may be operated by a government agency (e.g., the National Interagency Coordination Center (NICC) at the National Interagency Fire Center (NIFC) or the like), a private enterprise, or a combination thereof.

[56] The command center s135 may include various ground equipment such as a network of radio transceivers managed by a central site computer or centralized computer network (e.g., an ARINC Front End Processor System (AFEPS) in an ACARS), which handles and routes messages between the various systems s100. In some implementations, the communication capabilities/functions may be contracted out to a datalink service provider (DSP) or to a separate service provider (e.g., one or more service providers s155). In this example, the command center s135 includes a directional antenna, which may be a high gain, long distance datalink antenna (e.g., a satellite dish) for communicating with the IAA aircraft s120i and/or other systems s100.

[57] The command center s135 may include one or more computing systems that may be in communication with one or more service providers s155. The service provider s155 includes one or more physical and/or virtualized systems for providing content and/or functionality (i.e., services) to one or more clients (e.g., any of systems s100) over a network. The physical and/or virtualized systems include one or more logically or physically connected servers and/or data storage devices distributed locally or across one or more geographic locations. Generally, the service provider(s) s155 is configured to use IP/network resources to provide web pages, forms, applications, data, services, and/or media content to systems s100. As examples, the service provider(s) s155 may provide cloud computing services, cloud analytics services, database (DB) services (e.g., multi-tenancy and/or on-demand DB services), geographic/geometric and/or mapping services, and/or communication services such as Voice-over-Internet Protocol (VoIP) sessions, push-to-talk (PTT) sessions, ACARS messaging, and the like for the systems s100. In one example, the service provider(s) s155 may be a datalink service provider (DSP) for routing messages among the various systems s100.

[58] Additionally or alternatively, the service provider(s) s155 provides DB services for storing and/or processing geographic data according to a suitable DB model such as the geodatabase (GDB) model. A GDB is a collection of geographic datasets of various data types held in a file system or a multiuser relational database management system (DBMS). The GDB information model is implemented as a series of tables holding feature classes and attributes. Feature classes are homogeneous collections of features, each having the same spatial representation and a set of attributes. Feature classes contain both the geometric shape of each feature as well as descriptive attributes. The feature classes are stored in a DB table, where the features are stored in table rows and the attributes are stored in table columns. In the context of GDBs, a feature is an object that stores its geographic representation, which is typically a point, line, or polygon, as one of its properties (or fields) in the row.
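
A minimal sketch of that row/column layout, using SQLite purely for illustration: one table per feature class, geometry stored as a field alongside descriptive attribute columns. Real geodatabases use spatial types and indexes; the table name, columns, and WKT geometry encoding here are assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One feature class -> one table; each row is a feature, each column
# an attribute, with the geometry held as one field of the row.
con.execute("""CREATE TABLE fire_perimeters (
    feature_id INTEGER PRIMARY KEY,
    geometry   TEXT,   -- e.g. WKT polygon of the heat perimeter
    acres      REAL,   -- descriptive attribute column
    detected   TEXT    -- detection timestamp
)""")
con.execute("INSERT INTO fire_perimeters VALUES (1, "
            "'POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))', 12.5, '2022-03-16')")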

[59] In some implementations, the sensor data s107 and/or imaging data may be supplemented using additional sensor data s107 collected by IoT devices s140. The IoT devices s140 are uniquely identifiable embedded computing devices that comprise a network access technology designed for low-power applications utilizing short-lived links/connections. The IoT devices s140 may capture and record data associated with an event, and communicate such data with one or more other devices over a network with little or no user intervention. As examples, IoT devices s140 may be (or may include) autonomous sensors, gauges, meters, image capture devices, microphones, electromechanical devices (e.g., switches, actuators, etc.), and/or the like. The IoT devices s140 may be stationary (e.g., attached to objects or otherwise fixed in place) or mobile (e.g., disposed on or embedded in drones or other moveable devices). For purposes of wildland fire detection, the IoT devices s140 may include temperature sensors, humidity sensors, carbon dioxide (CO2) sensors, carbon monoxide (CO) sensors, and/or other remote and/or in situ sensors. These sensors may be deployed in specific locations that are considered high risk for wildland fires s180 to track the propagation of fires s180 more precisely and in near real-time. In some implementations, unmanned aerial surveillance systems (e.g., same or similar to aerial system s120c) may be equipped with sensors to collect data about the fire s180. Additionally, the IoT devices s140 may be deployed along (at or around) a specific geographic area in a way such that few or no blind spots remain under most conditions. In some implementations, an optimization algorithm is used to deploy IoT devices s140 in a particular arrangement in or around the geographic area, and possibly with different sensor types, in order to maximize the quality of sensor information in the form of perceptional completeness, minimize costs by optimizing the number of required sensors needed to provide full or nearly-full coverage of the geographic area, and maximize (or minimize) the potential overlap of sensing areas.
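
The disclosure leaves the optimization algorithm unspecified; as one hedged stand-in, the sketch below uses greedy set cover to place sensors so each new site covers the most still-uncovered terrain cells. The data model (sites mapped to sets of covered cells) is an illustrative assumption.

```python
def greedy_placement(candidate_sites, cells_to_cover, max_sensors):
    """Pick sensor sites covering the most uncovered cells at each step.

    candidate_sites: dict mapping a site name to the set of grid cells
    a sensor placed there would cover. Returns (chosen sites, cells
    still uncovered when the sensor budget runs out).
    """
    uncovered, chosen = set(cells_to_cover), []
    while uncovered and len(chosen) < max_sensors:
        site = max(candidate_sites,
                   key=lambda s: len(candidate_sites[s] & uncovered))
        if not candidate_sites[site] & uncovered:
            break  # no remaining site adds any coverage
        chosen.append(site)
        uncovered -= candidate_sites[site]
    return chosen, uncovered
```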

[60] Each of the systems s100 includes physical hardware elements and software components capable of executing one or more applications and accessing content and/or services provided by the other systems s100. The wildland fire fighting systems s100 may communicate with one another using suitable communication protocol(s) over respective data links s105. The respective data links s105 comprise means of connecting one location to another for the purpose of transmitting and receiving digital information, including the communication protocols governing the transfer of digital data from a data source to a data sink.

[61] Each data link s105 is a connection between an end system on an aircraft s120 and an end system located on the ground (e.g., command center s135, client devices s110), onboard another aircraft s120, satellites s130, sensors s140, or combinations thereof. Each end wildland fire fighting system s100 includes one or more information applications that are independent of, and use, the data links s105 to exchange information. Each data link s105 is a communications "pipe" through which various messages/content are transferred between information applications that are operating in respective wildland fire fighting systems s100. The data links s105 may also be used to send control signals to and receive telemetry from unmanned aircraft s120c and/or auxiliary resources s115.

[62] The data links s105 may be airband radio links, satellite links, cellular communication links, and/or some other type of links/channels between the various systems s100. In some implementations, the data links s105 may be based on one or more civil aviation data link systems such as direct point-to-point (PTP) data links as defined by Aeronautical Radio Inc. (ARINC) Standard 429, ARINC Standard 629 ("Digital Autonomous Terminal Access Communication"), ARINC Standard 653 ("Avionics Application Standard Software Interface", "Application Executive" or "APEX"), an Aircraft Data Network (ADN) as defined by ARINC Standard 664, Avionics Full-Duplex Switched Ethernet (AFDX) as defined by ARINC Standard 664 part 7, controller-pilot data link communications (CPDLC) (e.g., FANS-1/A, ICAO Doc 9705 compliant ATN/CPDLC systems, etc.), Aircraft Communications Addressing and Reporting System (ACARS) (see e.g., ARINC Standards 618, 633, etc.), Time-Triggered Ethernet (TTE), air-based Ethernet or air-to-air Ethernet, and/or the like. In some implementations, the data links s105 may be based on one or more governmental or military data link systems such as a suitable Tactical Data Link (TDL) such as Multifunction Advanced Data Link (MADL), Link 16, Link 22, Enhanced Position Location Reporting System (EPLRS), Situation Awareness Data Link (SADL), etc.; SpaceWire (ECSS-E-ST-50-12C/11C); MIL-STD-1553 (SAE AS15531); and/or the like. Any other communication systems and/or communication protocols may be used in other embodiments, examples of which may include Hypertext Transfer Protocol (HTTP) over Transmission Control Protocol (TCP)/Internet Protocol (IP), or one or more other protocols such as Extensible Messaging and Presence Protocol (XMPP); File Transfer Protocol (FTP); Secure Shell (SSH); Session Initiation Protocol (SIP) with Session Description Protocol (SDP), Real-time Transport Protocol (RTP), Secure RTP (SRTP), Real-time Streaming Protocol (RTSP), or the like; Simple Network Management Protocol (SNMP); WebSocket; Web Application Messaging Protocol (WAMP); Joint Range Extension Applications Protocol (JREAP) A, B, and C; User Datagram Protocol (UDP); QUIC (sometimes referred to as "Quick UDP Internet Connections"); Remote Direct Memory Access (RDMA); Stream Control Transmission Protocol (SCTP); Internet Control Message Protocol (ICMP); Internet Group Management Protocol (IGMP); Internet Protocol Security (IPsec); Military Standard (MIL-STD) 1553, MIL-STD-1773; X.25; and/or the like.

[63] In a first example implementation, the fire manager aerial asset s120a includes a computing system that processes sensor data s107 provided by the IAA aircraft s120i. This computing system includes graphics processing circuitry (e.g., a graphics processing unit (GPU) or the like) to process the sensor data s107 and render/display a thermal imaging graphical user interface (GUI) on a suitable display device. The computing system also includes processing circuitry that operates a suitable client application or web application that includes the GUI. In one example, the computing system implemented by the fire manager and/or aerial asset s120a is a suitable mission computer such as a data acquisition and processing unit (DAPU) with pilot interface panel (PIP) or control and display unit (CDU), the General Dynamics Advanced Mission Computer (AMC), Advanced Telecommunications Computing Architecture (ATCA) mission computer system, Common Open Secure Mission Computer (COSMC) provided by Raytheon Technologies Corp., Aircraft Piloting Inertial Reference System (APIRS) provided by Safran®, and/or any other suitable computing system implemented in or by an aerial asset s120. In another example, the computing system implemented by the fire manager and/or aerial asset s120a is a mobile device such as a laptop computer, tablet, and/or the like.

[64] In a second example implementation, the command center s135 includes one or more computing systems that process the sensor data s107 and provide data to the fire manager and/or aerial asset s120a for rendering/displaying the GUI on a suitable display device. This may provide more processing power and/or capabilities, which may allow for more dynamic maps to be provided to the fire manager. In this example implementation, the fire manager in the aerial asset s120a may still command and coordinate fire suppression activities, while taking advantage of the greater processing capabilities of a ground-based computing service, such as the command center s135, a cloud computing service, an edge computing network, and/or the like.

[65] In a third example implementation, the GUI is operated by the command center s135, which includes one or more computing systems that perform the various functions of the first and/or second example implementation on the ground. In this example implementation, the GUI may be a distributed application hosted and served by a cloud computing service, an edge computing network, and/or other like network. This may allow an Incident Commander and/or Incident Management Team on the ground to manage fire suppression activities rather than requiring a fire manager aerial asset s120a to circle above the wildland fire s180 as discussed previously.

[66] In any of the aforementioned example implementations, the computing system rendering the thermal imaging data (e.g., the fire manager computing system and/or the command center s135 system, collectively referred to as a "command computing system") may receive a single file that indicates the thermal data and/or water cannon targeting locations, which are then generated/rendered by a client app operated by the command computing system (CCS) and superimposed on top of a mapped geographic area.

[67] In any of the aforementioned example implementations, the GUI includes various visual representations (or "visualizations") of data, including a visual representation of the geographic area that includes the wildland fire s180 (e.g., a "geographic map"), visual representations of the wildland fire s180 on or in the geographic area (e.g., a "heat map", "thermal map", "mosaic", or the like), and visual representations of the candidate/optimal deployment zones overlaid on top of the geographic map and the thermal map. The client app may render/display the geographic map using mapping data of the geographic area (e.g., satellite images, aerial photography, GIS data, and/or the like), thermal data for generating the visual representations of the wildland fire s180, and data indicating the optimized drop zones to be overlaid on top of the thermal and/or geographic map(s).

[68] In some implementations, the mapping data may be in the form of a tiled web map such as those defined by the Open Source Geospatial Foundation (OSGeo), the OpenGIS® Web Map Tile Service Implementation Standard version 1.0.0, no. 07-057r7 (06 Apr. 2010), Open Geospatial Consortium (OGC) Volume 1: CDB Core Standard: Model and Physical Data Store Structure version 1.1.0, no. 15-113r5 (19 Dec. 2018), and/or any other suitable OpenGIS® Standards (available at: https://www.ogc.org/docs/is), each of which is hereby incorporated by reference in their entireties. Additionally or alternatively, the mapping data may comprise mosaic map data or orthomosaic map data, which is made up of multiple images of a geographic area that are stitched together and geometrically corrected (e.g., "orthorectified" or the like). Additionally or alternatively, the mapping data may comprise a radiometric orthomosaic (thermal map).
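
Tiled web maps of the kind referenced above index tiles by zoom level and Web-Mercator x/y; the standard "slippy map" tile computation is shown below for context (the coordinates and zoom level are arbitrary illustration, not values from the disclosure).

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Return (x, y) tile indices at a zoom level for a WGS-84 point."""
    n = 2 ** zoom                          # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

print(latlon_to_tile(39.76, -121.62, 12))  # tile indices over Butte County, CA
```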

[69] According to various embodiments, the CCS may obtain a single file that indicates the target areas for shooting a water stream using the attached water cannon of the embodiments discussed herein, which are then generated/rendered by the client app and superimposed on top of the mapped geographic area or on top of a graphical/visual representation of the imaged area (e.g., buildings, etc.). In some embodiments, this single file may also include new/updated geographic/mapping data and/or thermal data along with the optimized drop zone data. In other embodiments, the new/updated mapping data and/or the thermal data may be provided in separate files than the single file including the optimized drop zone data.

[70] Additionally, the GUI and/or the visualizations themselves may include graphical control elements (GCEs) that allow the user (e.g., pilot, fire manager, incident manager, Infrared Interpreters (IRINs), etc.) to transform or otherwise manipulate the visualizations and/or the data represented by the visualizations. In some embodiments, one or more GCEs may allow the user to change the ratings and/or factors that are being optimized for the water cannon targeting.

[71] Additionally or alternatively, one GCE (e.g., a button or toggle) may enable (or disable) a "Hotspot Highlight" feature to highlight/distinguish detected fires or other hotspots in the mapping data using, for example, different distinguishing effects including colorization, shading, brightness, saturation, textures, and/or the like. The Hotspot Highlight feature may also distinguish between different levels of heat within a fire area, for example, by using different distinguishing effects for different temperature ranges. The Hotspot Highlight feature makes delineating fires or other thermal anomalies much easier for the user because relatively small points of heat are made more apparent when colorized, shaded, or otherwise distinguished from non-thermal areas in the map. Additionally, one or more GCEs may allow the user to mark known locations with points to reference later in-flight (e.g., using markers, pins, labels, and/or the like).
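
A hedged sketch of such range-based colorization: bucket a temperature array into ranges and assign each range a distinguishing color. The thresholds and color choices are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def highlight_hotspots(temps_c: np.ndarray) -> np.ndarray:
    """Return an RGB image distinguishing temperature ranges; cells below
    the lowest threshold stay black (non-thermal background)."""
    img = np.zeros(temps_c.shape + (3,), dtype=np.uint8)
    img[temps_c >= 60] = (255, 255, 0)    # warm anomaly: yellow (assumed)
    img[temps_c >= 150] = (255, 128, 0)   # smoldering: orange (assumed)
    img[temps_c >= 300] = (255, 0, 0)     # open flame: red (assumed)
    return img
```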

[72] In any of the aforementioned implementations, the CCS may generate the visualizations and/or request additional data from the IAA aircraft s120i and/or data from other systems in Figure s1 in response to activation of one or more of the GCEs. The single file indicating the optimal deployment zones, geographic/mapping data, and/or thermal data may be, for example, a Keyhole Markup Language (KML) document, KML Zipped (KMZ) file, and/or KML SuperOverlay; Geography Markup Language (GML); Geospatial eXtensible Access Control Markup Language (GeoXACML); Geospatial Data Abstraction Library (GDAL), Tag Image File Format (TIFF), a radiometric JPEG (R-JPEG), computer-aided drafting (CAD) files, a Global Information System (GIS) file format such as ARC Digitized Raster Graphics (ADRG), Esri grid, GeoTIFF, JPEG 2000, Raster Product Format (RPF), USGS Digital Elevation Model (DEM), Digital Terrain Elevation Data (DTED), Spatial Data Transfer Standard (SDTS); Shapefile (SHP), Web Map Service (WMS); and/or the like. Additionally or alternatively, the single file, the mapping data, thermal data, and/or optimized deployment zone data may be served or otherwise provided by a suitable server (e.g., a TMS server, ArcGIS Server, and/or the like). Additionally or alternatively, the client app may be any suitable mapping application, and may be implemented as a desktop or native application, a web browser capable of running a web mapping app within the browser, a hybrid app, and/or any other application. As examples, the client app may be Google Maps®, Google Earth®, Environmental Systems Research Institute (Esri) ArcGIS® (e.g., ArcReader, ArcMap, etc.), Android Team Awareness Kit (ATAK), EarthWatch® and/or SecureWatch® provided by Maxar Technologies, and/or any other application that is capable of rendering and displaying terrain data such as three dimensional (3D) terrain data.

3. EXAMPLE SYSTEM AND DEVICE CONFIGURATIONS AND ARRANGEMENTS

[73] Figure s2 illustrates an example computing system s200 (also referred to as "platform s200," "device s200," "appliance s200," or the like) in accordance with various embodiments. The system s200 may be suitable for use as any of the computer devices discussed herein, including the aforementioned CCS, the client devices s110, IoT devices s140, an on-board aerial asset/avionics computing system, a server operated by the service provider s155 and/or the command center s135, a computing device embedded in the satellite system s130, one or more cloud computing nodes of a cloud computing service, and/or any other computing device discussed herein. The components of system s200 may be implemented as an individual computer system, or as components otherwise incorporated within a chassis of a larger system. The components of system s200 may be implemented as integrated circuits (ICs) or other discrete electronic devices, with the appropriate logic, software, firmware, or a combination thereof, adapted in the computer system s200. Additionally or alternatively, some of the components of system s200 may be combined and implemented as a suitable System-on-Chip (SoC), System-in-Package (SiP), multi-chip package (MCP), or the like.

[74] The system s200 includes physical hardware devices and software components capable of providing and/or accessing content and/or services to/from the remote system s255. The system s200 and/or the remote system s255 can be implemented as any suitable computing system or other data processing apparatus usable to access and/or provide content/services from/to one another. As examples, the system s200 and/or the remote system s255 may comprise desktop computers, workstations, laptop computers, mobile cellular phones (e.g., "smartphones"), tablet computers, portable media players, wearable computing devices, server computer systems, an aggregation of computing resources (e.g., in a cloud-based environment), or some other computing devices capable of interfacing directly or indirectly with network s250 or another network. The system s200 communicates with remote systems s255, and vice versa, to obtain/serve content/services using, for example, Hypertext Transfer Protocol (HTTP) over Transmission Control Protocol (TCP)/Internet Protocol (IP), or one or more other common Internet protocols such as File Transfer Protocol (FTP); Session Initiation Protocol (SIP) with Session Description Protocol (SDP), Real-time Transport Protocol (RTP), or Real-time Streaming Protocol (RTSP); Secure Shell (SSH); Extensible Messaging and Presence Protocol (XMPP); WebSocket; and/or some other communication protocol, such as those discussed herein.

[75] Referring now to system s200, the system s200 includes processor circuitry s202, which is configured to execute program code, and/or sequentially and automatically carry out a sequence of arithmetic or logical operations, and to record, store, and/or transfer digital data. The processor circuitry s202 includes circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as serial peripheral interface (SPI), inter-integrated circuit (I2C) or universal programmable serial interface circuit, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input-output (I/O), memory card controllers, interconnect (IX) controllers and/or interfaces, universal serial bus (USB) interfaces, mobile industry processor interface (MIPI) interfaces, Joint Test Access Group (JTAG) test access ports, and the like. The processor circuitry s202 may include on-chip memory circuitry or cache memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. Individual processors (or individual processor cores) of the processor circuitry s202 may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system s200. In these embodiments, the processors (or cores) of the processor circuitry s202 are configured to operate application software (e.g., logic/modules s283) to provide specific services to a user of the system s200. In some embodiments, the processor circuitry s202 may include a special-purpose processor/controller to operate according to the various embodiments herein.

[76] In various implementations, the processor(s) of processor circuitry s202 may include, for example, one or more processor cores (CPUs), graphics processing units (GPUs), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, digital signal processors (DSPs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), Application Specific Integrated Circuits (ASICs), SoCs and/or programmable SoCs, microprocessors or controllers, or any suitable combination thereof. As examples, the processor circuitry s202 may include Intel® Core™ based processor(s), MCU-class processor(s), Xeon® processor(s); Advanced Micro Devices (AMD) Zen® Core Architecture processor(s), such as Ryzen® or Epyc® processor(s), Accelerated Processing Units (APUs), MxGPUs, or the like; A, S, W, and T series processor(s) from Apple® Inc.; Snapdragon™ or Centriq™ processor(s) from Qualcomm® Technologies, Inc.; Texas Instruments, Inc.® Open Multimedia Applications Platform (OMAP)™ processor(s); Power Architecture processor(s) provided by the OpenPOWER® Foundation and/or IBM®; MIPS Warrior M-class, Warrior I-class, and Warrior P-class processor(s) provided by MIPS Technologies, Inc.; ARM Cortex-A, Cortex-R, and Cortex-M family of processor(s) as licensed from ARM Holdings, Ltd.; the ThunderX2® provided by Cavium™, Inc.; GeForce®, Tegra®, Titan X®, Tesla®, Shield®, and/or other like GPUs provided by Nvidia®; or the like. Other examples of the processor circuitry s202 may be mentioned elsewhere in the present disclosure.

[77] In some implementations, the processor(s) of processor circuitry s202 may be, or may include, one or more media processors comprising microprocessor-based SoC(s), FPGA(s), or DSP(s) specifically designed to deal with digital streaming data in real-time, which may include encoder/decoder circuitry to compress/decompress (or encode and decode) Advanced Video Coding (AVC) (also known as H.264 and MPEG-4) digital data, High Efficiency Video Coding (HEVC) (also known as H.265 and MPEG-H Part 2) digital data, and/or the like.

[78] In some implementations, the processor circuitry s202 may include one or more hardware accelerators. The hardware accelerators may be microprocessors, configurable hardware (e.g., FPGAs, programmable ASICs, programmable SoCs, DSPs, etc.), or some other suitable special-purpose processing device tailored to perform one or more specific tasks or workloads, for example, specific tasks or workloads of the subsystems of the extinguishant optimization system discussed herein, which may be more efficient than using general-purpose processor cores. In some embodiments, the specific tasks or workloads may be offloaded from one or more processors of the processor circuitry s202. In these implementations, the circuitry of processor circuitry s202 may comprise logic blocks or logic fabric and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. Additionally, the processor circuitry s202 may include memory cells (e.g., EPROM, EEPROM, flash memory, static memory (e.g., SRAM), anti-fuses, etc.) used to store logic blocks, logic fabric, data, etc. in LUTs and the like.

[79] In some implementations, the processor circuitry s202 may include hardware elements specifically tailored for machine learning functionality, such as for operating the subsystems of the CCS and/or deployment zone optimization system discussed herein. In these implementations, the processor circuitry s202 may be, or may include, an AI engine chip that can run many different kinds of AI instruction sets once loaded with the appropriate weightings and training code. Additionally or alternatively, the processor circuitry s202 may be, or may include, AI accelerator(s), which may be one or more of the aforementioned hardware accelerators designed for hardware acceleration of AI applications. As examples, these processor(s) or accelerators may be a cluster of artificial intelligence (AI) GPUs, tensor processing units (TPUs) developed by Google® Inc., Real AI Processors (RAPs™) provided by AlphaICs®, Nervana™ Neural Network Processors (NNPs) provided by Intel® Corp., Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU), NVIDIA® PX™ based GPUs, the NM500 chip provided by General Vision®, Hardware 3 provided by Tesla®, Inc., an Epiphany™ based processor provided by Adapteva®, or the like. In some embodiments, the processor circuitry s202 and/or hardware accelerator circuitry may be implemented as AI accelerating co-processor(s), such as the Hexagon 685 DSP provided by Qualcomm®, the PowerVR 2NX Neural Net Accelerator (NNA) provided by Imagination Technologies Limited®, the Neural Engine core within the Apple® A11 or A12 Bionic SoC, the Neural Processing Unit (NPU) within the HiSilicon Kirin 970 provided by Huawei®, and/or the like. In some hardware-based implementations, each of the subsystems of the CCS and/or deployment zone optimization system may be operated by the respective AI accelerating co-processor(s), AI GPUs, TPUs, or hardware accelerators (e.g., FPGAs, ASICs, DSPs, SoCs, etc.), etc., that are configured with appropriate logic blocks, bit stream(s), etc. to perform their respective functions.

[80] In some implementations, the processor(s) of processor circuitry s202 may be, or may include, one or more custom-designed silicon cores specifically designed to operate corresponding subsystems of the CCS. These cores may be designed as synthesizable cores comprising hardware description language logic (e.g., register transfer logic, Verilog, Very High Speed Integrated Circuit hardware description language (VHDL), etc.); netlist cores comprising a gate-level description of electronic components and connections and/or a process-specific very-large-scale integration (VLSI) layout; and/or analog or digital logic in transistor-layout format. In these implementations, one or more of the subsystems of the CCS may be operated, at least in part, on custom-designed silicon core(s). These "hardware-ized" subsystems may be integrated into a larger chipset but may be more efficient than using general-purpose processor cores.

[81] The system memory circuitry s204 comprises any number of memory devices arranged to provide primary storage from which the processor circuitry s202 continuously reads instructions s282 stored therein for execution. In some embodiments, the memory circuitry s204 is on-die memory or registers associated with the processor circuitry s202. As examples, the memory circuitry s204 may include volatile memory such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), etc. The memory circuitry s204 may also include non-volatile memory (NVM) such as high-speed electrically erasable memory (commonly referred to as "flash memory"), phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc. The memory circuitry s204 may also comprise persistent storage devices, which may provide temporary and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.

[82] Storage circuitry s208 is arranged to provide persistent storage of information such as data, applications, operating systems (OS), and so forth. As examples, the storage circuitry s208 may be implemented as a hard disk drive (HDD), a micro HDD, a solid-state disk drive (SSDD), flash memory cards (e.g., SD cards, microSD cards, xD picture cards, and the like), USB flash drives, on-die memory or registers associated with the processor circuitry s202, resistance change memories, phase change memories, holographic memories, or chemical memories, and the like.

[83] The storage circuitry s208 is configured to store computational logic s283 (or "modules s283") in the form of software, firmware, microcode, or hardware-level instructions to implement the techniques described herein. The computational logic s283 may be employed to store working copies and/or permanent copies of programming instructions, or data to create the programming instructions, for the operation of various components of system s200 (e.g., drivers, libraries, application programming interfaces (APIs), etc.), an OS of system s200, one or more applications, and/or for carrying out the embodiments discussed herein. The computational logic s283 may be stored or loaded into memory circuitry s204 as instructions s282, or data to create the instructions s282, which are then accessed for execution by the processor circuitry s202 to carry out the functions described herein. The processor circuitry s202 accesses the memory circuitry s204 and/or the storage circuitry s208 over the interconnect (IX) s206. The instructions s282 direct the processor circuitry s202 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted previously. The various elements may be implemented by assembler instructions supported by processor circuitry s202 or high-level languages that may be compiled into instructions s281, or data to create the instructions s281, to be executed by the processor circuitry s202. The permanent copy of the programming instructions may be placed into persistent storage devices of storage circuitry s208 in the factory or in the field through, for example, a distribution medium (not shown), through a communication interface (e.g., from a distribution server (not shown)), over-the-air (OTA), or any combination thereof.

[84] Computer program code for carrying out operations of the present disclosure (e.g., computational logic s283, instructions s282, instructions s281 discussed previously) may be written in any combination of one or more programming languages, including an object-oriented programming language such as Python, PyTorch, ArcPy, Ruby, Scala, Smalltalk, Java™, C++, C#, or the like; a procedural programming language, such as the "C" programming language, the Go (or "Golang") programming language, or the like; a scripting language such as JavaScript, Server-Side JavaScript (SSJS), jQuery, PHP, Perl, Python, Ruby on Rails, Accelerated Mobile Pages Script (AMPscript), Mustache Template Language, Handlebars Template Language, Guide Template Language (GTL), Java and/or Java Server Pages (JSP), Node.js, ASP.NET, JAMscript, and/or the like; a markup language such as Hypertext Markup Language (HTML), Extensible Markup Language (XML), JavaScript Object Notation (JSON), Apex®, Cascading Stylesheets (CSS), JavaServer Pages (JSP), MessagePack™, Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), or the like; or some other suitable programming languages including proprietary programming languages and/or development tools, or any other language tools. The computer program code for carrying out operations of the present disclosure may also be written in any combination of the programming languages discussed herein. The program code may execute entirely on the system s200, partly on the system s200 as a stand-alone software package, partly on the system s200 and partly on a remote system s255, or entirely on the remote system s255 or server. In the latter scenario, the remote system s255 may be connected to the system s200 through any type of network, including a LAN or WAN, or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider).

[85] In one example, such as when the system s200 is a client device or some other computing system operated by a fire manager, the computational logic s283 includes the geographic mapping GUI and/or client application discussed previously.

[86] In another example, the computational logic s283 includes a deployment zone optimization engine (or "optimizer") that determines optimal fire extinguishant deployment/drop zones according to the various embodiments discussed herein. The deployment zone optimizer does this using one or multiple mathematical optimization models to determine the optimal drop zones based on various factors such as those discussed herein. In some implementations, this engine/optimizer may be developed using one or more commercially available software packages or platforms for linear programming, mixed-integer programming, and quadratic programming. These software packages and platforms are often referred to as "mathematical programming solvers" (or simply "solvers"), "optimization engines," or "optimizers." Examples of software packages and/or platforms that can be used for these purposes include the IBM® CPLEX Optimizer®, the ODH-CPLEX engine provided by AIMMS B.V., the Linear, Interactive, and Discrete Optimizer (LINDO) software package provided by Lindo Systems, Inc., and the RESTful Analytic Solver Object Notation (RASON®) service and Analytic Solver® provided by Frontline Systems Inc. The deployment zone optimizer then provides the optimal drop zones to a user (e.g., fire manager, IRINs, incident commander, etc.) in an information object that can be used to generate and render a mapping GUI, which allows fire managers to coordinate fire suppression actions based on desired goals and existing resources. Additionally or alternatively, the deployment zone optimizer provides the optimal fire extinguishant drop zones as respective instructions/commands to individual firefighting UAVs/drones s120c, which then perform firefighting actions accordingly (e.g., dropping extinguishants while traveling along a flight path indicated by the instructions/commands).
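
By way of illustration only, the following sketch shows how such a drop-zone selection could be posed as a small mixed-integer program. It uses the open-source HiGHS solver bundled with SciPy (version 1.9 or later) as a stand-in for the commercial solvers named above; the per-zone priority scores, water requirements, and budget are hypothetical values, not parameters prescribed by this disclosure.

    # Minimal mixed-integer sketch of drop-zone selection (hypothetical data).
    import numpy as np
    from scipy.optimize import linprog

    scores = np.array([9.0, 7.5, 4.0, 6.0])  # hypothetical priority score per candidate zone
    water = np.array([3.0, 2.0, 1.0, 2.5])   # hypothetical water required per zone (kL)
    budget = 5.0                             # total extinguishant available (kL)

    # Maximize total score (linprog minimizes, so negate the objective)
    # subject to the water budget, with each decision variable binary.
    res = linprog(
        c=-scores,
        A_ub=water[np.newaxis, :],           # one row: total water used by selected zones
        b_ub=[budget],
        bounds=[(0, 1)] * len(scores),
        integrality=np.ones(len(scores)),    # 0/1 variables (HiGHS MIP backend)
        method="highs",
    )
    selected = np.flatnonzero(res.x > 0.5)
    print("selected zones:", selected, "total score:", -res.fun)

In this toy instance the solver picks the zones whose combined water demand fits the budget while maximizing total priority; a production optimizer would encode the terrain, wind, and resource factors discussed herein as additional variables and constraints.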

[87] In another example, the computational logic s283 includes a mosaic engine that stitches together the many images generated from various sensor data s107 into a single mosaic image (e.g., as a GeoTIFF file, KML SuperOverlay file, or the like). A mosaic is a combination or merge of two or more images into a single image. In some implementations, the mosaic engine creates a single raster dataset from multiple individual raster datasets by mosaicking the individual raster datasets together. The mosaic engine may alter or adjust the mosaic based on different user interactions with the displayed/rendered image, such as when the user performs or issues a zoom-in or zoom-out command. The mosaic engine enables quality control right after a data capture/imaging flight.
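
By way of illustration only, the core mosaicking step can be sketched as placing tiles with known pixel offsets into a common grid and averaging any overlap. The offsets and tile contents below are hypothetical; a production engine would derive them from each image's georeferencing metadata (e.g., when writing GeoTIFF output).

    # Minimal mosaicking sketch: place two rasters with known (row, col)
    # offsets into one output grid, averaging where they overlap.
    import numpy as np

    def mosaic(tiles, offsets, out_shape):
        acc = np.zeros(out_shape, dtype=float)   # running sum of pixel values
        cnt = np.zeros(out_shape, dtype=float)   # number of tiles covering each cell
        for tile, (r, c) in zip(tiles, offsets):
            h, w = tile.shape
            acc[r:r + h, c:c + w] += tile
            cnt[r:r + h, c:c + w] += 1
        with np.errstate(invalid="ignore"):
            return np.where(cnt > 0, acc / cnt, np.nan)  # NaN marks uncovered cells

    a = np.full((4, 4), 10.0)                    # hypothetical tile A
    b = np.full((4, 4), 20.0)                    # hypothetical tile B
    print(mosaic([a, b], [(0, 0), (2, 2)], (6, 6)))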

[88] Additionally or alternatively, the computational logic s283 may include a geoprocessing engine (or "geoprocessor") that performs various operations on a dataset, such as the collected and/or fused sensor data s107, and creates a resulting output dataset. The geoprocessor performs various geoprocessing tasks such as polygon and/or geographic feature overlay, feature selection and analysis, topology processing, data conversion, complex regression analysis, image classification, spatial analysis, and/or the like. In one example, the geoprocessor operates the aforementioned optimizers to determine the optimal deployment zones as discussed herein. In another example, the geoprocessing engine obtains the optimal deployment zones and overlays the optimal deployment zones on the geographic/terrain map. The geoprocessor may be a client-side application and/or a server-side application (e.g., operated by remote system s255) accessible by a client-side application (e.g., the aforementioned mapping GUI).
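
As a purely illustrative sketch of the polygon overlay task, the following uses the open-source Shapely library (an assumption for illustration, not a requirement of this disclosure) to intersect a hypothetical candidate drop zone with a hypothetical burn-area polygon:

    # Minimal overlay sketch: how much of a candidate drop zone is burning?
    from shapely.geometry import Polygon

    drop_zone = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # hypothetical coordinates
    burn_area = Polygon([(2, 2), (8, 2), (8, 8), (2, 8)])  # hypothetical coordinates

    overlap = drop_zone.intersection(burn_area)            # polygon overlay step
    print("burning fraction of zone:", overlap.area / drop_zone.area)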

[89] In another example, the computational logic s283 may include a Normalized Difference Vegetation Index (NDVI) engine that generates mapping data and/or graphical indicators indicating whether the imaged area contains green vegetation or relative biomass based on the differential reflection in the red and infrared (IR) bands of the sensor data s107. In addition to identifying biomass and/or wildland fire fuel vegetation, the NDVI engine can also be configured to indicate bodies of water using a suitable thresholding technique. Polygons representing bodies of water can be generated and applied to the thermal map and/or other geographic/terrain imagery in real time or near real time. This can be useful for finding and marking water sources (e.g., using a suitable GUI or the like) in areas without accurate maps, such as rural areas.
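
The NDVI itself is the normalized difference NDVI = (NIR − Red) / (NIR + Red), ranging from −1 to 1. A minimal sketch over hypothetical reflectance rasters follows; the vegetation and water cutoffs are illustrative assumptions that a real deployment would calibrate per sensor and scene.

    # Minimal NDVI sketch over hypothetical red and near-IR reflectance rasters.
    import numpy as np

    red = np.array([[0.10, 0.30], [0.05, 0.40]])  # hypothetical red band
    nir = np.array([[0.60, 0.35], [0.02, 0.45]])  # hypothetical near-IR band

    ndvi = (nir - red) / (nir + red + 1e-9)       # NDVI in [-1, 1]
    vegetation = ndvi > 0.3                       # illustrative "green vegetation" cutoff
    water = ndvi < 0.0                            # water reflects more red than near-IR
    print(np.round(ndvi, 2), vegetation, water, sep="\n")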

[90] In another example, the computational logic s283 may include a data fusion (or data integration) engine configured to generate composite information from various sensor data s107 collected from various sensors and/or other sources. The data fusion engine may employ direct fusion technique(s) or indirect fusion technique(s). Direct fusion combines data acquired directly from multiple sensors, which may be the same or similar (e.g., all sensors perform the same type of measurement) or different (e.g., different sensor types, historical data, etc.). Indirect fusion utilizes historical data and/or known properties of the environment and/or human inputs to produce a refined data set. Additionally, the data fusion engine may include one or more additional/alternative fusion algorithms such as a smoothing algorithm (e.g., estimating a value using multiple measurements in real-time or not in real-time), a filtering algorithm (e.g., estimating an entity's state with current and past measurements in real-time), and/or a prediction state estimation algorithm (e.g., analyzing historical data (e.g., geolocation, speed, direction, and signal measurements) in real-time to predict a state (e.g., a future signal strength/quality at a particular geolocation coordinate)). As examples, the data fusion algorithm may be or include a structure-based algorithm (e.g., tree-based (e.g., Minimum Spanning Tree (MST)), cluster-based, grid-based, and/or centralized), a structure-free data fusion algorithm, a Kalman filter algorithm and/or Extended Kalman Filtering, a fuzzy-based data fusion algorithm, an Ant Colony Optimization (ACO) algorithm, a fault detection algorithm, a Dempster-Shafer (D-S) argumentation-based algorithm, a Gaussian Mixture Model algorithm, a triangulation-based fusion algorithm, the Pau multi-sensor data fusion framework, and/or any other like data fusion algorithm.
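
As one concrete illustration of a filtering algorithm from the list above, a minimal one-dimensional Kalman filter can fuse a stream of noisy scalar readings (e.g., temperature measurements) into a smoothed state estimate; the process and measurement noise values below are hypothetical:

    # Minimal 1-D Kalman filter sketch over a hypothetical sensor stream.
    def kalman_1d(measurements, q=0.01, r=1.0):
        x, p = measurements[0], 1.0       # initial state estimate and variance
        estimates = [x]
        for z in measurements[1:]:
            p += q                        # predict: variance grows by process noise q
            k = p / (p + r)               # update: Kalman gain from measurement noise r
            x += k * (z - x)              # blend prediction with the new measurement
            p *= (1 - k)
            estimates.append(x)
        return estimates

    readings = [24.8, 25.3, 24.6, 30.1, 25.0]   # hypothetical noisy readings
    print([round(e, 2) for e in kalman_1d(readings)])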

[91] Additionally or alternatively, any of the aforementioned examples may utilize machine learning (ML) techniques. ML involves using algorithms to perform specific task(s) without using explicit instructions to perform the specific task(s), instead relying on learned patterns and/or inferences. The ML technique may be any of the supervised, unsupervised, and/or reinforcement learning algorithms/models discussed herein. In one example software-based implementation, the subsystems of the CCS and/or deployment zone optimization system may be developed using a suitable programming language, development tools/environments, etc., which are executed by the processor circuitry s202.

[92] The operating system (OS) of system s200 may be a general purpose OS or an OS specifically written for and tailored to the computing system s200. For example, when the system s200 is a server system (e.g., the same or similar to remote system s255), a desktop computer, or a laptop computer, the OS may be Unix or a Unix-like OS such as Linux (e.g., Red Hat® Enterprise Linux), Windows 10™ provided by Microsoft Corp.®, macOS provided by Apple Inc.®, or the like. In another example, where the system s200 is a mobile device, the OS may be a mobile OS, such as Android® provided by Google Inc.®, iOS® provided by Apple Inc.®, Windows 10 Mobile® provided by Microsoft Corp.®, KaiOS provided by KaiOS Technologies Inc., or the like. In another example, where the system s200 is (or is implemented in) a UAV or drone (e.g., aerial asset s120c), the OS may be the Robot Operating System (ROS) provided by Open Robotics, FlytOS™ provided by FlytBase, Inc., the LynxOS real-time OS (RTOS) provided by Lynx Software Technologies, Inc., and/or the like.

[93] The OS manages computer hardware and software resources, and provides common services for various applications. The OS may include one or more drivers or APIs that operate to control particular devices that are embedded in the system s200, attached to the system s200, or otherwise communicatively coupled with the system s200. The drivers may include individual drivers allowing other components of the system s200 to interact or control various I/O devices that may be present within, or connected to, the system s200. For example, the drivers may include a display driver to control and allow access to a display device, a touchscreen driver to control and allow access to a touchscreen interface of the system s200, sensor drivers to obtain sensor readings of sensor circuitry s221 and control and allow access to sensor circuitry s221, actuator drivers to obtain actuator positions of the actuators s222 and/or control and allow access to the actuators s222, ECU drivers to obtain control system information from one or more of the ECUs s224, a camera driver to control and allow access to an embedded image capture device, and audio drivers to control and allow access to one or more audio devices. The OS may also include one or more libraries, drivers, APIs, firmware, middleware, software glue, etc., which provide program code and/or software components for one or more applications to obtain and use the data from other applications operated by the system s200, such as the various subsystems of the CCS discussed previously.

[94] The components of system s200 communicate with one another over the interconnect (IX) s206. The IX s206 may include any number of IX technologies such as industry standard architecture (ISA), extended ISA (EISA), inter-integrated circuit (I²C), a serial peripheral interface (SPI), point-to-point interfaces, power management bus (PMBus), peripheral component interconnect (PCI), PCI express (PCIe), Intel® Ultra Path Interface (UPI), Intel® Accelerator Link (IAL), Common Application Programming Interface (CAPI), Intel® QuickPath Interconnect (QPI), Intel® Omni-Path Architecture (OPA) IX, RapidIO™ system interconnects, Ethernet, Cache Coherent Interconnect for Accelerators (CCIA), Gen-Z Consortium IXs, Open Coherent Accelerator Processor Interface (OpenCAPI), and/or any number of other IX technologies. The IX s206 may be a proprietary bus, for example, used in a SoC based system.

[95] The communication circuitry s209 is a hardware element, or collection of hardware elements, used to communicate over one or more networks (e.g., network s250) and/or with other devices. The communication circuitry s209 includes modem s210 and transceiver circuitry ("TRx") s212. The modem s210 includes one or more processing devices (e.g., baseband processors) to carry out various protocol and radio control functions. Modem s210 may interface with application circuitry of system s200 (e.g., a combination of processor circuitry s202 and computer-readable media such as memory circuitry s204) for generation and processing of baseband signals and for controlling operations of the TRx s212. The modem s210 may handle various radio control functions that enable communication with one or more radio networks s250 via the TRx s212 according to one or more wireless communication protocols. The modem s210 may include circuitry such as, but not limited to, one or more single-core or multi-core processors (e.g., one or more baseband processors) or control logic to process baseband signals received from a receive signal path of the TRx s212, and to generate baseband signals to be provided to the TRx s212 via a transmit signal path. In various embodiments, the modem s210 may implement a real-time OS (RTOS) to manage resources of the modem s210, schedule tasks, etc.

[96] The communication circuitry s209 also includes TRx s212 to enable communication with wireless networks s250 using modulated electromagnetic radiation through a non-solid medium. TRx s212 includes a receive signal path, which comprises circuitry to convert analog RF signals (e.g., an existing or received modulated waveform) into digital baseband signals to be provided to the modem s210. The TRx s212 also includes a transmit signal path, which comprises circuitry to convert digital baseband signals provided by the modem s210 into analog RF signals (e.g., modulated waveforms) that will be amplified and transmitted via an antenna array including one or more antenna elements (not shown). The antenna array may be a plurality of microstrip antennas or printed antennas that are fabricated on the surface of one or more printed circuit boards. The antenna array may be formed as a patch of metal foil (e.g., a patch antenna) in a variety of shapes, and may be coupled with the TRx s212 using metal transmission lines or the like.

[97] The TRx s212 may include one or more radios that are compatible with, and/or may operate according to, any one or more of the following radio communication technologies and/or standards, including but not limited to: a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology, for example Universal Mobile Telecommunications System (UMTS), Freedom of Mobile Multimedia Access (FOMA), 3GPP Long Term Evolution (LTE), 3GPP Long Term Evolution Advanced (LTE Advanced), Code Division Multiple Access 2000 (CDMA2000), Cellular Digital Packet Data (CDPD), Mobitex, Third Generation (3G), Circuit Switched Data (CSD), High-Speed Circuit-Switched Data (HSCSD), Universal Mobile Telecommunications System (Third Generation) (UMTS (3G)), Wideband Code Division Multiple Access (Universal Mobile Telecommunications System) (W-CDMA (UMTS)), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), High Speed Packet Access Plus (HSPA+), Universal Mobile Telecommunications System-Time Division Duplex (UMTS-TDD), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), 3rd Generation Partnership Project Release 8 (Pre-4th Generation) (3GPP Rel. 8 (Pre-4G)), 3GPP Rel. 9 (3rd Generation Partnership Project Release 9), 3GPP Rel. 10 (3rd Generation Partnership Project Release 10), 3GPP Rel. 11 (3rd Generation Partnership Project Release 11), 3GPP Rel. 12 (3rd Generation Partnership Project Release 12), 3GPP Rel. 13 (3rd Generation Partnership Project Release 13), 3GPP Rel. 14 (3rd Generation Partnership Project Release 14), 3GPP Rel. 15 (3rd Generation Partnership Project Release 15), 3GPP Rel. 16 (3rd Generation Partnership Project Release 16), 3GPP Rel. 17 (3rd Generation Partnership Project Release 17) and subsequent Releases (such as Rel. 18, Rel. 19, etc.), 3GPP 5G, 3GPP LTE Extra, LTE-Advanced Pro, LTE Licensed-Assisted Access (LAA), MulteFire, UMTS Terrestrial Radio Access (UTRA), Evolved UMTS Terrestrial Radio Access (E-UTRA), Long Term Evolution Advanced (4th Generation) (LTE Advanced (4G)), cdmaOne (2G), Code Division Multiple Access 2000 (Third Generation) (CDMA2000 (3G)), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (1st Generation) (AMPS (1G)), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Digital AMPS (2nd Generation) (D-AMPS (2G)), Push-to-Talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile Telephony System D), Public Automated Land Mobile (Autotel/PALM), ARP (Finnish for Autoradiopuhelin, "car radio phone"), NMT (Nordic Mobile Telephony), the high-capacity version of NTT (Nippon Telegraph and Telephone) (Hicap), Cellular Digital Packet Data (CDPD), Mobitex, DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Circuit Switched Data (CSD), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA) (also referred to as the 3GPP Generic Access Network, or GAN standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4-based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, ISA100.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area Network (LPWAN), LoRaWAN™ (Long Range Wide Area Network), Digital Enhanced Cordless Telecommunications (DECT) (including New Generation DECT (NG-DECT), DECT Ultra Low Energy (DECT ULE), etc.), Sigfox, the Wireless Gigabit Alliance (WiGig) standard, mmWave standards in general (wireless systems operating at 10-300 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), technologies operating above 300 GHz and in THz bands, (3GPP/LTE-based or IEEE 802.11p and other) Vehicle-to-Vehicle (V2V), Vehicle-to-X (V2X), Vehicle-to-Infrastructure (V2I), and Infrastructure-to-Vehicle (I2V) communication technologies, 3GPP cellular V2X, Dedicated Short Range Communications (DSRC) communication systems such as Intelligent Transport Systems and others, and the European ITS-G5 system (i.e., the European flavor of IEEE 802.11p-based DSRC, including ITS-G5A (i.e., operation of ITS-G5 in European ITS frequency bands dedicated to ITS for safety-related applications in the frequency range 5.875 GHz to 5.905 GHz), ITS-G5B (i.e., operation in European ITS frequency bands dedicated to ITS non-safety applications in the frequency range 5.855 GHz to 5.875 GHz), and ITS-G5C (i.e., operation of ITS applications in the frequency range 5.470 GHz to 5.725 GHz)). In addition to the standards listed above, any number of satellite uplink technologies may be used for the TRx s212 including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union) or the ETSI (European Telecommunications Standards Institute), among others, both existing and not yet formulated.
[98] Network interface circuitry/controller (NIC) s216 may be included to provide wired communication to the network s250 or to other devices using a standard network interface protocol. The standard network interface protocol may include Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, or may be based on other types of network protocols, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway, PROFIBUS, or PROFINET, among many others. Network connectivity may be provided to/from the system s200 via NIC s216 using a physical connection, which may be electrical (e.g., a "copper interconnect") or optical. The physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.). The NIC s216 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols. In some implementations, the NIC s216 may include multiple controllers to provide connectivity to other networks using the same or different protocols. For example, the system s200 may include a first NIC s216 providing communications to the cloud over Ethernet and a second NIC s216 providing communications to other devices over another type of network. In some implementations, the NIC s216 may be a high-speed serial interface (HSSI) NIC to connect the system s200 to a routing or switching device.

[99] Network s250 comprises computers, network connections among various computers (e.g., between the system s200 and remote system s255), and software routines to enable communication between the computers over respective network connections. In this regard, the network s250 comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer readable media. Examples of such network elements may include wireless access points (WAPs), a home/business server (with or without radio frequency (RF) communications circuitry), a router, a switch, a hub, a radio beacon, base stations, picocell or small cell base stations, and/or any other like network device. Connection to the network s250 may be via a wired or a wireless connection using the various communication protocols discussed herein. As used herein, a wired or wireless communication protocol may refer to a set of standardized rules or instructions implemented by a communication device/system to communicate with other devices, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and the like. More than one network may be involved in a communication session between the illustrated devices. Connection to the network s250 may require that the computers execute software routines which enable, for example, the seven layers of the OSI model of computer networking or an equivalent in a wireless (or cellular) phone network.

[100] The network s250 may represent the Internet, one or more cellular networks, a local area network (LAN) or a wide area network (WAN) including proprietary and/or enterprise networks, a Transmission Control Protocol (TCP)/Internet Protocol (IP)-based network, or combinations thereof. In such embodiments, the network s250 may be associated with a network operator who owns or controls equipment and other elements necessary to provide network-related services, such as one or more base stations or access points, one or more servers for routing digital data or telephone calls (e.g., a core network or backbone network), etc. Other networks can be used instead of or in addition to the Internet, such as an intranet, an extranet, a virtual private network (VPN), an enterprise network, a non-TCP/IP based network, any LAN or WAN, or the like.

[101] The remote system s255 (also referred to as a "service provider," "application server(s)," or the like) comprises one or more physical and/or virtualized computing systems owned and/or operated by a company, enterprise, and/or individual that hosts, serves, and/or otherwise provides information object(s) to one or more users (e.g., system s200). The physical and/or virtualized systems include one or more logically or physically connected servers and/or data storage devices distributed locally or across one or more geographic locations. Generally, the remote system s255 uses IP/network resources to provide information objects such as electronic documents, webpages, forms, applications (e.g., web apps), data, services, web services, media, and/or content to different user/client devices. As examples, the service provider s255/s155 may provide mapping and/or navigation services; cloud computing services; search engine services; social networking, microblogging, and/or message board services; content (media) streaming services; e-commerce services; blockchain services; communication services such as Voice-over-Internet Protocol (VoIP) sessions, text messaging, group communication sessions, and the like; immersive gaming experiences; and/or other like services. According to various embodiments, the remote system s255 may provide thermal imaging services such as those discussed herein. In these embodiments, the remote system s255 may be or operate a TMS server, an ArcGIS Server, and/or some other applications and/or services for providing geographic mapping services and/or spatial analysis and the like. The user/client devices that utilize services provided by remote system s255 may be referred to as "subscribers" or the like. Although Figure s2 shows only a single remote system s255, the remote system s255 may represent multiple remote systems s255, each of which may have their own subscribing users.

[102] The external interface s218 (also referred to as input/output (I/O) interface s218) is configurable to connect or couple the system s200 with external devices or subsystems. The external interface s218 may include any suitable interface controllers and connectors, such as an external expansion bus (e.g., Universal Serial Bus (USB), FireWire, PCIe, Thunderbolt, etc.), used to couple the system s200 with external components/devices, such as sensor circuitry s221, actuators s222, electronic control units (ECUs) s224, positioning system s245, I/O device(s) s256, and/or other devices or subsystems not shown by Figure s2. In some cases, the I/O interface circuitry s218 may be used to transfer data between the system s200 and another computer device (e.g., a laptop, a smartphone, or some other user device) via a wired connection. I/O interface circuitry s218 may include any suitable interface controllers and connectors to interconnect one or more of the processor circuitry s202, memory circuitry s204, storage circuitry s208, communication circuitry s209, and the other components of system s200. The interface controllers may include, but are not limited to, memory controllers, storage controllers (e.g., redundant array of independent disks (RAID) controllers, baseboard management controllers (BMCs), input/output controllers, host controllers, etc.). The connectors may include, for example, busses (e.g., IX s206), ports, slots, jumpers, interconnect modules, receptacles, modular connectors, etc. The I/O interface circuitry s218 may also include peripheral component interfaces including, but not limited to, non-volatile memory ports, USB ports, audio jacks, power supply interfaces, on-board diagnostic (OBD) ports, etc.

[103] The sensor circuitry s221 may include devices, modules, or subsystems whose purpose is to detect events or changes in its environment and send the information (sensor data) about the detected events to some other device, module, subsystem, etc. In some implementations, at least some of the sensors s221 are configured to perform remote sensing and/or in situ sensing. Remote sensing is a technology used to acquire information about an object by detecting energy reflected or emitted by that object when the distance between the object and the sensor is much greater than any linear dimension of the sensor. In situ sensing is a technology used to detect a phenomenon of interest at or near a particular sensor, and/or to acquire information about an object when the distance between the object and the sensor is comparable to or smaller than any linear dimension of the sensor. In situ sensing may also be referred to as "proximate sensing," "close-range sensing," and/or the like.

[104] Examples of such sensors s221 include, inter alia, inertia measurement units (IMU) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., visible light cameras, thermographic camera and/or thermal imaging camera (TIC) systems, forward-looking infrared (FLIR) camera systems, radiometric thermal camera systems, active IR camera systems, ultraviolet (UV) camera systems, etc.); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., IR radiation detectors and the like); depth sensors; ambient light sensors; ultrasonic transceivers; microphones; etc. Some of the sensor circuitry s221 may be sensors used for various aerial asset and/or vehicle control systems such as, for example, exhaust sensors including exhaust oxygen sensors to obtain oxygen data and manifold absolute pressure (MAP) sensors to obtain manifold pressure data; mass air flow (MAF) sensors to obtain intake air flow data; intake air temperature (IAT) sensors to obtain IAT data; ambient air temperature (AAT) sensors to obtain AAT data; ambient air pressure (AAP) sensors to obtain AAP data; catalytic converter sensors including catalytic converter temperature (CCT) sensors to obtain CCT data and catalytic converter oxygen (CCO) sensors to obtain CCO data; vehicle speed sensors (VSS) to obtain VSS data; exhaust gas recirculation (EGR) sensors including EGR pressure sensors to obtain EGR pressure data and EGR position sensors to obtain position/orientation data of an EGR valve pintle; a Throttle Position Sensor (TPS) to obtain throttle position/orientation/angle data; crank/cam position sensors to obtain crank/cam/piston position/orientation/angle data; coolant temperature sensors; pedal position sensors; accelerometers; altimeters; magnetometers; level sensors; flow/fluid sensors; barometric pressure sensors; vibration sensors (e.g., shock and vibration sensors, motion vibration sensors, main and tail rotor vibration monitoring and balancing (RTB) sensor(s), gearbox and drive shaft vibration monitoring sensor(s), bearing vibration monitoring sensor(s), oil cooler shaft vibration monitoring sensor(s), engine vibration sensor(s) to monitor engine vibrations during steady-state and transient phases, etc.); force and/or load sensors; remote charge converters (RCC); rotor speed and position sensor(s); fiber optic gyro (FOG) inertial sensors; Attitude & Heading Reference Units (AHRU); fiber Bragg grating (FBG) sensors and interrogators; tachometers; engine temperature gauges; pressure gauges; transformer sensors; airspeed-measurement meters; vertical speed indicators; and/or the like. The sensor circuitry s221 may also include other like sensors/systems.

[105] The positioning circuitry s245 includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include United States’ Global Positioning System (GPS), Russia’s Global Navigation System (GLONASS), the European Union’s Galileo system, China’s BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan’s Quasi-Zenith Satellite System (QZSS), France’s Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry s245 comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry s245 may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry s245 may also be part of, or interact with, the communication circuitry s209 to communicate with the nodes and components of the positioning network. The positioning circuitry s245 may also provide position data and/or time data to the application circuitry, which may use the data to synchronize operations with various infrastructure (e.g., radio base stations), for turn-by-turn navigation, or the like.

[106] In some embodiments, sensor circuitry s221 may be used to corroborate and/or refine information provided by positioning circuitry s245. In a first example, input from a camera s221 may be used by the processor circuitry s202 to measure the relative movement of an object/image and to calculate the vehicle movement speed and turn speed to calibrate or improve the precision of a position sensor s221/s245. In a second example, input from a barometer may be used in conjunction with the positioning circuitry s245 to more accurately determine the relative altitude of the aerial asset s120, and to determine the position of the aerial asset s120 relative to a mapped coordinate system. In a third example, images or video captured by a camera s221 or image capture device s221 may be used in conjunction with the positioning circuitry s245 to more accurately determine the relative distance between the aerial asset s120 and a particular feature or landmark associated with the mapped coordinate system, or a relative distance between the aerial asset s120 and another aerial asset s120. In a fourth example, input from an inertial sensor s221 may be used by the processor circuitry s202 to calculate and/or determine speed, turn speed, and/or position of an aerial asset s120. Based on information received from user input and/or a user recognition device/system, processor circuitry s202 may be configurable to initialize, customize, adjust, calibrate, or otherwise modify the functionality of system s200 to accommodate a particular user.
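
As a purely illustrative note on the second example, relative altitude can be derived from static pressure using the standard international barometric formula. In the sketch below the sea-level reference pressure is a hypothetical local setting that would normally come from a nearby station or air traffic services:

    # Minimal barometric-altitude sketch (standard atmosphere constants).
    def pressure_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
        # International barometric formula; sea_level_hpa is a hypothetical
        # local reference, not a value prescribed by this disclosure.
        return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

    print(round(pressure_altitude_m(900.0), 1), "m above the reference level")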

[107] Individual ECUs s224 may be embedded systems or other like computer devices that control a corresponding subsystem of the system s200 and/or some other device (e.g., aerial asset sl20 or the like). In embodiments, individual ECUs s224 may each have the same or similar components as the system s200, such as a microcontroller or other like processor device, memory device(s), communications interfaces, and the like.

[108] One or more ECUs s224 may be, or include, Data Acquisition Units (DAUs), health and usage monitoring systems (HUMS), an Engine Interface Unit (EIU), an Over Torque Warning Unit (OTWU), a Bus Adapter Unit (BAU), tanker or bucket controllers (e.g., microprocessor-controlled tank doors), a Drivetrain Control Unit (DCU), an Engine Control Unit (ECU), an Engine Control Module (ECM), EEMS, a Powertrain Control Module (PCM), a Transmission Control Module (TCM), a Brake Control Module (BCM) including an anti-lock brake system (ABS) module and/or an electronic stability control (ESC) system, a Central Control Module (CCM), a Central Timing Module (CTM), a General Electronic Module (GEM), a Body Control Module (BCM), a Suspension Control Module (SCM), a Door Control Unit (DCU), a Speed Control Unit (SCU), a Human-Machine Interface (HMI) unit, a Telematic Control Unit (TCU), a Battery Management System (which may be the same as or similar to battery monitor s226), and/or any other entity or node in a vehicle system. In some embodiments, one or more of the ECUs s224 and/or the system s200 may be part of or included in a Portable Emissions Measurement System (PEMS).

[109] The actuators s222 are devices that allow the system s200 to change a state, position, or orientation, move, and/or control a mechanism or system. The actuators s222 comprise electrical and/or mechanical devices for moving or controlling a mechanism or system, and convert energy (e.g., electric current or moving air and/or liquid) into some kind of motion. The actuators s222 may include one or more electronic (or electrochemical) devices, such as piezoelectric bimorphs, solid state actuators, solid state relays (SSRs), shape-memory alloy-based actuators, electroactive polymer-based actuators, relay driver integrated circuits (ICs), and/or the like. The actuators s222 may include one or more electromechanical devices such as pneumatic actuators, hydraulic actuators, electromechanical switches including electromechanical relays (EMRs), motors (e.g., linear motors, DC motors, brushless motors, stepper motors, servomechanisms, ultrasonic piezo motors with optional position feedback, screw-type motors, etc.), mechanical gears, magnetic switches, valve actuators, fuel injectors, ignition coils, wheels, thrusters, propellers, claws, clamps, hooks, an audible sound generator, and/or other like electromechanical components. The system s200 may be configurable to operate one or more actuators s222 based on one or more captured events and/or instructions or control signals received from various ECUs s224 or the system s200. In embodiments, the system s200 may transmit instructions to various actuators s222 (or controllers that control one or more actuators s222) to change the state of the actuators s222 or otherwise control operation of the actuators s222.

[110] The input/output (I/O) devices s256 may be present within, or connected to, the system s200. The I/O devices s256 include input device circuitry and output device circuitry, including one or more user interfaces designed to enable user interaction with the system s200 and/or peripheral component interfaces designed to enable peripheral component interaction with the system s200. The input device circuitry includes any physical or virtual means for accepting an input including, inter alia, one or more physical or virtual buttons (e.g., a reset button), a physical keyboard, keypad, mouse, touchpad, touchscreen, microphones, scanner, headset, and/or the like. The output device circuitry is used to show or convey information, such as sensor readings, actuator position(s), or other like information. Data and/or graphics may be displayed on one or more user interface components of the output device circuitry. The output device circuitry may include any number and/or combinations of audio or visual displays, including, inter alia, one or more simple visual outputs/indicators (e.g., binary status indicators such as light emitting diodes (LEDs), and multi-character visual outputs) or more complex outputs such as display devices or touchscreens (e.g., Liquid Crystal Displays (LCDs), LED displays, quantum dot displays, projectors, etc.), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the system s200. The output device circuitry may also include speakers or other audio emitting devices, printer(s), and/or the like. In some embodiments, the sensor circuitry s221 may be used as the input device circuitry (e.g., an image capture device, motion capture device, or the like) and one or more actuators s222 may be used as the output device circuitry (e.g., an actuator to provide haptic feedback or the like). In another example, near-field communication (NFC) circuitry comprising an NFC controller coupled with an antenna element and a processing device may be included to read electronic tags and/or connect with another NFC-enabled device. Peripheral component interfaces may include, but are not limited to, a non-volatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc.

[111] A battery s224 may be coupled to the system s200 to power the system s200, which may be used in embodiments where the system s200 is not in a fixed location, such as when the system s200 is a mobile or laptop client system. The battery s224 may be a lithium ion battery, a lead-acid automotive battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, a lithium polymer battery, and/or the like. In embodiments where the system s200 is mounted in a fixed location, such as when the system is implemented as a server computer system, the system s200 may have a power supply coupled to an electrical grid. In these embodiments, the system s200 may include power tee circuitry to provide for electrical power drawn from a network cable to provide both power supply and data connectivity to the system s200 using a single cable.

[112] Power management integrated circuitry (PMIC) s226 may be included in the system s200 to track the state of charge (SoCh) of the battery s224, and to control charging of the system s200. The PMIC s226 may be used to monitor other parameters of the battery s224 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery s224. The PMIC s226 may include voltage regulators, surge protectors, and power alarm detection circuitry. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The PMIC s226 may communicate the information on the battery s224 to the processor circuitry s202 over the IX s206. The PMIC s226 may also include an analog-to-digital (ADC) converter that allows the processor circuitry s202 to directly monitor the voltage of the battery s224 or the current flow from the battery s224. The battery parameters may be used to determine actions that the system s200 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.

[113] A power block s228, or other power supply coupled to an electrical grid, may be coupled with the PMIC s226 to charge the battery s224. In some examples, the power block s228 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the system s200. In these implementations, a wireless battery charging circuit may be included in the PMIC s226. The specific charging circuits chosen depend on the size of the battery s224 and the current required.

[114] The system s200 may include any combinations of the components shown by Figure s2; however, some of the components shown may be omitted, additional components may be present, and different arrangements of the components shown may be used in other implementations. In one example where the system s200 is or is part of a server computer system, the battery s224, communication circuitry s209, the sensors s221, actuators s222, and/or positioning circuitry s245, and possibly some or all of the I/O devices s256, may be omitted.

4. GEOGRAPHIC MAPS, THERMAL MAPS, AND DROP ZONE OPTIMIZATION ASPECTS

[115] As discussed previously, example maps may be generated based on collected terrain data and/or sensor data s107 (e.g., LiDAR measurements, thermal/IR measurements, and/or other sensor data s107 collected by the IAA system s120i, IoT devices s140, and/or other like sensors). In implementations where sensor data s107 are collected from multiple sources, a suitable sensor fusion algorithm may be used to combine the sensor data for geographic map and/or thermal map generation.

[116] The term "terrain data" or "terrain dataset" refers to any information that can be used to describe terrain of a geographic area and/or used to generate terrain data models and/or geographic/mapping graphics. In one example, a terrain dataset is a multi-resolution, TIN-based surface built from measurements stored as features in a GDB. A TIN is a vector data structure that partitions geographic space into contiguous, non-overlapping triangles. The vertices of each triangle are sample data points with coordinate values (e.g., x-, y-, and z-values). The vertices are connected with a series of edges to form a network of triangles. There are different methods of interpolation to form these triangles, such as Delaunay triangulation or distance ordering. In another example, a terrain dataset comprises one or more raster datasets. Rasters may be generated from digital aerial photographs, imagery from aerial assets s120 and/or satellites s130, digital pictures, or even scanned maps. Individual rasters may be displayed as data layers along with other geographic data on a graphical map. A raster includes a matrix of cells or pixels organized into a grid comprising rows and columns. Each cell contains a value representing information, such as temperature data and/or the like. In most implementations, the area or surface represented by each cell/pixel may have the same width and height, and each cell/pixel may represent an equal portion of the entire surface represented by the raster. For example, a raster covering an area of 100 square kilometers (km²) may have 100 cells, where each cell represents 1 km² (1 km × 1 km). The dimensions of the cells/pixels can be larger or smaller than 1 km², such as a square meter (m²), square foot (ft²), square centimeter (cm²), or some other amount of area or surface area. Additionally or alternatively, the size/dimensions of the cells/pixels may be based on the spatial resolution of the scanning equipment/sensors and/or the height above ground level (AGL) of the scanning equipment/sensors. The location of each cell/pixel is defined by the row and column where it is located within the raster dataset, which may be expressed using a Cartesian coordinate system. In another example, the terrain dataset may be (or may be included in) a single KML file, which may contain features of different geometry types, including one or both of vector/TIN and raster data.
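
As a purely illustrative sketch of the TIN construction step, the following applies SciPy's Delaunay triangulation to a handful of hypothetical (x, y, z) sample points; a production terrain dataset would instead be built from surveyed measurements stored in the GDB:

    # Minimal TIN sketch: Delaunay-triangulate hypothetical survey points.
    import numpy as np
    from scipy.spatial import Delaunay

    pts = np.array([[0, 0, 12.0], [5, 0, 15.5], [0, 5, 14.0],
                    [5, 5, 18.2], [2, 3, 16.1]])  # hypothetical (x, y, z) samples
    tin = Delaunay(pts[:, :2])                    # triangulate on x, y only
    for tri in tin.simplices:                     # each row = vertex indices of one triangle
        print("triangle:", tri, "elevations:", pts[tri, 2])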

[117] For purposes of the present disclosure, the thermal maps may be considered to be a type of heatmap. A heatmap is a representation of data in the form of a map or diagram in which data values are represented using one or more distinguishing effects such as, for example, colors, shading, brightness, saturation, textures, and/or the like. A heatmap can be used on mathematical models through the use of matrices, where each cell represents a portion of space in a given measuring distance system and the distinguishing effect applied to each cell represents the intensity of a studied event that happened in each mapped cell. In some implementations, the aforementioned matrix may be an entire raster image where each cell is a pixel in the raster image. For the thermal maps, the intensity of the applied distinguishing effect represents the amount of heat (thermal data) detected in that area at a specific time.

[118] Additionally, each cell/pixel in the thermal map may be coded to represent a specific temperature or temperature range that was detected in a corresponding area. In this case, a hot pixel may be a pixel representing an area having a detected temperature that is at or above a threshold temperature value or is within a threshold temperature range. In these implementations, a hot pixel does not necessarily indicate the actual coordinates of a fire; rather the hot pixel may only indicate that a fire is within the area represented by the pixel. The pixel coding may involve applying some distinguishing effect to a pixel based on the measured thermal data (e.g., in Celsius, Kelvins, or the like) or other data for that pixel. For example, color coding may be used where a specific color is applied to a pixel to indicate that a temperature or range of temperatures was detected in the pixel’s corresponding area. Additional or alternative distinguishing effects may be used such as applying shading, textures, saturation, and/or brightness effects.

[119] In one example, the aerial thermal imaging of a fire includes the ambient ground area being greyscaled and the fire being colorized from red to orange to yellow (some other pixel coloring scheme may be used in other implementations). In this example, the red pixels may indicate the greatest detected temperature, the yellow pixels may indicate the lowest detected temperature, and the orange pixels may indicate a detected temperature between the temperature ranges used for the red pixels and the yellow pixels. Here, the fire is made evident by the dramatic spike in thermal pixel values as compared to surrounding thermal pixel values.
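
A minimal sketch of the pixel coding described in paragraphs [118] and [119] follows. The temperature thresholds and the grey/yellow/orange/red palette are illustrative assumptions, not values prescribed by the disclosure, and the ambient ground is painted a flat grey rather than truly greyscaled for brevity.

```python
import numpy as np

# Illustrative palette and thresholds (degrees Celsius); none of these
# values are prescribed by the disclosure.
GREY, YELLOW = (128, 128, 128), (255, 255, 0)
ORANGE, RED = (255, 165, 0), (255, 0, 0)

def code_pixels(temps_c, warm=150.0, hot=300.0, severe=500.0):
    """Return an RGB image: ambient ground is grey, and cells at or
    above the `warm` threshold are 'hot pixels' colorized by range."""
    rgb = np.empty(temps_c.shape + (3,), dtype=np.uint8)
    rgb[:] = GREY                    # ambient ground area
    rgb[temps_c >= warm] = YELLOW    # lowest detected fire temperatures
    rgb[temps_c >= hot] = ORANGE     # intermediate temperature range
    rgb[temps_c >= severe] = RED     # greatest detected temperatures
    return rgb
```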

[120] The image processing algorithm used to generate the thermal maps may use thresholding to distinguish the fire from ground temperatures. In some implementations, the thresholding techniques may include relative thresholding, absolute thresholding, or both. In general, thresholding is an image processing method that creates a bitonal (binary) image based on setting a threshold value on the pixel intensity of the original image. While most commonly applied to grayscale images, it can also be applied to color images. The threshold of image intensity (relative image lightness) is set manually or automatically at a specific value. Absolute thresholding changes all pixels that have a value above or equal to a threshold value to a first distinguishing effect (e.g., color, etc.) and changes all pixels that have a value below the threshold value to a second distinguishing effect (e.g., color, etc.). Relative thresholding (also referred to as adaptive, dynamic, or local thresholding) establishes the threshold for converting a pixel to a particular distinguishing effect at a regional level, where each pixel (or the thermal value associated with a pixel) is compared to one or more neighboring pixels or one or more pixels in a sample region of the image. The region sampled and the method of evaluation may vary between applications. Additionally or alternatively, multi-class or multi-level thresholding may be used, where multiple thresholds are selected, including a threshold for the foreground (i.e., regions of interest (ROIs)), a threshold for the background, and one or more thresholds for different temperature/thermal ranges. These temperature/thermal range thresholds may be between the foreground and background thresholds.
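
The following sketch contrasts the three thresholding variants just described using plain NumPy/SciPy operations; the window size, offset, and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def absolute_threshold(img, t):
    """Pixels at or above t take a first effect (255); the rest a second (0)."""
    return np.where(img >= t, 255, 0).astype(np.uint8)

def relative_threshold(img, window=15, offset=0.0):
    """Adaptive/local thresholding: each pixel is compared to the mean of its
    window x window neighborhood instead of a single global value."""
    local_mean = uniform_filter(img.astype(float), size=window, mode="nearest")
    return np.where(img > local_mean + offset, 255, 0).astype(np.uint8)

def multilevel_threshold(img, levels):
    """Multi-level thresholding: bin each pixel by the sorted thresholds in
    `levels` (e.g., background, one or more thermal ranges, foreground)."""
    return np.digitize(img, sorted(levels))
```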

[121] Additionally or alternatively, other methods may be used to generate the thermal maps discussed herein, such as using a suitable neural network and/or other machine learning technique, such as those discussed in Yun et al., "Deep Neural Networks for Pattern Recognition", arXiv:1809.09645 (25 Sep. 2018), Valero et al., "Automated location of active fire perimeters in aerial infrared imaging using unsupervised edge detectors", Int'l J. of Wildland Fire, vol. 27, no. 4, pp. 241-256 (23 Apr. 2018), Toulouse et al., "Computer vision for wildfire research: An evolving image dataset for processing and analysis", Fire Safety Journal, Elsevier, vol. 92, pp. 188-194 (1 Sep. 2017), and Akhloufi et al., "Unmanned Aerial Systems for Wildland and Forest Fires: Sensing, Perception, Cooperation and Assistance", arXiv preprint arXiv:2004.13883 (30 Apr. 2020), each of which is hereby incorporated by reference in its entirety.

[122] In additional or alternative implementations, a radiometric thermogram may be used as the thermal map. Radiometric thermograms may be generated based on radiometric data captured by radiometric thermal cameras/sensors. Radiometric thermograms allow retrospective adjustment of measured parameters (e.g., emissivity, distance from object(s), reflected temperature, humidity, etc.). Additionally, each pixel of radiometric thermograms contains information about one or more of the measured values.

[123] As mentioned previously, the CCS may receive a single information object that includes various data (e.g., optimized deployment zone data, geographic/mapping data, thermal data, etc.) that is used to generate and render geographic and/or thermal maps. The single file may include various attributes related to the optimized deployment zones, geography/terrain, sensor measurements, and the like that include the various data items such as optimized deployment zone data, geographic/mapping data, sensor (e.g., thermal) data, and the like. Examples of such attributes may include those described by Table 1.

Table 1

[124] The various attributes in Table 1 are merely examples of the attributes that may be used, and in some implementations, some of the attributes in Table 1 may be combined or split into different attributes, or the information object may include additional or alternative attributes to those shown by Table 1. Additionally or alternatively, the attributes in Table 1 may be supplemented or replaced with any of the attributes and/or data elements discussed in Burggraf, "OGC KML 2.3", Open Geospatial Consortium Inc., v1.0, no. OGC 12-007r2 (04 Aug. 2015), available at: http://www.opengis.net/doc/kml/2.3, which is hereby incorporated by reference in its entirety.

[125] An example of thermal map generation may begin with generating thermal images from raw thermal data, which may be combined using a suitable mosaic function. The mosaicked raw thermal imagery may include normalized pixel values. Next, temperature thresholding is applied to the raw mosaicked thermal imagery. Additionally or alternatively, brighter pixels may be used to indicate greater detected thermal data than the thermal data of less bright pixels. The brighter pixels may indicate areas with thermal data above a fire temperature threshold. For repeat flights over an area, a live feed of thermal imagery can be used to show the efficacy of prior extinguishant drops. As an example, a Continuous Change Detection and Classification (CCDC) algorithm may be applied to the thermal map to identify any changes in pixel values over time and/or to generate a change analysis containing a model of the results (see e.g., Zhu et al., "Continuous change detection and classification of land cover using all available Landsat data", Remote Sensing of Environment, vol. 144, pp. 152-171 (25 Mar. 2014), which is hereby incorporated by reference in its entirety).
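
A compact end-to-end sketch of this generation flow appears below. The max-compositing mosaic, the normalization, the threshold value, and the simple before/after comparison standing in for full CCDC change detection are all illustrative assumptions; CCDC itself is the published algorithm of Zhu et al. cited above, not what this sketch implements.

```python
import numpy as np

def mosaic(frames):
    """Combine overlapping, co-registered thermal frames into one raster;
    NaN marks cells a frame does not cover. Max-compositing is one simple
    choice of mosaic function (the disclosure leaves the function open)."""
    return np.nanmax(np.stack(frames), axis=0)

def normalize(img):
    """Scale pixel values to [0, 1] so brighter pixels indicate greater
    detected thermal data."""
    lo, hi = np.nanmin(img), np.nanmax(img)
    return (img - lo) / (hi - lo)

def fire_mask(norm_img, fire_threshold=0.8):
    """Temperature thresholding: pixels above the fire threshold."""
    return norm_img >= fire_threshold

def drop_efficacy(mask_before, mask_after):
    """Simple stand-in for CCDC-style change detection between repeat
    flights: the fraction of previously burning pixels that no longer
    exceed the fire threshold after an extinguishant drop."""
    was_burning = mask_before.sum()
    if was_burning == 0:
        return 0.0
    return float((mask_before & ~mask_after).sum()) / float(was_burning)
```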

5. EXAMPLE IMPLEMENTATIONS

[126] Example Z01 includes one or more computer readable media comprising instructions, wherein execution of the instructions by processor circuitry is to cause the processor circuitry to perform the method of any one of the examples and/or some other example(s) herein. Example Z02 includes a computer program comprising the instructions of example Z01. Example Z03a includes an Application Programming Interface defining functions, methods, variables, data structures, and/or protocols for the computer program of example Z02.

[127] Example Z03b includes an API or specification defining functions, methods, variables, data structures, protocols, etc., defining or involving use of any of examples XYZ or portions thereof, or otherwise related to any of examples XYZ or portions thereof. Example Z04 includes an apparatus comprising circuitry loaded with the instructions of example Z01. Example Z05 includes an apparatus comprising circuitry operable to run the instructions of example Z01. Example Z06 includes an integrated circuit comprising one or more of the processor circuitry of example Z01 and the one or more computer readable media of example Z01. Example Z07 includes a computing system comprising the one or more computer readable media and the processor circuitry of example Z01.

[128] Example Z08 includes an apparatus comprising means for executing the instructions of example Z01. Example Z09 includes a signal generated as a result of executing the instructions of example Z01. Example Z10 includes a data unit generated as a result of executing the instructions of example Z01. Example Z11 includes the data unit of example Z10 and/or some other example(s) herein, wherein the data unit is a datagram, network packet, data frame, data segment, a Protocol Data Unit (PDU), a Service Data Unit (SDU), a message, or a database object. Example Z12 includes a signal encoded with the data unit of examples Z10 and/or Z11. Example Z13 includes an electromagnetic signal carrying the instructions of example Z01. Example Z14 includes any of examples Z01-Z13 and/or one or more other example(s) herein, wherein the computing system and/or the processor circuitry comprises one or more of a System-in-Package (SiP), a Multi-Chip Package (MCP), a System-on-Chip (SoC), a digital signal processor (DSP), a field-programmable gate array (FPGA), an Application Specific Integrated Circuit (ASIC), a programmable logic device (PLD), a Central Processing Unit (CPU), or a Graphics Processing Unit (GPU), or the computing system and/or the processor circuitry comprises two or more of SiPs, MCPs, SoCs, DSPs, FPGAs, ASICs, PLDs, CPUs, or GPUs interconnected with one another. Example Z15 includes an apparatus comprising means for performing the method of any one of examples XYZ and/or some other example(s) herein.

[129] Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. Implementation of the preceding techniques may be accomplished through any number of specifications, configurations, or example deployments of hardware and software. It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified component or module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module.

[130] Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center), than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions.

6. TERMINOLOGY

[131] In the preceding detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

[132] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

[133] The description may use the phrases "in an embodiment,” or "in embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising,” "including,” "having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous. Where the disclosure recites "a” or "a first” element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated.

[134] As used herein, the singular forms "a,” "an” and "the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises” and/or "comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The phrase "A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment,” or "in some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising,” "including,” "having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.

[135] The terms "coupled,” "communicatively coupled,” along with derivatives thereof, are used herein. The term "coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term "directly coupled” may mean that two or more elements are in direct contact with one another. The term "communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication, including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.

[136] The term "coupling”, "coupling means”, or the like refers to a device that mechanically and/or chemically joins or couples two or more objects together, and may include threaded fasteners (e.g., bolts, screws, nuts, threaded rods, etc.), pins, linchpins, r-clips, clips, pegs, clamps, dowels, cam locks, latches, catches, ties, hooks, magnets, rivets, assembled joineries, molded joineries, metallurgically formed joints/bonds (e.g., by welding, brazing, soldering, etc.), adhesive bonds, and/or the like. Additionally or alternatively, the term "coupling”, "coupling means”, or the like refers to the act of mechanically and/or chemically joining or coupling two or more objects together, and may include any type of fastening, welding, brazing, soldering, sintering, casting, plating, adhesive bonding, and/or the like.

[137] The term "fabrication” refers to the formation, construction, or creation of a structure using any combination of materials and/or using fabrication means. The term "fabrication means” as used herein refers to any suitable tool or machine that is used during a fabrication process and may involve tools or machines for cutting (e.g., using manual or powered saws, shears, chisels, routers, torches including handheld torches such as oxy-fuel torches or plasma torches, and/or computer numerical control (CNC) cutters including lasers, mill bits, torches, water jets, routers, laser etching tools/machines, tools/machines for printed circuit board (PCB) and/or semiconductor manufacturing, etc.), bending (e.g., manual, powered, or CNC hammers, pan brakes, press brakes, tube benders, roll benders, specialized machine presses, etc.), forging (e.g., forging presses, machines/tools for roll forging, swaging, cogging, open-die forging, impression-die forging (closed-die forging), press forging, cold forging, automatic hot forging, upsetting, etc.), assembling (e.g., by welding, soldering, brazing, crimping, coupling with adhesives, riveting, fasteners, etc.), molding or casting (e.g., die casting, centrifugal casting, injection molding, extrusion molding, matrix molding, etc.), additive manufacturing (e.g., direct metal laser sintering, filament winding, fused deposition modeling, laminated object manufacturing techniques, induction printing, selective laser sintering, spark plasma sintering, stereolithography, three-dimensional (3D) printing techniques including fused deposition modeling, selective laser melting, selective laser sintering, composite filament fabrication, fused filament fabrication, stereolithography, directed energy deposition, electron beam freeform fabrication, etc.), and PCB and/or semiconductor manufacturing techniques (e.g., silk-screen printing, photolithography, photoengraving, PCB milling, laser resist ablation, laser etching, plasma exposure, atomic layer deposition (ALD), molecular layer deposition (MLD), chemical vapor deposition (CVD), rapid thermal processing (RTP), and/or the like).

[138] The terms "flexible,” "flexibility,” and/or "pliability” refer to the ability of an object or material to bend or deform in response to an applied force; the term "flexible” is complementary to "stiffness.” The term "stiffness” and/or "rigidity” refers to the ability of an object to resist deformation in response to an applied force. The term "elasticity” refers to the ability of an object or material to resist a distorting influence or stress and to return to its original size and shape when the stress is removed. Elastic modulus (a measure of elasticity) is a property of a material, whereas flexibility or stiffness is a property of a structure or component of a structure and is dependent upon various physical dimensions that describe that structure or component.

[139] The term "wear” refers to the phenomenon of the gradual removal, damaging, and/or displacement of material at solid surfaces due to mechanical processes (e.g., erosion) and/or chemical processes (e.g., corrosion). Wear causes functional surfaces to degrade, eventually leading to material failure or loss of functionality. The term "wear” as used herein may also include other processes such as fatigue (e.g., the weakening of a material caused by cyclic loading that results in progressive and localized structural damage and the growth of cracks) and creep (e.g., the tendency of a solid material to move slowly or deform permanently under the influence of persistent mechanical stresses). Mechanical wear may occur as a result of relative motion occurring between two contact surfaces. Wear that occurs in machinery components has the potential to cause degradation of the functional surface and ultimately loss of functionality. Various factors, such as the type of loading, type of motion, temperature, lubrication, and the like, may affect the rate of wear.

[140] The term "lateral” refers to directions or positions relative to an object spanning the width of a body of the object, relating to the sides of the object, and/or moving in a sideways direction with respect to the object. The term "longitudinal” refers to directions or positions relative to an object spanning the length of a body of the object, relating to the top or bottom of the object, and/or moving in an upwards and/or downwards direction with respect to the object. The term "linear” refers to directions or positions relative to an object following a straight line with respect to the object, and/or refers to a movement or force that occurs in a straight line rather than in a curve. The term "lineal” refers to directions or positions relative to an object following along a given path with respect to the object, wherein the shape of the path is straight or not straight. The term "maneuver” (sometimes spelled "manoeuvre” or "manoeuver”) refers to one or more movements bringing an actor (e.g., an aircraft or vehicle, a drone/robot, a pedestrian, etc.) from one position to another position.

[141] The term "vertex” refers to a corner point of a polygon, polyhedron, or other higher dimensional polytope, formed by the intersection of edges, faces, or facets of the object. A vertex is "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°); otherwise, it is a "concave" or "reflex" vertex. The term "slope” refers to the steepness or the degree of incline of a surface. The term "aspect” refers to an orientation of a slope, which may be measured clockwise in degrees from 0 to 360, where 0 is north-facing, 90 is east-facing, 180 is south-facing, and 270 is west-facing. The term "height above ground level”, "AGL”, or "HGL” refers to a height measured with respect to the underlying ground surface and/or the distance between an object and the surface of the Earth.
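
For illustration, the convexity test and aspect convention defined in paragraph [141] can be expressed directly. The sign conventions below (counterclockwise vertex order, east/north terrain gradients) are assumptions of this sketch rather than requirements of the disclosure.

```python
import math

def is_convex(prev_pt, vertex, next_pt):
    """Test whether a vertex of a simple polygon is convex. Assumes the
    vertices are listed counterclockwise: a positive 2-D cross product of
    the incident edges means the internal angle is below pi radians."""
    ax, ay = vertex[0] - prev_pt[0], vertex[1] - prev_pt[1]
    bx, by = next_pt[0] - vertex[0], next_pt[1] - vertex[1]
    return (ax * by - ay * bx) > 0             # otherwise concave/reflex

def aspect_degrees(dz_dx, dz_dy):
    """Aspect of a slope from terrain gradients (dz/dx east, dz/dy north):
    the downslope bearing, clockwise from north, so 0 = north-facing,
    90 = east-facing, 180 = south-facing, 270 = west-facing."""
    return math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360.0

# A square listed counterclockwise: every corner is convex.
assert is_convex((0, 0), (1, 0), (1, 1))
# Terrain falling toward the east (dz/dx < 0) is east-facing.
assert abs(aspect_degrees(-0.1, 0.0) - 90.0) < 1e-9
```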

[142] The term "pixel” refers to the smallest controllable element of a picture represented on a display screen, which may include a physical point in a raster or bitmap image, or the smallest addressable element in an all points addressable display device.

[143] The term "texture”, in the context of computer graphics and/or UX/UI design, may refer to the small-scale geometry on the surface of a graphical object. Additionally or alternatively, the term "texture” in this context may refer to the repetition of an element or pattern, called a surface texel, that is then mapped onto the surface of a graphical object. Furthermore, a "texture” may be a deterministic (regular) texture or a statistical (irregular) texture. Deterministic textures are created by repetition of a fixed geometric shape, where texels are represented by the parameters of the geometric shape. Statistical textures are created by changing patterns with fixed statistical properties, and may be represented by spatial frequency properties.

[144] Various fire terminology/definitions discussed in United States National Park Service and United States Department of Agriculture (USDA) Forest Service, "Fire Terminology”, available at: https://www.fs.fed.us/nwacfire/home/terminology.html, and Wikipedia, "Glossary of wildfire terms”, available at: https://en.wikipedia.org/wiki/Glossary_of_wildfire_terms, each of which is hereby incorporated by reference in its entirety, may be applicable to the present disclosure.

[145] The term "optimize” or "optimal” may refer to identifying and/or determining effective and/or efficient deployment zones for dropping fire extinguishants; reducing or economizing fire-fighting resource usage/consumption including usage/consumption of fire extinguishants, aircraft fuel, etc.; reducing computational resource usage/consumption; reducing the amount of time to deploy extinguishants; decreasing the likelihood of personnel injury or other harms/damage; reducing the amount of time to process data and/or output predictions; producing a most accurate result set (predictions); or combinations thereof. "Optimal” may also refer to balancing these considerations differently depending on implementation and/or design choice (e.g., selecting to optimize for resource consumption over speed and accuracy, attempting to optimize for resource consumption, speed, and accuracy, etc.). Furthermore, "optimization” may refer to one or more processes or functions for identifying the optimal deployment zones and/or for improving the efficiency of fire-fighting processes and resource consumption.

[146] The term "drop zone” or "deployment zone” refers to a target area for air tankers, helitankers, and cargo dropping. The term "fire engine” refers to ground vehicles providing specified levels of pumping, water, and hose capacity but with less than the specified level of personnel. The term "Firefighting Resources” refers to all people, equipment, materials, and/or any other means that can or potentially could be assigned to fires and/or fire suppression activities. The term "wildland fire engine” refers to a fire apparatus specialized for accessing wildland fires with water, equipment, and crew. The term "performance envelope” or "flight envelope” refers to the capabilities of the design of a specific aircraft in terms of speed, acceleration, load factor, atmospheric density, and/or altitude which that aircraft cannot safely exceed.

[147] The term "fire behavior” refers to the manner in which a fire reacts to the influences of fuel, weather, and topography. The term "fire weather” refers to the weather conditions that influence fire ignition, behavior, and suppression. The term "fire height” refers to the average maximum vertical extension of flames at the leading edge of the fire front; occasional flashes that rise above the general level of flames are not considered. This distance is less than the flame length if flames are tilted due to wind or slope. The term "fire length” refers to the distance between the flame tip and the midpoint of the flame depth at the base of the flame (generally the ground surface); an indicator of fire intensity. The term "Fire Radiative Power” or "FRP” refers to the rate of emitted radiative energy by a fire at the time of the observation, and/or the pixel-integrated fire radiative power, each of which may be expressed in watts (W) or megawatts (MW) (see e.g., Wooster et al., "Retrieval of biomass combustion rates and totals from fire radiative power observations: FRP derivation and calibration relationships between biomass consumption and fire radiative energy release", J. of Geophysical Research, vol. 110, D24311, doi:10.1029/2005JD006318 (31 Dec. 2005)).

[148] The term "fuel” may refer to any combustible material, which may include vegetation, such as grass, leaves, ground litter, plants, shrubs, and trees, that feeds a fire; additionally or alternatively, the term "fuel” may refer to a substance used to propel a mechanical device such as a vehicle (e.g., fossil fuels). The term "fuel loading” refers to the amount of fuel present, expressed quantitatively in terms of weight of fuel per unit area. The term "fuel model” refers to a simulated fuel complex (or combination of fuel types) for which all fuel descriptors required for the solution of a mathematical rate of (fire) spread model have been specified. The term "fuel moisture” or "fuel moisture content” refers to the quantity of moisture in fuel, which may be expressed as a percentage of the weight when thoroughly dried at 212 degrees Fahrenheit. The term "fuel type” refers to an identifiable association of fuel elements of a distinctive plant species, form, size, arrangement, or other characteristics that will cause a predictable rate of fire spread or difficulty of control under specified weather conditions.

[149] The term "extinguishant” refers to a substance or agent used in extinguishing a fire; examples of extinguishants may include liquids, foams, powders, fire retardants, among others. The term "fire retardant” refers to any substance (except plain water) that by chemical or physical action reduces the flammability of fuels or slows their rate of combustion; examples include retardant slurries, aqueous film forming foam (AFFF), and firefighting foam, among others.

[150] The term "infrared” or "IR” refers to electromagnetic radiation with wavelengths longer than those of visible light, approximately between 700 nanometers (nm) and 1.0 mm. The sub-divisions within the infrared spectrum are not fixed and vary between industries. Near-Infrared (NIR) is the portion of the IR spectrum immediately longer than red visible light, which is usually defined as light in the range of 0.750 micrometers (µm) to 1.0 or 1.1 µm (400 terahertz (THz) to 300 THz or 272 THz, respectively). Short-Wavelength Infrared (SWIR) is the portion of the IR spectrum longer than NIR, and is typically defined as light in the range of 1.0 µm to 3.0 µm (100 THz to 300 THz). Mid-Wavelength Infrared (MWIR) is the portion of the IR spectrum with wavelengths in the range of 3.0 µm to 8.0 µm (37 to 100 THz). Long-Wavelength Infrared (LWIR) is the portion of the IR spectrum with wavelengths in the range of 8.0 µm to 15.0 µm (20 to 37 THz). Far-Infrared (FIR) is the portion of the IR spectrum with wavelengths in the range of 15 µm to 1000 µm (0.3 to 20 THz).
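
As a worked example of these sub-divisions, the following sketch classifies a wavelength into the bands defined above. The 0.75-1.0 µm choice for NIR and the boundary handling are assumptions of the sketch, since the disclosure itself notes the sub-divisions vary between industries.

```python
def ir_band(wavelength_um):
    """Classify a wavelength in micrometers into the IR sub-divisions given
    above (NIR taken as 0.75-1.0 um; boundaries vary between industries)."""
    bands = [(0.75, 1.0, "NIR"), (1.0, 3.0, "SWIR"), (3.0, 8.0, "MWIR"),
             (8.0, 15.0, "LWIR"), (15.0, 1000.0, "FIR")]
    for lo, hi, name in bands:
        if lo <= wavelength_um < hi:
            return name
    return "visible" if wavelength_um < 0.75 else "out of IR range"

assert ir_band(4.0) == "MWIR"    # e.g., a mid-wave thermal imager
assert ir_band(10.0) == "LWIR"   # typical LWIR firefighting camera band
```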

[151] The term "circuitry” refers to a circuit or system of multiple circuits configurable to perform a particular function in an electronic device. The circuit or system of circuits may be part of, or include one or more hardware components, such as a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), programmable logic device (PLD), System-on-Chip (SoC), System-in-Package (SiP), Multi- Chip Package (MCP), digital signal processor (DSP), etc., that are configurable to provide the described functionality. In addition, the term "circuitry” may also refer to a combination of one or more hardware elements with the program code used to carry out the functionality of that program code. Some types of circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Such a combination of hardware elements and program code may be referred to as a particular type of circuitry.

[152] The term "element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity. The term "entity” may refer to (1) a distinct component of an architecture or device, or (2) information transferred as a payload. The term "device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term "controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.

[153] As used herein, the term "compute node” or "compute device” refers to an identifiable entity implementing an aspect of computing operations, whether part of a larger system, a distributed collection of systems, or a standalone apparatus. A compute node may operate as a client, server, network element, gateway, appliance, on-premise unit, and/or other like entity. The term "computing system,” "computer system,” or "system” refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term "computing system,” "computer system,” or "system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term "computing system,” "computer system,” or "system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configurable or operable to share computing and/or networking resources. The terms "computing system” and "computing device” may be considered synonymous to one another.

[154] The term "computer device” may describe any physical hardware device capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, equipped to record/store data on a machine readable medium, and transmit and receive data from one or more other devices in a communications network. A computer device may be considered synonymous to, and may hereafter be occasionally referred to as, a computer, computing platform, computing device, etc. The term "computer system” may include any type of interconnected electronic devices, computer devices, or components thereof. Examples of "computer devices,” "computer systems,” etc. may include cellular phones or smart phones, feature phones, tablet personal computers, wearable computing devices, autonomous sensors, laptop computers, desktop personal computers, video game consoles, digital media players, handheld messaging devices, personal data assistants, electronic book readers, augmented reality devices, server computer devices (e.g., stand-alone, rack-mounted, blade, etc.), cloud computing services/systems, network elements, in-vehicle infotainment (IVI) devices, in-car entertainment (ICE) devices, Instrument Clusters (ICs), head-up display (HUD) devices, onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), Electronic Engine Management Systems (EEMS), electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or "smart” appliances, machine-type communications (MTC) devices, machine-to-machine (M2M) devices, Internet of Things (IoT) devices, and/or any other like electronic devices. Moreover, the term "vehicle-embedded computer device” may refer to any computer device and/or computer system physically mounted on, built in, or otherwise embedded in a vehicle.

[155] The term "server” as used herein refers to a computing device or system, including processing hardware and/or process space(s), an associated storage medium such as a memory device or database, and, in some instances, suitable application(s) as is known in the art. The terms "server system” and "server” may be used interchangeably herein, and refer to a system that provides access to a pool of physical and/or virtual resources. The various servers discussed herein include computer devices with rack computing architecture component(s), tower computing architecture component(s), blade computing architecture component(s), and/or the like. The servers may represent a cluster of servers, a server farm, a cloud computing service, or other grouping or pool of servers, which may be located in one or more datacenters. The servers may also be connected to, or otherwise associated with, one or more data storage devices (not shown). Moreover, the servers may include an operating system (OS) that provides executable program instructions for the general administration and operation of the individual server computer devices, and may include a computer-readable medium storing instructions that, when executed by a processor of the servers, may allow the servers to perform their intended functions. Suitable implementations for the OS and general functionality of servers are known or commercially available, and are readily implemented by persons having ordinary skill in the art.

[156] The term "architecture” as used herein refers to a computer architecture or a network architecture. A "network architecture” is a physical and logical design or arrangement of software and/or hardware elements in a network, including communication protocols, interfaces, and media transmission. A "computer architecture” is a physical and logical design or arrangement of software and/or hardware elements in a computing system or platform, including technology standards for interactions therebetween.

[157] The term "appliance,” "computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A "virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.

[158] The term "cloud computing” or "cloud” refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users. Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like). The term "computing resource” or simply "resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network. Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like. A "hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A "virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term "network resource” or "communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term "system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects, or services, accessible through a server, where such system resources reside on a single host or multiple hosts and are clearly identifiable. The term "cloud service provider” or "CSP” indicates an organization which typically operates large-scale "cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud). In other examples, a CSP may also be referred to as a Cloud Service Operator (CSO). References to "cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or other constraints.

[159] The term "workload” refers to an amount of work performed by a computing system, device, entity, etc., during a period of time or at a particular instant of time. A workload may be represented as a benchmark, such as a response time, throughput (e.g., how much work is accomplished over a period of time), and/or the like. Additionally or alternatively, the workload may be represented as a memory workload (e.g., an amount of memory space needed for program execution to store temporary or permanent data and to perform intermediate computations), processor workload (e.g., a number of instructions being executed by a processor during a given period of time or at a particular time instant), an I/O workload (e.g., a number of inputs and outputs or system accesses during a given period of time or at a particular time instant), database workloads (e.g., a number of database queries during a period of time), a network-related workload (e.g., a number of network attachments, a number of mobility updates, a number of radio link failures, a number of handovers, an amount of data to be transferred over an air interface, etc.), and/or the like. Various algorithms may be used to determine a workload and/or workload characteristics, which may be based on any of the aforementioned workload types.

[160] As used herein, the term "data center” refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems. The term may also refer to a compute and data storage node in some contexts. A data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).

[161] The term "Internet of Things” or "IoT” refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or AI, embedded systems, wireless sensor networks, control systems, automation (e.g., smarthome, smart building and/or smart city technologies), and the like. IoT devices are usually low-power devices without heavy compute or storage capabilities.

[162] As used herein, the term "radio technology” refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term "radio access technology” or "RAT” refers to the technology used for the underlying physical connection to a radio based communication network. The term "V2X” refers to vehicle to vehicle (V2V), vehicle to infrastructure (V2I), infrastructure to vehicle (I2V), vehicle to network (V2N), and/or network to vehicle (N2V) communications and associated radio access technologies (RATs). As used herein, the term "communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like. The term "channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term "channel” may be synonymous with and/or equivalent to "communications channel,” "data communications channel,” "transmission channel,” "data transmission channel,” "access channel,” "data access channel,” "link,” "data link,” "carrier,” "radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term "link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.

[163] The term "content” refers to visual or audible information to be conveyed to a particular audience or end-user, and may include or convey information pertaining to specific subjects or topics. Content or content items may be different content types (e.g., text, image, audio, video, etc.), and/or may have different formats (e.g., text files including Microsoft® Word® documents, Portable Document Format (PDF) documents, HTML documents; interactive map and/or route planning data, audio files such as MPEG-4 audio files and WebM audio and/or video files; etc.). As used herein, the term "service” refers to a particular functionality or a set of functions to be performed on behalf of a requesting party, such as the system s200. As examples, a service may include or involve the retrieval of specified information or the execution of a set of operations. Although the terms "content” and "service” refer to different concepts, the terms "content” and "service” may be used interchangeably throughout the present disclosure, unless the context suggests otherwise.

[164] The term "data element” refers to an atomic state of a particular object with at least one specific property at a certain point in time, and may include one or more of a data element name or identifier, a data element definition, one or more representation terms, enumerated values or codes (e.g., metadata), and/or a list of synonyms to data elements in other metadata registries. Additionally or alternatively, data elements may include zero or more properties and/or zero or more attributes, each of which may be defined as database objects (e.g., fields, records, etc.), object instances, and/or other data elements. An "attribute” may refer to a markup construct including a name-value pair, and in some implementations, may exist within a start tag or empty element tag. Attributes contain data related to their element and/or control the element’s behavior. Data elements may store data, which is referred to as the element’s content (or "content items”). Content items may include text content, attributes, properties, and/or other elements referred to as "child elements.” The term "information element” refers to a structural element containing one or more fields. The term "field” refers to individual contents of an information element, or a data element that contains content. The term "data element” or "DE” refers to a data type that contains one single data. The term "data frame” or "DF” refers to a data type that contains more than one data element in a predefined order.

[165] The term "information object” refers to a data structure that includes one or more data elements, each of which includes one or more data values. Examples of information objects include electronic documents, database objects, data files, resources, and/or other like elements. Information objects may be stored and/or processed according to their data format. Data formats define the content/data and/or the arrangement of data items for storing and/or communicating the information objects. Each of the data formats may also define the language, syntax, vocabulary, and/or protocols that govern information storage and/or exchange. Examples of the data formats that may be used for any of the information objects discussed herein may include Accelerated Mobile Pages Script (AMPscript), Abstract Syntax Notation One (ASN.1), Backus-Naur Form (BNF), extended BNF, Bencode, BSON, ColdFusion Markup Language (CFML), comma-separated values (CSV), Control Information Exchange Data Model (C2IEDM), Cascading Stylesheets (CSS), DARPA Agent Markup Language (DAML), Document Type Definition (DTD), Electronic Data Interchange (EDI), Extensible Data Notation (EDN), Extensible Markup Language (XML), Efficient XML Interchange (EXI), Extensible Stylesheet Language (XSL), Free Text (FT), Fixed Word Format (FWF), Cisco® Etch, Franca Interface Definition Language (IDL), Geography Markup Language (GML), Geospatial extensible Access Control Markup Language (GeoXACML), Geospatial Data Abstraction Library (GDAL), Guide Template Language (GTL), Handlebars template language, Hypertext Markup Language (HTML), Interactive Financial Exchange (IFX), Keyhole Markup Language (KML) and/or KML Zipped (KMZ), JAMscript, JavaScript Object Notation (JSON), JSON Schema Language, Apache® MessagePack™, Mustache template language, Ontology Interchange Language (OIL), Open Service Interface Definition, Open Financial Exchange (OFX), Precision Graphics Markup Language (PGML), Google® Protocol Buffers (protobuf), Quicken® Financial Exchange (QFX), Regular Language for XML Next Generation (RelaxNG) schema language, regular expressions, Resource Description Framework (RDF) schema language, RESTful Service Description Language (RSDL), Scalable Vector Graphics (SVG), Schematron, Shapefile (SHP), VBScript, text file (TXT), Web Application Description Language (WADL), Web Map Service (WMS), Web Ontology Language (OWL), Web Services Description Language (WSDL), wiki markup or Wikitext, Wireless Markup Language (WML), extensible HTML (XHTML), XPath, XQuery, XML DTD language, XML Schema Definition (XSD), XML Schema Language, XSL Transformations (XSLT), YAML ("Yet Another Markup Language” or "YAML Ain’t Markup Language”), Apache® Thrift, and/or any other language discussed elsewhere herein. Additionally or alternatively, the data format for the information objects discussed herein may be a Tactical Data Link (TDL) format including, for example, J-series message format for Link 16; JREAP messages; Multifunction Advanced Data Link (MADL), Integrated Broadcast Service/Common Message Format (IBS/CMF), Over-the-Horizon Targeting Gold (OTH-T Gold), Variable Message Format (VMF), United States Message Text Format (USMTF), and any future advanced TDL formats.

[166] As used herein, the terms "instantiate,” "instantiation,” and the like may refer to the creation of an instance, and an "instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. As used herein, a "database object”, "data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and database entities (also referred to as a "relation”), and the like.

[167] The term "application” may refer to a complete and deployable package to achieve a certain function in an operational environment. The term "ML application” or "AI/ML application” or the like may be an application that contains some AI/ML models and application-level descriptions.

[168] The term "machine learning” or "ML” refers to the use of computer systems to optimize a performance criterion using example (training) data and/or past experience. ML involves using algorithms to perform specific task(s) without using explicit instructions to perform the specific task(s), but instead relying on learnt patterns and/or inferences. ML uses statistics to build mathematical model(s) (also referred to as "ML models” or simply "models”) in order to make predictions or decisions based on sample data (e.g., training data). The model is defined to have a set of parameters, and learning is the execution of a computer program to optimize the parameters of the model using the training data or past experience. The trained model may be a predictive model that makes predictions based on an input dataset, a descriptive model that gains knowledge from an input dataset, or both predictive and descriptive. Once the model is learned (trained), it can be used to make inferences (e.g., predictions). ML algorithms perform a training process on a training dataset to estimate an underlying ML model. An ML algorithm is a computer program that learns from experience with respect to some task(s) and some performance measure(s)/metric(s), and an ML model is an object or data structure created after an ML algorithm is trained with training data. In other words, the term "ML model” or "model” may describe the output of an ML algorithm that is trained with training data. After training, an ML model may be used to make predictions on new datasets. Additionally, separately trained AI/ML models can be chained together in an AI/ML pipeline during inference or prediction generation. Although the term "ML algorithm” refers to different concepts than the term "ML model,” these terms may be used interchangeably for the purposes of the present disclosure. ML techniques generally fall into the following main types of learning problem categories: supervised learning, unsupervised learning, and reinforcement learning.

[169] The term "supervised learning” refers to an ML technique that aims to learn a function or generate an ML model that produces an output given a labeled data set. Supervised learning algorithms build models from a set of data that contains both the inputs and the desired outputs. For example, supervised learning involves learning a function or model that maps an input to an output based on example input-output pairs or some other form of labeled training data including a set of training examples. Each input-output pair includes an input object (e.g., a vector) and a desired output object or value (referred to as a "supervisory signal”). Supervised learning can be grouped into classification algorithms, regression algorithms, and instance-based algorithms.

[170] The term "classification” in the context of ML may refer to an ML technique for determining the classes to which various data points belong. Here, the term "class” or "classes” may refer to categories, which are sometimes called "targets” or "labels.” Classification is used when the outputs are restricted to a limited set of quantifiable properties. Classification algorithms may describe an individual (data) instance whose category is to be predicted using a feature vector. As an example, when the instance includes a collection (corpus) of text, each feature in a feature vector may be the frequency with which specific words appear in the corpus of text. In ML classification, labels are assigned to instances, and models are trained to correctly predict the pre-assigned labels from the training examples. ML algorithms for classification may be referred to as "classifiers.” Examples of classifiers include linear classifiers, k-nearest neighbor (kNN), decision trees, random forests, support vector machines (SVMs), Bayesian classifiers, and convolutional neural networks (CNNs), among many others (note that some of these algorithms can be used for other ML tasks as well).
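
For illustration only, a minimal k-nearest-neighbor classifier of the kind named above can be written in a few lines; the toy feature vectors and labels below are invented for the example.

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=3):
    """Minimal k-nearest-neighbor classifier: an unlabeled feature vector x
    takes the majority label among its k closest training examples."""
    dists = np.linalg.norm(train_X - x, axis=1)    # Euclidean distances
    nearest = train_y[np.argsort(dists)[:k]]       # labels of the k nearest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: each row is a feature vector (e.g., word frequencies).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
assert knn_classify(np.array([0.95, 1.0]), X, y) == 1
```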

[171] The terms "regression algorithm” and/or "regression analysis” in the context of ML may refer to a set of statistical processes for estimating the relationships between a dependent variable (often referred to as the "outcome variable”) and one or more independent variables (often referred to as "predictors”, "covariates”, or "features”). Examples of regression algorithms/models include logistic regression, linear regression, gradient descent (GD), stochastic GD (SGD), and the like.
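
Similarly, a minimal sketch of linear regression fit by batch gradient descent (one of the techniques listed above) follows; the learning rate, epoch count, and toy data are illustrative assumptions.

```python
import numpy as np

def fit_linear_gd(X, y, lr=0.1, epochs=500):
    """Linear regression fit by batch gradient descent on mean squared
    error. X: (n_samples, n_features) predictors; y: (n_samples,) outcome."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(epochs):
        err = X @ w + b - y                 # residuals
        w -= lr * (2.0 / n) * (X.T @ err)   # gradient w.r.t. the weights
        b -= lr * (2.0 / n) * err.sum()     # gradient w.r.t. the intercept
    return w, b

# Recovering y = 2x + 1 from noiseless samples.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b = fit_linear_gd(X, y)                  # w close to [2.0], b close to 1.0
```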

[172] The terms “instance-based learning” or “memory-based learning” in the context of ML may refer to a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Examples of instance-based algorithms include k-nearest neighbor and the like; decision tree algorithms (e.g., Classification And Regression Tree (CART), Iterative Dichotomiser 3 (ID3), C4.5, chi-square automatic interaction detection (CHAID), Fuzzy Decision Tree (FDT), and the like); Support Vector Machines (SVM); Bayesian algorithms (e.g., Bayesian network (BN), dynamic BN (DBN), Naive Bayes, and the like); and ensemble algorithms (e.g., Extreme Gradient Boosting, voting ensemble, bootstrap aggregating (“bagging”), Random Forest, and the like).
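A minimal k-nearest-neighbor sketch of instance-based learning appears below: no explicit generalization is performed; the training instances are simply stored, and a new problem instance is compared against them at prediction time. The data points, Euclidean distance metric, and choice of k are illustrative assumptions.

```python
import numpy as np
from collections import Counter

# Hypothetical k-nearest-neighbor classifier: training instances are
# kept in memory and consulted directly when a prediction is needed.
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                    [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]])
y_train = ["small", "small", "small", "large", "large", "large"]

def knn_predict(x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)  # compare to stored instances
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]            # majority vote decides

print(knn_predict(np.array([1.1, 0.9])))         # -> "small"
```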

[173] The term “feature” in the context of ML refers to an individual measurable property, quantifiable property, or characteristic of a phenomenon being observed. Features are usually represented using numbers/numerals (e.g., integers), strings, variables, ordinals, real values, categories, and/or the like. A set of features may be referred to as a “feature vector.” A “vector” may refer to a tuple of one or more values called scalars, and a “feature vector” may be a vector that includes a tuple of one or more features.
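As a purely illustrative example (the feature names and values below are invented, not part of the disclosure), a feature vector for a single observation can be represented as a tuple of measurable properties:

```python
# Hypothetical feature vector for one observation: each element is an
# individual measurable property of the phenomenon being observed.
feature_vector = (
    37.5,    # real-valued feature (e.g., air temperature in deg C)
    3,       # ordinal feature (e.g., wind-speed category)
    "pine",  # string/categorical feature (e.g., dominant fuel type)
)
```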

[174] The term “unsupervised learning” refers to an ML technique that aims to learn a function to describe a hidden structure from unlabeled data. Unsupervised learning algorithms build models from a set of data that contains only inputs and no desired output labels. Unsupervised learning algorithms are used to find structure in the data, such as grouping or clustering of data points. Examples of unsupervised learning are K-means clustering, principal component analysis (PCA), and topic modeling, among many others. In particular, topic modeling is an unsupervised machine learning technique that scans a set of information objects (e.g., documents, webpages, etc.), detects word and phrase patterns within the information objects, and automatically clusters word groups and similar expressions that best characterize the set of information objects. The term “semi-supervised learning” refers to ML algorithms that develop ML models from incomplete training data, where a portion of the sample input does not include labels.
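By way of a non-limiting illustration, the following sketch runs K-means clustering on unlabeled two-dimensional inputs, finding grouping structure (two clusters) without any desired output labels. The synthetic data, the choice of k, and the iteration count are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical K-means sketch: alternate between assigning points to
# their nearest center and moving each center to its cluster's mean.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)),   # one unlabeled blob near (0, 0)
               rng.normal(5, 0.5, (20, 2))])  # another blob near (5, 5)

k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(10):
    # assignment step: label each point with its nearest center
    labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
    # update step: move each center to the mean of its assigned points
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(centers)   # ~(0, 0) and ~(5, 5), in either order
```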

[175] The term “reinforcement learning” or “RL” refers to a goal-oriented learning technique based on interaction with an environment. In RL, an agent aims to optimize a long-term objective by interacting with the environment based on a trial-and-error process. Examples of RL algorithms include Markov decision process, Markov chain, Q-learning, multi-armed bandit learning, and deep RL. The terms “artificial neural network”, “neural network”, or “NN” refer to an ML technique comprising a collection of connected artificial neurons or nodes that (loosely) model neurons in a biological brain that can transmit signals to other artificial neurons or nodes, where connections (or edges) between the artificial neurons or nodes are (loosely) modeled on synapses of a biological brain. The artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. The artificial neurons can be aggregated or grouped into one or more layers, where different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. NNs are usually used for supervised learning, but can be used for unsupervised learning as well. Examples of NNs include the deep NN (DNN), feed forward NN (FFN), deep FFN (DFF), convolutional NN (CNN), deep CNN (DCN), deconvolutional NN, deep belief NN, perceptron NN, recurrent NN (RNN) (e.g., including the Long Short Term Memory (LSTM) algorithm, gated recurrent unit (GRU), etc.), and deep stacking network (DSN).
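As a non-limiting illustration of the trial-and-error process described above, the following sketch applies tabular Q-learning (one of the RL algorithms named above) to a toy five-state corridor in which the agent must learn to move right to reach a reward. The environment, reward scheme, and hyperparameters are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical tabular Q-learning sketch: the agent interacts with a
# five-state corridor by trial and error and learns, via the Q-table,
# that moving right leads to the reward in the last state.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2     # learning rate, discount, exploration
rng = np.random.default_rng(1)

for _ in range(500):                  # episodes of interaction
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: mostly exploit the current Q-table, sometimes explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: move Q[s, a] toward r + gamma * max_a' Q[s2, a']
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # first four states prefer action 1 ("go right")
```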

[176] Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments described herein be limited only by the claims.