

Title:
SYSTEMS AND METHODS FOR FUSING SENSOR DATA FROM DRONES IN A VIRTUAL ENVIRONMENT
Document Type and Number:
WIPO Patent Application WO/2024/097246
Kind Code:
A1
Abstract:
Systems and methods for fusing sensor data from drones in a virtual environment are described. In one embodiment, a method for fusing sensor data from drones in a virtual environment includes obtaining geometry data describing a real-world landscape, drawing a map within a virtual environment using 3-D visualization software and the geometry data, placing a plurality of projectors on the map within the virtual environment corresponding to sensors in the region of the real-world landscape, receiving sensor data and location data from the sensors, and projecting the sensor data onto the map at locations indicated by the location data using the projectors corresponding to the sensors from which the sensor data is received.

Inventors:
AUDRONIS TYRIS (US)
Application Number:
PCT/US2023/036518
Publication Date:
May 10, 2024
Filing Date:
October 31, 2023
Assignee:
TEMPEST DRONEWORX INC (US)
International Classes:
G06T1/00; G06T3/00; G06T7/60; G06T7/70; G06T15/00
Attorney, Agent or Firm:
SUNG, Brian, K. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for fusing sensor data from drones in a virtual environment, the method comprising: obtaining geometry data describing a real-world landscape; drawing a map within a virtual environment using 3-D visualization software and the geometry data; placing a plurality of projectors on the map within the virtual environment corresponding to sensors in the region of the real-world landscape; receiving sensor data and location data from the sensors; and projecting the sensor data onto the map at locations indicated by the location data using the projectors corresponding to the sensors from which the sensor data is received.

2. The method of claim 1, wherein the geometry data is retrieved from Esri ArcGIS.

3. The method of claim 1, wherein at least some of the sensors are mounted to drones.

4. The method of claim 3, further comprising: receiving user input captured on a graphical user interface; and directing a drone identified by the user input to move as indicated by the user input.

5. The method of claim 3, wherein at least one of the drones is in motion.

6. The method of claim 3, wherein placing a plurality of projectors on the map further comprises: retrieving a status and type of each drone from a drone information database; retrieving telemetry and control information of each drone from the drone information database; receiving telemetry from each drone, where the telemetry indicates a location of the drone; determining location coordinates in the coordinate system of the virtual environment using the telemetry; and placing a projector within the virtual environment at the determined location.

7. The method of claim 1, wherein projecting the sensor data onto the map further comprises: retrieving telemetry of a drone; extracting sensor data from the telemetry; converting the sensor data to a visual format; determining an orientation of the sensor from which the sensor data was received; rotating a projector corresponding to the sensor to match the sensor orientation; and projecting the visual format of the sensor data onto the map using the location data.

8. The method of claim 1, wherein at least some of the sensor data is video.

9. The method of claim 1, wherein at least some of the sensor data is invisible wavelength data.

10. The method of claim 1, further comprising rendering the virtual environment on a display.

11. A system for fusing sensor data from drones in a virtual environment, the system comprising: a plurality of sensors configured to collect sensor data by observing a real-world landscape; and a command center computing system comprising: a processor; and non-volatile memory comprising a sensor data integration platform application; wherein the sensor data integration platform application, when executed, instructs the processor to perform: obtaining geometry data describing the real-world landscape; drawing a map within a virtual environment using 3-D visualization software and the geometry data; placing a plurality of projectors on the map within the virtual environment corresponding to sensors in the region of the real-world landscape; receiving sensor data and location data from the sensors; and projecting the sensor data onto the map at locations indicated by the location data using the projectors corresponding to the sensors from which the sensor data is received.

12. The system of claim 11, wherein the geometry data is retrieved from Esri ArcGIS.

13. The system of claim 11, wherein at least some of the sensors are mounted to drones.

14. The system of claim 11, wherein the sensor data integration platform application, when executed, further instructs the processor to perform: receiving user input captured on a graphical user interface; and directing a drone identified by the user input to move as indicated by the user input.

15. The system of claim 11, wherein at least one of the drones is in motion.

16. The system of claim 11, wherein placing a plurality of projectors on the map further comprises: retrieving a status and type of each drone from a drone information database; retrieving telemetry and control information of each drone from the drone information database; receiving telemetry from each drone, where the telemetry indicates a location of the drone; determining location coordinates in the coordinate system of the virtual environment using the telemetry; and placing a projector within the virtual environment at the determined location.

17. The system of claim 11, wherein projecting the sensor data onto the map further comprises: retrieving telemetry of a drone; extracting sensor data from the telemetry; converting the sensor data to a visual format; determining an orientation of the sensor from which the sensor data was received; rotating a projector corresponding to the sensor to match the sensor orientation; and projecting the visual format of the sensor data onto the map using the location data.

18. The system of claim 11, wherein at least some of the sensor data is video.

19. The system of claim 11, wherein at least some of the sensor data is invisible wavelength data.

20. The system of claim 11, wherein the sensor data integration platform application, when executed, instructs the processor to perform rendering the virtual environment on a display.

Description:
Systems and Methods for Fusing Sensor Data from Drones in a Virtual Environment

RELATED APPLICATIONS

[0001] The current application claims the benefit of and priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/381,652, entitled “Systems and Methods for Fusing Sensor Data from Drones in a Virtual Environment” to Tyris Monte Audronis, filed October 31, 2022, the disclosure of which is hereby incorporated by reference in its entirety.

BACKGROUND

[0002] An unmanned vehicle, or drone, is a type of vehicle, powered or unpowered, that operates without a person directly operating it onboard. The vehicle can be operated remotely (e.g., by a human operator/pilot) or autonomously (e.g., using sensors and/or navigational programming). Unmanned vehicles can be designed for different environments, such as, but not limited to, unmanned aerial vehicles (UAV), unmanned ground vehicles (UGV), unmanned surface vehicles (USV), and unmanned underwater vehicles (UUV).

[0003] Drones can utilize any of a variety of sensors such as, but not limited to, cameras, infrared (thermal) sensors, LiDAR (Light Detection and Ranging), sonar, etc. At an elevated vantage point, UAVs having onboard sensors can often collect data at a greater range and with less influence from obstructions than if they were on the ground. UGVs, USVs, and UUVs with sensors can traverse and collect information in environments that are difficult or undesirable for a human.

[0004] Drones for information gathering over a large area are particularly useful in emergency and disaster situations. With little to no direct human supervision, they can obtain visual and other information at great speed and effectiveness to enhance planning and remediation by responders.

SUMMARY OF THE INVENTION

[0005] Systems and methods for fusing sensor data from drones in a virtual environment are described. In one embodiment, a method for fusing sensor data from drones in a virtual environment includes obtaining geometry data describing a real-world landscape, drawing a map within a virtual environment using 3-D visualization software and the geometry data, placing a plurality of projectors on the map within the virtual environment corresponding to sensors in the region of the real-world landscape, receiving sensor data and location data from the sensors, and projecting the sensor data onto the map at locations indicated by the location data using the projectors corresponding to the sensors from which the sensor data is received.

BRIEF DESCRIPTION OF FIGURES

[0006] Fig. 1A illustrates a system for collecting sensor data from drones in accordance with an embodiment of the invention.

[0007] Fig. 1B illustrates a system for collecting sensor data from drones in accordance with an embodiment of the invention.

[0008] Fig. 2 conceptually illustrates a command center computing system in accordance with an embodiment of the invention.

[0009] Fig. 3 illustrates a process for collecting and displaying sensor data in accordance with an embodiment of the invention.

[0010] Fig. 4 illustrates a process for placing drones on a map in accordance with an embodiment of the invention.

[0011] Fig. 5 illustrates a process for projecting sensor data onto a map in accordance with an embodiment of the invention.

[0012] Fig. 6 illustrates a process for updating drone tasking in accordance with an embodiment of the invention.

[0013] Fig. 7 is an example of a graphical user interface screen for obtaining input concerning a location.

[0014] Fig. 8 is an example of a graphical user interface showing a map and several placed drones.

[0015] Fig. 9 is an example screen of a graphical user interface showing a map and choices of available sensor data.

[0016] Fig. 10 is an example graphical user interface screen showing a UAV’s current flight path.

[0017] Fig. 11 is an example graphical user interface showing a drone management screen.

DETAILED DISCLOSURE OF THE INVENTION

[0018] Turning now to the drawings, systems and methods for fusing sensor data from drones in a virtual environment are described. Drones can be used to collect information over large geographic areas via onboard sensors. As will be discussed further below, information from multiple drones can be merged or fused live or close to real-time into a virtual environment (a virtual representation within a computing system) that is representative of the physical real-world environment traversed by the drones. The virtual environment can be implemented in a computing system using 3-D visualization software such as a game engine (e.g., Unity, Unreal Engine, Godot, etc.). An initial map can be set up in the virtual environment using real-world geographic information concerning the area of interest (e.g., landscape, buildings, physical features, etc.), which can be referred to as geometry data. Information captured by sensors on the drones (e.g., a video feed) or other sensor systems (e.g., stationary cameras) can then be projected or superimposed onto the map built from geometry data. The drones can follow defined paths, navigate autonomously, or be manually controlled to cover as much of the area of interest as possible. A user interface displays the virtual environment and can provide controls for a user to direct a drone to a specific location. In this way, a user can visually review information over large areas live or in close to real-time via systematic navigation of the one or more drones.

Systems for Collecting Sensor Data from Drones

[0019] Fig. 1A illustrates a system 100 for collecting sensor data from drones in accordance with an embodiment of the invention that includes one or more drones 102, 104, and 106, a drone command center 110, a data center 112, and one or more client devices 108 and 110. In the illustrated embodiment, the entities can communicate over a wide area network 101, such as the internet. Drones can include those adapted for different environments, such as, but not limited to, unmanned aerial vehicles (UAV), unmanned ground vehicles (UGV), unmanned surface vehicles (USV), and unmanned underwater vehicles (UUV). Each drone should include at least one sensor. Sensors can include, but are not limited to, cameras, infrared (thermal) sensors, LiDAR (Light Detection and Ranging), sonar, olfactory/particle sensors, auditory sensors, etc. Further embodiments of the invention can include cameras and/or other types of sensors 114 that are not mounted on drones. These sensors may be stationary and may have associated GPS (global positioning system) circuitry or a system that identifies their location. For example, a camera or sensor can have an embedded GPS tracker or may be mounted to another system (e.g., a structure or a non-moving vehicle) that includes a GPS. Some stationary camera systems can include, for example, public wildfire monitoring systems.

[0020] The drone command center 110 can include controller interfaces for the drones. In several embodiments of the invention, each drone has its own associated controller interface, e.g., Pixhawk Cube. The drone command center 110 may also have one or more computing systems that can coordinate the controller interfaces, execute a 3-D visualization software application (e.g., game engine) for the virtual environment, and/or generate information for a user interface on the one or more client devices 108 and 110 to display the virtual environment. Processes that may be performed at a drone command center 110 include those discussed further below.

[0021] The data center 112 can include one or more databases. Databases can store drone information/metadata and geometry data. As will be discussed further below, drone metadata includes information about the capabilities of each drone or information to configure each drone. Geometry information includes mapping data of some location in the real world that can be used to render a virtual environment. Notably, in some embodiments, separate data centers can house databases for different types of information.

[0022] Fig. 1B illustrates a system 150 for collecting sensor data from drones in accordance with another embodiment of the invention. Similar to system 100, the system includes one or more drones 152, 154, and 156, a drone command center 158, a data center 160, one or more client devices 162 and 164, and a stationary camera or other sensor 174. In the illustrated embodiment, different groups of entities in the system can communicate over different networks 170 and 172. Networks 170 and 172 can be, for example, two local area networks or one local network and the internet.

[0023] A computing system that may be utilized at a command center in accordance with embodiments of the invention is conceptually illustrated in Fig. 2. The computing system 200 includes a processor 202 and memory 204. The memory 204 contains processor instructions for executing an operating system 206, a sensor data integration platform 208, and a user interface application 210. The computing system 200 can access a data center 212 as mentioned further above. The computing system 200 may also interface with one or more drone controllers 206, 208, and 210 that are configured to control drones (e.g., drones 102, 104, and 106 as in Fig. 1A). Drone controllers can be any suitable type or model, such as the Pixhawk Cube.

Geometry Data

[0024] Rendering a virtual environment in accordance with embodiments of the invention involves collecting initial geometry data for constructing a base landscape or map. Geometry data is information that represents the shape of the bare-ground (bare-earth) topographic surface of the Earth. In some embodiments, geometry data can also include trees, buildings, and other surface objects. Geometry data can be obtained in any of a variety of ways, such as by retrieving it from data sources that can be queried or that have APIs. When the system has a network connection, internet sources can include servers such as, but not limited to, Esri ArcGIS, Google Earth, Google Maps, United States Geological Survey (USGS) digital elevation model (DEM), or Mapbox. When there is no network connection, or without needing to use a network connection, the system can accept locally generated geometry data. For example, drones or other devices can be used to collect geometry data using LiDAR.
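As a rough illustration of how locally generated geometry data might be ingested, the sketch below reads a binary heightmap grid and converts it to terrain vertices. The file layout, function names, and y-up axis convention are assumptions for illustration, not details from the application:

```python
import struct

def load_heightmap(path, rows, cols):
    """Read a binary grid of little-endian float32 elevations (row-major)
    into a 2-D list. The file layout is a hypothetical local format; real
    geometry data would come from a DEM service or a LiDAR pipeline."""
    with open(path, "rb") as f:
        raw = f.read(4 * rows * cols)
    values = struct.unpack(f"<{rows * cols}f", raw)
    return [list(values[r * cols:(r + 1) * cols]) for r in range(rows)]

def heightmap_to_vertices(grid, cell_size):
    """Convert grid elevations to (x, y, z) vertices for a terrain mesh,
    with y as the vertical axis, matching common game-engine conventions."""
    verts = []
    for r, row in enumerate(grid):
        for c, elevation in enumerate(row):
            verts.append((c * cell_size, elevation, r * cell_size))
    return verts
```

A vertex list like this could then be handed to the 3-D visualization software's terrain or mesh API to draw the base map.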

[0025] While geometry data can be organized in any of a variety of ways, many embodiments utilize layers as logical collections of data for creating maps, scenes, and analysis. The data can include different aspects of an area, such as topography, elevation, natural features, buildings, etc.

Map of Virtual Environment

[0026] In many embodiments of the invention, the virtual environment can be built using a 3-D visualization software application, such as a game engine, based on geometry data. Unity, Unreal Engine, and Godot are examples of game engines that may be utilized in accordance with embodiments of the invention.

[0027] The geographic background or structure of the virtual environment, which can be constructed from geometry data, can be referred to as a map. The map can be centered on a location as directed by a user or provided by the GPS of a device (e.g., a mobile device). For example, a user interacting with a graphical user interface may enter GPS coordinates (e.g., in WGS83 format) or click on a location in the interface. Alternatively, a location can be determined without user input from the GPS onboard a mobile device that the user is using to view the map. An example of a graphical user interface screen for obtaining input concerning a location for centering the map is shown in Fig. 7. If coordinates are given in a different notation (e.g., degrees/minutes/seconds), they can be converted to WGS83 or another common format that is accepted by the system.
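The coordinate-notation conversion mentioned above is straightforward arithmetic; a minimal sketch for converting degrees/minutes/seconds input to signed decimal degrees might look like this (the function name is illustrative):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds plus a hemisphere letter (N/S/E/W)
    to signed decimal degrees, negative for south and west."""
    value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    if hemisphere.upper() in ("S", "W"):
        value = -value
    return value
```

For example, 97° 45' 0" W becomes -97.75 decimal degrees, which can then be used to center the map.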

Placing Drones and Sensor Data on Map

[0028] Active drones can be placed on the map as projectors within the virtual environment. Many 3-D visualization software applications, such as game engines, include a projector component, which is a class that can be instantiated and used to project any material onto a scene. As will be discussed in greater detail further below, image, video, or other sensor data collected by each drone can be projected onto the map with proper placement given the telemetry data of the drone. In many embodiments, information on how to attain and parse the telemetry data of a drone may be stored in and retrieved from a database such as data center 112 or 160 above. The database includes information for each drone to be used such as, but not limited to, drone type, how to obtain and parse telemetry, command hash table, control type, and format of video or other sensor data captured by the drone. Collectively this type of information can be referred to as drone metadata. Telemetry data can include coordinates of the current location of the drone and position of the camera or sensor (e.g., gimbal orientation), and may be stored in JSON or XML format. In several embodiments of the invention, the sensor data is provided in XML.
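The metadata-driven telemetry parsing described above might be sketched as follows. The field names (`lat`, `lon`, `alt_agl`, gimbal angles) and the metadata record layout are assumptions for illustration; a real system would look up each drone's telemetry format in the drone information database:

```python
import json

# Hypothetical drone metadata record; field names are illustrative.
DRONE_METADATA = {
    "uav-01": {"type": "quadcopter", "telemetry_format": "json", "video_format": "h264"},
}

def parse_telemetry(drone_id, payload):
    """Parse a drone's telemetry message into a location and gimbal
    orientation, consulting the metadata to pick the right decoder."""
    meta = DRONE_METADATA[drone_id]
    if meta["telemetry_format"] != "json":
        raise ValueError(f"unsupported telemetry format for {drone_id}")
    msg = json.loads(payload)
    return {
        "location": (msg["lat"], msg["lon"], msg["alt_agl"]),
        # Gimbal angles default to 0 when the drone does not report them.
        "gimbal": (msg.get("tilt", 0.0), msg.get("roll", 0.0), msg.get("pan", 0.0)),
    }
```

An XML-based drone would simply get a different decoder selected by the same metadata lookup.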

Processes for Fusing Sensor Data from Drones in a Virtual Environment

[0029] A computing system at a command center or elsewhere can coordinate multiple drones to collect sensor data for display in a virtual environment. A process for collecting and displaying sensor data in accordance with an embodiment of the invention is illustrated in Fig. 3. The process 300 includes drawing (302) a landscape geometry, or map, in a virtual environment using 3-D visualization software. As discussed further above, geometry data for rendering the map can be obtained from any of a variety of sources (e.g., Esri ArcGIS, Google Earth, Google Maps, USGS, Mapbox, etc.).

[0030] The active drones are placed (304) on the map using their locations (e.g., each as provided by their GPS). Drones are active when they are contributing sensor data. In many embodiments of the invention, the drones are implemented as projector components as provided by the 3-D visualization software within the virtual environment. Additional detail of processes for placing drones on a map will be discussed further below with respect to Fig. 4.

[0031] Sensor data from the active drones that have been placed on the map are projected (306) onto the map using the projectors corresponding to each active drone. Additional details of processes for projecting sensor data on a map will be discussed further below with respect to Fig. 5.

[0032] User input captured (308) on a graphical user interface may instruct a drone to travel to a location or in a particular direction. The user input may be entered as coordinates (e.g., WGS83 format). Alternatively, the user input may be indicative of a direction (e.g., elevation up/down, slide left/right, forward, and reverse). Updating drone tasking from user input will be discussed in greater detail below with respect to Fig. 6.

[0033] Although a specific process is described above with respect to Fig. 3, one skilled in the art will recognize that any of a variety of processes may be utilized in accordance with embodiments of the invention.

Processes for Placing Drones on a Map

[0034] Processes in accordance with embodiments of the invention can retrieve information or metadata about the drones to be able to interface with them and facilitate conveying location and sensor data to the command center. Such processes may be utilized, for example, in drone placement 304 of Fig. 3. A process in accordance with an embodiment of the invention is illustrated in Fig. 4.

[0035] The process 400 includes retrieving (402) the status and type of each live drone from the drone information database. The statuses can include active and inactive. Active status can indicate the drone is out in the real-world environment and ready to transmit data (e.g., a UAV is “launched”). Inactive status can indicate the drone is withdrawn from the field or powered down. In many embodiments of the invention, the area of interest is divided into sectors, and drones are assigned to sectors. The drones can be programmed using controllers such as those described further above to traverse their assigned sectors, for example, by providing waypoints.

[0036] A list can be created of drones having active status. Then for each of the drones in the list, telemetry and control information is retrieved (404) from the drone information database. Telemetry and control information can include, but is not limited to, a command hash table, drone control type, native sensor data format of sensor(s) on the drone, and/or information for converting the native sensor data format to a uniform format.

[0037] Telemetry is retrieved from one or more of the active drones. Telemetry can include, but is not limited to, location of the drone and sensor data captured by one or more sensors on the drone. In several embodiments of the invention, at least one sensor on a drone is a video camera providing a video feed or stream as sensor data.

[0038] The telemetry is parsed (406) and location coordinates for the drone(s) are converted into the coordinate system of the virtual environment. Telemetry may be in a suitable storage or messaging data format, such as JSON or XML. In some embodiments of the invention, drone location coordinates are provided in WGS83 format and converted into vector 3 coordinates (x, y, z) for the virtual environment. Furthermore, AGL (above ground level) altitude can be converted to MSL (mean sea level) altitude. An icon for the drone(s) is displayed (408) in the virtual environment at the vector 3 coordinates. A projector can be associated with the icon within the virtual environment for adding sensor data to be displayed on the map as will be described further below. Similarly, any new drones that are discovered or become active can be added to the map with an icon and projector. An example screen of a graphical user interface showing a map and several placed drones represented by icons in accordance with an embodiment of the invention is illustrated in Fig. 8.
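The coordinate conversion in step 406 might be sketched as below. This uses a simple equirectangular approximation around the map origin and adds the local ground elevation to convert AGL altitude to MSL; the function name and axis convention (y up) are assumptions, and a production system would use the engine's geo-referencing tools instead:

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate; adequate near mid-latitudes

def wgs_to_vector3(lat, lon, alt_agl, origin_lat, origin_lon, ground_elevation_msl):
    """Convert a drone's geographic position to virtual-environment (x, y, z).

    Longitude spacing shrinks with cos(latitude); altitude is converted
    from AGL to MSL by adding the terrain elevation under the drone.
    """
    x = (lon - origin_lon) * METERS_PER_DEG_LAT * math.cos(math.radians(origin_lat))
    z = (lat - origin_lat) * METERS_PER_DEG_LAT
    y = alt_agl + ground_elevation_msl  # vertical axis carries MSL altitude
    return (x, y, z)
```

The resulting vector 3 coordinates are where the drone's icon and projector would be placed in step 408.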

[0039] Although a specific process is described above with respect to Fig. 4, one skilled in the art will recognize that any of a variety of processes may be utilized in accordance with embodiments of the invention.

Processes for Projecting Sensor Data from Drones onto a Map

[0040] Once the location(s) of drone(s) are known, the sensor data captured by the drone(s) can be displayed on the map. Fig. 5 illustrates a process for projecting sensor data onto a map in accordance with an embodiment of the invention.

[0041] The process 500 includes retrieving (502) telemetry of a drone. As mentioned above, telemetry can be in JSON or XML format. Sensor data captured by a sensor on the drone is extracted (504) from the telemetry. In some embodiments of the invention, the sensor data is a video frame or portion of video. In other embodiments, sensor data can be multi-spectral, chemical, auditory, or thermal. As discussed further above, any of a variety of sensors may be utilized on a drone to capture different information about the real-world environment.

[0042] The sensor data is converted (506) to a visual format. In some embodiments utilizing video, the video frame is converted into an image (e.g., JPEG).

[0043] The orientation of the sensor (e.g., if it is on a gimbal) is determined (508). The orientation can be obtained, for example, from the telemetry, and can include the location or position of the sensor or gimbal; the degree of tilt, roll, and pan of the gimbal; and/or the field of view of the camera. In some embodiments, when the location or position of the sensor or gimbal is not provided by the drone, it can be assumed to be at fixed default values or can be retrieved from the drone information database. Field of view may be provided as a measurement, such as degrees, or may be calculated from the lens size and sensor size of the camera.

[0044] A projector associated with the drone is rotated (510) within the virtual environment to match the sensor/gimbal orientation. The visual representation of the sensor data (e.g., image) is projected by the projector onto the map at the proper location and orientation previously determined to match the drone. An example screen of a graphical user interface showing a map and choices of available sensor data as standard (RGB) and thermal (IR) spectrum in accordance with an embodiment of the invention is illustrated in Fig. 9.
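To make the geometry of step 510 concrete, the sketch below estimates where the camera's optical axis would intersect flat ground given the gimbal tilt and pan, which is essentially what aiming the projector accomplishes. It assumes level terrain and a tilt measured from straight down; inside the engine, the projector would instead be rotated and the engine would ray-cast against the actual terrain mesh:

```python
import math

def footprint_center(drone_x, drone_z, altitude_agl, tilt_deg, pan_deg):
    """Estimate the ground point hit by the camera's optical axis.

    tilt_deg is measured from nadir (0 = pointing straight down);
    pan_deg is the gimbal heading. Assumes flat ground at z = 0 elevation.
    """
    # Horizontal distance from the drone to the aim point grows with tilt.
    reach = altitude_agl * math.tan(math.radians(tilt_deg))
    gx = drone_x + reach * math.sin(math.radians(pan_deg))
    gz = drone_z + reach * math.cos(math.radians(pan_deg))
    return (gx, gz)
```

For instance, a drone 100 m up with a 45-degree tilt aims roughly 100 m ahead of its ground position, so the projected image would land there on the map.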

[0045] In several embodiments of the invention, the projected visual sensor data remains on the map as the drone leaves the corresponding location in the real world. The map can be updated when new sensor data is available (e.g., from a different drone passing over the area or a next pass of the original drone).

[0046] Although a specific process is described above with respect to Fig. 5, one skilled in the art will recognize that any of a variety of processes may be utilized in accordance with embodiments of the invention.

Processes for Updating Drone Tasking

[0047] A user of the system may wish to get up-to-date information about a specific location on the map. They may utilize a controller interface at the command center to task a drone to visit that area. Fig. 6 illustrates a process for updating drone tasking in accordance with an embodiment of the invention.

[0048] The process 600 includes receiving (602) user input. An example graphical user interface showing a drone management screen is illustrated in Fig. 11. The drone management screen can display different filters by types of drones or locations of drones and allow a user to select a specific drone to control. An example user interface screen showing a UAV’s current flight path on the map is illustrated in Fig. 10.

[0049] The user input can be converted into instructions (604) for drone control. For example, if the user input is a set of WGS83 coordinates, it can be converted into directions for the drone to arrive at those coordinates. If the user input is selection of a location on a map, the coordinates of the location can be determined and then provided as directions for the drone. Alternatively, user input can be given as immediate controls (e.g., forward, reverse, slide or turn left/right, elevation up/down, rotate gimbal, etc.). One skilled in the art will recognize that other variations of drone control are possible. The drone is directed (606) using the instructions.
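The conversion in step 604 can be driven by the per-drone command hash table mentioned earlier. The sketch below is a minimal illustration; the command strings and table entries are invented for the example, since each real drone type would have its own native command set stored in the drone information database:

```python
# Hypothetical command hash table for one drone type; entries are
# illustrative, not actual drone commands.
COMMAND_TABLE = {
    "goto": "NAV_WAYPOINT {lat} {lon}",
    "forward": "RC_PITCH +1",
    "reverse": "RC_PITCH -1",
}

def build_command(user_input):
    """Translate GUI input into a drone control instruction.

    user_input is either ('goto', lat, lon) for coordinates picked on the
    map, or a tuple holding a direct control keyword like ('forward',).
    """
    if user_input[0] == "goto":
        _, lat, lon = user_input
        return COMMAND_TABLE["goto"].format(lat=lat, lon=lon)
    return COMMAND_TABLE[user_input[0]]
```

A map click would first be resolved to coordinates (as described above) and then passed through as a 'goto', while immediate controls map directly to table entries.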

[0050] Updated sensor data is received (608) from the drone and displayed in the virtual environment (e.g., by the drone’s associated projector).

[0051] Although a specific process is described above with respect to Fig. 6, one skilled in the art will recognize that any of a variety of processes may be utilized in accordance with embodiments of the invention.

Conclusion

[0052] Although the description above contains many specificities, these should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments of the invention. Various other embodiments are possible within its scope. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.