Title:
RETRIEVING AND ANALYSING DATA FROM A GEOSPATIAL DATA STORE
Document Type and Number:
WIPO Patent Application WO/2022/153020
Kind Code:
A1
Abstract:
A system and various methods for analysing geospatial data, including global-scale satellite imagery from a cloud-based repository of satellite data, and visualising the results on a map, using a visual programming graphical user interface (GUI) to allow complex analysis of larger data sets without the need for the user to write computer code instructions. The various methods are created in the GUI through the formation of individual "blocks". In this instance, a "block" is a visual representation of the underlying method written in a computer language. Each block may incorporate dropdown menus or other methods of user input, so that the user can select properties of the data, or variables applicable to each algorithm. The visual programming GUI provides the user the option to assemble multiple blocks to form workflows. Workflows incorporate two or more blocks by visually linking together the blocks in a coherent sequence. The blocks do not allow the user to assemble incoherent combinations of blocks. Each block may represent a variety of different functions, some examples of which are described herein.

Inventors:
BURCHART BENJAMIN JAMES (GB)
FLEMING SAMUEL JAMES (GB)
PATENAUDE GENEVIEVE (GB)
WOODHOUSE IAN HECTOR (GB)
Application Number:
PCT/GB2021/050100
Publication Date:
July 21, 2022
Filing Date:
January 15, 2021
Assignee:
QUOSIENT LTD (GB)
International Classes:
G06F8/34; G06F16/242; G06F16/29
Foreign References:
US20140282367A1, 2014-09-18
Other References:
XIN MOU ET AL: "Visual orchestration and autonomous execution of distributed and heterogeneous computational biology pipelines", 2016 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), IEEE, 15 December 2016 (2016-12-15), pages 752 - 757, XP033046440, DOI: 10.1109/BIBM.2016.7822615
GORELICK ET AL.: "Google Earth Engine: Planetary-scale geospatial analysis for everyone", REMOTE SENSING OF ENVIRONMENT, 2017
HANSEN ET AL.: "High-Resolution Global Maps of 21st-Century Forest Cover Change", SCIENCE, vol. 342, no. 6160, 2013, pages 850 - 853
MITCHARD, E., SAATCHI, S., WOODHOUSE, I., NANGENDO, G., RIBEIRO, N., WILLIAMS, M., RYAN, C., LEWIS, S., FELDPAUSCH, T., MEIR, P.: "Using satellite radar backscatter to predict above-ground woody biomass: A consistent relationship across four different African landscapes", GEOPHYS. RES. LETT., vol. 36, 2009, pages L23401
MARTIN SUDMANNS, DIRK TIEDE, STEFAN LANG, HELENA BERGSTEDT, GEORG TROST, HANNAH AUGUSTIN, ANDREA BARALDI, THOMAS BLASCHKE: "Big Earth data: disruptive changes in Earth observation data management and analysis?", INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2019
Attorney, Agent or Firm:
LINCOLN IP (GB)

Claims:

The invention claimed is:

1. A computer-implemented method of retrieving geospatial data from a geospatial data store comprising:
a. Providing a visual programming interface comprising a plurality of blocks representative of one or more predetermined instructions;
b. Constructing a workflow comprising a plurality of predetermined instructions by combining one or more blocks of the visual programming interface within a workspace;
c. Collecting one or more user preferences as parameters of the workflow;
d. Validating the workflow based on one or more of the predetermined instructions and/or the user preferences; and
e. Executing the workflow to retrieve geospatial data from the geospatial data store in accordance with the user preferences.

2. The method of claim 1, wherein validating the workflow comprises limiting the combination of blocks that are connected together to those which are compatible with one another.

3. The method of claim 1 or claim 2, wherein validating the workflow comprises comparing an attempted combination of blocks with a compatibility database which defines all possible valid combinations of blocks.

4. The method of any preceding claim, wherein validating the workflow comprises generating a new workflow responsive to an attempted combination of blocks being found invalid.

5. The method of claim 4, wherein the new workflow comprises an otherwise invalid combination of blocks.

6. The method of any preceding claim, wherein validating the workflow comprises limiting the user preferences available for selection dependent on a choice of data source.

7. The method of any preceding claim, wherein validating the workflow comprises limiting the choice of data source dependent on selected user preferences.

8. The method of any preceding claim, wherein constructing a workflow comprises dragging one or more blocks to the workspace from a list of available blocks.

9. The method of claim 8, wherein the list comprises blocks corresponding to a variety of inputs, processes, analysis types and/or outputs.

10. The method of any preceding claim, wherein the method comprises selecting at least one data source.

11. The method of claim 10, wherein the workflow comprises one or more blocks corresponding to the selected data source.

12. The method of claim 10 or claim 11, wherein the method comprises selecting parameters of the data source.

13. The method of any of claims 10 to 12, wherein validating the workflow comprises limiting the selectable parameters according to the selected data source.

14. The method of any preceding claim, wherein the method further comprises analysing the geospatial data.

15. The method of claim 14, wherein the workflow comprises one or more blocks corresponding to a selected analysis.

16. The method of claim 15, wherein the method comprises selecting parameters of the selected analysis.

17. The method of claim 15, wherein validating the workflow comprises limiting the selectable parameters according to the selected analysis.

18. The method of any preceding claim, wherein the method further comprises displaying the geospatial data.

19. The method of claim 18, wherein displaying the geospatial data comprises analysing the geospatial data.

20. The method of claim 18 or claim 19, wherein the workflow comprises one or more blocks corresponding to a selected display type.

21. The method of claim 20, wherein the method comprises selecting parameters of the selected display type.

22. The method of claim 21, wherein validating the workflow comprises limiting the selectable parameters according to the selected display type.

23. The method of any preceding claim, wherein the method further comprises displaying the results of analysing the geospatial data.

24. The method of any preceding claim, wherein the geospatial data and/or the results of analysing the geospatial data are overlaid on a map.

25. The method of any preceding claim, wherein the geospatial data and/or the results of analysing the geospatial data are displayed in graph or chart form.

26. The method of any preceding claim, wherein collecting one or more user preferences as parameters of the workflow comprises defining an area of interest by drawing a polygon on a map within the visual programming interface.

27. The method of any of claims 1 to 25, wherein an area of interest is defined by uploading a vector file to the visual programming interface.

28. The method of any of claims 1 to 25, wherein an area of interest is selected from a drop down list.

29. The method of any preceding claim, wherein the method comprises generating code which corresponds to the workflow, and in particular the predetermined instructions represented visually by the blocks.

30. The method of claim 29, wherein the predetermined instructions are manipulated and/or organised within the code so as to make logical sense.

31. The method of claim 29 or claim 30, wherein the generated code is updated dynamically and responsive to changes in the workflow, for example addition or removal of blocks or changes to user preferences.

32. The method of any of claims 29 to 31, wherein the method comprises compiling the code responsive to a user choosing to execute the workflow.

33. The method of any preceding claim, wherein the geospatial data store is a cloud server.

34. The method of claim 33, wherein the cloud server is connected to a user computer, on which the visual programming interface is provided, through electronic means.

35. The method of any preceding claim, wherein the method comprises selecting an area of interest, dragging an input block to the workspace and selecting a data source as a parameter of the input block, dragging an analysis block to the workspace and selecting an analysis type as a parameter of the analysis block, and dragging a visualisation block to the workspace and selecting a visualisation type as a parameter of the visualisation block, whereby when the workflow is executed data corresponding to the area of interest is obtained from the data source, analysed in accordance with the selected analysis type and the results of the analysis displayed in accordance with the selected visualisation type.

36. The method of claim 35, wherein the analysis block generates a time series, performs a comparison, applies one or more algorithms to the data, or performs any other processing on the data.

37. A computer program comprising instructions for implementing the method of any of claims 1 to 36.

38. A computing device configured to carry out the method of any of claims 1 to 36.

39. A computer-readable storage medium comprising the computer program of claim 37.

Description:
Retrieving and Analysing Data from a Geospatial Data Store

Field of the Invention

This invention relates to the task of retrieving data from a geospatial data store, in particular for the purpose of analysing and visualising large multi-dimensional data sets, for example as collected from various remote sensing systems and Earth observation satellites.

Background of the Invention

Terabytes of data are collected by remote sensing systems and Earth observation satellites every day. These data are important for, amongst other things, weather forecasting, climate modelling, oceanography, agricultural monitoring, forest and deforestation mapping, flood mapping, post-disaster evaluation, disaster management, and environmental assessment (Sudmanns, et al, 2019). Such data volumes are considered too large to simply download to a local user’s desktop computer. Providing access to such data, and allowing processing across large areas and long periods of time, requires the data to be stored and processed on cloud platforms. For example, Google provides such a platform of data storage and analysis through Google Earth Engine. Similarly, through their AWS (Amazon Web Services) cloud service, Amazon also provides storage and analysis of data.

There is a significant "barrier to entry" to processing this Earth observation satellite imagery caused by the high level of technical expertise required to install, run, modify and/or operate the required software. Various open source software tools already exist for pattern recognition, image classification, manipulation and analysis. However, they all require some degree of mastery over software installation, command line instruction, or batch processing. While some systems may be easy to install on one operating system, they may be more cumbersome on another (compare, for example, the straightforward installation of QGIS on a Windows machine with the multi-step installation on Mac OS). Commercial packages that offer an easier user experience are either expensive, and may also require all processing to be done exclusively online, or are task-specific, offering no functionality for the user to modify or customise the analysis. And although the freely available Google Earth Engine, mentioned above, was a pioneering step in opening access to an incredibly rich source of data, it requires a mastery of JavaScript which is beyond the capability of many students and others who might otherwise be able to exploit the power of satellite data.

There is therefore a need to reduce this barrier to entry by providing simpler means of accessing, analysing and manipulating the satellite data, while maintaining the user’s power to modify and customise the analytics and workflows.

Summary of the Invention

According to a first aspect of the invention, there is provided a computer-implemented method of retrieving geospatial data from a geospatial data store comprising:
a. Providing a visual programming interface comprising a plurality of blocks representative of one or more predetermined instructions;
b. Constructing a workflow comprising a plurality of predetermined instructions by combining one or more blocks of the visual programming interface within a workspace;
c. Collecting one or more user preferences as parameters of the workflow;
d. Validating the workflow based on one or more of the predetermined instructions and/or the user preferences; and
e. Executing the workflow to retrieve geospatial data from the geospatial data store in accordance with the user preferences.

Preferably, validating the workflow comprises limiting the combination of blocks that are connected together to those which are compatible with one another. Optionally, validating the workflow comprises comparing an attempted combination of blocks with a compatibility database which defines all possible valid combinations of blocks. Optionally, validating the workflow comprises generating a new workflow responsive to an attempted combination of blocks being found invalid. The new workflow may comprise an otherwise invalid combination of blocks.

Preferably, validating the workflow comprises limiting the user preferences available for selection dependent on a choice of data source. Alternatively, or additionally, validating the workflow comprises limiting the choice of data source dependent on selected user preferences.

Preferably, constructing a workflow comprises dragging one or more blocks to the workspace from a list of available blocks. Preferably, the list comprises blocks corresponding to a variety of inputs, processes, analysis types and/or outputs.

Preferably, the method comprises selecting at least one data source. Preferably, the workflow comprises one or more blocks corresponding to the selected data source. Preferably, the method comprises selecting parameters of the data source. Preferably, validating the workflow comprises limiting the selectable parameters according to the selected data source.

Preferably, the method further comprises analysing the geospatial data. Preferably, the workflow comprises one or more blocks corresponding to a selected analysis. Preferably, the method comprises selecting parameters of the selected analysis. Preferably, validating the workflow comprises limiting the selectable parameters according to the selected analysis.

Preferably, the method further comprises displaying the geospatial data. Optionally, displaying the geospatial data comprises analysing the geospatial data. Preferably, the workflow comprises one or more blocks corresponding to a selected display type.

Preferably, the method comprises selecting parameters of the selected display type.

Preferably, validating the workflow comprises limiting the selectable parameters according to the selected display type. Optionally, the method further comprises displaying the results of analysing the geospatial data. The geospatial data and/or the results of analysing the geospatial data may be overlaid on a map. Alternatively, or additionally, the geospatial data and/or the results of analysing the geospatial data may be displayed in graph or chart form.

Optionally, collecting one or more user preferences as parameters of the workflow comprises defining an area of interest by drawing a polygon on a map within the visual programming interface. Alternatively, an area of interest is defined by uploading a vector file to the visual programming interface. Further alternatively, an area of interest is selected from a drop down list.

Preferably, the method comprises generating code which corresponds to the workflow, and in particular the predetermined instructions represented visually by the blocks. Preferably, the predetermined instructions are manipulated and/or organised within the code so as to make logical sense. Preferably, the generated code is updated dynamically and responsive to changes in the workflow, for example addition or removal of blocks or changes to user preferences. Preferably, the method comprises compiling the code responsive to a user choosing to execute the workflow. Optionally, the geospatial data store is a cloud server. Optionally, the cloud server is connected to a user computer, on which the visual programming interface is provided, through electronic means.

In a preferred embodiment of the first aspect of the invention, the method comprises selecting an area of interest, dragging an input block to the workspace and selecting a data source as a parameter of the input block, dragging an analysis block to the workspace and selecting an analysis type as a parameter of the analysis block, and dragging a visualisation block to the workspace and selecting a visualisation type as a parameter of the visualisation block, whereby when the workflow is executed data corresponding to the area of interest is obtained from the data source, analysed in accordance with the selected analysis type and the results of the analysis displayed in accordance with the selected visualisation type.

The analysis block may generate a time series, perform a comparison, apply one or more algorithms to the data, or perform any other processing on the data.

According to a second aspect of the invention, there is provided a computer program comprising instructions for implementing the method of the first aspect.

Embodiments of the second aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to a third aspect of the invention, there is provided a computing device configured to carry out the method of the first aspect.

Embodiments of the third aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to a fourth aspect of the invention, there is provided a computer-readable storage medium comprising the computer program of the second aspect.

Embodiments of the fourth aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to another aspect of the invention, there is provided a computer-implemented method of retrieving geospatial data from a storage device comprising:
a. Providing a visual programming interface comprising a plurality of blocks representative of one or more predetermined instructions;
b. Constructing a workflow comprising a plurality of predetermined instructions by combining one or more blocks of the visual programming interface within a workspace;
c. Collecting one or more user preferences as parameters of the workflow;
d. Validating the workflow based on one or more of the predetermined instructions and/or the user preferences; and
e. Executing the workflow to retrieve geospatial data from the storage device in accordance with the user preferences.

Embodiments of this aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa, and in particular the first aspect.

According to another aspect of the invention, there is provided a method for finding data on a storage device comprising:
a. A visual programming interface that collects user preferences of data source, and spatial and temporal boundaries;
b. Determining the possible range of user choices given the choice of data source;
c. Limiting user choices by constraining preference selection to ranges available in the accessible data store;
d. Limiting the combination of choices or sequence of instructions such that the resulting instruction makes logical sense;
e. Retrieving the data that meets the user requirements.

Optionally, the data storage device is a cloud server connected to the user computer through electronic means. Preferably, the data comprises geospatial data.

Embodiments of this aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to another aspect of the invention, there is provided a method for applying analytical processes to data retrieved from a data storage device comprising:
a. A visual programming interface that collects user preferences as parameters for the analytical process;
b. Determining the validity or otherwise of the user preferences, as governed by the mode of analysis selected;
c. Limiting the combination of choices or sequence of instructions such that the resulting instruction makes logical sense;
d. Executing an algorithm, executing an instruction, or implementing a specific logical function to generate one or more new images that correspond to the results of the analysis;
e. Determining the validity or otherwise of the analytical results.

Optionally, the data storage device is a cloud server connected to the user computer through electronic means. Preferably, the data comprises geospatial data.

Embodiments of this aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to another aspect of the invention, there is provided a method of displaying the results of analysis on a computer screen comprising:
a. The selection by the user of appropriate image layers to display by way of a visual programming interface;
b. In the event of no selection by the user, the designation of appropriate image layers to display, based upon the choice of data source and analytics conducted;
c. The correct scaling of image layers so that they are appropriately displayed on the computer screen;
d. By way of a visual programming interface, providing the user with the ability to jointly manipulate the image layers through the application of arithmetic relationships, or logical functions, between the selected image layers;
e. Updating a display on the screen with the final image layers.

Optionally, the data storage device is a cloud server connected to the user computer through electronic means. Preferably, the data comprises geospatial data.

Embodiments of this aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

According to another aspect of the invention, there is provided a method of displaying the changes over time of satellite data on a computer screen comprising:
a. The selection by the user of appropriate image layers over a time period to display by way of a visual programming interface;
b. In the event of no selection by the user, the designation of appropriate image layers to display, based upon the choice of data source and analytics conducted;
c. The correct scaling of image layers so that they are appropriately displayed on the computer screen;
d. The provision of a visual slider representing the passage of time that allows the user to see the multitemporal image layers in a chronological (or reverse chronological) sequence;
e. By way of a visual programming interface, providing the user with the ability to jointly manipulate the image layers through the application of arithmetic relationships, or logical functions, between the selected image layers;
f. Outputting the final image layers in a suitable image file format.

Optionally, the data storage device is a cloud server connected to the user computer through electronic means. Preferably, the data comprises geospatial data.

Embodiments of this aspect of the invention may comprise features corresponding to the preferred or optional features of any other aspect of the invention or vice versa.

Brief Description of the Drawings

There will now be described, by way of example only, embodiments of aspects of the invention with reference to the drawings, of which:

Figure 1 is a schematic depiction of the architecture showing the relationship between the user interface and the image database and processing platform;

Figure 2 is a flow chart detailing the logical processing steps involved in Figure 1;

Figure 3 is a more detailed schematic of the Server element of Figure 1;

Figure 4 shows a map display which forms part of a user interface allowing a user to select an area of interest in a first exemplary application of the invention;

Figure 5 shows a selectable list of blocks which relate to the selected area of interest in the map display shown in Figure 4;

Figure 6 (a) to (e) show how a workflow is created within a workspace starting with the selection of a desired block from the list shown in Figure 5;

Figure 7 shows an overall view of the user interface comprising the workspace shown in Figure 6 and the map display shown in Figure 4;

Figure 8 shows an overall view of the user interface comprising the workspace shown in Figure 6 and the map display shown in Figure 4, after the workflow shown in Figure 6 is executed;

Figure 9 shows a map display which forms part of a user interface allowing a user to select an area of interest in a second exemplary application of the invention;

Figure 10 shows how an area of interest in the map display shown in Figure 4 may be defined by uploading a polygon file;

Figure 11 shows the creation of a workflow within a workspace starting with the selection of a desired block from a predetermined list;

Figure 12 shows an overall view of the user interface comprising the workspace shown in Figure 11 and the map display shown in Figure 10(b);

Figure 13 shows the map display shown in Figure 10(b), after the workflow shown in Figure 11 is executed;

Figure 14 shows a dashboard display comprising charts and tables after the workflow shown in Figure 11 is executed;

Figure 15 shows the creation of a workflow within a workspace starting with the selection of an area of interest from a predetermined list in a third exemplary embodiment of the invention;

Figure 16 shows a map display containing the output of the workflow in Figure 15 in visual form;

Figure 17 is a flowchart outlining the process by which a user selects an area of interest corresponding to that in Figure 4;

Figure 18 is a flowchart outlining the process by which a user selects an area of interest corresponding to that in Figures 9 and 10;

Figure 19 is a flowchart outlining the general process by which a workflow is constructed;

Figure 20 is a flowchart outlining the running of a workflow in accordance with an embodiment of the invention; and

Figure 21 is a flowchart outlining the general process by which information is displayed on a dashboard in accordance with an embodiment of the invention.

Detailed Description of the Invention

Reference will now be made in detail to embodiments of the present invention, one or more examples of which are described in the accompanying text. Each example is provided by way of explanation of the invention, not as a limitation of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used in another embodiment to yield a still further embodiment. Thus, it is intended that the present invention cover such modifications and variations that come within the scope of the appended claims and their equivalents.

One or more different inventions may be described in the present application. Further, for one or more inventions described herein, numerous alternative embodiments may be described; it should be appreciated that these are presented for illustrative purposes only and are not limiting of the inventions contained herein or the claims presented herein in any way. One or more of the inventions may be widely applicable to numerous embodiments, as may be readily apparent from the disclosure. In general, embodiments are described in sufficient detail to enable those skilled in the art to practice one or more of the inventions, and it should be appreciated that other embodiments may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the particular inventions. Accordingly, one skilled in the art will recognize that one or more of the inventions may be practiced with various modifications and alterations. Particular features of one or more of the inventions described herein may be described with reference to one or more particular embodiments or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific embodiments of one or more of the inventions. It should be appreciated, however, that such features are not limited to usage in the one or more particular embodiments or figures with reference to which they are described. The present disclosure is neither a literal description of all embodiments of one or more of the inventions nor a listing of features of one or more of the inventions that must be present in all embodiments.

Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way. Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical.

A description of an embodiment with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate a wide variety of possible embodiments of one or more of the inventions and in order to more fully illustrate one or more aspects of the inventions. Similarly, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may generally be configured to work in alternate orders, unless specifically stated to the contrary. In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the invention(s), and does not imply that the illustrated process is preferred. Also, steps are generally described once per embodiment, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some embodiments or some occurrences, or some steps may be executed more than once in a given embodiment or occurrence. When a single device or article is described herein, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described herein, it will be readily apparent that a single device or article may be used in place of the more than one device or article.

The functionality or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality or features. Thus, other embodiments of one or more of the inventions need not include the device itself. Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be appreciated that particular embodiments may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions or blocks in figures should be understood as representing modules, segments or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of embodiments of the present invention in which, for example, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.

Generally, the techniques disclosed herein may be implemented in hardware or a combination of software and hardware. For example, they may be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specifically constructed machine, on an application-specific integrated circuit (ASIC), or on a network interface card.

Software/hardware hybrid implementations of at least some of the embodiments disclosed herein may be implemented on a programmable network-resident machine (which should be understood to include intermittently connected network-aware machines) selectively activated or reconfigured by a computer program stored in memory. Such network devices may have multiple network interfaces that may be configured or designed to utilize different types of network communication protocols. A general architecture for some of these machines may be described herein in order to illustrate one or more exemplary means by which a given unit of functionality may be implemented. According to the specific embodiments, at least some of the features or functionalities of the various embodiments disclosed herein may be implemented on one or more general-purpose computers associated with one or more networks, such as for example an end-user computer system, a client computer, a network server or other server system, a mobile computing device (e.g., a tablet computing device, mobile phone, smartphone, laptop, or other appropriate computing device), a consumer electronic device, a music player, or any other suitable device, or any combination thereof. In at least some embodiments, at least some of the features or functionalities of the various embodiments disclosed herein may be implemented in one or more virtualized computer environments (e.g. network computing clouds, virtual machines hosted on one or more physical computing machines, or other appropriate virtual environments).

For the avoidance of doubt, the term geospatial data is used herein in the conventional sense to describe inter alia “data about objects, events, or phenomena that have a location on the surface of the earth. The location may be static in the short-term (e.g., the location of a road, an earthquake event, children living in poverty), or dynamic (e.g., a moving vehicle or pedestrian, the spread of an infectious disease). Geospatial data combines location information (usually coordinates on the earth), attribute information (the characteristics of the object, event, or phenomena concerned), and often also temporal information (the time or life span at which the location and attributes exist)” (Stock and Guesgen, “Automating Open Source Intelligence”, 2016). Typical sources of geospatial data “provide multispectral imagery at similar resolutions that distinguishes land use, vegetation cover, soil type, urban areas, and other elements” (Ribarsky, “Visualization Handbook”, 2005). Accordingly, geospatial data is not limited to satellite imagery, but encompasses any kind of data which might be stored in a geospatial data store, “including observations from a variety of satellite and aerial imaging systems in both optical and non-optical wavelengths, environmental variables, weather and climate forecasts and hindcasts, land cover, topographic and socio-economic datasets” (Gorelick et al, “Google Earth Engine: Planetary-scale geospatial analysis for everyone”, Remote Sensing of Environment, 2017).

In at least one embodiment, the present invention tackles the technical barrier-to-entry problem which otherwise prevents widespread access to and analysis of geospatial data, by using a visual programming interface that provides an easy-to-use web application to run the software in the background, as well as display the results (as a map or as charts and/or tables).

A key innovation is the linking of a visual programming interface with a cloud-based geospatial data platform to create a user-friendly experience for querying and processing geospatial data. A platform according to the invention can make use of freely available satellite data catalogues hosted on the cloud as well as cloud-based analysis capabilities, or it can be used as an engine for processing data on proprietary data storage and processing platforms (such as AWS, for instance), or it can be used to select and process images or other geospatial data on a local machine. It can also integrate a visual programming library for creating a block-based toolbox for geospatial data analysis. Each block in the interface may perform a certain analytical or logical computing step. A user can therefore modify and customize the collection of blocks when and where necessary by combining multiple blocks into new workflows. In at least one embodiment, the present invention also includes premade blocks that conduct standardised, or common, analysis such as detecting anomalies, feature tracking, supervised and unsupervised image classification, visualization, and exporting image files.

User interaction with the interface is preferably of a “drag and drop” style by which the above-mentioned blocks are manipulated and made to interact, rather than a text-based or check-box interface. As such it can be simple to use for beginners, but offer sufficient flexibility for more advanced users to customise complex workflows and create standardised operating procedures. Amongst the main technical challenges that the Applicant has overcome is building the interface such that novices and experts alike have the ability to conduct elaborate tasks while being assured that the instructions are kept meaningful (to the data source), and even optimised for performance without the user necessarily knowing how (and importantly without the user needing to know).

As part of that ambition, a platform according to an embodiment of the invention may include workflows designed to address industry-specific needs, such as identifying illegal mining, monitoring land cover change, or mapping flooding, each of which may be built from a collection of existing complex algorithms. The platform improves on state-of-the-art cloud processing platforms (for example, EO Browser, Copernicus DIAS, Google Earth Engine) as it offers users the functionality to write their own processing steps, as if they were writing code, but all done via a visual interface. Compared to other tools, which tend to offer a finite selection of options on a webpage (or customisation by writing lines of code, which is what the present invention was conceived to avoid), the platform offers near limitless possibilities in functionality and customisation through the above-mentioned block-based approach.

As intimated above, a ‘block’ is a visual representation of the underlying algorithm or set of instructions written in a computer coding language (the particular language is not consequential). Each block may incorporate user input, e.g. through dropdown menus, so that the user can select properties of the data, or variables applicable to each algorithm. The API negotiator can send requests to any data storage or service that has an API. Examples of a block-based representation of specific workflows are shown in Figures 5 to 8, 10 and 11. A potential target user group for a platform according to an embodiment of the invention is therefore those who may have some prior knowledge of remote sensing and/or a familiarity with geospatial data and the opportunities it provides, but have never learned to code, or those who have had some experience of writing code but lack proficiency in the particular language required to interrogate a particular data source. For example, JavaScript is the coding language required for writing instructions for Earth Engine but some users of Earth observation data need to be proficient in other languages, such as Python. A familiarity with some form of programming language is a benefit, but not a prerequisite to make use of the present invention.

Embodiments of the present invention use pictorial, icon-like, “blocks” to represent functional, or logical, processes such as loops, conditionality, and threads, rather than requiring a user to write out the equivalent programming instructions in text. Variables, conditionalities (including Boolean), and arithmetic procedures may be incorporated using labelled drag-and-drop elements, drop-down menus, or text entry via a keyboard. Blocks that perform different functions may have different block shapes to provide visual cues to the user as to their function, and different blocks may be connected together to develop longer and/or complex sets of instructions. The blocks may be restricted in how they can be connected so as to avoid syntax errors, or to avoid the combination of incompatible blocks, and the blocks may be provided with visual cues (such as “notches” top and/or bottom) to help the user join blocks correctly.
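By way of illustration only, the following minimal sketch (in Python, using hypothetical block names and a hypothetical compatibility table rather than the actual implementation) shows one way in which the joining of blocks could be restricted to compatible combinations:

    # Each entry lists the block types that may directly follow the key type.
    COMPATIBLE_SUCCESSORS = {
        "input": {"analysis", "visualisation"},
        "analysis": {"analysis", "visualisation", "output"},
        "visualisation": {"output"},
        "output": set(),
    }

    def is_valid_sequence(block_types):
        """Return True if every adjacent pair of blocks is a permitted join."""
        for current, following in zip(block_types, block_types[1:]):
            if following not in COMPATIBLE_SUCCESSORS.get(current, set()):
                return False
        return True

    # An input feeding an analysis feeding a visualisation is allowed,
    # whereas a visualisation feeding an input is rejected.
    assert is_valid_sequence(["input", "analysis", "visualisation"])
    assert not is_valid_sequence(["visualisation", "input"])

In practice the compatibility rules could equally be held in a compatibility database, as described for the validation step above.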

In order to select, analyse and visualise geospatial data (including for example satellite imagery), on a near real-time basis and in accordance with an embodiment of the invention, the following ongoing operations are performed.

FIG 1 is a block diagram of an exemplary system architecture 100 for performing a visual programming-based analysis of geospatial data (which may comprise satellite imagery), according to a preferred embodiment of the invention. According to the embodiment, an orthorectified geospatial image (OGI) database and analytical platform 101 may comprise a plurality of stored image information, for example as captured by a plurality of image capture sources such as imaging satellites, non-imaging satellites, aircraft, drones, or other geospatial data-capture devices that can be represented by an image, such as for example laser scanner point clouds. In one embodiment, the OGI may be a cloud-based server such as Google Earth Engine or Amazon Web Services. In another embodiment, the OGI may be a local server or local disk drive that performs the functions of a data server.

The OGI may be utilized by a curation interface for manual curation by a human user, for example via a software-based graphical interface, or command line interface, configured to allow a user to manually review or modify image data. According to the embodiment disclosed herein, the curation interface is replaced by a combination of elements that comprise a Server 104 and a GUI 105. The Server communicates with the OGI via a WEB Application Programming Interface (API). A WEB API is a specific kind of interface between two computers whereby the client computer makes a request in a specific format, in order to always get a response in a specific format or initiate a defined action. The user interface 105 comprises a Toolbox 103 and a Data Display tool 106. A Query and Validation (QAV) Tool 102 ensures that parameters selected in the Toolbox are valid and appropriate and ensures that the received data response satisfies certain formatting standards before being displayed or plotted on a graph.

FIG 2 represents the sequence in a flow chart 200. According to the embodiment, a Toolbox 202 (and 103) utilises a visual programming library to display visual representations of computer code (referred to as “blocks” 203) to allow a user to manually select a plurality of instructions about which data to view, manipulate or analyse. The visual programming interface allows the user to assemble computing code, and computing code instructions (such as loops or conditional statements) by using a drag-and-drop interface that is composed of draggable “blocks” 203. The user does not need to type computing code instructions. Each block contains instructions or computing code. Multiple blocks can be assembled in order to construct workflows 201. Workflows comprise collections of instructions that may replicate one instruction multiple times, or may provide multiple instructions sequentially, or may provide more complex instructions that replicate any instructions that can be made using command line instructions or computer code. In the embodiment disclosed herein, there are blocks that are created to perform specific tasks. These tasks include, but are not limited to, the selection of query dates, the selection of areas of interest, the selection of data sources and types of data, the visualisation of data and the manner in which it is displayed, and the output of data in various formats and realisations. Within the Toolbox 202 is a menu that holds blocks of common code concepts, and a workspace, where the user can drag and drop individual blocks to create larger workflows. When satisfied with the workflow, the user can initiate the instructions defined by the workflow. In the embodiment disclosed herein, the blocks, and combinations of blocks in workflows, can generate code in JavaScript, Python and many other languages. Each block in the toolbox in the GUI has a block definition and a generator. The block definition controls the appearance of the block and specifies how it is going to be rendered in the interface, while a generator assembles processing parameters selected by the user and generates instructions in the form of executable computer code. In the embodiment disclosed herein, these blocks also modify the user selection options dynamically via the QAV 204 so that incompatible selections cannot be made by the user. For example, if a user selects a particular satellite mission, the choice of dates automatically constrains the user to those dates during which the mission collected data. The toolbox GUI constrains the combination of blocks only to those blocks that are compatible, or consistent with each other, both in terms of instruction and order or sequence of instructions.
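As a hedged illustration of the block definition/generator pairing and of the QAV-style date constraint described above, the following Python sketch uses hypothetical mission names, coverage dates and a simplified generated instruction; it is not the actual block catalogue or code generator:

    import datetime

    # Hypothetical acquisition periods used to constrain selectable dates.
    MISSION_COVERAGE = {
        "Sentinel-2": (datetime.date(2015, 6, 23), datetime.date.today()),
        "Landsat 5": (datetime.date(1984, 3, 1), datetime.date(2013, 6, 5)),
    }

    def constrain_dates(mission, start, end):
        """Clamp a requested date range to the period the mission collected data."""
        first, last = MISSION_COVERAGE[mission]
        return max(start, first), min(end, last)

    class DataSelectionBlock:
        """Block definition: which parameters the block offers to the user."""
        fields = {"mission": list(MISSION_COVERAGE), "start": None, "end": None}

        def __init__(self, mission, start, end):
            self.mission = mission
            self.start, self.end = constrain_dates(mission, start, end)

        def generate(self):
            """Generator: turn the user's selections into an executable instruction.
            The collection identifier is a placeholder, not a real catalogue ID."""
            return ("collection = ee.ImageCollection('%s')"
                    ".filterDate('%s', '%s')" % (self.mission, self.start, self.end))

    block = DataSelectionBlock("Sentinel-2",
                               datetime.date(2000, 1, 1), datetime.date(2020, 1, 1))
    print(block.generate())  # the start date is clamped to the mission's first acquisition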

The generated code is then compiled by the Server 205 (and 104) and the request is sent to the OGI 206 for execution and processing. In the current embodiment the cloud storage server is Google Earth Engine (GEE), but the method works with any similar cloud storage device that holds archives of satellite imagery and can run algorithmic processes on these data.

The OGI returns the output or results to the Server 207. In this instance, before presenting the results to the user, the QAV 208 validates that the results are in an appropriate data format. In the embodiment whereby the outputs are an image, the Data Display tool 209 renders the image data in a suitable format, which may include scaling, stretching, enhancing, rescaling or combining with other images as appropriate in order to display a meaningful image on the map projection, or to generate downloadable images in a format suitable for use in other software tools or for display purposes, or to combine images in animated sequences or video. In the embodiment whereby the outputs are strings or tables of data, the Server curates the data in a form suitable for the Data Display tool to visualise as a graph, a table or a combination of graphs and tables in a collective dashboard display.

The Toolbox 202 and the Data Display 209 operate on a computing device, on any number of computer platforms, such as web browsers or dedicated graphical interfaces. According to the specific embodiments, at least some of the features or functionalities of the various embodiments disclosed herein may be implemented on one or more general-purpose computers associated with one or more networks, such as for example an end-user computer system, a client computer, a network server or other server system, a mobile computing device (e.g., a tablet computing device, mobile phone, smartphone, laptop, or other appropriate computing device).

The Server 104 may include one or more processing tasks and functions for handling data requests by the user and negotiating with the OGI via the API. In one embodiment the Server makes use of a Python backend, with the main framework used for request handling and database management being the webapp2 framework. FIG 3 represents the backend of the invention.

In this embodiment the figure represents the Server 300 with component modules comprising a user management module 301, a page (template) rendering module 303, and a code processing module 304. The exemplary Server 300 may employ one or more memories or memory modules configured to store data, program instructions for the general-purpose network operations, or other information relating to the functionality of the embodiments described herein (or any combinations of the above). In one embodiment the database may run on proprietary or commercially provided datastores, such as the Google App Engine datastore.

In one embodiment the user management module 301 uses an architecture that closely resembles the one described here: (the content of which is incorporated by reference). The module is made up of a number of request handlers, each corresponding to a step in the user authentication process that is necessary to ensure users can obtain secure, repeated access to the application:

(1) The Registration handler contains a method for rendering the user registration template. In one embodiment this is an HTML template containing a form with fields for username, email, name, password, and other pertinent information such as a short questionnaire, and a checkbox for accepting the terms and conditions. The data collected via the form is saved to the database on the Server, and an email is sent to the registered user for verification.

(2) The Verification handler contains a method for retrieving the verification token of a user, validating it, and updating the user status in the database.

(3) The Login handler contains methods for rendering the user authentication page - containing a sign-in form with fields for username and password - and for running the authentication flow. This flow attempts to authenticate the user based on their credentials, and stores an authentication key in the cookies for accessibility (i.e. avoiding having to repeat the authentication process on each page reload).

(4) The Logout handler contains a method for logging the user out.

(5) The Set Password handler contains a method for updating the user password.

(6) The Forgot Password handler contains a method for rendering the ‘forgot password’ page, containing a form for changing the user password. It also contains a method or methods for emailing the user their password change token, which is used for authenticating the user into the password change flow.

(7) The Base handler contains methods for retrieving each of the database fields, and for rendering the appropriate template for each step of the authentication flow.

Each handler described above corresponds to a similarly named view template, each of these being HTML pages. These, in turn, extend a base HTML template, which imports the required libraries and displays the application logo. The Page Rendering module 303 is made up of a number of request handlers that render their corresponding pages. In the embodiment disclosed herein, these pages include but are not limited to: the About page, the Tutorials page and the Main Application page. The Main Application handler prevents the page from loading if the user is not authenticated, redirecting them to the Login page.

The Code Processing module 304 is made up of a single request handler, which is used for running the user-generated code created by the Toolbox. In addition to this, the module relies on a front-end module, which is used for retrieving the user-generated code. While this module is not purely back-end, its main component (the handler) lies on the backend, and so the overarching flow relies on the backend to run. In the current embodiment the front-end module runs an Ajax GET request, which is part of the jQuery JavaScript library (https://jquery.com/). The user code is sent via this request to the backend handler.
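A minimal sketch of such a backend handler, written against the webapp2 framework mentioned above, is given below; the route, the request parameter name "code" and the placeholder run_on_ogi function are assumptions for illustration rather than the actual module:

    import json
    import webapp2

    def run_on_ogi(user_code):
        # Placeholder: in the described system this forwards the generated code to
        # the OGI via its API (a fuller Earth Engine sketch follows below).
        return {"status": "ok", "echo": user_code}

    class CodeProcessingHandler(webapp2.RequestHandler):
        def get(self):
            user_code = self.request.get("code")      # code assembled by the Toolbox
            result = run_on_ogi(user_code)
            self.response.headers["Content-Type"] = "application/json"
            self.response.write(json.dumps(result))   # consumed by the front-end Ajax call

    app = webapp2.WSGIApplication([("/process", CodeProcessingHandler)])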

The handler itself initializes the requests to the OGI via the API. In the current embodiment, the OGI is Google Earth Engine and the API is the EarthEngine Python API (https://github.com/google/earthengine-api) with the use of the host’s service account, as per the requirements of the AppEngine framework. Through this API, the handler makes requests to the OGI servers, where the user-generated code is run and the results are retrieved. In other embodiments, the host service account is replaced by a user service account. In another embodiment, the access to the API is via the user’s user account to the OGI. At the current stage, the OGI returns the retrieved results, which are parsed and sent back to the frontend Ajax call as a String response. In the embodiment disclosed herein this String is parsed into a JSON-formatted object, containing the product ID of the retrieved data, as well as a product token and other parameters specific to the data type. The current embodiment then uses the EarthEngine JavaScript API so that the data is displayed on the map window of the Data Display tool within the GUI, and dynamic interface elements are added according to the data type. These interface elements may be any number of metadata descriptors and annotations to the results, which include, but are not limited to: additional tabs containing generated data, such as tables (for instance, in the case of classification data) or graphs; a Legend map display in the case of colour-coded data (e.g. NDVI); a Class Selection map display, in the case of for example the supervised classification data; an Area Selection map display, in the case of for example the area selection blocks; a timespan slider, in the case of data output results being in a temporal sequence, so that, for example, the time-lapse data may be visualised on the map. The nature of these ancillary components and metadata descriptors can also be used to export results in a plurality of file formats, which may include, but are not limited to, CSV, PDF, ASCII, TIFF, GEOTIFF, GIF (and animated GIF), and other proprietary or non-proprietary file formats that may be appropriate for the storing and distribution of geospatial data and images.
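By way of a hedged example of this flow, the sketch below initialises the Earth Engine Python API with a service account and returns a map ID and token for display; the service account address, credentials path and choice of dataset are placeholders, and the user-generated code is not actually executed here:

    import ee

    # Placeholder service account credentials, as required by the App Engine setup.
    credentials = ee.ServiceAccountCredentials(
        "service-account@example.iam.gserviceaccount.com", "privatekey.json")
    ee.Initialize(credentials)

    def run_on_ogi(user_code):
        """Package a result from the OGI in the form expected by the front end."""
        # For the sketch, a public dataset stands in for the result of the
        # user-generated code; display tiles are requested from the OGI.
        image = ee.Image("UMD/hansen/global_forest_change_2015")
        map_info = image.getMapId({"bands": ["treecover2000"], "min": 0, "max": 100})
        return {"mapid": map_info["mapid"], "token": map_info.get("token", "")}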

Some of these displays are also added before the actual code processing, since they influence variables of the actual code. Namely, these are the class selection display and the area selection display.

In the current embodiment the Earth Engine Python API and the JavaScript API are used such that the former is employed for data acquisition purposes and data processing purposes, while the latter is only used for data acquisition and rendering. In other embodiments each component may make use of other computer coding languages in order to compose instructions to the API, render data, execute algorithms, and any other appropriate manipulation or modification or analysis of the data. The current embodiment also uses a map display API for the map display, adding polygons, displaying data, adding the map interface and other housekeeping functions for the display of map information.

The embodiment disclosed herein includes a toolbox that incorporates blocks designed to perform predefined analysis, operations and instructions. Each block includes various optional parameters that may be selected by the user through various visual mechanisms and subsequently included in the instructions sent to the OGI by the Server. Such mechanisms may be via a drop down menu, a free form entry, a visual calendar-based date selector, a selection of a point or area on a digital map, via the uploading of files that contain information such as data points, shapes, or images, or the selection of radio buttons, Boolean selections, or through the allocation of variables that may be related to other forms of data or previous results returned by the OGI. The plurality of image sources and data types available in the OGI, combined with the plurality and extensibility of individual blocks and assemblies of blocks in the form of workflows, results in a plurality of combinations of analytical functions and data sources. The embodiment disclosed herein is designed to allow users to build their own workflows, or to use predefined blocks or predefined workflows.

In one embodiment a block includes a Data Selection block. This block allows the user to select one or more data sources, satellite types, or methods of data collection, such that the Server makes a request to the OGI to retrieve data as defined by these choices. The block also requires the selection of one or more areas of interest (AOI), which can be selected via a drop-down menu listing countries, national parks, protected areas, regional boundaries, or other preselected AOIs, or via the selection of a polygon, circle, rectangular area, or point from the map display, or via the uploading of one or more files that describe one or more AOIs. Via the QAV the plurality of choices is constrained to include only meaningful and consistent combinations of parameters. In another embodiment the choices may include options to remove noise, defects, or other impairments in the requested data, for example the removal of clouds, instrument noise or speckle in radar imagery. In another embodiment, a choice is made to average over the date range using a variety of statistical options, such as mean, median, maximum, or minimum, or other such statistical properties as appropriate. In one embodiment all of the above options reside in one or more blocks. In another embodiment the choices appear as individual parameter choices within single blocks, and their combined selection is a consequence of assembling multiple blocks into a single workflow.
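
By way of illustration only, the following is a minimal sketch of the kind of request a Data Selection block might compose for the OGI (here Google Earth Engine). The dataset identifier, QA band and chosen statistic are illustrative assumptions drawn from the public Sentinel-2 catalogue entry, not the actual generated code.

```python
# Sketch of code a Data Selection block might emit for the OGI (Earth Engine).
# The dataset ID, QA band and statistic are illustrative assumptions.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([-4.5, 55.7, -3.0, 56.3])      # e.g. a drawn AOI

def mask_clouds(image):
    # Sentinel-2 QA60 band: bits 10 and 11 flag clouds and cirrus.
    qa = image.select('QA60')
    clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return image.updateMask(clear)

collection = (ee.ImageCollection('COPERNICUS/S2_SR')       # data source choice
              .filterBounds(aoi)                            # area of interest
              .filterDate('2020-06-01', '2020-09-01')       # date range choice
              .map(mask_clouds))                            # "remove clouds" option

composite = collection.median().clip(aoi)                   # statistical option: median
```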

Blocks may also issue instructions to manipulate, analyse, visualise, or display one or more image data sets, either individually or in combination with other image layers or other forms of data. By way of example, in one embodiment a block, or combination of blocks in a workflow, may calculate the biomass of a forest from satellite radar data. First, the Forest Change block is dragged from the toolbox to the workspace. The user selects “Biomass derived from PALSAR” from a drop-down list. The QAV routines determine the range of dates provided for the PALSAR instrument, and dates within that range can be selected. The parameter field is also updated to provide further available options for the PALSAR dataset. For example, the dropdown will display “Forest Cover, Loss, Gain, Loss and Gain, Loss, Extent and Gain” for the PALSAR and Hansen (Hansen et al., 2013) datasets. If the Above Ground Biomass dataset is selected, the field will display Biomass. The user then selects the area and time of interest. A Forest Change Visualisation block is inserted into the Forest Change block to form a workflow. The Forest Change Visualisation block also allows multiple options for output display: (a) if a boundary outline box is checked, a black outline of the polygon/country will be added to the display; (b) if a country is selected and a ‘Polygon’ box is checked, the output will also include a polygon of the selected country. When instructed by the user, the Toolbox sends the generated code to the Server, which then places the request to the OGI via the WEB API. When the data is returned to the Server it is scaled according to the approach described in Mitchard et al. (2009) for calculating above-ground woody biomass from ALOS PALSAR data. The resulting image is then scaled and rendered before being sent to the Data Display function, which projects the image onto a digital map.
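
By way of illustration only, the following sketch indicates how such a biomass workflow might be expressed against the OGI. The JAXA mosaic identifier and the DN-to-gamma0 calibration follow the publicly documented catalogue entry, while the coefficients a and b are placeholders that do not reproduce the Mitchard et al. (2009) regression or the platform's actual scaling.

```python
# Sketch of the Forest Change / biomass workflow's server-side step. The dataset
# ID is an assumption from the public catalogue; the coefficients a and b are
# hypothetical and do NOT reproduce the published Mitchard et al. (2009) fit.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([32.0, -2.0, 33.0, -1.0])       # hypothetical AOI

palsar = ee.Image(ee.ImageCollection('JAXA/ALOS/PALSAR/YEARLY/SAR')
                  .filterDate('2009-01-01', '2010-01-01')
                  .first())

# Convert HV digital numbers to gamma0 backscatter in dB (standard JAXA formula).
hv_dn = palsar.select('HV')
gamma0_db = hv_dn.pow(2).log10().multiply(10).subtract(83.0)

# Placeholder linear scaling from backscatter to above-ground biomass (t/ha).
a, b = 10.0, 200.0                                           # hypothetical coefficients
biomass = gamma0_db.multiply(a).add(b).max(0).clip(aoi)

# Rendered layer handed to the Data Display function for projection on the map.
layer = biomass.getMapId({'min': 0, 'max': 300, 'palette': ['white', 'green']})
```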

In other embodiments, a plurality of functions may be applied to the data. In the embodiment disclosed herein, such functionality includes, but is not limited to, Principal Component Analysis (PCA), Tasseled Cap Transformations, supervised and unsupervised classification, change detection, image differencing, forest loss detection, post-classification analysis, time series analysis.

There now follows some further specific examples illustrating embodiments of the invention for the purposes of demonstrating the technical benefits arising therefrom.

Example 1: NDVI Time Series Analysis

Figures 4 to 8 are illustrative of the construction and running of a workflow for analysing normalized difference vegetation index (NDVI) data for a specific geographical region (in this example an area including Central Scotland). A user is presented with a map (Figure 4) which can be manipulated in a conventional manner (e.g. dragged to move, scrolled to zoom in and out) to display an area of interest or at least a region containing an area of interest. In this example the user is interested in Central Scotland and has zoomed in on a portion of the United Kingdom. After clicking on “ADD AREA” 401 the user has drawn a rectangular box 402 to specify an area hereafter designated “Area 1”.

Subsequently, the user is able to choose (Figure 5) from a selectable list of blocks 501 which pre-designate said “Area 1”. This selection is done via a hierarchical list 502 which contains blocks corresponding to a variety of inputs, processes, analysis types and outputs. The blocks available to the user under the category “Input” represent different data types and/or sources, for example under sub-category “Imagery” the user is able to choose from optical data from Landsat 8 and other sources (501A), satellite imagery from Landsat 8 and other sources (501B), radar data from Sentinel-1 SAR GRD and other sources (501C) and PALSAR Yearly Mosaic radar data (501D). In accordance with an embodiment of the invention this selectable list of blocks 501 may be filtered from a larger set of blocks dependent on the availability of data within the selected area.

Using the drag and drop interface which underpins the visual programming interface of the platform, the user adds an input block 601 (corresponding to block 501A) to the workspace 602 (Figure 6(a)). The workspace 602 provides the user with a virtual canvas upon which they may construct one or more workflows. This particular input block 601 is then modified to select a different source from the default (Landsat 8) using a drop down menu. As shown in Figure 6(b) the user has selected the Sentinel-2 satellite. Similarly, the list of available sources in the drop down menu is limited according to the availability of data within the selected area. Input block 601 also comprises two editable fields 603 which correspond to the start and end dates of the data set which the user wishes to retrieve from the data source. Again, the available dates may be filtered according to the availability of data.

The workflow is further refined as shown in Figure 6(c) by dragging a time series block 604 onto the input block 601. In this case the user has changed the interval selection to “Monthly”. The workflow is yet further refined as shown in Figure 6(d) by dragging an NDVI block 605 onto the time series block 604, and from the associated drop down list the user has selected the “Median” option. Finally, the workflow is completed by dragging a visualisation block 606 onto the NDVI block 605, and selecting the desired layer name from the drop down menu, in this case “NDVI Time Series”. As with the other blocks described above, the drop down menus of the time series block 604, NDVI block 605 and visualisation block 606 comprise a filtered list of options according to the availability of data and/or options. Furthermore, the blocks themselves are also filtered such that it is only possible to select blocks which are compatible with the earlier or preceding blocks in the workflow.
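
By way of illustration only, a minimal sketch of the code such a completed workflow might generate is given below, assuming the public Sentinel-2 catalogue entry; the dataset identifier, band names, area coordinates and date range are illustrative assumptions rather than the actual code hidden behind the blocks.

```python
# Sketch of the Example 1 workflow: a monthly median NDVI time series from
# Sentinel-2 over the drawn "Area 1". Dataset ID and bands are assumptions.
import ee
ee.Initialize()

area_1 = ee.Geometry.Rectangle([-4.4, 55.8, -3.6, 56.2])    # hypothetical Area 1

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')                # input block 601
      .filterBounds(area_1)
      .filterDate('2020-01-01', '2021-01-01'))               # editable date fields 603

def monthly_median_ndvi(month):
    # Time series block 604 ("Monthly") combined with NDVI block 605 ("Median").
    month = ee.Number(month)
    monthly = s2.filter(ee.Filter.calendarRange(month, month, 'month'))
    ndvi = monthly.map(lambda img: img.normalizedDifference(['B8', 'B4']))
    return ndvi.median().rename('NDVI').set('month', month)

# One NDVI layer per month; the visualisation block exposes these layers to the
# map window together with the time series slider.
ndvi_series = ee.ImageCollection(
    ee.List.sequence(1, 12).map(monthly_median_ndvi))
```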

Having constructed the desired workflow in visual form within the workspace, the user then executes the workflow by pressing the “Run” button 701 (see Figure 7), after which the outputs are visualised or overlaid on the map 801 (see Figure 8) with the relevant UI elements (legends, layers and time series slider) also displayed. As is evident from Figure 8, the platform provides an elegant and user-friendly way of generating code which is able to carry out sophisticated analysis without the need for intimate knowledge of programming languages. The block-based visual programming interface effectively hides the complex code which forms the actual algorithm or query which is executed on the geospatial data source (be it Google Earth Engine or otherwise).

The invention lies not only in the handling of complex code in a user-friendly manner; an important aspect is that the platform validates all aspects of the code to ensure that when the user presses the “Run” button, the desired output is delivered. This is achieved by validating the workflow at various points and in various ways. For example, as noted above, the platform limits the user to those blocks which are compatible with the workflow, and within those blocks the parameters are limited to those which are compatible with the data source. Furthermore, the corresponding code (which is hidden from the user) is manipulated dynamically as different block combinations and/or different parameter combinations are made. A benefit of the platform is that a particular block might appear simple to the user, yet the underlying code may be very complicated, even to the extent that it might actually use a different programming language depending on the data source. Further detail of the construction process is described below under “Constructing a Workflow” with reference to Figure 19.

Within this example, there is included a specific process by which a user “manually” draws an area on a map so as to define the area of interest. This process is described further below under “Selecting a Target Area” with reference to Figure 17.

Example 2: Active Fires and Land Cover Comparison, and Dashboards

Figures 9 to 14 are illustrative of the construction and running of a workflow for analysing active fires data for a specific geographical region (in this example an area of California including San Francisco, San Jose and Santa Cruz). A user is presented with a map (Figure 9) which, as described above, can be manipulated in a conventional manner (e.g. dragged to move, scrolled to zoom in and out) to display an area of interest or at least a region containing an area of interest. In this example the map display has defaulted to the previous view (from Example 1), which is a portion of the United Kingdom. After clicking on “ADD AREA” 901, instead of drawing the region of interest, the user clicks on “UPLOAD POLYGONS”, selects an appropriate vector file (Figure 10(a)) and clicks on the “UPLOAD” button 1001, whereupon the corresponding polygon 1002 is superimposed on the map 1003 in the appropriate location. The map view is automatically shifted to show the relevant geographical area (Figure 10(b)).

In order to view the daily active fires data, the active fires block 1101 (Figure 11(a)) is dragged onto the workspace and the desired options selected; in this case the dataset “Daily Active Fires (1000m) [Temperature in K]” between 1 August 2020 and 31 August 2020. In order to compare this data to land cover data, comparison block 1102 is dragged onto the active fires block 1101 and the desired land cover dataset (in this case Copernicus Global (2015)) is selected from the drop down menu (Figure 11(b)). The land cover classes that the user wishes to compare are selected by way of tick boxes.
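
By way of illustration only, a minimal sketch of the comparison such a workflow might request from the OGI is given below. The FIRMS and Copernicus land cover dataset identifiers, band names and the temperature threshold are assumptions based on the public Earth Engine catalogue rather than the actual generated code.

```python
# Sketch of the active fires / land cover comparison. Dataset IDs, bands and
# the 330 K threshold are illustrative assumptions from the public catalogue.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([-122.6, 36.8, -121.5, 37.8])   # hypothetical uploaded polygon

# Daily active fires, August 2020 (brightness temperature in kelvin).
fires = (ee.ImageCollection('FIRMS')
         .filterDate('2020-08-01', '2020-08-31')
         .select('T21')
         .max()
         .clip(aoi))

# Copernicus Global land cover (2015), discrete classification band.
landcover = ee.Image(
    ee.ImageCollection('COPERNICUS/Landcover/100m/Proba-V-C3/Global')
    .filterDate('2015-01-01', '2016-01-01')
    .first()).select('discrete_classification').clip(aoi)

# Tabulate how the detected fires fall across the selected land cover classes;
# a histogram of this kind feeds the charts and tables on the dashboard tab.
burned = fires.gt(330)                                       # hypothetical threshold
stats = landcover.updateMask(burned).reduceRegion(
    reducer=ee.Reducer.frequencyHistogram(),
    geometry=aoi, scale=100, maxPixels=1e9)
```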

In Example 1, the output of the workflow was superimposed on the map. In this example, at least three different outputs are provided. Firstly, the other visualisation block 1201 is dragged into the workflow, then the charts and tables blocks 1202, 1203 are dragged into the same position within the workflow. As a result, the data retrieved from the data source is simultaneously overlaid on the map 1301 (Figure 13) and presented in the form of charts 1403A, 1403B and tables 1405A, 1405B on a dashboard tab 1401 of the platform (Figure 14).

At each step, as with the blocks described in Example 1 above, the drop down menus of the active fires block 1101, the comparison block 1102, other visualisation block 1201 and chart block 1202, as well as the array of classes selectable by tick boxes in the comparison block 1102, comprise a filtered list of options according to the availability of data and/or options. Again, the blocks themselves are also filtered such that it is only possible to select blocks which are compatible with the earlier or preceding blocks in the workflow. Within this example, there is included a specific process by which a user selects an area on a map by uploading a vector file that defines the area of interest. This process is described further below under “Selecting a Target Area” with reference to Figure 18.

Example 3: Radar Data Band Math and Country Selection

In contrast to the examples above, in which a user selects an area of interest using polygons either “drawn” on or uploaded to the platform, a user may instead select an area of interest from a pre-populated drop down list of geographical regions. In this example, Figures 15 and 16 are illustrative of the construction and running of a workflow for applying algorithms to one or more bands in an image or set of images.

Firstly, the user drags the input block, which in this case is radar block 1501, to the workspace, selects the desired radar (Sentinel-1 SAR GRD) from the radar drop down menu 1502, and selects a specific geographical region (Ireland) from the area drop down menu (Figure 15(b)). As before, the area drop down menu is filtered according to the availability of data from the selected radar. The user then selects the desired date range, from the dates available, and the orbit type “both” (Figure 15(c)).

The user then drags the band math block 1503 onto the radar block 1501 so as to perform band math operations on the data retrieved from the data source, and individual band math blocks 1503A to 1503N onto the band math block 1503 in order to construct a custom algorithm. These individual band math blocks 1503A to 1503N represent “bite-sized” components of a computation and follow the overriding principles of being modular, compatible with the other blocks within the workflow and/or the math block, and compatible with the data source. Finally, as in the previous examples, a visualisation block 1504 is added in order to display the output of the workflow in a visual form. Figure 16 shows the output visualised on a map 1601 of the area of interest (in this case Ireland).
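
By way of illustration only, the following sketch shows the kind of band math such an assembly of blocks might compose for Sentinel-1 data; the expression itself, the dataset identifier and the area coordinates are illustrative assumptions.

```python
# Sketch of band math composed from individual band math blocks applied to
# Sentinel-1 radar data. The expression and dataset ID are illustrative only.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([-10.7, 51.4, -5.4, 55.5])      # roughly Ireland

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2020-06-01', '2020-07-01')
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .mean())

# Each band math block contributes one "bite-sized" term to the expression.
ratio = s1.expression(
    '(VV - VH) / (VV + VH)', {
        'VV': s1.select('VV'),
        'VH': s1.select('VH'),
    }).rename('vv_vh_normalised_difference').clip(aoi)
```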

Selecting a Target Area

As discussed above there are two methods disclosed herein by which a user may select an area of interest. In Example 1 this comprises selecting the area of interest via the interface, for example by drawing a polygon on a visual representation (i.e. a map image) of the area for which geospatial data is available. In Example 2 this comprises uploading a vector file which creates a corresponding map polygon.

Figure 17 is a flowchart outlining the process by which a user selects an area of interest corresponding to that in Example 1. As described above, the user clicks “ADD AREA” 1701 and the platform creates an empty object “Area Object” 1702. The user then uses a polygon tool to draw a shape on the map to create a map polygon 1703. In the example above this was rectangular in form but it may take any shape. The platform creates a visible map polygon on the map interface, which holds information about itself including the corner coordinates 1704. The platform then checks which area object (or area objects as the case may be) is selected 1705. If the platform determines that an area object is selected 1706 the corner coordinates of the map polygon are assigned to the empty Area Object 1707. If the platform determines that no area object is selected the map polygon is not usable 1708, and the user has to draw a map polygon (as in step 1703) and manually assign the map polygon to the Area Object 1709, after which the corner coordinates are assigned to the Area Object 1707.
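
By way of illustration only, the following sketch mirrors the flowchart steps of Figures 17 and 18; the class and attribute names are hypothetical and are chosen only to reflect the Area Object and map polygon described above.

```python
# Sketch of the Area Object bookkeeping in Figures 17/18. Names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Coordinate = Tuple[float, float]

@dataclass
class AreaObject:
    name: str
    corner_coordinates: Optional[List[Coordinate]] = None   # empty until assigned

@dataclass
class MapPolygon:
    corner_coordinates: List[Coordinate]                    # held by the map polygon

def assign_polygon(polygon: MapPolygon,
                   selected_area: Optional[AreaObject]) -> bool:
    """Assign the polygon's corner coordinates to the selected Area Object."""
    if selected_area is None:
        # No area object selected: the map polygon is not usable until the user
        # manually assigns it to an Area Object.
        return False
    selected_area.corner_coordinates = polygon.corner_coordinates
    return True

# Usage: "ADD AREA" creates the empty object; drawing (or uploading a vector
# file) creates the map polygon whose corners are then assigned.
area_1 = AreaObject(name='Area 1')
drawn = MapPolygon(corner_coordinates=[(-4.4, 55.8), (-3.6, 55.8),
                                       (-3.6, 56.2), (-4.4, 56.2)])
assign_polygon(drawn, area_1)
```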

Figure 18 is a flowchart outlining the process by which a user selects an area of interest corresponding to that in Example 2. As described above, the user clicks “ADD AREA” 1801 and the platform creates an empty object “Area Object” 1802. The user then clicks “UPLOAD POLYGON” and uploads a vector file in a valid format 1803. The platform reads the vector, checks that it is valid, and creates a visible map polygon on the map interface, which holds information about itself including the corner coordinates 1804. The platform then checks which area object (or area objects as the case may be) is selected 1805. If the platform determines that an area object is selected 1806 the corner coordinates of the map polygon are assigned to the empty Area Object 1807. If the platform determines that no area object is selected the map polygon is not usable 1808, and the user has to upload a vector file so as to define a map polygon (as in step 1803) and manually assign the map polygon to the Area Object 1809, after which the corner coordinates are assigned to the Area Object 1807.

Constructing a Workflow

The examples above describe specific ways in which workflows can be constructed in accordance with embodiments of the invention; there now follows a more generalised description of the construction of a workflow with reference to Figure 19.

As shown in at least Figures 5 to 8 and 12 above, the blocks available to a user are organised in a hierarchical list or “nested toolbox”. The first step of the process of constructing a workflow is to navigate the nested toolbox to find the block that is required to build the workflow, and drag this block onto the workspace 1901. A workflow is constructed from a plurality of blocks which may combine in a predetermined manner, but at a minimum the workflow must comprise an input block (which retrieves the data) and an output block (which outputs data or the results of data analysis) 1902.

When a new block is dragged onto the workspace, corresponding code is automatically generated 1903, following the principles outlined above. The code is modular, enabling code to be generated for different block combinations automatically, and the parameters of the code adjust dynamically as the user changes options. It is also structured in a strict way so as to operate within the software architecture, so that it is able to execute the necessary operations and (where appropriate) facilitate the visualisation of the outputs on behalf of the user. Put another way, the code generation in respect of a particular block is responsive and reactive to the limitations and/or requirements of other blocks and/or parameters such that it will always execute properly (also see below). Finally, the code is generated so that it can be executed on the relevant Analytical Engine (in the examples above this is the Google Earth Engine API) in order to successfully undertake the necessary computations.
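
By way of illustration only, a minimal sketch of such modular code generation is given below; the block templates, parameter names and generated fragments are hypothetical and are intended only to indicate how code may be assembled and regenerated as blocks and parameters change.

```python
# Sketch of the modular code generation at step 1903: each block contributes a
# template fragment and the workflow concatenates them into a single script for
# the Analytical Engine. Templates and parameter names are hypothetical.
BLOCK_TEMPLATES = {
    'input_s2':  "collection = ee.ImageCollection('{dataset}')"
                 ".filterBounds(aoi).filterDate('{start}', '{end}')",
    'ndvi':      "result = collection.map(lambda i: i.normalizedDifference(['B8', 'B4'])).{statistic}()",
    'visualise': "layer = result.getMapId({{'min': 0, 'max': 1}})",
}

def generate_code(workflow, aoi_literal):
    """Regenerated every time a block is added or a parameter is changed (step 1904)."""
    lines = ['import ee', 'ee.Initialize()', f'aoi = {aoi_literal}']
    for block in workflow:
        lines.append(BLOCK_TEMPLATES[block['type']].format(**block.get('params', {})))
    return '\n'.join(lines)

workflow = [
    {'type': 'input_s2', 'params': {'dataset': 'COPERNICUS/S2_SR',
                                    'start': '2020-01-01', 'end': '2021-01-01'}},
    {'type': 'ndvi', 'params': {'statistic': 'median'}},
    {'type': 'visualise'},
]
print(generate_code(workflow, 'ee.Geometry.Rectangle([-4.4, 55.8, -3.6, 56.2])'))
```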

When the user changes options and settings on the block itself (examples of which are described above in detail), the code changes dynamically 1904. For example, if a different satellite source or data type is selected it is necessary to generate code which retrieves and/or analyses that data in an appropriate way. The options themselves are pre-filtered so that a user is only presented with valid and logical options. As discussed above, this filtering may be extended to the blocks themselves such that it is not possible to construct incompatible or illogical workflows. An area of analysis must be selected 1905, and this may be as described above or otherwise. Accordingly a user may select an area by drawing a polygon, uploading a vector file, selecting a pre-determined area from a drop-down list, or any other appropriate way.

As noted above, some of the options may change responsive to the selected area, and/or some of the options may be restricted 1906. For example, changing the satellite might constrain the dates from which the user may select to those for which there is data from that satellite, or the bands (data from specific wavelengths) may only be selected from those which have actually been collected. The result of this is that data should always be retrieved; i.e. no zero or empty data sets will be returned from the data source.

Every time a new block is dragged to the workflow 1907, and its settings changed, the workflow code is changed dynamically. Furthermore, every time a new block is dragged to the workflow the platform checks that it is compatible 1908, and only permits the block to be added to the workflow where it makes logical sense.

This is achieved by querying a compatibility database 1909 which defines all possible block combinations between categories. If a block combination is valid 1910, the block is connected and the code is dynamically updated to include the instructions provided by the block. If the block combination is invalid 1911 the block will not be connected and may be discarded or added to the workspace as a new, separate workflow, and code generated for that workflow separately.
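
By way of illustration only, the compatibility check may be sketched as a simple lookup of permitted block-category combinations; the category names and table contents below are hypothetical stand-ins for the compatibility database.

```python
# Sketch of the compatibility check at steps 1908-1911. Category names and the
# table contents are hypothetical; in practice a compatibility database is queried.
COMPATIBILITY = {
    # parent category -> categories that may be attached to it
    'input':       {'time_series', 'band_math', 'comparison', 'visualisation'},
    'time_series': {'ndvi', 'visualisation'},
    'ndvi':        {'visualisation'},
    'band_math':   {'visualisation'},
    'comparison':  {'visualisation', 'charts', 'tables'},
}

def can_connect(parent_category: str, child_category: str) -> bool:
    """Return True if the dragged block may be connected to the target block."""
    return child_category in COMPATIBILITY.get(parent_category, set())

# Valid combination: the block is connected and the code is updated accordingly.
assert can_connect('time_series', 'ndvi')
# Invalid combination: the block is rejected or starts a new, separate workflow.
assert not can_connect('ndvi', 'input')
```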

Finally, as described above, the user presses the “RUN” button 1912 and the workflow (or workflows) is executed and run.

Running a Workflow

There now follows a description of the running of a workflow with reference to Figure 20. As in the last step of the process of constructing a workflow, the first step of running a workflow is that the user presses the “RUN” button 2001. Firstly, any outputs currently visible on the map or dashboard are cleared 2002. A new “job” is created 2003, associated with an organisation, given a unique ID and populated with the information about that job, including the code generated by the user (via the visual programming interface) following the process(es) described above. The job is then run on the data source 2004, which in the examples described herein is via the Google Earth Engine API. The necessary data selections and computations are then performed by the data source and the outputs returned to the platform 2005, where they are stored in a local database 2006. If there are multiple workflows these trigger individual and separate runs 2007, such that they are run in parallel.
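
By way of illustration only, the job lifecycle may be sketched as follows; the storage, helper names and threading approach are hypothetical and stand in for the platform's actual local database and parallel execution mechanism.

```python
# Sketch of the "job" lifecycle in Figure 20: each workflow becomes a job with a
# unique ID, is run against the data source, and its outputs are stored locally.
import uuid
from concurrent.futures import ThreadPoolExecutor

local_database = {}                      # stand-in for the platform's local database

def run_on_data_source(code: str):
    """Placeholder for executing the generated code via the Earth Engine API."""
    return {'layers': [], 'tables': []}  # outputs returned by the data source

def create_job(organisation: str, generated_code: str) -> dict:
    return {'id': str(uuid.uuid4()),     # unique ID
            'organisation': organisation,
            'code': generated_code}

def run_job(job: dict) -> str:
    outputs = run_on_data_source(job['code'])
    local_database[job['id']] = outputs  # stored results trigger visualisation
    return job['id']

# Multiple workflows trigger individual and separate runs, executed in parallel.
jobs = [create_job('example-org', code) for code in ('workflow_a', 'workflow_b')]
with ThreadPoolExecutor() as pool:
    list(pool.map(run_job, jobs))
```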

Once all of the data has been received and stored, the visualisation process is triggered 2008. On the client side, the platform detects the changes in the local database 2009 and, responsive to said change (which indicates that the workflow has been executed to completion on the data source), the data is then presented to the user in the selected manner(s) 2010. In the case of map visualisation, the platform requests map imagery from (in this example) the Google Maps API and displays the map on the user interface 2011. Where the workflow produces multiple map layers they are handled dependent on the type of analysis 2012. If the analysis is a time series, then the map layers are grouped and handled as one overarching layer on the user interface 2013 (on the user interface the map layers may be grouped and treated as a single layer (e.g. the year), although additional user interface elements may allow the user to choose which image (e.g. the month) from that collection is displayed at any one time); whereas if the analysis is not a time series then each individual layer is treated independently 2014.
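
By way of illustration only, the layer-handling decision may be sketched as follows; the structure and field names are hypothetical.

```python
# Sketch of the decision at steps 2012-2014: time series outputs are grouped
# under one overarching layer with a slider, other outputs stay independent.
def build_ui_layers(outputs, analysis_type: str):
    if analysis_type == 'time_series':
        # One parent layer (e.g. the year); the slider selects which child image
        # (e.g. the month) is visible at any one time.
        return [{'name': 'NDVI Time Series',
                 'children': outputs,
                 'controls': ['legend', 'layers', 'time_slider']}]
    # Non-time-series analyses: each layer is treated independently.
    return [{'name': o['name'], 'children': [o], 'controls': ['legend', 'layers']}
            for o in outputs]

monthly = [{'name': f'2020-{m:02d}'} for m in range(1, 13)]
ui_layers = build_ui_layers(monthly, 'time_series')
```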

Alternatively, or additionally, the user may have selected to display information on a dashboard, one example of which was described above with reference to Figure 14. In that case, dashboard-specific outputs from the data source are stored within a specific “data dictionary” within the local database 2101 (Figure 21). As above, the data is then presented to the user in the selected manner(s) 2102 following conventional procedures for visualising the data (for example bar charts and tables as illustrated in Figure 14). The available display types are predetermined and preselected by the user during the construction of the workflow 2103. Different user interface features are then overlaid on the dashboard 2104, for example legends, scales and labels, as appropriate.

The Applicant has conceived and reduced to practice a system and various methods for analysing geospatial data, including global-scale satellite imagery from a cloud-based repository of satellite data, and visualising the results on a map, using a visual programming graphical user interface (GUI) to allow complex analysis of larger data sets without the need for the user to write computer code instructions. The various methods are created in the GUI through the formation of individual “blocks”. In this instance, a “block” is a visual representation of the underlying method written in a computer language. Each block may incorporate dropdown menus, or some other method of user input, so that the user can select properties of the data, or variables applicable to each algorithm. The visual programming GUI provides the user the option to assemble multiple blocks to form workflows. Workflows incorporate two or more blocks by visually linking together the blocks in a coherent sequence. The visual computing interface, or the blocks themselves, do not allow the user to assemble incoherent combinations of blocks. Each block may represent a variety of different functions, some examples of which are described herein.

Throughout the specification, unless the context demands otherwise, the terms “comprise” or “include”, or variations such as “comprises” or “comprising”, “includes” or “including” will be understood to imply the inclusion of a stated integer or group of integers, but not the exclusion of any other integer or group of integers. Furthermore, unless the context clearly demands otherwise, the term “or” will be interpreted as being inclusive not exclusive.

The foregoing description of the invention has been presented for the purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise form disclosed. The described embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilise the invention in various embodiments and with various modifications as are suited to the particular use contemplated. Therefore, further modifications or improvements may be incorporated without departing from the scope of the invention as defined by the appended claims.

References

Hansen et al. (2013). High-Resolution Global Maps of 21st-Century Forest Cover Change. Science, Vol. 342, Issue 6160, pp. 850-853. DOI: 10.1126/science.1244693

Mitchard, E., Saatchi, S., Woodhouse, I., Nangendo, G., Ribeiro, N., Williams, M., Ryan, C., Lewis, S., Feldpausch, T. and Meir, P. (2009). Using satellite radar backscatter to predict above-ground woody biomass: A consistent relationship across four different African landscapes. Geophys. Res. Lett., 36, L23401. DOI: 10.1029/2009GL040692

Sudmanns, M., Tiede, D., Lang, S., Bergstedt, H., Trost, G., Augustin, H., Baraldi, A. and Blaschke, T. (2019). Big Earth data: disruptive changes in Earth observation data management and analysis? International Journal of Digital Earth. DOI: 10.1080/17538947.2019.1585976