

Title:
SYSTEMS AND METHODS WITH CLASSIFICATION STANDARD FOR COMPUTER MODELS TO MEASURE AND MANAGE RADICAL RISK USING MACHINE LEARNING AND SCENARIO GENERATION
Document Type and Number:
WIPO Patent Application WO/2022/115938
Kind Code:
A1
Abstract:
Embodiments relate to computer systems and methods for computer models and scenario generation. The system involves generating integrated climate risk data using a Climate Risk Classification Standard hierarchy that maps climate data and multiple risk factors to geographic space and time. A computer model involves risk factors modeled as graphs of nodes, each node corresponding to a risk factor and connected by edges or links. The nodes of the graph create scenario paths for the model. A hardware processor populates the graphs of nodes using machine learning, natural language processing and expert judgement systems. The system automatically generates multifactor scenario sets using the scenario paths for the computer model to compute the likelihood of different scenario paths.

Inventors:
DEMBO RON SAMUEL (CA)
WIEBE JOHN HOWARD ANDREW (CA)
REILLY BRENDAN (CA)
Application Number:
PCT/CA2021/050743
Publication Date:
June 09, 2022
Filing Date:
June 01, 2021
Assignee:
RISKTHINKING AI INC (CA)
International Classes:
G06N5/00; G06F16/33; G06F17/00; G06F17/18; G06F40/20; G06N20/00; G06Q10/00; G06Q40/06
Domestic Patent References:
WO2020061562A12020-03-26
Foreign References:
US20180218299A12018-08-02
US20190340548A12019-11-07
US20190096212A12019-03-28
US20190197442A12019-06-27
US20190066217A12019-02-28
US20170286622A12017-10-05
Attorney, Agent or Firm:
NORTON ROSE FULBRIGHT CANADA LLP (CA)
Claims:
WHAT IS CLAIMED IS:

1. A computer system for computer models for risk factors and scenario generation to query and aggregate impact, cost, magnitude and frequency of risk for different geographic locations, the system comprising:

non-transitory memory storing a risk model comprising a causal graph of nodes for risk factors and a knowledge graph defining an extracted relationship of the nodes, each node corresponding to a risk factor and storing a quantitative uncertainty value derived for the risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the risk model, the knowledge graph of the nodes defining a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes, the n-grams being domain specific keywords;

a hardware processor with a communication path to the non-transitory memory to: generate integrated risk data structures using a natural language processing pipeline to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link values, and return structured, codified and accessible data structures, wherein the integrated risk data structures map multiple risk factors to geographic space and time; populate the knowledge graph and the causal graph of nodes in the memory by computing values for the risk factor for the time horizon using the integrated risk data structures; generate multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model; generate risk metrics for stress tests using the multifactor scenario sets and the knowledge graph; transmit at least a portion of the risk metrics and the multifactor scenario sets in response to queries by a client application; and store the integrated risk data structures and the multifactor scenario sets in the non-transitory memory; and

a computer device with a hardware processor having the client application to transmit queries to the hardware processor and an interface to generate visual elements at least in part corresponding to the multifactor scenario sets and the risk metrics received in response to the queries.

2. The system of claim 1 wherein the hardware processor, for each risk factor, merges relevant text data into a risk-specific corpus for the risk factor to populate the knowledge graph in memory.

3. The system of claim 2 wherein the hardware processor, for each risk factor, creates a link between a node for the risk factor and a respective n-gram extracted from the risk-specific corpus for the risk factor based on a frequency rate for the n-gram and a relevance rate to the respective risk factor determined by keyword extraction.

4. The system of claim 1 wherein the hardware processor generates the knowledge graph by computing a bipartite network of risk factors and n-grams, and projecting the bipartite network into a single mode network of risk factors connected by edges corresponding to shared n-grams.

5. The system of claim 4 wherein the hardware processor computes edge weights between risk factors based on the tf-idf for overlapping keywords.

6. The system of claim 1 wherein the knowledge graph of the nodes defining the network structure of the risk factors and the n-grams having a set of links between nodes to indicate that a respective n-gram is relevant to a plurality of risk factors to form a connection and dependency between the plurality of risk factors.

7. The system of claim 1 wherein the hardware processor extracts the n-grams using a highest pooled score to generate a set of n-grams for each risk factor to populate the knowledge graph in memory.

8. The system of claim 3 wherein an expert pipeline refines candidate keywords to generate the n-grams as the domain-specific keywords.

9. The system of claim 1 wherein the hardware processor processes the unstructured text to replace each word with a syntactic form lemma to populate the knowledge graph in memory.

10. The system of claim 1 wherein the hardware processor computes the associated values of the links in the knowledge graph using a term frequency-inverse document frequency (tf-idf) score to link the risk factors based on shared use of n-grams.

11. The system of claim 1 wherein the hardware processor preprocesses the unstructured text to remove punctuation, special characters, and some common stopwords.

12. The system of claim 1 wherein the hardware processor continuously populates the knowledge graph of nodes by re-computing the nodes, the links, and the weight values by processing additional text data using the natural language processing pipeline.

13. The system of claim 1 wherein the hardware processor defines risk-specific queries to extract raw text data from relevant articles, processes the raw text data to generate a list of tokens and predict a named entity for each token, detects and classifies relationships between different entities, and defines a query to traverse the knowledge graph in an order based on a set of rules, so that only entities associated with a value of interest will be returned, wherein the hardware processor assigns a unique identifier for each entity.

14. The system of claim 1 wherein each node stores the quantitative uncertainty value derived by a forward-frequency distribution of possible values for the corresponding risk factor for the time horizon, wherein the hardware processor populates the causal graph of nodes by computing the forward-frequency distribution of possible values for the risk factor for the time horizon.

15. The system of claim 1 wherein the hardware processor populates the causal graph of nodes using extreme values of the distributions and a weight of the distributions above and below accepted values.

16. The system of claim 1 wherein the hardware processor generates the causal graph having forward edges connecting the nodes to create the scenario paths for the risk model.

17. The system of claim 1 wherein the hardware processor identifies macro risk factors in response to a request and generates the causal graph of nodes for the risk factors using the identified macro risk factors and dependencies between the risk factors.

18. The system of claim 1 wherein the hardware processor continuously populates the causal graph of nodes by re-computing the frequency distribution of possible values for the risk factor at different points in time by continuously collecting data using the machine learning system and the expert judgement system.

19. The system of claim 1 wherein the hardware processor computes the forward-frequency distribution of possible values for the risk factor for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement from the forward-frequency distribution.

20. The system of claim 1 wherein the hardware processor filters outlier data using the structured expert judgement system before computing the forward-frequency distribution.

21. The system of claim 1 wherein the hardware processor populates the causal graph of nodes by computing the frequency distribution of possible values for the risk factor at different points in time using machine learning and structured expert judgement data to collect the possible values representing estimates of future uncertain values.

22. The system of claim 1 wherein the hardware processor generates the multifactor scenario sets using the scenario paths for the risk model and generates scenario values using the frequency distribution of possible values for the risk factors.

23. A computer method for computer models for risk factors and scenario generation to query and aggregate impact, cost, magnitude and frequency of risk for different geographic locations, the method comprising:

storing, in non-transitory memory, a risk model comprising a causal graph of nodes for risk factors and a knowledge graph defining an extracted relationship of the nodes, each node corresponding to a risk factor and storing a quantitative uncertainty value derived for the risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the risk model, the knowledge graph of the nodes defining a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes, the n-grams being domain specific keywords;

generating, using a hardware processor with a communication path to the non-transitory memory, integrated risk data structures using a natural language processing pipeline to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link values, and return structured, codified and accessible data structures, wherein the integrated risk data structures map multiple risk factors to geographic space and time;

populating the knowledge graph and the causal graph of nodes in the memory by computing values for the risk factor for the time horizon using the integrated risk data structures;

generating multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model;

generating risk metrics for stress tests using the multifactor scenario sets and the knowledge graph;

transmitting, by the hardware processor, at least a portion of the risk metrics and the multifactor scenario sets in response to queries by a client application; and

storing the integrated risk data structures and the multifactor scenario sets in the non-transitory memory.

24. The method of claim 23 further comprising, for each risk factor, merging relevant text data into a risk-specific corpus for the risk factor to populate the knowledge graph in memory.

25. The method of claim 23 further comprising, for each risk factor, creating a link between a node for the risk factor and a respective n-gram extracted from the risk-specific corpus for the risk factor based on a frequency rate for the n-gram and a relevance rate to the respective risk factor determined by keyword extraction.

26. The method of claim 23 further comprising generating the knowledge graph by computing a bipartite network of risk factors and n-grams, and projecting the bipartite network into a single mode network of risk factors connected by edges corresponding to shared n-grams.

27. The method of claim 23 further comprising computing edge weights between risk factors based on the term frequency-inverse document frequency for overlapping keywords.

28. The method of claim 23 wherein the knowledge graph of the nodes defining the network structure of the risk factors and the n-grams having a set of links between nodes to indicate that a respective n-gram is relevant to a plurality of risk factors to form a connection and dependency between the plurality of risk factors.

29. The method of claim 23 further comprising extracting the n-grams using a highest pooled score to generate a set of n-grams for each risk factor to populate the knowledge graph in memory.

30. The method of claim 23 further comprising computing the associated weights of the links in the knowledge graph using a term frequency-inverse document frequency score to link the risk factors based on shared use of n-grams.

31. The method of claim 23 further comprising continuously populating the knowledge graph of nodes by re-computing the nodes, the links, and the weight values by processing additional text data using the natural language processing pipeline.

32. A computer system for computer models for risk factors and scenario generation, the system comprising:

non-transitory memory storing a risk model as a causal graph of nodes for risk factors, each node corresponding to a risk factor and storing a quantitative uncertainty value derived by a forward-frequency distribution of possible values for the risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the risk model;

a hardware processor with a communication path to the non-transitory memory to: generate integrated climate risk data using a Climate Risk Classification Standard hierarchy that maps climate data and multiple risk factors to geographic space and time, the Climate Risk Classification Standard hierarchy defining climate transition scenarios, climate regions, climate modulators, climate elements and climate risks; populate the causal graph of nodes by computing the forward-frequency distribution of possible values for the risk factor for the time horizon using the integrated climate risk data, a machine learning system and expert judgement system to link the climate model to macro financial variables to encode a relationship between climate shocks and financial impact; generate multifactor scenario sets using the scenario paths for the climate model to compute the likelihood of different scenario paths for the climate model; generate risk metrics for stress tests using the multifactor scenario sets; transmit the multifactor scenario sets to a valuation engine to provide a causal map of climate risk to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports; and store the multifactor scenario sets in the non-transitory memory; and

a computer device with a hardware processor having an interface to provide visual elements by accessing the multifactor scenario sets and risk metrics in the non-transitory memory.

33. The system of claim 32 wherein the hardware processor computes a knowledge graph defining an extracted relationship of the nodes, the knowledge graph of the nodes defining a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes, the n-grams being domain specific keywords, the hardware processor using a natural language processing pipeline to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link weights, and return structured, codified and accessible data structures.

34. The system of claim 32 wherein the hardware processor populates the causal graph of nodes using extreme values of the distributions and a weight of the distributions above and below accepted values.

35. The system of claim 32 wherein the hardware processor generates the causal graph having forward edges connecting the nodes to create the scenario paths for the risk model.

36. The system of claim 32 wherein the hardware processor identifies macro risk factors in response to a request and generates the causal graph of nodes for the risk factors using the identified macro risk factors and dependencies between the risk factors.

37. The system of claim 32 wherein the hardware processor continuously populates the causal graph of nodes by re-computing the frequency distribution of possible values for the risk factor at different points in time by continuously collecting data using the machine learning system and the expert judgement system.

38. The system of claim 32 wherein the hardware processor computes the forward-frequency distribution of possible values for the risk factor for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement from the forward-frequency distribution.

39. The system of claim 32 wherein the hardware processor filters outlier data using the structured expert judgement system before computing the forward-frequency distribution.
40. The system of claim 32 wherein the hardware processor populates the causal graph of nodes by computing the frequency distribution of possible values for the risk factor at different points in time using machine learning and structured expert judgement data to collect the possible values representing estimates of future uncertain values.

41. The system of claim 32 wherein the hardware processor generates the multifactor scenario sets using the scenario paths for the risk model and generates scenario values using the frequency distribution of possible values for the risk factors.

42. A computer system for climate models and scenario generation, the system comprising:

non-transitory memory storing a climate model as a causal graph of nodes for climate risk factors, each node corresponding to a climate risk factor and storing a quantitative uncertainty value derived by a forward-frequency distribution of possible values for the climate risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the climate model;

a hardware processor with a communication path to the non-transitory memory to: generate integrated climate risk data using a Climate Risk Classification Standard hierarchy that maps climate data and multiple risk factors to geographic space and time, the Climate Risk Classification Standard hierarchy defining climate transition scenarios, climate regions, climate modulators, climate elements and climate risks; populate the causal graph of nodes by computing the forward-frequency distribution of possible values for the climate risk factor for the time horizon using the integrated climate risk data, a machine learning system and expert judgement system to link the climate model to macro financial variables to encode a relationship between climate shocks and financial impact; generate multifactor scenario sets using the scenario paths for the climate model to compute the likelihood of different scenario paths for the climate model; transmit the multifactor scenario sets to a valuation engine to provide a causal map of climate risk to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports; and store the multifactor scenario sets in the non-transitory memory; and

a computer device with a hardware processor having an interface to provide visual elements by accessing the multifactor scenario sets in the non-transitory memory.

43. The system of claim 42 wherein the hardware processor computes a knowledge graph defining an extracted relationship of the nodes, the knowledge graph of the nodes defining a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes, the n-grams being domain specific keywords, the hardware processor using a natural language processing pipeline to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link weights, and return structured, codified and accessible data structures.

44. The system of claim 42 wherein the hardware processor populates the causal graph of nodes using extreme values of the distributions and a weight of the distributions above and below accepted values.

45. The system of claim 42 wherein the hardware processor generates the causal graph having forward edges connecting the nodes to create the scenario paths for the climate model.

46. The system of claim 42 wherein the hardware processor identifies macro risk factors in response to a request and generates the causal graph of nodes for the climate risk factors using the identified macro risk factors and dependencies between the climate risk factors.

47. The system of claim 42 wherein the hardware processor continuously populates the causal graph of nodes by re-computing the frequency distribution of possible values for the climate risk factor at different points in time by continuously collecting data using the machine learning system and the expert judgement system.

48. The system of claim 42 wherein the hardware processor computes the forward-frequency distribution of possible values for the climate risk factor for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement from the forward-frequency distribution.

49. The system of claim 42 wherein the hardware processor filters outlier data using the structured expert judgement system before computing the forward-frequency distribution.

50. The system of claim 42 wherein the hardware processor populates the causal graph of nodes by computing the frequency distribution of possible values for the climate risk factor at different points in time using machine learning and structured expert judgement data to collect the possible values representing estimates of future uncertain values.

51. The system of claim 42 wherein the hardware processor generates the multifactor scenario sets using the scenario paths for the climate model and generates scenario values using the frequency distribution of possible values for the climate risk factors.

52. A computer implemented method for climate models and scenario generation, the method comprising:

by a hardware server, storing a climate model in non-transitory memory as a causal graph of nodes for climate risk factors, each node corresponding to a climate risk factor and storing a quantitative uncertainty value derived by a forward-frequency distribution of possible values for the climate risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the climate model;

generating integrated climate risk data using a Climate Risk Classification Standard hierarchy that maps climate data and multiple risk factors to geographic space and time, the Climate Risk Classification Standard hierarchy defining climate transition scenarios, climate regions, climate modulators, climate elements and climate risks;

populating the causal graph of nodes using a hardware processor to connect to the non-transitory memory to write the forward-frequency distribution of possible values for the climate risk factor for the time horizon using a machine learning system and expert judgement system to link the climate model to macro financial variables to encode a relationship between climate shocks and financial impact;

generating multifactor scenario sets using the processor to connect to the non-transitory memory to read the scenario paths for the climate model to compute the likelihood of different scenario paths for the climate model;

transmitting the multifactor scenario sets to a valuation engine to provide a causal map of climate risk to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports;

storing the multifactor scenario sets in the non-transitory memory; and

providing an interface at a computer device to provide visual elements representing the multifactor scenario sets in the non-transitory memory.


Description:
SYSTEMS AND METHODS WITH CLASSIFICATION STANDARD FOR COMPUTER MODELS TO MEASURE AND MANAGE RADICAL RISK USING MACHINE LEARNING AND SCENARIO GENERATION

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority to and the benefit of U.S. provisional patent application 63/121,187 filed December 8, 2020 and U.S. provisional patent application 63/147,016 filed February 8, 2021, the entire contents of which are hereby incorporated by reference.

FIELD

[0002] The improvements generally relate to the field of computer modelling, classification standards, simulations, scenario generation, risk management, taxonomies, and machine learning. The improvements relate to computer systems that automatically generate scenarios for risk data. The computer systems implement automated testing of estimated impacts of scenarios in a scalable, consistent, auditable and reproducible manner.

INTRODUCTION

[0003] Embodiments described herein relate to computer systems that generate scenarios and computer models for risk factors consistently and at scale using machine learning, natural language processing, and expert systems. The computer system derives data representing the uncertainty of risk factors in the future; and uses this information as input for scenario generation, testing and computing metrics, and generating interfaces with visual elements for the results.

[0004] Embodiments described herein apply to different types of risk factors. Embodiments described herein relate to computer systems with a consistent framework for generating and using scenarios, to stress test and calculate risk of an organization under radical uncertainty.

[0005] Climate change is an example risk under radical uncertainty. Classification standards can classify data for transition and physical risk related to climate change. However, there is a lack of consistent climate data and analytics, making it difficult to manage and plan for the extreme uncertainty of our future under climate change. This problem has been recognized by the TCFD and the NGFS. Regulators are attempting to build on TCFD and NGFS initiatives as a broad-based requirement for asset managers, financial institutions and large companies. Mitigating climate risk may occur when markets can price climate risk properly and ubiquitously.

[0006] A Climate Risk Classification Standard (CRCS™) hierarchy can be used by embodiments described herein to consistently classify transition and physical risk related to climate change. The CRCS provides a robust, consistent and scalable computing hierarchy for understanding and comparing exposure to climate-related risk. The CRCS can be used by embodiments described herein to respond to the global financial community’s need for a globally comprehensive, accurate, and auditable approach to defining and classifying climate risk factors and determining their economic impact. The CRCS can be used by embodiments described herein to quantify both risks and opportunities presented by climate change, climate-related policy, and emerging technologies in an uncertain world.

[0007] Other example risks are pandemics, cyber risk, and stress testing of financial portfolios.

[0008] Embodiments described herein relate to computer systems that generate data structures using classification standards and scenarios for climate and financial risk consistently and at scale, based on the latest climate science, epidemiological science, finance and extracted data elements from expert opinion. The computer system derives data representing the uncertainty of these factors in the future; and uses this information as input for scenario generation.

[0009] Embodiments described herein relate to computer systems and methods for generating ontologies of climate related risk (e.g. as knowledge graphs or data structures) from unstructured text using a natural language processing pipeline to extract information from unstructured text, classify the risk, the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure that can be queried by a client application.

SUMMARY

[0010] In accordance with an aspect, there is provided a computer system for computer models for risk factors and scenario generation to query and aggregate impact, cost, magnitude and frequency of risk for different geographic locations. The system has non-transitory memory storing a risk model comprising a causal graph of nodes for risk factors and a knowledge graph defining an extracted relationship of the nodes, each node corresponding to a risk factor and storing a quantitative uncertainty value derived for the risk factor for a time horizon, the causal graph having edges connecting the nodes to create scenario paths for the risk model, the knowledge graph of the nodes defining a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes, the n-grams being domain specific keywords.

[0011] The system has a hardware processor with a communication path to the non-transitory memory to: generate integrated risk data structures using a natural language processing pipeline to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link values, and return structured, codified and accessible data structures, wherein the integrated risk data structures map multiple risk factors to geographic space and time; populate the knowledge graph and the causal graph of nodes in the memory by computing values for the risk factor for the time horizon using the integrated risk data structures; generate multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model; generate risk metrics for stress tests using the multifactor scenario sets and the knowledge graph; transmit at least a portion of the risk metrics and the multifactor scenario sets in response to queries by a client application; and store the integrated risk data structures and the multifactor scenario sets in the non-transitory memory.

[0012] The system has a computer device with a hardware processor having the client application to transmit queries to the hardware processor and an interface to generate visual elements at least in part corresponding to the multifactor scenario sets and the risk metrics received in response to the queries.

[0013] In some embodiments, the hardware processor, for each risk factor, merges relevant text data into a risk-specific corpus for the risk factor to populate the knowledge graph in memory.

[0014] In some embodiments, the hardware processor, for each risk factor, creates a link between a node for the risk factor and a respective n-gram extracted from the risk-specific corpus for the risk factor based on a frequency rate for the n-gram and a relevance rate to the respective risk factor determined by keyword extraction.

[0015] In some embodiments, the hardware processor generates the knowledge graph by computing a bipartite network of risk factors and n-grams, and projecting the bipartite network into a single mode network of risk factors connected by edges corresponding to shared n-grams.

[0016] In some embodiments, the hardware processor computes edge weights between risk factors based on the term frequency-inverse document frequency for overlapping keywords.

[0017] In some embodiments, the knowledge graph of the nodes defines the network structure of the risk factors and the n-grams with a set of links between nodes to indicate that a respective n-gram is relevant to a plurality of risk factors to form a connection and dependency between the plurality of risk factors.

[0018] In some embodiments, the hardware processor extracts the n-grams using a highest pooled score to generate a set of n-grams for each risk factor to populate the knowledge graph in memory.

[0019] In some embodiments, an expert pipeline refines candidate keywords to generate the n-grams as the domain-specific keywords.

[0020] In some embodiments, the hardware processor processes the unstructured text to replace each word with a syntactic form lemma to populate the knowledge graph in memory.

[0021] In some embodiments, the hardware processor computes the associated values of the links in the knowledge graph using a tf-idf score to link the risk factors based on shared use of n-grams.

[0022] In some embodiments, the hardware processor preprocesses the unstructured text to remove punctuation, special characters, and some common stopwords.

[0023] In some embodiments, the hardware processor continuously populates the knowledge graph of nodes by re-computing the nodes, the links, and the weight values by processing additional text data using the natural language processing pipeline.

[0024] In some embodiments, the hardware processor defines risk-specific queries to extract raw text data from relevant articles, processes the raw text data to generate a list of tokens and predict a named entity for each token, detects and classifies relationships between different entities, and defines a query to traverse the knowledge graph in an order based on a set of rules, so that only entities associated with a value of interest will be returned, wherein the hardware processor assigns a unique identifier for each entity.

[0025] In accordance with an aspect, there is provided a computer system for computer models and scenario generation. The system has non-transitory memory storing a risk model as a causal graph of nodes for risk factors, each node corresponding to a risk factor and storing a quantitative value (uncertainty) for a forward-frequency distribution of possible values for the risk factor at the time horizon, the causal graph having edges connecting the nodes to create scenario paths for the risk model. The system has a hardware processor with a communication path to the non-transitory memory to: generate integrated climate risk data using a Climate Risk Classification Standard hierarchy that maps climate data and multiple risk factors to geographic space and time, the Climate Risk Classification Standard hierarchy defining climate transition scenarios, climate regions, climate modulators, climate elements and climate risks; populate the causal graph of nodes by computing the forward-frequency distribution of possible values for the risk factor at different points in time using machine learning and structured expert judgement data to link the model to macro financial variables to encode a relationship between shocks and financial impact; generate multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model; transmit the multifactor scenario sets to a valuation engine to provide a causal map of risk factors to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports; and store the multifactor scenario sets in the non-transitory memory. The system has a computer device with a hardware processor having an interface to provide visual elements by accessing the multifactor scenario sets in the non-transitory memory.

[0026] The system can generate risk factors based on a risk hierarchy. The risk hierarchy can map risk conditions, risk modulators, risk elements, risk factors, and scenario sets.

[0027] In some embodiments, the hardware processor populates the causal graph of nodes using extremes of the distributions and a weight of the distributions above and below accepted values.

[0028] In some embodiments, the hardware processor generates the causal graph having forward edges connecting the nodes to create the scenario paths for the risk model.

[0029] In some embodiments, the hardware processor identifies macro risk factors in response to a request and generates the causal graph of nodes for the risk factors using the identified macro risk factors and dependencies between the risk factors.

[0030] In some embodiments, the hardware processor continuously populates the causal graph of nodes by re-computing the frequency distribution of possible values for the risk factor at different points in time by continuously collecting data using the machine learning system and the expert judgement system.

[0031] In some embodiments, the hardware processor computes the forward-frequency distribution of possible values for the risk factor for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement from the forward-frequency distribution.
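A minimal sketch of this reduction, assuming the forward-frequency distribution is held as an array of sampled future values for one risk factor; using the 5th and 95th percentiles as the downward and upward extremes, and a baseline split for the movement likelihoods, are illustrative choices rather than the patent's exact rule:

```python
import numpy as np

def summarize_forward_distribution(samples: np.ndarray, baseline: float) -> dict:
    """Reduce a forward-frequency distribution to extreme values and
    likelihoods of upward/downward movement relative to a baseline."""
    return {
        "downward_extreme": float(np.percentile(samples, 5)),
        "upward_extreme": float(np.percentile(samples, 95)),
        "p_up": float(np.mean(samples > baseline)),
        "p_down": float(np.mean(samples < baseline)),
    }

# Example: simulated estimates of a risk factor's value at a future horizon.
rng = np.random.default_rng(0)
print(summarize_forward_distribution(rng.normal(2.1, 0.4, 10_000), baseline=2.0))
```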

[0032] In some embodiments, the hardware processor filters outlier data using the structured expert judgement system before computing the forward-frequency distribution.

[0033] In some embodiments, the hardware processor populates the causal graph of nodes by computing the frequency distribution of possible values for the risk factor at different points in time using machine learning and structured expert judgement data to collect the possible values representing estimates of future uncertain values.

[0034] In some embodiments, the hardware processor generates the multifactor scenario sets using the scenario paths for the computer model and generates scenario values using the frequency distribution of possible values for the risk factors.

[0035] Example risk factors include climate risk factors. Other example risk factors include pandemic risk factors.

[0036] Embodiments described herein relate to computer systems and methods for machine generating scenarios automatically, without bias.

[0037] Many further features and combinations thereof concerning embodiments described herein will appear to those skilled in the art following a reading of the instant disclosure.

DESCRIPTION OF THE FIGURES

[0038] In the figures,

[0039] Figure 1 is a view of an example of the system with servers and hardware components.

[0040] Figure 1A is a view of an example risk hierarchy.

[0041] Figure 1B is a view of example components of a climate risk hierarchy.

[0042] Figure 1C is a view of example components of a climate risk hierarchy.

[0043] Figure 1D is a view of example components of a CRCS hierarchy.

[0044] Figure 1E is another view of example components of a CRCS hierarchy.

[0045] Figure 2 is an example graph of nodes for climate risk factors.

[0046] Figure 3 is a graph of distributions for data collected by the structured expert judgement system.

[0047] Figure 4 is a diagram of the expert system and scenario sets for the valuation engine.

[0048] Figure 5 is a diagram of the valuation engine and reports for interface.

[0049] Figure 6 is an example display of visual elements for an interface.

[0050] Figure 7 is a data flow diagram showing data exchange between different hardware components of server.

[0051] Figure 7A shows an example data warehouse.

[0052] Figure 7B shows an example machine learning pipeline orchestration component.

[0053] Figure 7C shows an example back-testing and validation system.

[0054] Figure 7D shows an example scenario generation platform.

[0055] Figure 8 shows an example API connecting to a front-end interface.

[0056] Figure 9 shows an example front-end interface.

[0057] Figure 10 shows graphs of distributions for pandemic data collected by the structured expert judgement system.

[0058] Figure 11 shows an example table of scenario data.

[0059] Figure 12 shows an example scenario tree of scenario sets.

[0060] Figure 13 shows an example graph of distribution values for different factors over a time horizon.

[0061] Figure 14 is a view of an example of the system with servers and hardware components for climate risk factors.

[0062] Figure 15 is a view of an example of the system with servers and hardware components for pandemic risk factors.

[0063] Figure 16 is a view of an example server system model to extract, transfer and load data.

[0064] Figure 17 is a view of an example interface.

[0065] Figure 18 is a view of another example interface.

[0066] Figure 19 is a view of an example interface with visual elements for risk factors and keywords identified through natural language processing and expert domain knowledge.

[0067] Figure 20 is a view of an example interface with visual elements representing a network of risk factors and keywords.

[0068] Figure 21 is a view of an example process for natural language processing pipeline.

[0069] Figure 22 is a view of an example of tokenization.

[0070] Figure 23 is a view of an example of parts of speech tagging.

[0071] Figure 24 is a view of an example of named entities within a sentence.

[0072] Figure 25 is a view of an example of a semantic relation.

[0073] Figure 26 is a view of an example of a modified sentence including multiple anchors.

[0074] Figure 27 is a view of an example of knowledge graph or hierarchical data model.

DETAILED DESCRIPTION

[0075] Embodiments described herein provide a computer system for a classification standard to generate integrated climate risk data for computer models and scenario generation.

[0076] Embodiments described herein provide computer hardware executing instructions to generate scenarios on mixed risk factors. For example, risk factors can relate to climate risk factors. Embodiments described herein provide a computer system with a classification standard that defines a taxonomy or ontology for mapping climate data received from different data sources. Embodiments described herein provide a computer process for generating integrated climate risk data by processing climate data using the taxonomy to map climate data received from different data sources to different climate regions.

[0077] The computer system selects a group of portfolios and identifies the key material macroeconomic factors affecting these portfolios.

[0078] Embodiments described herein relate to computer systems and methods for generating an ontology of climate related risk as knowledge graphs or data structures. The systems and methods process unstructured text using a natural language processing pipeline to extract information from unstructured text, classify the risk, the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure that can be queried by a client application.

[0079] Embodiments described herein provide a Climate Risk Classification Standard (CRCS™) system to map input data for computer models and scenario generation. Embodiments described herein use the CRCS system to generate integrated climate risk data using an electronic taxonomy or ontology to automatically map climate data received from different data sources to different climate regions. The CRCS hierarchy maps climate data and multiple risk factors to geographic space and time. The CRCS hierarchy defines climate transition scenarios, climate regions, climate modulators, climate elements and climate risks. Embodiments described herein provide a computer process for generating integrated climate risk data by processing climate data using the taxonomy to map climate data to climate transition scenarios, climate regions, climate modulators, climate elements and climate risks. Embodiments described herein provide an automated rating mechanism for processing data from a variety of different sources using a consistent taxonomy. The integrated climate data can be used for scenario generation to measure the financial impact of climate change, uniformly, at some horizon, on different business units across geographies or climate regions. A business unit can operate in multiple regions and the data can be mapped to those regions.

[0080] The CRCS system can enable the proper pricing of climate financial risks; this will inform different financial strategies and help measure transition risks.

[0081] Transition scenarios are encoded files defining estimates of the future evolution of the world economies and their impact on greenhouse gas emissions. For example, Transition Regimes are standards used for analyzing future impacts due to climate change. CRCS creates the causal links from Transitions to risk in a particular geography. The standards can divide the world into homogeneous climate regions, and these have been integrated into CRCS. The CRCS system can encode causality to enable the computing components to understand the radical risks associated with climate change. The CRCS system can start by selecting a transition scenario (carbon pathway). The transition scenario impacts climate modulators, which affect all regions in the world, but only certain geographic regions or locations are of interest to business units, for example. These points of interest intersect with a subset of climate regions. The points of interest impact climate risk, chronic and acute risks, in the regions of interest. This in turn leads to the climate risk factors in the regions of interest. This process can be repeated for every transition scenario.
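A minimal sketch of this causal chain, assuming each level of the CRCS hierarchy can be modelled as a plain record; the class names, field names, and example values are illustrative rather than taken from the standard:

```python
# CRCS causal chain: transition scenario -> climate modulators -> climate
# regions -> chronic/acute climate risks. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ClimateRisk:
    name: str   # e.g. "riverine flood"
    kind: str   # "acute" or "chronic"

@dataclass
class ClimateRegion:
    code: str                                           # homogeneous climate region code
    risks: list[ClimateRisk] = field(default_factory=list)

@dataclass
class ClimateModulator:
    name: str                                           # e.g. "ENSO"
    regions: list[ClimateRegion] = field(default_factory=list)

@dataclass
class TransitionScenario:
    carbon_pathway: str                                 # encoded emissions pathway
    modulators: list[ClimateModulator] = field(default_factory=list)

def risk_factors_for(scenario, regions_of_interest):
    """Walk the chain, keeping only regions intersecting the points of interest."""
    for modulator in scenario.modulators:
        for region in modulator.regions:
            if region.code in regions_of_interest:
                for risk in region.risks:
                    yield (scenario.carbon_pathway, modulator.name,
                           region.code, risk.name, risk.kind)

scenario = TransitionScenario("net-zero-2050", [ClimateModulator("ENSO", [
    ClimateRegion("NWN", [ClimateRisk("riverine flood", "acute")])])])
print(list(risk_factors_for(scenario, {"NWN"})))  # repeated per transition scenario
```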

[0082] Embodiments described herein provide a computer process for generating integrated climate risk data to generate a set of climate stress scenarios to measure or estimate impact on a business unit. The output data can be used to generate visualizations that aggregate gains and losses and forms a distribution of the gains (Upside) and losses (Downside).

[0083] Embodiments described herein provide a computer process for generating integrated climate risk data rating metrics. An example metric can be referred to as CaR, the Climate-risk adjusted Return. CaR is computed by dividing the Upside by the Downside as a measure of the risk-adjusted upside. Embodiments described herein can be used to generate visualizations of bar graphs for the distribution of the gains (Upside) and losses (Downside). The Upside can be given by an area covered by a first section of bars and the Downside can be given by an area covered by another section of bars.

[0084] A CaR of less than one implies a likely financial impact on profitability under these stresses.
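A minimal sketch of the CaR computation, under the assumption that the Upside and Downside are the summed gains and losses of the scenario profit-and-loss distribution (the "areas" of the gain and loss bars described above):

```python
import numpy as np

def climate_adjusted_return(pnl: np.ndarray) -> float:
    """CaR: Upside divided by Downside of the scenario P&L distribution."""
    upside = pnl[pnl > 0].sum()        # area covered by the gain bars
    downside = -pnl[pnl < 0].sum()     # area covered by the loss bars (positive)
    return upside / downside if downside else float("inf")

pnl = np.array([12.0, 5.0, -3.0, -9.0, 1.5, -4.5])   # P&L under six stress scenarios
print(f"CaR = {climate_adjusted_return(pnl):.2f}")    # < 1 implies likely impact
```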

[0085] The CRCS system identifies integrated climate risk factors by selecting a climate region. The CRCS system selects the transition scenario, climate modulators, climate elements and climate risks to compute integrated climate risk factors.

[0086] The CRCS system is a codified standard to map data for computing integrated climate risk factors. For example, the CRCS system can codify climate regions to cover the world. The CRCS system can codify Climate Modulators that impact different climate regions. The CRCS system can codify links from the Climate Modulators to both chronic and acute risks in the climate regions. The CRCS system can codify climate indices such as the freeze/thaw cycles, number of heat wave events, etc. To measure the financial risk of a portfolio, a physical asset or a business line in a radically uncertain future (such as climate change), embodiments described herein can determine how the portfolio performs under various scenarios that describe the uncertainty in the future. An example way of summarizing this is the future profit and loss frequency distribution under these scenarios.
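A minimal sketch of how such climate indices might be codified from daily temperature series; the heat wave threshold and run length are illustrative assumptions, not values from the standard:

```python
import numpy as np

def freeze_thaw_cycles(daily_min_c: np.ndarray, daily_max_c: np.ndarray) -> int:
    """Days whose minimum falls below freezing while the maximum rises above it."""
    return int(np.sum((daily_min_c < 0.0) & (daily_max_c > 0.0)))

def heat_wave_events(daily_max_c: np.ndarray, threshold: float = 30.0,
                     min_run: int = 3) -> int:
    """Count runs of at least `min_run` consecutive days above `threshold`."""
    hot = daily_max_c > threshold
    events, run = 0, 0
    for day in hot:
        run = run + 1 if day else 0
        if run == min_run:            # count each qualifying run exactly once
            events += 1
    return events

t_max = np.array([28, 31, 32, 33, 29, 35, 36, 37, 38, 27], dtype=float)
print(heat_wave_events(t_max))        # two events of 3+ consecutive hot days
```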

[0087] The CRCS system is designed to consistently classify transition and physical risk related to climate change to generate input data for computer models. The CRCS provides a robust, consistent and scalable hierarchy for understanding and comparing exposure to climate- related risk. The CRCS is designed to respond to the need for a globally comprehensive, accurate, and auditable approach to defining and classifying climate risk factors and determining their economic impact. The CRCS universal approach sheds light on both risks and opportunities presented by climate change, climate-related policy, and emerging technologies in an uncertain world.

[0088] Embodiments described herein provide a computer system to automatically test the impact of radical uncertainty on financial portfolios. Radical uncertainty can be represented by events or combinations of events that are unprecedented in historical data or are historically unlikely. Financial stress tests have relied on historical correlations and regression analysis of past events to foretell future impacts. However, underlying macro factors of the risk potential, defined by their frequency distribution, are changing beyond their historical bounds. The impact of changes is unaccounted for as methods traditionally have no recourse to deal with radical uncertainty.

[0089] Embodiments described herein provide a computer system that addresses the radical uncertainty inherent in the world by automatically generating a set of scenarios that account for a wide range of risk potentials, including extreme events. The computer system accounts for the tail end of the uncertainty distribution explicitly and provides a measure of the likelihood that a particular path within the set is realised in the real world. The ultimate goal is stress testing to understand the risk reward trade-offs and the stability of institutions and markets.

[0090] As an illustrative example, the impacts associated with risks are geospatial by nature: floods occur within a catchment, pandemics begin as regional epidemics, and so on. To address this, embodiments described herein provide a computer system with a geospatial partitioning engine that segments world data into climate regions following the IPCC CLIMATE CHANGE ATLAS definition of climate regions. These are large areas of land and sea that experience similar climatic conditions. These regions are further divided into climate geo-zones characterized by sub-tiles at a higher spatial resolution.
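A minimal sketch of the sub-tiling idea using a simple equal-angle grid; in practice the climate regions are irregular polygons loaded from geospatial data, so the tile scheme and ID format here are assumptions for illustration only:

```python
def geo_zone_id(lat: float, lon: float, degrees_per_tile: float = 0.5) -> str:
    """Quantize a lat/lon coordinate to a sub-tile key at the given resolution."""
    row = int((lat + 90.0) // degrees_per_tile)    # 0 at the south pole
    col = int((lon + 180.0) // degrees_per_tile)   # 0 at the antimeridian
    return f"z{degrees_per_tile}:{row}:{col}"

# Finer resolutions produce more tiles over the same point of interest.
print(geo_zone_id(43.65, -79.38))        # "z0.5:267:201"
print(geo_zone_id(43.65, -79.38, 0.1))   # "z0.1:1336:1006"
```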

[0091] Figure 1 shows an example computer system with a hardware server 100 for computer models and scenario generation that includes CRCS 250. The CRCS 250 involves a computer hardware processor configured to consistently classify transition and physical risk related to climate change to generate input data for computer models. The CRCS 250 integrates with a machine learning pipeline 160 (with a natural language processing pipeline 165) to extract information from unstructured text, classify the risk, the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure that can be queried.

[0092] In accordance with an aspect, the server 100 generates computer models for risk factors and scenario generation to query and aggregate impact, cost, magnitude and frequency of risk for different geographic locations.

[0093] The server 100 has a machine learning pipeline 160 with a natural language processing (NLP) pipeline 165, structured expert pipeline 170, indices 175, and an integrated model pipeline 185 to generate a knowledge graph from unstructured data. The processor 120 uses the machine learning pipeline 160 and expert pipeline 170 to link the computer model to macro financial variables to encode a relationship between risk shocks and financial impact. The processor 120 can respond to queries using the knowledge graph. The processor 120 uses the NLP pipeline 165 to extract information from unstructured text, classify the risk, the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data to update the knowledge graph. The knowledge graph can be queried by server 100 in response to queries received from a client application (e.g. interface 140) via API gateway 230. Further details of the NLP pipeline 165 are provided herein in relation to Figures 7, 19 to 27. For example, Figure 21 is a view of an example workflow of NLP pipeline 165 to process unstructured text and return a structured, codified and accessible data structure (knowledge graph).

[0094] As shown in Figure 1, the server 100 has a non-transitory memory 110 storing a knowledge graph defining extracted relationships of nodes corresponding to risk factors. The knowledge graph of the nodes defines a network structure of the risk factors and n-grams with links between nodes having weight values based on shared use of n-grams by risk factors corresponding to the nodes. The n-grams can be domain specific keywords.

[0095] The server 100 has a hardware processor 120 with a communication path to the non-transitory memory 110 to generate integrated risk data structures using a natural language processing pipeline 165 to extract information from unstructured text, classify risk and a plurality of risk dimensions to define the risk, quantify interconnectedness of risk factors for the associated link values. The server 100 returns structured, codified and accessible data structures to update the knowledge graphs in memory 110. The integrated risk data structures map multiple risk factors to geographic space and time. The server 100 populates the knowledge graph and the causal graph of nodes in the memory 110 by computing values for the risk factor for the time horizon using the integrated risk data structures. The server 100 generates multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model. The server 100 generates risk metrics for stress tests using the multifactor scenario sets and the knowledge graph. The server 100 transmits at least a portion of the risk metrics and the multifactor scenario sets in response to queries. The server 100 stores the integrated risk data structures and the multifactor scenario sets in the non-transitory memory 110.

[0096] The server 100 connects to a computer device 130 with a hardware processor having a client application to transmit queries to the hardware processor 120, and an interface 140 to generate visual elements at least in part corresponding to the multifactor scenario sets and the risk metrics received in response to the queries.

[0097] In some embodiments, the hardware processor 120, for each risk factor, merges relevant text data into a risk-specific corpus for the risk factor to populate the knowledge graph in memory.

[0098] In some embodiments, the hardware processor 120, for each risk factor, creates a link between a node for the risk factor and a respective n-gram extracted from the risk-specific corpus for the risk factor based on a frequency rate for the n-gram and a relevance rate to the respective risk factor determined by keyword extraction.

[0099] In some embodiments, the hardware processor 120 generates the knowledge graph by computing a bipartite network of risk factors and n-grams, and projecting the bipartite network into a single mode network of risk factors connected by edges corresponding to shared n-grams. In some embodiments, the hardware processor 120 computes edge weights between risk factors based on overlapping keywords.
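A minimal sketch of the bipartite projection, assuming a tf-idf score is already available for each risk factor/n-gram link; weighting a projected edge by the products of the shared keywords' scores is one plausible reading, not necessarily the exact rule used:

```python
import networkx as nx
from itertools import combinations

# Bipartite graph: risk-factor nodes on one side, n-gram nodes on the other.
B = nx.Graph()
links = {
    ("wildfire", "fuel moisture"): 0.8, ("wildfire", "heat wave"): 0.6,
    ("drought", "heat wave"): 0.7, ("drought", "soil moisture"): 0.9,
}
for (factor, ngram), tfidf in links.items():
    B.add_edge(factor, ngram, weight=tfidf)

factors = {"wildfire", "drought"}
G = nx.Graph()                        # single-mode projection onto risk factors
for u, v in combinations(factors, 2):
    shared = set(B[u]) & set(B[v])    # n-grams used by both risk factors
    if shared:
        w = sum(B[u][k]["weight"] * B[v][k]["weight"] for k in shared)
        G.add_edge(u, v, weight=w, shared_ngrams=sorted(shared))

print(G.edges(data=True))  # wildfire -- drought via the shared "heat wave" n-gram
```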

[00100] In some embodiments, the knowledge graph of the nodes indicates that a respective n-gram is relevant to a plurality of risk factors to form a connection and dependency between the plurality of risk factors. In some embodiments, the hardware processor 120 uses the natural language pipeline 165 to extract the n-grams using a highest pooled score to generate a set of n-grams for each risk factor to populate the knowledge graph in memory.
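A minimal sketch of "highest pooled score" selection, under the assumption that several keyword extractors each score candidate n-grams and the scores are pooled (here, averaged) before the top candidates are kept; the pooling rule and extractor outputs are illustrative:

```python
from collections import defaultdict

def top_ngrams(scored_candidates: list[dict[str, float]], k: int = 5) -> list[str]:
    """Pool per-extractor scores for each candidate n-gram and keep the top k."""
    pooled = defaultdict(list)
    for scores in scored_candidates:            # one dict per extractor
        for ngram, score in scores.items():
            pooled[ngram].append(score)
    mean = {ng: sum(v) / len(v) for ng, v in pooled.items()}
    return sorted(mean, key=mean.get, reverse=True)[:k]

extractor_a = {"sea level rise": 0.9, "coastal flood": 0.7}
extractor_b = {"sea level rise": 0.8, "storm surge": 0.6}
print(top_ngrams([extractor_a, extractor_b], k=2))  # ['sea level rise', 'coastal flood']
```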

[00101] In some embodiments, an expert pipeline 170 refines candidate keywords to generate the n-grams as the domain-specific keywords.

[00102] In some embodiments, the hardware processor 120 processes the unstructured text to replace each word with a syntactic form lemma to populate the knowledge graph in memory.

[00103] In some embodiments, the hardware processor 120 computes the associated values of the links in the knowledge graph using a tf-idf score to link the risk factors based on shared use of n-grams. In some embodiments, the hardware processor 120 preprocesses the unstructured text to remove punctuation, special characters, and some common stopwords. In some embodiments, the hardware processor 120 uses the natural language pipeline 165 to continuously populate the knowledge graph of nodes by re-computing the nodes, the links, and the weight values by processing additional text data.
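A minimal sketch of the preprocessing and lemmatization steps described above, assuming the nltk library (the function name and sample sentence are illustrative):

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOP = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(text: str) -> list[str]:
    # Strip punctuation and special characters, lowercase, drop stopwords.
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    tokens = [t for t in text.split() if t not in STOP]
    # Replace each surviving word with its lemma (base syntactic form).
    return [lemmatizer.lemmatize(t) for t in tokens]

print(preprocess("Rising seas are flooding the coastal cities."))
# ['rising', 'sea', 'flooding', 'coastal', 'city']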

[00104] In some embodiments, the hardware processor 120 uses the natural language pipeline 165 to define risk-specific queries to extract raw text data from relevant articles, process the raw text data to generate a list of tokens and predict a named entity for each token, detect and classify relationships between different entities, and define a query to traverse the knowledge graph in an order based on a set of rules, so that only entities associated with a value of interest will be returned, wherein the hardware processor assigns a unique identifier for each entity.
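The tokenization and named-entity step described above could look like the following hedged sketch using spaCy (the patent does not name a specific NLP library; the model name and example sentence are assumptions, and the small English model must be installed):

import spacy

nlp = spacy.load("en_core_web_sm")   # assumed model choice
doc = nlp("Drought in Australia in 2019 reduced wheat exports by 30%.")

# Token list with a predicted named entity (or none) for each token.
for token in doc:
    print(token.text, token.ent_type_ or "-")

# Entities carrying a value of interest (locations, dates, quantities) can be
# keyed by a unique identifier and returned from a rule-ordered graph traversal.
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Australia GPE, 2019 DATE, 30% PERCENT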

[00105] In some embodiments, the server 100 has non-transitory memory 110 storing computer models as causal graphs of nodes for risk factors. Each node corresponds to a risk factor and stores a quantitative value (uncertainty) derived from a forward-frequency distribution of possible values for the risk factor at a point in time. The causal graph has forward edges connecting the nodes to create scenario paths for the computer models. The edges encode dependencies between the nodes of the causal graph. Example risk factors include climate risk factors and the server 100 can store computer risk models for climate models. Other example risk factors include pandemic risk factors and the server 100 can store computer risk models for pandemic models including epidemiological models, economics models, distance models, and so on.

[00106] The causal graph can be a directed acyclic graph or Bayesian Network, for example. The causal graph can be referred to as a scenario tree for illustrative purposes. Each node of the graph corresponds to a risk factor and stores a quantitative value corresponding to radical uncertainty. The graph provides forward-frequency distribution data of possible values for the risk factor at the time horizon. The causal graph has edges connecting the nodes to create scenario paths for the risk model. The server 100 populates the causal graph of nodes by computing the forward-frequency distribution of possible values for the risk factor at different points in time using machine learning and structured expert judgement data to link the model to macro financial variables to encode a relationship between shocks and financial impact. The server 100 generates multifactor scenario sets using the scenario paths for the risk model to compute the likelihood of different scenario paths for the risk model.
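One possible, simplified data structure for such a causal graph is sketched below (class and field names are hypothetical; the forward-frequency distribution is reduced to a small value-to-likelihood map for illustration):

from dataclasses import dataclass, field

import networkx as nx

@dataclass
class RiskFactorNode:
    name: str
    horizon: str                                    # e.g. "2030"
    # Forward-frequency distribution: possible value -> likelihood.
    distribution: dict[float, float] = field(default_factory=dict)

G = nx.DiGraph()                                    # directed acyclic causal graph
co2 = RiskFactorNode("carbon_concentration", "2030", {450.0: 0.6, 500.0: 0.4})
temp = RiskFactorNode("world_avg_temperature", "2030", {1.5: 0.7, 2.0: 0.3})

G.add_node(co2.name, data=co2)
G.add_node(temp.name, data=temp)
G.add_edge(co2.name, temp.name)                     # forward edge encodes dependency

assert nx.is_directed_acyclic_graph(G)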

[00107] The CRCS 250 collects large datasets from different data sources (risk data 220, model data 190) and uses the machine learning pipeline 160 to process the large datasets to generate structured input data for computer models and scenario engine 180. The CRCS 250 is implemented by computer hardware executing instructions to generate integrated climate risk data for scenarios on mixed risk factors. In order for the large datasets of different data formats and types to be usable for computer systems, the CRCS 250 implements processing operations to align the data to different geographic locations in a way that is scalable. CRCS 250 can change the resolutions of the data views. CRCS 250 generates a causal based hierarchy that maps climate data and multiple risk factors in space and time. CRCS 250 can enable different resolutions of data based on different geographic locations. CRCS 250 can scale back to a location (city, region) over time and spatially. CRCS 250 can encode the causality of the changes. CRCS 250 encodes the chain of impacts on factors, when a trigger to a factor in turn triggers another factor. CRCS 250 generates a hierarchy of mapping for the data. CRCS 250 creates a computing structure of understanding for the data. The data can be in different formats and needs to be mapped or aligned or structured to be usable for computing models. The data can be input into transition scenario models for scenario engine 180 to generate forward looking prediction models.

[00108] CRCS 250 is designed to consistently classify transition and physical risk related to climate change. CRCS 250 provides a robust, consistent and scalable hierarchy for understanding and comparing exposure to climate-related risk. CRCS 250 is designed to respond to the need for a globally comprehensive, accurate, and auditable approach to defining and classifying climate risk factors and determining their economic impact. The CRCS 250 universal approach sheds light on both risks and opportunities presented by climate change, climate-related policy, and emerging technologies in an uncertain world.

[00109] CRCS 250 uses a physically consistent causal data hierarchy of measurable earth system variables and climate related phenomena covering any location (land and sea) on earth. The CRCS 250 implements a globally consistent geospatial standard that scales from climate risk regions down to the individual assets. The geospatial nature of the CRCS 250 means that any asset class or group of assets can be mapped by the CRCS 250 based on their geographic location. The standard provides a robust and consistent method for linking distributed assets at a global scale including their intermediary dependencies via supply chain disruptions.

[00110] The CRCS 250 provides a geospatial reference following the Intergovernmental Panel on Climate Change (IPCC) climate regions defined in the Climate Change Atlas. These regions are linked to climate transition scenarios (SSP and NGFS), climate elements and climate risks (chronic, acute and compound), through the climate modulators (for example, ENSO, IOD, Monsoon). The climate modulators are the causal link defined by climate science, through direct and indirect (teleconnection) influences on temperature and precipitation patterns across the global atmosphere, ocean and cryosphere.

[00111] As an illustrative example embodiment, the CRCS 250 structure can consist of different climate transition scenarios, climate regions, climate modulators, climate elements and climate risks, covering chronic, acute and compound climate risks (integrated climate risk factors generated dynamically from user interaction). The CRCS 250 defines an electronic mapping to represent a causal link between transition scenarios, modulators, elements and risks to a geographic region in a consistent and science driven methodology.

[00112] The server 100 can respond to requests from interface 140 for different use cases and risk factors. The CRCS 250 processes data from the different sources to generate input for the models.

[00113] The server 100 can implement a computer process for generating integrated climate risk data by processing climate data using the taxonomy to map climate data to climate transition scenarios, climate regions, climate modulators, climate elements and climate risks. The server 100 can implement an automated rating mechanism for processing data from a variety of different sources using a consistent taxonomy. The integrated climate data can be used for scenario generation to measure the financial impact of climate change, uniformly, at some horizon, on different business units across geographies or climate regions. A business unit can operate in multiple regions and the data can be mapped to those regions.

[00114] The server 100 can implement a computer process for generating integrated climate risk data to generate a set of climate stress scenarios to measure or estimate impact on a business unit. The output data can be used to generate visualizations that aggregate gains and losses and form a distribution of the gains (Upside) and losses (Downside).

[00115] The server 100 can provide a computer process for generating integrated climate risk data rating metrics. An example metric can be referred to as CaR, the Climate-risk adjusted Return. CaR is computed by dividing the Upside by the Downside as a measure of the risk-adjusted upside. The server 100 can generate visualizations of bar graphs for the distribution of the gains (Upside) and losses (Downside). A CaR of less than one implies a likely financial impact on profitability under these stresses. In this example, there is a section for material positive impact, a section for non-material impact, and a section for minor impact.
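A minimal sketch of the CaR computation described above (the function name and sample gains/losses are illustrative assumptions):

def climate_adjusted_return(gains: list[float], losses: list[float]) -> float:
    """CaR = Upside / Downside; a value below one implies a likely
    financial impact on profitability under the stress scenarios."""
    upside = sum(gains)
    downside = abs(sum(losses))
    return upside / downside if downside else float("inf")

print(climate_adjusted_return([10.0, 5.0], [-8.0, -12.0]))   # 0.75 -> CaR < 1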

[00116] The server 100 generates and manages climate models, pandemic models, and other example models to respond to different types of requests. The server 100 uses CRCS 250 to generate input data for the models in response to requests. For example, the server 100 uses CRCS to generate data to query existing climate models from different computer models and calculates climate risk indices. As another example, the server 100 queries existing pandemic/epidemiological model outputs from different computer models and calculates pandemic risk indices. Other models can be incorporated as third-party input via an application programming interface (API).

[00117] The server 100 has a hardware processor 120 with a communication path to the non-transitory memory 110 to process data from different data sources using the CRCS 250 and to populate the causal graph of nodes by computing the forward-frequency distribution of possible values for the risk factor at different points in time. The multifactor scenario sets are generated using the scenario paths for the computer model and scenario values are computed using the frequency distribution of possible values for the risk factors. In some embodiments, the hardware server 100 identifies macro risk factors in response to a request received from the user device 130 and generates the causal graph of nodes for the risk factors using the identified macro risk factors and dependencies between the risk factors encoded by the graph structure. The hardware server 100 generates the causal graph having forward edges connecting the nodes to create the scenario paths for the computer model. The causal relationships between risk factors are defined for each climate region. The encoding can seed the tree and arrange the nodes. In some embodiments, the relationships are updated by a named entity recognition (NER) optimiser that measures the distance between the stem words of risk factors in the scientific literature. The shorter the distance, the closer the stems are to each other and the stronger the relationship between risk factors, for example.

[00118] The server 100 can use the CRCS 250 to generate input data to automatically generate scenario sets using scenario engine 180 by identifying macro factors and generating a scenario tree for the factors. The server 100 can use the scenario engine 180 to generate forward distributions of possible values for each factor at the time horizon. The server 100 can generate a set of scenarios on the combinations of macro risk factors. The server 100 can identify the extreme values and the corresponding likelihoods for each factor. A scenario is a path in the scenario tree; the scenario engine 180 computes its likelihood as the product of the likelihoods along the path, and the value associated with the scenario is the sum of the values along the path.
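The path computation described above can be illustrated with a small, assumed tree structure (node names and numbers are toy values): likelihoods multiply and values add along each root-to-leaf path.

# Each node: (name, value, likelihood of this branch, children).
tree = ("root", 0.0, 1.0, [
    ("co2_up", 1.0, 0.5, [
        ("temp_up", 0.8, 0.6, []),
        ("temp_down", -0.3, 0.4, []),
    ]),
    ("co2_down", -1.0, 0.5, [
        ("temp_up", 0.8, 0.3, []),
        ("temp_down", -0.3, 0.7, []),
    ]),
])

def scenarios(node, value=0.0, likelihood=1.0):
    _name, v, p, children = node
    value += v                      # scenario value: sum along the path
    likelihood *= p                 # scenario likelihood: product along the path
    if not children:                # leaf: one complete scenario path
        yield value, likelihood
    for child in children:
        yield from scenarios(child, value, likelihood)

for value, likelihood in scenarios(tree):
    print(f"scenario value={value:+.1f} likelihood={likelihood:.2f}")

The four scenario likelihoods in this toy tree (0.30, 0.20, 0.15, 0.35) sum to 1.0, consistent with a spanning scenario set.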

[00119] The server 100 can use API gateway 230 to exchange data and interact with different devices 130 and data sources, including model data 190, risk data 220, vector parameters 200, and state parameters 210. The server can receive input data from model data 190, risk data 220, vector parameters 200, and state parameters 210 to populate the computer risk models, nodes, and the scenario sets.

[00120] The server 100 can identify the micro financial factors or effects that are impacted by a set of the macro climate risk factors. The server 100 can compute valuations using a macro to micro climate conversion for each scenario.

[00121] The processor 120 has a machine learning pipeline 160 with a natural language processing (NLP) pipeline 165, structured expert pipeline 170, indices 175 (e.g., climate indices), and an integrated model pipeline 185 to generate an ontology of risk (knowledge graph) from unstructured data. The processor 120 uses the machine learning pipeline 160 and expert pipeline 170 to link the computer model to macro financial variables to encode a relationship between risk shocks and financial impact. The processor 120 can respond to queries using the knowledge graph. The processor 120 uses the NLP pipeline 165 to extract information from unstructured text, classify the risk and the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure (knowledge graph) that can be queried by a client application (e.g. interface 140) via API gateway 230. Further details of the NLP pipeline 165 are provided herein in relation to Figures 19 to 27. For example, Figure 21 is a view of an example workflow of NLP pipeline 165 to process unstructured text and return a structured, codified and accessible data structure (knowledge graph).

[00122] The processor 120 implements a scenario engine 180 and generates multifactor scenario sets using the scenario paths for the computer models to compute the likelihood of different scenario paths for the computer models. The processor 120 transmits the multifactor scenario sets to a valuation engine to provide a causal map of computer risk to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports. The processor 120 stores the multifactor scenario sets in the non-transitory memory 110. The server 100 connects to a computer device 130 via a network 150.

[00123] The computer device 130 has a hardware processor having an interface 140 to provide visual elements by accessing the multifactor scenario sets. The computer device 130 can access the scenario data from its non-transitory memory by a processor executing code instructions. The interface updates in real-time in response to computations and data at server 100.

[00124] The hardware server 100 populates the causal graph of nodes with values (estimates) for the risk factors. In some embodiments, the hardware server 100 populates the causal graph of nodes using extremes of the distributions and a weight of the distributions above and below accepted values. In some embodiments, the hardware server 100 computes the forward-frequency distribution of possible values for the risk factor for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement from the forward-frequency distribution. The server 100 can use the structured expert pipeline 170 to collect data for computing distributions. In some embodiments, the hardware server 100 can filter outlier data using the structured expert pipeline 170 before computing the forward-frequency distribution. That way, the extreme values are more likely to be accurate representations and not outliers or noisy data. In some embodiments, outliers are not filtered, to represent the entire distribution; the outliers are valid data points, just at an exceedingly rare probability. The server 100 can apply data management techniques to normalise units and formats.

[00125] In some embodiments, the hardware server 100 continuously populates the causal graph of nodes by re-computing the frequency distribution of possible values for the risk factor at different points in time by continuously collecting data using the machine learning system and the structured expert pipeline 170.
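As a hedged sketch, the extraction of extreme values and up/down likelihoods from a forward-frequency distribution could proceed as follows (the synthetic samples, percentile thresholds, and accepted value are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.5, scale=0.4, size=10_000)   # possible future values

accepted = 1.5                                          # commonly accepted value
down_extreme, up_extreme = np.percentile(samples, [5, 95])
p_up = float(np.mean(samples > accepted))               # weight above accepted
p_down = float(np.mean(samples < accepted))             # weight below accepted

print(f"extremes: [{down_extreme:.2f}, {up_extreme:.2f}], "
      f"P(up)={p_up:.2f}, P(down)={p_down:.2f}")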

[00126] In some embodiments, the hardware processor populates the causal graph of nodes by computing the frequency distribution of possible values for the climate risk factor at different points in time using machine learning pipeline 160 (with NLP pipeline 165 and structured expert pipeline 170) to collect the possible values representing estimates of future uncertain values.

[00127] The server 100 provides a scalable automated process for generating scenarios that link different types of risk factors. For example, the server 100 can generate output to stress test the financial impacts of climate change in a scalable, consistent, auditable and reproducible manner.

[00128] The server 100 assembles a comprehensive knowledge graph or database including the most updated data on risk factors and economic drivers covering all regions around the world. The dataset contains a dynamic collection of recent, trusted, peer reviewed articles that can be updated regularly. The server 100 uses machine learning pipeline 160 (and NLP pipeline) to read these papers and summarize the uncertainty in risk factors at future points in time. The server 100 uses the structured expert pipeline 170 to exchange data with communities of experts using interface 140 and assess the sentiment on the future. The server 100 maintains complete and most current data on scientific experiments with a large number of the major models and their view on the future, month by month or over other time periods.

[00129] From all these data sources, the server 100 generates knowledge graphs or derived data that captures uncertainty in a set of related risk factors at numerous future horizons. The server 100 uses the data for generating scenarios and visual elements for interface 140 of user device 130.

[00130] As an example, risk factors can relate to pandemic risk factors. The server 100 can store computer risk models for pandemic models including epidemiological models, economics models, distance models, and so on. The server 100 can use the API gateway 230 to receive data from different data sources, including model data 190, risk data 220, vector parameters 200, and state parameters 210. The server can receive input data from model data 190, risk data 220, vector parameters 200, and state parameters 210 to populate the computer risk models, nodes, and the scenario sets.

[00131] The server 100 can generate visual elements, decisions and policies for interface 140 relating to the pandemic risk factors based on computations by scenario engine 180 with input from different data sources. For example, model data 190 and risk data 220 can include community and distancing model data from camera feeds, video input and video files. For example, vector parameters 200 can include epidemiological vector parameters (transmission rate, duration care facility, duration illness, illness probabilities, measures, effects) and economic vector parameters (household expenditure, income consumption, unemployment benefit uptake, imported input restrictions, shutdown, timing, labour productivity, export demand, wage subsidy update, government programs). For example, state parameters 210 can include epidemiological state parameters (current state infected, hospital, death, recovered, geography regions) and economic state parameters (time required to reopen, labour force, demographics, geographic controls). The server 100 uses the data to populate the computer risk models, nodes, and the scenario sets.

[00132] As another example, risk factors can relate to climate risk factors. The server 100 has computer models for climate risk models. The server 100 receives climate data from different types of data sources. The CRCS 250 processes the data from different types of data sources to generate input data and scenarios.

[00133] The CRCS 250 structure can consist of different climate transition scenarios, climate regions, climate modulators, climate elements and climate risks, covering chronic, acute and compound climate risks (integrated climate risk factors generated dynamically from user interaction). The CRCS 250 processes the data from different types of data sources to define an electronic mapping to represent a causal link between data elements for transition scenarios, modulators, elements and risks to data elements for geographic regions.

[00134] The server 100 manages risk factors for different types of risk. The server 100 can categorise different macro risks that define a risk potential into five hierarchical frameworks:

• Climate Risk

• Pandemic/Epidemic Risk

• Political/GeoPolitical Risk

• Cyber Security Risk

• Macroeconomic Risk

[00135] Each of these risk areas can be defined by the following risk hierarchy:

• Risk Conditions

• Risk Modulators

• Risk Elements

• Risk Factors

• Multi-Factor Scenario

[00136] The server 100 can define the different types of risk factors using the risk hierarchy. The path to a multi-factor scenario can be conditioned on the realisation of one or multiple macro risk hierarchies.

[00137] Fig. 1A shows an example risk hierarchy for generating multi-factor scenarios 1600 with risk conditions 1608, risk modulators 1606, risk elements 1604, risk factors 1602. The server 100 defines the risk hierarchy by encoding links from the risk conditions 1608 to the risk modulators 1606. The server 100 defines the risk hierarchy by encoding links from risk modulators 1606 to the risk elements 1604. The server 100 defines the risk hierarchy by encoding links from the risk elements 1604 to the risk factors 1602. The server 100 generates the multi-factor scenarios 1600 based on the different paths of the risk factors 1602.

[00138] Accordingly, server 100 can define different risk factors 1602 for scenario generation using the risk hierarchy of risk conditions 1608, risk modulators 1606, risk elements 1604, risk factors 1602.

[00139] For example, climate risk can be conditioned on the world following one of the Shared Socioeconomic Pathways (SSPs) defined by the Intergovernmental Panel on Climate Change (IPCC). These SSPs define the configuration of energy production and demand, population growth, economic growth and carbon emissions that lead to a specific warming potential in watts per meter squared globally over the next 80 years. Under a specific SSP trajectory, changes to the global climate are uncertain, as are their impacts and the response of a portfolio to the realisation of those changes at multiple geographical locations, such that the climate risks associated with western North America are different from those of central and eastern North America, as are the macroeconomic drivers of these climatic regions, including the economic/political/administrative boundaries within the climate regions.

[00140] Figs. 1B and 1C show the cascade of risk potentials conditioned for climate risk. The end result is a set of multi-factor climate risk scenarios that are used as input stressors to a macroeconomic model for stress testing. For this example, server 100 can define different geospatial regions 1610. The server 100 defines a risk hierarchy of climate risk conditions 1612, climate risk modulators 1614, climate risk elements 1616, and climate risk factors 1612. In this example, chronic and acute factors are shown.

[00141] Figure 1D is a view of example components of a CRCS 250 hierarchy. The example CRCS 250 structure consists of 37 climate transition scenarios (CTS0000), 62 climate (land and sea) regions (CR0000), 27 climate modulators (CM0000), 13 climate elements (CE0000) and 28 climate risks, covering chronic (CCR0000), acute (ACR0000) and compound (ICR0000) risks (integrated climate risk factors generated dynamically from user interaction). The CRCS 250 defines the causal link between transition scenarios, modulators, elements and risks to a region.
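For illustration only, the CRCS code prefixes listed above could be keyed to hierarchy levels in a simple lookup (the mapping structure and helper function are assumptions for demonstration, not the standard itself):

CRCS_LEVELS = {
    "CTS": "climate transition scenario",       # e.g. CTS0001
    "CR":  "climate region",                    # e.g. CR0001
    "CM":  "climate modulator",                 # e.g. CM0013
    "CE":  "climate element",                   # e.g. CE10003
    "CCR": "chronic climate risk",
    "ACR": "acute climate risk",
    "ICR": "integrated (compound) climate risk",
}

def level_of(code: str) -> str:
    """Return the hierarchy level for a CRCS classification code."""
    for prefix in sorted(CRCS_LEVELS, key=len, reverse=True):
        if code.startswith(prefix):
            return CRCS_LEVELS[prefix]
    raise ValueError(f"unknown CRCS code: {code}")

print(level_of("CM0013"))   # climate modulator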

[00142] The CRCS 250 can encode causality to enable the computing components to understand the radical risks associated with climate change. The CRCS 250 can start by selecting a transition scenario (carbon pathway). The transition scenario impacts the climate modulators, and these affect all regions in the world, but only certain geographic regions or locations are of interest to business units, for example. These points of interest intersect with a subset of climate regions. The points of interest impact climate risk, chronic and acute risks in the regions of interest. This in turn leads to the climate risk factors in the regions of interest. This process can be repeated by CRCS 250 for every transition scenario.

[00143] Figure 1E is another view of example components of a CRCS 250 hierarchy to generate integrated climate risk for scenarios.

[00144] The example CRCS 250 hierarchy shows a climate transition scenario (CTS0001) with a link to a climate (land and sea) region (CR0001). The climate region (CR0001) has a link to climate modulators (CM0013, CM0008, CM0009, CM0016). The climate modulators have links to climate elements (CE10003, CE10002, CE10003, CE10010, CE10012, CE10011). The climate elements have links to climate risks, including chronic, acute, and compound risks. The CRCS 250 dynamically generates climate risk data corresponding to integrated climate risk for scenario generation. The CRCS 250 defines the causal link between transition scenarios, modulators, elements and risks to a region.

[00145] The CRCS 250 encodes causality to enable the computing components to understand the radical risks associated with climate change. The CRCS 250 selects a transition scenario (CTS0001). The transition scenario impacts climate modulators, which affects all regions in the world. These points of interest intersect with a subset of climate regions. The transition scenario impacts the climate modulators and these impact climate elements. The points of interest impact climate risk, chronic and acute risks in the regions of interest. This in turn leads to the climate risk factors in the regions of interest. This process can be repeated by CRCS 250 for different transition scenarios.

[00146] CRCS 250 encodes a physically consistent, causal hierarchy. CRCS 250 encodes different locations or regions, and the geospatial standard scales from climate risk regions down to a physical asset. Any asset class or group of assets can be mapped by CRCS 250 based on its geographic location. CRCS 250 encodes a robust and consistent method for linking distributed assets at a global scale. CRCS 250 encodes a geospatial reference consistent with global standard climate regions.

[00147] The CRCS 250 can be reviewed periodically to ensure it keeps pace with the evolving climate change field and continues to deliver value to users, while maintaining the constancy required of a standard. When new information becomes available, modifications, additions or removals of a given risk factor or index from the CRCS 250 may be undertaken. To ensure transparency and quality of the standard, there can be a CRCS Committee of expert data communities.

[00148] Table 1 shows example Climate Change Regimes (greenhouse gas concentration trajectories) that can be used by the CRCS 250.

[00149] Table 2 shows example global climate regions and their classification codes.

[00150] Table 3 shows example global climate modulators and their classification codes.

[00151] Table 4 shows example global climate elements and their classification codes.

[00152] Table 5 shows example global climate risks (chronic and acute) and their classification codes.

[00153] The server 100 can implement a microservice event driven architecture that stores data in a data warehouse (memory 110) accessible over a network via secure API gateway 230. The input data is retrieved automatically and preprocessed before insertion into the data warehouse. Pre-processing of input data by server 100 can involve deduplication, unit normalisation and format alignment. The input data is called via a microservice over the network 150 to a modelling pipeline where data analytics and machine learning techniques are applied to the data to derive risk indices and develop risk models. The modelling processes are actuated on multiple central processing units (CPUs) working in unison and/or on multiple graphics processing units (GPUs) working in unison, which can be referred to as processor 120 for illustrative purposes. The data derived from these processes are returned to the data warehouse and made available via application programming interface (API) gateway 230 to the front end user interface 140 (at user device 130), as raw data streams or back into the platform for integration into the scenario generation engine 180.

[00154] Fig. 2 shows an example diagram of scenarios 250 defined by scenario paths for the climate models. The server 100 has non-transitory memory 110 storing climate models as causal graphs of nodes for climate risk factors. Each node corresponds to a climate risk factor. The server 100 generates multifactor scenario sets using the scenario paths for the climate models to compute the likelihood of different scenario paths for the climate models. The server 100 stores the multifactor scenario sets in the non-transitory memory 110. The example diagram shows scenario paths for example scenarios 250 and associated states and values for the models.

[00155] The server 100 uses the climate models and the causal graphs of nodes for climate risk factors to store quantitative values (uncertainty) for frequency distributions of possible values for the climate risk factors at different points in time. The causal graph has forward edges connecting the nodes to create scenario paths for the climate models. The server 100 populates the causal graph of nodes by computing the frequency distribution of possible values for the climate risk factor at different points in time.

[00156] The computer device 130 has an interface 140 to provide visual elements corresponding to the different scenario paths for the climate models by accessing the multifactor scenario sets at server 100 or its internal memory. The computer device 130 can access the scenario data from its non-transitory memory and generate the visual elements corresponding to the different scenario paths for climate models using its processor executing code instructions.

[00157] In this example, the interface 140 can display visual elements corresponding to causal graphs of nodes for climate risk factors. The interface 140 can display visual elements corresponding to different scenario paths of climate risk factors and display a scenario set as a collection of scenario paths of the causal graph or tree structure.

[00158] Example climate risk factors include: resulting atmospheric carbon concentration, world average temperature, the Indian Ocean Dipole, precipitation in East Africa, drought in Australia, and carbon emissions. These are examples and there are many climate drivers that affect the risk factors of climate change. Using the Indian Ocean Dipole as an example, the dipole is the difference in temperatures between the Eastern and Western areas of the Indian Ocean. As the world average temperature rises, the dipole may become more pronounced. As the sea heats up in the East, more evaporation occurs, making the air above East Africa more prone to precipitation and extreme weather. In contrast, as the water cools in the ocean bordering Northern Australia, the precipitation over Australia drops causing drought conditions coupled with high temperatures. The climate risk factors and process may be represented by a causal graph. Conditioned on a transition pathway for carbon, the visual elements can depict how carbon emissions can affect the concentration of greenhouse gases in the atmosphere, which, in turn, can affect the world average temperature rise in the future which, in turn, may exaggerate the Indian Ocean Dipole. As the Indian Ocean Dipole grows so might the precipitation in East Africa and drought in Australia.

[00159] In this example, the interface 140 can display visual elements corresponding to different scenario paths for the climate risk factors. The interface 140 can be used to visually represent relationships between the climate risk factors, and the impact a climate risk factor can have on another climate risk factor. Example queries include: Will carbon concentration grow more than scientists think, or less? If CO2 is more than expected, what will happen to the world average temperature rise? If world average temperature grows, will the Indian Ocean Dipole be larger or smaller than what scientists think?

[00160] This is an illustrative example and the interface 140 can display visual elements corresponding to different scenario paths for the other types of risk factors. The interface 140 can be used to visually represent relationships between the risk factors, and the impact a risk factor can have on another risk factor.

[00161] The example scenario tree shown in Figure 2 is a graph of nodes conditioned on a transition pathway for carbon emissions. Each path is a scenario made up of 3 risk factors: carbon concentration, world average temperature and Indian Ocean Dipole.

[00162] An example scenario path is the following scenario: Carbon concentration in the atmosphere grows less than scientist data indicates, and the world average temperature rise is lower than scientist data indicates; however the Indian Ocean Dipole is larger than was anticipated by scientist data. The structure can generate additional values for responding to queries: how much higher or how much lower? how likely is higher? how likely is lower?

[00163] The server 100 can generate scenarios and values that capture possible extreme climate risks. The values can be generated based on estimations of the radical uncertainty in the future of the climate risk factors at the nodes of the scenario tree. The example shows 8 different scenario paths as illustrative visual elements.

[00164] The server 100 can generate climate data sets stored at memory 110. The server 100 can define climate drivers and climate risk factors.

[00165] There can be different climate drivers, and in particular, global regional climate drivers. The server 100 can generate data values for different global regional climate drivers. The server 100 can generate nodes of the graph for different climate drivers. The server 100 can model different classifications or categories of climate risk factors or different types of risk. The server 100 can model climate risk factors with continuous risk, phase risk, emergent risk, chronic risk, acute risk, and compound risk, for example. In some embodiments, the different classifications of risk can be chronic, acute and compound.

[00166] The server 100 can generate an ontology of climate related risk as knowledge graphs or data structures. The server 100 can use machine learning pipeline 160 (and NLP pipeline 165) to process unstructured text and extract information from unstructured text, classify the risk, the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure that can be queried by a client application. The NLP pipeline 165 can use NLP classifiers. The server 100 can train the NLP classifiers separately for chronic and acute risks. The NER optimiser output is also used to group risks into compound potentials. Continuous risk can correspond to persistent/chronic environmental change. Phase risk can correspond to intensity/duration/frequency, acute events limited in space and time. Emergent risk can correspond to compounding risks, coincident in space and time, coincident in space but not time, distributed effects (i.e. supply chain impacts), tipping points and feedback loops.

[00167] The interface 140 can provide an annotation guide aligned with different risk classifications. The server 100 can generate annotations using climate risk data. The server 100 can train NLP classifiers (for NLP pipeline 165) on continuous and phase risk factors, for example. The server 100 can use different models to cover all risk factor classifications or types.

[00168] Macro factors can be directly linked to climate drivers and risk factors. The structured expert pipeline 170 can collect data to fill in the knowledge gaps around the magnitude of these relationships, quantifying the uncertainty in the global climate models.

[00169] The server 100 collects data from global and regional data sources on climate and climate extremes to compute values to quantify climate extremes. For example, the server 100 can process climate extreme indices recommended by the CCI/WCRP/JCOMM Expert Team on Climate Change Detection and Indices (ETCCDI). The data can be related to temperature (e.g., extreme heat or cold events, prolonged heat or cold spells) and precipitation (e.g., extreme rainfall, prolonged droughts). As other examples, Climdex provides historical data on these indices for > 32,000 weather stations across the globe, SPEI/SPI provides drought indices, IBTrACS provides data for global tropical storm tracks, intensity and duration, CMIP6 provides climate model datasets, and FFDI/FWI provides forest fire indices.

[00170] The server 100 can be used for different types of risk factors.

[00171] The server 100 uses a machine learning pipeline 160 and structured expert pipeline 170 to link the computer models to macro financial variables to encode a relationship between risk shocks and financial impact.

[00172] The server 100 uses machine learning pipeline 160 to extract data elements from different sources to populate the data sets at memory 110. The server 100 uses structured expert pipeline 170 to extract data elements from expert sources to populate the data sets at memory 110. The server 100 defines economic or financial macro factors or variables, and stores values for the financial macro factors or variables at memory 110. The system 100 uses macro-economic modelling to generate values for the macro factors or variables. The system 100 translates the macro factors or variables to micro factors for client profiles and portfolios. The server generates the scenario sets and computes values using data at memory 110.

[00173] For example, the server 100 uses machine learning pipeline 160 to extract climate data elements from different sources to populate the climate data sets at memory 110. The server 100 uses structured expert pipeline 170 to extract climate data elements from expert sources to populate the climate data sets at memory 110. The server 100 defines economic or financial macro factors or variables, and stores values for the financial macro factors or variables at memory 110. The server generates the climate scenario sets and computes climate risk values using data at memory 110.

[00174] The server 100 uses machine learning pipeline 160 for text mining of articles. The server 100 can have article keywords to capture the different ways that risk factors are described in the literature. This can be done by processing articles and recording all the ways that a given risk stressor is mentioned. The machine learning pipeline 160 can implement term frequency-inverse document frequency (tf-idf) to automatically extract relevant keywords for the various risk factors. The machine learning pipeline 160 can optimise the Named Entity Recognition pipeline of the NLP pipeline 165. The server 100 speeds up the annotation process to cover different types of risk factors.

[00175] For example, the server 100 can have ~250,000 article keywords to capture the different ways that climate risk factors are described in the literature. This can be done by processing articles and recording all the ways that a given climate stressor is mentioned (e.g., global warming, temperature increase, heat wave, extreme temperature, etc.). The machine learning pipeline 160 can implement term frequency-inverse document frequency (tf-idf) to automatically extract relevant keywords for the various climate risk factors (e.g., sea level rise, ocean acidification, drought, etc.). The machine learning pipeline 160 can optimise the Named Entity Recognition pipeline of the NLP pipeline 165.

[00176] The server 100 encodes a relationship between risk shocks and financial impact using data from the machine learning pipeline 160 and structured expert pipeline 170 to link the computer risk model to macro financial variables. The interface 140 can also be used to collect expert data for the structured expert pipeline 170. The server 100 can compute probability distributions from expert data captured using interface 140 to populate the nodes of the causal graphs for risk factors.

[00177] Figure 3 is a graph of distributions for data collected by the structured expert pipeline 170. The graph shows probability distributions from data captured using survey fields of interface 140. The server 100 can compute the frequency distribution of possible values for the risk factors captured by the interface 140 and structured expert pipeline 170. The values captured by the interface 140 can map to different points in time to add time to the values of the nodes. The structured expert pipeline 170 collects input data as estimates of future uncertainty for individual risk factors. For example, the structured expert pipeline 170 can use interface 140 to poll for estimates of future uncertainty for individual risk factors.

[00178] The server 100 can also generate estimates of future uncertainty for individual risk factors using the machine learning pipeline 160. The server 100 uses the machine learning pipeline 160 to process data sources to generate expected values for the risk factors. The server 100 derives these data values by examining the latest trusted scientific data sources and other data on these factors using machine learning. The server 100 uses the machine learning pipeline 160 for automated extraction of risk data from scientific journals. For example, for the climate risk factor 'Sea Level Rise', the machine learning pipeline 160 can process 25,000 documents for data extraction.

[00179] The server 100 represents the estimate of future uncertainty for an individual risk factor by its distribution of possible values collected by the machine learning pipeline 160 and structured expert pipeline 170. The server 100 uses the distributions to compute values for the nodes of risk factors for some future point in time. The server 100 collects data that embodies the full range of uncertainty in the factors. The server 100 computes scenarios on combinations of related risk factors, or any other radically uncertain variables, using individual distributions of possible values for each of the factors at some future point in time (the time horizon).

[00180] The server 100 uses the structured expert pipeline 170 to link the computer risk model to macro financial variables to encode a relationship between risk shocks and financial impact.

[00181] The server 100 uses the structured expert pipeline 170 to collect data representing the collective wisdom of a large group of experts. The server 100 can collect extreme values as estimations as discrete sets of possible scenarios for combinations of factors. The server 100 uses the structured expert pipeline 170 to obtain distributions of possible outcomes that can span the range of possible values for the risk factors. The distributions can provide estimates for the possible range of upside and downside movements in the risk factors and likelihood of occurrence of the upside and downside range.

[00182] The server 100 can combine the data in the forward distributions with scenario trees to get both scenarios for the combinations of factors as well as estimates of the likelihood of these scenarios occurring. The values for the up and down ranges and the likelihoods of the up and down movements are used by the server 100 to complete the data required to evaluate the tree. The extremes and odd combinations span the range of possible outcomes for the scenario space. The server 100 can use the collected data to identify extreme values. The server 100 can estimate the unknown future values periodically and continuously.

[00183] The server 100 can represent a spanning set of scenarios of risk factors as different causal graphs of nodes. The ordering of the nodes in the tree can map to dependencies between factors. For example, the server 100 can represent a spanning set of scenarios of risk factors as a tree of nodes with each factor exhibiting states with corresponding probabilities. The server 100 computes the forward distribution of each climate risk factor at the time horizon for a spanning set of scenarios of risk factors.

[00184] The server 100 can automatically generate scenario sets by identifying macro factors and generating a scenario tree for the factors. The server 100 can generate forward distributions for each factor at the time horizon. The server 100 can identify the extreme values and the corresponding likelihoods for each factor. A scenario is a path in the scenario tree, its likelihood is the product of the likelihoods along the path and the value associated with the scenario is the sum of the values along the path.

[00185] The server 100 can continuously collect expert data using the structured expert pipeline 170. There can be a large number of expert engagements over days, weeks and months to continuously provide expert data to compute values for the distributions. The server 100 can represent the uncertainty at a node by a frequency distribution of possible values that the factor might assume in the future. The graph represents the possible range of values captured by interface 140 of user device 130, and their frequencies. The server 100 can focus on the extremes of these distributions and the weight of the distribution above and below the commonly accepted values to generate quantitative values for nodes of the tree and likelihoods for these scenarios. The server 100 can generate reports, such as the best and worst-case scenarios in the scenario set that it generates. The data can be stored in a nested hierarchical database. New data is appended to the hierarchy and timestamped. The server 100 can traverse the database in both directions, pulling data points for various points in time. In some examples, the default is to always reference the latest timestamped data point.
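A toy sketch of the append-and-timestamp behaviour described above (the actual nested hierarchical database is not specified; this flat key-value version only illustrates the default read of the latest timestamped point):

from datetime import datetime, timezone

store: dict[str, list[tuple[datetime, float]]] = {}

def append(factor: str, value: float) -> None:
    """New data is appended and timestamped rather than overwritten."""
    store.setdefault(factor, []).append((datetime.now(timezone.utc), value))

def latest(factor: str) -> float:
    """Default read: the most recently timestamped data point."""
    return max(store[factor])[1]

append("sea_level_rise_mm", 3.3)
append("sea_level_rise_mm", 3.4)
print(latest("sea_level_rise_mm"))   # 3.4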

[00186] In some embodiments, the hardware processor 120 populates the causal graph of nodes using extremes of the distributions and a weight of the distributions above and below accepted values. The distributions are derived from different sources. Examples include: numerical model outputs, NLP of scientific literature and structured expert judgement (SEJ). The server 100 can use different sources and combinations of sources. The sources are also hierarchical, with model output being the baseline data that is refined by the NLP and SEJ.

[00187] Figure 4 is a diagram of the expert system (structured expert judgement system 704 in this example), risk factors 702, scenario sets, and financial variables 706 for the valuation engine. The structured expert judgement system 704 links the computer models (risk factors 702) to macro financial variables to encode the financial variables 706 as a relationship between risk shocks and financial impact. The scenario sets include values for risk factors and macro financial variables 706. The server 100 can link risk scenarios to financial impacts to generate values for portfolios or institutions.

[00188] The processor 120 transmits the multifactor scenario sets to a valuation engine to provide a causal map of risk factors to micro shocks for the valuation engine to translate the macro financial variables to micro shocks and output portfolio reports. In some embodiments, the server 100 connects to a computer device 130 via a network 150 to generate updated visual elements at the interface 140 based on the valuation engine results.

[00189] For example, the server 100 can encode the relationship between climate shocks and their corresponding financial impacts, such as how drought and high temperatures in Australia affect the AUS dollar and interest rates. The structured expert pipeline 170 can collect this information at interface 140. The interface 140 can prompt Australian traders, economists and policy makers for input data on how a drought might affect macro financial factors in Australia such as AUS GDP, the Aussie dollar, interest rates, and so on. This data can give the server 100 input to compute the uncertainty distributions to populate node values. The server 100 can translate macro financial variables to micro shocks for different client profiles and portfolios.

[00190] The server 100 computes the full causal map between carbon emission regimes and impact on specific portfolios. From this causal map, the server 100 can generate a scenario tree (graph) and compute uncertainty distributions on all the risk factors involved. This is how server 100 computes multifactor scenarios for the future that link climate and financial impacts. In this example of Fig. 7, there is shown a transition scenario corresponding to the IPCC regime of RCP8.5 as the root. This is how the server 100 can link transition and physical risk in a consistent manner (TCFD).

[00191] Figure 5 is a diagram of valuation engine 804 and report generation 806 for interface 140. The server 100 can connect with the valuation engine 804 to use output data 800 of the multifactor scenario sets to connect with client computer systems 802 to generate client output and portfolio valuations. The interface 140 can be used for different client profiles. The output results can include stress results and client reports for the scenario sets with climate risk factors, financial macro variables and micro shocks.

[00192] The server 100 can apply the micro shocks of the output data 800 to all the portfolios managed by a fund, for example. Different subsets of micro shocks might apply to each portfolio. The valuation can be done for every scenario on every portfolio. One set of micro stress shocks can be valid for all portfolios. This high level of consistency allows for the creation of benchmark output results. An example client can see how each and every portfolio will compare to the average of exposure by other groups.

[00193] Figure 6 is an example display of visual elements for an interface 140 of user device 130 that interacts with server 100 to generate updates for the visual elements using climate models and scenario data as an illustrative example.

[00194] The example interface 140 shows visual elements mapping to economic risk. The visual elements can depict physical assets at risk (GAR 2015 economic exposure). The interface 140 can show different classifications of risk. The visual elements can depict portfolio risk, sector risk (industrials, energy, materials), and sector trade between nations (China for materials, Netherlands for industrials), for example. The server 100 can map climate risk values to economic risk. The interface 140 can: indicate georeferenced physical risk; identify risk profile to assets, sectors, supply chains; and identify transition risk factors (i.e. geopolitical instability, policy changes, regulation, social licence to operate).

[00195] The server 100 uses the components 1000 to systematize the generation of scenarios so the scenario sets can be generated automatically without prior assumptions on underlying probability distributions. The server 100 can receive future macro events as input data to trigger a request for forward-looking generation of spanning scenario sets to populate the front-end interface 140. The worst and best scenarios are included in the generated set. The server 100 can minimize bias introduced by human input.

[00196] Figure 7 is a data flow diagram showing data exchange between different hardware components 1000 of server 100. The example components 1000 include a data warehouse 1002, machine learning pipeline orchestration 1004, scenario generation platform 1008, and back- testing and validation system 1006. The hardware components 1000 exchange data using an API and populate the front-end interface 140 to update its visual elements.

[00197] Figure 7A shows the data warehouse 1002 with article data collection, structured expert judgement data, climate data and integrated models. The data warehouse 1002 with article data collection receives input from article query API and an automated service (for key value store and deduplication) to populate data sets. The data warehouse 1002 with structured expert judgement data receives input from survey automation API and data exchange services to populate data sets for expert database and survey response data. The data warehouse 1002 with climate data receives input from climate data retrieval and data exchange services to populate climate data sets. The data warehouse 1002 with integrated models receives input from model API and model services to populate climate data sets. The data warehouse 1002 exchanges data with the machine learning pipeline orchestration 1004 and the back-testing and validation system 1006 to update the interface 140 at client device 130.

[00198] Figure 7B shows the machine learning pipeline orchestration 1004 that can be used for the machine learning pipeline 160 of server 100, for example. For this example, the machine learning pipeline orchestration 1004 has an NLP pipeline 165, an expert judgement pipeline 170, climate indices 175, and an integrated model pipeline 185. Each pipeline can be implemented using a data ingestion pipeline, data repository, model training, trained models, model inference, and model serving. The machine learning pipeline orchestration 1004 exchanges data with the data warehouse 1002 and scenario generation platform 1008 to update the interface 140 at client device 130.

[00199] The machine learning pipeline orchestration 1004 has a natural language processing (NLP) pipeline 165, structured expert pipeline 170, indices 175 (e.g., climate indices), and an integrated model pipeline 185 to generate an ontology of risk from unstructured data and a knowledge graph with an accessible data structure of data elements and connections between the data elements. The processor 120 can respond to queries using the knowledge graph. The processor 120 uses the NLP pipeline 165 to extract information from unstructured text, classify the risk and the multitude of dimensions that define the risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure (knowledge graph) that can be queried by a client application (e.g. interface 140) via API gateway 230.

[00200] Further details of the NLP pipeline 165 are provided herein in relation to Figures 19 to 27.

[00201] Embodiments described herein provide an ontology of climate related risk as a knowledge graph by extracting data from unstructured text. For example, the graph can be constructed from unstructured text in a large number of publications. The knowledge graph can be used to query and aggregate the impact, cost, magnitude and frequency of risk for different geographic locations in the world. The graph provides a historical view of data, a (near) present view of data, and a forward looking view of data. For example, the forward projections can be conditioned by a climate and economic scenario (e.g., transition scenario). To construct this knowledge graph, the server 100 uses an NLP pipeline 165 to extract information from unstructured text, classify the risk and the multitude of dimensions that define a risk, quantify the interconnectedness of risk factors, and return a structured, codified and accessible data structure that can be queried by a client application. The server 100 enables large-scale information extraction and classification of climate related risk.

[00202] The NLP pipeline 165 implements keyword extraction. The NLP pipeline 165 extracts relevant keywords from large amounts of text (e.g. scientific journal articles) by combining climate, economic, and financial domain knowledge from the structured expert pipeline 170 with NLP tools and big data analytics. For example, the NLP pipeline 165 can condense unstructured text from large amounts of articles into candidate words and phrases, which the structured expert pipeline 170 can refine to extract domain-specific keywords from. These keywords can be either single words or short phrases composed of consecutive words, which can be referred to as ‘n-grams,’ where n represents the word length.

The NLP pipeline 165 preprocesses the article data by removing punctuation, special characters, and common stop words. For example, the NLP pipeline 165 preprocesses text data by removing common stop words, including both the English stop words from the nltk Python library (stopwords.words()) and the custom stop words listed in Table 6. Once preprocessed, the NLP pipeline 165 can lemmatize the article data by replacing each word with its basic syntactic form or lemma. Lemmatization refers to the process of grouping the variant forms of a word under a single base form. For each risk factor, the NLP pipeline 165 merges the relevant publications or articles into a single corpus. The NLP pipeline 165 compares the risk-specific corpus with a baseline corpus generated by merging up to a number (e.g., 4000) of randomly chosen articles from a pool or collection of queried publications, excluding those articles belonging to the corpus for the risk factor of interest. The NLP pipeline 165 then extracts a top number (e.g. 2000) of n-grams (n = 1~5) using different ranking processes. For example, a ranking process can involve selecting the top number of n-grams with the highest pooled Term Frequency x Inverse Document Frequency (tf-idf) score to obtain the final n-grams for the risk factors.

[00203] Table 6 shows an example list of custom stop words:
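
As an illustrative, non-limiting sketch of this extraction step in Python (assuming scikit-learn and nltk with its stopwords and wordnet data are available; the custom stop words shown are hypothetical stand-ins for Table 6):

    # Sketch: rank n-grams (n = 1..5) by pooled tf-idf of the risk corpus
    # against a baseline corpus of randomly chosen articles.
    import re
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    from sklearn.feature_extraction.text import TfidfVectorizer

    CUSTOM_STOP_WORDS = {"figure", "table", "doi", "author"}  # hypothetical stand-ins for Table 6
    STOP_WORDS = set(stopwords.words("english")) | CUSTOM_STOP_WORDS
    LEMMATIZER = WordNetLemmatizer()

    def preprocess(text):
        # Remove punctuation and special characters, drop stop words, lemmatize.
        words = re.sub(r"[^a-z0-9\s]", " ", text.lower()).split()
        return " ".join(LEMMATIZER.lemmatize(w) for w in words if w not in STOP_WORDS)

    def top_ngrams(risk_articles, baseline_articles, top_n=2000):
        # Pool the risk-specific articles into one document, add the baseline
        # articles as separate documents, and rank by pooled tf-idf score.
        docs = [" ".join(preprocess(a) for a in risk_articles)]
        docs += [preprocess(a) for a in baseline_articles]
        vectorizer = TfidfVectorizer(ngram_range=(1, 5), sublinear_tf=True)
        scores = vectorizer.fit_transform(docs)[0].toarray().ravel()
        terms = vectorizer.get_feature_names_out()
        ranked = sorted(zip(terms, scores), key=lambda pair: -pair[1])
        return [term for term, score in ranked[:top_n] if score > 0]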

[00204] The NLP pipeline 165 implements text network analysis. For example, the NLP pipeline 165 can employ a textnet method that enables relational analysis of meanings in text to explore the network structure between risk factors based on their shared keywords and phrases. This method uses NLP and graph theory to detect communities in bipartite networks. A textnet represents a bipartite network of documents (i.e. risk factors) and terms (i.e. n-grams), for which links only exist between the two types of nodes. The NLP pipeline 165 uses the tf-idf score to quantify the links between documents and terms, which in turn creates links between risk factors based on their shared use of n-grams.

[00205] For example, the NLP pipeline 165 can first process the queried articles to create a text corpus for each risk factor. An n-gram can only be considered as a potential link for a given risk factor if it meets the following conditions:

[00206] (i) the n-gram must occur, on average, at least once in every five articles that were queried for that risk factor (i.e., frequency per article (fpa) >= 0.2) and (ii) the n-gram must have high relevance to at least one of the climate risk factors (i.e., the n-gram must be in the list of extracted keywords).

[00207] The NLP pipeline 165 extracts all n-grams (n = 1~5) with an fpa >= 0.2 from the processed text corpora for each risk factor (i.e. the first condition). The NLP pipeline 165 merges the extracted n-grams into a single list of relevant keywords, removing duplicates and filtering out those not in the list of extracted keywords (i.e. the second condition). A frequency table (of both total frequency and fpa) of n-grams can be extracted for each of the risk factors. The n-grams in the table can be used for forming the connection and dependency between different risk factors, as they may be relevant to more than one risk factor, the driver of the risk, the impact of the risk, or connections between any or all of the above.

[00208] The NLP pipeline 165 creates a corpus and frequency tables for each risk factor, which represent the bipartite network of terms (n-grams) and documents (risk factors). The NLP pipeline 165 combines the n-grams into a series of repeating n-grams for each risk factor. For example, the repeating n-grams can be based on a repetition number derived from a transformed fpa, defined as the original fpa multiplied by 10 and rounded to the nearest integer. This transformation eliminates the rare n-grams (e.g. fpa < 0.05, which rounds to zero repetitions) and differentiates between low and high frequency n-grams. From this, a dataframe with two columns can be created: a vector of risk factors and a vector of strings with repeating n-grams, where the repetition number for each term is proportional to its fpa in the original articles for the given risk factor.

[00209] The NLP pipeline 165 then converts this dataframe to a textnet corpus, from which textnets can be created to represent (i) the bipartite network of risk factors and keywords and (ii) the network of risk factors based on their shared keywords.

[00210] The n-grams are not necessarily domain-relevant to only one specific risk factor. Instead, these keywords and phrases might be relevant to more than one risk factor, the driver of the risk, the impact of the risk, or even teleconnections between any or all of the above. Thus, the n-grams can be used to form the connection and dependency between different risk factors.

[00211] Figure 19 is a view of an example network analysis interface with visual elements for risk factors and keywords identified through natural language processing and expert domain knowledge. In this example, the NLP pipeline 165 creates a textnet that links risk factors to keywords. The NLP pipeline 165 can tokenize the textnet corpus to handle individual words rather than phrases. Tokenizing on the predefined connected phrases allows the NLP pipeline 165 to customize the keywords to the predetermined n-grams instead of using the default algorithm to search for n-grams. The minimum number of documents can be specified so as to include all keywords. The network can be converted to an RF-KW (risk factor to keyword) network for a subset of risks specific to climate. As an illustrative example, the network can consist of 21 docs (the names of the risk factors), 394 terms (the number of keywords), and 1181 edges (the total number of links between risk factors and keywords). In Figure 19, the large circles indicate climate risk factors and the small circles indicate keywords. Risk factors that are closely related to each other can be indicated using similar color and close proximity. Sublinear (logarithmic) scaling can be used for calculating the tf-idf edge weights.

[00212] Figure 20 is a view of an example interface with visual elements representing a network of risk factors and keywords. In this example, the NLP pipeline 165 creates a textnet that links risk factors to one another, based on shared keywords. The minimum number of documents is specified (e.g. as a parameter configuration) such that each keyword is relevant to at least two risk factors (i.e. forms at least one link in the network of risk factors). As an example, a resulting RF-RF network can consist of 21 docs (risk factors), 230 terms (keywords), and 1017 edges (links between risk factors and keywords). The NLP pipeline 165 can then project the bipartite network into a single-mode network of risk factors, connected through the keywords they share in common, where the edge weights between risk factors are the sum of the tf-idf for the overlapping keywords. In Figure 20, the circles represent risk factors and line thickness represents strength of connections based on common keywords. By extracting strong connections, the NLP pipeline 165 forms the overall network structure of how risk factors are related to one another, including clusters (e.g. related to increasing temperature, related to flooding, related to storms).
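
The projection step can be sketched as follows, assuming the networkx library; the bipartite edges and tf-idf weights are invented for illustration, and summing both endpoints' tf-idf over shared keywords is one reading of the edge-weight rule:

    # Sketch: project the bipartite risk-factor/keyword network into a
    # single-mode network of risk factors linked by shared keywords.
    from itertools import combinations
    import networkx as nx

    # Hypothetical bipartite edges: (risk factor, keyword) -> tf-idf weight.
    TFIDF = {
        ("drought", "soil moisture"): 0.8, ("drought", "precipitation"): 0.6,
        ("flooding", "precipitation"): 0.7, ("flooding", "sea level"): 0.5,
    }

    bipartite_net = nx.Graph()
    for (risk_factor, keyword), weight in TFIDF.items():
        bipartite_net.add_node(risk_factor, kind="risk_factor")
        bipartite_net.add_node(keyword, kind="keyword")
        bipartite_net.add_edge(risk_factor, keyword, weight=weight)

    risk_factors = [n for n, d in bipartite_net.nodes(data=True) if d["kind"] == "risk_factor"]
    projected = nx.Graph()
    projected.add_nodes_from(risk_factors)
    for u, v in combinations(risk_factors, 2):
        shared = set(bipartite_net[u]) & set(bipartite_net[v])  # keywords common to both
        if shared:
            # Edge weight: sum of tf-idf over the overlapping keywords (both sides).
            weight = sum(bipartite_net[u][k]["weight"] + bipartite_net[v][k]["weight"] for k in shared)
            projected.add_edge(u, v, weight=weight)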

[00213] The NLP pipeline 165 extracts keywords that can partially show that various climate risk factors are connected to one another by common climate drivers and impacts. Many keywords are shared by several risk factors, which forms the links of a network of risk factors.

[00214] Figure 21 is a view of an example process for NLP pipeline 165. The example components of the process for the NLP pipeline 165 shown are Sentence Boundary Detection (SBD), Part of Speech (POS) tagging, Dependency Parsing (DP), Named Entity Recognition (NER), Relation Extraction (RE), and Entity Linking (EL). This example pipeline process can be applied to, and expanded for, any risk factor. The server 100 can respond to queries using the example pipeline process for different risk factors.

[00215] Risk-specific queries can be defined and used by the server 100 to extract raw texts from relevant publications. Following a preprocessing stage by the server 100, a custom tokenizer can slice the sentence strings into the smallest working unit of a list of tokens. The machine learning pipeline 160 can have a machine-learning (ML) sentence boundary detector (SBD) to slice the list of tokens into multiple lists, where each list represents a sentence. For each sentence, the NLP pipeline 165 predicts the part-of-speech tag (POS tag), dependency structure, and named entity for each token within the sentence. The NLP pipeline 165 uses a hybrid approach, referred to as Pattern Matching and Classification, to classify the preliminary relationship between different entities among the predefined set. After the NLP pipeline 165 filters out the irrelevant sentences, the server 100 uploads the extracted information (i.e. entities, POS tag, lemma, relationship, sentence, doc ID, metadata, etc.) to the graph database stored in the non-transitory memory 110.
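
A minimal sketch of this flow using spaCy (assuming the en_core_web_sm model is installed); the relevance filter and record fields are simplified placeholders rather than the pipeline's actual rules:

    # Sketch: segment sentences, annotate each token, keep keyword-relevant
    # sentences, and collect records for upload to the graph database.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    def extract_records(doc_id, raw_text, keywords):
        doc = nlp(raw_text)
        records = []
        for sent in doc.sents:  # sentence boundary detection
            if not any(kw in sent.text.lower() for kw in keywords):
                continue  # placeholder relevance filter
            records.append({
                "doc_id": doc_id,
                "sentence": sent.text,
                "tokens": [(t.text, t.pos_, t.lemma_, t.dep_) for t in sent],
                "entities": [(ent.text, ent.label_) for ent in sent.ents],
            })
        return records  # records to be uploaded to the graph database in memory 110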

[00216] The server 100 defines a query to traverse the knowledge graph (stored in memory 110) in an order based on a set of grammatical rules, for example, such that the query will only return entities associated with the value of interest. The results are collected and normalized through entity linking to resolve polysemy and synonymy. The server 100 uploads the normalized results to memory 110 or gateway 230 (for data exchange). The results can be made available to clients directly through the interface 140 at client device 130 and via secure API gateway 230.

[00217] The NLP pipeline 165 implements preprocessing. After converting publications and articles to plain text, the NLP pipeline 165 removes URLs, special characters, headers, footnotes, style strings, and redundant whitespace, and replaces the missing characters with context-specific placeholders. The NLP pipeline 165 can replace derived or foreign characters with their English base characters, for example. The NLP pipeline 165 preprocessor is based on regular expressions that detect these patterns to replace or remove the unwanted matches. The NLP pipeline 165 preprocessor can recognize references during the process, which may be temporarily removed to improve the performance of the dependency parser (DP).
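
A minimal sketch of such a regular-expression preprocessor; the specific patterns and the placeholder string are assumptions for illustration:

    # Sketch: regex-based cleanup of article text before tokenization.
    import re
    import unicodedata

    def preprocess(text):
        # Map derived/accented characters onto their English base characters.
        text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
        text = re.sub(r"https?://\S+", " <URL> ", text)     # URLs -> context-specific placeholder
        text = re.sub(r"\[\d+(?:,\s*\d+)*\]", " ", text)    # temporarily remove reference markers
        return re.sub(r"\s+", " ", text).strip()            # collapse redundant whitespace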

[00218] The NLP pipeline 165 implements a tokenizer. The NLP pipeline 165 tokenizer is modified from spaCy's tokenizer by adding more symbols into the infixes (i.e. token delimiters), and by re-tokenizing (splitting or merging) specific subsets of tokens that match predefined patterns to tune the granularity for processing by further components.
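
This kind of modification can be sketched with spaCy's documented tokenizer customization; the added infix pattern is a hypothetical example:

    # Sketch: extend spaCy's default infixes so extra symbols act as delimiters.
    import spacy
    from spacy.util import compile_infix_regex

    nlp = spacy.load("en_core_web_sm")
    # Hypothetical extra delimiters, e.g. splitting "1~5" or "33/66" into tokens.
    extra_infixes = [r"(?<=[0-9])[~/](?=[0-9])"]
    infix_re = compile_infix_regex(list(nlp.Defaults.infixes) + extra_infixes)
    nlp.tokenizer.infix_finditer = infix_re.finditer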

[00219] Figure 22 is a view of an example of tokenization and illustrates tokens of the tokenizer after processing raw text. The output is generated by server 100 by transforming text data received as input. The server 100 generates symbols for the data structures.

[00220] The NLP pipeline 165 implements sentence boundary detection (SBD). The NLP pipeline 165 SBD component can be built from libraries based on a bidirectional long short-term memory (Bi-LSTM) neural network from the Universal Dependencies project, for example. The NLP pipeline 165 SBD output can be configured in CoNLL-U format, which can be parsed into a list of sentences by an example package, conllu.

[00221] The NLP pipeline 165 implements part-of-speech (POS) tagging, or grammatical tagging, which classifies each token into a single grammatical form (e.g. Noun, Verb, Adjective, etc.) based on its meaning and context. The NLP pipeline 165 POS tagging consists of a linear layer with a softmax activation function on top of the token-to-vector layer from a transformer-based (RoBERTa) model trained on OntoNotes 5, a large-scale dataset. The NLP pipeline 165 POS tagging accuracy rate can be tracked and validated. The list of all possible grammatical forms for the NLP pipeline 165 POS tagging is defined in Universal Dependencies (UD).

[00222] The NLP pipeline 165 implements a dependency parser, which takes a single sentence as input and uses rule-based or machine learning techniques to generate the directional grammatical tree for the sentence. The NLP pipeline 165 dependency parser recognizes each directional arc in a parse tree as a dependency relationship from its head (parent) to its target (child). The NLP pipeline 165 dependency parser can extract relations from the texts based on this dependency structure.

[00223] Figure 23 is a view of an example of parts of speech tagging and dependency parsing. In this example sentence, the token 'a' is dependent on the token 'decrease' via the determiner (det) relationship. All tokens have a parent, except the root token. For example, the verb 'projected' is the root token of this sentence.
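
The arcs described for Figure 23 can be inspected programmatically; a small sketch using spaCy on a paraphrase of the example sentence:

    # Sketch: walk the dependency tree; each arc points from a head (parent)
    # to its target (child), and the root token is its own head.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("A decrease of 33% in drought frequency is projected under RCP4.5.")
    for token in doc:
        # e.g. 'A' depends on 'decrease' via the det relationship
        print(token.text, token.dep_, "<-", token.head.text)
    root = [t for t in doc if t.head is t][0]  # e.g. the verb 'projected'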

[00224] The NLP pipeline 165 implements Named Entity Recognition (NER), locating and classifying named entities in unstructured texts. The NLP pipeline 165 named entity recognizer can be implemented as a hybrid of a collection of pattern rules and pre-trained machine learning models to increase the recall rate of the NER task. For example, the default spaCy NER or the DeepPavlov NER can be used as the first component of the NER system, and the texts can be re-examined with the predefined pattern rules to capture missing named entities.
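
A sketch of such a hybrid recognizer using spaCy's statistical NER followed by a rule-based EntityRuler; the SCENARIO label and the patterns are assumptions for illustration:

    # Sketch: run the pre-trained NER first, then re-examine the text with
    # predefined pattern rules to capture named entities the model misses.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    ruler = nlp.add_pipe("entity_ruler", after="ner")  # rules run after statistical NER
    ruler.add_patterns([
        {"label": "SCENARIO", "pattern": [{"TEXT": {"REGEX": r"^RCP\d"}}]},   # e.g. RCP4.5
        {"label": "PERCENT", "pattern": [{"LIKE_NUM": True}, {"ORTH": "%"}]},
    ])
    doc = nlp("An increase of 33.33% in drought frequency was projected under RCP4.5.")
    print([(ent.text, ent.label_) for ent in doc.ents])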

[00225] Figure 24 is a view of an example of named entities within a sentence. In this example sentence, the NLP pipeline 165 named entity recognizer detected the boundaries of '33.33%' and 'RCP4.5' and classified them correctly into their entity types.

[00226] Table 7 shows the list of important entity types. The DIMENSION entities are the collection of keywords that users might be interested in, and the MENTION entities are the risk-specific keywords selected by the structured expert pipeline 170 from the list generated by the NLP pipeline 165 keyword extraction algorithm.

[00227] For some tokens that cannot fully convey their meanings without their neighbors, the NLP pipeline 165 utilizes syntactic parsing and POS tagging to recognize the meaning of the tokens. For example, 'the number of deaths increased 123 in a week' will be expressed as the list of tokens ["the", "number", "of", "deaths", "increased", "123", "in", "a", "week"]. The NLP pipeline 165 may only capture ["number", "123", "a week"] in the relation extraction phase and can utilize information from syntactic parsing and POS tagging to merge the associated tokens as a whole (i.e. ["the", "number", "of", "deaths", "increased", "123", "in", "a", "week"] -> ["the number of deaths", "increased", "123", "in", "a week"]) to establish comprehensive data points in the further steps.
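
One way to sketch this merging step is with spaCy's retokenizer; grouping by noun chunks is an approximation of the syntactic grouping described above:

    # Sketch: merge syntactically associated tokens into single tokens so that
    # multi-word expressions form one data point.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("the number of deaths increased 123 in a week")
    with doc.retokenize() as retokenizer:
        # Noun chunks approximate the associated groups (e.g. "the number",
        # "a week"); extending a chunk across prepositional attachments
        # ("of deaths") would use the dependency arcs in the same way.
        for chunk in list(doc.noun_chunks):
            if len(chunk) > 1:
                retokenizer.merge(chunk)
    print([t.text for t in doc])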

[00228] The NLP pipeline 165 implements relation extraction, detecting and classifying the semantic relationships among a set of entities. The NLP pipeline 165 extracts relationships among different entities to build an understanding of the target field and to support ontology construction. The NLP pipeline 165 relation extraction represents a valid relation by the name of a specific relation, associated with a tuple of entities.

[00229] Figure 25 is a view of an example of a semantic relation. In this example semantic relation, ‘Drought Frequency’ is the name of the relation, and (‘decrease’, ’33.33%’, ‘RCP4.5’) are the associated entities.

[00230] The NLP pipeline 165 adopts two different approaches, deep learning classification and pattern matching, for relation extraction, depending on the amount of labelled data for the risk factor. Given a climate risk factor, if the labelled dataset is relatively balanced and has a size of more than 2000 examples, the NLP pipeline 165 will prioritize classification. The NLP pipeline 165 will adopt pattern matching for relation extraction for most of the risk factors.

[00231] The NLP pipeline 165 can adopt the deep learning classification approach for relation extraction, which takes each sentence as a single input, inserting two unused tokens as anchors at the start and end positions of each entity of interest. The NLP pipeline 165 inputs the token vectors of the modified sentence to a model for classifying the relationship among the "anchored" entities. An illustrative example can be based on the architecture of the bioBERT model, a transformer-based neural network used in the biomedical field. The NLP pipeline 165 deep learning classification approach achieves fair accuracy. The NLP pipeline 165 example can use the deep learning classification approach for the risk factor of sea level rise to generate a sample ontological structure. Other risk factors and different learning approaches can also be used.

[00232] Figure 26 is a view of an example of a modified sentence including multiple anchors. This figure is an illustrative example of the deep learning classification approach for relation extraction.

[00233] The NLP pipeline 165 can adopt a pattern matching approach for relation extraction, which is based on similar relationships being mapped to similar types of text representations or grammatical patterns. The NLP pipeline 165 can use the high accuracy information obtained from POS tags, dependency relations, etc. to build a list of rules for any given general relationship. The NLP pipeline 165 can find patterns and mappings to these general relationships and use these relations to further link the relevant entities with the keywords in the sentences. The NLP pipeline 165 can select keywords for a risk factor ("MENTION" keywords) from the list generated by the NLP pipeline 165 keyword extraction process, or climate scientists can recommend the keywords.

[00234] Table 8 shows the general relationship between entities for the example sentence: "An increase of 33% and 66% in extreme drought frequency was projected under RCP 4.5 and RCP 8.5, reported by IPCC."

[00235] Table 9 shows examples of the rules for pattern matching.
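
A simplified sketch of the pattern-matching idea, mapping a sentence onto a (predicate, values, scenarios) tuple; the toy rules below stand in for the actual rule list of Table 9:

    # Sketch: rule-based relation extraction over the dependency parse.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    PREDICATES = {"increase", "decrease", "rise", "decline"}  # toy predicate lexicon

    def extract_relations(text):
        # Rule: a predicate token anchors the relation; numeric values come
        # from its dependency subtree; scenario mentions from the sentence.
        doc = nlp(text)
        relations = []
        for token in doc:
            if token.lemma_ in PREDICATES:
                values = [t.text for t in token.subtree if t.like_num or t.text.endswith("%")]
                scenarios = [t.text for t in doc if t.text.startswith("RCP")]
                if values:
                    relations.append((token.lemma_, values, scenarios))
        return relations

    print(extract_relations("An increase of 33% in extreme drought frequency was projected under RCP 4.5."))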

[00236] The server 100 uploads the extracted general relationships (Table 8) and all information associated with the included tokens, sentences and documents to the graph database stored in the non-transitory memory 110. The hierarchy of the data model includes four types of vertices: document, sentence, relation, and token. Each vertex type is associated with different kinds of properties, and some of the directed edges linking one vertex to another also have their own kind of property. The server 100 uses the topological structure of these vertices, edges and their associated properties as a basis to further extract higher-level relationships among entities. For example, Table 11 lists six essential patterns to form the query on the graph, so that the server 100 can extract relevant values and their associated entities, including spatial, temporal, scenario and predicate dimensions, with high precision.
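
The four-level data model can be sketched as a property graph; networkx stands in here for the actual graph database, and the property names are assumptions:

    # Sketch: document -> sentence -> relation -> token hierarchy as a
    # property graph with vertex- and edge-level properties.
    import networkx as nx

    graph = nx.MultiDiGraph()
    graph.add_node("doc:1", vtype="document", source="journal article")  # assumed properties
    graph.add_node("sent:1", vtype="sentence", text="An increase of 33% ... under RCP 4.5.")
    graph.add_node("rel:1", vtype="relation", name="Drought Frequency")
    graph.add_node("tok:1", vtype="token", text="33%", pos="NUM", entity="PERCENT")

    graph.add_edge("doc:1", "sent:1", etype="contains")
    graph.add_edge("sent:1", "rel:1", etype="expresses")
    graph.add_edge("rel:1", "tok:1", etype="has_value", role="magnitude")  # edge-level property

    # Queries traverse this topology, e.g. all relation vertices under a document:
    relations = [n for n in nx.descendants(graph, "doc:1") if graph.nodes[n]["vtype"] == "relation"]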

[00237] Figure 27 is a view of an example of a knowledge graph or hierarchical data model that can be stored in non-transitory memory 110, and queried or updated by server 100 for data exchange.

[00238] Table 10 shows the associated properties for each vertex type and edge type.

[00239] Table 11 shows essential patterns for extracting value on the knowledge graph.

[00240] The NLP pipeline 165 can build a set of linking rules for entities to automatically and correctly assign a unique ID to each entity to avoid confusion from synonyms, where an entity has different textual representations (e.g. a bike and a bicycle), and polysemies, where identical textual representations represent different entities (e.g. 'I hurt my arms' vs. 'Country A sold arms to Country B'). The NLP pipeline 165 can also adopt a classification model to predict whether two given entities are identical or not.
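
A minimal sketch of rule-based entity linking; the alias table and ID scheme are invented for illustration:

    # Sketch: assign one canonical ID per entity so synonyms collapse together
    # and identical surface forms with different meanings stay apart.
    ALIASES = {"bike": "ent:bicycle", "bicycle": "ent:bicycle"}  # assumed alias table

    def link_entity(surface, entity_type):
        key = surface.strip().lower()
        if key in ALIASES:
            return ALIASES[key]  # synonyms resolve to the same ID
        # Polysemy: identical surface forms with different entity types get
        # different IDs (e.g. 'arms' as a body part vs. 'arms' as weapons).
        return "ent:{}:{}".format(entity_type.lower(), key)

    assert link_entity("bike", "PRODUCT") == link_entity("Bicycle", "PRODUCT")
    assert link_entity("arms", "BODY_PART") != link_entity("arms", "WEAPON")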

[00241] Embodiments described herein combine NLP with community detection and network analysis to use keyword-generated links to obtain a high-level view of the connectedness among risk factors. The server 100 enables generation of a knowledge graph of the interrelatedness of various risk factors from a massive amount of unstructured data in articles. Embodiments described herein combine NLP, expert domain knowledge, and network analysis to obtain high-level insights from large amounts of detailed and unstructured data.

[00242] Embodiments described herein combine NLP with community detection and network analysis, to generate a model of connectedness among risk factors. The server 100 uses the knowledge graph of the interrelatedness of various risk factors for scenario generation platform 1008 and scenarios sets.

[00243] The scenario generation platform 1008 can generate output results using scenario engine 180 of server 100 and scenario sets populated with data from the knowledge graph of risk factors.

[00244] Referring to Figure 7C, there is shown the back-testing and validation system 1006. The back-testing and validation system 1006 computes testing output data using data from the scenario generation platform 1008 and the scenario sets. The back-testing and validation system 1006 exchanges data with the data warehouse 1002 and scenario generation platform 1008 to update the interface 140 at client device 130. The back-testing and validation system 1006 validates and tests the output data from the scenario generation platform 1008.

[00245] Figure 7D shows the scenario generation platform 1008 that can be used for scenario engine 180 of server 100, for example. The scenario generation platform 1008 computes scenario sets and risk factor data. The scenario generation platform 1008 exchanges data with back-testing and validation system 1006 and machine learning pipeline orchestration 1004 to update the interface 140 at client device 130. The scenario generation platform 1008 computes distribution data 1020 for scenario generation. The distribution data 1020 can be from expert pipeline 170.

[00246] The hardware components 1000 of Figure 7 exchange data using an API and populate the interface 140 to update its visual elements.

[00247] Figure 8 shows illustrative example data components 1100 that can be populated using hardware components 1000. Different data sources 1100 can be ingested using an API Data Exchange 1104 and transformed by the hardware components 1000 of Figure 7 to generate or update knowledge graphs stored in memory 110, for example. The API Data Exchange 1104 can provide data to update a front-end interface 1106 with different visual elements that change based on data and commands. In some embodiments, the front-end interface 1106 corresponds to interface 140 of user device 130 of Figure 1.

[00248] Figure 9 shows an example front-end interface 1106 that corresponds to interface 140 of user device 130 of Figure 1. As noted, example risk factors can relate to pandemic risk factors. The visual elements can correspond to data for pandemic risk factors.

[00249] The server 100 of Figure 1 can store computer risk models for pandemic models, including epidemiological models, economics models, distance models, and so on. The server 100 can use the API gateway 230 to receive data from different data sources, including model data 190, risk data 220, vector parameters 200, and state parameters 210. The server 100 can use this input data to populate the computer risk models, nodes, and the scenario sets.

[00250] The hardware server 100 populates the causal graph of nodes with values (estimates) for the risk factors using computed distributions of values for pandemic risk factors. The server 100 can use structured expert judgement data and an expert pipeline 170 to collect data for computing distributions for pandemic risk factors. In some embodiments, the hardware server 100 populates the causal graph of nodes by computing the frequency distribution of possible values for the climate risk factor at different points in time using machine learning pipeline 160 and expert pipeline 170 to collect the possible values representing estimates of future uncertain values. In some examples, the risk factors also include climate risk factors, as they can impact a pandemic. The expert pipeline 170 processes structured expert judgement data to update the knowledge graph of risk factors and topics.

[00251] In some embodiments, the hardware server 100 populates the causal graph of nodes using extremes of the distributions and a weight of the distributions from data collected by the expert pipeline 170. In some embodiments, the hardware server 100 computes the forward-frequency distribution of possible values for the risk factor collected by structured expert pipeline 170 for the time horizon to extract upward and downward extreme values, and likelihoods of upward and downward movement, from the forward-frequency distribution. The hardware server 100 can filter outlier data using the structured expert pipeline 170 before computing the forward-frequency distribution for the pandemic risk factors. That way, the extreme values are more likely to be accurate representations and not outliers or noisy data.
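
A sketch of reducing a forward-frequency distribution to these per-node values; the quantile choice and outlier rule are assumptions:

    # Sketch: filter outliers, then reduce a forward-frequency distribution to
    # four per-node values: up/down extreme moves and their likelihoods.
    import numpy as np

    def node_parameters(samples, current_value, q=0.95, z=3.0):
        samples = np.asarray(samples, dtype=float)
        # Crude outlier filter before computing the distribution.
        samples = samples[np.abs(samples - samples.mean()) <= z * samples.std()]
        up_extreme = np.quantile(samples, q) - current_value
        down_extreme = current_value - np.quantile(samples, 1.0 - q)
        p_up = float((samples > current_value).mean())  # likelihood of upward movement
        return up_extreme, down_extreme, p_up, 1.0 - p_up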

[00252] In some embodiments, the hardware server 100 computes output using values for the risk factor stored in the knowledge graph, which may be updated with data collected by structured expert pipeline 170. For the time horizon, the hardware server 100 extracts upward and downward extreme values, and likelihoods of upward and downward movement, from the forward-frequency distribution, for example.

[00253] In some embodiments, the hardware server 100 continuously populates the knowledge graph or causal graph of nodes by re-computing the frequency distribution of possible values for the pandemic risk factors at different points in time by continuously collecting data using the machine learning pipeline 160, NLP pipeline 165, and the structured expert pipeline 170.

[00254] The server 100 can generate visual elements, decisions and policies for interface 140 relating to the pandemic risk factors based on computations by scenario engine 180 with input from different data sources. For example, model data 190 and risk data 220 can include community and distancing model data from camera feeds, video input and video files. For example, vector parameters 200 can include epidemiological vector parameters (transmission rate, duration care facility, duration illness, illness probabilities, measures, effects) and economic vector parameters (household expenditure, income consumption, unemployment benefit uptake, imported input restrictions, shutdown, timing, labour productivity, export demand, wage subsidy uptake, government programs). For example, state parameters 210 can include epidemiological state parameters (current state infected, hospital, death, recovered, geographic regions) and economic state parameters (time required to reopen, labour force, demographics, geographic controls). The server 100 uses the data to populate the computer risk models, nodes, and the scenario sets.

[00255] Figure 10 shows graphs 1302, 1304, 1306 of distributions for pandemic data collected by the structured expert pipeline 170 to derive values corresponding to perspectives on the path to a vaccine, the financial impacts of a pandemic on the global economy and the impact of climate change on a pandemic. The interface 140 can be used by respondents to provide input on views on how climate change will influence future global pandemics, the likely time to a vaccine being available in their respective regions, and how the current situation will impact the global economy over a future period of time. There is radical uncertainty regarding the time to a vaccine and its likely effect on key financial benchmarks, and the input can be used to derive estimates for risk factors. Financial benchmarks can indicate low trends.

[00256] The graphs 1302, 1304, 1306 show probability distributions from data captured using survey fields of interface 140. The server 100 can compute the frequency distribution of possible values for different pandemic risk factors captured by the interface 140 and structured expert pipeline 170. The values captured by the interface 140 can map to different points in time to associate a time dimension with the values of the nodes. The structured expert pipeline 170 collects pandemic data as estimates of future uncertainty for the pandemic risk factors. For example, the structured expert pipeline 170 can use interface 140 to poll for estimates of future uncertainty for individual risk factors.

[00257] Figure 11 shows an example table 1400 of scenario data that can correspond to nodes of the causal graph and corresponding values computed by server 100. The scenario data can also be generated using data from the knowledge graphs stored in memory 110, for example.

[00258] Figure 12 shows an example scenario tree 1500 of scenario sets corresponding to the table 1400 of Figure 11, as an illustrative example of visual elements that can be generated and displayed at interface 140. The nodes of the scenario tree 1500 can be linked to other nodes by encodings to define edges of the scenario tree 1500.

[00259] In this example, a graph 1302 shows expert data for time to a vaccine viewed globally. Another graph 1304 shows expert data for time to a vaccine viewed regionally and a further graph 1306 shows expert data for time to a vaccine viewed by sector. These are examples of expert data and visual elements for interface 140.

[00260] The expert data can be derived from the knowledge graph. The server 100 can compute expert data using the risk factors and concepts from the knowledge graph. As noted, the NLP pipeline can update the knowledge graph by processing articles.

[00261] The expert data can represent the collective wisdom of articles or people worldwide who are experts and otherwise involved in understanding and influencing the future values of these factors. The data can extract their views on the subject to obtain a distribution of possible outcomes that would span the range of possibilities. The data can map to a discrete set of possible scenarios for combinations of these factors, so that the precise nature of these distributions is less important than their ability to capture extremes. These distributions give estimates of the values needed to develop scenarios that combine all factors material to the issue at hand, namely the possible range of Upside and Downside movements in the factors and their likelihood of occurrence. Figure 13 shows an example graph of distribution values for different factors over a time horizon. Further details on scenario sets for an example climate change setting are provided in "Generating scenarios algorithmically", risk.net, June 2020, the entire contents of which is hereby incorporated by reference and attached hereto.

[00262] Figure 14 is a view of an example of the system with server 100 and hardware components configured for climate risk factors. In this example diagram, server 100 couples to different data sources (regulatory data, financial data, market data, research data, climate data, environmental data) via API gateway 230 to capture climate data for scenario generation engine 180. The API gateway 230 can store collected data at a data lake, for example. The server 100 continuously captures data to compute scenario data using scenario generation engine 180. The server 100 also has expert system and machine learning processes as described herein. The scenario generation engine 180 outputs scenario distribution data 1020 for updating and populating interface 140 with visual elements for the climate computations.

[00263] Figure 15 is a view of an example of the system with server 100 and hardware components configured for pandemic risk factors. In this example diagram, server 100 couples to different data sources (regulatory data, economic data, survey data, research data, health data, demographic data) via API gateway 230 to capture pandemic data for scenario generation engine 180. The API gateway 230 can store collected data at a data lake, for example. The server 100 continuously captures data to compute scenario data using scenario generation engine 180. The server 100 also has expert system and machine learning processes as described herein. The scenario generation engine 180 outputs scenario distribution data 1020 for updating and populating interface 140 with visual elements for the pandemic computations.

[00264] By combining the information in the forward distributions with scenario trees, the server 100 can compute both scenarios for the combinations of factors as well as estimates of the likelihood of these scenarios occurring. Four example values per factor - the up and down ranges as well as the likelihoods of the up and down movements - are all that is needed to complete the data required to evaluate the tree.
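
With those four values per factor, the scenario paths and their likelihoods can be enumerated; a sketch that assumes, for simplicity, independent up and down moves and hypothetical factor values:

    # Sketch: enumerate up/down scenario paths across factors and compute the
    # likelihood of each path (independence assumed for illustration).
    from itertools import product

    # (up move, down move, probability of up) per factor - hypothetical values.
    FACTORS = {"temperature": (1.5, 0.5, 0.6), "precipitation": (0.2, 0.3, 0.5)}

    scenarios = []
    for path in product(("up", "down"), repeat=len(FACTORS)):
        moves, likelihood = {}, 1.0
        for (name, (up, down, p_up)), direction in zip(FACTORS.items(), path):
            moves[name] = up if direction == "up" else -down
            likelihood *= p_up if direction == "up" else 1.0 - p_up
        scenarios.append((moves, likelihood))

    # The path likelihoods over the full tree sum to one.
    assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9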

[00265] The embodiments of the devices, systems and methods described herein may be implemented in a combination of both hardware and software. These embodiments may be implemented on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface.

[00266] Figure 16 is a view of an example server system model to extract, transfer and load data.

[00267] Figures 17 and 18 are views of example interfaces.

[00268] The server 100 can implement a computer process for generating integrated climate risk data by processing climate data using the taxonomy to map climate data to climate transition scenarios, climate regions, climate modulators, climate elements and climate risks. The server 100 can implement an automated rating mechanism for processing data from a variety of different sources using a consistent taxonomy. The integrated climate data can be used for scenario generation to measure the financial impact of climate change, uniformly, at some horizon, on different business units across geographies or climate regions. A business unit can operate in multiple regions and the data can be mapped to those regions.

[00269] The server 100 can implement a computer process for generating integrated climate risk data to generate a set of climate stress scenarios to measure or estimate impact on a business unit. The output data can be used to generate visualizations that aggregate gains and losses and form a distribution of the gains (Upside) and losses (Downside).

[00270] Embodiments described herein provide a computer process for generating integrated climate risk data rating metrics. As noted, an example metric can be referred to as CaR, the Climate-risk adjusted Return. CaR is computed by dividing the Upside by the Downside as a measure of the risk-adjusted upside. Embodiments described herein can be used to generate visualizations of bar graphs for the distribution of the gains (Upside) and losses (Downside). Figure 17 is a view of an example interface with visualizations of bar graphs for the distribution of the gains (Upside) and losses (Downside). The Upside can be given by an area covered by a first section of bars and the Downside can be given by an area covered by another section of bars.

[00271] A CaR of less than one implies a likely financial impact on profitability under these stresses. Figure 18 is a view of an example interface with visualizations of a gauge representing the spectrum of possible gains or losses due to climate stress. In this example, there is a section for material positive impact, a section for non-material impact, and a section for minor impact.
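
The metric reduces to a ratio of the two areas; a minimal sketch with illustrative numbers:

    # Sketch: Climate-risk adjusted Return (CaR) = Upside / Downside, where
    # Upside and Downside are the areas under the gain and loss sections.
    def climate_adjusted_return(gains, losses):
        return sum(gains) / sum(losses)

    car = climate_adjusted_return(gains=[4.0, 2.5, 1.0], losses=[3.0, 2.0, 1.5])
    if car < 1.0:
        print("CaR = {:.2f}: likely financial impact on profitability under stress".format(car))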

[00272] Program code is applied to input data to perform the functions described herein and to generate output information. The output information is applied to one or more output devices. In some embodiments, the communication interface may be a network communication interface. In embodiments in which elements may be combined, the communication interface may be a software communication interface, such as those for inter-process communication. In still other embodiments, there may be a combination of communication interfaces implemented as hardware, software, and combination thereof.

[00273] Throughout the description, numerous references will be made regarding servers, services, interfaces, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable tangible, non-transitory medium. For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions.

[00274] The following discussion provides many example embodiments. Although each embodiment represents a single combination of inventive elements, other examples may include all possible combinations of the disclosed elements. Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, other remaining combinations of A, B, C, or D, may also be used.

[00275] The term “connected” or "coupled to" may include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements).

[00276] The technical solution of embodiments may be in the form of a software product. The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a removable hard disk. The software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the methods provided by the embodiments.

[00277] The embodiments described herein are implemented by physical computer hardware, including computing devices, servers, receivers, transmitters, processors, memory, displays, and networks. The embodiments described herein provide useful physical machines and particularly configured computer hardware arrangements. The embodiments described herein are directed to electronic machines and methods implemented by electronic machines adapted for processing and transforming electromagnetic signals which represent various types of information. The embodiments described herein pervasively and integrally relate to machines, and their uses; and the embodiments described herein have no meaning or practical applicability outside their use with computer hardware, machines, and various hardware components. Substituting the physical hardware particularly configured to implement various acts for non-physical hardware, using mental steps for example, may substantially affect the way the embodiments work. Such computer hardware limitations are clearly essential elements of the embodiments described herein, and they cannot be omitted or substituted for mental means without having a material effect on the operation and structure of the embodiments described herein. The computer hardware is essential to implement the various embodiments described herein and is not merely used to perform steps expeditiously and in an efficient manner.

[00278] Embodiments relate to processes implemented by a computing device having at least one processor, a data storage device (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface. The computing device components may be connected in various ways including directly coupled, indirectly coupled via a network, and distributed over a wide geographic area and connected via a network (which may be referred to as "cloud computing").

[00279] An example computing device includes at least one processor, memory, at least one I/O interface, and at least one network interface. A processor may be, for example, any type of microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or any combination thereof. Memory may include a suitable combination of any type of computer memory that is located either internally or externally.

[00280] Each I/O interface enables the computing device to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output devices such as a display screen and a speaker. Each network interface enables the computing device to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switch telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.

[00281] Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope as defined by the appended claims.

[00282] Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

[00283] As can be understood, the examples described above and illustrated are intended to be exemplary only.