Title:
A DATA STRUCTURE AND ASSOCIATED ALGORITHMS FOR ASSESSING DISPERSION IN COMPLEX GEOMETRY
Document Type and Number:
WIPO Patent Application WO/2004/070532
Kind Code:
A2
Abstract:
A data structure with associated algorithms for representing and reconstructing contaminant transport in realistic complex geometries, e.g. in cities and around buildings, is described (160). The data structure has a pair of two-dimensional matrices compressed and extracted from three-dimensional matrices describing the transport and dispersion of contaminants in complex geometries using full-resolution, time-dependent computational fluid dynamics or complete and detailed experimental data (130). A first matrix (220, 320) has numerical values, in which continuous, monotone, gauge-invariant contours of the same value represent an edge of a family of contaminant clouds transported from one edge of the domain. A second matrix (250, 350) contains numerical values, in which continuous, monotone, gauge-invariant contours of the same value represent an opposite edge of a family of contaminant clouds transported from the other edge of the domain. The dual-matrix data structure may be used by overlapping the two matrices (440, 480) within the domain, and reading the areas between the two edge contours crossing at particular locations of interest.

Inventors:
BORIS, Jay P.
Application Number:
PCT/US2004/000215
Publication Date:
August 19, 2004
Filing Date:
January 29, 2004
Assignee:
US NAVY (US)
International Classes:
G06F1/00; G06F17/00; G06F19/00; G06G7/48; G06Q10/00; G06F; (IPC1-7): G06F/
Foreign References:
US20030030582A12003-02-13
Other References:
See references of EP 1588239A4
Attorney, Agent or Firm:
Karasek, John J. (Code 1008.2 4555 Overlook Avenue, S, Washington DC, US)
Claims:
What is claimed:
1. A data structure for representing chemical, biological, radiological (CBR) atmospheric contaminant transport comprising: a first two dimensional matrix of summarized flow data in which continuous monotone gauge invariant contours of the same value represent an edge of a family of contaminant clouds; and a second two dimensional matrix of summarized flow data representing an opposite edge of said family of contaminant clouds.
2. A method of using a data structure to predictively represent CBR atmospheric transport comprising: storing atmospheric data in a data structure consisting of a first two dimensional matrix of summarized flow data in which continuous monotone gauge invariant contours of the same value represent an edge of a family of contaminant clouds, and a second two dimensional matrix of summarized flow data representing an opposite edge of said family of contaminant clouds; overlapping the first two dimensional matrix and the second two dimensional matrix within a domain; reading areas between said edge and said opposite edge of at least one of the family of contaminant clouds to detect CBR transport.
3. The method of claim 2, wherein reading areas between said edge and said opposite edge of a contaminant cloud predicts a region of contamination in an upwind direction and a region of contamination in a downwind direction.
4. The method of claim 2, wherein reading areas between said edge and said opposite edge of a contaminant cloud predicts a site danger zone in an upwind direction.
5. A method for controlling access to data structures representing CBR atmospheric contaminant transport comprising: defining a domain, wind direction, and critical contaminant level; reading summarized flow data from a point of interest in a domain data; comparing summarized flow data from a point of interest with flow data at other points within the domain.
6. The method of claim 5, wherein said point of interest is a site node.
7. The method of claim 5, wherein said point of interest is a sensor node.
8. The method of claim 5, wherein said point of interest is a source node.
9. The method of claim 2, wherein the step of reading areas between said edge and said opposite edge of a contaminant cloud to detect the spread of CBR contamination further comprises: reading cells at a location X, Y to determine whether each cell location is upwind of a CBR-related point of interest Xs, Ys using the formula: if R(Xs, Ys) <= R(X, Y) and L(X, Y) <= L(Xs, Ys), then the location (cell) X, Y lies within the Danger Zone; otherwise the location (cell) X, Y lies outside the Danger Zone, where R(X, Y) and L(X, Y) are contour level values from a right edge matrix and a left edge matrix respectively.
10. The method of claim 2, wherein the step of reading areas between said edge and said opposite edge of a contaminant cloud to detect the spread of CBR contamination further comprises: reading cells at a location X, Y to determine whether each cell is downwind of a CBR-related point of interest Xs, Ys or leakage zone which may become contaminated by a CBR threat using the formula: if R(Xs, Ys) >= R(X, Y) and L(X, Y) >= L(Xs, Ys), then the location (cell) X, Y lies within the Leakage Zone; otherwise the location (cell) X, Y lies outside the Leakage Zone, where R(X, Y) and L(X, Y) are contour level values from a right edge matrix and a left edge matrix respectively.
11. A data structure for representing fluid transport comprising: a first two dimensional matrix of summarized flow data in which continuous monotone gauge invariant contours of the same value represent an edge of a family of contaminant clouds; and a second two dimensional matrix of summarized flow data representing an opposite edge of said family of contaminant clouds.
12. A method of using a data structure to predictively represent contaminant transport comprising: storing atmospheric data in a data structure consisting of a first two dimensional matrix of summarized flow data in which continuous monotone gauge invariant contours of the same value represent an edge of a family of contaminant clouds, and a second two dimensional matrix of summarized flow data representing an opposite edge of said family of contaminant clouds; overlapping the first two dimensional matrix and the second two dimensional matrix within a domain; reading areas between said edge and said opposite edge of at least one of the family of contaminant clouds to detect contaminant.
13. The method of claim 12, wherein reading areas between said edge and said opposite edge of a contaminant cloud predicts a region of contamination in an upwind direction and a region of contamination in a downwind direction.
14. The method of claim 12, wherein reading areas between said edge and said opposite edge of a contaminant cloud predicts a site danger zone in an upwind direction.
15. A method for controlling access to data structures representing contaminant transport comprising: defining a domain, wind direction, and critical contaminant level; reading summarized flow data from a point of interest in a domain data; comparing summarized flow data from a point of interest with flow data at other points within the domain.
16. The method of claim 15, wherein said point of interest is a site node.
17. The method of claim 15, wherein said point of interest is a sensor node.
18. The method of claim 15, wherein said point of interest is a source node.
19. The method of claim 12, wherein the step of reading areas between said edge and said opposite edge of a contaminant cloud to detect contaminant further comprises: reading cells at a location X, Y to determine whether each cell location is upwind of a point of interest Xs, Ys using the formula: if R(Xs, Ys) <= R(X, Y) and L(X, Y) <= L(Xs, Ys), then the location (cell) X, Y lies within the Danger Zone; otherwise the location (cell) X, Y lies outside the Danger Zone, where R(X, Y) and L(X, Y) are contour level values from a right edge matrix and a left edge matrix respectively.
20. The method of claim 12, wherein the step of reading areas between said edge and said opposite edge of a contaminant cloud to detect the spread of contaminant further comprises: reading cells at a location X, Y to determine whether each cell is downwind of a point of interest or leakage zone which may become contaminated, using the formula: if R(Xs, Ys) >= R(X, Y) and L(X, Y) >= L(Xs, Ys), then the location (cell) X, Y lies within the Leakage Zone; otherwise the location (cell) X, Y lies outside the Leakage Zone, where R(X, Y) and L(X, Y) are contour level values from a right edge matrix and a left edge matrix respectively.
Description:
A DATA STRUCTURE AND ASSOCIATED ALGORITHMS FOR ASSESSING DISPERSION IN COMPLEX GEOMETRY

Field of the Invention

The invention relates to computer predictions of atmospheric contaminant transport, with much faster than real-time access to results for emergency response, and enables accurate new capabilities to locate an unknown source from isolated sensor readings of the contaminant.

Description of the Related Prior Art

The effective defense of cities, large bases, and military forces against airborne chemical, biological, or radiological (CBR) incidents or attacks, and against environmental and obscurant threats, requires new (faster and more accurate) prediction technology to be successful. While all of these threats may differ in their physical and physiological properties, they all involve "contaminant transport" in the air.

Contaminants, or other fluids of interest, may also move through water, as for example in oil spills; or solids, as for example in underground oil reservoirs. For convenience this specification will refer to all of these applications as CBR threats, with the understanding that the disclosed data structure and methods of use also provide effective response information for other contaminant transport scenarios. Similarly, the dual-matrix data structure and methods of use may be used for analysis of other types of flows.

The existing plume prediction technology in use throughout the nation (and the world) is based on Gaussian similarity solutions and/or Lagrangian particle models. The Gaussian similarity solutions ("puffs" or "plumes") are extended Lagrangian approximations for a single particle or "puff." These Gaussian puff models only really apply for large regions and flat terrain where large-scale vortex shedding from buildings, cliffs, or mountains is absent. The Lagrangian particle models treat many more moving points, and thus take much longer to compute, but they do not allow each particle or "puff" to expand.

These current plume prediction methods are also not designed for terrorist situations where the input data about the contaminant source (or sources) is very scant and the crucial distances (e.g. a city block) are so small that problem set-up, analysis, and situation assessment must take place in seconds to be truly effective. Both greater speed and greater accuracy are required for effective assessment of CBR emergency response around buildings and in cities.

The CBR defense of a fixed site or region has a number of important features that make it different from the predictive simulation of a contaminant plume from a known set of initial conditions. The biggest difference is that very little may be known about the source, perhaps not even its location. Therefore any analysis methods for real-time, emergency response cannot require this information. It is crucial to be able to use anecdotal information, qualitative data, and any quantitative sensor data that may be available and then to instantly build a situation assessment suitable for immediate action based on this fragmented information.

A software emergency assessment tool should be effectively instantaneous and easy to use, to allow immediate assimilation of new data, instantaneous computation of exposed and soon-to-be exposed regions, and zero-delay evaluations of options for future actions. The software should also be capable of projecting optimal evacuation paths based on the current and evolving situation assessment. To meet these crucial requirements, a new tool is required that is both much faster than current "common use" models and accurate compared to three-dimensional, physics-based flow simulations for scenarios involving complex and urban landscapes. The data structure and algorithms disclosed here focus on situation assessment through sensor fusion of qualitative and incomplete data using a summary of accurate flow details, rather than integrating a computer simulation.

Previously existing common-use hazard prediction and consequence assessment systems have at their heart a plume simulation model based on a Gaussian/Lagrangian puff model. These Gaussian/Lagrangian puff/plume models generally feature variable meteorology by allowing the user to interface the attack scenario with a meteorological forecast for approximate long-range and long-time predictions. The Gaussian plume method, while relatively fast, tends to be inaccurate, especially for urban areas. The setup for all these systems tends to be complicated, and require a priori knowledge of the source characteristics.

Some examples of common-use hazard prediction and assessment systems are as follows: ALOHA (Area Locations of Hazardous Atmospheres) is a Gaussian plume model used for evaluating releases of hazardous chemical vapors. ALOHA allows the user to estimate the downwind dispersion of a chemical cloud based on the toxicological/physical characteristics of the released chemical, atmospheric conditions, and specific circumstances of the release. Graphical outputs include a "cloud footprint" that can be plotted on maps.

MIDAS-AT™ (Meteorological Information and Dispersion Assessment System - Anti-Terrorism) is software that models dispersion using a Gaussian puff model of releases of industrial chemicals, chemical and biological agents, and radiological isotopes caused by accidents or intentional acts. MIDAS-AT is designed for use during emergencies and for planning emergency response drills.

VLSTRACK (Vapor, Liquid, and Solid Tracking) provides approximate downwind hazard predictions for a wide range of chemical and biological agents and munitions of military interest using a Gaussian puff model.

CATS (Consequences Assessment Tool Set) is a consequence management package that integrates hazard prediction, consequence assessment, emergency management tools and is built around the SCIPUFF Gaussian puff model. SCIPUFF is also embedded in the Hazard Prediction and Assessment Capability (HPAC) system principally designed for military use. HPAC estimates the effects of hazardous material releases into the atmosphere. The HPAC system also predicts approximate downwind hazard areas resulting from a nuclear weapon strike or reactor accident.

The NARAC (National Atmospheric Release Advisory Center) emergency response central modeling system consists of a coupled suite of meteorological and dispersion models, including Gaussian, Lagrangian and computational fluid dynamics (CFD) models. Users must initiate a problem through a phone call to their operations staff or interactively via computer. NARAC will then execute a combination of Gaussian puff/plume and approximate Lagrangian particle 3-D models to generate the requested products that depict the size and location of the plume, affected population, health risks, and proposed emergency responses. NARAC has recently announced a multi-year research program aimed at adding an urban modeling and sensor fusion capability.

All these common-use technologies, however, do not treat the important effects of buildings in any rigorous way, and they all take minutes at least to set up and run, once information about the source has been determined. Even more simplified systems such as PEACS (Palmtop Emergency Action for Chemicals) were developed to provide the necessary emergency response information to make quick and informed decisions to protect response personnel and the public. PEACS can return results within seconds and requires less detailed knowledge of the source, but the resulting fixed-shape plume does not take into account any effect of complex terrain or buildings. Furthermore, none of these systems allow a user to backtrack the very limited input data likely to be available in a terrorist scenario to an undisclosed source location.

FAST3D-CT (FAST3D Contaminant Transport) is a time-accurate, high-resolution, complex-geometry CFD model. The fluid dynamics is performed with a fourth-order accurate implementation of a low-dissipation algorithm that sheds vortices from obstacles as small as one cell in size. The region of interest or domain is divided into finite parcels called computational cells. These units limit the spatial accuracy of a particular model. Particular care has been paid to the turbulence treatments, since the turbulence in the urban canyons lofts ground-level contaminant up to where the faster horizontal airflow can transport it downwind. FAST3D-CT also has a number of physical processes specific to contaminant transport in urban areas, such as solar heating and buoyancy, solar chemical degradation, evaporation of airborne droplets, re-lofting of particles, and ground evaporation of liquids.

State-of-the-art, engineering-quality 3D predictions, such as FAST3D-CT or the best 3D Computational Fluid Dynamics (CFD) models in the Department of Energy and Department of Defense laboratories, provide more accurate results that one might be more inclined to believe, but can take hours or days to set up, run, and analyze.

However, waiting even one or two minutes for each approximate scenario computation, as with the best puff and plume models in the current common-use hazard prediction systems, can be far too long for timely situation assessment. On the other hand, overly simplified results can result in qualitatively wrong assessments. The answer to this dilemma is to do the best computations possible from state-of-the-art 3D simulations well ahead of time and capture their salient results in a data structure that can be recalled, manipulated, and displayed instantly for the current prevailing atmospheric conditions.

Summary of the Invention

A data structure with associated algorithms for representing and reconstructing contaminant transport in realistic complex geometries, e.g. cities and buildings, is described. The data structure has a pair of two-dimensional matrices compressed and extracted from three-dimensional data sets describing the transport and dispersion of contaminants in complex geometries using full-resolution, time-dependent computational fluid dynamics or complete and detailed experimental data. A first matrix has numerical values, in which continuous, monotone, gauge-invariant contours of the same value represent the edge of a family of contaminant clouds transported from one edge of the domain. A second matrix contains numerical values, in which continuous, monotone, gauge-invariant contours of the same value represent the opposite edge of a family of contaminant clouds transported from the other edge of the domain. This dual-matrix data structure is used by overlapping the two matrices within the domain, and reading the areas between the two edge contours at particular locations of interest.

The invention provides a system of storing and manipulating data within the two matrices to give much faster than real time data for first responders to CBR attacks.

The invention provides a method of manipulating data using the two matrices to enable quick response to imminent threat of CBR attacks.

Brief Description of the Drawings

Figure 1. Overview of how to develop and use the data structure.

Figure 2. A depiction of the data structure (grey scale substituted for full colour representation) showing the dual-matrix and the continuous nature of the data stored within the structure.

Figure 3. (a) A left edge matrix (half of the data structure). (b) A right edge matrix (other half of the data structure). This figure uses contour lines (as on a topographic map) rather than continuous shading to show the values in the two matrix components of the data structure. The building/tree map, not actually part of the data structure, shows the relationship of the dual-matrix values and contours to the detailed geometry on which the dual matrix is based.

Figure 4. An overlay of the two edge matrices for the complex geometry shown, showing how the dual-matrix is used.

Figure 5. Overlay of the two edge matrices for a flat earth (no buildings, trees, etc.) for the same wind conditions as used in Figure 4.

Figure 6 (a). Plume envelope and footprint of a contaminant source and upwind danger zone of a site of interest for the full geometry of buildings, trees, and terrain.

Figure 6 (b). Same as Figure 6 (a) using the flat earth dual-matrix of Figure 5. Note the simple symmetry of the plume, footprint, and danger zone about the wind direction and the qualitative differences of the results when the full geometry is considered.

Figure 7. Plots of the Figure of Merit using the data structure versus time for six different sets of full 3D simulations.

Figure 8. Comparison of the accuracy and costs of Experiment, Computational Fluid Dynamics, Dual-Matrix Reconstruction, Puff/Plume Models, and Simple Phenomenologies.

Figure 9. Comparison of the amount of data processed, time to run the models, and time required to learn to use the technologies for CFD, Dual-Matrix Reconstruction, and Gaussian puff models.

Figure 10. A simple block diagram of the routine controlling access to and use of the data structure and associated algorithms.

Figure 11. A logical flow diagram of the procedure that processes sites of interest using the data structure.

Figure 12. A logic flow diagram of the procedure that processes contaminant observations using the data structure.

Figure 13. A logic flow diagram of the procedure that processes contaminant sources using the data structure.

Detailed Description of the Invention

Figure 1 is an overview of the development process 160 and the method of use 150 of the dual-matrix data structure. The data structure comprises a pair of two-dimensional matrices compressed and extracted from three-dimensional matrices describing the transport and dispersion of contaminants in complex geometries using full-resolution, time-dependent computational fluid dynamics or complete and detailed experimental data. One embodiment of the dual-matrix data structure is a Nomograph. Algorithms using the data structure are both faster and more accurate than prior methods of forecasting transport and dispersion in complex geometries.

First, the region of interest is defined and an extensive database 130 of CFD results is assembled. A number of different wind directions around the compass must be simulated and results for each are tabulated in the three dimensional database 130. While the figures and description refer to CFD results generated on a supercomputer, if sufficiently complete and accurate experimental results were available they could also be used to develop the new data structure.

The CFD result database for the region of interest is next compressed, 140.

During compression 140, by a factor of about 10,000, extraneous and redundant information from the three-dimensional, time-dependent CFD results is stripped. During this database compression 140, detailed records of the contaminant flow paths and turbulent dispersion for the urban area in question (region of interest) are reduced to the continuously nested and monotonic set of left and right cloud edge contours shown in Figures 2 through 5. The continuously nested set of left and right cloud edge contour levels, for all wind directions, comprises the dual-matrix data structure described herein. In addition to the contour levels being continuous, the transition from one contour to the next is continuous. By summarizing the complete and resolved flow data, the dual-matrix data structure and associated algorithms allow users to view possible source areas based on backtracking contaminant observations to determine the unknown source location; view recommended evacuation routes; and view upwind danger zones, instantly.
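The monotonicity requirement imposed during compression can be illustrated with a short sketch. This is not the actual compression code of the disclosure; the `enforce_monotone` helper and the assumption that edge values should be non-decreasing along each cross-wind row are illustrative only:

```python
def enforce_monotone(edge_matrix):
    """Force each row of an edge matrix to be non-decreasing, so that
    contours of equal value never cross. This is a conservative
    approximation: it can only widen, never narrow, the region between
    the reconstructed cloud edges."""
    result = []
    for row in edge_matrix:
        running_max = float("-inf")
        new_row = []
        for value in row:
            running_max = max(running_max, value)
            new_row.append(running_max)
        result.append(new_row)
    return result
```

A non-monotone raw row such as `[1, 3, 2, 5]` would be replaced by `[1, 3, 3, 5]`, reflecting the safety-margin behavior described above.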

This data structure, together with the set of algorithms to access and process the data structure, is assembled in the dual-matrix library 100. The dual-matrix library 100 includes a pair of two-dimensional matrices holding continuous contour values indicating the location of both edges of the dispersed cloud of contaminant within a region of interest. Users access the contour flow map dual-matrix library 100, with its embedded emergency-response information, through a graphical user interface (GUI) 110, but other applications can call the control procedure 120 directly for autonomous operation. The control procedure 120 is described more fully below with respect to Figure 8. The control procedure 120 is a procedure that interprets scenarios defined in their entirety by two sets of numbers chosen by the GUI or by the user application program. In a preferred embodiment, the control procedure 120 is a software routine. One set of numbers, the environmental state vector, gives all the necessary information about the wind, all the settings for the various displays required, and specifies the instantaneous time after release of the scenario. The second set of numbers, the node state vector, lists the location and status of all the nodes, such as what kind of node it is, whether it is turned on, and the like. The term node is used here to refer to a particular computational cell (location) where additional information is available or desired. A node is processed by using the information stored in the data structure, combined with the algorithms to use the data structure, to give the user the requested information about the node. These two vectors, or arrays, allow the control procedure to oversee preparing the requested displays using the composite flow map data structure and algorithms in the library 100.
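The two state vectors described above can be sketched as simple record types with a dispatching control procedure. The field names, node kinds, and dispatch table below are illustrative assumptions, not the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentalState:
    """Environmental state vector: wind information, display settings,
    and the instantaneous time after release of the scenario."""
    wind_direction_deg: float
    time_after_release_s: float
    display_options: dict = field(default_factory=dict)

@dataclass
class Node:
    """One entry of the node state vector: a computational cell where
    additional information is available or desired."""
    x: int
    y: int
    kind: str          # 'site', 'sensor', or 'source'
    active: bool = True

def control_procedure(env, nodes):
    """Route each active node to the processing step matching its kind,
    mirroring the process-sites / process-sensors / process-sources
    algorithms 180, 170, and 190."""
    dispatch = {"site": "process_sites",
                "sensor": "process_sensors",
                "source": "process_sources"}
    return [dispatch[n.kind] for n in nodes if n.active]
```

An inactive node (turned off) is simply skipped, as suggested by the status fields of the node state vector.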

In particular, the control procedure 120 will, depending on the environmental state vector, invoke an algorithm to process the complete set of sensor data 170, the information that contaminant is or is not observed at a particular location within the region of interest. Sensor data may be anything from an anecdotal observation or news report, to readings from specialized sensors showing quantitative contaminant level. The control procedure 120 also can invoke an algorithm to process the site nodes 180, based on data in the environmental state vector. A site is a location of critical interest and may be permanent or transitory. For example a site may be a particular building or outdoor location, a parade route, or location of a sporting event. The algorithm to process sites 180 displays data regarding the danger zone for a particular site or location within the region of interest. The process sites algorithm 180 can also display the contaminated region resulting from a leak within a particular building. The danger zone is the region upwind from which a contaminant could reach the site in question. Finally, the control procedure 120, depending on the environmental state vector, can invoke an algorithm to process sources 190, specific locations where contaminant has been introduced into the region of interest. Together, use of the library 100, control procedure 120 and user interface 110 comprise an application of the dual-matrix for emergency response 150.

Figure 2 is a graphical grey scale depiction of the dual-matrix data structure showing the two component matrices of the dual-matrix and the continuous nature of the data stored within this structure for one particular region of interest, or domain 200, and wind direction indicated by the arrow 270. The various gray shadings at each location within the domain 200 correspond to different values within each edge matrix. While the domain 200 is shown as a square area, it need not be square or even quadrilateral. Any contiguous shape, including irregular shapes, can be chosen for the region of interest. The data structure comprises two matrices of numbers, shown graphically as gray shadings, representing left edge values in the domain 200 and right edge values in the same domain 200. The continuous variation of the shading represents the continuity of the numbers stored in the matrices. It should be noted that the domain 200 must be the same shape and location for both the left edge values and right edge values.

For ease of interpretation, the left edge matrix of values in Figure 2 is also illustrated with contours 240, at evenly spaced intervals throughout the domain 200. The contours begin at the edge of the domain 200 evenly spaced both physically and numerically, but the shape and physical distance between the contours vary in response to the complex geometry of the domain 200, leaving only the contour values evenly spaced numerically. Similarly, the right edge matrix of values is marked with contours 230 throughout the domain. These contours correspond to the left and the right edges of a family of contaminant clouds. The contours, as well as the buildings and trees, are shown only for ease of interpreting the figures; they are not an explicit part of the composite flow map data structure. The processing algorithms use only the continuous array of numbers in the left and right edge matrices, allowing the processing algorithms to find a contaminant cloud edge at any point within the domain.
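Because the processing algorithms use the continuous array of numbers rather than the drawn contours, an edge value can be read at any fractional location. A minimal sketch of such a read, using bilinear interpolation; the `edge_value` helper and the [row][column] indexing are assumptions for illustration, not part of the disclosure:

```python
def edge_value(matrix, x, y):
    """Read a continuous edge-contour value at fractional coordinates
    (x, y) by bilinear interpolation of the four surrounding cells.
    The matrix is indexed as matrix[row][column], i.e. [y][x]."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(matrix[0]) - 1)
    y1 = min(y0 + 1, len(matrix) - 1)
    fx, fy = x - x0, y - y0
    top = matrix[y0][x0] * (1 - fx) + matrix[y0][x1] * fx
    bottom = matrix[y1][x0] * (1 - fx) + matrix[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Interpolating between stored cell values is what lets a cloud edge be located at any point within the domain, not just on the evenly spaced contours shown in the figures.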

The lowest left edge matrix value is located in the upper right near 210 of the domain 200, while the highest left edge matrix value is located in the lower left region near 220 of the domain 200. The lowest right edge matrix value is located in the right region 260 of the domain 200, while the highest right edge matrix value is located in the upper left region 250 of the domain 200. Together, the curves and variations of the contours 230 and 240 show the effect of the wind and underlying terrain and buildings on the flow of contaminant through the region 200.

As can be seen from the contours 230 and 240 along with the shading, within each matrix the contours never cross and are defined by monotone values. By enforcing these requirements during the data compression stage, the resulting dual-matrix data structure is significantly smaller than it would otherwise be. In addition, the monotonicity requirement is an approximation of the original data that allows for a greater safety margin between the contaminant cloud edges modeled using the dual-matrix data structure and the actual cloud edges from full-resolution CFD results.

Figure 3a is a black and white enlargement of the depiction of the left edge matrix, or half of the data structure in Figure 2, showing only contours for ease of reading. The contour levels are equally spaced in value; their changing direction and separation indicates graphically how the new data structure captures the underlying urban geometry.

A family of contaminant clouds, all sharing the same left edge, is defined by each contour.

The overall domain 200 of the data structure is shown, including the location of selected buildings 310 and trees 320. The domain in Figure 3a corresponds entirely to domain 200 in Figure 2. The trees 320 and buildings 310 are only shown for reference, to indicate the complexity of the region of interest through which contaminant transport can be accurately simulated. The left edge contour lines 330 are also shown, along with areas where the flow concentrates 340 and other areas where the flow expands 350.

Figure 3b is a black and white enlargement of the depiction of the corresponding right edge matrix, the second half of the data structure in Figure 2, showing only the contours for ease of reading. Another family of contaminant clouds is defined by each of these right edge contours. The domain 200 of the data structure is shown, including the location of buildings 310 and trees 320. This corresponds to the domain of Figure 2. The right edge contour lines 330 are also shown, along with areas where the flow concentrates 340 and areas where the flow expands 350.

Figure 4 is an overlay of the left and right edge matrices shown in Figures 3a and 3b, for the complex geometry of the domain 200. The wind is shown by a downward arrow. Overlaying the left and right edge matrices, here displayed as contours, illustrates geometrically how the upwind and downwind sectors, originating at the intersection of one right edge contour and one left edge contour, can be defined for any particular node location in the domain of the data structure. While the contours shown here only intersect at a few points within the domain, the continuous nature of the values within the dual-matrix data structure ensures that a pair of contours can be constructed at any point within the domain 200. The domain 200 must be identical for both the right and the left edge matrices in order to overlay them.

A site 410 is depicted as a square icon representing a building, facility or other location of special interest needing to be analyzed or protected. By selecting the left edge contour 420 and right edge contour 430 passing through the identified site 410, here highlighted by thick dashed lines in the upper half of Figure 4, a danger zone 440 upwind of the site of interest is defined. Since the expansion of a contaminant cloud is contained within the cloud edge lines, any contaminant outside of this danger zone 440 cannot reach the site 410 in the current wind condition. While the site 410 is shown in one particular place, users of the data structure may place any number of sites at any location or locations within the region of interest 200.

The location of a source of contamination 450 is marked with a star shaped icon.

The source of contamination 450 may be an accident, sprayer, broken container of hazardous chemicals or the like that needs to be tracked and analyzed in order to coordinate emergency response. By selecting the left edge contour 460 and the right edge contour 470, here highlighted with a thick dashed line, passing through the identified source location 450, the source processing algorithm identifies the eventual contamination footprint 480. A contamination footprint is the possible extent of contamination from a source given enough time to disperse downwind throughout the region of interest. The downwind contamination footprint 480 of the source 450 is defined by the intersection of the right 470 and left 460 edge contours passing through the source location 450. The actual contaminant plume envelope starts at the source 450, in the center of this footprint 480, and expands away from the source 450 and towards the left 460 and right 470 edges with time. The plume envelope display is not shown in Figure 4 or Figure 5 below but is illustrated in Figure 6.

Figure 5 is an overlay of the left and right edge matrices of the composite data structure when the complex geometry shown here, and in previous figures, is in fact neglected in assembling the data in the dual-matrices. Such contours represent an accurate solution to the overall contaminant transport problem in large, flat regions of interest, such as a desert or empty plain. Figure 5 represents an approximation, using a dual-matrix, to the solutions obtained by using Gaussian puff/plume methods to compute transport and dispersion. Overlaying the left and right edge matrices, here displayed as contours, illustrates geometrically how the upwind and downwind sectors, originating at the intersection of one right edge contour and one left edge contour, can be defined for a node, or specific location of interest, in the domain 200, which must be identical for both the right and the left edge matrices.

A square icon represents a site 510, such as a building, facility or other location of interest for protection or analysis. By selecting the left edge contour 520 and right edge contour 530 passing through the identified site 510, here highlighted by thick dashed lines in the upper half of the figure, a danger zone 540 upwind of the site of interest is defined. The danger zone 540 is the upwind area between the left edge contour 520 and the right edge contour 530 that intersect at the site of interest 510. Since the cloud-edge lines bound the contaminant cloud motion, any contaminant outside of this danger zone 540 is predicted to be unable to reach the site 510 in the current wind conditions. However, in the complex geometry of the domain 200, a simulation that does not account for the domain geometry significantly under-predicts the danger zone, as comparing the danger zone 440 with the danger zone 540 shows.

The location of the source of contamination 550 is marked with a star shaped icon. By selecting the left edge contour 560 and the right edge contour 570, here highlighted with a thick dashed line, passing right through the identified source of contamination 550, the algorithm identifies the predicted contamination footprint 580.

The downwind contamination footprint 580 of the source 550 is defined by the intersection of the right 570 and left 560 edge contours passing through the source location 550.

Figure 6a shows the reconstructed plume envelope 650 at 10 minutes after contaminant release, the entire footprint 660 of a source node, shown as a star icon 600, and the upwind danger zone 640 of a site node, shown as a square icon 610. The assessment of the extent of these critical regions illustrates the method of use of the dual-matrix right edge and left edge matrices of Figure 4, capturing the contaminant flow using the full geometry of buildings, trees, and terrain. Figure 6b shows the reconstructed plume envelope 655 after 10 minutes and the entire footprint 665 for the same site 610 and source 600 in the same domain, but captured using flat-earth dual-matrices with no building or tree geometry, as illustrated in Figure 5. Note the simple symmetry of the plume, footprint, and danger zone about the wind direction in this latter case and the qualitative differences of the results when the full geometry is considered.

Sensors are indicated in these figures with a triangular icon. Sensors labeled 620 have not been triggered by contaminant at the time depicted in Figures 6a and 6b, as they lie outside of the instantaneous plume envelopes 650 and 655. Sensor 625 is located outside the footprint when no geometry is used but inside the footprint when full geometry is used. In other words, a simulation, such as a puff or plume model, that does not account for complex geometry gives a false negative result for the site 610, possibly resulting in high strategic losses. A measurable contaminant level in the plume envelope has triggered sensors 630 in Figure 6a, but sensors 635 at the same location in Figure 6b have not yet been contaminated. In this case also, an emergency response tool that does not account for complex geometry produces false negatives. Thus, with this new data structure, sensor responses clearly differ depending on the configuration and situation of buildings and trees.

The shaded regions 640 in Figure 6a and 645 in Figure 6b estimate the upwind danger zone for the site of interest 610 with and without capturing the effects of the geometry, respectively. The danger zone for a location, or site, is the set of all possible positions upwind where a source of contamination could reach the indicated site. This important information is an entirely new capability made possible by the dual-matrix and algorithms for its use. The plume envelopes 650 and 655 show the geographical region that the contaminant plume may reach during its continuing expansion at the indicated time after release. The contamination footprints 660 and 665 in Figures 6a and 6b respectively, represent the full extent of the growing contamination region after the plume envelope has spread to its maximum toxic extent according to dual-matrix simulations with and without underlying urban geometry.

Figure 7 plots the figure of merit for the dual-matrix versus time for six different sets of full 3D CFD simulations representing different contaminant release scenarios. A number of validation simulations comparing the dual-matrix reconstruction with detailed CFD simulations using FAST3D-CT, described above, were performed to test the accuracy of the footprint and plume data recall approach. The figure of merit is a quantitative comparison between the two sets of results. A perfect match would be indicated by a figure of merit of 1.00, or a 100% match. As can be seen from Figure 7, by about 10 minutes after release of a contaminant, the results from the dual-matrix match those obtained using full 3D CFD calculations to about 70-90% accuracy. At about 20 minutes after contaminant release, the dual-matrix results reach approximately 75-85% accuracy. With more time, the dual-matrix results converge to about 80% accuracy.
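The text does not spell out how the figure of merit is computed. One plausible reading, sketched below under that assumption, is an area-overlap (intersection over union) comparison between the dual-matrix contamination region and the reference CFD region, which is 1.0 for a perfect match. The function name and boolean-mask representation are illustrative, not the patent's definition.

```python
import numpy as np

def figure_of_merit(predicted, reference):
    """Intersection-over-union of two boolean contamination masks;
    1.0 is a perfect match. (Illustrative definition only; the patent
    does not spell out the exact metric behind Figure 7.)"""
    inter = np.logical_and(predicted, reference).sum()
    union = np.logical_or(predicted, reference).sum()
    return float(inter) / float(union) if union else 1.0

# Two overlapping 6x6 square footprints on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True
# intersection = 4x4 = 16 cells, union = 36 + 36 - 16 = 56 cells
print(round(figure_of_merit(a, b), 3))  # -> 0.286
```

Under this reading, the reported 70-90% accuracy corresponds to figures of merit of roughly 0.7 to 0.9.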

Figure 8 is a table comparing accuracy and other capabilities of experiments, full 3D Computational Fluid Dynamics (CFD), the dual-matrix data structure and associated algorithms, Gaussian or Lagrangian puff/plume models, and simple phenomenologies.

Phenomenologies are models representing dispersion in an approximate form based on an intuitive, qualitative understanding of the underlying physics. The generally more accurate approaches are listed in the table above the generally less accurate approaches. The different approaches are evaluated with respect to the quality of approximation of the physics models and input, fluid dynamics and turbulence, boundary conditions, and realistic geometry; the variability and uncertainty of the results; whether the approach allows sensor fusion and backtracking; and whether evacuation routes are provided. Sensor fusion is the process by which sensor data is combined with simulations based on a potential location of a contaminant source to increase the accuracy of the simulation results.

A rating of excellent indicates that the area of concern is quantitatively accurate.

A rating of very good means that the results can still be classified as quantitatively accurate, but there are some aspects of the model that could benefit from refinement. A rating of good indicates that more refinement is needed, although the results could still be classified as quantitatively correct. A rating of fair to good indicates that a good semi-quantitative understanding is available from the approach. A rating of fair indicates that good qualitative understanding may be obtained from the approach. A rating of poor indicates that acceptable qualitative agreement is the best that can be expected. The indication that the area is problematic means that the approach does not adequately address these areas.

As can be seen in the table, even baseline ground truth as measured by perfect experiments is relatively weak with respect to providing accurate instantaneous boundary conditions for the region of interest. It is essentially impossible to determine the inflow boundary conditions fully at any one time during an experiment or field trial. The fact that boundary conditions are relevant at all times during the experiment makes the problem even more intractable. Perfect experiments are even weaker in their variability and uncertainty estimates because it is virtually impossible to repeat a field experiment under identical environmental conditions and verify that the conditions were the same.

Any experimental studies of variability are confused by uncertainty, and vice versa. Even phenomenologies and puff/plume models provide some useful information on variability and uncertainty because they incorporate statistical considerations and analysis not available from experiments. The dual-matrix can be seen to be better across the board than existing puff/plume technology and simple phenomenologies, almost as good as CFD in most areas, and actually better in some.

Figure 9 is a table comparing the amount of data processed, the time required to run the models, and the approximate time required to learn how to use the respective technologies for CFD, the dual-matrix and associated algorithms, and Gaussian puff models. The dual-matrix is approximately 1000 times faster than even the typical implementation of the Gaussian puff models, because it does not actually calculate the contaminant transport at the time of use, but compactly stores highly accurate results for effectively immediate recall when needed. In addition, the dual-matrix and associated algorithms are approximately 100 times faster to learn to use than current operational models. This allows more emergency response personnel to be trained to use the system without taking excessive time from other duties. Finally, the data requirements for the dual-matrix are very small and approximately 10 times less than the requirements of current operational models, and could be conveniently added to personal digital assistants and the like, for ease of field use.

Figure 10 shows a logical flow diagram of the master control procedure 120 controlling access to the dual-matrices 100 and the specific processing algorithms, or procedures, 170, 180, 190 of Figure 1. A user, client program or Graphical User Interface connects to the display and analysis capabilities through this master control procedure 120. A number of different user programs are envisioned using this master control procedure.

A user, working through a Graphical User Interface (GUI) 110, or other user application program, initiates the processing algorithms by invoking or "entering" the master control procedure 120. The first step in one embodiment is to enter 1000 the master control procedure 120. A software embodiment of the method of using the dual-matrix is a "user" or "client." One embodiment of a client is CT-Analyst, available from the U.S. Naval Research Laboratory. Other embodiments of a client are described in U.S. Patent Application No. 10/612582, filed July 2, 2003.

The master control procedure begins processing with a valid environmental state vector and a node state vector constructed externally. These two state vectors define the current scenario, or system, and the displays that are to be produced and displayed for the user by the GUI or returned for further analysis to the client program 1080. Moveable nodes are located within the domain and defined in the node state vector. The master control procedure processes the moveable nodes contained in the node state vector one at a time, considering site, sensor, and source nodes. First, a check 1010 is performed to see if all the nodes have been processed.

If all the nodes have not been processed, the master control procedure then checks to see if any of the remaining nodes are site nodes 1020. If there are any site nodes remaining, the requested displays and analyses are considered, or processed, 170 one at a time. The processing of each site node 1030 is illustrated in detail in Figure 11. If there are any sensor nodes to be processed 1040, the requested displays and analyses are considered, or processed, 180 one at a time. The processing of each sensor node 1050 is illustrated in Figure 12. If there are any source nodes to be processed 1060, the requested displays and analyses are considered, or processed, 190 one at a time. The processing of each source node 1070 is illustrated in Figure 13. After all of the nodes have been processed by tasks initiated in the master control routine, the master control procedure returns 1080 the resulting display back to the GUI or other user application program.

Figure 11 shows a logical flow diagram of the algorithm that processes site nodes using the disclosed dual-matrices to construct the upwind danger zone and downwind leakage zone of the sites. When the master control procedure 1000 invokes, or "enters" 1100, the site processing procedure, it provides the dual-matrix corresponding to the current wind speed and direction and passes the current environmental and node state vectors defining the scenario to the site processing procedure. Because the left and right edge matrices are invariant under addition or multiplication, that is, they are gauge invariant, the matrices for different wind directions or other environmental conditions may be blended or "morphed" to describe a continuum of different conditions. Thus the dual-matrix for the current wind speed and direction, or even the current foliage coverage or atmospheric stability, may be a blend of two or more dual-matrices constructed from the original CFD or experimental data.
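The blending of dual-matrices can be sketched as a weighted combination of the stored edge matrices. A minimal Python sketch, assuming simple linear weighting (the patent only requires that the blend preserve the monotone, non-crossing contour property; the function name and toy arrays are illustrative):

```python
import numpy as np

def blend_dual_matrices(left_a, right_a, left_b, right_b, w):
    """Linearly blend two precomputed dual-matrices, e.g. for two wind
    directions bracketing the current one; w = 0 gives set A and
    w = 1 gives set B. Linear weighting is an assumption here, and any
    blend preserving monotone, non-crossing contours would serve."""
    left = (1.0 - w) * left_a + w * left_b
    right = (1.0 - w) * right_a + w * right_b
    return left, right

# Tiny 3x3 example: monotone left/right edge matrices for two winds
la = np.array([[0., 1., 2.]] * 3)
lb = np.array([[0., 2., 4.]] * 3)
ra, rb = 2.0 - la, 4.0 - lb
l_mid, r_mid = blend_dual_matrices(la, ra, lb, rb, 0.5)
print(l_mid[0].tolist())  # halfway between the inputs: [0.0, 1.5, 3.0]
```

Because monotone increasing values stay monotone under convex combination, the blended matrices still define valid, non-crossing edge contours.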

The site processing procedure 1030 first selects 1105 the next unprocessed site node from the list of nodes, or node vector. For example, the selecting step 1105 could select node 410 in Figure 4 or node 510 in Figure 5. The procedure next checks 1110 whether the danger zone display is to be constructed for site nodes, in other words, whether the danger zone is "turned on" in the environmental state vector. If it is, the danger zone computation is initialized 1115. The data items defining the danger zone are initialized for the selected site node by finding the left and right edge values at the location of the site node (e.g., node 410 or node 510). To be specific, the contour level values from the right edge matrix R(Xs, Ys) and left edge matrix L(Xs, Ys) at the site node location Xs, Ys are evaluated and stored because they are used repeatedly to compute whether each location, or computational cell, in the domain lies within the danger zone or not.

Each of the cells in the overall domain is "read" from the library one at a time 1120. For each cell, at location X, Y, the local R(X, Y) and L(X, Y) contour level values are read from the composite data structure and evaluated using Formula 1. Step 1125 uses Formula 1, below, to determine whether the cell X, Y lies in the danger zone.

Formula 1 (1125): If R(Xs, Ys) <= R(X, Y) and L(X, Y) <= L(Xs, Ys), then the location (cell) X, Y lies within the Danger Zone; otherwise the location (cell) X, Y lies outside the Danger Zone.

If the cell is within the danger zone 1130, the procedure sets the danger count array value to 1 at that location, or cell. In other words, step 1125 checks, at each location within the domain, whether the site's right edge value is no greater than the cell's right edge value and the cell's left edge value is no greater than the site's left edge value. One embodiment of Formula 1, in the computer programming language FORTRAN, is: inDangerZone = r(xs,ys) .le. r(x,y) .and. l(x,y) .le. l(xs,ys) where inDangerZone is a logical variable. Formula 1 is the upwind formula used to extract information from the dual-matrix. If all the cells in the data structure have not been read and checked, step 1135 goes back to step 1120 and repeats the process on the next cell in the domain of the data structure until the entire danger zone has been constructed.
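Formula 1 lends itself to a whole-domain scan. A minimal Python sketch of the danger-zone test, using hypothetical toy edge matrices with diagonal contours in place of real CFD-derived data:

```python
import numpy as np

def danger_zone(R, L, xs, ys):
    """Formula 1 applied to every cell at once: cell (X, Y) lies in
    the upwind danger zone of site (xs, ys) when
    R(xs,ys) <= R(X,Y) and L(X,Y) <= L(xs,ys)."""
    return (R[ys, xs] <= R) & (L <= L[ys, xs])

# Toy monotone edge matrices on a 5x5 domain with diagonal contours
Y, X = np.mgrid[0:5, 0:5].astype(float)
L = X + Y          # left-edge contour values
R = (4 - X) + Y    # right-edge contour values
zone = danger_zone(R, L, xs=2, ys=2)
print(int(zone.sum()))  # a 9-cell wedge with its apex at the site -> 9
```

With these toy matrices the resulting danger zone is the wedge between the two contours passing through the site, matching the geometric picture of danger zone 440 in Figure 4.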

Then the site processing procedure 1030 checks the environmental state vector in step 1140 to determine whether the leakage display is to be constructed for site nodes.

The leakage zone is the area around and downwind of a site that may become contaminated if a contaminant leak occurs within or immediately adjacent to the site. If it is, the leakage zone computation is initialized in step 1145. The data items defining the leakage zone are initialized for the selected site node by finding the left and right edge values at the location, or computational cell, of the site node, e.g., node 410. To be specific, the contour level values from the right edge matrix R(Xs, Ys) and left edge matrix L(Xs, Ys) at the site node location Xs, Ys are evaluated and stored as they are used repeatedly to compute whether each location (cell) in the domain lies within the leakage zone or not.

Each of the cells in the overall domain is again "read" from the library one at a time 1150. For each cell, at location X, Y, the R(X, Y) and L(X, Y) contour level values are read from the composite data structure and evaluated using Formula 2. Step 1155 uses Formula 2, below, to determine whether the cell X, Y lies in the leakage zone.

Formula 2 (1155): If R(Xs, Ys) >= R(X, Y) and L(X, Y) >= L(Xs, Ys), then the location (cell) X, Y lies within the Leakage Zone; otherwise the location (cell) X, Y lies outside the Leakage Zone.

If the cell is within the leakage zone, the procedure sets the leakage count array value to 1 at that location, or cell 1160. In other words, step 1155 checks, at each location within the domain, whether the site's right edge value is no less than the cell's right edge value and the cell's left edge value is no less than the site's left edge value. One embodiment of Formula 2, in the computer programming language FORTRAN, is: inLeakageZone = r(xs,ys) .ge. r(x,y) .and. l(x,y) .ge. l(xs,ys) where inLeakageZone is a logical variable. Formula 2 is the second of the two formulas, the downwind formula, used to extract information from the dual-matrix. If all the cells in the dual-matrix have not been scanned, step 1165 goes back to step 1150 and repeats the process on the next unprocessed cell. Once all the sites have been processed from the node list, the procedure returns to the master control procedure (1170 and Figure 10).
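The leakage-zone test of Formula 2 is the mirror image of the danger-zone scan. A short Python sketch on the same kind of toy edge matrices (illustrative data, not the patent's):

```python
import numpy as np

def leakage_zone(R, L, xs, ys):
    """Formula 2 at every cell: cell (X, Y) lies in the downwind
    leakage zone of site (xs, ys) when R(xs,ys) >= R(X,Y) and
    L(X,Y) >= L(xs,ys), the reverse of the danger-zone test."""
    return (R[ys, xs] >= R) & (L >= L[ys, xs])

# Same toy monotone edge matrices as in the danger-zone sketch
Y, X = np.mgrid[0:5, 0:5].astype(float)
L = X + Y
R = (4 - X) + Y
zone = leakage_zone(R, L, xs=2, ys=2)
print(int(zone.sum()))  # the opposite 9-cell wedge, downwind -> 9
```

Flipping both inequalities simply selects the wedge on the other side of the contour intersection, which is why the same pair of stored matrices serves both the upwind and downwind queries.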

Figure 12 shows a logical flow diagram of the algorithm that processes sensor nodes using the data structure to construct the upwind backtrack analysis and downwind consequence zone of the sensor observations. Sensors may be either "hot," when something dangerous is observed, or "cold," i.e. sensing only clear air. When the master control procedure 1000 invokes or "enters" the sensor processing procedure 1050 in step 1200, it provides the data structure corresponding to the current wind speed and direction and passes the current environmental and node state vectors defining the scenario into the overall procedure 1050.

The procedure 1050 first selects 1205 the next unprocessed sensor node from the list of nodes. For example, the selecting step 1205 could select one of nodes 620, 625, 630, and 635 in Figures 6a and 6b. The procedure next checks whether the backtrack analysis display is to be constructed 1210 for the set of sensor nodes. If the backtrack analysis is turned on in the environmental state vector, the backtrack analysis computation is initialized 1215 by calculating the wind direction and establishing a radius around each sensor location within which any detection of contaminant will trigger the simulated sensor. This initialization also sets the back count array to zero at every cell in the domain.

The data that define the backtrack zone are initialized for each sensor node by finding the left and right edge values at the location of the sensor node. To be specific, the contour level values from the right edge matrix R(Xsn, Ysn) and left edge matrix L(Xsn, Ysn) at each sensor node location Xsn, Ysn are evaluated and stored as they are used repeatedly to compute whether each location, or cell, in the domain lies within the backtrack zone. A cell is a possible source location if the cell lies within the upwind backtrack zone of two or more "hot" sensors but lies outside the backtrack zone of all "cold" sensors. The backtrack analysis is based on the full set of sensor nodes, or observations, acting in concert, rather than on each node separately but additively as in the contamination footprint, danger zone, leakage zone, and consequence zone displays described elsewhere.

Each of the cells in the overall domain is "read" from the library one at a time in step 1220. For each cell, at location X, Y, the local R(X, Y) and L(X, Y) contour level values are read from the composite data structure. Step 1225 uses Formula 1, below, to determine whether the cell X, Y lies in the backtrack zone.

Formula 1 (1225): If R(Xsn, Ysn) <= R(X, Y) and L(X, Y) <= L(Xsn, Ysn), then the location (cell) X, Y lies within the Backtrack Zone for sensor n; otherwise the cell X, Y lies outside the Backtrack Zone for sensor n.

If the cell is within the backtrack zone for a particular sensor, "sensor n," the procedure checks whether sensor n is "cold" in step 1230. If the cell is within the backtrack zone, or within the safety radius, of any "cold" sensor, that cell cannot be the location of a source, so the back count array value is set to -1 in step 1235. In other words, step 1225 checks, at each location within the domain, whether the sensor's right edge value is no greater than the cell's right edge value and the cell's left edge value is no greater than the sensor's left edge value. One embodiment of Formula 1, in the computer programming language FORTRAN, is: inBacktrackZone = r(xsn,ysn) .le. r(x,y) .and. l(x,y) .le. l(xsn,ysn) where inBacktrackZone is a logical variable. Formula 1 is used to extract information from the dual-matrix and is applied for all sensor nodes, for all cells in the entire data structure.

If the sensor is "hot" according to step 1230 and the cell at X, Y has not been previously excluded in step 1235, indicated by the back count array being -1, then step 1240 transfers the back count array cell value to step 1245, where it is incremented by one. Thus back count is an array of numbers counting how many different "hot" sensors indicate each location, or cell, might be the source of contamination, as long as there is no sensor showing, through a back count value of -1, that the given cell cannot, in fact, be the source location.
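The back count bookkeeping above, counting "hot" sensors while letting a "cold" sensor exclusion override, can be sketched as follows. This is a simplified Python stand-in using toy edge matrices; the safety-radius test around each sensor is omitted:

```python
import numpy as np

def backtrack(R, L, sensors):
    """Sensor-fusion backtrack sketch; `sensors` is a list of
    (xs, ys, hot) tuples. Cells upwind of any "cold" sensor are
    marked -1 (excluded); otherwise the count records how many "hot"
    sensors see the cell as a possible source. Names and layout are
    illustrative, not the patent's exact implementation."""
    back = np.zeros(R.shape, dtype=int)
    for xs, ys, hot in sensors:
        zone = (R[ys, xs] <= R) & (L <= L[ys, xs])  # Formula 1
        if hot:
            back[zone & (back >= 0)] += 1   # count, unless excluded
        else:
            back[zone] = -1                 # exclusion always wins
    return back

Y, X = np.mgrid[0:5, 0:5].astype(float)
L, R = X + Y, (4 - X) + Y
# Two "hot" sensors plus one "cold" sensor trimming their overlap
back = backtrack(R, L, [(2, 2, True), (3, 2, True), (1, 1, False)])
print(int((back >= 2).sum()))  # cells both hot sensors point to -> 5
```

Cells with a count of two or more are candidate source locations seen by multiple "hot" sensors, while the "cold" sensor's upwind wedge is ruled out regardless of how many "hot" sensors implicate it.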

If all the cells in the dual-matrix have not been read and checked, step 1250 goes back to step 1220 and repeats the process on the next cell until the entire backtrack analysis has been completed. The procedure 1050 next checks, in step 1255, whether the consequence zone display is to be constructed for "hot" sensor nodes. If the consequence zone is "turned on" in the environmental state vector, the consequence zone computation is initialized in step 1260 by setting the consequence count array everywhere to zero. The consequence zone is the total region downwind of all sensors that are reading a dangerous level of contaminant at their location, i.e. are "hot."

The data defining the consequence zone are initialized for the selected "hot" sensor node or nodes by finding the left and right edge values at the location of each "hot" sensor node. To be specific, the contour level values from the right edge matrix R(Xs, Ys) and left edge matrix L(Xs, Ys) at the sensor node location Xs, Ys are evaluated and stored as they are used repeatedly to compute whether each location in the domain lies within the consequence zone or not.

Each of the cells in the overall domain is again "read" from the library one at a time 1265. For each cell, at location X, Y, the local R(X, Y) and L(X, Y) contour level values are read from the composite data structure and evaluated using Formula 2, below.

In step 1270, Formula 2 is used to determine whether X, Y lies in the consequence zone.

Formula 2 (1270): If R(Xs, Ys) >= R(X, Y) and L(X, Y) >= L(Xs, Ys), then the location (cell) X, Y lies within the Consequence Zone; otherwise the location (cell) X, Y lies outside the Consequence Zone.

If the cell is within the consequence zone, step 1275 sets the consequence count array value to 1 at that location. In other words, step 1270 checks, at each location within the domain, whether the sensor's right edge value is no less than the cell's right edge value and the cell's left edge value is no less than the sensor's left edge value. One embodiment of Formula 2, in the computer programming language FORTRAN, is: inConsequenceZone = r(xs,ys) .ge. r(x,y) .and. l(x,y) .ge. l(xs,ys) where inConsequenceZone is a logical variable. If all the cells in the dual-matrix have not been scanned, step 1280 goes back to step 1265 and repeats the process on the next unprocessed cell. Once all the "hot" sensor nodes have been processed from the node list, step 1285 returns to the master control procedure of Figure 10.

Figure 13 shows a logical flow diagram of the algorithm that processes source nodes using the disclosed composite data structure to compute the contamination footprint. The composite flow map data structure, shown in Figures 2 through 6, and Formula 1 are used to determine all points within the contaminant footprint (the gray regions 660 and 665 in Figure 6). When the overall control procedure 1000 invokes, or "enters" in step 1300, the process sources procedure 1070, it provides the composite data structure corresponding to the current wind speed and direction and passes through the current environmental and node state vectors defining the scenario. The data controlling this processing is initialized 1310 and the plume cell count array is set to zero at all of the cells in the data structure domain. The initialization of step 1310 includes determining whether the footprint has to be recalculated since the last time the control procedure was invoked. The data structure contour level values from the right edge matrix R(Xs, Ys) and left edge matrix L(Xs, Ys) at the source node location Xs, Ys are evaluated and stored because they are used repeatedly to compute whether each location in the entire data structure lies within the footprint or not.

Each of the NX by NY cells in the composite data structure is read from the library one at a time 1320. For each location X, Y, the local R(X, Y) and L(X, Y) contour level values are read from the composite data structure and evaluated using Formula 1, below. Step 1330 uses Formula 1 to determine whether X, Y lies in the contamination footprint.

Formula 1 (1330): If R(Xs, Ys) >= R(X, Y) and L(X, Y) >= L(Xs, Ys), then the location (cell) X, Y lies within the contamination footprint; otherwise the location (cell) X, Y lies outside the contamination footprint.

If the cell is within the contamination footprint 1340, the cell plume count array value is set to 1 at that location. These count values are zero by default and are only re-calculated when a source node has been moved or activated or when the wind, and hence the composite data structure, has been changed. One embodiment of Formula 1, in the computer programming language FORTRAN, is: inFootprintZone = r(xs,ys) .ge. r(x,y) .and. l(x,y) .ge. l(xs,ys) where inFootprintZone is a logical variable. If the escape display is active in the environmental state vector according to step 1350, the escape route lines are drawn perpendicular to the locally prevailing wind on top of the footprint display 1360. These lines show the optimal direction to walk, away from the local plume centerline, to avoid exposure.
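The recalculate-only-when-needed behavior of the plume count array can be sketched with a small cache wrapped around the downwind footprint test. This is hypothetical Python on toy edge matrices; the id()-based cache key is a simplification of the patent's "has the source moved or the wind changed" check:

```python
import numpy as np

class FootprintCache:
    """Sketch of the plume-count bookkeeping: the footprint is only
    recomputed when the source node moves or the dual-matrix (wind)
    changes; otherwise the stored array is reused. Illustrative, not
    the patent's code."""
    def __init__(self):
        self._key = None
        self._plume = None

    def footprint(self, R, L, xs, ys):
        key = (id(R), id(L), xs, ys)
        if key != self._key:            # source moved or wind changed
            self._key = key
            self._plume = ((R[ys, xs] >= R) & (L >= L[ys, xs])).astype(int)
        return self._plume

Y, X = np.mgrid[0:5, 0:5].astype(float)
L, R = X + Y, (4 - X) + Y
cache = FootprintCache()
fp = cache.footprint(R, L, 2, 2)
print(int(fp.sum()))                       # -> 9 downwind cells
print(cache.footprint(R, L, 2, 2) is fp)   # -> True (reused, not rebuilt)
```

Because the scan is a simple comparison against two stored values, even recomputation is cheap; the cache just avoids redundant work when the display is refreshed without the scenario changing.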

The process then checks to see whether all the cells in the domain have been processed 1370 and, if they have not, goes back to step 1320. When all of the cells have been processed according to Formula 1 and the requested display constructed, the algorithm returns to the overall control routine of Figure 10.

Although this specification describes particular embodiments of the invention, those skilled in the art will understand that still other variations and modifications can be made without detracting from the scope and spirit of the invention as described in the claims.