

Title:
APPARATUS AND METHOD FOR DISPLAYING MULTI-FORMAT DATA IN A 3D VISUALIZATION SPACE
Document Type and Number:
WIPO Patent Application WO/2017/066679
Kind Code:
A1
Abstract:
A system comprising: a database configured to store at least one point cloud, the point cloud having a plurality of data points representing distances between a source and each of the objects; at least one processor configured to: (a) access, from the database, a point cloud; (b) determine, using the data points, (i) a plurality of spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined spatial planes; (c) segment each of the objects' determined spatial planes as a function of at least one of the determined edges; (d) merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and (e) generate, using the spatial value dataset, a multi-dimensional representation of at least some of the physical location within a virtual environment.

Inventors:
SATKUNARAJAH THARMALINGAM (US)
SATHASIVAM KALAYINI (US)
Application Number:
PCT/US2016/057190
Publication Date:
April 20, 2017
Filing Date:
October 14, 2016
Assignee:
SATKUNARAJAH THARMALINGAM (US)
SATHASIVAM KALAYINI (US)
International Classes:
G06F7/60
Foreign References:
US20120256916A12012-10-11
US7995055B12011-08-09
US20150154467A12015-06-04
US20130121564A12013-05-16
Attorney, Agent or Firm:
CHENG, Susie, S. et al. (US)
Claims:
What is claimed is:

1. A computer implemented method for generating a multi-dimensional representation of at least some of a physical location that includes a plurality of objects, the method performed by one or more processors configured by executing code that causes the one or more processors to: access a point cloud having a plurality of data points representing distances that are measured between a source and each of the objects; determine, using the data points, (i) a plurality of respective spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined respective spatial planes; segment each of the objects' determined respective spatial planes as a function of at least one of the determined edges; merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and generate, using the spatial value dataset, a multi-dimensional representation of at least some of the physical location within a virtual environment.

2. The method of claim 1, wherein the one or more processors are further configured to: display on a display device the multi-dimensional representation of the spatial value dataset.

3. The method of claim 1, wherein the processor is further configured to: determine that at least one data point does not correspond to at least one of the determined respective segmented planes; and remove the at least one data point from the point cloud.

4. The method of claim 1, wherein the processor is further configured to: determine that the spatial value dataset is valid or not valid by comparing the spatial value dataset with at least one other data set corresponding to at least some of the physical location.

5. The method of claim 1, wherein the source is a light source and at least one of the objects is light reflective.

6. The method of claim 1, wherein the segmented spatial planes are segmented by a maximum likelihood estimator sampling consensus algorithm.

7. The method of claim 4, wherein the at least one other data set includes building information management data.

8. The method of claim 4, wherein the at least one other data set includes information obtained from the physical location.

9. The method of claim 4, wherein the at least one other data set includes image data.

10. The method of claim 4, wherein the processor is further configured to: remove any data point within the point cloud not corresponding to a segmented plane.

11. The method of claim 4, wherein the processor is further configured to: identify one or more values of the spatial value dataset lacking a corresponding value in at least one other data set, and remove the one or more identified spatial values from the spatial value dataset.

12. A system for generating a multi-dimensional representation of at least some of a physical location that includes a plurality of objects, the system comprising: a database configured to store at least one point cloud, the point cloud having a plurality of data points representing distances that are measured between a source and each of the objects; at least one processor configured by executing code that causes the at least one processor to: access, from the database, a point cloud; determine, using the data points, (i) a plurality of respective spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined respective spatial planes; segment each of the objects' determined respective spatial planes as a function of at least one of the determined edges; merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and generate, using the spatial value dataset, a multi-dimensional representation of at least some of the physical location within a virtual environment.

13. The system of claim 12, further comprising at least one display device configured to receive the multi-dimensional representation of at least some of the physical location and display the multi-dimensional representation of at least some of the physical location.

Description:
APPARATUS AND METHOD FOR DISPLAYING MULTI-FORMAT DATA IN A 3D

VISUALIZATION SPACE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. provisional application serial number 62/241,394, filed October 14, 2015, which is hereby incorporated by reference in its entirety.

INTRODUCTION

[0002] The present invention describes an apparatus and method for integrating multi-sensor, multi-temporal, multi-spatial, multi-format data from multiple sensors or data stores in a real-time, engineering-grade, location-based analysis and predictive-analytic 3D data stack and visualizing that data in real time in response to user inquiries.

BACKGROUND OF THE INVENTION

[0003] Although there are many types of spatial and non-spatial data held by different organizations, agencies, and private companies, the data contained therein is rarely unified or in compatible formats. The disparate nature of the data repositories, formats and structures prevents the maximum utilization of the investment in the data capture, initial analysis and maintenance. There exists a need, therefore, for harmonizing the data in a manner that allows these disparate data stores and historical records to be used in furtherance of development goals and tasks.

[0004] There are many software and database tools and environments that can access and analyze components or subsets of the data, but a comprehensive geospatial solution configured to read and access multi-format data models and real-time data transactions is required to solve the complex multi-dimensional problems faced as part of the need for accurate spatial and contextual data to support smart city growth.

[0005] Therefore, what is needed is a system and method that provides improved access, conditioning, integration and visualization of geospatial and other actionable information and utilizes the same to provide answers to user queries regarding the location of various infrastructures and the optimal positioning of actions within a defined space. In particular, what is needed is a system and method that provides real-time visualizations that combine data from multiple sources to present a cohesive analysis of the infrastructure and information relating to a specific location and serve the operational and business needs of industries such as Transportation, Water, Environmental, Engineering, Telecommunication, Finance, Energy, Natural Resources, Defense, Insurance, Retail, City Planning, Utilities, and Security.

SUMMARY OF THE INVENTION

[0006] According to one implementation of the invention described, a system for generating a multi-dimensional representation of at least some of a physical location that includes a plurality of objects includes a database configured to store at least one point cloud, the point cloud having a plurality of data points representing distances that are measured between a source and each of the objects; and at least one processor configured by executing code that causes the at least one processor to: access, from the database, a point cloud; determine, using the data points, (i) a plurality of respective spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined respective spatial planes; segment each of the objects' determined respective spatial planes as a function of at least one of the determined edges; merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and generate, using the spatial value dataset, a multi-dimensional representation of at least some of the physical location within a virtual environment.
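The summary above describes a pipeline from raw range measurements to planes, segments, a merged dataset, and a rendered representation. The patent publishes no source code; the following Python sketch shows one conventional way to implement the plane-determination and merge steps using a RANSAC-family consensus estimator (claim 6 separately mentions a maximum likelihood estimator sampling consensus variant). All function names, thresholds, and iteration counts here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fit_plane_ransac(points, iterations=200, threshold=0.01, seed=0):
    """Fit one spatial plane (n . x + d = 0) to a 3D point cloud by
    random sample consensus; returns (normal, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Normal of the plane through the three sampled points.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

def merge_segments(segments):
    """Merge data points from two or more segmented planes into a
    single spatial value dataset (a step (d)-style merge)."""
    return np.vstack(segments)
```

A full implementation would repeat the fit per object, detect edges on each recovered plane, and carve segments along those edges before merging, as the claimed method describes.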

[0007] In accordance with another aspect that can be implemented in one or more embodiments, the present invention is directed to a collection of networked apparatus, or a method, for improving the use of incompatible multivariate, multi-sensor, multi-temporal, multi-spatial, multi-format spatial and non-spatial data obtained from one or more sensor devices by accessing and transforming the data into compatible formats within the memory of a computer and generating a 3D visualization thereof configured to provide answers to user queries and predictive analytics. The method comprises using a computer, properly configured, to select a location of interest, such as a particular area bound by geospatial data, using a geospatial query generator. The query returns a data object that represents a 3D stack of information relating to the particular location. In one arrangement, the 3D stack is constructed by accessing a plurality of data objects obtained from at least one of a plurality of external data sets or active sensor devices using an input module configured as code executing in the processor, wherein the data is relevant to the geospatial data of the inquiry.

[0008] More particularly, prior to generating the 3D data stack, each data object obtained from the plurality of external data sets or sensors is evaluated for proper format type using a format check module configured as code executing in the processor. The format check module is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type, and the module further configures the processor to identify data objects with an incompatible format type. The processor is configured to store each data object having an incompatible format as an element in a conversion array.
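The format check described in this paragraph reduces to partitioning incoming objects against a pre-set array of compatible types. The sketch below is a minimal illustration of that logic; the format names, the dictionary layout of a data object, and the function name are assumptions for demonstration only.

```python
# Hypothetical pre-set format array; the patent does not list concrete types.
COMPATIBLE_FORMATS = ["geojson", "las", "geotiff"]

def check_formats(data_objects, compatible=COMPATIBLE_FORMATS):
    """Split incoming data objects into those already in a compatible
    format and a conversion array of objects needing conversion."""
    compatible_objs, conversion_array = [], []
    for obj in data_objects:
        if obj["format"] in compatible:
            compatible_objs.append(obj)
        else:
            # Incompatible format: store as an element of the conversion array.
            conversion_array.append(obj)
    return compatible_objs, conversion_array
```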

[0009] Using a conversion module configured as code executing in the processor, each data object having an incompatible format type is converted into a compatible format type by iterating over each element in the conversion array, identifying a conversion factor for converting the data object to an approved format type, and applying the conversion factor to obtain a converted data object. These converted data objects are linked to one another and function as a 3D data stack for a given location.

[0010] The resulting 3D data stack is transmitted to a computing device that generates a three-dimensional visualization of the 3D data stack and allows the user to view and inspect the data represented by the 3D data stack either remotely or at the location corresponding to the query. The computing device is, in one implementation, a Virtual Reality and/or Augmented Reality hardware and software system that utilizes the 3D data stack to generate immersive environments to analyze and evaluate the user's queries. Any data obtained or input into the computing device is then used to update the 3D data stack in real-time.
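The conversion module in paragraph [0009] can be pictured as a lookup of a conversion "factor" per source/target format pair, applied while iterating over the conversion array. In this sketch the factor is modeled as a callable; the registry contents and object fields are invented for illustration and are not defined by the patent.

```python
# Hypothetical converter registry keyed by (source_format, target_format).
CONVERTERS = {
    ("dwg", "geojson"): lambda obj: {**obj, "format": "geojson"},
}

def convert_all(conversion_array, target="geojson", converters=CONVERTERS):
    """Iterate over the conversion array, identify a conversion factor
    (here, a function) for each object, and apply it."""
    converted = []
    for obj in conversion_array:
        factor = converters.get((obj["format"], target))
        if factor is None:
            raise ValueError(f"no converter for {obj['format']} -> {target}")
        converted.append(factor(obj))
    return converted
```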

[0011] Specifically, and without limitation to alternative uses of the systems, methods and tools described herein, the present invention includes one or more geospatial data analysis systems to visualize geospatial products and assets in a 3D virtual platform, and to provide streaming 3D geospatial data services from a remote service provider to multiple client devices such as desktops, tablets, and mobile devices. In a further implementation, the systems described utilize a multilayer, multi-domain spatial engine that uses augmented artificial intelligence and prediction systems to generate and integrate geospatial data for use by the client devices. These and other aspects, features and advantages of the present invention can be further appreciated from the following discussion of certain more particular embodiments thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The foregoing and other features of the present invention will be more readily apparent from the following detailed description and drawings of one or more exemplary embodiments of the invention, in which:

[0013] FIG. 1A is an overview block diagram detailing the arrangement of elements of the system described herein in accordance with one embodiment of the invention.

[0014] FIG. 1B is an overview block diagram detailing the further arrangement of particular elements of the system described herein in accordance with one embodiment of the invention.

[0015] FIG. 1C is a block diagram detailing the relationship of and between different elements and sub-elements of the system described herein in accordance with one embodiment of the invention.

[0016] FIG. 1D is a block diagram detailing the particular elements of the geospatial analytic appliance in accordance with one embodiment of the invention.

[0017] FIG. 1E is an alternative block diagram detailing the particular elements of the geospatial analytic appliance in accordance with one embodiment of the invention.

[0018] FIG. 2 is a flow diagram detailing the steps of an embodiment of the method as described herein.

[0019] FIG. 3 is a block diagram of an example system in accordance with an embodiment of the present invention.

[0020] FIG. 4 is a flow diagram detailing the additional steps of an embodiment of the method applied as described herein.

[0021] FIG. 5 is a flow diagram detailing the particular steps of an embodiment of the system as described herein.

[0022] FIG. 6 details particular implementations of the operational language used to instruct one or more processors of the system, as described herein.

[0023] FIG. 7 details visual output according to one implementation of the simulations of the system as described herein.

[0024] FIG. 8 is a flow diagram detailing additional steps of an embodiment of the method applied as described herein.

[0025] FIG. 9 is a flow diagram detailing further additional steps of an embodiment of the method applied as described herein.

[0026] FIG. 10 is a flow diagram detailing yet additional steps of an embodiment of the method applied as described herein.

[0027] FIG. 11 is a block diagram detailing the relationship between elements of the system described herein.

[0028] FIG. 12 is a diagram detailing particular functionality of the modules of the system described herein.

[0029] FIG. 13 is a flow diagram detailing additional steps of an embodiment of the method applied as described herein.

[0030] FIG. 14 is a flow diagram detailing yet additional steps of an embodiment of the method detailed in FIG. 13 as described herein.

[0031] FIG. 15 is a block diagram detailing the relationship between sensor elements of the geospatial analytic appliance of the system described herein.

[0032] FIG. 16 is an overview of the communication protocols utilized by the sensors of FIG. 15.

[0033] FIG. 17 is a block diagram detailing the relationship between one or more elements of the geospatial analytic appliance of the system described herein.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE PRESENT INVENTION

[0034] By way of overview and introduction, the present invention concerns a system and method for accessing, transforming and visualizing spatial and non-spatial data related to a geographic location and providing such transformations and visualizations to a remote computing device, such as a smart phone, virtual reality (VR) interface device, augmented reality (AR) interface device, or autonomous or semi-autonomous device.

[0035] Specifically, the present system and method are directed to running queries in a data object database for a geographic location and receiving a customized data package that combines available geospatial data, contextual data, metadata and predictive data to provide a custom solution to the user query. Such a data stack, when implemented in a 3D environment, is used to provide actionable information to entities in the Transportation, Water, Environmental, Engineering, Telecommunication, Finance, Energy, Natural Resources, Defense, Insurance, Retail, City Planning, Utilities (e.g. Gas, Oil, Electric), and Security industries.

[0036] For example, the system enables the integration or registration of multiple datasets providing or having geospatial information relative to a given location. Such datasets are aligned using one or more common reference points. Here, such a common reference point might be a GPS coordinate in a first dataset that corresponds to a landmark in an image-based dataset. By identifying one or more commonalities between the datasets pre- or post-format conversion, the multiple datasets can be registered or integrated with one another. Various registration algorithms are used to integrate different geospatial datasets into a single 3D data stack that permits visualization of the entire data stack in a multi-dimensional virtual representation.
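The paragraph above leaves the registration algorithm open. One standard choice for aligning two datasets by paired common reference points is the Kabsch/Procrustes rigid-transform estimate, sketched below as an illustration; the patent does not name this (or any) specific algorithm, and the function name is an assumption.

```python
import numpy as np

def register_by_reference_points(src, dst):
    """Kabsch/Procrustes alignment: estimate rotation R and translation t
    such that R @ src_i + t ~= dst_i for paired 3D reference points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Sign guard keeps the result a proper rotation (no reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With the transform in hand, every point of the source dataset can be mapped into the destination dataset's frame, which is the sense in which the two datasets become "registered" into one 3D data stack.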

System Overview

[0037] Turning to FIGS. 1A-1D, a block diagram of the overall system 100 is provided. As shown, current geospatial data in a variety of formats (e.g. raster, vector, point, contextual, dynamic/sensor) are stored in a plurality of external databases 102. The databases have a connection to the present geospatial analytic appliance 104.

[0038] The physical structure of the databases 102 or 108 may be embodied as solid-state memory (e.g., ROM), hard disk drive systems, RAID, disk arrays, storage area networks ("SAN"), network attached storage ("NAS") and/or any other suitable system for storing computer data. In addition, the database(s) 102 may comprise caches, including database caches and/or web caches. Programmatically, the databases 102 or 108 may comprise a flat-file data store, a relational database, an object-oriented database, a hybrid relational-object database, a key-value data store such as HADOOP or MONGODB, in addition to other systems for the structure and retrieval of data that are known to those of ordinary skill in the art. The database includes the hardware and software to enable a processor, local or remote to the geospatial analytic appliance 104, to retrieve and store data within the database(s) 102.

[0039] In one particular configuration, the databases 102 are SQL, NoSQL, flat, relational, object or other commonly used database types and schema. In the configuration provided in FIG. 1A, each of the databases 102 is remote to the geospatial analytic appliance 104, and connections between the external databases 102 and the analytic system are accomplished by network connections (such connections shown as arrows). The external databases 102 are configured to contain accessible data relating to specific geographic locations, including data feeds or streams obtained from direct and remote sensing platforms. The data, in one embodiment, is stored in one or more proprietary vendor formats. For example, one or more of the external databases 102 stores data obtained from ultra, high, medium and low resolution or accuracy sensor devices. The sensor devices might use optical, laser, radar, thermal, sonar/acoustic, seismic, bathymetric, and geological sensors owned or operated by private companies, government agencies or other organizations. In a particular embodiment, these sensors are space-based, airborne-based, ship-based, vehicle-based, hand-held, or permanent terrestrial installations that provide periodic, single use, or continuous feeds and streams of data relating to physical conditions and properties under observation and analysis. In one particular arrangement, the data stored in the external databases 102 and accessed by the geospatial analytic appliance 104 are geospatial data files or data objects. The external database(s) 102 also contain archival records, customer, survey, municipal, zoning, geologic, environmental and other data collected over time by various governmental, scientific, or commercial entities. In one embodiment, the data and associated metadata obtained from environmental sensors is stored in SQL format databases in the form of spreadsheets, tabular, textual, HTML/XML or other file formats.

[0040] In a particular implementation, the database(s) 102 is an intelligent database combining multiple datasets of point cloud data, GPR datasets, images, BIM models, BIM classified objects, IoT sensor data, etc. Such combined databases are referred to herein as Enterprise Data Lakes. Here, large amounts of multi-format spatial data are transformed into an integrated and extracted form prior to access by the geospatial analytic appliance 104. All the extracted data is saved in an integrated database by data synchronizing, data processing, and data management modules local to the database(s) 102 or the geospatial analytic appliance 104.

[0041] The geospatial analytic appliance 104 is configured to access and transform data obtained from the external database(s) 102. In one arrangement, the geospatial analytic appliance 104 is a computer equipped with one or more processors (as shown in FIG. 3), RAM and ROM memory, network interface adaptors and one or more input or output devices. In a further arrangement, the geospatial analytic appliance 104 is a computer server or collection of computer servers, each server configured to store, access, process, distribute or transmit data between one another and other computers or devices accessible or connectable therewith. In still a further implementation, the geospatial analytic appliance 104 is a hosted server, virtual machine, or other collection of software modules or programs that are interrelated and hosted in a remote accessible storage device (e.g. a cloud storage and hosting implementation) that allows for dynamically allocated additional processors, hardware or other resources on an "as-need" or elastic basis. Furthermore, elastic load balancing algorithms are utilized to ensure that sufficient back-end capacity is present to enable the system to handle multiple concurrent connections and requests.

[0042] In one implementation, the geospatial analytic appliance 104 is one or more portable computing devices such as an Apple iPad/iPhone® or Android® device or other electronic device executing a commercially available or custom operating system, e.g., MICROSOFT WINDOWS, APPLE OSX, UNIX or Linux based operating system implementations. In other implementations, the geospatial analytic appliance 104 is or includes custom or non-standard hardware, firmware or software configurations. For instance, the data processing elements of the geospatial analytic appliance 104 comprise one or more of a collection of micro-computing elements, computer-on-chip devices, home entertainment consoles, media players, set-top boxes, prototyping devices or "hobby" computing elements. The computing elements, such as the processor(s) of the geospatial analytic appliance 104, can comprise a single processor, multiple discrete processors, a multi-core processor, or other types of processors known to those of skill in the art, depending on the particular embodiment.

[0043] In a particular implementation, the geospatial analytic appliance 104 is configured with code executing within one or more processors contained therein to access data collected from both databases and remote data collection devices. For instance, the geospatial analytic appliance 104 is configured to communicate with the one or more data capture devices 110 through wired and/or wireless communication linkages or interfaces, as shown in FIG. 1C. Here, the data capture devices 110 communicate with the geospatial analytic appliance 104 using USB, RF transmitters, digital input/output pins, eSATA, parallel ports, serial ports, FIREWIRE, WI-FI, BLUETOOTH, or other communication interfaces. The processors are also configured, through hardware and software modules, to connect to remote servers, computers, peripherals or other hardware using standard or custom communication protocols and settings (e.g., TCP/IP, etc.), either through a local or remote network or through the Internet.

[0044] The geospatial analytic appliances 104 are directly or indirectly connected to one or more memory storage devices (memories) to form a microcontroller structure. The memory is a persistent or non-persistent storage device that is operative to store an operating system for the processor in addition to one or more software modules. In accordance with one or more embodiments, the memory comprises one or more volatile and non-volatile memories, such as Read Only Memory ("ROM"), Random Access Memory ("RAM"), Electrically Erasable Programmable Read-Only Memory ("EEPROM"), Phase Change Memory ("PCM"), Single In-line Memory ("SIMM"), Dual In-line Memory ("DIMM") or other memory types. Such memories can be fixed or removable, as is known to those of ordinary skill in the art, such as through the use of removable media cards or modules. In one or more embodiments, the memory of one or more processors of the geospatial analytic appliance 104 provides for the storage of application program and data files. For example, as shown in FIGS. 1D and 1E, one or more modules (e.g. service engines, messaging engines, visualization engines, analytics engines, enterprise modules, database interface modules, simulation engines, real-time engines, and application services), distributed in a plurality of application layers, are stored within the working, or accessible, memory of the geospatial analytic appliance 104.

[0045] Furthermore, one or more of the modules provided in FIG. 1D are stored in secondary computer memory, such as magnetic or optical disk drives or flash memory, that provides long-term storage of data in a manner similar to the persistent memory device. In one or more embodiments, the memory of the processors provides for storage of application program and data files when needed.

[0046] The geospatial analytic appliance 104 is configured to execute code written in a standard, custom, proprietary or modified programming language such as a standard set, subset, superset or extended set of JavaScript, PHP, Ruby, Scala, Erlang, C, C++, Objective-C, Swift, C#, Java, Assembly, Go, Python, Perl, R, Visual Basic, Lisp, or Julia or any other object-oriented, functional or other paradigm based programming language. One or more of the programming languages (702) provided in FIG. 6 are implemented by the geospatial analytic appliance 104, while alternative software development kits (SDKs 704) are utilized to provide the functionality described herein to one or more output display devices 106.

[0047] In one or more implementations, the geospatial analytic appliance 104 provides an analysis environment for both structured data (addressed by SQL) and unstructured data (addressed by NoSQL) obtained from the databases 102. Particularly, the data obtained from the databases 102 are provided as inputs to Building Information Model (BIM) systems configured to generate engineering-grade spatial asset datasets that provide virtual representations of buildings and other infrastructure. In one or more implementations, the geospatial analytic appliance 104 includes one or more GIS components, configured as code executing within one or more processors and enabling users to implement dynamic, real-time mapping, adding GIS and BIM modification capabilities to existing applications or to custom build new mapping/BIM applications.

[0048] For example, the geospatial analytic appliance 104 is configured to deploy engineering-grade GIS data, maps, and geo-processing services in desktop, web or mobile platforms using application programming interfaces (APIs) for various languages and standard Software Development Kit (SDK)/dev-kit tools supporting application development. In a particular implementation, the geospatial analytic appliance 104 is configured to generate an editable collection of data that can be used in BIM based applications, GIS-enabled applications or as part of an extensible developer control arrangement allowing for cross-platform web, mobile and desktop application development or use. In one instance, the geospatial analytic appliance 104 permits a user to create, model, or draw 2D graphic features such as points, lines, polygons, ellipses, annotations, and dimensions (X, Y, Z, M) on real-time or accessed geospatial data.

[0049] As a further implementation, the geospatial analytic appliance 104 is configured by one or more geographic operations modules to edit or annotate obtained geospatial data by adding virtual shapes to the data in order to create buffers, calculate differences, and find intersections, unions, or inverse intersections of shapes. In yet a further implementation, the geospatial analytic appliance 104 described herein is further configured to perform network analysis to find the best routes between two points, the closest facilities from a given location and/or determine which routes should be assigned to a transport vehicle using one or more shortest path estimation modules.
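The shortest path estimation mentioned above is typically implemented with a standard graph algorithm; the patent does not specify one, so the following Dijkstra sketch over a weighted adjacency dictionary is purely illustrative of the "best route between two points" analysis.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over {node: {neighbor: weight}}; returns
    (path, cost) for the lowest-cost route from start to goal."""
    dist, prev = {start: 0.0}, {}
    heap, visited = [(0.0, start)], set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessor links back from the goal to reconstruct the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]
```

In a routing module the nodes would be road-network junctions and the weights travel times or distances derived from the geospatial data.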

[0050] In a further arrangement, the geospatial analytic appliance 104 provides for effective AR/VR visualization, simulation and analysis of surface, BIM, geographic and IoT datasets in a high operations-per-second and large memory bandwidth game engine (such as Unity®). Furthermore, the geospatial analytic appliance 104 is configured to output three-dimensional environmental data, such as both above-ground/water and below-ground/water datasets, as a data stream to one or more standalone computers or mobile based platforms.

[0051 ] With particular reference to FIG. IB, the geospatial analytic appliance 104 is further configured to generate the data steam output in a native formation of the remote display device 1 6, As shown, various web services such as, but not limited to RESTful services, SOAP services, and hardware dependent APIs (e.g. Android or IOS) are used to integrate the output of the geospatial analytic appliance 104 with Enterprise Software Solutions, such as through Enterprise Service Bus (ESB) (as shown in FIG. 1 C) along with existing commercial asset management solution using one or more standard, custom or proprietary augmented artificial intelligent ΒΓΜ based spatial asset management solutions. With continued reference to FIG. IB, the Enterprise Software solutions module of the geospatial analytic appliance 104 provides a users (such as one connecting through the remote display device, a set of software tools, configured as one or more controls presented through one or more user interface modules, presented to the remote display device 106. For example, through a connection between the remote display device 106 and the geospatial analytic appliance 104, a user is able to create applications, packages, framework, for use in remote hardware platforms that access the geospatiai data analyzed and provided by the geospatial analytic appliance 1 4. Here, one or more remote display devices 106 connected to the geospatial analytic appliance 104 can generate downloadable or transferable software packages to utilize the data accessible to the geospatial analytic appliance 104, such data packages can include one or more software modules that permit a remote device to access the geospatial analytic appliance 104 through REST and SOAP API's which configure hardware to transfer the response and request data between client and server devices using one or more HTTP protocols. Extensible Markup Language (XML) or JavaScript Object Notation (JSON). 
Here, a remote-based API allows a user to access, directly or indirectly, one or more hardware resources (such as direct sensor feeds from sensors 2002). Furthermore, users are able to generate software modules that implement one or more video acceleration APIs to provide low-level hardware access (such as to Graphics Processing Units) to assist in rendering or encoding video or graphic data provided by the geospatial analytic appliance 104 or generated locally in response to data acquired from the geospatial analytic appliance 104. In one or more further implementations, the geospatial analytic appliance 104 also possesses a novel way of on-field mobile BIM modeling with this technology using a hand-held computer-vision based geo-tab.
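As an illustrative sketch of the client/server request-and-response pattern described above (the endpoint payload shape and field names such as `radius_m` and `features` are assumptions for illustration, not taken from the source):

```python
import json

def build_geo_request(lat, lon, radius_m):
    """Encode a hypothetical geospatial query as a JSON request body."""
    return json.dumps({"lat": lat, "lon": lon, "radius_m": radius_m})

def parse_geo_response(body):
    """Decode a JSON response into (object id, distance) pairs."""
    payload = json.loads(body)
    return [(f["id"], f["distance_m"]) for f in payload.get("features", [])]

# Simulated server response -- no network round trip; illustrative only.
response = json.dumps({"features": [{"id": "mdo-17", "distance_m": 42.0}]})
print(parse_geo_response(response))  # [('mdo-17', 42.0)]
```

In a real deployment the request body would travel over one of the REST or SOAP endpoints the passage mentions; the JSON encoding/decoding step is the same either way.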

[0052] A model database 108, such as a NoSQL database, is connected to the analytic system 104 and is used to store data output from the processing of input data from the geospatial databases 102. In an alternative configuration, the model database 108 is a SQL, relational, flat, object or other configuration database. The model database stores model data objects (MDOs) that represent a collection of data elements corresponding to a particular geographic location, structure or entity. However, in further arrangements, the MDO contains links to other MDOs in close proximity to the location in question. In this way, queries that request information within a radius or given distance from a location can also be utilized and accessed. The NoSQL database 108 uses Object Based Intelligence (OBI) architecture such that a data object representing a tangible or intangible item (e.g. person, place, or thing) exists only in a single place across time or at an instant in time. The NoSQL database is, in one configuration, implemented with BIM (Building Information Modeling) architecture. BIM architecture allows for the MDOs to be associated with additional information or features. For instance, detailed design, building analysis, documentation, fabrication, construction 4D/5D, construction logistics, operation and maintenance, demolition, renovation, programming and conceptual design data is included in the MDO.
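A minimal sketch of MDO records carrying proximity links, and of a query that follows those links to gather nearby objects. The record layout (`location`, `links`) and identifiers are hypothetical; the source does not specify a schema.

```python
# Hypothetical MDO store: each record keeps links to MDOs in close proximity.
mdos = {
    "mdo-1": {"location": (40.7128, -74.0060), "zoning": "C4", "links": ["mdo-2"]},
    "mdo-2": {"location": (40.7130, -74.0055), "zoning": "R6", "links": ["mdo-1"]},
}

def nearby(mdo_id, depth=1):
    """Collect an MDO plus linked MDOs, following proximity links `depth` hops."""
    seen, frontier = {mdo_id}, [mdo_id]
    for _ in range(depth):
        frontier = [l for m in frontier for l in mdos[m]["links"] if l not in seen]
        seen.update(frontier)
    return sorted(seen)

print(nearby("mdo-1"))  # ['mdo-1', 'mdo-2']
```

Following stored links avoids a full geometric scan: a radius query can seed from one MDO and expand outward, which matches the "radius or given distance" use described above.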

[0053] In one non-limiting example, an MDO for a particular address contains information about the subterranean infrastructure present at the address as well as other data relating to the same. All of the other MDOs relating to a particular geographic location (such as specific MDOs detailing zoning regulations at that location or traffic patterns) are collected and transformed by the geospatial analysis system 104 into a 3D data stack of real-time and historical data for a user regarding the geospatial and infrastructure features present at a specific location based on the data from the databases 102.

[0054] In a particular embodiment, each MDO contains one or more geospatial or identification reference points to align the data contained within one MDO with data contained in a second MDO. Through an iterative process, the data sets are aligned and conformed to one another such that the 3D stack contains a collection of aligned MDOs oriented to a common reference point. For example, where a first MDO contains sub-surface infrastructure data and a second MDO contains BIM data, the resulting 3D stack will align the two data sets so that a visualization will reproduce the spatial relationship between the two MDOs. Here, the alignment might be to align the MDOs according to one or more GPS coordinates found within each MDO. Such an alignment can produce a 3D stack that encompasses a larger (or greater) area or volume than either of the individual or component MDOs, based on the alignment process. Alternatively, the alignment can be accomplished using LIDAR (point cloud, and point clouds subject to registration) data, image data or other geospatial measurement data and procedures. By identifying one or more commonalities between the datasets pre- or post-format conversion, the multiple datasets can be registered or integrated with one another. Various registration algorithms are used to integrate different geospatial datasets into a single 3D data stack that permits visualization of the entire data stack in a multi-dimensional virtual representation.

[0055] The real-time and historical data collected into the 3D data stack is provided to a user through a user interface device 106. In a further implementation, the data obtained from the one or more databases 102 and analyzed by the geospatial analytic appliance 104 is output to one or more client devices, such as the user interface device 106.
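The simplest form of the alignment step above is a translation that moves one dataset's GPS anchor onto the shared reference point. The sketch below assumes points are already in a common metric (x, y, z) frame; a full registration pipeline would also estimate rotation and scale (e.g. via point-cloud registration), which is omitted here.

```python
def align_to_reference(points, gps_anchor, common_anchor):
    """Translate a dataset so its GPS anchor coincides with the shared
    reference point. Rotation/scale handling is deliberately omitted."""
    dx = common_anchor[0] - gps_anchor[0]
    dy = common_anchor[1] - gps_anchor[1]
    dz = common_anchor[2] - gps_anchor[2]
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

# Sub-surface MDO anchored at (10, 10, -5); the 3D stack's common reference is the origin.
aligned = align_to_reference([(10, 10, -5), (12, 10, -5)], (10, 10, -5), (0, 0, 0))
print(aligned)  # [(0, 0, 0), (2, 0, 0)]
```

Applying the same transform per MDO is what lets the combined 3D stack preserve spatial relationships between, say, BIM geometry and sub-surface infrastructure.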

[0056] In one configuration, the user interface device 106 is a desktop computer. Alternatively, the user interface device 106 is a mobile computing device, such as a smart phone or tablet computer using an Apple® or Android® operating system and/or hardware. In a further example, the mobile computing device is an augmented reality (AR) interface. In one implementation, AR devices function by overlaying data from the analytic system 104 onto the field of vision or view window integrated into the output device 106. In yet a further implementation, the input device is a virtual reality device. Virtual reality (VR) devices are immersion technology that projects images, video and data into a sphere encircling the user. Such technology employs motion tracking and other technologies to track a person's movement so as to provide the sensation of total immersion within a projected stage or area. Those possessing the requisite level of skill in the art will appreciate that VR, AR and mobile technology encompasses sufficient processors, software, firmware, audio-visual devices, user interfaces, geospatial locators and anatomical tracking technology to implement, construct or display a virtual version of a real or imaginary location and identify the location of the user therein.

[0057] In a particular implementation, the geospatial analytic appliance 104 provides a web-based interface that is accessible by the output display device 106. A Geo-Engine Web-Interface (GWI) client consists of easy-access user interfaces, and one or more toolsets, encodings, profiles, existing sample applications and best-practice documents. Such a web interface also includes one or more BIM activity maintenance web portals having standard revision maintenance functionality. For example, one or more output display devices 106 are able to access and import raw geospatial data (from Shapefiles, PostGIS, SpatiaLite) and BIM revision data into a repository stored within one or more data storage locations accessible by the output display device 106, where every change to the data is tracked.

[0058] In one particular implementation, the revision data is stored locally to the geospatial analytic appliance 104 or in a storage location remotely accessible by the geospatial analytic appliance 104. Thus, changes can be viewed by the output display device accessing the web interface such that a time-line or version history is presented. Additionally, the web interface includes one or more branched, forked, or diverted style version control systems to permit the roll-back of changes or reversion to older versions of data. Here, through the use of the described web interface, remote users of the system described herein (clients, app developers, modelers, map makers, designers or end users) have access to tools, features and web processing services offered by the geospatial analytic appliance 104. In a further implementation, the web interface provides for simultaneous messaging functionality, such that multiple members of a team or group can collaborate on a data set accessed or manipulated by the geospatial analytic appliance 104. For example, the web interface includes one or more messaging protocols (such as WebRTC-based video discussion, screen share, voice and text chat room facilities) to enable discussion. The web interface also provides functionality, in the form of one or more submodules that configure the output device, to make delta edits on BIM models, geo web processing services, interface tools, API integrations, and SDK tools to make applications and services that use data provided by the geospatial analytic appliance 104. Thus, in addition to directly accessing the data provided by the geospatial analytic appliance 104, one or more software applications are able to integrate the data provided or hosted by the geospatial analytic appliance 104 into various user applications. In one or more further implementations, where an application is receiving data from the geospatial analytic appliance 104, one or more additional submodules are provided by the web interface that integrate a query builder user interface to make intelligent queries on any data packages (addressed by MapReduce, jQuery etc.) or BIM models (addressed by BIMQL) according to the security or credential privileges permitted for respective users. The interface support also includes touch support, keyboard support, mouse support, VR headset and joystick support.

Accessing geospatial data from external databases or environmental sensors

[0059] FIG. 2 details particular workflows in accordance with aspects of the invention. The steps shown in FIG. 2 can be carried out by code executing within the memory of the processor 105, as may be organized into one or more modules, or can comprise firmware or hard-wired circuitry as shown in FIG. 3. For simplicity of discussion, the code referenced in FIG. 3 is described in the form of modules that are executed within a processor 105 of the analytic system 104 and which are each organized to configure the processor 105 to perform specific functions. The block diagram of FIG. 3 provides exemplary descriptions of the modules that cooperate with a memory and processor 105 of the analytic system 104 and cooperate to implement the steps outlined in FIG. 2. Those possessing an ordinary level of skill in the art will appreciate that any processor of the analytic system can comprise a plurality of cores or discrete processors, each with a respective memory, which collectively implement the functionality described below, together with associated communication of data therebetween.

[0060] With reference now to FIGS. 2 and 3, the geospatial data transformation is initiated and implemented by at least one query module 310 which comprises code executing in the processor 105 to access and search the records in the model document database 108 according to step 210. In one particular implementation, the query generated according to step 210 is a given set of coordinates or other location identifiers, e.g. place name, survey plot, or beacon serial number. In an alternative arrangement, the query generated is contextual. In this arrangement additional data, e.g. the coordinate location of the user, is also generated and supplied as part of the query. Furthermore, additional query types, such as semantic, spatial, contextual, remote sensing, situational or temporal queries are envisioned. For instance, a semantic query might entail encoding in search parameters a request for the location and history of all underground utilities within a 75-foot radius of a given address, along with design plans and any updated records in the last two years for a particular utility provider. In particular embodiments, queries can be voice input, text input or contextual using images or video of a specific location. Depending on the query type, additional modules are used to enable voice-to-text conversion and image recognition. For instance, natural language processing interfaces and speech recognition applications are deployed to parse the input and pass it to the remaining modules.

[0061] In a particular embodiment, the user's requests or inputs are used as queries to generate a data return. In one non-limiting example, the queries contain or include specific coordinates, geographic markers or references corresponding to an entry or collection of entries stored within the NoSQL database 108. In a further embodiment, the model document database 108 is a geospatial "global map" as per FIGS. 2 and 5. In the present embodiment, all data (vector, raster, imagery, text, video) is natively geo-referenced based on relevant source data or formats. Alternatively, such data is tagged based on a location identifier (e.g. global localization, zip code, latitude and longitude coordinates etc.) of the origin of the data or the query. Queries that do not have location-based parameters are, in particular embodiments, defaulted to the query origin location with default parameters. In a further arrangement, the model document database 108 implements a "many to many" relationship which allows for targeted spatial data (e.g. the data stack) by default or inference. The location search can be based on a point (discrete location), the user's location (via LBS), or a user-defined area (via GUI, contextual or text/string based).
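The defaulting rule described above (queries lacking location parameters fall back to the query's origin location) can be sketched as a small normalization step. The field names (`lat`, `lon`, `defaulted`) are assumptions for illustration:

```python
def normalize_query(query, origin):
    """If a query carries no location parameters, default it to the
    origin location of the query (hypothetical field names)."""
    q = dict(query)
    if "lat" not in q or "lon" not in q:
        q["lat"], q["lon"] = origin
        q["defaulted"] = True
    return q

print(normalize_query({"term": "utilities"}, (40.71, -74.00)))
# {'term': 'utilities', 'lat': 40.71, 'lon': -74.0, 'defaulted': True}
```

A query that already carries coordinates would pass through unchanged, so downstream search code can always assume a location is present.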

[0062] The query generated in step 210 is used to search the model database 108 as in step 220. In one implementation, a database search module 220 is used to query or search the model database 108 for data relating to the query. Here, the model database 108 utilizes building information modeling (BIM) architecture to store an MDO. For example, a query of a specific building address will result in the searching of the model database 108 for a collection of model data objects (combined as a 3D data stack) that represents all of the data corresponding to that building or location. In this way, municipal, infrastructure and other data corresponding to a real-world location is sent for transformation by a data platform module 306. In one embodiment, the BIM model architecture contains data that allows multiple MDOs to be queried such that an integrated city landscape can be generated from a collection of MDOs representing geographically proximate locations. For example, a number of buildings on either side of a street are each represented by MDOs. The BIM architecture allows for the MDOs to be queried as a group and supplied as a composite 3D stack detailing a particular above-ground and subsurface urban landscape.

[0063] In a further implementation, the data platform module 306 configures one or more processors of the geospatial analytic appliance 104 to implement an integrated database management system (DBMS), as provided in FIG. 11. As used herein, geospatial data management refers to the necessity of having multiple copies of a dataset be coherent with one another, or to maintaining data integrity. Data synchronization refers to keeping multiple copies of a dataset in coherence with one another so as to maintain data integrity. Data integrity ensures that all data in a database (such as database(s) 102) can be traced and connected to other data. As a result, all the data, whether mutable or immutable, can be recovered in an original form or format, without loss of information. Having a single, well-defined and well-controlled data integrity system increases stability, performance, reusability and maintainability of the geospatial dataset.

[0064] Here, the geospatial analytic appliance 104 is configured to communicate with the database(s) 102 to capture and analyze data stored in the database. For example, all incoming data, such as data sets obtained from outside data vendors or repositories, is synchronized and managed prior to use by the analytic, simulation or visualization modules. For instance, a processor of the geospatial analytic appliance 104 is configured by one or more data synchronization submodules to group or label data from similar geographic locations together. Here, data corresponding to a particular locality are synchronized together, where all those copies of data are managed and stored in clusters of remote or local processors and storage devices, such as a Hadoop-based cluster.

[0065] In a further implementation, one or more data synchronization submodules configure the geospatial analytic appliance 104 to synchronize a single set of data between two or more remote display devices 106, such as by automatically copying changes back and forth when a change in the dataset is detected. For example, the geospatial analytic appliance 104 is configured to synchronize data files such as raster files, images, videos, shape files, text documents, machine logs, ERP data, sensor data and other obtained information within and among storage devices and platforms.

[0066] As used herein, and with continued reference to FIG. 11, data synchronization can be local synchronization, where the remote display devices 106 are side-by-side and data is transferred through one or more local connections with the geospatial analytic appliance 104, or remote synchronization, when a user is mobile and the data is synchronized over a network, such as the Internet.

[0067] Furthermore, the web interface permits users working on projects with different coding languages, like Java, C/C++, Python, R, SAS, SQL, NoSQL, to access the content stored within the databases natively. However, in one or more implementations, in order to access data from the database (such as synchronized data according to FIG. 11), the web interface will implement one or more wrappers, drivers or software modules (such as JDBC, ODBC, pyodbc etc.) to access the data within the database from one or more web interfaces. If the requested data is available in the one or more local databases or storage locations, the data is provided directly to the remote display device 106. If the requested data is not available in the cache (e.g. local storage), one or more processors of the database or the geospatial analytic appliance 104 will use one or more search algorithms (such as MapReduce) to search in all the databases and fetch the data available from the different databases. As used herein, the MapReduce algorithm implements at least two functions, namely a Map function and a Reduce function. The map task is done by means of a Mapper function or method, which obtains an input, tokenizes it, maps and sorts it. The output of the Mapper function is used as input by the Reducer function or method, which in turn searches matching pairs and reduces them. The reduce task is done by means of a Reducer class. The Reduce function or method, as implemented by one or more sufficiently configured processors of the database(s) 102 or the geospatial analytic appliance 104, implements various mathematical algorithms to divide a task into small parts and assign them to multiple systems. In technical terms, the MapReduce algorithm sends discrete tasks to appropriate servers in a cluster. These mathematical algorithms may include sorting, searching, indexing, and TF-IDF subroutines or functionality.
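The Mapper/Reducer flow described above can be sketched as a single-process toy (a word-count-style aggregation); a real MapReduce framework would distribute the map and reduce phases across cluster nodes rather than run them in one loop:

```python
from collections import defaultdict

def map_fn(record):
    """Mapper: tokenize a record and emit (key, 1) pairs."""
    return [(token, 1) for token in record.split()]

def reduce_fn(key, values):
    """Reducer: collapse the matching pairs for one key."""
    return key, sum(values)

def map_reduce(records):
    groups = defaultdict(list)
    for record in records:                 # map phase
        for key, value in map_fn(record):  # shuffle/sort: group by key
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in sorted(groups.items()))

print(map_reduce(["sewer main", "water main"]))
# {'main': 2, 'sewer': 1, 'water': 1}
```

The same split applies to the sorting, searching, indexing, and TF-IDF subroutines mentioned: the mapper emits keyed intermediate values and the reducer aggregates them per key.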

[0068] In particular workflows where the model database 108 does not contain a specific or generic MDO for the location indicated by the query, a search of the remote database(s) is conducted as in step 230. According to one non-limiting embodiment of the system described, an external search module 308 comprises code that configures one or more processors to access and search the remote databases accessible by the analytic system 104. For instance, the external database search module 308 queries municipal, zoning, planning, waste management and utility databases for information relating to the location identified in the query. In a further arrangement, the data obtained from the external databases is passed first through an application or software interface (e.g. Safe SW or Blue Marble), software development kits, or application programming layers (APIs) that use real-time format conversions, web-forms and/or ASCII (RMDS) implementations to condition the data prior to handing or passing off to the other modules described herein. In one embodiment, the external databases are connected via a secure authorized socket and the database search module configures the processor to implement the suitable communication protocol. For example, the processor is configured to implement a structured connect routine to access the data models and define the translation and relationship schema for the data. Furthermore, the database search module 308 configures the processor to create an indexing table within the local or remote memory location during the connection/ingest process to enable "real time" searches.

[0069] The results of the search of the external databases are then transformed into model data object compatible formats and a model data object is created and stored in the model database as shown in step 240.
In one implementation, a data transformation module 310 comprises code that configures the processor to convert the data found in the external databases into model data formats using proprietary or open source algorithms configured to convert file types and transform data types while preserving the fidelity of the underlying content and data.

[0070] Additionally, the model data object is stored in the model database and is associated with, linked to, or incorporates sub-objects or properties that describe the semantic relation of the given object to other data. Such properties include accuracy values and attributes of the object model, including the scale and class of data as well as inheritance and data lineage. Additionally, the data model object has, in particular embodiments, attributes detailing the history, temporal or dynamic nature of the data, such as time stamps, changes over time, or durations. Furthermore, the model data object has, in a particular configuration, attributes addressing interoperability of the data, such as spatial attributes and SPARQL information and data. In further implementations, the model data object includes sub-attributes and data relating to cartographic, topographic and area-relevant information to update and expand the contextual and semantic attributes of the object model.

[0071] With particular reference to FIG. 4, a user-initiated query is parsed using a query parse module 410. Where no data relating to the query is identified in the model database 108, the parsed query is used to search the plurality of external databases or sensors 102. The results of this query are received by an input module 408 of the analytic system and evaluated for proper format type using a format check module 402, configured as code executing in the processor of the analytic system 104. The format check module 402 is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type, and the module further configures the processor to identify data objects with an incompatible format type. The processor is configured to store each data object having an incompatible format as an element in a conversion array.

[0072] Using a conversion module 406, configured as code executing in the processor, each data object having an incompatible format type is converted into a compatible format type by iterating over each element in the conversion array, identifying a conversion factor, such as one stored within a conversion software development kit 406, for converting the data object stored in the element of the conversion array to an approved format type in the format array, and applying the conversion factor to the element in the conversion array to obtain a converted data object. The converted data objects are linked to one another and function as a 3D data stack for a given location.
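The format-array check and conversion-array loop of paragraphs [0071]–[0072] can be sketched as follows. The format list and the stand-in `convert` function are illustrative; the actual conversion would be performed by the conversion SDK referenced above.

```python
FORMAT_ARRAY = [".shp", ".kml", ".geojson"]   # pre-set compatible format types

def convert(obj, target):
    """Stand-in for the conversion SDK: a real system would transcode the
    underlying content, not just relabel the format."""
    return {"name": obj["name"], "format": target}

def check_and_convert(data_objects):
    converted, conversion_array = [], []
    for obj in data_objects:                  # format check pass (module 402)
        if obj["format"] in FORMAT_ARRAY:
            converted.append(obj)
        else:
            conversion_array.append(obj)      # incompatible -> conversion array
    for obj in conversion_array:              # conversion pass (module 406)
        converted.append(convert(obj, FORMAT_ARRAY[0]))
    return converted

result = check_and_convert([{"name": "roads", "format": ".dgn"},
                            {"name": "parcels", "format": ".shp"}])
print(result)
# [{'name': 'parcels', 'format': '.shp'}, {'name': 'roads', 'format': '.shp'}]
```

After this pass every object is in an approved format, so the results can be linked together into the 3D data stack for the queried location.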

[0073] In one embodiment, the open source tools include the GDAL (Geospatial Data Abstraction Library) tools, which are released under the Open Source License issued by the Open Source Geospatial Foundation. Such open source tools can include, but are not limited to, tools for conversion/manipulation of raster data formats and vector data formats, including geospatial industry-standard formats. Likewise, geospatial projections and geodetic libraries are available through the Proj4 public libraries for base geospatial information, definitions and translations.

[0074] Upon transformation into a model data compatible format, the converted or transformed data is stored to the model database 108 for further use, as in step 245.

[0075] By way of non-limiting example, vector data (AutoCAD: .dwg, .dxf, .rvt, .3ds, IFC; Bentley: .dgn; ArchiCAD: .3ds, .obj, IFC, .wrl; SketchUp: .u3d, .obj, IFC; Google: .kml, .kmz; ESRI: .shp, .sde; GeoRSS and GeoJSON) file-formatted data can be converted using the conversion module. Raster data such as .tiff, .img, .jpg and .png format data can be converted as well. Elevation data can also be converted from such formats as .las, DTED, ASCII, LSS, XSE, .xtf and .jsf (bathymetry). Data obtained from dynamic data sensors (e.g. .wav, MP3/4, .avi, .xml, .mov, .html, .3gp, .json) can also be converted and used by the system described. Additionally, binary data such as .pdf, .xls, .doc, .txt and .dbf can be input and converted using the conversion modules as described herein.

[0076] In a further embodiment, where new or custom data is available, a user may enter this data into the system and convert the data into a model data object. In this arrangement, a user interface for data input is provided. In one arrangement, this user input interface has additional functions beyond data input. In an alternative arrangement, the data input user interface is a standalone component of the system. The user interface for data input allows for the uploading or transfer of data or files to the system. Uploaded data is checked for appropriate data formats. If the uploaded data is not in a compatible format, then the conversion module 406, or another suitable module or submodule, is used to convert the data into a compatible format using open source or proprietary conversion modules.

[0077] In a further implementation, the environmental or sensor data is obtained by one or more remote sensors. The sensors are positioned throughout a geographic area and are configured to wirelessly transmit data about elevation, moisture, road conditions, temperature, noise, seismic data, soil content, geochemistry data, pollution levels, etc. to one or more receiving devices. In one or more implementations, the sensors are selected from image capture devices, light detection and ranging devices (LIDAR), moisture sensors, chemical sensors, seismic detectors, flow monitors, altimeters, microphones and thermometers. These receiving devices in turn provide the data from the sensors in real or near real time to the geospatial analytic appliance 104 or to a database(s) 102. For example, FIG. 15 illustrates a plurality of sensors 2002 obtaining environmental data and transmitting the data to one or more embedded hardware clients. In one particular implementation, the hardware clients are internet routers, hardware transmitters, or other devices configured to receive wireless data transmitted by the sensors 2002. The embedded hardware clients in turn transmit the sensor data to one or more network gateways. For example, the embedded clients send the received sensor data over one or more satellite, microwave, or RF communication channels to a central or common receiving station, or node of a network. Alternatively, the embedded hardware clients are each configured to communicate with a different node of a network gateway, or separate network gateways. The sensor data is transmitted to the database(s) 102 for storage, or directly to the geospatial analytic appliance 104 for real, or near real, time use. As shown with further reference to FIG. 16, the sensors 2002 are configured to output data in one or more formats (JSON, binary, CBOR).
Various communication protocols, such as MQTT (a protocol for collecting device data and communicating it to servers (D2S)), XMPP (a protocol best for connecting devices to people, a special case of the D2S pattern), DDS (a fast bus for integrating intelligent machines (D2D)) and AMQP (a queuing system designed to connect servers to each other (S2S)), or other protocols can be employed to transmit the data between the sensor obtaining the measurement and the geospatial analytic appliance 104 or the database(s) 102.
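Whichever transport is chosen (MQTT, XMPP, DDS, AMQP), the sensor reading itself must first be encoded in one of the wire formats mentioned above. A minimal sketch of the JSON case follows; the field names (`sensor`, `type`, `value`, `unit`, `ts`) are assumptions for illustration, and a constrained device might prefer CBOR or a binary format for compactness:

```python
import json
import time

def sensor_payload(sensor_id, kind, value, unit):
    """Encode one sensor reading as a JSON message body (hypothetical schema),
    ready to be published over MQTT or a similar D2S protocol."""
    return json.dumps({"sensor": sensor_id, "type": kind,
                       "value": value, "unit": unit, "ts": int(time.time())})

msg = sensor_payload("s-2002-01", "temperature", 21.5, "C")
decoded = json.loads(msg)
print(decoded["value"], decoded["unit"])  # 21.5 C
```

The receiving gateway or the geospatial analytic appliance 104 would decode the same schema before storing the reading in the database(s) 102.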

[0078] Once the model data corresponding to a particular query has been obtained, either directly from the model database, from real-time measurement sensors, or via transformation of external data sets or user input into a model object compatible format, the data and associated metadata returned by the query is sent to a data stack platform as in step 250. In one arrangement, the data stack platform is a software and/or hardware appliance configured to take the data object model as an input and construct a virtualized representation of the data. More specifically, one or more modules of the geospatial analytic appliance 104 are configured to analyze large datasets (colloquially referred to as 'Big Data') using one or more large dataset applications or algorithms (e.g. HDFS and MapReduce) in order to provide real-time data analysis. In accordance with various implementations described herein, the geospatial analytic appliance 104 is configured to evaluate the datasets obtained from databases 102 and 108 and generate descriptive, predictive, and prescriptive analytics relating thereto. Specifically, the geospatial analytic appliance 104 is configured to evaluate features and structures of a building from BIM data and generate building asset management data, new structure simulations, building cost estimation functionality, as well as general manipulation of geospatial data.

[0079] By way of example, the geospatial analytic appliance 104 is configured in one or more examples to evaluate the need for parking space in a particular area, demand for spaces in any of a defined number of parking lots, cost estimation, profit and loss management, etc. relating to the management of such spaces. Likewise, the geospatial analytic appliance 104 is configured to evaluate building data (cost, value, etc.) relating to buildings in locations where geographic conditions can alter valuations, building plans or permits (such as in flood plains or tornado paths).

[0080] More particularly, with reference to FIG. 12, the geospatial analytic appliance 104 is configured by one or more real-time evaluation modules, for example configured as submodules of the data stack transformation module 310, to generate analysis of the geospatial data obtained or stored in the database. In one or more implementations, the geospatial analytic appliance 104 is configured by the real-time evaluation modules to evaluate geospatial data accessed from the database(s) 102 using one or more quantitative and qualitative analysis methods. For example, one or more processors of the geospatial analytic appliance 104 are configured to evaluate geospatial data as it arrives from sensor devices (such as sensor devices 2002), rather than storing the data in the database(s) 102 and retrieving it at some point in the future.

[0081] In more detail, and with reference to FIG. 13, one or more processors of the geospatial analytic appliance 104 are configured by one or more submodules of the data stack transformation module 310 to generate spatial datasets from stored mapped data. As shown, a geographic information system submodule configures one or more processors of the geospatial analytic appliance 104 to access mapping data as in step 1302. Here, one or more algorithms are used to convert mapped data into spatial information. For example, the one or more processors of the geospatial analytic appliance 104 are configured to convert mapped data into spatial information, such as by converting map points into vector and raster data. It should be appreciated that there are two primary approaches used in generating spatial data from map data: spatial analysis and spatial statistics. Spatial analysis, such as through the use of a suitably configured processor, is accomplished by calculating a grid of vectors corresponding to the map points as in step 1306. In a preferred embodiment, spatial data is best represented as grid-based continuous map surfaces that are preconditioned for use in map analysis and modeling systems. Alternatively, spatial statistics are used to generate future data or information relating to the converted map data, such as how prone an area is to flooding, as in step 1308.
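The grid-based surface described in step 1306 can be sketched by binning map points into grid cells, with each cell holding a point density. This is a simplification: a production GIS would also interpolate values between cells to obtain a continuous surface, and the cell size of 1.0 unit is an arbitrary illustrative choice.

```python
def grid_surface(points, cell=1.0):
    """Bin (x, y) map points into a grid; each cell stores the count of
    points falling in it, a crude stand-in for a continuous map surface."""
    grid = {}
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = grid.get(key, 0) + 1
    return grid

print(grid_surface([(0.2, 0.7), (0.9, 0.1), (1.5, 0.5)]))
# {(0, 0): 2, (1, 0): 1}
```

Once points are preconditioned into such a grid, map-algebra style analysis (overlays, neighborhood statistics) can operate cell-by-cell rather than point-by-point.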

[0082] FIG. 14 provides additional detail regarding using spatial statistics to generate predictions relevant to the mapped data. Here, one or more processors of the geospatial analytic appliance 104 are configured to map the factors of dependent and independent variables present in the map (Map Variables) as in step 1402. Using the derived variables, one or more processors of the geospatial analytic appliance 104 determine related map variables from the input data as in step 1404. The determined relationship between the independent variables and dependent variables is analyzed according to one or more estimation algorithms, for example a regression algorithm, as in step 1406. In alternative arrangements, the estimation algorithm is a neural network, linear regression estimate, support vector machine or other machine learning algorithm. In a particular implementation, a vector based map of the original mapped data is provided to an output display along with one or more results from the statistical analysis performed in step 1406. Thus, as shown in step 1408, a prediction map is generated for use by a user or stored in the database(s) 102 for future access.
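
By way of a non-limiting illustration, the regression of step 1406 and the prediction map of step 1408 can be sketched as follows. The pure-Python least-squares fit, the function names, and the single-variable form are illustrative assumptions rather than part of the disclosed implementation:

```python
# Illustrative sketch: fit a simple linear regression between an
# independent map variable and a dependent one (step 1406), then apply
# the fitted model across a grid of cells to yield a prediction map
# (step 1408).
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (single independent variable)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def prediction_map(model, grid):
    """Apply the fitted model to every cell of an independent-variable grid."""
    a, b = model
    return [[a * x + b for x in row] for row in grid]
```

In practice, the estimation algorithm could equally be a neural network or support vector machine, as noted above.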

[0083] Regardless of the temporal state of the data, one or more processors of the geospatial analytic appliance 104 are configured to implement predictive analytics on the received data (such as through a number of commonly understood analytical and statistical techniques used for developing models) in order to evaluate the likelihood of future events or behaviors. Those possessing an ordinary level of requisite skill in the art will appreciate that there are different forms of predictive models, which vary based on the event or behavior that is being predicted.

[0084] For instance, one or more processors of the geospatial analytic appliance 104 are configured by a predictive analytics submodule (including time-series or advanced regression implementations), along with one or more data mining submodules, to extract trends from multivariate data and predict future behaviors or events.

[0085] Turning to one particular example of the geospatial analytic appliance 104, the data stack transformation module 310 is configured to receive the model data object and generate a 3D virtualization of the data suitable for use with 3D configured displays. The data stack transformation module 310 parses the data included in the data module, or the data linked to the data module, and generates visual representations or identifiers of the specific features, elements or characteristics of a given location. For example, the transformation module uses or parses data in the MDOs into Geography Markup Language (GML) or another mapping format useful for generating visualizations.

[0086] Here, the 3D virtualization includes parsing the data model to determine the path of buried utility infrastructure on a building site. This information is projected into a 3D virtual space along with information on building plots and zoning envelopes, subsurface structures, above surface structures and other characteristics of the location. Additionally, in implementations where the data model contains temporal information, one or more of the visualized features can be represented in time series such that animations showing the development of a feature or condition over time can be demonstrated in the visualization. In one embodiment, WebGL or similar and successor APIs are used for 3D visualization. Along with the visualization, tables, graphs, reports and lists of metadata can be generated and provided to the user or stored in the database.

[0087] In a specific embodiment, a game engine or module configured as code executed in the processor 105 is used to visualize, access and manipulate data representing some portion or the entire 3-dimensional data stack. For instance, the game engine configures the processor to render the data stack as a 3-dimensional environment. The 3D stack is stored, in a particular configuration, as language-independent JSON format information with specific addresses with reference to real geographic coordinates. As an example, the data stack module includes one or more visualization engine submodules that implement a custom, standard or extended version of one or more game engines. For ease of explanation, and in no way limiting, an example of such a game engine is the Unity 3D game engine, made by Unity Technologies ApS of San Francisco, California. One or more processors of the geospatial analytic appliance 104 are configured by the visualization submodules to perform visualization, operational and simulation procedures on the geospatial data accessed or generated by the geospatial analytic appliance 104.
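
A minimal sketch of such a language-independent JSON data stack is shown below. The layer names, coordinate values, and field names are purely illustrative assumptions, not a prescribed schema:

```python
import json

# Hypothetical layout for a JSON data stack addressed by real geographic
# coordinates: an origin in a named coordinate reference system, plus a
# list of stacked layers (terrain, buried utilities, BIM objects, etc.).
data_stack = {
    "origin": {"lon": -73.9857, "lat": 40.7484, "crs": "EPSG:4326"},
    "layers": [
        {"name": "terrain", "type": "DTM", "source": "lidar"},
        {"name": "utilities", "type": "vector", "depth_m": -2.5},
        {"name": "buildings", "type": "BIM", "format": "IFC"},
    ],
}

encoded = json.dumps(data_stack)   # serialize for storage or transport
decoded = json.loads(encoded)      # any language with JSON support can decode
```

Because the format is language independent, the same stack can be consumed by a game-engine visualization submodule or any other client.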

[0088] For example, the visualization submodule communicates with one or more additional submodules to provide a simulation engine or appliance to generate simulations of the data obtained in the database according to various historical, procedural, and analytic constraints. For instance, a processor of the geospatial analytic appliance 104 is configured by the one or more simulation submodules to utilize procedural movement algorithms to predict or simulate the movement or tracking of various autonomous vehicles, such as a smart city vehicle deployed to an urban environment. Likewise, one or more processors of the geospatial analytic appliance 104 are configured to generate interior or exterior content for rendered buildings based on internal datasets relating to specific types, categories, or classes of BIM objects. For example, the simulation engine produces one or more interior objects (such as furniture, appliances, etc.) based purely on BIM data for the building. These produced items are visualized in real time, and output to one or more output display devices. For example, during a virtual walk-through of a modeled environment utilizing AR/VR enabled devices, the geospatial analytic appliance 104 is configured to display such generated objects and permit the movement of the objects in the generated environment in response to voice or motion based controls.

[0089] As an example, one or more processors of the geospatial analytic appliance 104 are configured by a simulation submodule to reproduce the characteristics of the real time environment within a generated virtual environment, such as oil spills, storm detection, river flow simulation, flood simulation, fire simulation, earthquake simulation, storm simulation, mini-weather simulation, explosion simulation, smoke propagation and distribution simulation, rainfall simulation, drainage water flow simulation, traffic simulation, aerial flight simulation, energy transmission simulation, BIM based scheduling simulation and the like. In a further implementation, one or more discrete packages of add-ons such as Nvidia PhysX software modules are utilized to enhance the fidelity of the simulation.

[0090] In one particular implementation of the simulation submodule, one or more real time submodules configure the processor of the geospatial analytic appliance 104 to combine or integrate IOT data streams and geo-tagged and time-tagged image streams into a real-time simulation. In a particular arrangement, the real time integrated data set is first conditioned by various machine learning, neural network and fuzzy logic algorithms prior to presenting the output simulation to a user through the output display device 106. In a specific implementation, the simulation submodules access data from one or more Light Detection and Ranging (LIDAR) servers, image servers, terrain servers, BIM servers, generic 3-D model servers, GPR servers, map servers, and embedded IOT servers.

[0091] As shown in FIG. 1C, these servers are, in one implementation, local to the geospatial analytic appliance 104. As further provided, where the data is conditioned, normalized or evaluated by one or more machine learning algorithms prior to use in a real time simulation, the geospatial analytic appliance 104 provides for the one or more deep learning and artificial intelligence modules to access data output by the servers and to provide evaluated or conditioned data back to the enumerated servers for storage or further evaluation. Here, with continued reference to FIG. 1C, the LIDAR server holds the LIDAR-surveyed point cloud data in the form of LAS, txt, pnts or other data formats. Other data formats are envisioned where the data was converted according to the conversion module (described herein) using such conversion tools as the liblas/las tools, ogr2ogr, and similar libraries.

[0092] The terrain server contains, in one arrangement, LIDAR sourced terrain data. Such terrain servers are configured to output the terrain data pre-classified as ground or non-ground based on the LIDAR data, and any segmentation or object classification that is associated with such data classification. Additionally, terrain servers are configured to output the results of linear profiling and cross sectional profiling of a surface based on the LIDAR data. The terrain server is also configured to provide digital terrain models (DEM/DTM).

[0093] The BIM server provides BIM models of buildings, terrain surfaces and classified BIM objects. In one or more implementations, the BIM server is configured to provide BIM objects, either alone or in combination with mapped environments. In a particular implementation, the BIM server is configured to implement one or more deep convolutional neural network-based shape search algorithms to provide a requested BIM dataset based on the search queries described herein.

[0094] The map server provides the location and navigation data based on GPS and Pseudolite (e.g. terrestrial based 'GPS alternative' transmission devices) coordinates. Such location data can include inertial data obtained from one or more IMUs (inertial measurement units) used alone or in connection with SLAM (simultaneous localization and mapping) algorithm based localization methods. The map server further includes images, such as satellite imagery or live stream satellite data. Furthermore, the map server includes images and stream data from one or more autonomous vehicles in communication with the database(s) 102 (such as through a communication linkage with an onboard supervisory control and data acquisition (SCADA) system).

[0095] In one or more implementations, time series data from any one of the data servers is provided to the geospatial analytic engine for processing, filtering and modification.

[0096] One particular implementation of the simulation submodule implemented by the geospatial analytic appliance 104 is used to generate one or more digital terrain models (DTM). Point cloud data is evaluated to determine surface planes within the point cloud (such as one obtained from the LIDAR server). The geospatial analytic appliance 104 is further configured to segment the detected planes, including segmenting the corners and extracting the geometry information of the segmented planes to build a BIM model. Planes in the point cloud (as obtained from one or more LiDAR servers) are segmented using the MLESAC (Maximum Likelihood Estimation SAmple Consensus) algorithm. As used in the described implementation, the MLESAC algorithm segments every plane in the point cloud along with its corners.
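
The plane segmentation described above can be sketched with a simplified RANSAC-style consensus loop. Note this is not full MLESAC, which scores hypotheses by likelihood rather than by inlier count; the distance threshold and iteration count below are illustrative assumptions:

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit-normal plane (a, b, c, d) with ax+by+cz+d=0 through three points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]          # cross product = plane normal
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]            # raises ZeroDivisionError if collinear
    d = -sum(n[i] * p1[i] for i in range(3))
    return (n[0], n[1], n[2], d)

def segment_plane(points, threshold=0.05, iterations=200, seed=0):
    """Return (plane, inlier_indices) for the dominant plane in the cloud."""
    rng = random.Random(seed)
    best = ((0.0, 0.0, 1.0, 0.0), [])
    for _ in range(iterations):
        sample = rng.sample(points, 3)
        try:
            a, b, c, d = plane_from_points(*sample)
        except ZeroDivisionError:        # degenerate (collinear) sample
            continue
        inliers = [i for i, p in enumerate(points)
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) < threshold]
        if len(inliers) > len(best[1]):
            best = ((a, b, c, d), inliers)
    return best
```

Running the loop repeatedly on the residual (non-inlier) points would segment every plane in turn, as the MLESAC-based implementation above describes.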

[0097] With particular reference to the flow diagram of FIG. 8, one implementation of the simulation submodule includes detecting, within one or more planar detection submodules, one or more planes within the point cloud as in step 802. Upon detection, each identified plane is segmented along with its corners as provided in step 804. These groups of points representing planes and corners (segments) are then subjected to de-noising as in step 806. As used herein, de-noising of segmented planes discards unnecessary points so that only the important information of each segmented plane is retained. These de-noised segments are merged to form a 3D model of the location or object corresponding to the point cloud data as in step 808. An example of such a 3D model is provided in FIG. 7. Thus, from the spatial information corresponding to the segmented point cloud, geometric data (e.g. wall width, height, etc.) is extracted from the merged segments as in step 810. The extracted geometric data is then used to generate a BIM model as in step 812. For instance, the 3D model is used as a platform or basis for rendering various features, such as walls and ceilings, based on the location of the merged segmented planes. In yet a further implementation, one or more processors of the geospatial analytic system are configured to automatically render digital representations of walls and other surfaces as a dynamic overlay presented to a user accessing the generated 3D model.

[0098] Those possessing an ordinary level of requisite skill in the art will appreciate that points within the point cloud that correspond to the ground are, on occasion, filtered out of the analysis prior to merging the segments. Various types of filtering algorithms can be used to automatically extract ground points from LiDAR point clouds, such as slope-based methods, mathematical morphology-based methods, and surface-based methods. The common assumption of slope-based algorithms is that the change in the slope of terrain is usually gradual in a neighborhood, while the change in slope between buildings or trees and the ground is very large.

[0099] Another type of filtering method uses mathematical morphology to remove non-ground LiDAR points. Selecting an optimal window size is critical for these filtering methods. A small window size can efficiently filter out small objects but will erroneously preserve larger buildings among the ground points. On the other hand, a large window size tends to smooth terrain details such as mountain peaks, ridges and cliffs.

[0100] However, poor ground extraction results may occur because the terrain slope is assumed to be a constant value in the whole processing area. In one particular implementation, a mathematical morphology based implementation is used to overcome the errors that arise in other ground extraction algorithms. Previous algorithms separated ground and non-ground measurements by removing non-ground points from LiDAR datasets. In contrast to these algorithms, surface-based methods gradually approximate the ground surface by iteratively selecting ground measurements from the original dataset, and the core of this type of filtering method is to create a surface that approximates the bare earth. Hence the advantage of mathematical morphology-based methods.

[0101] The use of the aforementioned filtering algorithms has proven to be successful, but the performance of these algorithms changes according to the topographic features of the area, and the filtering results are usually unreliable in complex cityscapes and very steep areas. Thus, the implementation of these filtering methods requires a number of suitable parameters to achieve satisfactory results, which are difficult to determine because the optimal filter parameters vary from landscape to landscape, so these filtering methods are not easy for inexperienced users to use.

[0102] To cope with these problems, in one particular implementation, the simulation module of the geospatial analytic appliance 104 configures one or more processors thereof to simulate a physical process akin to a virtual cloth dropped on an inverted (upside-down) point cloud. Using the foregoing process results in only a few parameters being needed in the segmentation algorithm, and these parameters are easy to understand and set. Furthermore, the segmentation algorithm can be applied to various landscapes without determining elaborate filtering parameters. Importantly, using such a process enables raw LIDAR data (from the LiDAR server or one or more sensors) to be used to generate the eventual BIM model.

[0103] In a particular implementation, the original point cloud is turned upside down, and a simulated 'point cloth' is dropped onto the inverted surface. By analyzing the interactions between the nodes of the cloth and the corresponding LIDAR points, the final shape of the cloth can be determined and used as a base to classify the original points into ground and non-ground parts.

[0104] Cloth simulation, also called cloth modelling, is a 3D computer graphics technique used for simulating cloth within a computer program. During cloth simulation, the cloth can be modelled as a grid that consists of particles with mass and interconnections, called a Mass-Spring model. A particle on a node of the grid has no size but is assigned a constant mass. The positions of the particles in three-dimensional space determine the position and shape of the cloth. According to this model, the interconnection between particles is modelled as a "virtual spring", which connects two particles and obeys Hooke's law. To fully describe the characteristics of the cloth, three types of springs have been defined: shear springs, traction springs and flexion springs.
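
The Mass-Spring grid can be sketched as follows. Traction springs connect horizontal/vertical neighbours and shear springs connect diagonal neighbours; flexion springs, which connect every second node, are omitted for brevity, and the dictionary layout is an illustrative assumption:

```python
# Sketch of the Mass-Spring cloth grid: particles at grid nodes, with
# traction springs between orthogonal neighbours and shear springs
# between diagonal neighbours.
def build_cloth(rows, cols, spacing=1.0, mass=1.0):
    particles = [{"pos": (c * spacing, r * spacing, 0.0),
                  "mass": mass, "movable": True}
                 for r in range(rows) for c in range(cols)]
    idx = lambda r, c: r * cols + c
    traction, shear = [], []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                traction.append((idx(r, c), idx(r, c + 1)))   # horizontal
            if r + 1 < rows:
                traction.append((idx(r, c), idx(r + 1, c)))   # vertical
            if r + 1 < rows and c + 1 < cols:
                shear.append((idx(r, c), idx(r + 1, c + 1)))  # diagonals
                shear.append((idx(r + 1, c), idx(r, c + 1)))
    return particles, traction, shear
```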

[0105] When applying the cloth simulation to LIDAR point filtering, a number of modifications can be implemented to adapt the point cloud for ground and non-ground filtering. First, the movement of a particle is constrained to the vertical direction, so collision detection can be implemented by comparing the height values of the particle and the terrain (e.g., when the position of a particle is below or equal to the terrain, the particle intersects with the terrain).

[0106] Second, when a particle reaches the "right position", i.e., the ground, this particle is set as immovable. Third, the forces are divided into two discrete steps to achieve simplicity and relatively high performance. Usually, the position of a particle is determined by the net force of the external and internal forces. In this modified cloth simulation, one or more processors of the geospatial analytic appliance 104 compute the displacement of a particle from gravity (the particle is set as immovable when it reaches the ground, so the collision force can be omitted) and then modify the position of this particle according to the internal forces.

[0107] The classification of this data is performed using a specific LIDAR classification algorithm, CSF (Cloth Simulation Filter). The same process can also be performed with other LIDAR classification algorithms or progressive morphological filters. However, the latter is used to derive a DTM from the point cloud(s).

[0108] With particular reference to FIG. 9, a processor of the geospatial analytic appliance 104 is configured to obtain a LIDAR data LAS file as in step 902. For instance, the LAS file is stored in the database(s) 102. The co-ordinate points x, y and z are extracted from the LAS data by a processor of the geospatial analytic appliance 104 as in step 904. In one particular implementation, a Cloth Simulation (Mass-Spring Model: particles with mass and interconnections, or CSF) algorithm is then applied by a processor of the geospatial analytic appliance 104 to separate ground and non-ground points as in step 906. Included in the cloth simulation carried out by one or more processors of the geospatial analytic appliance 104 is the manipulation of the original point cloud so that its data points are inverted relative to the vertical direction (i.e. turned upside down). A simulated cloth grid is applied to the inverted surface from above. A processor of the geospatial analytic appliance 104 is configured to calculate the displacement of each particle comprising the simulated cloth from gravity only. For ease of description, it should be noted that in order to constrain the displacement of particles in the void areas of the inverted surface, the internal forces at the second step, after the particles have been moved by gravity, must be considered. As a result of internal forces, particles will attempt to stay in the grid and return to their initial positions. Instead of considering the neighbors of each particle one by one, a processor of the geospatial analytic appliance 104 is configured to traverse all the springs. For each spring, the height difference between the two particles which form the spring is compared. Thus, the 2-dimensional (2-D) problem is abstracted as a one-dimensional (1-D) problem. Two particles with different height values will try to move to the same horizontal plane (the cloth grid is horizontally placed at the beginning). If both connected particles are movable, they are moved by a processor of the geospatial analytic appliance 104 by the same amount in opposite directions. If one of them is immovable, then the other will be moved. Otherwise, if the two particles have the same height value, neither of them will be moved.
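
The per-spring movement rules above can be sketched as a height-only update. Moving both movable particles fully to their midpoint is a simplification of the internal-force step: in the full CSF algorithm the displacement is scaled by the rigidness parameter, and the function name is an assumption for illustration:

```python
def relax_spring(h1, movable1, h2, movable2):
    """Internal-force update for one spring, on particle heights only."""
    if h1 == h2:
        return h1, h2            # equal heights: neither particle moves
    mid = (h1 + h2) / 2.0
    if movable1 and movable2:
        return mid, mid          # both movable: equal moves, opposite directions
    if movable1:
        return h2, h2            # only the movable particle is moved
    if movable2:
        return h1, h1
    return h1, h2                # both immovable: nothing to do
```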

[0109] The main implementation procedures of CSF are described as follows. First, the geospatial analytic appliance 104 is configured by one or more simulation submodules to project the cloth particles and LiDAR points into the same horizontal plane and then find the nearest LiDAR point (named the corresponding point, CP) for each cloth particle in this 2D plane. An intersection height value (IHV) is defined to record the height value (before projection) of the CP. This value represents the lowest position that a particle can reach (i.e., if the particle reaches the lowest position that is defined by this value, it cannot move any further). During each iteration, one or more processors of the geospatial analytic appliance 104 compare the current height value (CHV) of a particle with its IHV. Where the CHV is equal to or lower than the IHV, one or more processors are configured to move the particle back to the position of the IHV and make the particle immovable. The initial position of the cloth is usually set above the highest point; all the LIDAR points and grid particles are then projected to a horizontal plane, and the CP is found for each grid particle in this plane. For each grid particle, a processor of the geospatial analytic appliance 104 calculates the position affected by gravity if the particle is movable, and compares the height of the cloth particle with its IHV. If the height of the particle is equal to or less than the IHV, then the particle is placed at the height of the IHV and is set as "immovable".
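
One iteration of the gravity-and-clamp procedure can be sketched as follows. Heights decrease because the cloth falls onto the inverted surface; the gravity step size is an illustrative assumption:

```python
def csf_iteration(heights, movable, ihv, gravity_step=0.25):
    """One CSF iteration: apply gravity to each movable particle, then
    clamp any particle whose CHV falls to or below its IHV and mark it
    immovable."""
    for i in range(len(heights)):
        if not movable[i]:
            continue
        heights[i] -= gravity_step       # displacement from gravity only
        if heights[i] <= ihv[i]:         # CHV <= IHV: particle hit the ground
            heights[i] = ihv[i]
            movable[i] = False
    return heights, movable
```

The internal-force (spring) update described above would be applied after this step in each iteration, and the loop repeats until the maximum height variation is small enough.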

[0110] The simulation process will terminate when the maximum height variation (MHV) of all particles is small enough or when the process exceeds the maximum iteration number specified by the user. Relatively large errors can occur because the simulated cloth is above the steep slopes and does not fit the ground measurements very well due to the internal constraints among particles.

However, in one or more implementations, the errors from using the described process are minimized using a post-processing method that smooths the margins of steep slopes. This post-processing method finds an immovable particle in the four adjacent neighborhoods of each movable particle and compares the height values of their CPs. Then, a processor of the geospatial analytic appliance 104 compares the height values between C and B (the CPs for D and A, respectively). If the height difference is less than hcp, then the candidate point D is moved to C and is set as immovable. One or more processors of the geospatial analytic appliance 104 repeat this procedure until all the movable particles are properly handled (either set as immovable or kept movable). All the movable particles should be traversed; if the cloth grid is scanned row by row, the results may be affected by the particular scan direction. Thus, sets of strongly connected components (SCCs) are built, and each SCC contains a set of connected movable particles.

[0111] In a further implementation, the implemented CSF algorithm mainly consists of four user-defined parameters: grid resolution (GR), which represents the horizontal distance between two neighboring particles; time step (dT), which controls the displacement of particles from gravity during each iteration; rigidness (RI), which controls the rigidness of the cloth; and an optional parameter, the steep slope fit factor (ST), which indicates whether the post-processing for handling steep slopes is required or not. By analyzing the interactions between the nodes of the cloth and the corresponding LIDAR points, the final shape of the cloth can be determined and used as a base to classify the original points into ground and non-ground parts. The output of this filtering and classification procedure is saved to a separate LAS file as in step 908. Thus, the processor is configured to separate ground and non-ground points on steep, tilted, flat and city areas from LIDAR data. Using the segmented or filtered LAS file, one or more noise reduction procedures (such as to remove non-relevant point cloud data) are performed as in step 910.
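
The four user-defined parameters can be collected into a small configuration helper. The default values below are illustrative assumptions, not values prescribed by this disclosure:

```python
# The four user-defined CSF parameters named above, with illustrative defaults.
CSF_DEFAULTS = {
    "grid_resolution": 0.5,   # GR: horizontal distance between neighboring particles
    "time_step": 0.65,        # dT: gravity displacement per iteration
    "rigidness": 3,           # RI: rigidness of the cloth
    "steep_slope_fit": True,  # ST: enable steep-slope post-processing
}

def make_csf_params(**overrides):
    """Merge user overrides with the defaults, rejecting unknown names."""
    unknown = set(overrides) - set(CSF_DEFAULTS)
    if unknown:
        raise ValueError("unknown CSF parameter(s): %s" % sorted(unknown))
    return {**CSF_DEFAULTS, **overrides}
```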

[0112] It should be appreciated that the result saved in the separate LAS file as provided in step 908 can include data points relating to objects that are connected to the ground (e.g., bridges). In one or more implementations, the data contained in the updated LAS file does not distinguish between road features and those features that are connected to the ground at certain points but not others, such as bridges over roadways, skyways and the like. Thus, in one or more further implementations, one or more processors of the geospatial analytic appliance 104 are further configured to merge the LAS file with one or more alternative data sources for a given location. For example, where the geometry information of LiDAR points or optical images of the location in question are stored in the database(s) 102, such data can be used to identify areas where a surface is not directly in contact with the ground. In a further implementation, one or more secondary data sources, such as image data, are used to correct the updated LAS file. Here, where the pixel data for an image corresponding to the same location as the LAS file indicates the absence of a landmark or structure, the LAS data is amended to reflect the pixel data.

[0113] In a further implementation of the simulation submodule, one or more processors of the geospatial analytic appliance 104 are configured to implement a surface model algorithm to generate digital terrain models (DTMs) essential for many geographic information systems (GIS)-related analyses and visualizations. In the foregoing implementation, airborne light detection and ranging (LIDAR) systems are used to obtain measurements for the horizontal coordinates (x, y) and elevation (z) of reflective objects scanned by the laser beneath the flight path. These measurements generate three-dimensional point clouds with irregular spacing. The laser-scanned objects can include buildings, vehicles, vegetation (canopy and understory), and "bare ground." To generate a DTM, measurements from ground and non-ground features have to be identified and classified. For example, with reference to FIG. 10, one or more processors of the geospatial analytic appliance 104 are configured by one or more submodules of the simulation module to remove tree vertices iteratively from a triangulated irregular network (TIN) constructed from LIDAR measurements. Alternatively, one or more submodules of the simulation module configure a processor of the geospatial analytic appliance 104 to classify ground points by iteratively selecting ground measurements from an original dataset.

[0114] In a particular implementation, one or more processors of the geospatial analytic appliance 104 are configured to evaluate adaptive TIN models to find ground points in a location of interest (such as an urban area). The processor is configured to seed ground points within a user-defined grid of a size greater than the largest non-ground object. One point above each TIN facet is added to the ground dataset every iteration if its parameters are below threshold values. The iteration continues until no points can be added to the ground dataset. The problem with the adaptive TIN method is that different thresholds have to be given for various land cover types.

[0115] If LIDAR points are converted into a regular, grayscale grid image in terms of elevation, then the shapes of buildings, cars, and trees can be identified by the change of gray tone. It is well known that compositions of algebraic set operations based on mathematical morphology can be used to identify objects in a grayscale image. Therefore, mathematical morphology can be used to filter LIDAR data. Mathematical morphology composes operations based on set theory to extract features from an image. Two fundamental operations, dilation and erosion, are commonly employed to enlarge (dilate) or reduce (erode) the size of features in binary images. Dilation and erosion operations may be combined to produce opening and closing operations. The concept of erosion and dilation has been extended to multilevel (grayscale) images and corresponds to finding the minimum or maximum of the combinations of pixel values and the kernel function, respectively, within a specified neighborhood of each raster cell.
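
For a 1-D line window, the grayscale erosion and dilation described above reduce to a moving minimum and a moving maximum over elevations (a flat kernel is assumed for simplicity), as sketched below:

```python
def grey_erode(values, half_window):
    """Grayscale erosion: minimum elevation within the moving window."""
    n = len(values)
    return [min(values[max(0, i - half_window):min(n, i + half_window + 1)])
            for i in range(n)]

def grey_dilate(values, half_window):
    """Grayscale dilation: maximum elevation within the moving window."""
    n = len(values)
    return [max(values[max(0, i - half_window):min(n, i + half_window + 1)])
            for i in range(n)]
```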

[0116] Thus, these concepts can also be extended to the analysis of a continuous surface such as a digital surface model as measured by LIDAR data. Here, the window can be a one-dimensional (1-D) line or two-dimensional (2-D) rectangle or other shapes. The dilation output is the maximum elevation value in the neighborhood.

[0117] The combination of erosion and dilation generates the opening and closing operations that are employed to filter LIDAR data. The opening operation is achieved by performing an erosion of the dataset followed by a dilation, while the closing operation is accomplished by carrying out the dilation first and then the erosion. For example, an erosion operation can remove tree objects of sizes smaller than the window size, while the dilation restores the shapes of large building objects. In one implementation, the measurements of cliff edges can be preserved if the morphological filters are applied to the LIDAR measurements for rocky coasts.

[0118] The selection of a filtering window size and the distribution of the buildings and trees in a specific area are critical for the success of this method. If a small window size is used in Kilian's method, most of the ground points will be preserved. However, only small non-ground objects such as cars and trees will be effectively removed. The points corresponding to the tops of large-sized building complexes that often exist in urban areas cannot be removed, and the risk of making commission errors is high. On the other hand, the filter tends to over-remove the ground points with a large window size.

[0119] It has been shown that morphological filters can remove measurements for buildings and trees from LIDAR data, but it is difficult to detect all non-ground objects of various sizes using a fixed filtering window size. This problem can be solved by gradually increasing the window sizes of the morphological filters. An initial filtered surface is derived by applying an opening operation with a window of a given length to the raw data. Large non-ground features such as buildings are preserved because their sizes are larger than the window, while individual trees of a size smaller than the window are removed. For the terrain, features smaller than the window are cut off and replaced by the minimum elevation within it. In the next iteration, the window size is increased, and another opening operation is applied to the filtered surface, resulting in a further smoothed surface. The building measurements are removed and replaced by the minimum elevation of the previous filtered surface within the window, since the size of the building is smaller than the current window size.
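
The increasing-window opening can be sketched over a 1-D elevation profile as follows. The doubling window schedule and the profile values are illustrative assumptions; the spike (a tree) is removed at the smallest window, while the wide block (a building) survives until the window outgrows it:

```python
def _window(values, i, hw):
    """Slice of values within half-window hw of index i (edges clipped)."""
    return values[max(0, i - hw):i + hw + 1]

def opening(surface, hw):
    """Morphological opening: erosion (min filter) then dilation (max filter)."""
    eroded = [min(_window(surface, i, hw)) for i in range(len(surface))]
    return [max(_window(eroded, i, hw)) for i in range(len(eroded))]

def progressive_filter(surface, max_hw):
    """Apply openings with gradually increasing half-window sizes."""
    filtered = list(surface)
    hw = 1
    while hw <= max_hw:
        filtered = opening(filtered, hw)
        hw *= 2  # exponential growth reduces the number of iterations
    return filtered

# A narrow spike (tree) next to a wide block (building) on flat ground:
profile = [0, 0, 9, 0, 0, 5, 5, 5, 5, 5, 0, 0, 0]
```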

[0120] By performing an opening operation on laser-scanned data with a line window that gradually increases in size, the progressive morphological filter can remove buildings and trees of various sizes from a LIDAR dataset. However, the filtering process tends to produce a surface that lies below the terrain measurements, leading to incorrect removal of the measurements at the top of high-relief terrain. Even in flat ground areas, the filtered surface is usually lower than the original measurements. Therefore, most point measurements for terrain are removed, and only a filtered surface is available if the opening operation is performed on the LIDAR data directly. This problem can be overcome by introducing an elevation difference threshold based on the elevation variations of the terrain, buildings, and trees.

[0121] Each building has a certain size and height. There is an abrupt change in elevation between the roof and base of a building, while the elevation changes of terrain are gradual. The difference in the elevation variations of buildings and terrain can help the filter to separate the building and ground measurements.

[0122] Returning to FIG. 10, such errors can be overcome through the application of the foregoing filtering process. First, the irregularly spaced (x, y, z) LIDAR measurements are loaded as in step 1002. A regularly spaced minimum surface grid is constructed by one or more processors of the geospatial analytic appliance 104 by selecting the minimum elevation in each grid cell as in step 1004. Furthermore, a processor of the geospatial analytic appliance 104 is configured by one or more submodules to store point coordinates (x, y, z) in each grid cell. Here, if a cell contains no measurements, it is assigned the value of the nearest point measurement.
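The minimum-surface-grid step can be sketched as follows (an illustrative reading; the function name, the cell indexing and the nearest-cell fill via a distance transform are assumptions, not taken from the specification):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_surface_grid(points, cell, nx, ny):
    """Bin irregular (x, y, z) returns into an ny-by-nx grid, keeping the
    minimum elevation per cell; empty cells take the nearest cell's value."""
    grid = np.full((ny, nx), np.nan)
    ix = np.clip((points[:, 0] // cell).astype(int), 0, nx - 1)
    iy = np.clip((points[:, 1] // cell).astype(int), 0, ny - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(grid[y, x]) or z < grid[y, x]:
            grid[y, x] = z                      # keep the minimum elevation
    empty = np.isnan(grid)
    if empty.any():                             # fill empty cells from the
        nearest = distance_transform_edt(       # nearest populated cell
            empty, return_distances=False, return_indices=True)
        grid = grid[tuple(nearest)]
    return grid
```

With three points falling in two cells of a 3 x 3 grid, the minimum wins inside a shared cell and empty cells inherit the elevation of their nearest populated neighbour.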

[0123] One or more processors of the geospatial analytic appliance 104 are configured to provide a progressive morphological filter utilizing an opening operation on the grid surface. At the first iteration, the minimum elevation surface together with an initial filtering window size provides the inputs for the filter as shown in step 1006. In the following iterations, the filtered surface obtained from the previous iteration and the increased window size from the last step are used as inputs for the filter. As shown, the output of this step includes a) the further smoothed surface from the morphological filter and b) the detected non-ground points based on the elevation difference threshold.

[0124] Continuing with reference to FIG. 10, the size of the filter window is increased and the elevation difference threshold is calculated according to step 1008. The foregoing steps are repeated until the size of the filter window is greater than a predefined maximum value as provided in step 1010. This value is usually set to be slightly larger than the maximum building size. The last step is to generate the DTMs based on the dataset after non-ground measurements have been removed, as shown in 1012.

[0125] It will be appreciated that the selection of the window size and elevation difference threshold are critical to achieving the desired results when applying the morphological filter. Alternatively, the window size can be increased exponentially to reduce the number of iterations used in the filter. The sizes of individual cars and trees are much less than those of buildings, so most of them are often removed in the first several iterations, while the large buildings are removed last. The maximum elevation difference threshold can be set to a fixed height (e.g., the lowest building height) to ensure that building complexes are identified. The optimal threshold is usually determined by iteratively comparing the filtered and unfiltered data. On the other hand, the non-ground objects in mountainous areas are primarily vegetation (trees). There is no need to set a fixed maximum elevation difference threshold to remove trees; it is usually set to the largest elevation difference in the study area.
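Steps 1006 through 1010 can be summarized in a compact loop (a hedged sketch in the style of progressive morphological filtering; the parameter names, the linear threshold formula and the window-growth rule are illustrative choices, not the claimed method):

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_filter(surface, cell, init_win=1, max_win=21,
                       slope=0.3, dh0=0.3, dh_max=3.0):
    """Open the minimum surface with a growing window; cells whose elevation
    drops by more than a growing threshold are flagged as non-ground."""
    ground = np.ones(surface.shape, dtype=bool)
    prev, win = surface.copy(), init_win
    while win <= max_win:
        opened = grey_opening(prev, size=(win, win))
        dh = min(dh0 + slope * (win - 1) * cell, dh_max)   # threshold grows
        ground &= (prev - opened) <= dh                    # flag non-ground
        prev, win = opened, 2 * win + 1                    # grow the window
    return ground, prev      # ground mask and the filtered (DTM-like) surface

# Flat terrain with a 5 x 5 "building": the building cells are flagged
# as non-ground once the window outgrows the building footprint.
dsm = np.zeros((20, 20))
dsm[5:10, 5:10] = 10.0
ground, dtm = progressive_filter(dsm, cell=1.0)
```

On this toy surface the building is preserved while the window is smaller than its footprint and removed (and flagged) once the window exceeds it, leaving a flat filtered surface.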

[0126] The progressive morphological filter can be either 1- or 2-D depending on its window shape. The filter is 2-D if its window is a 2-D shape such as a rectangle or circle, while the filter is 1-D if its window is defined by a segment of a line. The algorithms for 1- and 2-D filters are similar. The above 1-D erosion algorithm can be easily extended to a 2-D one with a square window by performing erosion in the x direction first and then in the y direction. The same rule can be applied to dilation as well. The major computation time needed by the progressive morphological filter is the erosion and dilation in addition to the interpolation.
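The separability described above, eroding in the x direction and then in the y direction to obtain a square-window 2-D erosion, can be verified numerically (SciPy's `grey_erosion` stands in for the erosion primitive here):

```python
import numpy as np
from scipy.ndimage import grey_erosion

rng = np.random.default_rng(0)
a = rng.random((30, 30))
k = 5

# Direct 2-D erosion with a k x k square window...
direct = grey_erosion(a, size=(k, k))

# ...equals a 1-D erosion along x followed by a 1-D erosion along y,
# since the minimum over a square is the minimum of row-wise minima.
separable = grey_erosion(grey_erosion(a, size=(1, k)), size=(k, 1))
```

The two results are identical, which is why the separable form is commonly used to cut the cost of large square windows.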

[0127] A series of opening operations is applied to the obtained LIDAR data against a DSM (digital surface model) derived from a LIDAR point cloud. A progressive morphological filter is, in one implementation, applied to two test LIDAR datasets to examine its filtering effect. The opening operation was applied in both the x and y directions at every step to ensure that the non-ground objects were removed. The process serves the dual purpose of creating a gridded model of the ground surface (and its referencing matrix R) and a vector of Boolean values for each tuple (x, y, z) describing it as either ground (0) or object (1). The operation of this algorithm is verified with different window sizes, elevation difference thresholds and maximum expected slopes. Thus, a surface model and a terrain model are created at the end of the process using the progressive morphological filter algorithm.

[0128] Furthermore, additional systems such as autonomous parking lot management systems, autonomous and conventional vehicle tracking systems, and localization of IoT sensor systems are managed using the simulation submodule and provided to a user through the visualization submodule. By way of non-limiting example, an oil spill within a given body of water can be simulated and visualized using the systems provided herein. For instance, the movement of the oil spill can be simulated accounting for the necessary parameters, such as the velocity and direction of the wind, the force at which the waves hit the bay, and so on. Likewise, a storm can be simulated to exhibit its movement with respect to time. In building information modelling, the point cloud data of buildings and spec-driven valve, flange and reducer placement obtained from LIDAR can be constructed into 3D BIM models.

Augmented Intelligence use of the 3D stack

[0129] In a further arrangement of the system and method described, the 3D data stack is used as input to a predictive engine as in step 295. With reference to FIG. 17, in a particular implementation, the prediction engine is configured as a module 330 via code associated therewith and executing in the processor 105 of the analytic system 104. However, in an alternative arrangement, the prediction engine module is remote to the analytic system 104 and hosted as accessible software. The predictive engine module 330 comprises code that configures the processor 105 to analyze the 3D stack for a location in response to a user query regarding a potential event. For example, the prediction engine module configures the processor 105 to analyze the 3D stack and indicate portions of the location that are prone to flooding, or that are anticipated to be prone to flooding in the event of a major weather event. Likewise, the prediction engine is configured to estimate or predict the effect of road closures on traffic, evacuation routes, police response time, or logistical and delivery options in response to such weather events. The prediction engine is configured as a neural network that takes historical data and provides probabilities of future outcomes based on an analysis of prior data. In an alternative configuration, the prediction module incorporates cognitive science applications, such as support vector analysis, fuzzy logic, expert systems, neural networks, intelligent agents or other supervised or unsupervised machine learning algorithms utilized to extract data from the model database 108 or the external database(s) 102 to obtain historical information and generate predictions therefrom. In one particular implementation, the data used by the prediction module comprises unsupervised learning training sets, supervised learning training sets or a combination of the two.

[0130] For example, in the road traffic scenario described above, where municipality sensors monitor the vehicle density and traffic flow, the predictive engine provides a user of the 3D stack with a suggested list of measures to be taken to reduce congestion based on a set of rules and algorithms.

[0131] In another embodiment, the predictive module is configured to compare multi-temporal and multi-spatial aspects of the data stored by the system to integrate queries and predictive analytics to model complex systems and variables, which are then presented to a user in 3D/4D (time slices). Such data is then used to model and display solutions based on user defined criteria. This time-based analysis can, in one arrangement, be used to assist law enforcement or government agencies in conducting situational and threat assessments utilizing geospatial data.

[0132] In further embodiments, the AI system encoded in the prediction module 330 is also configured to generate options or actions in response to a real or hypothetical/simulated event. For instance, in the event of an extreme weather event, the predictive module is configured to generate solutions that would provide alternative evacuation routes, traffic signal control modifications to expedite traffic, efficient routing plans for EMS/fire/police officials, food and shelter logistics, and predicted economic and infrastructure damage.

[0133] When used in infrastructure planning, the predictive module would provide information regarding housing and impact assessment data, environmental maps, geologic and engineering route analysis, vegetation route analysis, location and co-location of industrial and commercial clients, and physical plant and line security information.

[0134] The predictive module uses machine learning to optimize solutions given the variables and goals of the users. This information would enable the generation of new data and information that can be used to update the database and be available to other users. The predictive module is also used to data mine the database(s) 108 or 102 to determine relationships and outcomes of variables to interpret new data and query results.

[0135] In a further embodiment, the AI system encoded in a predictive model implements machine learning to generate predictive analysis and information. Those skilled in the art will appreciate that machine learning is an evolution from pattern recognition and computational learning theory in artificial intelligence. As used and understood herein, machine learning represents the analysis and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.

[0136] In one embodiment of the present visualization system and apparatus, the predictive model configures the processor to evaluate inputs in different formats, make comparisons between formats, check the data on a timely basis, and extrapolate and predict future events and circumstances and provide solutions thereto. Machine learning is closely related to computational statistics, a discipline that aims at the design of algorithms for implementing statistical methods on computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include weather prediction, optical character recognition (OCR), search engines and computer vision. Machine learning and pattern recognition can be viewed as two facets of the same field. When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling.

[0137] The predictive model, utilizing augmented artificial intelligence, configures the processor to implement one or more algorithms that utilize machine learning to generate predictions and alerts based on the analysis of large data sets. In accordance with the described embodiment, the predictive module implements different types of machine learning depending on the nature of the learning "signal" or "feedback" available to the 3D visualization system 100.

[0138] In a non-limiting example, the predictive module is configured to use supervised learning methods and implementations. For instance, the predictive module configures the processor to evaluate example data inputs and their desired outputs, and generate a general rule that maps inputs to outputs. For instance, the processor is fed data and a goal is set for the engine to solve traffic congestion at a particular location. Here, the inputs are fed manually, or obtained from sensors and/or a computer vision system utilizing digital image processing. The predictive module then evaluates the input data and the desired output state and generates a solution that is predicted to result in the desired outcome of reduced congestion.
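As a toy illustration of learning a rule that maps inputs to outputs (a hypothetical example; the features, labels and the 1-nearest-neighbour rule are assumptions, not from the specification):

```python
import numpy as np

# Labelled examples: [vehicles per minute, mean speed km/h] -> congested?
X = np.array([[10, 60], [12, 55], [80, 10], [90, 8], [15, 50], [85, 12]], float)
y = np.array([0, 0, 1, 1, 0, 1])        # 0 = free-flowing, 1 = congested

def predict(query):
    # 1-nearest-neighbour: the learned "rule" is distance to labelled data
    d = np.linalg.norm(X - np.asarray(query, float), axis=1)
    return int(y[np.argmin(d)])
```

A new reading of 88 vehicles/min at 9 km/h would then be classified as congested, while 11 vehicles/min at 58 km/h would not.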

[0139] In an alternative embodiment, the predictive module utilizes unsupervised learning implementations. Under this system, no labels are given to the learning algorithm employed by the processor; thus the processor is configured to generate structure from the input. Using such an unsupervised learning approach results in discovering hidden patterns in data which might not be apparent from a manual analysis. As a non-limiting example, a user can generate a 3D stack relating to a particular transit infrastructure such as a bus or train. The user, desiring to navigate to the particular bus that will have the shortest commute time to her desired location, utilizes the unsupervised learning features of the predictive module to take into account changes in routes, time and other factors due to inclement weather, accidents or other events that might cause delay to one or more transit options.
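The idea of structure emerging from unlabelled data can be sketched with a minimal k-means on synthetic transit times (illustrative only; the two hidden "regimes" and the choice of k = 2 are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Unlabelled transit travel times (minutes) drawn from two hidden regimes:
# normal service (~20 min) and weather-delayed service (~45 min).
times = np.concatenate([rng.normal(20, 2, 50), rng.normal(45, 3, 50)])

# Minimal 1-D k-means with k = 2: no labels are supplied, yet the two
# regimes emerge as the converged cluster centres.
centers = np.array([times.min(), times.max()])
for _ in range(20):
    labels = np.abs(times[:, None] - centers).argmin(axis=1)
    centers = np.array([times[labels == k].mean() for k in range(2)])
```

The converged centres land near the two underlying means, recovering the "normal" and "delayed" regimes without any labels.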

[0140] In a further embodiment, the predictive module uses reinforcement learning features implemented as a sub-module of the predictive module. Here the processor is configured to interact with a dynamic environment in which a certain goal (e.g. driving a vehicle) is performed without a user manually providing instructions about the vehicle's proximity to the desired destination.

Reinforcement learning can be considered semi-supervised learning, where the sub-module configures the processor to receive an incomplete training signal, such as a training set with some, or often many, of the target outputs missing. For instance, transduction is a special case of this principle where the entire set of problem instances is known at learning time, except that part of the targets are missing.

[0141] Other machine learning solutions are also implemented by the processor configured to execute the submodules of the predictive module. For example, in certain embodiments that utilize a robot or autonomous device, developmental learning submodules generate sequences or curricula of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with humans. Additionally, the submodules incorporate other guidance mechanisms, such as active learning, prediction, etc.

[0142] In using machine learning classification of data, inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more of these classes (multi-label classification). In one embodiment, classification of data is implemented as a supervised learning routine. In a further machine learning implementation, regression is also implemented as a supervised learning problem. In regression, the outputs are continuous rather than discrete. In clustering, a set of inputs is to be divided into groups.

[0143] In a further embodiment, the submodule uses dimensionality reduction algorithms to simplify inputs by mapping high-dimensional data into a lower-dimensional space. In a non-limiting embodiment, the predictive model configures the processor to implement a topic modeling strategy, such as through a topic modeling sub-module, to evaluate a list of human language documents and determine or generate relationships between the documents. Using such a topic modeling submodule, the system described extracts useful information relating documents from different places in different languages or formats.
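Dimensionality reduction by mapping high-dimensional data into a lower-dimensional space can be sketched with principal component analysis via the SVD (an illustrative example; the synthetic data and the SVD-based approach are assumptions, not the claimed topic modeling submodule):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples of 3-D data lying near a single direction: high-dimensional
# in appearance, essentially 1-D in structure.
t = rng.normal(size=200)
X = np.outer(t, [1.0, 2.0, -1.0]) + rng.normal(scale=0.05, size=(200, 3))

# PCA via the SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()     # variance captured per component
low_dim = Xc @ Vt[0]                # 1-D coordinates along the first PC
```

Here the first principal component captures nearly all of the variance, so each 3-D sample can be replaced by a single coordinate with little information loss.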

Use of the 3D Data Stack

[0144] In one arrangement of the described system and method, the 3D data stack generated in step 250 utilizing the 3D data stack transformation module 310 is transmitted or otherwise communicated to a data and visualization output device 106. In one example of the system described, the data and visualization output device 106 is a mobile computing device configured through code executing in a processor thereof to receive the 3D data stack and generate a visualization for an end user as shown in step 260.

[0145] In one non-limiting arrangement, the mobile computing device 106 is a smart phone or tablet computer with a display device, coordinate or location devices and a network interface. According to this implementation, the mobile computing device receives the 3D data stack as a wireless data communication from the analysis system 104. However, in an alternative arrangement, the mobile computing device is configured through software modules, such as the data stack transmission module 312 utilized by the analytic system or the transmission module 314 utilized by the display device, to retrieve, or cause the transmission of, a 3D stack from a remote storage device or service such as a cloud hosting device or service.

[0146] The mobile device 106 is configured to permit the user to manipulate and view the 3D data stacks in real-time in order to evaluate the results of the query as in step 270. In one configuration, the mobile computing device is equipped with navigational and location aids such as GPS transceivers, altimeters and digital compasses. Utilizing such equipped devices allows the user to align the 3D data stack with the user's orientation at a specific location such that when the device is moved, the portion of the 3D stack displayed by the user device 106 changes accordingly. For instance, a processor of the mobile computing device 106 is configured, through the display module 314, to represent changes in the field of view displayed to a user in response to the movement of the device. Here, the movements of the mobile computing device itself, or of the user and the mobile device together, will cause the view or portion of the 3D stack to change in relation to the orientation, angle and elevation of the mobile device.

VR and AR Devices

[0147] In a particular embodiment, the mobile device 106 is a VR display device. In this configuration, the user is immersed in a full scale visualization of the 3D data stack. By moving the VR display device, the data displayed in the user's field of vision will change depending on body position, head position and viewing angle. In an alternative configuration of the system described, the mobile computing device is an AR device that provides the 3D stack as a data overlay on a user's field of vision but allows the user to maintain real time observations of the location in question.

[0148] During the visualization interaction, either with a mobile device or an altered reality platform (e.g. VR or AR), the user can access tools and functions that allow the 3D data stack to be updated or modified. As shown in step 290, user action such as the placement of a beacon or annotating a location with additional metadata is recorded and added to the 3D data stack currently under visualization. This updated information is transmitted to the analysis system 104, where it can be processed and used to update the copy of the 3D stack residing in the model database 108. The update module 318 configures the processor of the mobile device 106 to transmit this information to the analysis system 104, where the data stack update module 320 stores the updated information in the model database 108.

Autonomous devices

[0149] In a further implementation, the 3D data stack is used by an autonomous or semi-autonomous device in order to path find, analyze or spot check data provided in the 3D data stack. In one non-limiting example, an airborne, submersible or subsurface autonomous device (e.g. drone) utilizes the 3D stack to inspect utility infrastructure with remote sensing devices such as IR scanners, cameras or magnetometers. The autonomous device is configured to take readings or measurements of infrastructure and update the 3D data stack with metadata relating to the current condition or state of the infrastructure. Additionally, the autonomous or semi-autonomous device is used in search and rescue, fire prevention and mitigation, and disaster mitigation and relief operations.

[0150] The autonomous devices can, in specific embodiments, utilize the predictive module 330 of the system to provide real-time learning systems to execute tasks such as path finding and visual identification. As a non-limiting example, an autonomous vehicle traveling through a geographic area is configured to implement a data update feature whereby the data received by the vehicle is used to update the accuracy of the model data objects in near real time.

[0151] In a further embodiment, the system described provides a platform for network-enabled vehicle communication and/or autonomous robotic vehicles with a multi-sensor interface to act upon dynamic changes occurring in the environment and update that information back to the database. For example, location data and other information recorded or measured by the autonomous robotic vehicle is transmitted through the document model database and is distributed to other linked or connected autonomous and non-autonomous vehicles to avoid congestion. Based on the information from all the mobile platforms, as well as sensors and other data feeds, the predictive module configures the processor of the system to send a predicted route change to a vehicle to avoid congestion in traffic, or to avoid other navigational hazards. In a further arrangement, the mobile computing device 106 is configured to perform, using a check sub-module (data consistency and validity) of the data stack update module 318, a review of a specific set of data in the model database 108. The check submodule configures the processor to analyze and validate in "real-time" any changes made to the 3D data stack, such as by annotating metadata or updating sensor measurements. The check module configures a processor to initiate a flag and update procedure for the specific 3D data stack being utilized by the user if any parameter changes are recognized. Here, the analytic system is configured to transmit an update to some or all of the 3D data stack being used by the user.

[0152] For example, if a sensor on oil and gas infrastructure indicates a change in the safety and continuous operational status, the 3D stack stored in the model database 108 is updated, and the updated status changes are made available in real or near real time to any users that are currently accessing or using a 3D stack corresponding to that location. Data from autonomous vehicles/sensors can constantly update the MDOs in the model database 108 to provide improved spatial accuracy for real-time events that can be remotely analyzed by a user. For example, sensors indicating the creation of a new pothole would provide data to the 3D data stack of the location in question such that a remote user could evaluate the size, depth and potential impact such a change in the road surface might have on traffic. As another example, changes resulting from real time sensor and traffic pattern updates are analyzed by the prediction engine to provide predictions on future traffic patterns and to suggest real time traffic pattern optimization based on the 3D data stack.

Tool sets

[0153] The apparatus so described is configured to be extensible and interoperable with future designed tool sets configured to access the stored data models (such as through an API) and run data analysis specific to a particular domain or area of interest. For example, the Transportation Department of a government may have an interface with the analytic system 104 that allows for additional information to be utilized in support of traffic signal analysis and impact, accident analysis, diversion and route analysis and dynamic re-routing tools. Likewise, in agricultural contexts, the analytic system is extensible to accept private or user specific data streams and information to allow the model data to be used and combined with user data for the purposes of crop monitoring, yield prediction, weather impact analysis, drought mitigation and cost predictions.

[0154] Item 1. A computer implemented method for generating a multi-dimensional representation of at least a portion of a physical location that includes a plurality of objects, the method performed by one or more processors configured by executing code that causes the one or more processors to: a. access a point cloud having a plurality of data points representing distances that are measured between a source and each of the objects;

b. determine, using the data points, (i) a plurality of respective spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined respective spatial planes;

c. segment each of the objects' determined respective spatial planes as a function of at least one of the determined edges;

d. merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and

e. generate, using the spatial value data set, a multi-dimensional representation of at least some of the physical location within a virtual environment.

[0155] Item 2. The method of any one of the preceding items, wherein the one or more processors are further configured to: a. display on a display device the multi-dimensional representations of the spatial value data.

[0156] Item 3. The method of any one of the preceding items, wherein the processor is further configured to: a. determine that at least one data point does not correspond to at least one of the determined respective segmented planes; and remove the at least one data point from the point cloud.

[0157] Item 4. The method of any one of the preceding items, wherein the processor is further configured to: a. determine that the spatial value dataset is valid or not valid by comparing the spatial value data set with at least one other data set corresponding to at least some of the physical location.

[0158] Item 5. The method of any one of the preceding items, wherein the source is a light source and at least one of the objects is light reflective.

[0159] Item 6. The method of any one of the preceding items, wherein the segmented spatial planes are segmented by a maximum likelihood estimator sampling consensus algorithm.

[0160] Item 7. The method of any one of the preceding items, wherein the at least one other data set includes building information management data.

[0161] Item 8. The method of any one of the preceding items, wherein the at least one other data set includes information obtained from the physical location.

[0162] Item 9. The method of any one of the preceding items, wherein the at least one other data set includes image data.

[0163] Item 10. The method of any one of the preceding items, wherein the processor is further configured to: remove any data point within the point cloud not corresponding to a segmented plane.

[0164] Item 11. The method of any one of the preceding items, wherein the processor is further configured to: a. identify one or more values of the spatial value dataset lacking a corresponding value in at least one other data set, and

b. remove the one or more identified spatial values from the spatial value dataset.

[0165] Item 12. A system for generating a multi-dimensional representation of at least a portion of a physical location that includes a plurality of objects, the system comprising: a. a database configured to store at least one point cloud, the point cloud having a plurality of data points representing distances that are measured between a source and each of the objects;

b. at least one processor configured by executing code that causes the at least one processor to:

i. access, from the database, a point cloud,

ii. determine, using the data points, (i) a plurality of respective spatial planes associated with each of the objects, and (ii) edges of each of the objects on each of the objects' determined respective spatial planes;

iii. segment each of the objects' determined respective spatial planes as a function of at least one of the determined edges;

iv. merge all of the data points associated with at least two of the segmented spatial planes into a spatial value dataset; and

v. generate, using the spatial value data set, a multi-dimensional representation of at least some of the physical location within a virtual environment.

[0166] Item 13. The system of any one of the preceding items, further comprising at least one display device configured to receive the multi-dimensional representation of at least some of the physical location and display the multi-dimensional representation of at least some of the physical location.

[0167] Item 14. A method for generating a multi-dimensional representation of at least a portion of a physical location that includes a plurality of objects from a plurality of different format geospatial datasets, the method performed by one or more processors configured by executing code that causes the one or more processors to: a. generate a geospatial area query, wherein the query includes data identifying the given physical location;

b. access, from at least one remote database accessible by the one or more processors, a plurality of geospatial data sets, each geospatial data set containing information relating to the given physical location in a given data format;

c. evaluate the data format of each of the geospatial data sets against a format array of pre-set data format types, where each element of the array contains a reference to a compatible format type;

d. identify geospatial data sets not found within the format array;

e. store each geospatial data set not found in the array as an element in a conversion array; f. identify a conversion factor to convert the data set stored in each element of the conversion array to an approved format type in the format array;

g. convert each data set in the conversion array by iterating over each element in the conversion array to apply the conversion factor to the element in the conversion array in order to obtain a converted data object;

h. generate a multi-dimensional visualization from the converted and non-converted datasets.
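The format-check and conversion steps of Item 14 (c through g) can be sketched as a plain loop (all names, the compatible-format list and the conversion table are hypothetical illustrations, not part of the claim):

```python
# All names, the compatible-format list and the conversion table below are
# hypothetical; they only illustrate the control flow of Item 14 (c)-(g).
COMPATIBLE = ["geojson", "las"]                        # the "format array"
CONVERTERS = {"shapefile": "geojson", "e57": "las"}    # format -> target

def harmonize(datasets):
    converted, conversion_queue = [], []
    for name, fmt in datasets:              # (c) evaluate each data format
        if fmt in COMPATIBLE:
            converted.append((name, fmt))
        else:                               # (d)-(e) queue incompatible sets
            conversion_queue.append((name, fmt))
    for name, fmt in conversion_queue:      # (f)-(g) iterate and convert
        target = CONVERTERS.get(fmt)
        if target is not None:
            converted.append((name, target))
    return converted
```

A data set already in a compatible format passes through unchanged, while one in an incompatible format is queued and emerges converted to an approved type.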

[0168] Item 15. The method of any one of the preceding items, further configured to: a. transmit the multi-dimensional visualization to a second computer; b. display the visualization to the user using at least one display device integral to the second computer.

[0169] Item 16. The method of one or more of the preceding items, further configured to: a. generate, within a multi-dimensional representation of a given physical location, a plurality of representations of at least a portion of the physical location corresponding to data points obtained from a plurality of geospatial datasets;

b. identify within each of the geospatial datasets one or more alignment points; c. adjust the positioning of one or more representations so as to align one or more alignment points from one of the geospatial datasets with one or more alignment points of another geospatial data set; and

d. output a visualization of the aligned geospatial datasets to one or more remote display devices.

[0170] A computer implemented method for improving the use of incompatible multivariate spatial and non-spatial data obtained from at least one or more sensor devices by transforming the data into compatible formats within the memory of a computer and generating a 3D visualization thereof, the method performed by one or more processors configured by executing code that causes the one or more processors to: select a geospatial area for inquiry by generating, using a geospatial query generator, a geospatial selection corresponding to the area under inquiry; access, from at least one remote database accessible by the one or more processors, a plurality of data objects obtained from at least one of a plurality of sensor devices at a given period of time, using an input module, wherein the data corresponds to the geospatial data of the inquiry; evaluate the format of each of the plurality of data objects, using a format check module configured as code executing in the processor, the format check module configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type; identify data objects with an incompatible format type and store each data object having a format type incompatible with the format array as an element in a conversion array; convert each data object in the conversion array, using a conversion module configured as code executing in the processor, by iterating over each element in the conversion array, identifying a conversion factor for converting the data object stored in the element of the conversion array to an approved format type in the format array, and applying the conversion factor to the element in the conversion array to obtain a converted data object; generate a three-dimensional visualization from the collection of data objects having a compatible format referencing a given location; transmit the three-dimensional visualization to a second computer; and display the visualization to the user using at least one display device integral to the second computer.
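The format-check and conversion steps recited above can be sketched as follows. This is an illustrative sketch only: the format names, the conversion table, and the `DataObject` structure are assumptions for exposition and are not part of the claimed method.

```python
from dataclasses import dataclass

# Format array of pre-set, compatible object format types (assumed names).
FORMAT_ARRAY = ["geojson", "las", "gltf"]

# Conversion factors: map an incompatible format to an approved target
# format in the format array (assumed mappings).
CONVERSION_TABLE = {"shapefile": "geojson", "e57": "las", "obj": "gltf"}

@dataclass
class DataObject:
    fmt: str
    payload: bytes

def convert(obj: DataObject) -> DataObject:
    """Apply the conversion factor to obtain a converted data object."""
    target = CONVERSION_TABLE[obj.fmt]
    # A real implementation would also transform the payload here.
    return DataObject(fmt=target, payload=obj.payload)

def harmonize(objects: list[DataObject]) -> list[DataObject]:
    """Check each data object's format and convert incompatible ones."""
    compatible, conversion_array = [], []
    for obj in objects:
        # Format check: compare against the array of pre-set format types.
        if obj.fmt in FORMAT_ARRAY:
            compatible.append(obj)
        else:
            # Incompatible format: store as an element in the conversion array.
            conversion_array.append(obj)
    # Iterate over the conversion array and convert each element.
    compatible.extend(convert(obj) for obj in conversion_array)
    return compatible
```

Under these assumptions, `harmonize([DataObject("shapefile", b""), DataObject("las", b"")])` yields one object converted to GeoJSON and one LAS object passed through unchanged.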

[0171] A computer implemented method as previously referenced wherein the second computer is integral to a virtual reality display device.

[0172] A computer implemented method as previously referenced wherein the second computer is integral to an autonomous or semiautonomous device.

[0173] A computer implemented method as previously referenced further comprising: updating, with the second computer, a local copy of the collection of data objects comprising the three-dimensional visualization; transmitting the updated local copy to the first computer; and synchronizing the local copy with a master copy of the updated data accessible by the first computer.
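The update-and-synchronize round trip of paragraph [0173] might be implemented with a simple versioned merge; the dict-based store, the version counters, and the last-write-wins policy below are assumptions made for illustration, not a description of the claimed method.

```python
def synchronize(master: dict, local_update: dict) -> dict:
    """Merge an updated local copy into the master copy.

    Each entry maps an object id to a (version, value) pair. When both
    copies hold the same id, the entry with the higher version wins
    (an assumed last-write-wins policy).
    """
    merged = dict(master)
    for key, (version, value) in local_update.items():
        # Accept the local entry only if it is new or strictly newer.
        if key not in merged or merged[key][0] < version:
            merged[key] = (version, value)
    return merged
```

For example, merging a local update `{"a": (2, "new"), "b": (1, "x")}` into a master copy `{"a": (1, "old")}` replaces `"a"` and adds `"b"`, while a stale local entry with a lower version leaves the master value untouched.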

[0174] A system for accessing and transforming multivariate spatial and non-spatial data within the memory of a computer and generating a 3D visualization thereof, the system comprising: a geospatial analytic server having at least one processor and configured by code executing therein to receive data from multiple geospatial data sources, harmonize the received data into a compatible format, and transform collections of geospatial data, where each member of the collection references the same given location, into a 3D data stack suitable for transmission; and a remote viewing device having at least one processor and configured to receive the 3D data stack, display the collection of data within the 3D data stack to a user, and update the display of 3D information in response to the movement of the device.
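The 3D data stack of paragraph [0174] might be represented as a grouping of harmonized data objects keyed by the location they reference. The sketch below is an assumption about one possible in-memory layout; the `(location, layer)` pair representation is hypothetical.

```python
from collections import defaultdict

def build_data_stack(objects):
    """Group harmonized data objects that reference the same given
    location into a 3D data stack suitable for transmission.

    Each object is assumed to be a (location, layer_data) pair, where
    location is a hashable key such as a tile coordinate.
    """
    stack = defaultdict(list)
    for location, layer in objects:
        stack[location].append(layer)
    return dict(stack)
```

A remote viewing device could then request the stack for a single location key and render its layers, updating as the device moves between locations.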

[0175] A computer implemented method for improving the use of incompatible utility infrastructure data obtained from at least one or more sensor devices by transforming the utility data into compatible formats within the memory of a computer and generating a 3D visualization thereof, the method comprising: selecting a geospatial area for inquiry by generating, using a geospatial query generator, geospatial data corresponding to the area under inquiry; accessing, from at least one remote database accessible by a processor of the computer, a plurality of data objects obtained from at least one of a plurality of sensor devices at a given period of time and possessing geospatial information describing the location of utility infrastructure, using an input module configured as code executing in the processor, wherein the data is relevant to the geospatial data of the inquiry; evaluating the format of each of the plurality of data objects, using a format check module configured as code executing in the processor, the format check module configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type; identifying data objects with an incompatible format type and storing each data object having a format type incompatible with the format array as an element in a conversion array; converting each data object in the conversion array, using a conversion module configured as code executing in the processor, by iterating over each element in the conversion array, identifying a conversion factor for converting the data object stored in the element of the conversion array to an approved format type in the format array, and applying the conversion factor to the element in the conversion array to obtain a converted data object; generating a three-dimensional visualization of all the utility infrastructure from the collection of data objects having a compatible format referencing a given location; transmitting the three-dimensional visualization to a second computer; and displaying a three-dimensional visualization of all the utility infrastructure to the user using a head-mounted display device connected to the second computer.

[0176] While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of any embodiment or of what can be claimed, but rather as descriptions of features that can be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a sub-combination or variation of a sub-combination.

[0177] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0178] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0179] It should be noted that use of ordinal terms such as "first," "second," "third," etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).

[0180] Also, the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing," "involving," and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.

[0181] Particular embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain embodiments, multitasking and parallel processing can be advantageous. Patents, patent applications, and publications are cited throughout this application, the disclosures of which, including all disclosed chemical structures, are incorporated herein by reference. Citation of the above publications or documents is not intended as an admission that any of the foregoing is pertinent prior art, nor does it constitute any admission as to the contents or date of these publications or documents. All references cited herein are incorporated by reference to the same extent as if each individual publication, patent application, or patent was specifically and individually indicated to be incorporated by reference.

[0182] While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.