

Title:
VISUAL INTEGRATION SOFTWARE SYSTEM OF HETEROGENEOUS SOURCES, BASED ON THE NEBULAR-PHYSICAL-TOPOLOGICAL REPRESENTATION OF A COMPLEX SYSTEM
Document Type and Number:
WIPO Patent Application WO/2020/157615
Kind Code:
A1
Abstract:
Software system for visual integration of heterogeneous sources belonging to a complex system, based on a Nebular-Physical-Topological representation. Such software system stands as a unified element of command and control, analysis and decision support in the management of systems and services, capable of visually integrating and aggregating, through an innovative form of representation, information coming from distinct data sources, thus allowing its use through a single touch-and-gesture-based human-machine interface based on the paradigm of the Natural User Interface (NUI).

Inventors:
MOSCHETTONI CANDIDO (IT)
Application Number:
PCT/IB2020/050558
Publication Date:
August 06, 2020
Filing Date:
January 24, 2020
Assignee:
WHITE3 SRL (IT)
International Classes:
G06F16/29; G06F16/901; G06F16/904
Domestic Patent References:
WO2000054139A1, 2000-09-14
Foreign References:
US20140043325A1, 2014-02-13
US20130307843A1, 2013-11-21
Attorney, Agent or Firm:
KARAGHIOSOFF, Giorgio A. (IT)
Claims:
CLAIMS

1. Visual integration and aggregation system of information coming from different sub-systems, sources or data sources, by using a single human-machine interface,

in which the information or data is organised according to a hierarchical representation of a graph, which is configured according to a representation that follows the structure of the universe, wherein:

each star represents a datum or piece of information or a virtual machine;

each star has a specific shape, particularly in relation to the colour and/or size, which are defined depending on specific characteristics of the datum, piece of information or virtual machine;

Optionally, each star comprises sub-information, which is identified in the planets present around the star element and is, for example, in the form of software, middleware or application software present in a machine, or of disks connected to the latter;

Each star is arranged in the proximity of other stars, these stars having characteristics of mutual link, as in particular different machines that provide the same service, and these stars forming a cluster of stars;

Each cluster of stars being positioned in the proximity of other clusters that have characteristics of mutual link, as in particular services that belong to the same contract, thus forming a new cluster formed by several sub-clusters;

the set of clusters forming a galaxy; the representation is carried out by emphasising these clusters through the algorithmic generation of a "noise" with the aim of amplifying a particular status present within the same cluster.

2. System according to claim 1, wherein data, information or machines that cannot be clustered or are missing references with other entities are represented in a "black-hole".

3. System according to claim 1 or 2, wherein the logical links among data, information or virtual machines are depicted with connecting lines of these elements that graphically generate the representation of constellations.

4. System according to claim 3, wherein said constellations are fixed when there is a link between N entities or they are alternatively dynamic when the relation is momentary and deriving from a particular event or process of association.

5. System according to one or more of the preceding claims, wherein the interface is provided in combination with a filtering or searching unit that generates a system of axes, each axis representing a criterion for filtering or searching, and said stars being positioned relative to said axes in the position corresponding to the value that they have with reference to the criterion of filtering or searching.

6. System according to one or more of the preceding claims, wherein the data, piece of information or virtual machine of each star is uniquely referred to a physical machine, in which said datum, said piece of information or virtual machine is arranged, a map of the positions of the physical machines being provided and the projection between datum or piece of information or virtual machine on the corresponding physical machine on the map being depicted by lines that represent light beams, which depart from the said stars towards the georeferenced physical place where these elements really are.

7. System according to one or more of the preceding claims, wherein a graphic engine for generating the interface based on the structure of the universe and optionally, in combination, a graphic engine of georeferenced representation of the positions of the physical machines on a map, is provided.

8. System according to claim 7, wherein the physical units are depicted by structural and/or logical hierarchically organised schemes and configured in such a way that, through a drill-down on a single rack or apparatus or through a zoom, descending into depth and from the rack, the servers or network apparatuses present on it will appear in their position of rack unit, and so forth while descending up to reaching the single software that runs on a virtual machine.

9. System according to one or more of the preceding claims, characterised in that the system comprises communication connectors that are configured for the visual integration, aggregation and correlation of data coming from heterogeneous sources; a graphic unit of three-dimensional representation; a gesture navigation and interaction unit; and a graphic engine of territorial representation on vector maps of static and dynamic georeferenced elements.


Description:
Visual integration software system of heterogeneous sources, based on the Nebular-Physical-Topological representation of a complex system

TEXT OF THE DESCRIPTION

BACKGROUND OF THE INVENTION

The object of the present invention is a unified software system of representation, command, control, supervision, analysis and decision support in the management of systems and services. Such software system stands as an element capable of visually integrating information coming from distinct and heterogeneous data sources, by using a single touch-and-gesture-based human-machine interface based on the paradigm of the Natural User Interface (NUI).

The system makes it possible to aggregate large volumes of information related to resources, physical apparatuses or logical entities, and to represent them on an evolved interface with the aim of giving a complete information overview of a phenomenon, placing particular emphasis on the granularity of the information depending on the layer of detail.

Following the NUI paradigm for the representation, interaction and integration of heterogeneous data sources, the system meets the new requirements in the use of information, where each user has at his disposal a vast amount of information located in different systems with which he can interact autonomously, without the need for particular or specific skills, thus simplifying the complexity of the context. The system stands as a central element in the decision-making process, following the OODA (Observe, Orient, Decide and Act) loop model. The system is designed to reduce the complexity of analysis and understanding, thus actually shortening the decision-making chain. The decision-maker sees and understands a phenomenon or situation from a single system and directly executes actuating commands, moving from a segmented (traditional) decision-making chain to a direct understanding and implementation model.

Logically, the system sits at an abstraction layer above the different systems and data sources to be integrated: it does not replicate the data present in these systems and it does not carry out expensive post-processing aimed at aggregations on a database. Through its connectors, the system communicates with the different systems and data sources and carries out a visual blending thereof.

The main troubleshooting support comes from showing information coming from distinct and heterogeneous systems in a single representation.

The ability to see beyond the trend of a single platform, beyond the single service, by visually blending several coherent or apparently disjoint layers of information, makes it possible to create an unprecedented representation of a phenomenon as a whole.

Having a full picture brings out and highlights the possible relationships between the events, thus speeding up and simplifying the identification of the sources of a particular problem or the correlations and impacts that are cascaded to other systems, platforms or services.

Visual correlation becomes a key differentiating element, especially in critical and emergency situations where the rapid identification of a problem is essential. Having a picture of what is happening, and of the wide-ranging impacts, readily available allows an effective direction of the analyses and a targeted involvement of the administrators of the platform or platforms involved, thus significantly speeding up the decision-making process and reducing the implementation time of corrective actions and activities.

Visual correlation is only the first step in the process of identifying problems. The process then passes through the automation of actions or procedures suitable for responding to specific cases, by using a rule engine that automates the generation of notifications and/or alerts upon the occurrence of the events provided for in the implemented rules, or that emphasises an ongoing phenomenon by bringing to light or highlighting anomalous or premonitory situations based on historical data and on "historical" rules created after a particularly abnormal event, or one important enough to be labelled as a precedent, happens, thus creating a sort of knowledge base of rules and cause-effect relations.

In addition to the above, being able to observe a phenomenon in its entirety on a single system, rather than having N distinct systems each holding only part of the information, expands the knowledge of those who have the task of guaranteeing the delivery of a service or the burden of supervising a specific platform or component: knowledge no longer strictly linked to or isolated within a specific domain of competence and responsibility, but enlarged and completed in an inclusive sense, as belonging to a wider ecosystem. Being present and represented on the system means being part of something greater, thus increasing the visibility of individuals, specific skills and individual responsibilities. Belonging leads users to invest their time, prompting them to participate and motivating them to achieve pre-set goals, improving their capabilities and increasing performance, thus actually changing their behaviour.

In the information age we are experiencing, Big Data, IoT and the proliferation of heterogeneous computer systems, from wearables to security systems and large data centres, produce a huge and complex amount of mostly unstructured data and information.

Current software applications, specifically their interfaces, and current command and control systems are inadequate for the new complexity, making a new representation, interaction and control mode necessary to simplify this complexity.

The limits of current systems, and of how we interact with them, are evident to everyone: today, using a web browser to show graphs, analysing the relationships present on a social network, controlling a geographical network or simply showing a large amount of information is an unrewarding and far from simple experience, and it requires very specific skills to reach a correct understanding.

Today most of the applications present within a company are considered efficient at a functional layer, but at the same time limiting in terms of interaction and usability, often requiring the end user to have specific skills.

To what has just been described regarding interaction and usability must be added the lack of homogeneity and atomicity of the different applications currently in use, which leads to the so-called "silo effect", i.e. technological islands that generate, for each application scope, excessive integration and maintenance costs and the inability to correlate information coming from different vertical markets.

This silo effect, this "pulverisation" of systems, which have stratified and evolved over time, has led to the need for professional experts who are able to master and manage their own system, as well as the processes that govern it and the chain of activities that define the operating context.

These figures become even more necessary when decisions are to be made, as a result of emergencies or criticalities, or when decisions of the tactical or strategic type are required.

In the current decision-making chain, each system administrator reports to his superior, by extracting information from the platforms of competence and creating reports that highlight the current status. This process occurs for each "silo", and those who have to decide must aggregate the different reports received, analyse and relate them, and, based on the results obtained, make a decision on the actions to be performed.

In the described chain, the decision-maker bases his choices on data extracted from different groups, probably with different extraction modes and with report-creation logics that may also involve evaluations that are not exclusively objective and/or are, in any case, the outcome of post-processing or personal reasoning.

In such a flow, the probability of running into human error or insufficiency of the information provided is certainly not negligible and increases exponentially with the number of systems (silos) involved, usually generating retrospective checks on the various systems to verify the correctness of information and data that "do not add up", thus extending the decision-making time.

In this context, the use and interaction with the different systems present in the business architecture and, in general, the procedures that govern decision-making flows and processes become a fundamental and differentiating point in the evolution of command and control systems.

In the state of the art, represented here by document WO 00/54139, a graphic, interactive and computerised visualisation system is known, wherein a database of technical documents, such as patents and patent publications retrieved through an online database search, can be depicted as data objects in a three-dimensional virtual space. Data objects are grouped into a universe of virtual galaxies by a number of common attributes selected by the user. A user can identify terms or values of interest for each of the selected attributes in order to adapt the representation to the particular interests of the user. Each of the selected attributes is associated with distinctive visual criteria to allow the user to distinguish, in the visualisation, between the different attributes and the values identified for each attribute. Distinct visual criteria include changing the colour, size and layers within galaxies, depending on the different values for the selected attributes, as well as grouping galaxies on the display through an additional common attribute. The user can navigate through the three-dimensional space to view virtual galaxies, as well as penetrate particular galaxies of interest to view individual documents. As the user navigates a galaxy, the galaxy is transformed into a set of data objects, each representing a single technical document. As the user approaches specific data objects, titles and images of the depicted documents are displayed on the screen. An object can be selected on the display to view the text of the document.

BRIEF DESCRIPTION OF THE INVENTION

The purpose of the present invention is to provide a system, a single element capable of visually integrating and aggregating the information coming from different sub-systems, sources or data sources, by using a single touch-and-gesture-based human-machine interface based on the NUI paradigm, by centralising the supervision, analysis, command and control of the services, systems and entities present within a business context, thus providing an overall view of the resources and of the whole delivery chain, by means of an evolved, expressive, elementary and natural representation of the complexity that every large architecture has, capable of being used even by non-technically prepared personnel. In particular, the present invention is aimed at becoming the central element in the decision-making process, thus eliminating the "silo" effect, by providing an overview of all the systems involved in a specific process, highlighting possible anomalies or alert situations, and providing all the tools suitable for simplifying the understanding and implementation of corrective actions, thus significantly shortening the whole decision-making chain.

The invention achieves the aforesaid purposes with a system that is the subject of claim 1.

Further improvements are the subject of the sub-claims.

With reference to the document of the state of the art mentioned above, and as will appear more clearly from the following description of a detailed exemplary embodiment, the particularity of the representation according to the present invention, differently from what is present in the WO 00/54139 document, resides both in the logics and in the algorithms, which generate and arrange the elements in space, just as they emphasise the states of the elements or highlight phenomena, which follow particular logics.

The "stars", i.e. the elements representing data or virtual machines, are arranged and grouped according to the following algorithm:

The algorithm is based on N hierarchy layers, where the first layer represents the highest-level grouping, while the last represents the individual element.

Layer 1 (e.g. Customer Cluster)

Layer 2 (e.g. Contract Cluster)

...

Layer N-2 (e.g. Business Service Cluster)

Layer N-1 (e.g. Virtual Machine Cluster)

Layer N (e.g. Individual Virtual Machine Data)

Such an algorithm creates an organised arrangement according to well-defined hierarchical criteria, i.e.:

- All "Stars" (i.e. all elements of layer N) gravitate around a precise main point of attraction, which is the centre of the representation ("Centre of the Galaxy") .

The "stars" around the main point of attraction create additional points of attraction according to the number of elements of layer 1 (e.g. the number of Customers) , on which additional points of attraction of the next layer are distributed and formed, up to the last layer, thus generating smaller and smaller agglomerates (clusters) of stars.

The arrangement and clustering are carried out following this logic:

The process starts at layer N-2 (Business Service). For each Business Service (or other element related to this layer), the elements of the next layer N-1 (Virtual Machine) are arranged in 2D in the smallest rectangle possible. This makes it possible to identify each Business Service with a rectangle.

It proceeds with the upper layer (Layer 2, Contracts). For each contract (or other element related to this layer), the elements of the next layer N-2 (Business Service) are arranged in 2D in the smallest rectangle possible. This makes it possible to identify each contract with a rectangle.

It proceeds with the upper layer (Layer 1, Customers). For each customer (or other element related to this layer), the elements of the next layer, Layer 2 (Contracts), are arranged in 2D in the smallest rectangle possible. This makes it possible to identify each customer with a rectangle.

Finally, the elements of the first layer (Layer 1, Customers) are arranged in 2D in the smallest rectangle possible.

For each arrangement carried out, the point of attraction is depicted by the centre of its own rectangle.
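Purely by way of illustration, the hierarchical, bottom-up arrangement described above could be sketched in C# (the language used for the client-side engines) roughly as follows; all type and member names are hypothetical and introduced only for this example, and the 2D packing step is detailed further below.

using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch: a node of the hierarchy (Customer, Contract,
// Business Service, Virtual Machine, ...). Layer 1 is the highest
// grouping, layer N the individual element ("star").
class HierarchyNode
{
    public string Name;
    public int Layer;
    public List<HierarchyNode> Children = new List<HierarchyNode>();
    public Vector2 Position2D;          // position inside the parent rectangle
    public Vector2 BoundingRectSize;    // smallest rectangle enclosing the children
    public Vector2 AttractionPoint;     // centre of that rectangle

    // Arrange the hierarchy bottom-up: each Business Service first packs its
    // Virtual Machines, then each Contract packs its Business Services, and
    // so on up to the Customers, as described in the text.
    public void ArrangeBottomUp(IArranger2D arranger)
    {
        foreach (var child in Children)
            child.ArrangeBottomUp(arranger);

        if (Children.Count > 0)
        {
            BoundingRectSize = arranger.PackInSmallestRectangle(Children);
            AttractionPoint = BoundingRectSize / 2f;  // centre of its own rectangle
        }
    }
}

// Hypothetical interface for the 2D packing step: sets each child's
// Position2D and returns the size of the smallest enclosing rectangle.
interface IArranger2D
{
    Vector2 PackInSmallestRectangle(List<HierarchyNode> elements);
}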

The arrangement algorithm in 2D works as follows:

The elements to be arranged are initially randomly positioned within a unit circle.

- Each element is checked so that it does not collide with another; otherwise a repulsive force between the two elements is simulated, which will move the current one so that the two no longer collide.

- The "force" by which the element is repulsed depends on the distance of the two elements, on a decay factor (always set to 1) and on a speed factor. The latter is set according to the elements that are being arranged; the values of each layer are determined by the value of K multiplied by the layer:

- K for layer N-1 (Virtual Machines);

- K x 2 for layer N-2 (Business Services);

- K x 3 for layer 2 (Contracts);

- K x 4 for layer 1 (Customers).

The value of K depends on the number of layers present; in the case of 5 layers, the value of K is 5. Once the elements of layer N-1 (Virtual Machine arrangement) have been arranged in 2D, a next step is made, in which the elements just arranged are randomly distributed on the third axis along two Gaussian curves.
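Before detailing the placement on the third axis, a minimal C# sketch of the 2D repulsion arrangement just described is given; the collision radius and the iteration count are assumptions of this example, while the decay factor of 1 and the layer-dependent speed factor follow the text.

using System;
using System.Collections.Generic;
using System.Numerics;

static class Arranger2D
{
    // Arranges 'count' elements in 2D: random start inside a unit circle,
    // then pairwise collision checks with a simulated repulsive force.
    // 'speedFactor' is K, K*2, K*3 or K*4 depending on the layer being arranged.
    public static List<Vector2> Arrange(int count, float radius, float speedFactor,
                                        int iterations = 200)
    {
        const float decayFactor = 1f;               // always 1, as per the description
        var rng = new Random();
        var positions = new List<Vector2>(count);

        // Initial random placement inside a unit circle.
        for (int i = 0; i < count; i++)
        {
            double angle = rng.NextDouble() * 2 * Math.PI;
            double r = Math.Sqrt(rng.NextDouble());  // uniform over the disc
            positions.Add(new Vector2((float)(r * Math.Cos(angle)),
                                      (float)(r * Math.Sin(angle))));
        }

        for (int step = 0; step < iterations; step++)
        {
            for (int i = 0; i < count; i++)
            for (int j = 0; j < count; j++)
            {
                if (i == j) continue;
                Vector2 delta = positions[i] - positions[j];
                float distance = delta.Length();
                if (distance < 2 * radius)           // collision: push the current element away
                {
                    Vector2 direction = distance > 0 ? delta / distance : new Vector2(1, 0);
                    float force = (2 * radius - distance) * decayFactor * speedFactor;
                    positions[i] += direction * force;
                }
            }
        }
        return positions;
    }
}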

The Gaussian curves used have a standard deviation of 2.23 and an expected value of 0.

To find the z relative to layer N-1 (Virtual Machine arrangement) within their cluster, the procedure is as follows:

The sign of the z is found through the System.Random class of C#.

- The sign is multiplied by two values expressed by two Gaussian curves. Both Gaussian curves are normalised. The value given to the first Gaussian curve is the x of the server, normalised with respect to the width of the cluster and multiplied by K. The value given to the second Gaussian curve is the y of the server, normalised with respect to the height of the cluster and multiplied by K.

- Finally, the z is multiplied by a random value between 0 and the largest size (width or height) of the cluster, divided by K-1.

For the other arrangements, the elements are also moved on the third axis, but the z is a random value between 0 and the largest size (width or height) of the cluster divided by K-1.

The randomness for the z of the Customers differs from the others and ranges between -50 and 50.
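A minimal sketch of the z placement for the layer N-1 elements follows; the standard deviation and the normalisation of x and y come from the text, while the interpretation of "normalised" (peak value equal to 1) and the helper names are assumptions of this example.

using System;

static class DepthPlacer
{
    const double Sigma = 2.23;                      // standard deviation given in the description
    static readonly Random Rng = new Random();      // System.Random, as in the text

    // Gaussian curve with expected value 0; "normalised" is here assumed to
    // mean that its peak value is 1.
    static double Gaussian(double x) => Math.Exp(-(x * x) / (2 * Sigma * Sigma));

    // Computes the z of a layer N-1 element (Virtual Machine) inside its cluster.
    public static double ComputeZ(double x, double y,
                                  double clusterWidth, double clusterHeight, double k)
    {
        double sign = Rng.Next(2) == 0 ? -1.0 : 1.0;   // sign found via System.Random

        // Two Gaussian values fed with x and y normalised to the cluster size
        // and multiplied by K.
        double gx = Gaussian(x / clusterWidth * k);
        double gy = Gaussian(y / clusterHeight * k);
        double z = sign * gx * gy;

        // Finally, multiply by a random value between 0 and the largest
        // cluster dimension divided by K - 1.
        double largest = Math.Max(clusterWidth, clusterHeight);
        return z * (Rng.NextDouble() * largest / (k - 1));
    }
}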

To improve the visibility of star clusters, a "noise" generation algorithm is used. Specifically, clusters comprising only a few "star" elements get "lost" among the larger clusters, making it difficult to identify them. To address this problem, additional dummy stars (which do not correspond to an actual element or datum to be depicted, and are shown with graphic characteristics slightly different from the stars based on actual data) are added to each cluster of stars according to the following logic:

Given a sphere inscribed in the parallelepiped that identifies the cluster, N points (with N proportional to the number of elements of the cluster) are arranged along a spiral (with M arms). Such points are randomly arranged on the third axis along two Gaussian curves (in the same way as the Virtual Machines, Layer N-1).

Firstly, a spiral of points is created around the centre of attraction of the cluster.

Such a spiral is created as follows:

- A maximum number of elements that will compose the spiral (the elements that create the noise effect) is set and that number is calculated as the volume of the cluster divided by 0.75 cubed.

- The number of arms of the spiral depends on the number of elements of the cluster.

- The step is always 1.

- The rotation delta is always 10.

- The elements added to the spiral are randomised on the z axis between -15% and +15% of the cluster depth.

- The elements added to the spiral, after the position has been identified, are moved on the x and y axes of a random value between -20 and 20.

After creating the spiral, to make the clusters more differentiated from each other, it will be rotated on an axis and by an angle identified as follows:

- A rotation axis is calculated at first on the xy plane of the cluster. Such axis is identified by a random angle between 60 and 120 or between 240 and 300 degrees.

- Then a random angle is calculated between 15 and 45 degrees.
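By way of illustration only, the noise-generation logic just described could be sketched as follows; the constants (step 1, rotation delta 10, z randomised within ±15% of the depth, ±20 jitter on x and y, the two rotation-angle ranges, the element count as volume divided by 0.75 cubed) come from the text, while the dependence of the number of arms on the element count and every name are assumptions of this sketch.

using System;
using System.Collections.Generic;
using System.Numerics;

static class NoiseGenerator
{
    static readonly Random Rng = new Random();

    // Creates the dummy "noise" stars of a cluster as a spiral of points
    // around the cluster's centre of attraction.
    public static List<Vector3> CreateNoise(Vector3 centre, Vector3 clusterSize, int elementCount)
    {
        // Maximum number of noise elements: cluster volume divided by 0.75 cubed.
        double volume = clusterSize.X * clusterSize.Y * clusterSize.Z;
        int maxElements = (int)(volume / Math.Pow(0.75, 3));

        int arms = Math.Max(1, elementCount / 10);  // arms depend on the element count (assumption)
        const float step = 1f;                      // step is always 1
        const float rotationDelta = 10f;            // rotation delta is always 10 (degrees)

        var points = new List<Vector3>(maxElements);
        for (int i = 0; i < maxElements; i++)
        {
            int arm = i % arms;
            float t = (i / (float)arms) * step;
            float angleDeg = arm * (360f / arms) + t * rotationDelta;
            float angle = angleDeg * (float)Math.PI / 180f;

            // z randomised between -15% and +15% of the cluster depth;
            // x and y then jittered by a random value between -20 and 20.
            float z = ((float)Rng.NextDouble() * 0.30f - 0.15f) * clusterSize.Z;
            float jx = (float)Rng.NextDouble() * 40f - 20f;
            float jy = (float)Rng.NextDouble() * 40f - 20f;

            points.Add(centre + new Vector3(t * (float)Math.Cos(angle) + jx,
                                            t * (float)Math.Sin(angle) + jy,
                                            z));
        }

        // Rotation axis on the xy plane at a random angle in [60,120] or [240,300] degrees,
        // then rotation of the whole spiral by a random angle between 15 and 45 degrees.
        float axisDeg = Rng.Next(2) == 0 ? Rng.Next(60, 121) : Rng.Next(240, 301);
        float axisRad = axisDeg * (float)Math.PI / 180f;
        Vector3 axis = new Vector3((float)Math.Cos(axisRad), (float)Math.Sin(axisRad), 0f);
        float rotRad = Rng.Next(15, 46) * (float)Math.PI / 180f;
        Quaternion rotation = Quaternion.CreateFromAxisAngle(axis, rotRad);
        for (int i = 0; i < points.Count; i++)
            points[i] = centre + Vector3.Transform(points[i] - centre, rotation);

        return points;
    }
}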

This representation mode makes even clusters of a few stars take on a larger size, such as to make the cluster immediately identifiable, while at the same time allowing its "actual" size to be understood at a glance. For example, a cluster of 2 virtual machines (i.e. 2 stars) would not be easily identifiable if we represented 2 stars among thousands of others; however, if these 2 stars visually appeared as a cluster of 100 or more stars, then it would be more easily identifiable. The "noise" generation algorithm does just that, adding a number of false stars to make a small cluster distinguishable and identifiable. In the noise generation algorithm, the number of stars to be inserted, their position and their colourimetric representation are decided so as to show these clusters as plausible galaxies or star clusters. By using this algorithm, unlike other representation modes based exclusively on the data available, it is possible to create a clearer visualisation and not to lose evidence even of quantitatively smaller data.

The described arrangement and noise-creation logics make it possible to create the representation of the clusters with the typical "galaxy" shape around the centre of attraction.

Once the arrangement, clustering and "noise" generation have been executed, the logics relating to the movement of the depicted elements are implemented, in order to make the representation dynamic and to provide additional information on a cluster. To do this, a rotation of the noise around the z axis is created, with an angular velocity between 0.05 and 0.5. The angular velocity depends on the characteristics and values in the information fields present in the elements of layer N. For example, if a Virtual Machine has been added recently to a Business Service, then the "noise" elements related to that Business Service will move at a higher velocity compared to another Business Service where a Virtual Machine was added earlier; likewise, the noise will move at a higher speed if a task relating to a Business Service is currently running on a Virtual Machine, whereas it will move more slowly if it has been a while since tasks relating to a Business Service were run on Virtual Machines.

The same logic applied to the rotation speed of the noise also applies to its colour shades. The colour of the stars created by the noise is intended to emphasise specific states of the actual stars (elements). For example, if we have an alert on a Virtual Machine and we coloured red only the star relative to that machine, that state of alert would be lost in the midst of the other thousands of stars. To address this problem, noise is used in this case too, by colouring it according to the state of alert; more precisely, a colour gradient is set on the various noise stars, having as a final result a coloured galaxy made from different shades of colour depending on the level of alerting.
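A hedged sketch of how the noise rotation speed and colour shading could be derived from the data follows; the text only fixes the angular-velocity range of 0.05 to 0.5, so the normalised inputs and the concrete colour ramp are assumptions of this example.

using System;

static class NoiseDynamics
{
    // Maps a normalised "recency/activity" value in [0,1] to the angular
    // velocity of the noise around the z axis, in the range [0.05, 0.5]
    // given in the description (more recent activity -> faster rotation).
    public static double AngularVelocity(double activity01)
    {
        activity01 = Math.Clamp(activity01, 0.0, 1.0);
        return 0.05 + activity01 * (0.5 - 0.05);
    }

    // Maps a normalised alert level in [0,1] to a colour gradient for the
    // noise stars, from a neutral tone to red; the concrete colours are an
    // assumption of this sketch.
    public static (byte R, byte G, byte B) AlertColour(double alert01)
    {
        alert01 = Math.Clamp(alert01, 0.0, 1.0);
        byte r = (byte)(128 + 127 * alert01);
        byte g = (byte)(128 * (1 - alert01));
        byte b = (byte)(128 * (1 - alert01));
        return (r, g, b);
    }
}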

The final result of what is described is a peculiar and unique overall view of a plausible universe formed by galaxies of different shape, size, colour and movement, distinctive and easily identifiable, which are capable, through a single evocative representation, of conveying the meaningful information present in the different layers of data.

According to a further aspect, with respect to what has been described previously, anomalies may be present when retrieving the data, i.e. not all the data extracted from the various sources can be properly categorised within the individual layers, because some items may not have attributes ascribable to a specific layer (e.g. there can be Virtual Machines without the Business Service attribute, which would not allow these elements to be inserted into the defined structure). In these cases, in order not to lose the visibility of such an element, it is still depicted, but not within the arrangement and clustering system previously defined: a particular galaxy is created, consisting of all the abnormal elements and called "Black Hole", which actually becomes a container of non-classifiable elements.

This "Black Hole" draws on the arrangement and clustering logic of the layer N-2 and is composed of all the spurious elements of the layer N-1 (which cannot be associated with a specific Business Service) . At a positional layer, the "Black Hole" does not follow the logic of the other "galaxies" and is positioned at a fixed (and custom) distance from the main centre of attraction.

When drilling down between layers, i.e. moving from layer 1 (Customers) to layer 2 (Contracts) and then to the other layers, it is important to be able to see specific relations or correlations existing between the different elements depicted. For example, if one drills down to a Business Service, the system will show a set of stars representing the different Virtual Machines that belong to that Business Service; at this point, if one is interested in specific links between these machines (such as, for example, which Virtual Machines are connected to each other, which machines are accessing a specific shared application, or which are instantiated on the same physical machine), these will be shown as links between the different stars, i.e. as connection lines between the elements that are in relation with each other. This representation refers to the logic of constellations, where asterisms are created that are recognisable by their geometric shape and independent of the other aggregation logics. These asterisms can be used both for elements belonging to the same layer and for elements belonging to distinct layers (e.g. knowing which Virtual Machines have a contract where specific SLAs are provided).
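A hedged sketch of how such "constellation" links could be collected as connection lines between related elements follows; the relation tuple shape and all names are assumptions of the example, and the source of the relation (shared application, same physical host, SLA, ...) is outside its scope.

using System.Collections.Generic;

// Hypothetical sketch: a pair of related elements produces a connecting line,
// marked as fixed (persistent relation) or dynamic (momentary relation arising
// from a particular event or association process).
class ConstellationLink
{
    public string FromElementId;
    public string ToElementId;
    public bool IsDynamic;
}

static class ConstellationBuilder
{
    public static List<ConstellationLink> Build(
        IEnumerable<(string source, string target, bool isDynamic)> relations)
    {
        var links = new List<ConstellationLink>();
        foreach (var (source, target, isDynamic) in relations)
            links.Add(new ConstellationLink { FromElementId = source, ToElementId = target, IsDynamic = isDynamic });
        return links;
    }
}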

The filtering system is based on the dynamic repositioning of the elements (stars) present according to the criteria set by the filters. For example, if we want to know which contracts were opened over the last month and their value, such a request is set on a filter panel, and the system reorganises and rearranges the clusters of the corresponding layer (Contracts) on 2 axes, where the x-axis carries the days of the last month and the y-axis a scale of contract values. Clusters of contracts that previously followed a specific arrangement logic move to occupy the corresponding position on the axes, according to the activation date of the contract and to its value. This representation mode actually generates a new "universe", arranged and organised according to the rules of the set filter.
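A minimal sketch of this repositioning for the contract example follows; the axis lengths, field names and normalisation are assumptions introduced only for the example.

using System;
using System.Collections.Generic;
using System.Numerics;

class ContractCluster
{
    public string Name;
    public DateTime ActivationDate;
    public double Value;
    public Vector2 ChartPosition;   // position on the filter axes
}

static class ChartingFilter
{
    // Repositions the contract clusters on two axes: activation day of the
    // last month on x, contract value on y.
    public static void Reposition(List<ContractCluster> contracts,
                                  DateTime monthStart, double maxValue,
                                  float xAxisLength = 100f, float yAxisLength = 100f)
    {
        int daysInMonth = DateTime.DaysInMonth(monthStart.Year, monthStart.Month);
        foreach (var c in contracts)
        {
            float x = (float)((c.ActivationDate - monthStart).TotalDays / daysInMonth) * xAxisLength;
            float y = (float)(c.Value / maxValue) * yAxisLength;
            c.ChartPosition = new Vector2(x, y);
        }
    }
}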

In addition to what has been described so far, another characterising element of the system is the ability to ascribe abstract or logical elements (Virtual Machines, Business Services, etc.) to actual physical elements. A Virtual Machine will surely run on a given physical machine, and a Business Service will consist of a set of software that is running on a particular virtual or physical machine. The identification of the physical elements associated with the logical elements is done as follows: when entering a particular layer (e.g. Layer N-1, Virtual Machine), the "projection" mode can be activated, i.e. a feature that shows where these Virtual Machines are actually located in the territory, in particular in which Data Centre. When the projection is activated, a territorial map appears on the screen that was showing "the universe", positioned below the "universe" itself (think of the horizon, where the stars can be seen in the upper part and the earth in the lower part); the same screen then contains both the abstract logical view (the sky with the galaxies) and a physical view (a map on which POIs are positioned, representing, for example, Data Centres). At this point, from the cluster of layer N-1, for each star present (Virtual Machine), connection lines depart towards the physical POI (Data Centre, Building, Headquarters, etc.) where the physical machine hosting that virtual machine is located. Through this representation, it is possible to have at a glance a mapping of the actual physical location of the elements present in the different layers. From this visualisation mode, where the different POIs are shown on the map, it is possible to drill down further: by selecting a POI, a new representation mode is entered in which a three-dimensional representation of the chosen POI (Data Centre, Building, Office, etc.) is shown, with all its physical assets (e.g. Racks, Physical Machines, etc.), and in which the physical machines associated with the virtual machines belonging to the previously chosen cluster of layer N-1 are highlighted. The representation of the physical elements becomes the final element of the whole chain: starting from an abstract view of the elements by means of a representation of the universe consisting of galaxies of distinct entities, there is a projection thereof towards the actual points, until arriving at the actual physical element present within a building, in the position where it is really located.
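A hedged sketch of the projection step follows: each layer N-1 star (Virtual Machine) is linked by a line to the georeferenced POI where its physical host resides. The data structures, identifiers and lookup are assumptions of this example.

using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch of a "projection" line from a star in the "universe"
// view down to the georeferenced POI (Data Centre, building, ...) on the map.
class ProjectionLine
{
    public Vector3 StarPosition;     // position in the "universe" view
    public double PoiLatitude;       // georeferenced position on the map below
    public double PoiLongitude;
}

static class ProjectionBuilder
{
    public static List<ProjectionLine> Build(
        IEnumerable<(Vector3 starPosition, string dataCentreId)> virtualMachines,
        IReadOnlyDictionary<string, (double lat, double lon)> dataCentres)
    {
        var lines = new List<ProjectionLine>();
        foreach (var (pos, dcId) in virtualMachines)
        {
            if (!dataCentres.TryGetValue(dcId, out var poi))
                continue;   // no known physical location: nothing to project
            lines.Add(new ProjectionLine
            {
                StarPosition = pos,
                PoiLatitude = poi.lat,
                PoiLongitude = poi.lon
            });
        }
        return lines;
    }
}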

BRIEF DESCRIPTION OF THE FIGURES

Figure 1 shows an implementation chart of an exemplary embodiment of the present invention in a client-server architecture.

Figure 2 shows a server flow chart according to the exemplary embodiment of Figure 1.

Figures 3 to 14 show different screens of an exemplary embodiment of the interface that depicts the structure of the universe, with different examples of features.

DETAILED DESCRIPTION OF THE FIGURES

In a preferred embodiment, the system has the following characteristics, which can be applied to data referable to:

• Abstract logical elements

• Referable elements in space

• A combination of both of the above

In particular, the software system offers the following key features:

• Mapping: representation on map of georeferenced elements

• Clustering: representation grouped by theme and hierarchy, based on Nebular representation

• Filtering: exclusion of unnecessary objects so as to keep the focus on navigation

• Searching: the search can take place at every layer of the presented data, graphically emphasising the results thereof

• Charting: representation of element data repositioned on reference axes based on specific filtering/searching criteria

• Relationship: representation of relations and correlations between logical and/or physical elements

• Projection: projection on a georeferenced map of abstract logical elements present in the Nebular representation

• Topology: physical representation of an infrastructure and topological representation of connections between physical and/or logical elements

• Alerting: Representation of alerts and notifications related to the elements depicted

• Dynamics: the representation changes and evolves dynamically as the data changes, thus making it possible to always have an updated situation and a replay of the history through time sliders.

Obtaining the key features just described passes through the primitives and the technical/functional characteristics of the system, which provide:

• Integration, aggregation and visual correlation of data coming from heterogeneous sources through communication connectors.

• Three-dimensional graphic representation based on the latest generation Game Engine

• Navigation and gesture interaction in three- dimensional environment through a native NUI-type interface

• Integrated 3D GIS engine that allows spatial representation on vector maps of static and dynamic georeferenced elements

• Procedural creation feature of a Nebular representation of data coming from GraphDB, graph or tree structures

• Bidirectional communication with sub-systems to be integrated, receiving information and sending commands.

• Continuous navigation on 3D vector map or referenced environment with no wait times due to loading, providing contextual information based on the navigation layer.

• DWIN feature: provides the user with information according to what he is viewing or based on the occurrence of a particular event, thus making it possible, if necessary, to draw on the historical actions performed by the operator/user in the past, in relation to a phenomenon that has occurred and is comparable to the one underway.

• Topological representation of graph structures (physical/logical networks)

• Management of notifications and alerts

• Dynamic graphic representation of data depending on the time period

• Representation of information by using superimposed layers, allowing the simplification of visual correlations and direct cause-and-effect association generated by an event

• Representation of conciliation, relation and correlation connections of logical or physical elements

• Projection representation between georeferenced logical and physical elements

• CAD procedural import and extrusion

• Real-time management of data flows

• Simultaneous management of heterogeneous data (streaming, video, audio, tables, graphs, values, etc.) on the same interface

As described above, the present invention is implemented on two distinct components, a Client component and a Server component, where both possess business intelligence but with distinct characteristics and features (FIG. 1).

The Server component 100 of the system has the task of communicating with different sub-systems and data sources denoted by 101, 102, 103 and of processing, conciliating, and structuring these data and making them available to Clients 110.

The Client component 110 communicates in synchronous or asynchronous mode, in a bidirectional manner, with the Server 100 and renders 112 the data found, making them usable by the user 120 through gesture interaction on a multi-touch device.

The different macro layers that make up the Server 100 are identifiable as:

• External Connection Layer 104

• Process and Correlation Layer 105

• BE-FE Connection Layer 106

• Functional DB 107

The External Connection Layer 104 consists of a set of connectors distinguished by features, type and protocol, which are the channels of communication with the existing data sources 101, 102, 103 to which the system will have to connect. Such connectors can communicate in both synchronous and asynchronous modes, can be mono- or bidirectional and can handle both static and real-time data. The different types of connectors are identified below; the supported formats/protocols are given in brackets:

• GIS Connector (GeoJSON, PBF, MVT)

• CAD Connector (DXF, 3ds max objects, Obj, Fbx, Dae, Collada, SKP)

• Video streaming Connector

(Input media: UDP/RTP Unicast, UDP/RTP Multicast, HTTP/FTP, TCP/RTP Unicast, DCCP/RTP Unicast, RTSP, File, MPEG encoder)

(Input format: MPEG-1/2, DivX® (1/2/3/4/5/6), MPEG-4 ASP, XviD, 3ivX D4, H.261, H.263/H.263i, H.264/MPEG-4 AVC, Cinepak, Theora, Dirac/VC-2, MJPEG (A/B), WMV 1/2, WMV 3/WMV-9/VC-1, Sorenson 1/3, DV, On2 VP3/VP5/VP6, Indeo Video v3 (IV32), Real Video (1/2/3/4))

• Hi Level Connector (HTTP REST/RESTful, JSON/XML REST POST, SOAP WSDL/XML, SNMP, SMTP, Redis, MQTT, Socket TCP/UDP, WebSocket)

• DB Connector (MySQL, SQLite, MS SQLServer, PostgreSQL, MongoDB, Oracle)

• File Connector (txt, csv, xml)

All data obtained from the communication of these connectors with the different external systems are analysed and structured by the Process and Correlation Layer 105. This layer constitutes the data normalisation and conciliation engine, where data of different types are logically linked and different, heterogeneous data structures are encapsulated in defined structures, which will then be manageable by the client component.
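A hedged sketch of what such a connector abstraction and normalised structure could look like is given below; the interface, record fields and names are assumptions of this example, not the actual internal API of the system.

using System;
using System.Collections.Generic;

// Hypothetical normalised structure produced by the Process and Correlation Layer.
class NormalisedRecord
{
    public string SourceId;                              // which external system produced the data
    public string EntityType;                            // e.g. "VirtualMachine", "Contract", ...
    public DateTime Timestamp;
    public Dictionary<string, string> Attributes = new Dictionary<string, string>();
}

// Hypothetical connector abstraction of the External Connection Layer.
interface IConnector
{
    string SourceId { get; }
    bool IsBidirectional { get; }

    // Reads raw data from the external source (synchronously or asynchronously
    // in the real system) and returns it already encapsulated in the defined structure.
    IEnumerable<NormalisedRecord> Fetch();
}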

The Functional DB 107 consists of multiple databases of different types that have the task of being the buffer between the external systems 101, 102, 103 and the Clients 110. Such DBs are not intended to replicate the information permanently; rather, the information passes through them only for the time necessary for normalisation and sending to the Clients. The Functional DB 107 acts as a data cache and simultaneously also as a repository for local functional data purely linked to the server system and for GIS data.

The BE-FE Connection Layer 106 is the communication layer between the Server 100 and the Clients 110. Such layer exposes the structured data in a form that can be interpreted by the Clients, in both asynchronous and synchronous modes, thus allowing not only data reading but also bidirectional communication between the Clients 110 and the Servers 100.

The different macro layers that make up the Clients 110 are identifiable as:

• FE-BE Connection Layer 111:

• W3 Rendering Layer 112:

The FE-BE Connection Layer 111 is the communication layer between the Client 110 and the Server 100, it is the equivalent Client 110 side of the BE-FE Connection Layer 106 on the Server 100, operating in identical and mirror mode.

The W3 Rendering Layer 112 is the engine that manages the representation and the interaction of the system.

Such layer has peculiar characteristics that make the system that is the object of the invention unique.

The W3 Rendering Layer 112 consists of a set of software engines, libraries and modules that communicate and interact with each other, as will be described in more detail with reference to Figure 2.

The W3 Rendering Layer 112 has as its central element a 3D Game Engine 200 to which libraries and other software engines are attached.

The software libraries consist of a set of functions and data structures designed to provide particular features to the 3D Game Engine 200.

The engines are specialised software cores consisting of a set of libraries, functional modules, scripts and tools that characterise the type, which are attached to the 3D Game Engine to evolve it with new complex characteristics, methods, structures, roles and specific features.

The engines and libraries described characterise the system from the representative, organisational and interactive points of view and distinguish it from a functional point of view.

The libraries and engines that constitute the system that is the object of the invention are:

Libraries:

• Essential 201: basic libraries of representation, navigation, interaction and interprocess communication

• Alerting 202: management of alerts and notifications

• Continuous Navigation 203: navigation of an environment within a single interface, from the panoramic view to the detail of each individual element, or vice versa, without wait times or loadings that slow down its operation, always remaining within a zoom-based scenario. Such continuous navigation in fact makes it possible to have an infinite plane of information always available and usable in a simple, direct and quick way. The "continuous navigation" is based on proprietary technology that makes it possible to navigate a 3D vector map or referenced environment without wait times due to loading, and provides contextual information based on the navigation layer. Such technology allows the caching in GPUs of the zoom layers contiguous to the "currently depicted" one, bringing with them the respective references to the elements present. Such navigation mode simplifies every step of the analysis process, from data preparation to relationship discovery.

• Augmented Layering 204: The representation of information by using superimposed layers allows the simplification of visual correlations and the direct cause-and-effect association generated by an event. Through this library it is possible to depict in the same screen and within the same environment elements of different nature, format and characteristics. You can simultaneously depict maps, 3D buildings, CADs, BIMs, POIs, markers, areas of interest, images, documents, videos, audio/video streaming coming from security systems, tables, graphics, topologies, GIS layers, etc. Each element depicts an information layer that enriches the context with new information.

• Live Dispatching 205: allows the dispatching of tasks, data, events and procedures from the central command and control system to peripheral devices, to be understood as other remote command and control systems or to mobile devices equipped with a special companion app.

Such library provides the basic tools for the interaction, representation and management of messaging, file exchange and intercommunication both deferred and real time, such as video streaming, VOIP communication, Push to Talk.

• Proximity Authentication 206: allows the access to the system and profiled authentication through proximity, proximity + smartphone, proximity + smartphone + fingerprints through a companion App for smartphone and the use of BLE beacons.

Such system access mode, in addition to ensuring different layers of security (Strong Authentication), makes it possible to orient the user interface or dashboard based on where the user is located. If the system provides multi-user support, each user dashboard will be oriented and positioned near the user and will contain data, information and commands related to the profiling of that specific user.

• Gestural Recognition 207 : The navigation and interaction in the three-dimensional environment occurs by using touches and the recognition of particular movements and/or gestures interpreted by a convolutional neural network able to learn and recognise gestures made by the users, improving and becoming refined autonomously over time.

Engines:

• 3D Game Engine 200: It is the central element of the W3 Rendering Layer and has the task of representing the information in a three-dimensional environment. Thanks to this 3D graphics engine, which arose from gaming, the interface component of the system makes it possible to depict dynamically and in real time millions of polygons, with a number of elements simultaneously referable and addressable in the order of hundreds of thousands in a single scene, in a three-dimensional environment, thus keeping the quality of the interaction and the fluidity of the system high, which would otherwise be impossible with traditional web-based software or WebGL. It makes it possible to produce high-quality images, manage physics between objects, collisions and special effects, and return images of high visual impact with high-level performance, still between 30 and 60 FPS. Such engine makes the most of the hardware of the latest-generation video cards and allows the system to scale by scaling the HW of the video card. The use of this graphic engine also provides an interface that does not suffer from limitations in the arbitrary use of resources for computation and memory usage and has full access to the GPU, thus making it possible to run low-level native code to exploit the full potential of the HW that runs it.

• Asset Engine 208: It is the engine for the graphic representation and management of planimetries, CADs and assets. It allows the representation of elements referable in space through relative, non-geographical coordinates. Through procedural extrusion, starting from a CAD-type source file, it makes it possible to depict the 2D planimetry three-dimensionally and to depict three-dimensionally the other elements identified in the different layers of the CAD, such as specific infrastructural objects, racks, sensors, cameras, etc. Such engine basically makes it possible to turn a static 2D CAD planimetry into a 3D interactive environment formed by addressable "live" objects (thanks to the work combined with other libraries or engines present in the system). Such engine also offers editor features to the end user, making it possible to move, rotate and scale the objects represented and present within the system, as well as to insert new objects (present in the Asset Engine library) and to edit or remove objects already present.

• GIS Engine 209: It is the proprietary 3D GIS vector engine, developed to be able to depict and to interact with maps and georeferenced elements on the territory. On this peculiar and characteristic component of the system, a cartographic, vector, georeferenced layer has been developed, which rests on the 3D graphic engine and takes advantage of its powerful representation characteristics.

The 3D Game Engine 200 was born and is used in the world of gaming to create video games and, therefore, has features that make it very powerful in terms of dynamic rendering; at the same time, however, it lacks some characteristics and/or features needed for worlds other than the playful one. One of these is precisely the georeferencing of elements through lat./long. on a vector GIS.

The GIS engine 209 that was developed makes it possible to address territorial elements via geographic coordinates and to have a 2D/3D vector mapping of the whole world.

The data rendered on the GIS is extracted from the back-end component, which in turn contains a GIS Server system that preserves vector maps, after extracting them from different providers (each of which provides a specific layer of representation); these maps are associated, merged and exposed as a service to the GIS component present on the client.

The data extracted from the server and present on the client are represented and rendered dynamically by the graphic engine, which, thanks to the developed GIS module, makes it possible to map, address and use spatial data through touch and gestures.

The system is designed and developed for the interpretation and representation of spatial data based on vector graphics, thus offering many advantages compared to the classic system of image tiles, including:

• Efficient and consistent use of graphics card design technologies

• Minimising the amount of data being interpreted

• Eliminating load times

• Representing information according to the level of detail of navigation

• Support for multilayer information

• Greater precision in the design of territories and areas of interest

• Efficient support for the simultaneous representation of numerous objects on the territory

• Representing 3D objects immersed in the territory

• Support and connectivity to and from major GIS data standards with possible customisation of proprietary environments.

The built-in GIS engine 209 guarantees a high level of performance and accuracy and allows an extension with data coming from third-party or proprietary products. Additionally, the numerous geospatial tools provided allow the user to study and interact with the territory in a seamless and unique way, always following the "natural" model by using touches and gestures.

• DWIN Engine 210: In order to provide the user with information based on what he is viewing or based on the occurrence of a particular event, the system includes the DWIN engine (Data When I Need), that is, an engine that, according to specific rules, shows the set of information, data and elements considered useful and contextual to the layer of representation at that particular time. Additionally, the DWIN makes it possible to draw on the history of actions performed by the operator/user in the past, in relation to an occurred phenomenon comparable to the one underway. Such engine, in conjunction with the Continuous Navigation, makes it possible to propose and show the user what he actually needs at that time, based on the zoom level he is at and on what the screen is showing at that time. The purpose of the DWIN is not to make a decision for the user but to make available to the user all the data and information necessary to make the best decision at that particular time.

The DWIN is a proprietary system based on a rule engine and on predictive analysis algorithms.

• Nebula Engine 211: The representation and use of large amounts of data, especially unstructured data with graph relationships, have always been difficult and very complex. The current systems of graph representation show a dense and complex network of elements and relationships with each other, with a level of complexity of interaction and analysis directly proportional to the number of elements and relationships present.

The Nebula Engine was designed and built thinking of modelling the represented databases as part of a universe that collects them all. For this similarity, we wanted to use a nomenclature and an organisation that allows parallelism with astronomy.

The basic idea is to represent a large number of logical, dynamic and abstract elements, conceptually linking these elements to a "known" world.

The representation of the universe was conceived by paralleling the idea of the computer cloud (or other types of abstract logical elements with graph structures, such as social media) and the astronomical nebula. A nebula is understood as an agglomeration of logical entities, clusterisable, organisable and navigable.

Single elements orbit around larger elements, which in turn are attracted by even more intense gravitational forces to shape the entire universe.

Following such logics, the Nebula Engine is capable of:

Providing a visual mapping, a full picture as immediate as possible, already providing from the main screen an indication of the status of the depicted elements

Providing a navigation based on specific entry points from which to descend or ascend hierarchically along the entire stack (e.g. of the services and/or infrastructure), with both a Top-Down approach (e.g. from the Client to the machine) and a Bottom-Up approach (e.g. from an IP to the Client).

Providing tools to address the problem directly, through nebula filtering and searching commands.

Providing tools that can highlight relationships/correlations between elements (constellation concept)

Providing an immediate and intuitive navigation system based on "known" paradigms through a conceptual parallelism between abstract and reality.

An example of this representation and its features is shown in Figures 3 to 14.

The representation of the Nebula occurs according to an overview of a part of the universe where: each star represents the last element of navigation (e.g. a virtual machine, or a single message if we talk about social media); each star, by shape, colour and size, expresses specific characteristics of the element, as shown in Figure 6.

Each star can in turn have sub-information (e.g. OS/middleware/application software present in a machine, or disks attached to it, etc.), which can be identified in planets present around the star element.

Each star will be in the proximity of other stars that have characteristics that link them (e.g. they are different machines that deliver the same service) by forming a cluster of stars (binary or open cluster)

Each cluster of stars will in turn be positioned in the proximity of other clusters that have characteristics that link them (e.g. they are services that belong to the same contract), thus forming a new cluster formed by several sub-clusters.

This logic is recursive for each layer of aggregation up to the last layer (e.g. customers of a company) that forms the Galaxy (e.g. customer Galaxy)

On each layer, starting from the Galaxy down to the single star, the representation is carried out by emphasising these clusters through the algorithmic generation of a "noise", with the aim of amplifying a particular state present within the same cluster. This "noise" makes it possible to accentuate clusters that would otherwise be lost from sight due to their size, or that could not transmit alerting information.

The Nebula Engine also provides for the representation and management of elements that are non-clusterisable or lack references to other entities. In this case, such elements are inserted and reorganised into a "black hole" (again by astronomical association), a graphic element containing spurious entities (which, not being attracted to other clusters, are attracted to the black hole).

An example of this representation is shown in Figure 3.

This hierarchical navigation of a graph makes it possible to navigate the entire universe starting from the grouping element (cluster) considered most appropriate, until obtaining the desired information. An example is shown in Figure 4, which shows how top-down navigation occurs.

The relationships between elements can also occur following a logic different from the clustering, that is through the logical link of elements present in a cluster with those present in other clusters. Such link is depicted with connecting lines between these elements, thus forming and drawing constellations, as shown in Figure 11. Such constellations can be fixed or dynamic. Fixed when a link between N entities persists, dynamic when this relationship is momentary and the result of a relationship arising from a particular event or association process.

A further variant of visual organisation is represented by the Charting feature, which allows the repositioning of elements on reference axes according to specific filtering/searching criteria, as shown in Figure 10. Basically, the filtered/searched or selected stars move from their position and are positioned near Cartesian axes that appear on the screen (2 or 3 axes), placed, with reference to these axes, in the position dictated by the value of the element that is the object of the filter or search (e.g. if one is interested in searching for the memory allocation of the systems present, the stars representing the systems are placed, on the reference axes, in the position relative to their memory value, thus generating a graph that is horizontal to the entire nebular structure).

The nebular representation of logical elements offers an evocative and organised view of an abstract world. Such a world, and the elements present in it, also carry "less abstract" information referred to a world that is more "physical" and material (e.g. a virtual machine carries the information referred to the physical machine where it resides; the physical machine has the information that denotes which rack it is connected to; the latter has the information of the data centre that hosts it; and so on). This information allows the system set forth and object of the invention to carry out a logical-physical projection or, to continue the parallel, a sky-earth projection, as shown in Figure 12.

If we have considered the abstract world as a universe, a "starry sky", by looking down towards the earth we will see what is referred to as matter, physicality and rationality: we switch from observing the stars to observing a territory made up of elements that have their geographical location and their physical size and that live in a well-defined, structured and tangible context.

The representation features for the link and association between the logical elements and the relative physical element are what we define as Projection features.

The visual Projection is represented as beams of light that start from the stars (logical systems) towards the georeferenced physical place (Data Centre) where these elements are in reality.

The projection serves as a link between the representation generated by the Nebula Engine and the representation generated by the DC Engine 212, representing the physical infrastructure of the data centre.

• DC Engine: If the Nebula Engine 211 creates a nebula showing a universe of logical elements, the DC Engine 212 (Data Centre Engine) takes care of the representation of the physical infrastructure. The DC Engine functionally rests on the Asset Engine 208 and inherits its properties by extending and specialising them in a very specific scope, which is that of Data Centres.

That engine brings a range of assets, properties and features typical of the Data Centre world, linked to the management of racks, servers, workstations, disks, network apparatuses, UPSs, air-conditioning plants, sensors, etc., as well as features linked to the network topology, and it makes it possible to show and possibly manage the connectivity between different apparatuses within or between data centres.

The planimetric representation by CAD is managed by the Asset Engine 208, as well as the asset population, while all navigation logic is peculiar to this engine.

The navigation logic of the DC engine 212 is based on the Continuous Navigation 203 libraries, which make it possible to have a 3D panoramic view of the Data Centre, that is, to see the Racks and the other plant apparatuses positioned on the planimetry and, from this view, through a drill-down on a single rack or apparatus or through a zoom, to descend in depth: from the rack, the servers or network apparatuses present on it will appear in their rack-unit positions, and so on, descending until reaching the single software that runs on a virtual machine. As Figure 13 shows, for each element depicted, all data and information related to it and/or ascribable to it, such as notifications, alerts, system status, etc., are shown simultaneously.

The DC engine 212 offers the same features as the Nebula Engine 211 but through a georeferenced, real and physical representation of the systems.

According to the present invention, the software package was designed and developed around the Unity 3D graphic engine (3D Game Engine module) for the W3 Rendering Layer of the Client; the engines, libraries and modules in the same layer were developed in the C# language and are linked to the 3D Game Engine.

Still Client side, the FE-BE Connection Layer (independent of the 3D Game Engine) was developed in C#.

Server side, the BE-FE Connection Layer and the Process and Correlation Layer are developed by using a set of languages, in particular Python and Go. The External Connection Layer, being structured as microservices, contains modules developed in different languages, depending on the type of connector or data source to be integrated. Client side, the application can be run on Windows and Linux operating systems and on Android and iOS mobile platforms, based on the compilation made and on the required features.

The application has been designed to be natively usable through multi-touch devices, but it remains compatible with traditional I/O devices, such as mouse and keyboard.

Server side, the system runs on Linux operating system on physical or virtualised apparatuses.

The added value of the system according to the present invention is to be found substantially in the mode of representation of, and interaction with, large amounts of data of different types, which, through advanced rendering and interaction techniques and algorithms, makes it possible, through a single evocative and original NUI-based dynamic 3D interface, to control, manage and command the sub-systems connected to the system.