

Title:
COMMAND AND CONTROL SYSTEM FOR OPTIMAL RISK MANAGEMENT
Document Type and Number:
WIPO Patent Application WO/2016/170551
Kind Code:
A2
Abstract:
The present invention relates to the realisation of a command and control centre, based on physical and virtual machines, for the monitoring of human resources, data and corporate or institutional facilities, to enable a systematic assessment of vulnerabilities and threats. The command and control centre analyses data from corporate databases together with external data acquired and analysed through crawling and semantic intelligence techniques which, thanks to an integration boost, make it possible to highlight, for each asset of interest, a number of indicators useful to evaluate and synthesise the vulnerability degree (also on the basis of the data provided by a checklist) and the threat degree, based on survey results and qualitative analysis carried out by specialised teams of analysts, in order to determine the priorities for action in relation to the totality of the analysed assets. In addition, through a statistical synthesis, an objective weighting system is detected inside the questions included on the checklist, which makes it possible to highlight the questions that most affect the analysis results. Punctual arrays provide the risk gradient for the company or public or private entity, in a standardised and homogeneous form, and its threat rank worldwide. The punctuality of data avoids inconsistent approaches by giving the user objective data, based on qualitative and quantitative analysis, which are suitable to provide real interpretations of threats. The integration of multidisciplinary databases allows decision-makers to give immediate responses to crises through the use of genuine data (updated locally according to the bottom-up concept) that can be used simultaneously by multiple managers from every part of the world, according to hierarchical principles that the system can set for itself in relation to specific needs. The system may be used in complex business processes such as finance, trade, anti-fraud and counterfeiting, strategic marketing, business security and protection; it may also make it possible to adapt the administration of security roles and delegations; to manage documentation and staff attendance; to provide information documents on security risks (travel guides) and health risks (health care travel guides); to allow the administration/planning of staff attendance, including access administration; to manage emergency/evacuation plans in operating sites; and to automate the administration of deadlines for Security Plans and their procedures.

Inventors:
SACCONE UMBERTO (IT)
Application Number:
PCT/IT2015/000109
Publication Date:
October 27, 2016
Filing Date:
April 21, 2015
Assignee:
GRADE S R L (IT)
SACCONE UMBERTO (IT)
International Classes:
G06F17/30; G06Q99/00
Other References:
See references of EP 3286660A2
Claims

1) A command and control system, based on an ICT platform, consisting of interconnected physical and virtual machines, mainly computers, physical network equipment, algorithms and processing systems for information available in databases of public and private companies or entities, together with freely available external data which allow to support the administration of emergency management, risk analysis activity and the monitoring of security or business processes and to maintain the history of all relevant information, characterised in that it comprises a data centre with two Apache web servers (1); clones (2), each of which hosts a wget crawler for website mirroring or FTP that analyses its content and downloads files from the web; an anonymisation module (3) for privacy, data partitioning and distribution; a module (4) to monitor web applications and VMware virtual environments as well as to detect new devices and automatically allocate performance tests to detected items; a module for automatic translation (5) that interfaces with both the wget crawler and the module (6) and its clone (6'), for the normalisation and preliminary processing of various documents, in the form of text and images, from databases; a Linux-based NAS module (7), having the task of centralising data storage in one single device accessible to all nodes on the network, making at the same time files available on different platforms and allowing to implement RAID schemes (Redundant Array of Inexpensive Disks) to ensure a higher data reliability; a cloned module (8) from a cluster (8'), a subset of virtualised hardware assets as a separate computer, which relate mainly to data from module (1) and module (6); a semantic module (9) which uses a dedicated infrastructure consisting of 5 physical servers and uses applications redundancy mechanisms through 13 virtual servers to provide a greater scalability; a module (10) enabling to monitor the maximum number of process instances in memory and executing them sequentially; a module (11) enabling the creation and use of maps, geographic data compilation, maps analysis, sharing of geographic information and the administration of geographic information in a database, which are also available on the web and accessible through smartphones and tablets; a module (12), closely interconnected with module (11), enabling the integration of geographic and spatial information into business information systems, on any scale and detail levels; a high-security communication module (13) based on an encryption system; a module (14) for credentials administration and access to IT applications; a module (15) for enterprise content management (ECM) in any forms, such as images, e-mails, scanned documents and electronic files; two dynamic web servers (16) that rely on the databases; a reporting platform (17) having an interactive user interface to analyse data and enable the decision-making process in any place and at any time, even with continuous mobile access; a checklist arranged in 24 sections having about 1,150 questions and representation diagrams; a synthetic index of vulnerability and a synthetic index of threat, to which any given asset is subjected, for the creation of a synthetic index of the overall risk supported by a coefficient of variation, calculated as the ratio between the standard error and the arithmetic average, multiplied by 100.

2) A command and control system as stated in claim 1), characterised in that the creation of the synthetic index of the vulnerability level is achieved as follows: by using the analysis of multiple matches, 4 synthetic variables, which actually sum up the "structural" features of the different analysed resources, are isolated and a hierarchical classification algorithm is applied to identify three homogeneous groups of assets, by sorting them according to their average vulnerability level and creating, for any resources of each group, an overall score based on the scores assigned to each of the possible replies to each question on this checklist; subsequently, for each analysed asset, a "belonging probability" to each of these groups is calculated: the group having the highest average score, and thus the highest (average) vulnerability levels, is labelled as a High Vulnerability group, whereas the groups having gradually lower (average) vulnerability levels, are labelled as average and low vulnerability groups; and the belonging probability to the "High Vulnerability" group is interpreted as a synthetic index of the asset vulnerability.

3) A command and control system as stated in claim 1) characterised in that the creation of the synthetic index of the threat level is based on evaluating 29 factors related to four threat categories: Politics, Terrorism, Crime, Ethics, and, similarly to what is provided for the questions on the checklist, each threat factor has been assigned with an integer value comprised between 1 and 10, and the average values are obtained from the above values, both for the individual types of threat and the overall threat, through the arithmetic average of the provided scores and by using both the Delphi technique - through the evaluation of the opinions of a group of experts which are combined in a single shared view - and the average positions, according to the later aggregation stages, starting from the different numerical values of the various indicators, with a precise weighting system needed to assess, where needed, the weight of each indicator or threat, and by simultaneously calculating the monitoring indexes that control the behaviour of the highest and lowest average values.

4) A command and control system as stated in claim 1), characterised in that, for the creation of the synthetic index of the overall risk to which each asset is subjected, the arithmetic average of the two values regarding the threat and vulnerability indexes is used as a measure of synthesis, so that, when the threat and vulnerability levels are substantially different, the synthetic index describes their high level of variability; when vulnerability and threat levels are basically the same, the synthetic index describes a low level of variability, so that the variability can act as a sort of alert: the higher it is, the more consistent the situations it describes are; the lower it is, the more it may hide potentially dangerous situations.

5) A command and control system as stated in claim 1), in which the applications managed and contained in modules (5), (11), (12), (13), (14) and (17) also allow to adapt the administration of security roles and delegations; to manage documentation and staff attendance; to provide information documents on security risks (travel guides) and health risks (health care travel guides); to allow the administration/planning of staff attendance, including access administration and emergency/evacuation in operating sites around the world; in addition, it allows to manage automatic deadlines of Security Plans and their procedures.

6) A command and control system as stated in claims 1) and 5), characterised in that the system is also a command and control centre for managers of corporate control; a data analysis and information hub for managers of corporate development; a compliance operation centre for managers of corporate compliance; a reporting aid for top management; an instrument of corporate fraud prevention/detection.

COMMAND AND CONTROL SYSTEM FOR OPTIMAL RISK MANAGEMENT

Description

Technical field

The object of the present invention is a command and control system based on an electronic platform consisting of interconnected physical and virtual machines, mainly computers, physical network equipment, processing systems and algorithms which allow to search for, extract and organise information and to run probability computations in order to forecast events and manage their explanation.

State of the art

It is known that the material which is the subject of intelligence information is mostly unstructured: communications, conversations, data, news, etc. rarely have an organised form, that is, they are rarely held in structured databases.

The systems traditionally used to handle unstructured information, as is obvious to any user of Internet search engines, are not able to solve the issue of discerning useful data from irrelevant data: statistical technologies, or technologies based on the use of keywords, consider text as sequences of characters and fail to understand its meaning. Over recent years, systems based on semantic analysis have emerged to extract relevant information and organise it for the intended purposes: manage data, support arguments and stimulate insights.

The most advanced semantic technology focuses on the meaning of words rather than on the sequence which they form. Therefore, it is characterised by the ability to focus on the content of texts (topics, concepts, key entities), regardless of how they are expressed. Advanced systems have also emerged which are able to track down any information, use a multiplicity of data by analysing texts and identifying the conceptual links between documents and, additionally, by geo-referencing processed information and linking it to risk situations in order to assist analysts in the activities of Corporate Intelligence and Homeland Security.

However, the prior art lacks a fully automated intelligent system, of the kind described in the present invention, to solve the security problem of a territory or an industrial site through the avoidance or reduction of bodily injuries and damage to property, places, institutions and symbols, but also through the prevention of the planning of damage to other communities or industrial sites.

In particular, it lacks an automatic system able to predict and control possible scenarios on the how, who, when and where of future attacks and damage, to trace an event that happened back to the conditions and/or people who determined it, and to suggest or control possible evacuation or enforcement actions, according to paths set by updated data.

Aims and advantages of the present invention

The purpose of this invention is therefore to provide a command and control system based on an electronic platform consisting of interconnected physical and virtual machines, mainly computers, equipment for physical networks, algorithms and processing systems for information contained in the databases of companies or public or private entities, together with freely available external data, which supports the management of emergency situations, security risk analysis and the monitoring of security processes; additionally, it keeps the history of all relevant information.

Another purpose of this invention, in accordance with the previous one, is to provide a command and control system allowing, either in an integrated or selective manner, intelligence activities; security activities; fraud investigation activities; security assessments; security audits; security policies and procedures; market intelligence; big data link analysis; security co-sourcing; country report; country threat assessment; power map; integrity due diligence; integrated security management dashboard.

Another purpose of this invention, in accordance with the foregoing one, is to provide a command and control system allowing to adapt the administration of security roles and delegations; to manage documentation and staff attendance; to provide information documents on security risks (travel guides) and health risks (health care travel guides), to allow the administration/planning of staff attendance, including access administration and emergency/evacuation plans in operating sites around the world; and the automatic administration of deadlines relating to security plans and procedures.

Another purpose of the present invention, in accordance with the previous ones, is to create a checklist organised into 24 sections with approximately 1,150 questions and representation graphs, from which it is possible to obtain, through analytical and statistical methods, a synthetic index of vulnerability; further, based on the evaluation of 29 assessment factors related to four types of threat (politics, terrorism, crime, ethics) and with the help of analytical and statistical methodologies, to get a synthetic index of threat; and, through these two indexes, to create a synthetic index of the overall risk.

Another purpose of the present invention, in accordance with the above purposes, is to provide a command and control system which can also be used in complex business processes such as in the field of finance, trade, anti-fraud and counterfeiting, strategic marketing, business security and protection, namely to act as: a command and control centre for corporate auditing officers; a hub of information and data analysis for managers of business development; a compliance operation centre for corporate compliance officers; a reporting aid for top management; an instrument of corporate fraud prevention/detection.

The characteristics and advantages of a command and control system for risk management, according to the present invention, will become more evident in the attached description and drawings, provided solely for illustrative, and not limiting, purposes, in which:

- Figure 1 is a block diagram which schematically shows the production environment for the analysis, processing and presentation phases of information for risk management;

- Figure 2 is a block diagram in continuation of the block on Figure 1, which schematically shows the production environment for the capture and processing phases of information for risk management;

- Figure 3 is a block diagram summarising the representative components of the command and control system.

In accordance with Fig. 1 and Fig. 2, it can be noted that the system which is the object of the present invention comprises a data centre including physical and virtual servers. In particular, by schematically dividing the system into modules, two web servers using the Apache software are designated with (1). On each server there also resides a wget crawler to mirror sites or transfer files (FTP), which parses their content in a methodical and automated manner and downloads files from the web; the two web servers (1) have clones (2) to meet the requirements of business continuity.
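
For illustration only, a mirroring job of the kind hosted on each server might be launched as in the following sketch; the target URL and destination path are placeholders, while the flags are standard wget options:

```python
import subprocess

# Hypothetical mirroring job: the URL and destination path are placeholders.
# The flags are standard wget options for recursive site mirroring.
subprocess.run(
    [
        "wget",
        "--mirror",              # recursive download with timestamping
        "--convert-links",       # rewrite links for local browsing
        "--adjust-extension",    # save HTML/CSS with matching extensions
        "--no-parent",           # never ascend above the start directory
        "--directory-prefix=/data/mirror",
        "https://example.org/",
    ],
    check=True,
)
```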

With (3) is designated the anonymisation module, which allows to meet the privacy criteria as well as the partitioning and regulated distribution of information.

With (4) is designated ZABBIX®'s open-source monitoring module, which allows the monitoring of both web applications and VMware virtual environments, in order to detect new devices and automatically assign performance checks to detected items (servers, network equipment, virtual machines).

All information, both the configurations and the data collected by the network, is stored in a relational database for simple and accessible processing through different channels and options.

The web interface allows both the configuration of the system and the display of data in a secure form, through a control system of customizable access.

With (5) is designated SYSTRAN®'s machine translation module, to be interfaced with both the wget crawler and the databases referred to in module (6) and its clone (6'), which are run on a Windows 2008 server; Tomcat 6 is used as an open-source servlet container, which implements the JavaServer Pages (JSP) and Servlet specifications, providing the software platform for the implementation of web applications developed in Java.

The databases listed in module (6) cover a great amount of various documents as text and image, derived from the most diverse sources: press, internet, radio, public databases, mail, reports of agents, informants and observers, aerial or satellite images, etc., and are treated beforehand with the "COGITO® Cleaner" virtual machine, capable of normalising the main data flows (voice, fax, http, etc.) and processing all major acquisition formats.

Once collected, data are stored and immediately subjected to indexing, and then filtered according to the selected parameters; in this way, all potentially interesting data are identified.

With (7) is designated the NAS module based on Linux, with the task of centralising the storage of data in a single device accessible to all network nodes, making files available on different platforms at the same time and allowing to implement RAID (Redundant Array of Inexpensive Disks) schemes to ensure a more reliable data management.

Where convenient, the IT infrastructure has been virtualised, including servers and storage, so that IT resources can be managed as if they were shared utilities, dynamically provided to different organisational units, without worrying too much about the differences and limitations of the underlying hardware.

The DB LPAR module (8) is a logical partition, a subset of virtualised hardware resources presented as a separate computer, which mainly concerns data from module (1) and module (6).

The ODR function (18) retrieves the list of valid destination nodes that run the Intelligent Management service in the cell of a multi-cell WebSphere Application Server Network Deployment environment. If the primary node fails for any reason, the ODR function selects another node from the list of available nodes and establishes a connection to that node.

The module is cloned from a cluster (8'). The semantic module (9), relating to clusters, uses a dedicated infrastructure made up of 5 physical servers and application redundancy mechanisms consisting of 13 virtual servers to increase scalability.

The TIBCO® module (10) allows to exchange data between different systems; in practice, it enables communication between databases or syntactically different programming languages. It allows to control the maximum number of process instances held in memory and to specify that all process instances should be executed sequentially.
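
As an illustration of the behaviour just described (a cap on in-memory instances and strictly sequential execution), the following sketch uses a bounded queue and a single worker; the cap value and the job model are assumptions, not details of the TIBCO product:

```python
from queue import Queue
from threading import Thread

MAX_IN_MEMORY = 4  # assumed cap on process instances held in memory

def run_sequentially(jobs):
    """Hold at most MAX_IN_MEMORY pending instances; execute one at a time."""
    q = Queue(maxsize=MAX_IN_MEMORY)

    def worker():
        while True:
            job = q.get()
            if job is None:
                break
            job()  # a single worker thread gives strictly sequential execution
            q.task_done()

    t = Thread(target=worker)
    t.start()
    for job in jobs:
        q.put(job)  # blocks once the in-memory cap is reached
    q.put(None)     # sentinel: no more jobs
    t.join()
```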

The ArcGIS® module (11) is a geographic information system that enables the creation and use of maps, geographic data compilation, map analysis, geographic information sharing and the management of geographic information in a database.

ArcGIS allows to display and query the created maps, to create layered maps and to perform spatial analysis and intelligent manipulation of geo-localised data.

It is possible to add one's own data, create mash-ups with maps and information shared by users, and insert data into map pop-ups and other highlights, including photos and links to web pages. The platform is available via the web and can also be used from smartphones and tablets. Basically, in the system which is the object of this invention, the applications available in ArcGIS Online allow to perform the above-mentioned operations, to monitor events and activities and to protect them from unauthorised access.

The module (12) is an application aimed at integrating geographic and spatial information in business information systems, at any scale and level of detail. It primarily works on the integration of GIS and other advanced technologies, such as GPS (Global Positioning System).

The GIS in module (12) is closely interconnected with module (11); in fact, the GIS is a RIA (Rich Internet Application) based on the Silverlight technology using the ArcGIS API. The Server Map Application is the ESRI ArcGIS Server 10.0.2.

Metadata are managed through an advanced customisation of the ESRI Geo-Portal Extension. The RDBMS technology is based on Oracle.

Module (12) makes available, to those concerned, the tools capable of supporting and managing Tier 3 emergencies (the most serious ones) by monitoring any facility or location, but also the tools capable of tracking the location of ships and sea platforms, every five minutes all over the world, by showing the weather and sea conditions.

Module (13) allows communications with a high degree of security and is based on a cryptographic system that handles calls, SMS, fax, conference calls, file transfer and classified computer networks, using standard phone lines, digital networks, such as GSM, ISDN, Internet Protocol (IP) and satellite communications.

Module (14) is a corporate system to manage credentials and computing application access.

Module (15), FILENET®, is a documentary database for enterprise content management (ECM) in all its forms, such as images, e-mails and any type of scanned documents and electronic files.

Module (16) is represented by two dynamic web servers that rely on the databases.

The QlikView® module (17) is a reporting platform for the user workstation, which offers an intuitive user interface with interactive, graphics-rich dashboards that can be created quickly and easily to get clear and simple results; it allows users to analyse unlimited data, enabling a faster and more efficient decision-making process. Users can identify, within the bulk of data, significant occurrences (patterns) of some items and link them together to obtain new information, without launching a new search or modifying the data, because the detection is performed in memory and it switches from one topic to the other by associating ideas.

Therefore, it is particularly suitable for the analysis of unstructured data; additionally, it eases the completion of complex investigations, almost intuitively, through the GUI.

It is possible to perform simulations, predictive and budget analysis, making comparisons among groups of associated data by giving users another tool of "intelligence" on available data.

The platform also allows for continuous and interactive access from mobile devices, in total safety.

The aggregation of the described tools allows to search for information, select it, handle it and present it to users in the desired form and manner, based on hierarchies of controlled access: each security manager will access the information on the security threat for the territorial scope of his or her jurisdiction and may initiate the operations provided by the risk management process; other authorised users will access a comprehensive view of their business areas; and many other users, at a higher hierarchical level, will have full access to the platform.

To obtain a threat and vulnerability value for a given asset, and therefore a quantification of the present and future real risk, updated moment by moment based on the flow of processed information, other tools having a greater specificity for the concerned assets are needed.
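
The hierarchy of controlled access described above might be modelled as in the following sketch; the role names and asset fields are hypothetical, not taken from the system:

```python
# Hypothetical role model: a security manager sees only assets in the
# territorial scope of their jurisdiction; higher roles see progressively more.
ROLES = {
    "security_manager": "territory",
    "area_manager": "business_area",
    "executive": "global",
}

def visible_assets(user, assets):
    scope = ROLES[user["role"]]
    if scope == "global":
        return list(assets)  # full access to the platform
    return [a for a in assets if a[scope] == user[scope]]

# Example: a territorial manager sees only assets in their own territory.
assets = [
    {"name": "Plant A", "territory": "IT-North", "business_area": "Energy"},
    {"name": "Plant B", "territory": "IT-South", "business_area": "Energy"},
]
manager = {"role": "security_manager", "territory": "IT-North"}
print(visible_assets(manager, assets))  # -> only Plant A
```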

In particular, to assess the security vulnerabilities, a structured model based on closed questions ("checklist") has been used, aimed at assessing the vulnerability degree of assets around the world and at implementing the most appropriate mitigation measures.

In particular, the checklist is organised into 24 "sections", each of which consists of questions designed to test the asset vulnerability level of security systems and procedures at a physical, logical, procedural and organisational level, in order to verify its status with respect to what is expected by specific regulatory requirements, standards and international best practices.

The sections on the checklist are as follows:

Perimeter and Access Areas; Organisation and Responsibility; Description; Lighting; Buildings; Key Management; Access Control; Intrusion Detection System; Control Room; CCTV System; Responsiveness; Critical Areas; Inbound and Outbound Goods; Organisation of Security Controls; Critical Loads; Medical Clinic; Plan and/or Security Procedures; Information Security; Political Risk; Terrorist Risk; Business Terrorism Risk; Crime Risk; Crime Risk for Business; (Italian) Legislative Decrees no. 231/2001 and no. 81/2008.

In addition, an introductory section explaining the rationale of the checklist is available; a "Summary" section, which contains a bar graph through which it is possible to monitor the completion degree of the questions included in each section; and a "Charts" section that displays the computation results of the risk analysis through a vulnerability chart (Kiviat diagram) and a risk matrix chart. The 24 sections or tabs on the checklist include approximately 1,150 questions.

Different response modes are associated with each question, based on the presence/absence of the requested vulnerability type or on the inapplicability of the question.

Within the file, there is also a scoring system which allocates 4 points to the presence of a vulnerability, 1 point to its absence, and -1 point to inapplicability (N/A).

The vulnerability profile of the different assets is then derived from the analysis of the checklist, containing questions on the presence (or absence) of factors or the presence (or absence) of "structural" equipment which might impair/strengthen the security level of the investigated asset.
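
For illustration, the scoring rule just described might be encoded as follows (the answer labels are assumptions):

```python
# Scoring rule from the checklist: 4 = vulnerability present,
# 1 = vulnerability absent, -1 = not applicable (N/A).
SCORES = {"present": 4, "absent": 1, "n/a": -1}

def checklist_score(answers):
    """Sum the scores of one asset's answers ('present'/'absent'/'n/a')."""
    return sum(SCORES[a] for a in answers)

print(checklist_score(["present", "absent", "n/a", "present"]))  # -> 8
```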

Statistical analysis has been used to get an overall measurement of the vulnerability level of each asset, intended as the identification of the vulnerability factors and security equipment that a "well-protected" asset should have (or not have). The structural comparison among the different assets in the analysis allows, on one hand, to create a "ranking" of the assets themselves based on the level of the highlighted vulnerability and, on the other hand, to build a system of weights capable of bringing out the importance of individual variables and, therefore, of those vulnerability factors which make the asset more or less "secure".

Multivariate analysis tools have been used, such as the Multiple Correspondence Analysis and the Cluster Analysis, as the methodologies that best allow to synthesize available information and classify investigated statistical units, by contributing to the creation of an asset ranking based on the vulnerability level and the definition of a weighting system that allows to highlight the importance of individual variables in determining the vulnerability level.

The factor analysis of multiple correspondences allows to study the structure of the associative relationships that exist between different qualitative (and quantitative) variables and provides a way to "reduce" the original variables to a (lower) number of "synthetic" variables.

The data matrix is a matrix of n rows (the number of assets) and p columns, one for each variable, matching the answers to the questions on the checklist.

Through matrix algebraic processes, factors are extracted, which are nothing more than algebraic constructs obtained through linear combinations of the original mode-variables.

This procedure allows to locate orthogonal axes, i.e. axes which are independent from one another, called "factorial axes", through which it is possible to represent, in a smaller space, the interrelationships between the considered mode-variables.

The factors are latent dimensions in the data structure and refer to the underlying conceptual categories, which are useful to account for what is shared by the associated variables.

In essence, starting from all modes of the considered variables, it has been possible to build "new" variables which are configured as a "synthesis" of the initial variables.
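
As a sketch of this reduction step (scikit-learn is assumed here; a dedicated MCA implementation would weight the indicator matrix slightly differently, but the idea is the same):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import TruncatedSVD

# df: one row per asset, one categorical column per checklist question
# (toy data; the description works with ~1,150 questions and 4 retained axes).
df = pd.DataFrame({
    "q1": ["present", "absent", "present", "n/a"],
    "q2": ["absent", "absent", "present", "present"],
})

X = OneHotEncoder().fit_transform(df)         # indicator (disjunctive) matrix
svd = TruncatedSVD(n_components=2, random_state=0)
scores = svd.fit_transform(X)                 # per-asset factorial coordinates
print(svd.explained_variance_ratio_.cumsum()) # share of variability explained
```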

With the Cluster Analysis, it has been possible to group the statistical units in order to make the groups as uniform as possible within themselves and as different as possible from each other (by maximising the distance between the groups and minimising it within the groups).

First, a hierarchical classification algorithm has been used (Ward's method) to minimise, at each step of the algorithm, the variance within the groups by optimising the partition obtained through the aggregation of two elements.

Taking into account that the total variance of a set of units can be decomposed into the sum of two quantities (the internal variance within the clusters and the external variance, that is, among the clusters), a partition is considered better the more its classes are internally homogeneous and different from each other; in other words, the higher the variance between the classes, the lower the internal variance within the classes. This approach has served to highlight the number of "natural" groups of assets in which the phenomenon occurred.
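
A minimal sketch of this hierarchical step, using SciPy's Ward linkage and a three-group cut as in the description (the input scores stand in for the factorial coordinates):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
scores = rng.normal(size=(30, 4))   # stand-in for the 4 factorial-axis scores

Z = linkage(scores, method="ward")  # each merge minimises within-group variance
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 groups
print(labels)                       # group label (1-3) per asset
```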

The proposed method allows to allocate, uniquely, every available asset to one of the highlighted groups; however, it was preferred to use a classification logic in which the statistical units (assets) are not allocated exclusively to the different groups, but instead a belonging probability to each of the highlighted groups is brought out.

The methodology used in this phase was the Fuzzy C-means (FCM), which allows to calculate, for each asset, a belonging probability to each cluster extracted from the analysis, as a measure ranging between 0 (the unit certainly does not belong to the concerned group) and 1 (the unit certainly belongs to the cluster).
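
A minimal NumPy sketch of the fuzzy memberships described here, using the standard fuzzy c-means update (the fuzziness exponent m = 2 is an assumption):

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return cluster centres and an (n, c) membership matrix in [0, 1]."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per asset
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))          # standard FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# The membership column of the "High Vulnerability" cluster is then read off
# as the synthetic vulnerability index of each asset.
```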

The Multiple Correspondence Analysis (MCA) has then allowed to "reduce" the significant information for the purposes of the analysis.

The 1,150 questions on the checklist contain information which is often concatenated together. This means that some of the answers may have an informational content not completely "separated" from the content already provided by other questions. The reduction of the available information to a few synthetic variables allows, on one hand, considerable "savings" in computing terms and, on the other hand, to focus the analysis only on the factors that differentiate the various considered assets, starting, of course, from the provided variables (that is, the answers to the questions on the checklist).

Starting from the 1,150 questions, the MCA has allowed to extract about 170 factorial axes; of these, the top 10 factorial axes explain almost the whole (95.27%) of the variability generated by the questions. In the remainder of the analysis, it was decided to use the first 4 factorial axes which, together, help to explain more than 90% of all differences between the investigated assets.

Basically, the considered factorial axes are the "new" synthetic variables (summarising the originating variables) which precisely sum up the "structural" features of the investigated assets. On the basis of this summary, it has been possible to reconstruct, for each asset, a "score" relating to each of the synthetic variables extracted from the analysis.

Starting from these "scores", it has been possible to group the assets into homogeneous groups based on the "vulnerability" of the latter.

A synthetic index of the asset vulnerability level has then been built, as follows:

1) on the basis of the 4 calculated synthetic variables, a hierarchical classification algorithm has been applied and homogeneous groups of assets have been identified based on their vulnerability level.

As is known, the application of classification algorithms usually gives rise not to a single solution, but to a set of solutions, which are then evaluated (and validated) based on some indicators which are widely used in literature and which are meant to indicate the best possible breakdown(s) of the considered statistical units (in this case, the assets).

From the analysis of the data and of the possible groupings, it has emerged that most of the used indicators would suggest that the division into three groups is the one that best subdivides the statistical units in the groups themselves.

2) Once found, the three groups have been ordered according to the "average" vulnerability level that they highlighted.

To calculate the above average level, an overall score has been created for each asset of each group, based on the score assigned to each of the response modes for each question on the questionnaire.

3) The group having the highest average score, and thus the highest (average) vulnerability levels, is labelled as a High Vulnerability group.

The groups having gradually lower (average) vulnerability levels are labelled, accordingly, as average and low vulnerability groups.

4) The hierarchical classification algorithms allow to uniquely classify each of the x analysed assets in one of the groups that emerged from the analysis: therefore, n1 assets have been included in the "High Vulnerability" group, n2 assets in the "Average Vulnerability" group and n3 assets in the "Low Vulnerability" group.

In fact, the logic of belonging or not belonging to a group is not very useful in solving complex problems, because things (almost) never appear completely "white" or "black", but generally "blurrier" and better describable by using different shades of grey. A classification methodology of the "Fuzzy" type has therefore been used, through which, starting from the three above-mentioned groups, the belonging of each asset to the groups is no longer treated in a dichotomous view; instead, for each analysed asset, it has been possible to calculate a "belonging probability" to each of these groups.

Once these probabilities have been defined, the belonging probability to the "High Vulnerability" group is interpreted as a synthetic index of the Vulnerability Level of the asset.

The results of this methodology highlight the vulnerability level of all x analysed assets, sorted from the "most vulnerable" to the "least vulnerable".

Moreover, the performed MCA has made it possible to build an objective weighting system, which has allowed to allocate a "weight" to each question on the checklist, representing the importance of the question in bringing out the "vulnerability" level of the analysed assets.

It should be noted that each question on the checklist refers to a single security measure which has been adopted, such as a fence, a peripheral alarm system and so on.

Among all sections on the checklist, 18 sections are dedicated to the vulnerability computation: Organisation and Responsibility; Perimeter and Access Areas; Lighting; Buildings; Keys Management; Access Control; Intrusion Detection System; Control Room; CCTV System; Responsiveness; Critical Areas; Inbound and Outbound Goods; Organisation of Security Controls; Critical Loads; Medical Clinic; Plan and/or Security Procedures; Information Security; (Italian) Legislative Decrees no. 231/2001 and no. 81/2008.

The same computation method is used for all sections: an initial vulnerability value V1i is assigned to each i-th reply of at least one questionnaire, and a first value Vc1 of the overall vulnerability level of the security measure is determined on the basis of the assigned V1i vulnerability values.

For example, the V1i vulnerability values can belong to a predetermined value scale, such as a numerical series from 1 to 4.

The determination phase of the first overall vulnerability value Vc1 includes the phase which consists in assigning a weighting coefficient Ci to each i-th answer of at least one questionnaire.

In this case, the first overall vulnerability value Vc1 is determined as the result of an equation of the following form: $V_{c1} = \sum_{i=1}^{n} C_i \cdot V_{1i}$, where n indicates the number of i-th questions contained on the checklist.

Preferably, the analysis phase of the second plurality of data also includes the phase consisting in selecting a number of key questions among the questions contained on the checklist. In particular, these are key questions related to more general safety aspects, whose assessment may be needed to verify compliance with a plurality of legal regulations on safety.

In this case, a second overall vulnerability value Vc2 is determined as the result of an equation of the following form: $V_{c2} = \sum_{i=1}^{p} C_i \cdot V_{1i}$, where p is the number of the key questions contained on the checklist.

In conclusion, the detection phase of the Vr security risk value includes the phase consisting in determining the Vr risk value on the basis of the first plurality of analysed data and of the second overall vulnerability value.
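
Assuming the weighted-sum reading of the two reconstructed formulas above, a minimal sketch of the computation is:

```python
def overall_vulnerability(v, c):
    """Vc1 = sum over all n checklist answers of C_i * V_1i."""
    return sum(ci * vi for ci, vi in zip(c, v))

def key_question_vulnerability(v, c, key_idx):
    """Vc2: the same weighted sum restricted to the p key questions."""
    return sum(c[i] * v[i] for i in key_idx)

v = [4, 1, 4, -1]          # V_1i values on the 1..4 scale (-1 = N/A)
c = [1.0, 0.5, 2.0, 1.0]   # hypothetical weighting coefficients C_i
print(overall_vulnerability(v, c))               # Vc1 over all answers
print(key_question_vulnerability(v, c, [0, 2]))  # Vc2 over key questions 0, 2
```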

The threat assessment is based on evaluating 29 factors related to four types of threat: Politics; Terrorism; Crime; Ethics.

The four types of threat and the factors associated to each of them, considered in the analysis, are:

Political: Anti-Western Bias, Autonomist Movements, Border Disputes, Economic Strength, Energy Security, Food Security, Government Stability, Policy Risk, Risk of Armed Conflict, Water Security;

Terrorist: Attacks, Endogenous Terrorism, Ethno-religious Tensions, Exogenous Terrorism, Regional Tensions, Security Forces;

Criminal: Common Crime/Robberies, Kidnappings, Killings, Organised Crime, Piracy, Police Forces, Sexual Abuse;

Ethical: Child Labour, Corruption, Human Rights, Human Trafficking, Labour Flexibility, Money Laundering.

Similarly to what is provided in relation to the questions on the checklist, each threat factor has been associated with an (integer) value between 1 and 10. Starting from these values, it has then been possible to obtain average values for both the individual types of threat and the "overall" threat in its entirety, by using the arithmetic average of the provided scores.
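
For illustration, the per-category and overall averaging might be computed as follows; the factor scores below are hypothetical values on the 1-10 scale, not data from the analysis:

```python
# Hypothetical 1-10 scores for a few of the 29 factors, grouped by threat type.
threat = {
    "Political": {"Government Stability": 7, "Policy Risk": 6, "Border Disputes": 4},
    "Terrorist": {"Attacks": 8, "Security Forces": 5},
    "Criminal":  {"Organised Crime": 6, "Kidnappings": 3},
    "Ethical":   {"Corruption": 7, "Money Laundering": 5},
}

# Arithmetic average per threat type, then across the four types.
category_avg = {k: sum(v.values()) / len(v) for k, v in threat.items()}
overall_threat = sum(category_avg.values()) / len(category_avg)
print(category_avg, overall_threat)
```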

More specifically, the computation model of the TRASS system has been used, which is based on a mathematical function able not only to measure the aggregate or synthetic risk, starting from the different numeric values expressed by the various indicators, but also to do so by taking into account a precise weighting system, necessary to evaluate differently, where needed, the "weight" of each indicator (or each risk family) compared to the total risk index. To build the synthetic indexes, the method of the average positions has been used, which is better suited to handle data whose quality is not always under control, and for weighting abilities having not only a mathematical significance but also a substantial and interpretive one.

The process takes place through successive aggregation phases and allows to highlight when alternative choices are possible and in which cases the aggregate order affects the end result; and, in addition, it allows to appreciate to what extent the standardisation is needed.

For the different steps of the aggregation process, it is possible to propose more than one summary measure.

By taking advantage of this opportunity, indexes that control the behaviour of the highest and the lowest average values are computed simultaneously.

Additionally, the use of multiple indexes allows to emphasise complementary aspects.

The TRASS system is integrated with the Delphi iterative survey, which takes place through several phases of evaluation and expression of an expert group's views and aims to bring together the most comprehensive and shared view in a single "expression".

The Delphi technique is used to obtain answers to a problem from a group (panel) of independent experts through two or three rounds.

After each round, an administrator provides an anonymous summary of the experts' answers and their reasoning. If the experts' answers vary only slightly between the different rounds, the process is stopped, and the mathematical average of the answers from the final round is taken.
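
A schematic sketch of this stopping rule (the tolerance and the number of rounds are illustrative assumptions, not values given in the text):

```python
import statistics

def delphi_estimate(collect_round, tol=0.5, max_rounds=3):
    """collect_round() gathers one round of anonymous expert scores."""
    previous = None
    for _ in range(max_rounds):
        scores = collect_round()
        estimate = statistics.mean(scores)
        # Stop when the panel's average barely moves between rounds.
        if previous is not None and abs(estimate - previous) < tol:
            break
        previous = estimate
    return estimate
```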

In this way, the technique applied to the present invention can strongly decrease the incidence of human errors in estimating the most uncertain factors (that is, in the case of lack or poor reliability of quantitative data or official statistics), by limiting prejudices and cognitive distortions in the assessment phase and guaranteeing reliable estimates and very low error margins.

The methodologies described above have enabled to build a Synthetic Index of the Vulnerability and a Synthetic Index of the Threat for each asset included in the analysis. The next step has been to integrate the two measures in order to build a single measure of the overall risk, to which any given asset is exposed.

To merge the two indexes together, the arithmetic average has been used as a measure of synthesis, rather than multiplying the two values.

Therefore, in a first approximation, the arithmetic average of the two vulnerability and threat values has been calculated, building a Synthetic Index of the overall risk that, by construction, can vary from zero to 100 (since the two synthetic indexes that compose it vary between zero and 100).

However, it does not seem useful, if not downright harmful, to summarise such complex situations in a single number that can "hide" profoundly different conceptual aspects, since the same synthesis level can conceal very different situations. In fact, the arithmetic average sums up the situations from which it descends; in this process of synthesis, however, it is possible to hide what really happens in the elements that compose it. An overall risk level equal to 60 may derive from a threat and a vulnerability which are substantially equivalent (for example, respectively equal to 65 and 55), or from a mixture of profoundly different values (for example, a threat equal to 90 and a vulnerability equal to 30, or vice versa). Thus, the same average can result from "variability" situations which are profoundly different.

In a situation with a high variability level, the vulnerability and threat values must be sufficiently distant from each other; when there are high variability levels, the possible cases are basically two:

1) high threat and low vulnerability;

2) low threat and high vulnerability.

In case (1), the threat is high (for the concerned asset, the probability of suffering "attacks" is high), but the low vulnerability indicates that those who have made the asset operational have equipped it with suitable instruments of "defence" (taking such a high threat into account). In essence, the asset has been built in a manner consistent with the "dangerousness" of the geographical area in which it is located.

In case (2), on the contrary, the threat is low, but the vulnerability is high. For the asset in question, the probability of suffering "attacks" is low; that is, the asset is placed in a "quiet zone" and therefore it might not be absolutely necessary to provide it with all the most complex defence tools.

Then, high variability levels derive from threat and vulnerability values which are substantially different from each other, and actually describe situations in which the analyst and the designer have consistently set the asset "protection" based on the (high or low) threat level to which it is subjected, operating, therefore, consistently with the territorial situation which has been found.

On the other hand, to have a low variability level, the vulnerability and threat indexes should have substantially the same values.

Examples:

a) if the threat is very low (for example, it is equal to 20), this means that the vulnerability will be very low (with values slightly higher or lower than 20);

b) if the threat is very high (for example, it is equal to 80, on a scale from zero to 100), this means that (because the average variability of the two values is low) the vulnerability will be very high (with values which are slightly higher or lower than 80).

In the case referred to by (a), the asset is not threatened but, at the same time, it is also very well defended; this is not a real 'issue', because the 'defence' level is appropriate to the situation; it is rather a matter of 'efficiency', concerning the need to equip the asset with high levels of defence in the presence of a low threat.

In the case referred to by (b), the situation is quite different: high threat levels also match high vulnerability levels: these are "dangerous" situations, which need to be tackled on a priority basis.

Summarising what has just been highlighted, it emerges that the variability within the analysis represents a sort of "alarm bell": the higher it is, the better it describes consistent situations; the lower it is, the more it could hide potentially dangerous situations (if associated with high and average-high levels of threat).

Therefore, a series of additional pieces of information have been combined with the synthesis measure to help make informed decisions in a complex environment.

In particular, the arithmetic average of the two threat and vulnerability values has been combined with the coefficient of variation, calculated as the ratio between the standard error and the arithmetic average (multiplied by 100).

In conclusion, it can be said that the complexity of the investigated phenomenon is reflected in the choice of methodologies, which have not led to the definition of an algorithm that would "automatically" give a priority scale of possible interventions, but rather to an observation range within which the phenomenon has taken place and within which it is left to the user's sensitivity to identify the decisions to be made.
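
As a worked sketch of this synthesis: the index is the arithmetic average of the threat and vulnerability values, flanked by the coefficient of variation as a variability alert (here the sample standard deviation stands in for the "standard error" named in the text):

```python
import statistics

def overall_risk(threat, vulnerability):
    """Synthetic risk index (0-100) plus a variability alert."""
    values = [threat, vulnerability]
    mean = statistics.mean(values)
    # Coefficient of variation: dispersion over the mean, times 100.
    cv = statistics.stdev(values) / mean * 100
    return mean, cv

print(overall_risk(65, 55))  # average risk 60, low variability
print(overall_risk(90, 30))  # same average risk (60), high variability: alert
```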

The command and control system according to the present invention also enables, with the applications handled and contained in modules (5), (11), (12), (13), (14) and (17), the adaptation of the management of security roles and delegations; the management of documentation and staff attendance; and the provision of informational documents on security risks (travel guides) and health risks. It basically allows to record the details of business trips for the whole staff, employees, expatriates, contractors or those concerned who travel abroad for work purposes, including information on family and companions, and to inform them about the specificity of the country where they will travel, by providing them, via mail, with security documents: health travel guides and handbooks. It also allows to monitor the presence of personnel shifts in order to provide the competent authorities with timely notification, where needed; the management/scheduling of staff attendance, including access management; and the handling of emergency/evacuation plans on the sites where they are working.

The application also includes a management system of automatic deadlines for security plans and procedures.

The system which is the object of this invention can also be used in complex business processes such as in the field of finance, trade, anti-fraud and counterfeiting, strategic marketing, business security and protection, by acting in particular as:

a command and control centre for corporate auditing officers, interested in having an immediate and constantly updated transversal overview of the operations, inspections and (any) gaps in both the core and business support processes;

an information and data analysis hub for managers of business development who, by analysing the information derived from both business inspections and the so-called market intelligence obtained by studying big data, can get immediate support for business initiatives (such as increasing risk trends in a country, threats to the stability of a government or a country, trade agreements, or a greater market presence of a competitor in a market compared with a sharp drop in sales in the same context);

a compliance operation centre for corporate compliance officers who can take advantage of a single centre to collect, in a usable format, the documents and information needed to fulfil legal obligations (risk assessment legislation, internal disciplinary regulations, etc);

a reporting support to give top management an up-to-date snapshot on the activities status, inspections, gaps, implementing measures, similarly to business or economic/financial analysis of public interest;

an instrument of corporate fraud prevention/detection based on logical/mathematical models for processing large amounts of data to enable the prevention of fraudulent activity.

For the above needs, the system also uses the integrated analysis of the following databases:

Human Resources (possibly separated on the basis of the Parent company and the subsidiaries): information about the register of employees and collaborators that is usefully exploited in the analysis of resource productivity, forward-looking analysis on fixed term needs/shortages of staff, redundancies and inefficiencies related to absences/offsite detachments; the integrated analysis of these data with open-source data is useful for the assessment of its resources (local press reports, blogs, personal accounts on main social networks);

Corporate Secretariat (possibly separated on the basis of the Parent company and the subsidiaries): this information allows to develop the critical analysis in terms of compliance with policies and procedures, continuous compliance with delegations and powers of attorney relating to authorisation powers (through a simple match with the requirements of payment authorisations and/or databases of the Legal Office relating to contracts with counterparties) regarding the legal risk, lack of links and/or conflicts of interest between assignees of signatory powers and partners and managers of the counterparties (preferred vendor list) related to fraud risk management;

Preferred Vendor list (possibly separated on the basis of the Parent Company and the subsidiaries): useful for the continuous analysis of the counterpart risk related to the good reputation and professional competence regarding both the company and its administrators; the real-time screening of every change in the company's structure of their counterparts, together with the ability to access open-source information assembled into an organic document of due diligence, is a solid test point in anti-corruption, anti-fraud and reputational risk prevention;

Due Diligence Database: the possibility to use data mining techniques allows to support both the supervisory bodies and the business managers in case of issues/opportunities highlighted by the analysis and the constant monitoring of public/open source data related to their counterparts.

In conclusion, the present invention realises an integrated platform to find, explore, transform, discover and share information resulting from the analysis of Big Data, and helps to ease access to the entire wealth of data, to discover new information, to provide real-time results and to keep all data safe and well managed.

A graphical interface for the retrieval and exploration of raw data helps to discover statistical correlations between combinations of data and attributes, to assess the potential of a data set and to see whether it may deserve more resources and analysis. Users navigate within a catalogue of interactive data through familiar tools and powerful search functionalities. The system reduces the time-consuming preparation of raw data for analysis and rationalises the data through an intuitive experience.

Users can visually work on the data without having to constantly change tools or write code, so that they can devote more time to the analysis itself.