

Title:
GRAPHIC ENGINE FOR CREATING AND EXECUTING APPLICATIONS WITH MULTISENSORY INTERFACES
Document Type and Number:
WIPO Patent Application WO/2017/006223
Kind Code:
A1
Abstract:
A graphic engine for creating and executing applications with multisensory interfaces. According to the invention, said graphic engine consists of middleware software lying between the operating system and the final application and comprises specialized libraries for implementing applications with real-time graphics and characterized in that said graphic engine is georeferenced, i.e. addresses points and elements in space using geographic coordinates.

Inventors:
MOSCHETTONI CANDIDO (IT)
MARZILLI DAVIDE (IT)
DE VECCHI MARCO SALVATORE (IT)
Application Number:
PCT/IB2016/053925
Publication Date:
January 12, 2017
Filing Date:
June 30, 2016
Assignee:
NIVI GROUP S P A (IT)
International Classes:
G06F9/44; G06F17/30; G06T17/05
Domestic Patent References:
WO2002015108A1 (2002-02-21)
Foreign References:
US20110202510A1 (2011-08-18)
US20070198586A1 (2007-08-23)
US20120182298A1 (2012-07-19)
Attorney, Agent or Firm:
KARAGHIOSOFF, Giorgio A. (IT)
Claims:
CLAIMS

1. A graphic engine for creating and executing applications with multisensory interfaces, characterized in that said graphic engine consists of middleware software lying between the operating system and the final application and comprises specialized libraries for implementing applications with real-time graphics and characterized in that said graphic engine is georeferenced, i.e. addresses points and elements in space using geographic coordinates.

2. A graphic engine as claimed in claim 1, characterized in that the graphic engine has at least one, preferably more libraries selected from the following group of libraries:

- 2D and 3D scene rendering,

- Ability of scaling the 2D/3D environment from a planetary-scale distance (6377000 m) to the focal length of the human eye (0.022 m),

- Camera management in a 3D environment,

- Management of effects and animations on graphic elements based on physical simulations, to enhance the perception of depth and movement,

- Interaction with the interface through peculiar and general gestures (pan/zoom/rotate),

- Representation, management and handling of images,

- Representation, management and handling of charts and tables,

- Management of information panels and windows in 2D and 3D environments,

- Management of the most widespread input protocols such as for example WM_Touch, WM_Pen, Trackpad, MtDEv, Linux Kernel HID, TUIO and others,

- Representation of 2D/3D maps and 2D/3D georeferenced information contents,

- Drawing and representing points, markers, lines, polygons and 3D models on maps,

- Asynchronous management of data flows.

3. A graphic engine as claimed in claim 1 or 2, characterized in that it comprises the following supports:

- Native support for physical measurement units (dimensioning of scalable graphic elements based on the device that contains the application),

- Native support for CSS,

- Native support for an interface definition language (XML),

- Cache system for managing graphic resources,

- Abstraction system for local and network resources,

- System for representing real-time and non-real-time video flows,

- The ability of graphically superimposing multiple information layers (windows, panels, charts, etc.),

- The ability of interoperating with external systems using standard protocols (e.g. XML, SOAP, REST, JSON, HTTP POST, etc.),

- WMS map support,

- Vector map support,

- KML and DEM standard support,

- Tools for importing graphic assets from the most common graphic design and CAD software (Adobe Illustrator, Adobe Photoshop, Autodesk 3DS Max, etc.),

- Support for new-generation foreign I/O devices, such as Google Glass, Oculus Rift, LeapMotion, etc.,

- Support for interaction with speech input and in-air gesture devices,

- Support for communication with iOS-, Android- and Windows Mobile-based mobile devices,

- RFID and NFC support.

4. A graphic engine as claimed in one or more of the preceding claims, characterized in that it has been designed and developed directly on the basis of the OpenGL multi-platform graphical APIs defined by the Khronos Group (http://www.khronos.org).

5. A graphic engine as claimed in claim 4, characterized in that it has been implemented using, in addition to OpenGL for graphic rendering, the Python language for the scripting component and C++ for the graphic libraries.

6. A graphic engine as claimed in one or more of the preceding claims, wherein a 3D engine is integrated through OpenGL to allow representation of data in a three-dimensional environment that can be addressed along three axes, for representation of three-dimensional objects and also of POIs composed of Latitude, Longitude and Elevation.

7. A graphic engine as claimed in one or more of the preceding claims, characterized by a modular architecture composed of six macro-areas/modules dedicated to peculiar aspects of the engine, which communicate with one another during the execution of the application, which macro-areas consist of:

a macro-area (20) with the engine modules required for loading external data resources;

a macro-area (25) that contains the modules of the engine required for loading, converting and processing geolocated and cartographic information extracted from any type of external data resources;

a macro-area (21) with the function of collecting and processing user inputs to the application from peripheral devices of the system supported by the engine and exposed by the operating system through "Input Drivers" (40);

a macro-area (22) providing the synchronization and parallelism mechanisms required by the internal components of the engine and by the logics of the applications being executed, starting from the "IPC" modules (44) offered by the operating system;

a macro-area (23) whose function is managing and rendering the graphic elements of the application, which are added to the graphic window of the "Window" module (230) initialized through the SDL library (36);

a macro-area (24) with the function of providing animation and interaction components between the graphic objects of the system, reflecting behaviors close to the real physics of naturally-occurring elements.

8. A hardware/software device comprising a graphic engine as claimed in one of claims 1 to 7, which forms the interface with one or more I/O devices, such as multi-touch systems, and creates a programming environment for multisensory applications requiring said graphic engine for execution, whereas said graphic engine consists of a program that can be executed on one or more operating systems.

Description:
"Graphic engine for creating and executing applications with multisensory interfaces"

The present invention relates to a graphic engine for creating and executing applications with multisensory interfaces.

Through the years, information technology has progressed through changes that can be deemed milestones: one of these was the general introduction of the Personal Computer (PC), another was the invention of the graphical browser, and yet another was the advent of the Internet.

If computers were to be classified along a technological timeline, they might be schematically grouped into four eras, each era being characterized by its peculiar and specific usage of and interaction with information, meeting the requirements of representation and usage of the contents available and present in that historical period:

Mainframe Era

One computer, many users

Personal Computer Era

One computer, one user

Mobility Era

Many computers for one user

Ubiquity Era

Thousands of computers for each user.

Each transition has always been characterized by innovation in terms of Human Computer Interaction (HCI), affording an increasingly simple and convenient usage of information.

HCIs have evolved with time, from CLI interfaces, through GUI interfaces, to what can be defined as the "next evolutionary step after the shift from the Command Line Interface (CLI) to the Graphical User Interface (GUI)", i.e. the NUI, Natural User Interface.

The interfaces that have been developed through the years will be now briefly described.

• CLI Command Line Interface

Users entered data through an artificial element, i.e. a keyboard, used a series of coded inputs with a rigorous syntax, and received output data in the form of written text.

• GUI Graphical User Interface

The mouse and the graphical interface were introduced: users could interact more easily with the system by moving the mouse, and had a greater interaction with contexts and objects and with the active contents displayed on the screen.

• NUI Natural User Interface

The NUI allows users to handle contents more directly, using more natural movements, actions and gestures .

Natural Interaction, or the Natural Interface, may be defined as an approach to the use of technological devices that is being studied and promoted: an intuitive and spontaneous relationship with technology, through the activation of "the cognitive and cybernetic dynamics that people commonly experience in real life, when they discover reality by looking around and handling objects and communicate using gestures, expressions, movements".

The user is no longer required to possess and develop technology-specific abilities and skills, but may freely approach the machine and get to know, learn and take the instruments he/she needs through the use of the machine itself.

The NUI paradigm meets the new information usage needs, with each user having infinite heterogeneous information at his/her disposal in thousands of different systems.

Computer devices may be currently classified into two large classes of use:

professional use and private use.

While new devices such as smartphones and tablets have replaced personal computers with old GUI interfaces and have extensively used NUI interfaces in private applications, such process has not taken place in work applications.

With the exception of the communication component, professional applications have not been significantly affected by the revolution introduced by Apple and continued through the years also by other actors. At present we work with a PC in much the same way as we did 15 or 20 or more years ago, and our PC use is still based on GUI interfaces, with which we interact by means of a keyboard and a mouse.

This important turning point in terms of contents usage and modes of interaction that occurred for mobile devices did not take place for PCs and traditional operating systems. Attempts have been made in this direction, particularly by Microsoft, whose operating system Windows 8 can be used both in traditional mode, i.e. with a mouse and a keyboard and with point-and-click operation, and in touch mode, with typical tablet and smartphone gestures being transferred to a PC. The result was a hybrid, poorly organized, non-user-friendly operating system, but apart from easily solvable initial defects, the real problem was found to be the conceptual approach.

Most professional software requires the use of many options and navigation of a variety of menus, which would make touch selection inconvenient, whereby the mouse and keyboard are still essential, and a more classical interaction is still important.

Therefore, the evolution of PC interfaces is not merely a software or hardware problem, and a number of other factors have to be considered, such as ergonomics, compatibility with other software products, assessment of actual improvement in daily use, and last but not least the need of teaching users to do the same things they did before in a different manner.

Professional users are conservative by nature, and if they have always managed their work on their PCs in a given manner, they want to continue in the same way; their reluctance to change habits can only be overcome if they can use a system that allows them not only to do the same things as before, but to do them in a much shorter time.

This purpose cannot be fulfilled by proposing the same software with minor changes, e.g. with mouse-replacing gestures, but by proposing something completely new in terms of both hardware and software, which can actually give users the perception of working on a new product with peculiar characteristics that assist their daily work.

The philosophy of creating a new "object" has become more and more popular with an increasing number of companies providing devices of this type, and entities that propose hardware and software solutions of a certain importance.

Multimedia tables, totems, interactive video walls and operator consoles are the main devices provided by these companies, the best known and most active of which, in terms of device development and especially of software and development environments, are Ideum, based in the US, MultiTaction, based in Finland, Intuilab, based in France, U-Touch, based in the UK, and obviously Microsoft.

Unlike the major hardware-producing vendors, these companies provide a finished product, with software solutions.

Prior art products consist of combinations of hardware and software solutions and SDKs for development of specific applications. These mainly include software applications developed for digital signage or museum applications, or simply media viewers, allowing effective use of hardware potential.

Certain products are also developed for different, more professional users, such as GIS software for environmental management and control, supervision software, or highly complex software for airport control.

Nevertheless, these existing products are affected by considerable functional, design and conceptual drawbacks.

The generational shift of interfaces and the new approach for interaction with computer systems must also relate to professional work use, with the user being able to use the same gestures and the same movements both at home with his/her own mobile device and at work on the table of the control center.

On the one hand, the enterprise and corporate market demands advanced information management solutions; on the other, there is no adequate offer that can provide instruments therefor.

The currently available libraries and corresponding multi-touch applications have been developed for other purposes and are not suitable for professional environments, as they are unable to adequately cover the demands and limitations imposed in terms of both safety and functionality.

The object of the present invention is to provide a new generation of hardware-software systems for corporate use, which are aimed at increasing productivity, simplifying process management and affording a more intuitive and attractive user experience, as well as filling the void in the market, and providing an environment that allows fast development of applications for supporting the new forms of interaction and communication with external sensors, devices and systems.

Particularly, the object of the present invention is to provide an engine that meets the demands of companies and has the peculiar stability, safety and functions required for professional use.

This graphic engine is required to be able to readily create applications for complex systems, not only designed for digital signage or museum or recreational applications.

The invention fulfills the aforementioned objects by providing a graphic engine for creating and executing applications with multisensory interfaces, which graphic engine consists of middleware software lying between the operating system and the final application and comprises specialized libraries for implementing applications with real-time graphics, which graphic engine is characterized in that said graphic engine is georeferenced, i.e. addresses points and elements in space using geographic coordinates.

In a preferred embodiment, the graphic engine has at least one, or preferably multiple, libraries selected from the following group of libraries:

• 2D and 3D interface rendering,

• Ability of scaling the 2D/3D environment from a planetary-scale distance (6377000 m) to the focal length of the human eye (0.022 m),

• 3D camera management,

• Management of effects and animations on graphic elements based on physical simulations, to enhance depth and movement perception,

• Interaction with the interface through peculiar and general gestures (pan/zoom/rotate),

• Representation, management and handling of images,

• Representation, management and handling of charts and tables,

• Management of information panels and windows in 2D and 3D environments,

• Management of the most widespread input protocols such as WM_Touch, WM_Pen, Trackpad, MtDEv, Linux Kernel HID, TUIO and others,

• Representation of 2D/3D maps and 2D/3D georeferenced information contents,

• Drawing and representing points, markers, lines, polygons and 3D models on maps,

• Asynchronous management of data flows.

According to another feature, the graphic engine comprises supports for one or more of the following functions:

- Native support for physical measurement units (dimensioning of scalable graphic elements based on the device that contains the application),

- Native support for CSS,

- Native support for an interface definition language (XML),

- Cache system for managing graphic resources,

- An abstraction system for local and network resources,

- A system for real-time and non-real-time video flow representation,

- The ability of graphically superimposing multiple information layers (windows, panels, charts, etc.),

- The ability of interoperating with external systems using standard protocols (e.g. XML, SOAP, REST, JSON, HTTP POST, etc.),

- WMS map support,

- Vector map support,

- KML and DEM standard support,

- Tools for importing graphic assets from the most common graphic design and CAD software (Adobe Illustrator, Adobe Photoshop, Autodesk 3DS Max, etc.),

- Support for new-generation foreign I/O devices such as Google Glass, Oculus Rift, LeapMotion, etc.,

- Support for interaction with speech input and in-air gesture devices,

- Support for communication with iOS-, Android- and Windows Mobile-based devices,

- RFID and NFC support.

In accordance with a preferred embodiment, the graphic engine has been designed and developed directly on the basis of the OpenGL cross-platform graphics API as defined by the Khronos Group (http://www.khronos.org).

The graphic engine so obtained affords representation of data in a two/three-dimensional environment and of two/three-dimensional objects georeferenced by Latitude, Longitude and Elevation.

In a hardware/software device of the present invention, the graphic engine of the present invention forms the interface with I/O devices such as multi-touch systems, voice systems and in-air gesture systems.

Due to the above, the graphic engine of the present invention is a front-end system and a client that can represent three-dimensional environments or maps for use through advanced input systems and is further adapted to represent data from external sources.

In one embodiment, the architecture of the graphic engine of the present invention is modular and comprises six macro-areas dedicated to peculiar aspects of the framework, which communicate with one another during execution of the application, and consist of:

The "RESOURCE SUBSYSTEM" macro-area, which contains the engine modules required for loading external data resources from multiple sources, such as: file systems and local storage devices, through the LFS and pen drive modules, which use the Storage Controller of the operating system to obtain the interface with the hardware devices; network protocols, through the "Net" module, which uses the "Socket" modules of the system for interfacing with the distributed network services; and proximity devices, through the "NFC" module, which interfaces with the "Input Drivers" module of the operating system.

The "GEOSPATIAL SUBSYSTEM" macro-area, which contains the engine modules required for loading, converting and processing geolocation and cartographic information retrieved from any type of external data resources managed by the "RESOURCE SUBSYSTEM" or by the application itself. The abstraction of such resources is carried out by creating geometries expressed in geographic coordinates ("Geometry" module) from raw data, or by modules that process data in multiple standard geospatial formats, such as: KML/KMZ for reading KML and KMZ formats, Shapefile for reading ESRI Shapefile formats, WKB/WKT for reading WKB and WKT formats, GeoJSON for reading the GeoJSON format, WMS/WFS for reading data of WMS and WFS Web services, and DEM for reading geographic Digital Elevation Model (DEM) data expressed in the HGT format.

The "INPUT" macro-area has the purpose of collecting and processing user inputs to the application from peripheral devices of the system supported by the engine and exposed by the operating system through "Input Drivers" (40).

Particularly, the engine manages touch control and text input peripheral devices through the modules: "Keyboard" for managing text input from a physical keyboard, such module being implemented through the SDL library; "Mouse" for touch emulation using mouse peripheral devices, such module being implemented through the SDL library; MTDev for using Linux-native MultiTouch peripheral devices; "Tuio" for communication with peripheral devices using the TUIO protocol; "WM Touch" for using Windows-native MultiTouch peripheral devices; and "Leap Motion" for interfacing with Leap Motion input peripheral devices.

The "EVENT SUBSYSTEM" macro-area has the purpose of providing the synchronization and parallelism mechanisms required by the internal parts of the engine and by the logics of the applications being executed, starting from the "IPC" modules provided by the operating system. This area is composed of the following modules: "Async Operations", which manages all asynchronous operations and their synchronization with the main thread, which is the only one that allows graphic operations; "Task Scheduler", which provides time scheduling of single or repeated operations; and "Publish/Subscribe", which provides the primitives required to implement Publish/Subscribe architectural patterns for the application modules.

The "PRESENTATION LAYER" macro-area is the part that has the purpose of managing and rendering the graphic elements of the application, which are added to the graphic window of the "Window" module initialized through the SDL library. All the graphic parts in the engine are constructed on the standard, cross-platform "OpenGL" graphic APIs, which are exposed by the Display Driver of the operating system provided by the manufacturer of the graphic adapter.

The "PHYSICS" macro-area has the purpose of providing animation and interaction components among the graphic objects of the system, reflecting behaviors close to the real physics of naturally-occurring elements. The "Animation" module has the purpose of providing an animation system with paradigms and primitives similar to those used in computer graphics. The "Effect" module has the purpose of managing graphic "collisions" among objects and parts of the application surface, to react to given events dictated by the logics of the application itself. The "Kinetics" module has the purpose of providing inertia to the graphic objects, according to physical movement parameters similar to the real ones.
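
By way of a purely illustrative example, the following Python sketch shows the kind of inertia behavior such a "Kinetics" component may provide: after the user releases a graphic object, its velocity decays until it glides to a stop. The friction model, parameter values and function name are assumptions made for illustration only and do not reproduce the actual module.

# Illustrative sketch of kinetic inertia: after the user releases an object,
# its velocity decays with a friction factor so that it glides to a stop,
# mimicking real physical movement. Parameters are assumptions for the sketch.

def glide(position, velocity, friction=0.9, dt=1 / 60.0, min_speed=1.0):
    """Yield successive positions of a flicked object until it stops."""
    x, y = position
    vx, vy = velocity
    while (vx * vx + vy * vy) ** 0.5 > min_speed:
        x += vx * dt
        y += vy * dt
        vx *= friction          # exponential decay per frame
        vy *= friction
        yield (x, y)

# A panel flicked with an initial velocity of 600 px/s to the right.
trajectory = list(glide(position=(0.0, 0.0), velocity=(600.0, 0.0)))
print(f"{len(trajectory)} frames, final x = {trajectory[-1][0]:.1f} px")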

It will be appreciated from the above discussion that the graphic engine of the present invention logically lies between the application software and the operating system. Therefore, it can be considered as middleware required for execution of the application.

An application developed through the graphic engine of the present invention, and using its libraries and procedures, can only be executed if the graphic engine is present in the system.

The application uses the APIs to invoke commands contained in the libraries of the graphic engine, and such APIs in turn communicate with the operating system and its drivers, thereby actually executing the task.
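
Purely by way of illustration, the following Python sketch outlines this layering: the application calls only the engine's API, and the engine delegates to the operating system and its drivers. All class, method and parameter names are hypothetical and do not correspond to the actual libraries.

# Purely illustrative sketch of the layering described above: the application
# only calls the engine's API, and the engine in turn delegates to the
# operating system and its drivers. All names here are hypothetical.

class GraphicEngine:
    """Hypothetical middleware facade exposed to the application."""

    def draw_marker(self, latitude, longitude, elevation):
        # The real engine would resolve this call through its georeferenced
        # libraries and ultimately through OpenGL and the display driver.
        self._submit_to_os(f"marker at ({latitude}, {longitude}, {elevation})")

    def _submit_to_os(self, command):
        # Stand-in for the operating-system/driver layer.
        print("delegated to OS drivers:", command)

# Application code: it never talks to the operating system directly.
engine = GraphicEngine()
engine.draw_marker(43.7696, 11.2558, 50.0)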

Certain components, such as the geolocation component and libraries, e.g. the GIS libraries, also require a database for storage of the functional data of the graphic engine. The database may be of SQL or NoSQL type, provided that it has extensions supporting the aforementioned geolocation libraries, such as the GIS libraries.
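
As a purely illustrative sketch under stated assumptions, the following Python code stores and queries georeferenced POIs in a plain SQLite database from the standard library; the table layout and the bounding-box query are assumptions made for the example, whereas a production system would rely on a database with the GIS extensions mentioned above.

# Illustrative only: storing georeferenced POIs in SQLite (Python stdlib).
# A real deployment would use a database with GIS extensions, as stated above;
# the table layout and bounding-box query here are assumptions for the sketch.

import sqlite3

conn = sqlite3.connect("engine_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS poi (
        id        INTEGER PRIMARY KEY,
        label     TEXT,
        latitude  REAL,   -- decimal degrees, WGS84
        longitude REAL,   -- decimal degrees, WGS84
        elevation REAL    -- metres
    )
""")
conn.execute("INSERT INTO poi (label, latitude, longitude, elevation) "
             "VALUES (?, ?, ?, ?)", ("Firenze", 43.7696, 11.2558, 50.0))
conn.commit()

# Naive bounding-box query (a GIS extension would offer true spatial indexing).
rows = conn.execute(
    "SELECT label, latitude, longitude FROM poi "
    "WHERE latitude BETWEEN ? AND ? AND longitude BETWEEN ? AND ?",
    (43.0, 44.0, 11.0, 12.0)).fetchall()
print(rows)
conn.close()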

Therefore, the graphic engine of the present invention forms a software package, including libraries, databases and all the features required both for development of new applications and for execution of such applications.

These and other features and advantages of the present invention will appear more clearly from the following description of a few embodiments, illustrated in the annexed drawings, in which:

Fig. 1 shows a general block diagram of a device or a hardware-software product in which the graphic engine of the present invention is loaded and executed;

Figure 2 shows a block diagram of the architecture of an exemplary embodiment of the graphic engine of the present invention.

Referring to Figure 1, the graphic engine of the present invention is designated by numeral 1 and lies as middleware between the Input/Output systems and the operating system 2.

The Input/Output systems may be of any type and comprise one or more of currently known Input/Output devices, as well as possible future Input/Output systems and devices.

The figure shows, by way of example and without limitation, a tablet 3, a multi-touch screen 4, a multi-touch table 5 and an audio/video system.

Furthermore, the graphic engine 1 also has interfaces for external data provisioning/processing services, which are referenced Service 1, 2, 3 and N.

Before describing the graphic engine of the present invention in greater detail, in order to locate said engine in a system or a hardware/software device having the aforementioned multisensory features and using a NUI interface suitable for professional use, the general structure of this device should be outlined.

The integration of the graphic engine of the present invention will provide a multisensory device that is not available yet, and has the following basic parts:

Adequate hardware

Computer

Operating system

The graphic engine of the present invention

Application

Data Fusion

The form of the object shall depend on the purpose of the object itself. For example, a device designed to facilitate cooperation and decisions should be provided in the form of a table. Conversely, a device designed for supervision should be provided in the form of a wall mount monitor or panel. Alternatively, if the device is designed for use by an operator, it should preferably be in the form of a console, and so on.

The second basic part is the computer, which must be integrated and have a compact size, but is also required to ensure adequate performance, especially for video management.

The third element is the operating system, which has the task of managing the basic functions of the system and operates as an interface between the hardware and the application part.

In this case, the choice of the OS may be made according to customer-specific limitations, although the graphic engine of the invention is substantially of cross-platform type, and is hence able to run without distinction on all common operating systems.

The fourth element is the software component, i.e. the graphic engine of the invention, which is the basis of the applications that will be created, i.e. the fifth element.

Finally, the last essential element consists of the data, which may be retrieved from external systems and hence require a Data Fusion server component .

Concerning the graphic engine of the present invention, this component has the purpose of governing what is represented on screen and the interactions of users therewith.

In accordance with a preferred embodiment, the graphic engine has been designed and developed directly on the basis of the OpenGL cross-platform graphics API as defined by the Khronos Group (http://www.khronos.org).

Such engine has been implemented using, as a programming method in addition to OpenGL for graphic rendering, the language that is deemed to be most suitable for creating our graphic engine, i.e. the Python language for the scripting component and C++ for the graphic libraries.

The choice of Python as the main language is essentially dictated by the need for a light, dynamic, modern, simple and flexible programming language that supports both the object-oriented paradigm and structured programming, as well as many functional programming and reflection features. It also has a wide standard library, which makes it suitable for many uses, with the possible addition of further modules written in C, C++ or other languages. It is compatible with all platforms and, due to the simplicity of writing extensions in C and C++, it combines the language-specific simplicity with the high-level performance of the C and C++ languages.

Therefore, the provision of Python language code and the creation of libraries in C and C++ are a way to combine the simplicity and development speed of Python with the power and execution speed of C.
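
A minimal sketch of this Python/compiled-code combination is shown below, assuming a Unix-like system: Python loads a compiled C library through the standard ctypes module and calls into it, here using the C math library as a stand-in for the engine's own C/C++ graphic libraries, which would be loaded and invoked in the same way.

# Illustrative only: Python scripting calling into compiled C code via ctypes.
# The standard C math library stands in for the engine's own C/C++ graphic
# libraries; this assumes a Unix-like system where the library can be located.

import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))   # load the compiled library

# Declare the C signature of the function to be called: double sqrt(double).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

# The Python layer stays simple and readable; the computation runs in C.
print(libm.sqrt(2.0))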

The added value of the graphic engine of the present invention should be substantially found in the stability of the system and the implementation and integration of a 3D engine (created using OpenGL) in the graphic engine itself. This 3D engine allows data to be represented in a three-dimensional environment that can be addressed along three axes.

This three-dimensional environment allows the creation of 3-axis maps, with the inclusion of elevations, and the possibility of introducing three-dimensional elements and objects on the map, such as buildings or facilities, as well as the representation of POIs on coordinates composed of Latitude, Longitude and Elevation.
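
By way of illustration of this georeferenced three-dimensional addressing, the following self-contained Python sketch converts a point given as Latitude, Longitude and Elevation into Cartesian (x, y, z) coordinates using the standard WGS84 geodetic-to-ECEF formulas; it is a simplified example and does not reproduce the engine's own code.

# Illustrative only: converting a georeferenced point (Latitude, Longitude,
# Elevation) into Cartesian (x, y, z) coordinates with the standard
# WGS84 geodetic-to-ECEF formulas, so that it can be placed in a 3D scene.

import math

WGS84_A = 6378137.0                  # semi-major axis (m)
WGS84_F = 1 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, elev_m):
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature.
    n = WGS84_A / math.sqrt(1 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + elev_m) * math.cos(lat) * math.cos(lon)
    y = (n + elev_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - WGS84_E2) + elev_m) * math.sin(lat)
    return x, y, z

# A POI over Florence, 50 m above the ellipsoid.
print(geodetic_to_ecef(43.7696, 11.2558, 50.0))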

In graphic terms, OpenGL-based libraries have been created, which allow the generation of new elements without having to write directly in OpenGL code.

In one example of the preferred embodiment, the architecture of the graphic engine of the present invention, as shown in Figure 2, has a modular structure and lies between the application being executed (10) and the software libraries and the operating system.

The modules of the architecture are divided into six macro-areas dedicated to peculiar aspects of the framework, which communicate with one another during execution of the application.

The "RESOURCE SUBSYSTEM" macro-area (20) contains the engine modules required for loading external data resources from multiple sources, such as: file systems and local storage devices, through the LFS (210) and pen drive (211) modules, which use the Storage Controller (42) of the operating system to obtain the interface with the hardware devices; network protocols, through the "Net" module (212), which uses the "Socket" modules (43) of the system for interfacing with the distributed network services; and proximity devices, through the "NFC" module (209), which interfaces with the "Input Drivers" module (40) of the operating system.

Each type of resource that can be obtained from these sources is abstracted by means of a specialized "Resource Adapter" (208), which provides a standard interface for obtaining the high-level elements managed by the engine: Image (202) for managing raster images using the FreeImage library (31), Video (203) for managing audiovisual elements using the VLC library (38), Pdf (205) for managing PDF documents using the Poppler library (34), Text for managing text elements using the FreeType library (32), HTML (204) for managing documents and hypertext links using the Chromium Framework (30) and SSL (37) libraries, SVG (207) for managing vector image formats, and Raw Data (201) for all the other types that have no direct connection with the rendering component but require ad hoc processing by the application being executed. The latter are connected via the "Data Adapters" (200) to the graphic part of the engine, which allows proper display thereof according to criteria selected by the application being executed.
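
A minimal, purely illustrative sketch of such a "Resource Adapter" abstraction is given below; the class names, the registry keyed by file extension and the returned placeholder elements are assumptions made for the example and do not reproduce the actual modules of the engine.

# Illustrative sketch of a "Resource Adapter" abstraction: each adapter hides
# the underlying decoding library behind a common interface, so that the rest
# of the engine only deals with high-level resources. Names are hypothetical.

from abc import ABC, abstractmethod

class ResourceAdapter(ABC):
    """Common interface returned to the engine for any external resource."""

    def __init__(self, uri):
        self.uri = uri

    @abstractmethod
    def load(self):
        """Load and decode the resource into a high-level element."""

class ImageAdapter(ResourceAdapter):
    def load(self):
        # In the real engine a raster library (e.g. FreeImage) would be used;
        # here we only return a placeholder description of the element.
        return {"type": "image", "source": self.uri}

class PdfAdapter(ResourceAdapter):
    def load(self):
        # A PDF rendering library (e.g. Poppler) would be used here.
        return {"type": "pdf", "source": self.uri}

# Registry keyed by file extension; the resource subsystem would pick the
# appropriate adapter for each resource obtained from its sources.
ADAPTERS = {".png": ImageAdapter, ".jpg": ImageAdapter, ".pdf": PdfAdapter}

def open_resource(uri):
    ext = uri[uri.rfind("."):].lower()
    adapter_cls = ADAPTERS.get(ext)
    if adapter_cls is None:
        raise ValueError(f"No adapter registered for {uri}")
    return adapter_cls(uri).load()

print(open_resource("map_background.png"))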

The "GEOSPATIAL SUBSYSTEM" macro-area (25) contains the engine modules required for loading, converting and processing geolocation and cartographic information retrieved from any type of external data resources managed by the "RESOURCE SUBSYSTEM" (20) or by the application itself. The abstraction of such resources is carried out by creating geometries expressed in geographic coordinates ("Geometry" module (252)) from raw data, or by modules that process data in multiple standard geospatial formats, such as: KML/KMZ (253) for reading KML and KMZ formats, Shapefile (250) for reading ESRI Shapefile formats, WKB/WKT (254) for reading WKB and WKT formats, GeoJSON (255) for reading the GeoJSON format, WMS/WFS (256) for reading data of WMS and WFS Web services, and DEM (257) for reading geographic Digital Elevation Model (DEM) data expressed in the HGT format.
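
As an illustration of how one of these readers can hand geometries expressed in geographic coordinates to the rest of the subsystem, the following sketch parses a GeoJSON fragment with the standard json module; the Geometry container is a hypothetical stand-in for the engine's "Geometry" module (252).

# Illustrative sketch of a GeoJSON reader producing geometries expressed in
# geographic coordinates (longitude/latitude, as per the GeoJSON standard).
# The Geometry container below is a hypothetical stand-in for the engine's
# "Geometry" module.

import json
from dataclasses import dataclass

@dataclass
class Geometry:
    kind: str
    coordinates: list   # geographic coordinates (lon, lat [, elevation])

def read_geojson(text):
    doc = json.loads(text)
    geometries = []
    for feature in doc.get("features", []):
        geom = feature.get("geometry", {})
        geometries.append(Geometry(kind=geom.get("type", "Unknown"),
                                   coordinates=geom.get("coordinates", [])))
    return geometries

sample = '''{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [11.2558, 43.7696, 50.0]},
     "properties": {"name": "Firenze"}}
  ]
}'''

for g in read_geojson(sample):
    print(g.kind, g.coordinates)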

Before being transferred to the presentation part, this data may be converted using automatic cartographic projection techniques, or guided by the application, using the "GeoProcessing" module (251), which uses the PROJ.4 library (35) to make the required mathematical calculations. This module also has the purpose of implementing certain conventional geodetic measurements, such as route calculations and distances between geographic points.
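
A minimal sketch of the corresponding operations is shown below; it uses the pyproj bindings to the PROJ library rather than the engine's own "GeoProcessing" module (251), which is an assumption made purely for illustration and presumes the pyproj package is available.

# Illustrative sketch of cartographic projection and geodetic distance
# computation, here performed with the pyproj bindings to the PROJ library
# (the engine itself wraps PROJ.4 in its "GeoProcessing" module).

from pyproj import Transformer, Geod

# Project a geographic coordinate (WGS84) into Web Mercator for display.
to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = to_mercator.transform(11.2558, 43.7696)   # lon, lat -> metres
print("Web Mercator:", x, y)

# Geodetic distance between two geographic points (Florence and Rome).
geod = Geod(ellps="WGS84")
_, _, distance_m = geod.inv(11.2558, 43.7696, 12.4964, 41.9028)
print("Distance (km):", distance_m / 1000.0)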

The "INPUT" macro-area (21) has the purpose of collecting and processing user inputs to the application from peripheral devices of the system supported by the engine and exposed by the operating system through "Input Drivers" (40). Particularly, the engine manages touch control and text input peripheral devices through the modules: "Keyboard" (214) for managing text input from a physical keyboard, such module being implemented through the SDL library (36); "Mouse" (213) for touch emulation using mouse peripheral devices, such module being implemented through the SDL library (36); MTDev (217) for using Linux-native MultiTouch peripheral devices; "Tuio" (212) for communication with peripheral devices using the TUIO protocol; "WM Touch" (215) for using Windows-native MultiTouch peripheral devices; and "Leap Motion" (216) for interfacing with Leap Motion input peripheral devices.

All the data retrieved by these providers is processed and abstracted by the "Raw Input" module (211), which may transfer it upon request directly to the application (10) or may transfer it to the "Gesture" module (210), which represents the global multi-touch gesture recognition and gesture behavior disambiguation manager for all the elements upon which the user may exert his/her interaction on the window of the application being executed. This module has the purpose of recognizing, in the indistinct flow of user-generated touches, the specific behaviors that can be traced back to standard gestures recognized by the engine, and of propagating them to the appropriate elements of the application interface, thereby relieving the application of any type of control logic and making gesture recognition uniform across all the applications executed through the engine.
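
Purely as an illustration of the kind of disambiguation such a module performs, the following sketch derives pan, zoom and rotation from two successive pairs of touch points; the function name and the way results are reported are assumptions for the example and do not reproduce the engine's actual logic.

# Illustrative sketch of two-finger gesture recognition: from two successive
# pairs of touch points it derives pan (centroid shift), zoom (distance ratio)
# and rotation (angle difference). Names and thresholds are assumptions.

import math

def analyse_two_finger_gesture(prev, curr):
    """prev/curr: ((x1, y1), (x2, y2)) touch positions at two instants."""
    (p1, p2), (c1, c2) = prev, curr

    def centroid(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    pan = (centroid(c1, c2)[0] - centroid(p1, p2)[0],
           centroid(c1, c2)[1] - centroid(p1, p2)[1])
    zoom = distance(c1, c2) / max(distance(p1, p2), 1e-9)
    rotation = math.degrees(angle(c1, c2) - angle(p1, p2))
    return {"pan": pan, "zoom": zoom, "rotate": rotation}

# Fingers move apart and rotate slightly around their centre.
print(analyse_two_finger_gesture(((100, 100), (200, 100)),
                                 ((90, 95), (210, 110))))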

The "EVENT SUBSYSTEM" macro-area (22) has the purpose of providing the synchronization and parallelism mechanisms required by the internal parts of the engine and by the logics of the applications being executed, starting from the "IPC" modules (44) provided by the operating system. This area is composed of the following modules: "Async Operations" (220), which manages all asynchronous operations and their synchronization with the main thread, which is the only one that allows graphic operations; "Task Scheduler" (221), which provides time scheduling of single or repeated operations; and "Publish/Subscribe" (222), which provides the primitives required to implement Publish/Subscribe architectural patterns for the application modules.
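
As a purely illustrative sketch, the following Python code shows a minimal Publish/Subscribe primitive of the kind such a module exposes to the application modules; the class name and the topic strings are assumptions made for the example.

# Illustrative sketch of a Publish/Subscribe primitive of the kind exposed by
# the event subsystem to the application modules. Names are hypothetical.

from collections import defaultdict

class PubSub:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

bus = PubSub()
bus.subscribe("gesture/zoom", lambda factor: print("zoom by", factor))
bus.publish("gesture/zoom", 1.25)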

The "Presentation Layer" macro-area (23) is the part that has the purpose of managing and rendering the graphic elements of the application, which are added to the graphic window of the "Window" module (230) initialized through the SDL library (36). All the graphic parts in the engine are constructed on the standard, cross-platform "OpenGL" graphic APIs (33), which are exposed by the Display Driver (41) of the operating system provided by the manufacturer of the graphic adapter.

Management, processing and caching of graphic elements is divided in the engine into two main modules, i.e. "2D Canvas" (231) and "3D Scene Manager" (232), which have the purpose of managing the two-dimensional and three-dimensional graphic resources of the application, respectively. The two-dimensional graphic resources are also managed by the following specific modules: "Widgets" (233), which represent the extendable base interface elements offered by the engine; "CSS" (236), which manages customization and animation of these objects through CSS style sheets; "IDL" (234), which allows interface description through an XML-based language and which, together with the "CSS" module (236), provides the essential tools that allow the application development team to act on the graphic interface of the application without necessarily having software programming knowledge; and "Unit" (235), which manages the transformations of graphic measurement units and particularly the physical measurement units of hardware devices. Due to this aspect, the applications developed with the engine are able to adapt some or all of the graphic elements and user interactions therewith to multiple screens of different sizes and resolutions. Particularly, the possibility of using not only the virtual measurement units common to all graphic environments, such as pixels, allows direct use of real measurement units, such as centimeters, to define the dimensions of the graphic elements and of the surfaces designed for interaction by the user, for a homogeneous experience and ergonomics of the interface on devices of various shapes and sizes.
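
A simplified illustration of this physical-unit handling is shown below: a size expressed in centimeters is converted into pixels for a given display density, so that an interactive element keeps the same physical size on different screens. The function is an assumption-based stand-in for the "Unit" module (235).

# Illustrative sketch of physical-unit handling: converting a size expressed
# in centimetres into pixels for the actual display density, so that an
# interactive element keeps the same physical size on screens of different
# sizes and resolutions. A simplified stand-in for the "Unit" module.

CM_PER_INCH = 2.54

def cm_to_pixels(size_cm, dpi):
    """Convert a physical size in centimetres to pixels at the given DPI."""
    return round(size_cm / CM_PER_INCH * dpi)

# A 2 cm wide button rendered on two different displays.
for dpi in (96, 220):                      # desktop monitor vs. high-DPI panel
    print(f"{dpi} dpi -> {cm_to_pixels(2.0, dpi)} px")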

The three-dimensional elements directly relate to the "3D Scene Manager" module (232) which has the purpose of managing one or more three-dimensional scenes, the placement of objects therein, the management of the virtual camera framing the scene, the lights and any possible interaction by the user or the application with these scenes.
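
A minimal, purely illustrative sketch of such a scene-manager abstraction is given below; the classes and methods are hypothetical placeholders and do not reproduce the actual "3D Scene Manager" module (232).

# Illustrative sketch of a minimal scene-manager abstraction: a scene holds
# placed objects, lights and a virtual camera framing them. The classes are
# hypothetical placeholders for the engine's "3D Scene Manager".

from dataclasses import dataclass, field

@dataclass
class Camera:
    position: tuple = (0.0, 0.0, 10.0)
    target: tuple = (0.0, 0.0, 0.0)
    fov_degrees: float = 60.0

@dataclass
class SceneObject:
    name: str
    position: tuple          # Cartesian scene coordinates (x, y, z)

@dataclass
class Scene3D:
    camera: Camera = field(default_factory=Camera)
    objects: list = field(default_factory=list)
    lights: list = field(default_factory=list)

    def place(self, name, position):
        self.objects.append(SceneObject(name, position))

    def look_at(self, target):
        self.camera.target = target

scene = Scene3D()
scene.place("building_01", (120.0, 80.0, 0.0))
scene.look_at((120.0, 80.0, 0.0))
print(len(scene.objects), "object(s), camera target:", scene.camera.target)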

The "Presentation Layer" (23) has been specially designed to visually support a great number of two- and three-dimensional graphic elements (of the order of thousands).

The "Physics" macro-area (24) has the purpose of providing animation and interaction components among the graphic objects of the system, reflecting behaviors close to the real physics of naturally-occurring elements. The "Animation" module (240) has the purpose of providing an animation system with paradigms and primitives similar to those used in computer graphics. The "Effect" module (242) has the purpose of managing graphic "collisions" among objects and parts of the application surface, to react to given events dictated by the logics of the application itself. The "Kinetics" module (241) has the purpose of providing inertia to the graphic objects, according to physical movement parameters similar to the real ones.

The structure of the graphic engine of the present invention ensures the following functions:

2D and 3D scene rendering,

Ability of scaling the 2D/3D environment from a planetary-scale distance (6377000 m) to the focal length of the human eye (0.022 m),

Camera management in a 3D environment,

Management of effects and animations on graphic elements based on physical simulations, to enhance the perception of depth and movement,

Interaction with the interface through peculiar and general gestures (pan/zoom/rotate),

Representation, management and handling of images,

Representation, management and handling of charts and tables,

Management of information panels and windows in 2D and 3D environments,

Management of the most widespread input protocols such as for example WM_Touch, WM_Pen, Trackpad, MtDEv, Linux Kernel HID, TUIO and others,

Representation of 2D/3D maps and 2D/3D georeferenced information contents,

Drawing and representing points, markers, lines, polygons and 3D models on maps,

Asynchronous management of data flows.

Also, the graphic engine as described above provides the following supports:

Native support for physical measurement units (dimensioning of scalable graphic elements based on the device that contains the application),

Native support for CSS,

Native support for an interface definition language (XML),

Cache system for management of graphic resources,

Abstraction system for local and network resources,

System for representing real-time and non-real-time video flows,

The ability of graphically superimposing multiple information layers (windows, panels, charts, etc.),

The ability of interoperating with external systems using standard protocols (e.g. XML, SOAP, REST, JSON, HTTP POST, etc.),

WMS map support,

Vector map support,

KML and DEM standard support,

Tools for importing graphic assets from the most common graphic design and CAD software (Adobe Illustrator, Adobe Photoshop, Autodesk 3DS Max, etc.),

Support for new-generation foreign I/O devices, such as Google Glass, Oculus Rift, LeapMotion, etc.,

Support for interaction with speech input and in-air gesture devices,

Support for communication with mobile devices based on iOS, Android and Windows Mobile,

RFID and NFC support.

The characteristics of the graphic engine as described hereinabove are:

It is a cross-platform engine, and may be used on the following operating systems: Linux-based systems, Windows 7/8, OSX

It can be executed on medium-level computers with an x86 processor (minimum requirements: Intel i5, video adapter with OpenGL 4.4 support, 4 GB RAM).

It is compatible with multi-touch hardware manufactured by 3M, PQ Labs, Zytronic and Displax (these manufacturers release drivers for Linux, Windows and OSX)

It is compatible with the input protocols WM_Touch, WM_Pen, Mac OSX Trackpad, MtDEv and Linux Kernel HID

It is open to integration with other Input/Output peripheral devices, such as Google Glass, Oculus Rift, Leap Motion, iOS devices and Android devices

It is a real-time three-dimensional rendering engine

It allows positioning of elements and points in a two- and three-dimensional space using geographic coordinates

It can represent raster and vector maps

It can interpret user gestures

It has a series of communication channels toward external systems

It manages streaming data flows (audio and video streaming)

It manages and represents GPS data flows

It allows both single-user and multi-user interactions

It can interface with RFID, NFC and SCADA systems

It has a high reliability, stability and safety level.