Title:
NETWORK OF DEVICES FOR GENERATING COMMAND AND CONTROL SEQUENCES RELATED TO OBJECTS IN AN ENVIRONMENT, AND MANAGEMENT METHOD OF SAID NETWORK OF DEVICES
Document Type and Number:
WIPO Patent Application WO/2014/132275
Kind Code:
A1
Abstract:
Described is a network of devices for generating a command and control sequence which can be related to at least one object in an environment comprising video capturing elements positioned in the environment and designed to take partial video shots which can be associated with the object and/or an event involving the object, a management unit connected to the video capturing elements and/or to at least one object and designed to determine the command and control sequence at least as a function of the partial video shots and to direct it toward the object and/or to at least one video capturing element; the management unit is also designed to assign to all the objects positioned in the environment an identification parameter which is unique and independent of its position and/or orientation and/or movement in the environment and independent of its visibility/presence in a field of vision of a video capturing element.

Inventors:
BELLINI MARCO (IT)
LUONI MARCO (IT)
Application Number:
PCT/IT2013/000062
Publication Date:
September 04, 2014
Filing Date:
March 01, 2013
Assignee:
INTERNET ONE S R L (IT)
International Classes:
H04N7/18; H04N5/232; H04N21/218; H04N21/2187
Domestic Patent References:
WO2002003702A1 (2002-01-10)
Foreign References:
EP2150057A2 (2010-02-03)
US5363297A (1994-11-08)
Other References:
None
Attorney, Agent or Firm:
TARABBIA, Luigi (Viale Lancetti 17, Milano, IT)
Claims:
CLAIMS

1. A network of devices for generating command and control sequences which can be related to at least one object (2) positioned in an environment (3), comprising:

- a predetermined number of video capturing elements (3a) positioned, in a fixed or mobile position, in the environment (3) and designed to perform a respective predetermined number of partial video shots which can be associated with the object (2) and/or an event involving the object (2); and

- a management unit (4) operationally connected to the video capturing elements (3a) and/or to at least one object (2), the management unit (4) being designed to determine the command and control sequence at least as a function of a predetermined number of partial video shots performed by the video capturing elements (3a) and being also designed to direct the command and control sequence to at least one of the objects (2) and/or to at least one of the video capturing elements (3a),

characterised in that the management unit (4) is designed to assign to at least one object (2), and preferably to all the objects (2) positioned in the environment (3), an identification parameter which is unique and independent of its position and/or orientation and/or movement in the environment (3) and independent of its visibility/presence in a field of vision (3b) of a video capturing element (3a).

2. The network according to claim 1, wherein at least one video capturing element (3a) comprises a video camera defining a corresponding field of vision (3b) configurable according to a predetermined number of filming parameters, the filming parameters preferably comprising:

- a switched ON condition, wherein a single video capturing element (3a) is designed to generate a partial video shot, and/or a switched OFF condition wherein, on the contrary, a single video capturing element (3a) is not able to generate a partial video shot; and/or

- a position in the environment (3) which is fixed or variable in terms of time/space; and/or

- a translational and/or rotational movement of a vision axis (3c) of the video camera and/or of a machine body of the video camera defining the vision axis (3c) relative to the stationary position of the video camera; and/or

- an angular amplitude of the field of vision (3b); and/or

- an enlargement factor inside the field of vision (3b); and/or

- a factor for focusing on an object (2) or on a group of objects (2) preferably under conditions of mutual proximity inside the field of vision (3b); and/or

- factors of illumination and/or chromatic contrast and/or chromatic saturation and/or chromatic filtration inside the field of vision (3b),

the command and control sequence being determined in its entirety by the management unit (4), preferably in an automatic manner, for defining a video sequence defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements (3a), the video capturing elements (3a) being yet more preferably controlled by the command and control sequence as a function of positions and/or events related to at least one object in the environment in which the network (1) operates.

3. The network according to claim 1 or 2, wherein the management unit (4) comprises:

- input transmission means (4a) designed to transfer the partial video shots from the video capturing elements (3a) to the management unit (4);

- unique means (4b) of assigning identification parameters to at least one, and preferably to each object (2) in the environment (3); and

- output transmission means (4c) designed to send the command and control sequence to the video capturing elements (3a).

4. The network according to claim 3, wherein there are also measuring means operationally associated with the management unit (4) and designed to measure status parameters of at least one object (2), and preferably designed to transfer the status parameters to the management unit (4), the status parameters yet more preferably comprising:

- an absolute position of a single object (2) and/or relative positions of several objects (2) in the environment (3); and/or

- a length of time one or more objects (2) remain in the absolute position and/or in the relative positions starting from an initial time, the initial time preferably corresponding to a start time of the command and control sequence;

- an absolute linear speed and/or a speed along a trajectory of a moving object and/or relative speed between several objects (2) in the environment (3); and/or

- an absolute acceleration, either linear, centrifugal or along a trajectory of a moving object (2) and/or relative acceleration between several objects (2) in the environment (3); and/or

- an absolute time of remaining along a trajectory of an object (2) or times of relative detachment/separation between several moving objects (2) in the environment (3); and/or

- one or more physical quantities, preferably measurable in quantitative terms, related to an operational status of one or more objects (2) and/or of one or more structural/functional sub-components of an object (2) in the environment (3), the physical quantities being preferably static or dynamic forces or temperatures or pressures or speeds of translation/rotation or strokes/elongations/angulations of joints/chain mechanisms/sub-components of an object (2).

5. The network according to any one of the preceding claims, wherein the management unit (4) also comprises processing means designed for:

- receiving as input the status parameters of one or more objects (2) and/or the filming parameters of one or more video capturing elements (3a);

- generating a matrix of truth/probability values of statuses and/or events which can be related to the status parameters and/or the filming parameters; and

- defining the command and control sequence as a function of at least one comparison between the truth/probability matrix values and a corresponding series of truth/probability reference values.

6. The network according to any one of the preceding claims, wherein there are also display means (5) designed to provide, in real time or on demand, an overall video sequence generated by applying the command and control sequence to the video capturing elements (3a), the display means preferably comprising a multimedia terminal and/or a data storage unit yet more preferably associable with the multimedia terminal and/or designed to store the command and control sequence.

7. The network according to any one of the preceding claims, wherein there are also feedback means which can be directly activated on the object (2) to vary positioning, dynamic or functional statuses as a function of the command and control sequence, the feedback means comprising:

- actuators designed to vary the filming parameters of the video capturing elements (3a) and/or the status parameters of the object (2); and/or

- actuators designed to display operational instructions to a user which can be associated with the object (2) and/or with operational sub-units of the object (2) designed to perform multi-sensory interface functions with the user.

8. A method for managing and controlling an object (2) positioned in an environment (3), the management and control method being preferably implementable using a network of devices preferably according to any one of the preceding claims, the method comprising the following steps:

- performing a predetermined number of partial video shots which can be associated with a presence of the object (2) and/or at least one event related to the same object (2) in a field of vision (3b) of a video capturing element (3a) in the environment (3); and

- forming a command and control sequence which can be sent to the object (2) and/or the video capturing element (3a),

characterised in that the step of forming the command and control sequence comprises a sub-step of identifying the object (2) uniquely, and preferably in an automatic fashion, irrespective of its position and/or orientation and/or movement and/or visibility in the environment or in the field of vision (3b).

9. The method according to claim 8, wherein there are also the following sub-steps:

- measuring status parameters of at least one object (2) and/or filming parameters of at least one video capturing element (3a) in the environment (3);

- generating a matrix of truth/probability values of statuses and/or events which can be at least related to the status parameters and/or the filming parameters; and

- defining the command and control sequence as a function of at least one comparison between the truth/probability matrix values and a corresponding series of truth/probability reference values.

10. The method according to claim 8 or 9, wherein the environment (3) comprises a route occupied by and/or which can be passed along by a predetermined number of moving persons and/or vehicles, the route being preferably closed and the status parameters yet more preferably comprising:

- one or more cardinal or ordinal positions relative to the persons and/or vehicles with respect to a direction of travel along the route and/or with respect to an initial instant of time of occupation of the route;

- speeds and/or accelerations, absolute and/or relative to the persons and/or vehicles with respect to the route;

- absolute distances of the persons and/or of the vehicles in the route with respect to an initial point of starting travel along the route;

- relative spatial or temporal distances between the persons and/or between the vehicles present in the route; and/or

- average or absolute travel times, preferably classified in quantitative order starting from a minimum time, relative to the travel along one or more turns performed on the route by the persons and/or the vehicles;

- physical/structural/functional parameters of the persons and/or of the vehicles in the route and/or of their structural or functional parts or sub-units;

- proximity or positional correspondence of the persons and/or of the vehicles in positions or predetermined portions of the route; and/or

- environmental factors relative to one or more points or portions of the route,

the command and control sequence defining a video sequence defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements (3a), the video capturing elements (3a) being controlled by the command and control sequence as a function of positions and/or events related to at least one object (2) in the environment in which the network (1) operates.

11. The method according to claim 10, wherein there is also a step of transmitting/displaying the video sequence and/or one or more groups of partial video shots on one or more multimedia terminals, and wherein there is preferably a step of storing the video sequence and/or the groups of partial shots in a data storage unit.

12. The method according to claim 11, wherein there is also a step of selecting, in an automatic fashion or according to arbitrary parameters of an operator, the video sequence or the groups of partial video shots prior to the step of transmitting/displaying the video sequence and/or the groups of partial video shots.

13. The method according to claim 10, wherein there is also a step of generating feedback, correction or control instructions which can be transmitted to at least one person and/or to a vehicle in the route as a function of the measurement of additional status parameters, the additional status parameters preferably comprising:

- a deviation of the person or of the vehicle from an optimum predetermined position or from a predetermined optimum passage time or from a predetermined optimum speed at one or more predetermined points on the route; and/or

- a relative or absolute speed of rotation value of at least one wheel of the vehicle with respect to a relative or absolute speed of rotation reference value of the at least one wheel related to one or more predetermined points on the route; and/or

- one or more values which can be related to an actuation of one or more commands of the vehicle with respect to reference actuation values related to one or more predetermined points on the route.

14. The method according to claim 8 or 9, wherein the environment (3) comprises a logistics area for goods storage, the status parameters preferably comprising:

- a quantity of objects contained entirely in the logistics area; and/or

- an absolute position of an object or of a group of objects in the logistics area; and/or

- a relative position of an object with respect to another object in the logistics area; and/or

- a weight or a quantity of goods contained in an object in the logistics area; and/or

- one or more parameters which can be related to a movement of one or more objects in the logistics area; and/or

- a parameter which can be related to an open or closed status of an object in the logistics area, the object being preferably a package; and/or

- a position and/or a movement and/or a presence in a field of vision (3b) of a video capturing element (3a) of an object not associated with a unique identification parameter, the object not associated with a unique identification parameter being preferably an undesired operator inside the logistics area,

the command and control sequence defining a video sequence defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements (3a) and/or defining instructions for movement/deviation or maintaining the position of at least one object (2) inside the logistics area.

15. The method according to claim 8 or 9, wherein the environment (3) comprises a civil/urban area for stationary positioning and/or passage of users and/or vehicles and preferably equipped with a multiplicity of filming and/or surveillance video cameras, the status parameters preferably comprising:

- a quantity of objects contained entirely in the civil/urban area; and/or

- an absolute position of an object or of a group of objects in the civil/urban area; and/or

- a relative position of an object with respect to another object in the civil/urban area; and/or

- one or more parameters which can be related to one or more users and/or vehicles in the civil/urban area; and/or

- a parameter which can be related to an intersection or a geometrical/topological approach between trajectories of movement of two or more users and/or vehicles inside the civil/urban area; and/or

- a position and/or a movement and/or a presence in a field of vision (3b) of a video capturing element (3a) of a user and/or of a vehicle not associated with a unique identification parameter, the object not associated with a unique identification parameter being preferably an undesired or unexpected vehicle inside the civil/urban area,

the command and control sequence defining a video sequence defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements (3a) and/or defining instructions for movement/deviation or maintaining the position of at least one user inside the civil/urban area, the instructions for movement/deviation or maintaining the position being yet more preferably sent to the user through an object (2) comprising a multi-sensory interface.

Description:
NETWORK OF DEVICES FOR GENERATING COMMAND AND CONTROL SEQUENCES RELATED TO OBJECTS IN AN ENVIRONMENT, AND MANAGEMENT METHOD OF SAID NETWORK OF DEVICES

This invention relates to an integrated system, both at hardware and software level, designed to generate a video sequence related to one or more objects which move inside an environment, as well as a method for the management and the dynamic construction of the video sequence over time.

As is known, in various operational contexts it is necessary to "follow" one or more objects with video shots: these video shots can serve various purposes, such as, for example, tracking and recording the movements of packages (boxes, suitcases and so on) inside rooms designed for the storage or circulation of goods and/or passengers (stores, railway stations, airports and the like) or constructing a television or multimedia screen which displays and provides to an audience a "coverage" of an event (for example, a sports competition held on a given track).

In order to guarantee the video shots, the prior art systems are generally based on a plurality of video cameras, either automatic or controlled by human operators, located at suitable points of an environment (in which the video shots must be taken): these video cameras are generally controlled by a control centre comprising an operator-director or a centralised processing unit; depending on the type of control station (human or "automatic"), verbal/manual or automated commands are given to the video cameras, which can perform a predetermined series of movements and/or variations of the field of filming according to the various filming needs.

For example, there are prior art technical systems for monitoring environments in which centralised software analyses the image filmed by one or more video cameras and identifies particular shapes or forms, the presence of which (or deviations in the general image) is then measured in the shots of one or more video cameras positioned in different sub-environments (for example communicating with each other) or at different points of the same environment.

However, these automated systems have considerable operational limitations due to the fact that the analysis of the image, even though it is made possible by the considerable processing power of modern computers, is always linked to an identification, which is not perfectly unique and certain, of the object which has generated a certain shape/form in the image taken by the video camera: moreover, this object, in the real world, could radically change its angulation with respect to the video camera, thus radically changing its analysed shape/form and, therefore, in effect "disappearing" from the software recognition (and this is even more likely if, when moving, the object shifts to a point of the environment which is not covered/reachable by the fields of vision of the video cameras).

On the other hand, the filming systems coordinated by a human director suffer from considerable operational limitations due to the considerable workload of the "central" operator, who must not only select which video cameras to operate or, in any case, "select" to receive the video sequence, but must also set up, often in a completely random or intuitive manner, a predetermined sequence of changes of video cameras and/or movements or "effects" of a single video camera to give the generated video the necessary aesthetic/artistic qualities which make it pleasing to the eyes of the public.

Moreover, the video filming systems (whether automatically or manually controlled) are not generally able to create in real time an edited and "selected" film which, on the one hand, can be completely descriptive of the movements of an object (or of a group of objects), and, on the other hand, can be optimised in terms of "concentration" on possible crucial events linked to the movement of one or more objects in the environment covered by the video cameras.

As a first example of the above, one can consider the case in which it is necessary to follow or record the movement of a single particularly critical package (in terms of contents/value of the goods) inside a given area in which other packages of similar form/shape move simultaneously: the prior art systems for analysing the image are not able to recognise particularly critical situations, such as for example, the unexpected/undesired approach of a person to the package.

As a further example, one can consider live filming of a motor racing competition on a track: the director, whilst having various video cameras located at different points of the track, is not always able to know well enough in advance if and when (and at what point of the track!) the interesting situations for the spectators are developing, such as, for example, a driver who is approaching another for overtaking or is significantly increasing his speed...rather than a situation of prolonged stability of the relative positions between the drivers (the latter event generating little interest and which may even be filmed for too long).

The aim of this invention is therefore to provide a network of devices for video filming related to one or more moving objects (as well as a method for managing and controlling the network) which is able to overcome the drawbacks mentioned above and which achieves a greater degree of operational flexibility, efficiency in recognition of the moving objects and a much greater degree of versatility and readiness in making video sequences characterised by a high degree of visual/cinematographic processing and/or care in displaying both "static" visual details of the images and "dynamic situations" linked to one or more significant events which occur in the environment in which the video cameras are operating.

More specifically, the aim of the invention is to provide a network of devices, and a relative control and management method, which can allow an identification with an extremely high degree of unequivocalness of the single objects moving in an environment, irrespective of their kinematic situations and/or their number, and which can exercise this main function in fields of application which differ greatly from each other in terms of general or "application" characteristics (ranging, for example, from the control of rooms or stores in the logistics field to the environmental safety control in stationary or transient fields in civil or military/militarised sectors, to the creation and distribution in real time of films relative to sports/athletics events such as, for example, competitions in a closed circuit or along an indoor and/or outdoor track).

Another aim of the invention is to provide a network of devices, with a related control and management method, which is able to compose and edit, in an extremely adaptable and fast manner with respect to the performance times of the events filmed in the environment in which the network operates, video sequences characterised by a large variety of shots on one or more moving objects, as well as an equally large variety of static (or "photographic") and dynamic (or "directorial") visual effects and choices relative to so-called "video editing".

These and other aims are achieved, in the spirit of this invention, by a network of devices according to one or more of the claims appended hereto, which is denoted by the reference numeral "1" in the attached drawings (provided merely by way of example without restricting the scope of the inventive concept), wherein Figure 1 is an overview diagram of the network positioned in an environment having in turn one or more moving objects inside it.

Inside the network of devices 1 according to this invention it is possible to distinguish, in the embodiment shown here by way of an example, one or more objects 2 (having the most differing possible natures, as described in more detail below): these objects 2 can conveniently be found moving in the environment 3, even if, according to specific requirements at the time, one or more of these objects can be substantially stationary/immobile (and, according to their position and/or the time they remain in the environment, they can be displayed in one or more "partial video shots" generated by this invention in association with the objects 2 and/or with one or more "events" which involve at least one object 2).

From the point of view of its main functional components, this network therefore comprises at least one video capturing element 3a, which can be arbitrarily located in a certain position (fixed or mobile) in the environment 3: the video capturing element 3a is designed to perform the previously mentioned "partial video shot" which can be associated with the object 2.

The network 1 also comprises a management unit 4 operationally connected to the video capturing elements 3a and/or to at least one object 2 (according to methods described in more detail below): the management unit 4 is designed to determine a so-called "command and control sequence" at least as a function of a predetermined number of partial video shots performed by the video capturing elements 3a and it is also designed to direct the above-mentioned command and control sequence to at least one of the objects 2 and/or to at least one of the video capturing elements 3a.

Advantageously, in this invention the management unit 4 is designed to assign to at least one object 2, and preferably to all the objects 2 in the environment 3, an identification parameter which is unique and independent of its position and/or orientation and/or movement in the environment 3: at the same time, the unique identification parameter is independent of the visibility/presence of the object 2 in a field of vision 3b of one or more video capturing elements 3a operating in the environment 3.
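
Purely by way of illustration, and not as part of the claimed subject-matter, the following minimal Python sketch shows one way a management unit of this kind might assign identification parameters that survive any change of position, orientation, movement or visibility; every name in it is an assumption of this sketch, not of the patent:

```python
import uuid


class ManagementUnit:
    """Toy stand-in for the management unit (4); illustrative only."""

    def __init__(self):
        self._registry = {}  # unique identification parameter -> object record

    def register_object(self, descriptor: dict) -> str:
        # The identifier is generated once and never depends on position,
        # orientation, movement or visibility in any field of vision (3b).
        object_id = str(uuid.uuid4())
        self._registry[object_id] = dict(descriptor)
        return object_id

    def update_status(self, object_id: str, status: dict) -> None:
        # Status parameters may change freely; the identifier stays the same.
        self._registry[object_id].update(status)


unit = ManagementUnit()
car_id = unit.register_object({"kind": "vehicle", "race_number": 46})
unit.update_status(car_id, {"position": (120.4, 88.1), "visible": False})
```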

In order to maximise the application flexibility, this invention can naturally comprise more than one video capturing element 3a, or, in any case, one or more sensory/environmental measuring elements (based, for example, on measuring the mass, acoustic, temperature or electromagnetic induction effects, without necessarily being linked to a video shot) which are able to measure and film or at least "follow" one or more objects 2 in the environment 3: in that case, the generation of partial video shots can operate alongside, or as an alternative to, the generation of tracking of presence and/or movement and/or events linked to signals measured by the above-mentioned sensory/environmental measuring elements (and, in parallel, the command and control sequence can be constructed on the basis of the presence, movement or "event" signals collected by the sensory/environmental measuring elements and by the unique identification parameters assigned to the objects 2 and/or to the video capturing elements 3a).

According to the operational needs at the time, the management unit 4 can also be designed to generate a so-called "overall video flow or sequence" using a suitable combination of partial video shots taken by the elements 3a: unlike the prior art devices (compared, for example, with the automated systems for analysis of the image already operating in certain contexts linked to environmental security), such an overall video flow or sequence will be generated independently from the visibility/presence of one or more objects 2 in one of the possible fields of vision 3b of the video capturing elements 3a.

In other words, and unlike prior art systems, this invention is able to control in feedback at least the video capturing elements 3a (or even, if present, the sensory/environmental measuring elements) in such a way as to generate an overall video flow or sequence which has the capacity of describing in the most complete possible manner all the possible "significant events" that can happen to all the objects 2 in the environment 3: this functionality constantly provides information (to an end user, who can be, for example, a television spectator or a monitoring station operator) on all the objects, while at the same time remaining "concentrated" only on the events truly of interest.

Looking at the structural details, it may be observed that the management unit 4 functionally serves the video capturing elements 3a and is also designed to associate a plurality of unique identification parameters with a corresponding plurality of objects 2 present in the environment 3: for this latter purpose, the management unit 4 can be equipped with suitable sensors and/or detectors and/or transmitters, which are able to communicate (one-way or two-way) with the single objects 2 according to various operating modes.

By way of an example of the above, the management unit 4 can be operatively associated with radio and/or GPS communication means coexisting on the unit 4 and/or on one or more of the objects 2 (or even on the video capturing elements 3a): the communication means send and/or receive unique information on the objects 2 to/from the unit 4, in such a way as to allow unambiguous identification.

With regard to the above-mentioned unique information, this can be selected in a totally arbitrary fashion according to the requirements at the time: for example, it may be necessary to communicate (both transmitting and receiving) a unique alphanumeric code assigned to a single object, or it will be possible to communicate speed, position and any other geometrical or kinematic/dynamic parameter linked to the movement or to the presence of a single object 2 in a predetermined part of the environment 3 in which the network 1 operates.

From the physical point of view, the management unit 4 can be located as desired, according to the installation possibilities offered by the environment and/or the degree of complexity to be managed: for example, if the network 1 is dedicated to a single object moving in an environment (for example, a single vehicle performing tests "behind closed doors" on a test track or involved in the filming of promotional videos), the unit 4 can be housed directly on the object/moving vehicle, and could also be equipped with integrated sensors and/or GPS signal management; alternatively, if the unit 4 is to have a higher processing and storage capacity, it can be configured as a stand-alone unit in a fixed position in the environment 3 and above all "physically separated" from the moving objects (for example, there could be in this case a processing unit acting as management unit 4 positioned in a suitable area of a civil/building complex, which communicates via radio or using suitable technical means with the video cameras and with the objects moving within this complex: each of the objects will therefore have installed on-board a sub-system of sensors transmitting positional/GPS type data, which is sent to the unit 4 and processed in the most suitable manner).

Now focussing attention on the video capturing elements 3a, it may be seen how these comprise a video camera defining a corresponding field of vision 3b configurable according to a predetermined number of filming parameters: in turn, the filming parameters can be communicated, in a two-way direction, between the video cameras and the management unit 4 and they can comprise:

- a switched ON condition (wherein a single video capturing element 3a is designed to generate a partial video shot);

- a switched OFF condition (wherein a single video capturing element 3a is not able to generate a partial video shot);

- a stationary position, fixed or variable temporally/spatially in the environment 3 (to take into account the various filming requirements: for example, a video camera mounted on rails which moves parallel to a part of a route or along one or more walls of a room);

- a translational and/or rotational movement of a vision axis 3c of the video camera and/or of a machine body of the video camera defining the above-mentioned vision axis 3c relative to the stationary position of the video camera (for example, the possibility of rotating the video camera about various axes, to achieve, for example, an "overturning" effect of the image);

- an angular amplitude of the field of vision 3b (for example, related to the possibility of selectively actuating, on a single video camera, various types of lenses or focal optics);

- an enlargement factor inside the field of vision 3b (for example, to change from filming with high zoom to panoramic shots);

- a factor for focusing on an object 2 or on a group of objects 2 (which, for example, can be under conditions of mutual proximity inside the field of vision 3b of a video camera);

- factors of illumination and/or chromatic contrast and/or chromatic saturation and/or chromatic filtration inside the field of vision 3b (which can be related to the possibility of varying the visual/aesthetic impact of the video shot or which can be set, manually or automatically, to improve the sharpness of the images under particular filming conditions).

On the basis of these possible filming parameters, the command and control sequence is determined in its entirety by the management unit 4, conveniently in an automatic manner, for defining a video sequence (referred to above with the name "overall video flow or sequence") defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements 3a: in other words, the video capturing elements 3a are controlled by the command and control sequence as a function of positions and/or events related to at least one object 2 in the environment 3 in which the network 1 is operating.
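
The filming parameters and the command and control sequence lend themselves to a simple data representation; the sketch below is one hedged possibility, with all field names assumed for illustration and chosen only to mirror the parameters listed above:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class FilmingParameters:
    powered_on: bool = True                      # switched ON / switched OFF condition
    position: Tuple[float, float] = (0.0, 0.0)   # fixed or time/space-variable position
    pan_tilt: Tuple[float, float] = (0.0, 0.0)   # movement of the vision axis (3c)
    field_of_vision_deg: float = 60.0            # angular amplitude of the field (3b)
    zoom: float = 1.0                            # enlargement factor
    focus_target: Optional[str] = None           # object (2), or group, to focus on
    illumination: float = 1.0                    # illumination/contrast/saturation factor


@dataclass
class CommandAndControlSequence:
    # Ordered (camera id, filming parameters) steps; applying them in order
    # to the video capturing elements yields the overall video flow.
    steps: List[Tuple[str, FilmingParameters]] = field(default_factory=list)

    def direct_to(self, camera_id: str, params: FilmingParameters) -> None:
        self.steps.append((camera_id, params))


sequence = CommandAndControlSequence()
sequence.direct_to("cam-3", FilmingParameters(zoom=4.0, focus_target="car-46"))
sequence.direct_to("cam-7", FilmingParameters(powered_on=False))
```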

By having unique identification parameters of the objects, data related to the position and movement of the object 2 and filming parameters, this invention is able to determine in its entirety (or as it is known in the television/cinematographic trade, to "edit" and "compose") the video flow or sequence using the automated functions nested in the management unit 4, even though, when necessary, it is possible for an operator to intervene manually in the composition/editing of the video flow or sequence.

For example, the management unit 4 can reversibly configure one or more video capturing elements 3a in such a way as to change their status between the switched ON and the switched OFF conditions: in this way, it is possible to "follow" predetermined objects 2 (or groups of objects 2) in the environment 3 without necessarily having to receive all the partial video shots generated in real time by all the video capturing elements 3a.

However, this invention can keep all the video capturing elements 3a in the switched ON condition, whilst rejecting from the editing/composition of the entire video sequence those partial video shots which are considered "negligible" as a function of the various parameters (unique identification of the object(s), position/movement of the object(s) 2 and/or filming parameters) associated on each occasion with the partial video shots.

From the structural point of view, the management unit 4 comprises in particular input transmission means designed to transfer the partial video shots from the video capturing elements 3a to the management unit 4: these means can comprise suitable signal emitters connected to the video cameras and a receiver connected to the management unit 4, with, if necessary, sub-units for converting the analogue/digital or digital/analogue signal, according to the requirements at the time.

In the management unit 4 there are also suitable means for assigning unique identification parameters to at least one, and preferably to each object 2 in the environment 3: as described above, the unique assigning means can comprise GPS transmitters and/or data/parameter detectors associated with the single objects 2 (speed or acceleration sensors and so on, as will be described in more detail below), which can send the measured data, using suitable transmission sub-units, to the unit 4 for processing.

In order to suitably control the video cameras served by the network 1, there are also suitable output transmission means, which are designed to send to the video capturing elements 3a and/or to the objects 2 the command and control sequence, in the form of suitable signals (which, for example, can be combination and/or selection and/or variation commands of filming parameters): these signals are created by the management unit 4 by suitable processing, which will be described in more detail below.

If the requirements at the time need a higher degree of functional interaction, the transmission of the partial video shots (typically one-way from the video cameras to the management unit 4), the exchange (one-way or two-way) of the identification parameters of the objects 2 and the sending (preferably one-way from the management unit 4 to the objects 2 and/or to the video capturing elements 3a) of the command and control sequence can be performed by transceiver means shared between the various functional sub-components of the management unit 4 as described in the previous three paragraphs...even though, alternatively, each of these functional sub-components can have its own separate and dedicated transceiver network.

In order to guarantee the maximum degree of discrimination between the various objects 2, and to collect the greatest quantity of data necessary for processing the command and control sequence, there are, advantageously, measuring means functionally associated with the management unit 4: these measuring means are designed to measure status parameters of one or more objects in the environment 3 and they can also be configured to transfer the above-mentioned status parameters to the management unit 4.

In other words, according to this invention, it is possible that the so-called "unique identification" of the single objects 2 in the environment 3 is not linked to the assignment of the so-called unique identification parameter, but is performed on the basis of the set of status parameters of the objects 2: unless physically very improbable situations occur (such as, for example, a more or less "accidental" interpenetration of two objects 2, or unless there are errors in the determination of the position), it is possible to distinguish different objects at least due to the fact that their position coordinates will never coincide at a particular moment in time.

As a further alternative, it is possible, on the basis of the comparison of one or more status parameters linked to different objects 2, that the management unit 4 assigns differing unique identification parameters to the objects 2: these unique identification parameters can be transmitted to the objects 2 or can be stored in the management unit 4, which will periodically re-perform the comparisons of the status parameters coming from the objects to maintain or update the assignment of the unique identification parameters.
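
A minimal sketch of this fallback identification strategy, assuming position reports as (x, y) pairs and a nearest-neighbour match; the threshold and all names are illustrative, not prescribed by the patent:

```python
import math


def match_reports_to_ids(known: dict, reports: list, max_jump: float = 5.0):
    """known: id -> last (x, y) position; reports: list of new (x, y) readings.
    Returns a mapping report index -> id by nearest-neighbour matching."""
    assignment = {}
    free_ids = set(known)
    for i, (x, y) in enumerate(reports):
        best_id, best_dist = None, max_jump
        for object_id in free_ids:
            kx, ky = known[object_id]
            dist = math.hypot(x - kx, y - ky)
            if dist < best_dist:
                best_id, best_dist = object_id, dist
        if best_id is not None:
            assignment[i] = best_id
            free_ids.discard(best_id)  # positions never coincide, so keep it 1:1
    return assignment


ids = {"obj-A": (0.0, 0.0), "obj-B": (10.0, 0.0)}
print(match_reports_to_ids(ids, [(0.4, 0.1), (9.8, 0.2)]))
# {0: 'obj-A', 1: 'obj-B'}
```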

From the qualitative/quantitative point of view, the status parameters with which the management unit 4 can perform its processing functions can conveniently comprise:

- an absolute position of a single object 2 and/or relative positions of several objects 2 in the environment 3;

- a length of time one or more objects 2 remain in the above-mentioned absolute position (and/or in the relative positions) starting from an initial time, which in turn can correspond to a start time of the video flow or sequence;

- an absolute linear speed and/or a speed along a trajectory of a moving object (and/or relative speed between several objects 2) in the environment 3;

- an absolute acceleration, either linear, centrifugal or along a trajectory of a moving object 2 (and/or relative acceleration between several objects 2) in the environment 3;

- an absolute time of remaining along a trajectory of an object 2 (or times of relative detachment/separation between several objects 2) moving in the environment 3;

- one or more physical quantities (typically measurable in quantitative terms) related to an operational status of one or more objects 2 and/or of one or more structural/functional sub-components of an object 2 in the environment 3.

With particular reference to the last of the possible status parameters listed above, it can be seen that these can comprise physical quantities describing static or dynamic forces (for example, an aerodynamic pressure on a moving vehicle, or the weight of fuel in a tank, or the weight or quantity of objects inside a package), temperatures, pressures, translation/rotation speeds (for example, engine revolutions or the angular speed of one or more wheels of a vehicle) or strokes/elongations/angulations of joints/chain mechanisms/sub-components of an object 2 (such as, for example, the state of extension or compression of one or more suspensions of a vehicle, the steering angle imparted to a vehicle by its driver or a geometrical indication of the opening or closing of a package).
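
As a hedged illustration, a status-parameter report of the kind the measuring means might transfer to the management unit 4 could be modelled as follows; every field name is an assumption drawn from the quantities just listed:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class StatusReport:
    object_id: str                   # the unique identification parameter
    timestamp_s: float               # time since the start of the sequence
    position: Tuple[float, float]    # absolute position in the environment (3)
    speed_mps: float                 # absolute linear speed along the trajectory
    acceleration_mps2: float         # linear/centrifugal acceleration
    engine_rpm: Optional[float] = None            # a translation/rotation speed
    suspension_travel_mm: Optional[float] = None  # stroke of a sub-component


report = StatusReport("car-46", 12.7, (120.4, 88.1), 61.2, 3.4, engine_rpm=11500.0)
```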

In order to use the status parameters in an innovative and original manner, the management unit 4 comprises suitable processing means, which:

- receive as input the status parameters of one or more objects 2 (and/or even the filming parameters of the video cameras, which can in effect be considered as "status parameters" for the video cameras themselves if they are considered as objects in the environment);

- generate a matrix of truth/probability values of statuses and/or events which can be related to the status parameters (and preferably, the matrix of truth/probability values can also be related to the filming parameters); and

- dynamically select and/or combine and/or vary the partial video shots for defining the command and control sequence (and, consequently, they can define the overall video flow or sequence which is generated by applying the command and control sequence to the video capturing elements 3a) as a function of a comparison between the truth/probability matrix values and a corresponding series of truth/probability reference values.

In other words, this invention is able to suitably control the video cameras 3a, processing the respective filming parameters and sending them in a selective and sequential manner, in such a way as to self-generate a series of shots which are always centred on those which are considered to be "significant events" that occur in the environment 3 and which involve one or more objects 2: the significance of the events is determined by the comparison of the truth/probability values, which in turn is performed on the basis of reference values which can be set in advance by an operator (for example, by a "director" who defines, case by case, the factors of greatest significance on which to instruct the network, by suitable preparation in advance of the reference values).
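
A minimal sketch of this truth/probability mechanism, assuming a race-coverage scenario: rows are tracked objects, columns are candidate events, and each entry is an estimated probability compared against an operator-set reference value. The estimation rules below are placeholder assumptions, not the patent's method:

```python
OBJECTS = ["car-46", "car-93"]
EVENTS = ["overtaking", "fastest_lap", "stationary"]

# Reference (threshold) values, e.g. prepared in advance by a "director".
REFERENCE = {"overtaking": 0.7, "fastest_lap": 0.9, "stationary": 0.8}


def build_matrix(status: dict) -> dict:
    """status: object id -> dict of status parameters.
    Returns (object id, event) -> probability in [0, 1]."""
    matrix = {}
    for obj, s in status.items():
        gap = s["gap_to_next_m"]
        matrix[obj, "overtaking"] = max(0.0, 1.0 - gap / 50.0)
        matrix[obj, "fastest_lap"] = 1.0 if s["last_lap_s"] < s["best_lap_s"] else 0.0
        matrix[obj, "stationary"] = 1.0 if s["speed"] < 1.0 else 0.0
    return matrix


def significant_events(matrix: dict) -> list:
    # The comparison with the reference values selects the "significant
    # events" on which the command and control sequence concentrates.
    return [(obj, ev) for (obj, ev), p in matrix.items() if p >= REFERENCE[ev]]


status = {
    "car-46": {"gap_to_next_m": 4.0, "last_lap_s": 92.1, "best_lap_s": 92.8, "speed": 80.0},
    "car-93": {"gap_to_next_m": 120.0, "last_lap_s": 93.5, "best_lap_s": 93.0, "speed": 0.3},
}
print(significant_events(build_matrix(status)))
# [('car-46', 'overtaking'), ('car-46', 'fastest_lap'), ('car-93', 'stationary')]
```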

According to the quality and/or quantity of reference values which can be selected by an operator (or which can be determined by the management unit 4 on the basis of self-learning methods), the network of devices 1 and the relative operating method ensure, on each occasion, a very accurate and "proportional" determination of the events and/or statuses which assume greater importance according to the application of the network 1 to specific types of environments or events: this makes it possible, for example, to package an overall video sequence of a sports event automatically centred on the important moments of the competition (overtakings, accidents, reporting fastest laps and so on) or to set up in advance a "visual tracking" of one or more packages in a logistics area.

If the command and control sequence is to be used for controlling objects 2, it is possible to implement suitable feedback means which can be directly activated on at least one object 2 (which in turn can be of a very variable nature, such as, for example, a vehicle or a multi-sensory interface which can be worn by a user) to vary positioning, dynamic or functional statuses as a function of the command and control sequence.

Structurally, the above-mentioned feedback means comprise actuators designed to vary the filming parameters of the video capturing elements 3a and/or the status parameters of the object 2 or, according to the type of objects 2 to be controlled, suitable actuators designed to display operating instructions to a user which can be associated with the object 2 (for example, head-up display type units which can be displayed on a windscreen of a vehicle, or movement and/or tactile feedback servo-mechanisms nested in articles of clothing or in suitable user interfaces): these operating instructions can be visual or of a multi-sensory type.

As a further operational alternative which may be requested from the network 1, the feedback means can also be active on operational sub-units of the object 2 (for example, on a control unit of the motor and/or of the vehicle braking/stability control system).

By way of an example of the operational possibilities of this network of devices 1, it should be noted how this can "cover" a sports event: in this case, the events or statuses of greatest importance can comprise (but not be limited to) the times/speeds of the athletes and/or of the vehicles along the track, any situation of an athlete/vehicle coming close to another or the "undisturbed" permanence of a particular athlete/vehicle in a position in a race or along the track (but they can also be related to possible situations linked to mechanical faults or physiological problems, adequately monitored and measured, as well as various environmental conditions such as, for example, a change in lighting or the occurrence of different weather conditions).

As a second example, considering, on the other hand, that the network of devices 1 operates inside a logistics space, the events or statuses of greatest importance can comprise (but not be limited to) the presence of one or more packages in determined positions (incoming goods area, storage area, outgoing area and so on), the weight of the packages, the condition of the surfaces of the packages, any accelerations related to falls or accidental movements of the packages, the mutual proximity of several packages to be delivered to a single recipient or delivered by a single courier and so on.

When the overall video sequence has been generated, and adequately edited as a function of the choices of the management unit 4, it is conveniently possible to transfer it to suitable display means 5, which are designed to provide the user (in real time or on demand, according to the requirements at the time) with an overall video sequence generated by applying the command and control sequence to the video capturing elements 3a: technically, these display means 5 can comprise any type of multimedia terminal (a screen, a video-projector or even a portable device such as a tablet, smartphone and the like) and/or a data storage unit, which can be associated with the multimedia terminal in such a way as to store the command and control sequence and/or the overall video sequence generated.

The implementation of the display means can therefore take into account various operational aims: for example, if the overall video sequence relates to a sports event, it can be immediately broadcast, whilst for events of an amateur nature the "dedicated" film can be displayed to a possible purchaser (who, in the meanwhile, has been actively involved in the "movement" of a relative object...or of the person himself/herself...in the environment); moreover, monitoring and security films can be constructed relative to the movement or permanence of one or more "critical" packages or packs, if they must be viewed for checking any theft, tampering or removal of the packages/packs.

This invention also relates to an innovative and original method for managing and controlling an object 2 positioned in an environment 3: the method can be conveniently implemented using a network of devices preferably as described up to here (and/or as claimed below) and basically comprising the following steps:

- taking a predetermined number of partial video shots which can be associated with a presence of one or more objects 2 and/or at least one event which can be related to the same object(s) 2, checking that at least one object 2 can be found in a field of vision 3b of a video capturing element 3a in the environment 3; and

- forming a command and control sequence which can be sent to at least one object 2 and/or at least one video capturing element 3a.

Advantageously, the above-mentioned step of forming the command and control sequence comprises a sub-step of identifying the object 2 uniquely, and preferably in an automatic fashion, irrespective of its position and/or orientation and/or movement and/or visibility in the environment or in the field of vision 3b.

For a greater functional completeness, the unique identification may be done on each object 2 present in the environment 3, always independently of the position, orientation, movement and/or visibility in the environment or in the partial video shots generated by the video capturing elements 3a.

Other operational sub-steps can be associated with the above-mentioned unique identification step, the final aim of which can be, for example, that of generating an overall video flow or sequence which can be considered "pre-edited" or already composed at "director" level: these sub-steps basically comprise:

- a sub-step for measuring/detecting status parameters of at least one and preferably each object 2 in the environment 3 (or also for measuring/detecting filming parameters of at least one video capturing element 3a in the environment 3);

- a sub-step for generating a matrix of truth/probability values of statuses and/or events which can be at least related to the status parameters detected/measured (or also related to the filming parameters); and

- a sub-step for defining the command and control sequence as a function of at least one comparison between the truth/probability matrix values and a corresponding series of truth/probability reference values.

As already mentioned, according to the choice of one or more status parameters, and, consequently, according to the qualitative determination of the truth/probability values which make up the choice matrix, it is possible to set up a series of editing/composition/direction criteria which are automatically applied to the partial video shots, and which therefore translate into the overall composition of a video flow or sequence.

For example, in a situation wherein the environment 3 to be filmed comprises a route (whether it is a closed circuit or an "open" path with a start and an end separated in space) occupied and/or which can be passed along by a predetermined number of persons and/or vehicles (which can be moving or stationary, for example due to faults or other reasons), the status parameters can conveniently comprise:

- one or more relative cardinal or ordinal positions of persons and/or vehicles with respect to a direction of travel along the route and/or with respect to an initial instant of time of occupation of the route;

- speeds and/or accelerations, absolute and/or relative to the persons and/or vehicles with respect to the route;

- absolute distances of the persons and/or of the vehicles in the route with respect to an initial point of starting travel along the route;

- relative spatial or temporal distances between the persons and/or between the vehicles present in the route;

- average or absolute travel times, preferably classified in quantitative order starting from a minimum time, relative to the travel along one or more turns performed on the route by the persons and/or the vehicles;

- physical/structural/functional parameters of the persons and/or of the vehicles in the route and/or of their structural or functional parts or sub-units;

- proximity or positional correspondence of the persons and/or of the vehicles in positions or predetermined portions of the route; and/or

- environmental factors (lighting, weather conditions, opening or closing of possible alternative passages, variations to the road surface conditions and so on) relative to one or more points or portions of the route.

Given the status parameters described by way of example above, this method can conveniently define/compose a command and control sequence which, after being sent to the video capturing elements 3a, defines an "overall" video sequence defined by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements 3a: in other words, the video capturing elements 3a are controlled by the command and control sequence as a function of positions and/or events related to at least one object 2 in the environment in which the network of devices 1 operates.

By a suitable use of the feedback means, this method also offers an additional function where there is the need to control a user or a vehicle moving on a given route (whether it is a track for a sports competition or a suitable path in a civil/urban area): the method can generate suitable correction or control instructions, which can be processed by the management unit 4 and transferred, by a simple reverse transmission or by a feedback process, to at least one person and/or a vehicle in the route as a function of the measurement of suitable "additional status parameters".

In other words, this method can be used to functionally integrate the video flow or sequence with corrective instructions on the moving object, which originate from both the real time analysis of the status parameters of the object and the analysis of the video shots: this allows, for example, corrective instructions on the driving of a vehicle to be given in order to increase the "performance" or in order to reduce the mechanical wear during a competition...or also, in contexts other than sport/competitions, it allows information to be given to a user on the possibilities of moving safely or stopping close to dangerous areas/situations inside a given area.

Looking in more detail, it can be seen that in order to optimise a driving performance and/or mechanical performance of a vehicle (or of a person engaged in a competition), the additional status parameters can be related to feedback, correction or control instructions and can, for example, comprise:

- a deviation of the person or of the vehicle from an optimum predetermined position (in other words, a deviation from the so-called "ideal trajectory") or from a predetermined optimum passage time or from a predetermined optimum speed at one or more predetermined points on the route; and/or

- a relative or absolute speed of rotation value of at least one wheel of the vehicle with respect to a relative or absolute speed of rotation reference value of the at least one wheel related to one or more predetermined points on the route (in such a way as to identify any slipping situations of one or more wheels of a vehicle); and/or

- one or more values which can be related to an actuation of one or more commands of the vehicle with respect to reference actuation values related to one or more predetermined points on the route (in such a way as to assess the actual effectiveness of steering, braking or power discharge commands on the drive wheels imparted by the driver to a vehicle as a function of the adhesion to the "ideal trajectory" or depending on the state of wear of the tyres).
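
By way of a hedged example of the second additional status parameter, wheel slip can be flagged by comparing a measured rotation speed against the reference value expected at a given point of the route; the threshold and reference data below are illustrative assumptions:

```python
def wheel_slip_detected(measured_rpm: float, reference_rpm: float,
                        tolerance: float = 0.10) -> bool:
    """True when the wheel spins measurably faster than the reference
    value for this point of the route (possible loss of adhesion)."""
    return measured_rpm > reference_rpm * (1.0 + tolerance)


# Reference rotation speeds recorded for predetermined points on the route.
REFERENCE_RPM = {"turn_3_apex": 850.0, "main_straight": 2400.0}

if wheel_slip_detected(990.0, REFERENCE_RPM["turn_3_apex"]):
    print("correction instruction: reduce throttle at turn 3")
```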

It must therefore be noted that, in the above-mentioned example, an automated driving instruction function can be implemented, which automatically generates corrective information displaying it in real time (or, more generally, sending it to the driver in the form of visual or multi-sensory stimuli) as a function of the monitoring of the driving of the vehicle relative to the "critical points" of the circuit/track that is being travelled on.

If, on the other hand, this method is applied in logistics contexts of stores or stations for the interchange of goods (or even passengers!), the status parameters can be conveniently selected at least from the following:

- a quantity of objects (or persons) contained entirely in the logistics area;

- an absolute position of an object/person or of a group of objects/persons in the logistics area;

- a relative position of an object/person with respect to another object/person in the logistics area;

- a weight or a quantity of goods contained in an object (or the weight of a person, if "connected" to baggage or particular clothing or accessories) in the logistics area; and/or

- one or more parameters which can be related to a movement (presence or concealment with respect to one or more of the fields of vision 3b, entering or leaving adjacent or non-adjacent fields of vision and so on) of one or more objects in the logistics area; and/or

- a parameter which can be related to an open or closed status of an object in the logistics area, the object being preferably a package; and/or

- a position and/or a movement and/or a presence in a field of vision 3b of a video capturing element 3a of an object/person not associated with a unique identification parameter (for example, a so-called "undesired operator" or in any case one not authorised in the logistics area).

In the logistics example described above, the command and control sequence therefore defines a video sequence given by a combination and/or selection and/or variation of filming parameters of one or more video capturing elements 3a which "track", and therefore keep under surveillance, the movement of one or more "critical" objects (whether they are packs/packages that are particularly important or damaged or are to undergo special operations, or operators authorised or unauthorised to operate in the logistics area); moreover, the command and control sequence can define instructions for moving/shifting or staying in position of at least one object 2 (one or more packages...or even one or more vehicles or movement mechanisms designed to manipulate the packages themselves!) in the logistics area.
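A minimal, purely illustrative sketch of this logistics behaviour follows; the identifier set, the sighting format and the command vocabulary are all hypothetical.

```python
# Objects without a unique identification parameter, or flagged as "critical",
# are tracked by the sighting camera; a hold instruction is sent to critical packages.
KNOWN_IDS = {"PKG-001", "PKG-002", "OP-07"}   # assigned unique identification parameters

def logistics_commands(sightings):
    """sightings: list of (object_id_or_None, camera_id, is_critical) tuples."""
    commands = []
    for obj_id, cam_id, is_critical in sightings:
        if obj_id is None or obj_id not in KNOWN_IDS:
            # Undesired/unauthorised presence: keep it under surveillance.
            commands.append((cam_id, "TRACK", "unidentified"))
        elif is_critical:
            commands.append((cam_id, "TRACK", obj_id))
            commands.append((obj_id, "HOLD_POSITION", None))
    return commands

print(logistics_commands([("PKG-001", "cam-3", True), (None, "cam-5", False)]))
```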

A further application example of this invention, both at the level of the network of devices 1 and at the level of the management and control method, is a civil/urban area used for the standing and/or passage of users and/or vehicles and preferably provided with a plurality of filming and/or surveillance video cameras (for example, an urban area with surveillance video cameras pointing at various points which are potentially dangerous or where there is intensive passage of persons and vehicles): in this work environment, the possible status parameters comprise:

- a quantity of objects contained entirely in the civil/urban area; and/or

- an absolute position of an object or of a group of objects in the civil/urban area; and/or

- a relative position of an object with respect to another object in the civil/urban area; and/or

- one or more parameters which can be related to one or more users and/or vehicles in the civil/urban area; and/or

- a parameter which can be related to an intersection or a geometrical/topological approach between trajectories of movement of two or more users and/or vehicles inside the civil/urban area; and/or

- a position and/or a movement and/or a presence in a field of vision 3b of a video capturing element 3a of a user and/or of a vehicle not associated with a unique identification parameter (for example, an undesired or unexpected vehicle inside the civil/urban area).

Operating on the parameters listed above, this invention can generate a command and control sequence and in turn an overall video sequence such as to keep the most dangerous points of the area monitored and to display/combine, at filming level, any acts or events violating security/safety conditions (accidents, possible collisions between persons and vehicles at pedestrian road crossings or due to the loss of control/driving position of the vehicles, criminal events and so on), and/or to define instructions for moving/shifting or staying in position of at least one user inside the civil/urban area.
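The "geometrical/topological approach" parameter mentioned in the list above can, for example, be estimated as the minimum future distance between two users/vehicles assumed to move at constant velocity; the following sketch, with an illustrative 2 m threshold, is only one possible formulation.

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Minimum future distance between two points moving at constant velocities."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    vv = vx * vx + vy * vy
    # Time of closest approach, clamped to the future (t >= 0).
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    return math.hypot(rx + vx * t, ry + vy * t)

# Pedestrian and vehicle on converging paths: distance shrinks below 2 m.
if closest_approach((0, 0), (1.2, 0), (30, -5), (-1.0, 0.4)) < 2.0:
    print("possible collision: activate cameras and send a warning")
```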

With particular reference to the last operational possibility described in this example, it can be seen how this invention is advantageously able to perform almost "autonomous" guidance functions for particular types of users of the civil/urban area: for example, in the case of persons with reduced mobility or with sensory handicaps, the command and control sequence can define and send instructions for moving/shifting or staying in position which take into account, in real time, dangerous situations (or the possibility of safe movement!), which are perceived by the user through an object 2 comprising a multi-sensory interface.

In all the examples described so far, this method can therefore provide a step for displaying the video flow or sequence by an operator on a multimedia terminal: as already seen at the level of the network of devices, the displaying step can be accompanied by a step for storing the overall video flow or sequence in a data storage unit.

In conjunction with the displaying of an overall video flow or sequence, there can also be a step, possibly automated (or governed by arbitrary parameters which can be assigned by an operator), for selecting the video sequence or "groupings" of one or more of the above-mentioned partial video shots: the selecting step can occur before the step for transmitting/displaying the video sequence and/or any transmitting/displaying of the groupings of partial video shots.

Due to the hardware and the operational possibilities at the method level described so far, this invention is able to provide various function levels in addition to those already listed: for example, it is possible to implement a so-called "shared system", wherein during an event various agencies or persons "owning" stand-alone filming devices (for example, a community of smartphone users equipped with video cameras) interconnect with each other, for example on the basis of shared software installed on the various personal devices: due to the interaction between the single devices and the most common methods of geo-localisation (using GPS mapping and/or localisation based on the cells of the mobile phone network), the network 1 may be formed in an extremely fast manner and at extremely low cost, thus being able to provide video coverage of the event in which the owners (spectators or "protagonists") of the single portable personal devices with video filming capacity are participating.
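As an illustrative sketch of how such a "shared system" could form the network 1 from geo-localised personal devices, the following assumes GPS fixes and a simple great-circle distance test against an event radius; all names and the 1 km radius are assumptions made for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def form_network(devices, event_lat, event_lon, radius_km=1.0):
    """devices: list of (device_id, lat, lon); return those joining the network 1."""
    return [d for d in devices
            if haversine_km(d[1], d[2], event_lat, event_lon) <= radius_km]

phones = [("phone-A", 45.464, 9.190), ("phone-B", 45.480, 9.250)]
print(form_network(phones, 45.464, 9.191))   # only phone-A is close enough
```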

Moreover, so as to give maximum operational flexibility to the network 1 and to the management and control method, it is possible to qualify or quantify status parameters indicative of subjective or statistical preference during the determination of the reference values for the levels of truth/probability: for example, it is possible to quantify a particular "filming position" in an area which is voted for by a majority of users, or to indicate "preferred" objects in terms of statistical ratings linked to the probability of winning a sports event, the probability of executing an optimum or "below standard" performance and so on.

Also, this invention can provide, both at hardware and at method level, the possibility of the network of devices 1 self-learning an optimum command and control sequence for each type of event which can be followed by the network: this self-learning can, for example, consist in storing various combinations and/or selections and/or variations of filming parameters relative to several events separate in time (for example, motor races held at different times on the same circuit) which, by subsequent suitable comparison with the corresponding truth/probability values, are classified in a hierarchical manner in terms of greater or lesser "efficiency" in following and recording/filming the events actually occurring in a particular environment.

Thanks to this succession of supplementary steps, the network of devices can provide a database of preferable command and control sequences (or, in other words, a database of "event filming profiles"), which can be applied in a preferential manner according to the type of events/environments.
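One possible, purely illustrative way to build such a database of "event filming profiles" is sketched below: each stored sequence is scored by the share of high truth/probability events it actually filmed, and the profiles are ranked accordingly. The 0.7 threshold and the scoring rule are assumptions, not part of the method as claimed.

```python
def rank_profiles(profiles):
    """profiles: {name: list of (truth_probability, was_filmed) pairs}.
    Efficiency = share of high-probability events that were actually filmed."""
    scores = {}
    for name, outcomes in profiles.items():
        relevant = [filmed for prob, filmed in outcomes if prob > 0.7]
        scores[name] = sum(relevant) / len(relevant) if relevant else 0.0
    # Hierarchical classification: most "efficient" profile first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

history = {
    "race-2012": [(0.9, True), (0.8, False), (0.95, True)],
    "race-2013": [(0.85, True), (0.9, True), (0.75, True)],
}
print(rank_profiles(history))   # race-2013 ranks first (3 of 3 events filmed)
```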

If, on the other hand, partial video shots are to be provided efficiently and flexibly (for example, to assist an operator by indicating video events which are more worthy of attention), the displaying step can be actuated in different ways: it is possible to show on screen a list, arranged in order of interest, of the various events filmed at different points of the environment, indicating them (either by additional graphic elements processed by the management unit 4 or by visual means such as, for example, a larger size of the video "window") according to the status parameters which have been adjudged more important (considering, for example, the possibility of avoiding those undesirable situations in which a television commentator continues to talk about matters of little importance when the public is aware that an event worthy of more attention is occurring).

The displaying of the partial video shots can also be done on a single screen, according to a "mosaic" video composition.
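A minimal sketch of this displaying step, with illustrative importance scores and a two-size mosaic, could take the following form; the scores and window sizes are assumptions for the example.

```python
def order_shots(shots):
    """shots: list of (shot_id, importance_score); highest score first."""
    return sorted(shots, key=lambda s: s[1], reverse=True)

def mosaic_layout(ordered_shots):
    """Give the most important shot a large window, the rest small ones."""
    layout = []
    for rank, (shot_id, score) in enumerate(ordered_shots):
        size = "large" if rank == 0 else "small"
        layout.append((shot_id, size, score))
    return layout

shots = [("cam-2", 0.4), ("cam-7", 0.9), ("cam-5", 0.6)]
print(mosaic_layout(order_shots(shots)))   # cam-7 is shown in the large window
```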

With regard to the means of managing the partial video shots, the network of devices 1 can be implemented, using currently available information technology systems, in such a way as to guarantee an extremely high degree of data security and redundancy of the transceiver functions: for example, it is possible to store all the data relative to the partial video shots (or the status parameters linked to the objects 2) "locally", that is, using suitable storage means which can be associated with the video capturing elements 3a or with the objects 2: the stored data can then be transmitted "on condition", for example after predetermined intervals of time or as the objects 2 pass/come close to a suitable receiving device (for example, the passage of the vehicles in the "pit lane").

Again from the point of view of redundancy, it is possible to implement a so-called "zonal distribution", wherein video cameras inside and outside a predetermined environment can cooperate with each other without necessarily communicating with the management unit 4: in the case of interruption of the data flow (which will typically be transmitted in wireless mode), various processing sub-units located in each "zone" into which the environment is subdivided will receive the data generated locally, store it "locally" and send it as soon as possible to the management unit 4, either as a data packet or more simply as a system hyperlink.
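The "on condition" local storage and transmission described above could, purely as a sketch, take the following form; the flush interval, the record format and the class name are assumptions.

```python
import time

class LocalBuffer:
    """Buffers records locally; flushes to the management unit 4 'on condition'."""
    def __init__(self, flush_interval_s=60.0):
        self.records = []
        self.flush_interval_s = flush_interval_s
        self.last_flush = time.monotonic()

    def store(self, record):
        self.records.append(record)

    def maybe_flush(self, near_receiver: bool):
        """Transmit when close to a receiving device or after the interval."""
        due = time.monotonic() - self.last_flush >= self.flush_interval_s
        if near_receiver or due:
            sent, self.records = self.records, []
            self.last_flush = time.monotonic()
            return sent            # would be handed over to the management unit 4
        return []

buf = LocalBuffer()
buf.store({"shot": "cam-9", "t": 12.3})
print(buf.maybe_flush(near_receiver=True))   # flushes on the pit-lane passage
```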

From the point of view of selective management, and the automatic assignment of a level of preference of one partial video shot relative to another, it is also possible to implement a capacity for self-suspension of films in replay: if during the re-presentation of an event (in replay) another event is adjudged, in real time, to be "of greater importance" relative to that of the replay, the network 1 can automatically suspend the video on demand to move immediately to the more important display...if necessary completing the re-presentation in replay upon completion of the more important event.

However, if an event in the environment is classified as extremely important (and this can happen, for example, if the truth/probability values associated by the management unit 4 with this event are much higher than certain threshold values and/or are high for a minimum number of different single parameters), it is possible to implement a direct command and control sequence towards several video capturing elements 3a which will film the event simultaneously and from different angles (and/or points of view), thus maximising usability: for example, in the case of a particularly exciting overtaking manoeuvre, the management unit 4 can simultaneously activate video cameras fixed to the sides of the track, mobile video cameras (for example, those mounted on helicopters or drones) and video cameras on board the vehicles involved in the manoeuvre, whilst keeping the other video cameras in "normal" working conditions.
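The threshold rule described above (truth/probability values much higher than certain thresholds and/or high for a minimum number of different single parameters) can be sketched as follows; the 0.8 threshold and the minimum of three parameters are illustrative values only.

```python
def is_extremely_important(values, threshold=0.8, min_parameters=3):
    """values: one truth/probability value per single status parameter."""
    return sum(1 for v in values if v > threshold) >= min_parameters

def react(event_values, cameras_on_event):
    """Activate every camera covering the event and suspend any running replay."""
    if is_extremely_important(event_values):
        return ([(cam, "FILM_EVENT") for cam in cameras_on_event]
                + [("replay", "SUSPEND")])
    return []

# An overtaking manoeuvre scores high on three parameters: trackside, drone and
# on-board cameras are activated at once and the running replay is suspended.
print(react([0.95, 0.9, 0.85, 0.4], ["trackside-4", "drone-1", "onboard-22"]))
```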

On the basis of values which can be set by a user (whether it is a "director" or a composer of the overall video sequence), this invention can also generate overall video sequences with different levels of completeness and/or variations in the filming parameters: the overall video sequences can then be released, for example after payment of different prices, to various end users (paying spectators, television networks and so on), thus offering the possibility of having multiple outputs with the "direction" at various degrees of development and content according to the end user.

The invention achieves interesting advantages.

Firstly, it should be noted that the basic constructional architecture of the network of devices can be adapted to any type of "environment", whether it is one or more rooms/closed spaces designed for multiple uses (logistics structures, spaces for the movement and/or waiting of passengers or persons and so on) or an open-air environment with a closed or open route (for example, a circuit or a network of interconnected communication paths): in all these types of environment, the network is able to considerably increase the degree of automation in the making of video sequences which temporally cover the events in the filming environment and which at the same time can be constructed in a very fast and dynamic manner in terms of filming transitions, changes of filming points, variations in the optical filming conditions (lighting, visual effects, focussing and so on) and tracking of one or more moving objects.

Moreover, it should be noted that this invention can be implemented at widely varying levels of completeness, passing, for example, from the creation and management of a video film (dynamically variable in terms of the qualitative and/or quantitative parameters referred to in the previous paragraph) of a single object in a relatively simple environment from the topological/volumetric point of view to the creation and management of a video film of several objects in very large environments or with a very high degree of interconnection between areas which are very different to each other: also, the arrangement of the video filming means in the network of devices can be such as to comprise mobile video cameras, even and especially mounted on the moving objects, without losing anything in terms of variety in the composition or in the "cinema/television editing" of the video sequences.

Lastly, it should be noted that this invention is able to integrate, at a functional level, into macro-systems which, as well as mere video filming (and in addition to the creation of video sequences composed and edited with a wide dynamic variation of filming parameters), can offer considerable and extremely wide-ranging additional functions, such as, for example, the possibility of implementing real-time security measures (for example, closing passages and/or directing the objects along certain preferential directions) in perfect correspondence with the automatic detection of particularly critical events...or also the possibility of implementing an advanced telemetry service as well as teaching the driving of vehicles, which takes into account both the on-board view of the pupil and the passages of the vehicle with respect to ideal points of travel as visible from the video cameras situated on the route.