

Title:
DEVICE AND METHOD FOR DETECTING A LEVEL CROSSING BARRIER
Document Type and Number:
WIPO Patent Application WO/2023/126316
Kind Code:
A1
Abstract:
There is provided a detection device (100) configured to detect, in a 2D image of a first scene acquired by a first imaging module (111), a set of pixels representing a barrier (200) of a level crossing (40), and to synthetize a virtual 2D image from a set of points of a 3D cloud of points of a second scene acquired by a second imaging module (112), the first and the second scene comprising the barrier (200). The virtual 2D image comprises a 2D arrangement of pixels and is synthetized according to a first plurality of acquisition parameters so that a plurality of pixels of the virtual 2D image corresponds to the set of pixels of the 2D image representing the barrier (200). The 2D image is merged with the virtual 2D image to provide an enhanced 2D image from which the parameters of the barrier (200) are determined.

Inventors:
LAVIRON PHILIPPE (FR)
ZHANG ALEX (CN)
Application Number:
PCT/EP2022/087545
Publication Date:
July 06, 2023
Filing Date:
December 22, 2022
Assignee:
THALES SA (FR)
THALES CHINA ENTERPRISES MAN CO LTD (CN)
International Classes:
G06V20/58; G06V10/80; G08B13/00; G08G1/00
Other References:
ZHAO GUANGLIANG ET AL: "A Video Analytic System for Rail Crossing Point Protection", 2021 17TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), IEEE, 16 November 2021 (2021-11-16), pages 1 - 8, XP033997527, DOI: 10.1109/AVSS52988.2021.9663781
Attorney, Agent or Firm:
ATOUT PI LAPLACE et al. (FR)
Claims:
CLAIMS

1. A detection device (100) for determining one or more parameters of a barrier (200) of a level crossing (40), comprising:

- an imaging unit (110), the imaging unit (110) comprising a first imaging module (111) configured to acquire two-dimensional (2D) images of a first scene according to a first plurality of acquisition parameters, the first imaging module (111) storing intrinsic and extrinsic parameters, each of the 2D images comprising a 2D arrangement of pixels, the imaging unit (110) further comprising a second imaging module (112) configured to acquire a three-dimensional (3D) cloud of points of a second scene according to a second plurality of acquisition parameters, the first and the second scene comprising the barrier (200) of the level crossing (40);

- a database (130) configured to store a position of the barrier (200) of the level crossing (40) in a given coordinate system;

- a processing unit (120) configured to:
o receive a 2D image and a 3D cloud of points simultaneously acquired by the first imaging module (111) and by the second imaging module (112), respectively;
o detect in the 2D image a set of pixels representing the barrier (200) of the level crossing (40);
o detect the barrier (200) from the 3D cloud of points and from the data stored in said database;
o synthetize a virtual 2D image from a set of points of the 3D cloud of points using the intrinsic and extrinsic parameters of the first imaging module (111), the virtual 2D image comprising a 2D arrangement of pixels and being synthetized according to the first plurality of acquisition parameters so that a plurality of pixels of the virtual 2D image corresponds to the set of pixels of the 2D image representing the barrier (200) of the level crossing (40);
o merge the 2D image with the virtual 2D image, which provides an enhanced 2D image;
o determine one or more parameters of the barrier (200) of the level crossing (40) from the enhanced 2D image and from the data stored in said database.

2. The detection device (100) of claim 1, wherein the database (130) is further configured to store coordinates of a 3D barrier (200) movement envelope within which the barrier (200) of the level crossing (40) moves, the processing unit (120) being configured to synthetize the virtual 2D image from the set of points of the 3D cloud of points that are located inside the 3D barrier (200) movement envelope.

3. The detection device (100) as claimed in one of the preceding claims, wherein the processing unit (120) is configured to synthetize the virtual 2D image from the set of points of the 3D cloud of points, the synthetization of the virtual 2D image comprising:

- A projection of the set of points of the 3D cloud of points on a 2D plane representing a focal plane of the first imaging module (111);

- A removal from the projected points of one or more points representing occluded objects with respect to the first imaging module (111);

- An interpolation of the remaining projected points to synthetize the virtual 2D image, each pixel of the virtual 2D image corresponding to a pixel of the 2D image.

4. The detection device (100) as claimed in one of the preceding claims, wherein the processing unit (120) is further configured to:

- synthetize a virtual 3D cloud of points from the 2D image acquired by the first imaging module (111), the virtual 3D cloud of points being synthetized according to the second plurality of acquisition parameters;

- merge the 3D cloud of points with the virtual 3D cloud of points, thus obtaining an enhanced 3D cloud of points;

- determine one or more parameters of the barrier (200) of the level crossing (40) from the enhanced 3D cloud of points.

5. The detection device (100) as claimed in one of the preceding claims, wherein the first imaging module (111) is configured to acquire the 2D images in the visible portion of the electromagnetic spectrum.

6. The detection device (100) as claimed in one of the preceding claims, wherein a color is assigned to each point of the 3D cloud of points depending on one or more assigned values.

7. The detection device (100) as claimed in one of the preceding claims, wherein the one or more parameters of the barrier (200) of the level crossing (40) comprise a barrier (200) position parameter, the processing unit (120) being configured to determine the barrier (200) position parameter by measuring an angle between a current position of the barrier (200) and a reference position of the barrier (200).

8. The detection device (100) as claimed in claim 7, wherein the one or more parameters of the barrier (200) of the level crossing (40) further comprise a barrier (200) movement parameter, the processing unit (120) being configured to determine the barrier (200) movement parameter by comparing the current position of the barrier (200) to at least one previous position of the barrier (200) with reference to time.

9. The detection device (100) as claimed in claims 2 and 7, wherein the one or more parameters of the barrier (200) of the level crossing (40) further comprise an integrity parameter, the processing unit (120) being configured to determine the integrity parameter by comparing a length of the barrier (200) as measured within the 3D barrier (200) movement envelope with a length of the barrier (200) as measured without considering the 3D barrier (200) movement envelope.

10. The detection device (100) as claimed in one of the preceding claims, wherein the processing unit (120) is configured to detect in the 2D image the set of pixels representing the barrier (200) of the level crossing (40) by applying a machine learning algorithm on the pixels of the 2D image.

11. A method for determining one or more parameters of a barrier (200) of a level crossing (40), the method comprising the steps of:

- acquiring (301) two-dimensional (2D) images of a first scene according to a first plurality of acquisition parameters, each of the 2D images comprising a 2D arrangement of pixels, said step of acquiring 2D images further comprising storing intrinsic and extrinsic parameters, and acquiring a three-dimensional (3D) cloud of points of a second scene according to a second plurality of acquisition parameters, the first and the second scene comprising the barrier (200) of the level crossing (40);

- storing, in a database, a position of the barrier (200) of the level crossing (40) in a given coordinate system;

- detecting (302) in the 2D image a set of pixels representing the barrier (200) of the level crossing (40);

- detecting the barrier (200) from the 3D cloud of points and from the data stored in said database;

- synthetizing (303) a virtual 2D image from a set of points of the 3D cloud of points using said intrinsic and extrinsic parameters, the virtual 2D image comprising a 2D arrangement of pixels and being synthetized according to the first plurality of acquisition parameters so that a plurality of pixels of the virtual 2D image corresponds to the set of pixels of the 2D image representing the barrier (200) of the level crossing (40);

- merging (304) the 2D image with the virtual 2D image, which provides an enhanced 2D image; and

- determining (305) one or more parameters of the barrier (200) of the level crossing (40) from the enhanced 2D image and from the data stored in said database.

Description:
DEVICE AND METHOD FOR DETECTING A LEVEL CROSSING BARRIER

TECHNICAL FIELD

The invention generally relates to monitoring systems and in particular to a device and a method for detecting the status of a level crossing barrier.

BACKGROUND

The deployment of railway networks has grown rapidly over the last two decades, particularly in countries experiencing significant social and/or economic progress. In order to achieve coexistence between a railway network and other types of transport networks, e.g. road networks, level crossings are widely deployed. A level crossing allows a railway line of a railway network to cross a road or a path belonging to another transport network. The use of a level crossing to achieve such coexistence is cost effective when compared to other solutions consisting, for example, in building an overpass or a tunnel. In order to achieve a certain level of safety, a level crossing is generally equipped with a number of rotating barriers configured to control the access to the level crossing for the vehicles of one of the involved transport networks. For example, the barriers of a level crossing may be used to block the access to the level crossing for the vehicles of a road network when a control unit detects an approaching railway vehicle running on the tracks of the railway network. However, a malfunction may occur in a level crossing, for example when the barriers of the level crossing are damaged and/or when the control unit ruling the rotational movement of the barriers is faulty. Thus, there is a need for permanent monitoring of the barriers of a level crossing.

A known solution allowing permanent monitoring of a level crossing consists in deploying one or more cameras providing images in the visible portion of the wavelength spectrum. However, the performance of such a solution is strongly dependent on the lighting conditions, and the monitoring integrity (safety level) cannot be demonstrated. For example, total darkness at night may prevent the deployed cameras from providing usable images of the level crossing, making it impossible to monitor the level crossing. To overcome these limitations, it is known to equip the level crossing with light sources in order to provide the lighting required to take usable images in the visible portion of the wavelength spectrum. However, the deployment of light sources entails additional costs related to power consumption and maintenance work, and the monitoring integrity (safety level) demonstration cannot be achieved either.

Another known solution to monitor a level crossing relies on mechanical systems to check the status of the barriers of the level crossing. Such mechanical systems include, for example and without limitation, inclinometers fitted on the barrier and/or encoders attached to the level crossing (LX) barrier axis. However, this solution is not compatible with the detection of the barrier integrity (completeness) status.

Thus, there is a need for an enhanced device for monitoring a level crossing barrier.

SUMMARY

To address these and other problems, there is provided a detection device for determining one or more parameters of a barrier of a level crossing, comprising:

- an imaging unit, the imaging unit comprising a first imaging module configured to acquire two-dimensional (2D) images of a first scene according to a first plurality of acquisition parameters, the first imaging module storing intrinsic and extrinsic parameters, each of the 2D images comprising a 2D arrangement of pixels, the imaging unit further comprising a second imaging module configured to acquire a three-dimensional (3D) cloud of points of a second scene according to a second plurality of acquisition parameters, the first and the second scene comprising the barrier of the level crossing;

- a database configured to store a position of the barrier of the level crossing in a given coordinate system;

- a processing unit configured to:

■ receive a 2D image and a 3D cloud of points simultaneously acquired by the first and by the second imaging module, respectively;

■ detect in the 2D image a set of pixels representing the barrier of the level crossing;

■ synthetize a virtual 2D image from a set of points of the 3D cloud of points using the intrinsic and extrinsic parameters of the first imaging module, the virtual 2D image comprising a 2D arrangement of pixels and being synthetized according to the first plurality of acquisition parameters so that a plurality of pixels of the virtual 2D image corresponds to the set of pixels of the 2D image representing the barrier of the level crossing;

■ merge the 2D image with the virtual 2D image, which provides an enhanced 2D image;

■ determine one or more parameters of the barrier of the level crossing from the enhanced 2D image.

In one embodiment, the database may be further configured to store coordinates of a 3D barrier movement envelope within which the barrier of the level crossing moves, the processing unit being configured to synthetize the virtual 2D image from the set of points of the 3D cloud of points that are located inside the 3D barrier movement envelope.

In some embodiments, the processing unit may be configured to synthetize the virtual 2D image from the set of points of the 3D cloud of points, the synthetization of the virtual 2D image comprising:

- A projection of the set of points of the 3D cloud of points on a 2D plane representing a focal plane of the first imaging module;

- A removal from the projected points of one or more points representing occluded objects with respect to the first imaging module;

- An interpolation of the remaining projected points to synthetize the virtual 2D image, each pixel of the virtual 2D image corresponding to a pixel of the 2D image.

In one embodiment, the processing unit may be further configured to:

- synthetize a virtual 3D cloud of points from the 2D image acquired by the first imaging module, the virtual 3D cloud of points being synthetized according to the second plurality of acquisition parameters;

- merge the 3D cloud of points with the virtual 3D cloud of points, thus obtaining an enhanced 3D cloud of points;

- determine one or more parameters of the barrier of the level crossing from the enhanced 3D cloud of points.

In one embodiment, the first imaging module may be configured to acquire the 2D images in the visible portion of the electromagnetic spectrum.

In some embodiments, a color may be assigned to each point of the 3D cloud of points depending on one or more assigned values.

The one or more parameters of the barrier of the level crossing may comprise a barrier position parameter, the processing unit being configured to determine the barrier position parameter by measuring an angle between a current position of the barrier and a reference position of the barrier.

In some embodiments, the one or more parameters of the barrier of the level crossing may further comprise a barrier movement parameter, the processing unit being configured to determine the barrier movement parameter by comparing the current position of the barrier to at least one previous position of the barrier with reference to time.

The one or more parameters of the barrier of the level crossing may further comprise an integrity parameter, the processing unit being configured to determine the integrity parameter by comparing a length of the barrier as measured within the 3D barrier movement envelope with a length of the barrier as measured without considering the 3D barrier movement envelope.

The processing unit may be configured to detect in the 2D image the set of pixels representing the barrier of the level crossing by applying a machine learning algorithm on the pixels of the 2D image.

There is further provided a method for determining one or more parameters of a barrier of a level crossing, the method comprising the steps of:

- acquiring two-dimensional (2D) images of a first scene according to a first plurality of acquisition parameters, each of the 2D images comprising a 2D arrangement of pixels, said step of acquiring 2D images further comprising storing intrinsic and extrinsic parameters, and acquiring a three-dimensional (3D) cloud of points of a second scene according to a second plurality of acquisition parameters, the first and the second scene comprising the barrier of the level crossing;

- storing a position of the barrier of the level crossing in a given coordinate system;

- detecting in the 2D image a set of pixels representing the barrier of the level crossing;

- synthetizing a virtual 2D image from a set of points of the 3D cloud of points using said intrinsic and extrinsic parameters, the virtual 2D image comprising a 2D arrangement of pixels and being synthetized according to the first plurality of acquisition parameters so that a plurality of pixels of the virtual 2D image corresponds to the set of pixels of the 2D image representing the barrier of the level crossing;

- merging the 2D image with the virtual 2D image, which provides an enhanced 2D image; and

- determining one or more parameters of the barrier of the level crossing from the enhanced 2D image.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate various embodiments of the invention and, together with the general description of the invention given above, and the detailed description of the embodiments given below, serve to explain the embodiments of the invention.

Figure 1 represents a monitoring system deployed in a level crossing, according to some embodiments of the invention;

Figure 2 schematically illustrates the structure of a detection device deployed in or near a level crossing, according to some embodiments of the invention; and

Figure 3 is a flowchart illustrating a method for determining one or more parameters of a barrier of a level crossing, in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

Figure 1 shows a monitoring system 10 deployed in a level crossing 40, according to some embodiments of the invention. The level crossing 40 represents an intersection area of two paths belonging to two different transport networks 20, 30. Examples of transport networks 20, 30 include, but are not limited to, railway networks, road networks and bikeway networks. The level crossing 40 may for example represent an intersection area where the tracks of a railway network cross a road or a path belonging to a road network.

The monitoring system 10 comprises one or more barriers 200 that may be used to control the access to the level crossing 40 for associated vehicles running on one of the involved transport networks 20, 30. Each of the barriers 200 may be subject to a rotational movement and be associated with three possible states with respect to the associated vehicles, comprising: a closed barrier state, an open barrier state, and a moving barrier state. In some embodiments, each barrier 200 may be further associated with an additional state corresponding to a non-integral state.

While a closed barrier blocks the access to the level crossing 40 for the associated vehicles, an open barrier allows the associated vehicles to cross the level crossing 40. A moving barrier refers to the state between the open state and the closed state. A barrier is in a non-integral state if the barrier has been degraded, for example vandalized or broken. The monitoring system 10 comprises a control unit 300 that is configured to control the state of one or more barriers 200 deployed in the level crossing 40. For example, the control unit 300 may be configured to change the state of a barrier 200 of a level crossing 40 from an open state to a closed state when a railway vehicle, e.g. a train or a subway, is approaching. In some embodiments of the invention, the level crossing 40 further comprises warning devices such as crossing lights and/or electric bells that may be activated before, during and/or after switching the state of one or more barriers 200 of the level crossing 40. In such embodiments, the control unit 300 is configured to activate the warning devices.

The monitoring system 10 further comprises a detection device 100 comprising one or more imaging modules. Each of the imaging modules has a field of view that may be static over time or adjustable. Advantageously, the field of view of each imaging module may be chosen so as to encompass one or more barriers 200 of the level crossing 40. For example, each imaging module of the detection device 100 may be configured so that its corresponding field of view encompasses all the barriers 200 of the level crossing 40. Alternatively, one or more imaging modules of the detection device 100 may be configured so that their corresponding fields of view encompass, at a given time, a single barrier 200 of the level crossing 40. Further, the detection device 100 is configured to determine one or more parameters of each barrier 200 of the level crossing 40 by using data acquired by means of the imaging modules.

In some embodiments of the invention, the monitoring system 10 further comprises one or more sources of light that may be controlled by the detection device 100. For example, each source of light may be associated with one imaging unit 110 and configured to illuminate the field of view of the corresponding imaging unit 110. Further, each source of light may be activated only to allow acquiring data by the corresponding imaging unit 110.

Figure 2 schematically illustrates the structure of a detection device 100 deployed in or near a level crossing 40, according to some embodiments of the invention. The detection device 100 comprises an imaging unit 110 oriented towards a barrier 200 of the level crossing 40. The imaging unit 110 comprises a first imaging module 111, the first imaging module 111 being configured to acquire two-dimensional (2D) images of a first scene according to a first plurality of acquisition parameters, each of the 2D images comprising a 2D arrangement of pixels. The first plurality of acquisition parameters comprises, for example and without limitation, an observation point, a field of view and a line of sight. Further, the first scene captured by the first imaging module 111 may comprise a barrier 200 of interest deployed in the level crossing 40.

In one embodiment of the invention, the first imaging module 111 may be configured to acquire the 2D images in the visible portion of the electromagnetic spectrum.

In another embodiment of the invention, the first imaging module 111 may be configured to acquire the 2D images in a range of the electromagnetic spectrum that includes the visible portion and/or the infrared portion.

The imaging unit 110 further comprises a second imaging module 112, the second imaging module 112 being configured to acquire a three-dimensional (3D) cloud of points of a second scene according to a second plurality of acquisition parameters. The second plurality of acquisition parameters comprises, for example and without limitation, an observation point, a field of view and a plurality of lines of sight. Further, the second scene captured by the second imaging module 112 may comprise the barrier 200 of interest deployed in the level crossing 40. Each of the points of the 3D cloud of points may be assigned one or more values depending on the detection technique implemented in the second imaging module 112. The second imaging module 112 may, for example and without limitation, be a LiDAR (Light Detection and Ranging) module capable of scanning a given area by sequentially emitting laser beams along different lines of sight and by detecting the returned portion of the emitted beams. Thus, a point of a 3D cloud of points may be obtained by comparing the emitted beam with the corresponding returned beam. The values assigned to such a point may, for example and without limitation, include the time of flight (ToF), the intensity of the returned beam, and so on.
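For illustration (a minimal sketch, not part of the patent text), the range to a reflecting point follows directly from the time of flight, since the emitted beam travels the path to the target twice:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(time_of_flight_s: float) -> float:
    """Range to the reflecting surface: the beam travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * time_of_flight_s / 2.0

print(lidar_range(66.7e-9))  # ~10 m for a 66.7 ns round trip
```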

In some embodiments of the invention, a color may be assigned to each point of the 3D cloud of points depending on one or more values assigned to the considered point.

The detection device 100 may comprise a database 130 configured to store a position of the barrier 200 of interest of the level crossing 40 in a given coordinate system. Given the rotational movement of a barrier 200 of a level crossing 40, the database 130 may also be configured to store the possible positions of the barrier 200 of interest when changing from a closed state to an open state or vice versa.

In some embodiments of the invention, the barrier 200 of interest of the level crossing 40 may be subject to translational movements which may be caused, for example and without limitation, by the wind or by an accident. In such embodiments, the database 130 may be configured to store a 3D barrier 200 movement envelope within which the barrier 200 of interest of the level crossing 40 moves.

The detection device 100 further comprises a processing unit 120, the processing unit 120 being configured to determine one or more parameters of the barrier 200 of interest by processing the 2D images and the 3D clouds of points provided by the first imaging module 111 and by the second imaging module 112 of the imaging unit 110, respectively. More precisely, the processing unit 120 may be configured, at a given moment in time, to receive a 2D image and a 3D cloud of points simultaneously acquired by the first and by the second imaging module 111 and 112, respectively, the 2D image and the 3D cloud of points being acquired with respect to a common barrier 200 of interest of the level crossing 40.

The processing unit 120 may further be configured to detect in the received 2D image a set of pixels representing the barrier 200 of interest of the level crossing 40. In some embodiments of the invention, the detection of such a set of pixels by the processing unit 120 may be carried out by making use of one or more characteristics of the barrier 200 of interest such as its color, its shape and/or its geometric dimensions. In other embodiments of the invention, the processing unit 120 may be configured to detect the set of pixels representing the barrier 200 of interest by applying a machine learning algorithm on the pixels of the 2D image. The parameters of the machine learning algorithm may be optimized beforehand using real-world and/or synthetized 2D images. Examples of machine learning algorithms include, but are not limited to, linear regression, logistic regression, decision tree, K-means, etc.
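As a purely illustrative sketch (the patent does not disclose an implementation), a per-pixel classifier based on logistic regression, one of the algorithm families cited above, could be trained on colour features; the training data below is synthetic and hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: classify each pixel as barrier / not-barrier from
# its RGB value. Real systems would use richer features and real data.
rng = np.random.default_rng(0)
barrier = rng.normal([200, 60, 60], 20, size=(500, 3))      # reddish barrier pixels
background = rng.normal([120, 120, 120], 30, size=(500, 3))  # grey background pixels
X = np.vstack([barrier, background])
y = np.array([1] * 500 + [0] * 500)

clf = LogisticRegression().fit(X, y)

# Apply the trained classifier to every pixel of a 2D image.
image = rng.integers(0, 256, size=(480, 640, 3)).astype(float)
mask = clf.predict(image.reshape(-1, 3)).reshape(480, 640)  # 1 = barrier pixel
```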

The processing unit 120 may further be configured to synthetize a virtual 2D image from the received 3D cloud of points, the virtual 2D image comprising a 2D arrangement of pixels. Further, the virtual 2D image may be synthetized by the processing unit 120 according to the first plurality of acquisition parameters of the corresponding 2D image acquired by the first imaging module 111. For example, the virtual 2D image may be synthetized according to the same observation point, the same field of view and the same line of sight as the corresponding 2D image. To perform the synthetization, the processing unit 120 may be configured to access and use intrinsic and extrinsic parameters associated with the first imaging module 111. In some embodiments, the intrinsic and extrinsic parameters of the first imaging module 111 may be obtained by calibration. In some embodiments, the intrinsic parameters may be received by the system (for example, they may be set by the supplier of the first imaging module). The intrinsic parameters may be represented by a matrix. The matrix associated with the intrinsic parameters provides a mapping relation between the position of an object in the real world (for example in camera 3D coordinates) and the position (for example pixels) of the object on the image (for example in pixel 2D coordinates). The intrinsic matrix K may be represented as follows:

K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}

In the intrinsic matrix K, f_x and f_y respectively represent the horizontal and the vertical focal lengths in pixels, and (c_x, c_y) represents the position in pixels of the optical center in the image.

The extrinsic parameters are also represented by a matrix. The matrix associated with the extrinsic parameters provides a mapping relation between the position of an object in the real world (for example in world 3D coordinates) and its position in camera 3D coordinates; combined with the intrinsic matrix, it thus allows mapping the position of the object in the real world to its position (for example in pixel 2D coordinates) on the image. The extrinsic matrix T may be represented as follows:

T = \begin{pmatrix} R & t \\ 0_{1 \times 3} & 1 \end{pmatrix}

In the extrinsic matrix T, [R] designates a rotation matrix (for three rotation angles) and {t} defines a translation vector (with three components).

The calibration of the extrinsic parameters may be performed using a LiDAR: the relative position (translation and rotation) of the first imaging module 111 with respect to the LiDAR is determined, the LiDAR being taken as the reference for the world coordinates.
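For illustration, the following sketch (not part of the patent disclosure; the numerical values are arbitrary assumptions) shows how a point of the 3D cloud of points, expressed in world coordinates, may be projected to pixel coordinates using the extrinsic matrix T and the intrinsic matrix K introduced above:

```python
import numpy as np

def project_point(p_world, K, T):
    """Project a 3D point (world coordinates) to pixel coordinates using
    the extrinsic matrix T (4x4) and the intrinsic matrix K (3x3)."""
    p_h = np.append(p_world, 1.0)   # homogeneous world point
    p_cam = (T @ p_h)[:3]           # world -> camera coordinates
    if p_cam[2] <= 0:               # point behind the focal plane
        return None
    uvw = K @ p_cam                 # camera coordinates -> image plane
    return uvw[:2] / uvw[2]         # perspective division -> (u, v)

# Example values: f_x = f_y = 1000 px, optical center at (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity extrinsics: camera frame equals world frame
print(project_point(np.array([0.5, 0.2, 10.0]), K, T))  # -> [690. 380.]
```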

In some embodiments of the invention, the processing unit 120 may be configured to synthetize the virtual 2D image so that each pixel of the virtual 2D image corresponds to a pixel of the corresponding 2D image as acquired by the first imaging module 111. This means that two corresponding pixels of the virtual 2D image and of the 2D image as acquired by the first imaging module 111 relate to a same location. In one embodiment of the invention, the processing unit 120 may be configured to compute each pixel of the virtual image from one point of the 3D cloud of points. In such an embodiment, the color assigned to the considered pixel may be the color assigned to the associated point, i.e. the color obtained from one or more values of the associated point.

In another embodiment of the invention, the processing unit 120 may be configured to compute each pixel of the virtual image from one or more points of the 3D cloud of points. In such an embodiment, the color assigned to the considered pixel may be obtained, for example and without limitation, by merging the colors assigned to the one or more associated points.

The processing unit 120 may further be configured to merge the 2D image as acquired by the first imaging module 111 with the corresponding synthetized virtual 2D image, thus obtaining an enhanced 2D image. The enhanced 2D image represents an enhanced 2D representation of the level crossing 40. More precisely, each pixel of the enhanced 2D image may be obtained by merging the two corresponding pixels of the merged images. In some embodiments of the invention, when a pixel in one of the merged images is missing, the processing unit 120 may be configured to compute the corresponding pixel of the enhanced image by treating the missing pixel as a transparent pixel.
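A minimal sketch of such a per-pixel merge (the equal weighting and the use of NaN to mark a missing pixel are assumptions made for illustration; the patent does not fix a merge rule):

```python
import numpy as np

def merge_images(img, virtual, weight=0.5):
    """Blend the acquired 2D image with the virtual 2D image pixel by
    pixel. NaN marks a missing pixel, which is treated as transparent:
    the corresponding pixel of the other image is kept unchanged."""
    out = weight * img + (1.0 - weight) * virtual
    out = np.where(np.isnan(img), virtual, out)   # acquired pixel missing
    out = np.where(np.isnan(virtual), img, out)   # virtual pixel missing
    return out
```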

In some embodiments of the invention, the processing unit 120 may be configured to synthetize the virtual 2D image from the 3D cloud of points by implementing the steps consisting in:

- projecting the 3D cloud of points on a 2D plane representing a focal plane of the first imaging module 111;

- removing from the projected points one or more points representing occluded objects with respect to the first imaging module 111;

- interpolating the remaining projected points to synthetize the virtual 2D image, each pixel of the virtual 2D image corresponding to a pixel of the 2D image as acquired by the first imaging module 111.

In other embodiments of the invention, the processing unit 120 may be configured to synthetize the virtual 2D image by only considering points of the 3D cloud of points that are located inside the 3D barrier 200 movement envelope.
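One common way to realise the removal step is a z-buffer, keeping for each pixel only the nearest projected point; this is an illustrative assumption, as the patent does not prescribe a particular occlusion-removal technique:

```python
import numpy as np

def z_buffer(pixel_coords, depths, shape):
    """Keep, for each pixel of the focal plane, only the nearest projected
    point; points belonging to occluded objects are thereby removed."""
    buf = np.full(shape, np.inf)
    for (u, v), z in zip(pixel_coords, depths):
        u, v = int(round(u)), int(round(v))
        if 0 <= v < shape[0] and 0 <= u < shape[1] and z < buf[v, u]:
            buf[v, u] = z
    # Pixels left at np.inf received no point; the virtual 2D image is then
    # completed by interpolating these pixels from their neighbours.
    return buf
```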

In some embodiments of the invention, the processing unit 120 may be configured to synthetize a virtual 3D cloud of points from the received 2D image as acquired by the first imaging module 111. Further, the virtual 3D cloud of points may be synthetized by the processing unit 120 according to the second plurality of acquisition parameters of the corresponding 3D cloud of points acquired by the second imaging module 112. For example, the processing unit 120 may be configured to synthetize the virtual 3D cloud of points so that each point of the virtual 3D cloud of points corresponds to a point of the corresponding 3D cloud of points as acquired by the second imaging module 112. This means that two corresponding points of the virtual 3D cloud of points and of the 3D cloud of points as acquired by the second imaging module 112 relate to a same location. In such embodiments, the processing unit 120 may further be configured to merge the 3D cloud of points as acquired by the second imaging module 112 with the corresponding synthetized virtual 3D cloud of points, thus obtaining an enhanced 3D cloud of points. The enhanced 3D cloud of points represents an enhanced 3D representation of the level crossing 40.

The processing unit 120 may further be configured to determine one or more parameters of the barrier 200 of interest from at least one of the enhanced representations of the level crossing 40. For example, the processing unit 120 may be configured to determine a barrier 200 position parameter by measuring an angle between a current position of the barrier 200, as determined from at least one of the enhanced representations of the level crossing 40, and a reference position of the barrier 200 as stored in the database 130. The reference position of the barrier 200 may, for example and without limitation, correspond to a closed state or to an open state of the barrier 200 of interest of the level crossing 40.
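For illustration, assuming the pivot and the tip of the barrier can be located in the enhanced representation (the function and argument names are hypothetical, not part of the patent text), the position angle may be measured as follows:

```python
import numpy as np

def barrier_angle(pivot, tip, reference_dir):
    """Angle (degrees) between the barrier's current direction and the
    reference direction stored in the database (e.g. the closed state)."""
    current = np.asarray(tip, float) - np.asarray(pivot, float)
    ref = np.asarray(reference_dir, float)
    cos_a = current @ ref / (np.linalg.norm(current) * np.linalg.norm(ref))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

print(barrier_angle((0, 0), (3, 3), (1, 0)))  # -> 45.0
```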

In some embodiments of the invention, the processing unit 120 may further be configured to determine a barrier 200 movement parameter from one of the enhanced representations of the level crossing 40 and a corresponding previously determined enhanced representation, the current and the previous representation being sequentially determined by the processing unit 120 with respect to two successive moments in time. For example, the barrier 200 movement parameter may be determined by comparing the current position of the barrier 200 to a previous position of the barrier 200 with reference to time.
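A minimal sketch of such a comparison, expressed as an angular speed computed from two successive position measurements (the finite-difference form is an assumption for illustration):

```python
def barrier_angular_speed(angle_now_deg, angle_prev_deg, dt_s):
    """Movement parameter as an angular speed (degrees per second),
    obtained by comparing two successive barrier positions over time."""
    return (angle_now_deg - angle_prev_deg) / dt_s
```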

In other embodiments of the invention, the processing unit 120 may further be configured to determine a barrier 200 integrity parameter from at least the enhanced 3D representation of the level crossing 40. The barrier 200 integrity parameter may be determined by the processing unit 120 by comparing a length of the barrier 200 as measured within the 3D barrier 200 movement envelope with a length of the barrier 200 as measured without considering the 3D barrier 200 movement envelope.
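A sketch of that length comparison (the relative tolerance is an assumed value, not taken from the patent):

```python
def barrier_is_integral(length_in_envelope: float,
                        length_total: float,
                        tolerance: float = 0.05) -> bool:
    """The barrier is considered integral when the length measured inside
    the 3D movement envelope matches the length measured without the
    envelope constraint, within a relative tolerance (assumed 5% here)."""
    return abs(length_in_envelope - length_total) <= tolerance * length_total
```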

Figure 3 is a flowchart illustrating a method for determining one or more parameters of a barrier 200 of interest of a level crossing 40, in accordance with an embodiment of the invention.

At step 300, a 2D image and a 3D cloud of points may be simultaneously received, the 2D image and the 3D cloud of points being acquired by the first imaging module 111 and by the second imaging module 112, respectively. The first imaging module 111 may acquire the 2D image according to a first plurality of acquisition parameters. Similarly, the second imaging module 112 may acquire the 3D cloud of points according to a second plurality of acquisition parameters.

At step 301 , a set of pixels representing the barrier 200 of interest of the level crossing 40 may be detected in the 2D image. The detection of such a set of pixels may be achieved by applying a machine learning algorithm on the pixels of the 2D image. The parameters of the machine learning algorithm may be optimized beforehand using real-world and/or synthetized 2D images. Exemplary machine learning algorithms include, without limitation, linear regression, logistic regression, decision tree, K-means, etc.

At step 302, the barrier 200 of interest may be detected from the 3D cloud of points based on the data stored in the database 130.

At step 303, a virtual 2D image may be synthetized from the set of points of the 3D cloud of points, the synthetization being carried out on the pixels representing the barrier (as detected at detection step 301).

The virtual 2D image may be synthetized according to the first plurality of acquisition parameters of the corresponding 2D image acquired by the first imaging module 111. Step 303 may further comprise synthetizing a virtual 3D cloud of points from the 2D image according to the second plurality of acquisition parameters of the corresponding 3D cloud of points acquired by the second imaging module 112.

At step 304, the 2D image as acquired by the first imaging module 111 and the corresponding synthetized virtual 2D image may be merged, thereby obtaining an enhanced 2D image which represents an enhanced 2D representation of the level crossing 40. Step 304 may further comprise merging the 3D cloud of points as acquired by the second imaging module 112 and the corresponding synthetized virtual 3D cloud of points, thereby obtaining an enhanced 3D cloud of points which represents an enhanced 3D representation of the level crossing 40.

At step 305, one or more parameters of the barrier 200 of interest of the level crossing 40 may be determined from at least one of the enhanced representations of the level crossing 40. The one or more parameters include, for example and without limitation, a barrier position parameter, a barrier movement parameter and a barrier integrity parameter.

It should be noted that the functions, acts, and/or operations specified in the flow charts, sequence diagrams, and/or block diagrams may be re-ordered, processed serially, and/or processed concurrently consistent with embodiments of the invention. Moreover, any of the flow charts, sequence diagrams, and/or block diagrams may include more or fewer blocks than those illustrated consistent with embodiments of the invention.

While embodiments of the invention have been illustrated by a description of various examples, and while these embodiments have been described in considerable detail, it is not the intent of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative methods, and illustrative examples shown and described.