

Title:
CAMERA BASED WADE ASSIST
Document Type and Number:
WIPO Patent Application WO/2018/234200
Kind Code:
A1
Abstract:
The present invention refers to a method for wade level detection in a vehicle (110) comprising at least one camera (116, 118, 120, 122) with a field of view (β) covering at least part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, comprising the steps of calibrating a position of the at least one camera (116, 118, 120, 122) with respect to the body (126) of the vehicle (110) prior to usage of the vehicle (110), learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, receiving a camera image covering the part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), comparing the part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), to the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, and detecting a wade level based on the comparison. The present invention also refers to a driving assistance system (12) for a vehicle (10), wherein the driving assistance system (12) is adapted to perform the above method. Furthermore, the present invention also refers to a vehicle (10) with an above driving assistance system (12).

Inventors:
YOGAMANI SENTHIL KUMAR (IE)
CHANDRA SUNIL (IE)
HUGHES CIARAN (IE)
Application Number:
PCT/EP2018/066037
Publication Date:
December 27, 2018
Filing Date:
June 18, 2018
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
G06K9/00
Domestic Patent References:
WO2015071170A1 2015-05-21
Foreign References:
US20140293056A1 2014-10-02
GB2518850A 2015-04-08
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (DE)
Claims:
Patent claims

1. Method for wade level detection in a vehicle (110) comprising at least one camera (116, 118, 120, 122) with a field of view (β) covering at least part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, comprising the steps of

calibrating a position of the at least one camera (116, 118, 120, 122) with respect to the body (126) of the vehicle (110) prior to usage of the vehicle (110),

learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards,

receiving a camera image covering the part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards,

identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130),

comparing the part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), to the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, and

detecting a wade level based on the comparison.

2. Method according to claim 1, characterized in that

the step of learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards comprises performing self-learning of the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards.

3. Method according to any of claims 1 or 2, characterized in that

the step of identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), comprises modeling the liquid (130) as a dynamic texture evolving in space with a color prior.

4. Method according to claim 3, characterized in that the step of modeling the liquid (130) as a dynamic texture evolving in space with a color prior comprises modeling the liquid (130) as a pixel-wise temporally evolving Auto Regressive Moving Average process.

5. Method according to any of claims 1 to 4, characterized in that

the method comprises a step of detecting ground water in a driving direction and a step of automatically activating the wade level detection upon detected ground water in the driving direction.

6. Method according to any preceding claim, characterized in that

the step of detecting a wade level based on the comparison comprises performing a subtraction of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), from the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards.

7. Method according to claim 6, characterized in that

the step of detecting a wade level based on the comparison further comprises a step of modeling image points remaining after the subtraction by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance).

8. Method according to claim 7, characterized in that

the method comprises the step of tiling the image into various blocks, whereby the top k Gaussians are learnt separately for each block.

9. Method according to any preceding claim, characterized in that

the method comprises an additional step of performing a vehicle to vehicle warning upon detection of a pre-defined wade level.

10. Method according to any preceding claim, characterized in that

the method comprises an additional step of performing a dynamic online background subtraction for other vehicles.

11. Method according to any preceding claim, characterized in that the method comprises an additional step of performing a displaced volume detection of liquid (130) displaced by the vehicle (110).

12. Method according to any preceding claim, characterized in that

the vehicle comprises an ultrasonic distance sensor, and

the method comprises an additional step of fusing the detected wade level based on the comparison and a wade level detected by the ultrasonic distance sensor.

13. Driving assistance system (12) for a vehicle (10), characterized in that the driving assistance system (12) is adapted to perform the method according to any preceding claim.

14. Vehicle (10) with a driving assistance system (12) according to preceding claim 13.

Description:
Camera based wade assist

The present invention refers to a method for wade level detection in a vehicle comprising at least one camera.

The present invention also refers to a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.

The present invention further refers to a vehicle with the above driving assistance system.

Existing solutions for wade level detection in a vehicle are based on ultrasonic sensors, which are typically mounted on side mirrors. The ultrasonic sensors are typically directed downwards to ground to detect a distance to a water surface from the ground.

Fig. 1 shows a vehicle 10 known in the art. The vehicle has a wing mirror 12, and an ultrasonic sensor 14 is mounted at the wing mirror 12. The ultrasonic sensor 14 has a field of view α. As can be seen in Fig. 1, the vehicle is partially covered by water 16. Hence, ultrasonic waves emitted from the ultrasonic sensor 14 are reflected from a surface 18 of the water 16. The wade level is determined based on reflections of ultrasonic pulses emitted from the ultrasonic sensor 14. The reflections are received by the ultrasonic sensor 14, and based on the runtime of the ultrasonic waves, a distance to the surface 18 of the water 16 is determined using the known position of the ultrasonic sensor 14 at the vehicle 10. Additional contact sensors 20 for detecting the water 16 based on contact are provided at the bottom of the vehicle 10. The contact sensors 20 are e.g. used for validation of the ultrasonic sensors 14.
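The time-of-flight principle behind this prior-art measurement can be sketched as follows; the sensor height and echo time are illustrative values, not taken from the document:

```python
# Sketch: wade level from an ultrasonic echo, assuming the sensor height
# above ground is known from the mounting position (values illustrative).
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wade_level_from_echo(echo_time_s: float, sensor_height_m: float) -> float:
    """Water depth = sensor height minus one-way distance to the surface."""
    distance_to_surface = SPEED_OF_SOUND * echo_time_s / 2.0  # round trip halved
    return max(0.0, sensor_height_m - distance_to_surface)

# Example: sensor mounted 1.0 m above ground, echo returns after ~4.66 ms,
# so the surface is about 0.80 m below the sensor -> depth about 0.20 m
depth = wade_level_from_echo(0.00466, 1.0)
```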

However, such ultrasonic sensors have a limited field of view for detecting the water level, i.e. the surface of the water. Hence, the ultrasonic sensors merely perform a local measurement of the water level. Furthermore, the ultrasonic sensors typically require a flat surface, which limits their usability. Still further, water is a liquid medium, so that its surface can move a lot in an unpredictable manner. Water is a dynamically evolving manifold because of its fluidic nature. This makes it difficult to determine the wade level using ultrasonic sensors.

In this context, according to WO 2015/071170 A1, a vehicle comprises a system for aiding driver control of the vehicle when the vehicle is wading in a body of water. The system comprises a measurement apparatus for determining a measured depth of water in which the vehicle is wading. The measurement apparatus is positioned and arranged relative to the vehicle such that the measured depth is indicative of the depth of water in a first measurement region relative to the actual vehicle. A processor is coupled to the measurement apparatus and is configured to calculate an estimated water depth in dependence upon the measured depth and in dependence upon the vehicle speed. Hence, the vehicle comprises a system having a control unit and at least one remote sensor, which includes a first ultrasonic transducer sensor mounted to a left-side mirror of the vehicle and a second ultrasonic transducer sensor mounted to a right-side mirror of the vehicle. The first and second ultrasonic transducer sensors are configured to emit and receive a pulsed ultrasonic signal. The time of receipt of an echoed ultrasonic signal is indicative of a distance sensed between the ultrasonic transducer sensor and the surface level of a body of water in a measurement region adjacent to the vehicle.

It is an object of the present invention to provide a method for wade level detection in a vehicle, a driving assistance system for a vehicle, and a vehicle with such a driving assistance system, which overcome at least some of the above problems. In particular, it is an object of the present invention to provide a method for wade level detection in a vehicle, a driving assistance system for a vehicle, and a vehicle with such a driving assistance system, which enable a reliable wade level detection in a simple manner.

This object is achieved by the independent claims. Advantageous embodiments are given in the dependent claims.

In particular, the present invention provides a method for wade level detection in a vehicle comprising at least one camera with a field of view covering at least part of a vehicle body from the at least one camera in a direction downwards, comprising the steps of calibrating a position of the at least one camera with respect to the body of the vehicle prior to usage of the vehicle, learning a shape of the vehicle body from the at least one camera in a direction downwards, receiving a camera image covering the part of a vehicle body from the at least one camera in a direction downwards, identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comparing the part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, to the shape of the vehicle body from the at least one camera in a direction downwards, and detecting a wade level based on the comparison.

The present invention also provides a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.

The present invention further provides a vehicle with the above driving assistance system.

The basic idea of the invention is to use at least one camera, typically at least one optical camera, to perform wade detection. The usage of the at least one camera has the advantage that it has a field of view superior to that of a typical ultrasonic sensor. Due to the superior field of view, an increased area of the surface can be monitored, so that the reliability of the wade detection can be increased. Furthermore, the wide field of view enables analysis of the field of view already prior to reaching a wade area. Furthermore, nowadays vehicles are frequently equipped with at least one camera. Hence, using this camera, the method can be performed without additional camera hardware. Compared to ultrasonic sensors, cameras provide a huge amount of sensor data, which enables a detailed analysis to implement a reliable wade detection, in particular a wade level detection.

Wading can be a desired or accepted feature, e.g. in the case of off-road vehicles. However, also for regular vehicles, it is important to monitor a wade level to avoid damage due to the liquid. The liquid is typically water or mud with a liquid characteristic.

The wade level refers to a height of the liquid in an area around the vehicle. For off-road vehicles, wade levels of even more than a meter can be achieved, whereas for regular vehicles, a wade level of a few centimeters can already be dangerous because of possible damage to the vehicle, in particular to the motor, in particular due to water entering into the cylinders. Based on an orientation of the vehicle, the wade level can be different e.g. on the left and right sides of the vehicle, or at its front and at its rear. The vehicle has at least one camera. Preferably, the vehicle has a surround view camera system with multiple cameras covering all directions around the vehicle. Hence, the wade level can be determined all around the vehicle.

The at least one camera has a field of view, which covers part of the vehicle body from the at least one camera in a direction downwards. Hence, this part of the vehicle body can be used as reference for the detection of a liquid level around the vehicle. The at least one camera provides images covering the part of a vehicle body from the at least one camera in a direction downwards. The images can be provided at any suitable rate, e.g. depending on a vehicle speed.

Calibrating a position of the at least one camera with respect to the body of the vehicle is required to generate an absolute reference for determining the wade level. Different cameras can have different references. However, also in this case, the information can be combined based on the known reference positions of the cameras. Calibration has to be performed at least once prior to usage of the vehicle. The calibration step S100 can be repeated, e.g. in order to adapt to changed vehicle features, e.g. when the air pressure in the wheels of the vehicle changes or when the air pressure is changed based on current driving conditions. In particular, off-road vehicles can require different air pressures when driving on a road or in off-road conditions.

According to a modified embodiment of the invention, the step of learning a shape of the vehicle body from the at least one camera in a direction downwards comprises performing self-learning of the shape of the vehicle body from the at least one camera in a direction downwards. Accordingly, the method can be easily applied to different types of vehicles. Changes in the appearance of the vehicle can be easily considered and do not lead to a false wade detection, since the vehicle can adapt to such changes, e.g. in case the color of the vehicle is changed, dirt or water drops reside on the vehicle body, stickers are attached to the vehicle body, or others. Since the vehicle body is a static object, it can be learned as background information during a simple training stage. The shape of the vehicle can be self-learned, i.e. outside the factory. The step of learning a shape of the vehicle body has to be robust to handle illumination variations and the presence of reflections on the body of the vehicle. Typically, for each camera, only a part of the vehicle body will be visible and learned as vehicle body. Preferably, an initial training step is performed prior to usage of the vehicle.

According to a modified embodiment of the invention, the step of identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comprises modeling the liquid as a dynamic texture evolving in space with a color prior. Due to the nature of the liquid, e.g. water or even mud, its surface can move in an unpredictable manner. Liquids are dynamically evolving manifolds because of their fluidic nature. For example, waves can be formed on the surface of the liquid. This makes it in general difficult to determine a correct wade level. However, when adequately modeling the liquid, a correct wade level can be determined despite a movement of the liquid.

According to a modified embodiment of the invention, the step of modeling the liquid as a dynamic texture evolving in space with a color prior comprises modeling the liquid as a pixel-wise temporally evolving Auto Regressive Moving Average process. The Auto Regressive Moving Average process is also referred to as ARMA process. The ARMA process is used in statistical analysis of time series. The ARMA process provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto regression and the second for the moving average.
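To illustrate the idea, a per-pixel dynamic texture test can be reduced to fitting an autoregressive model to each pixel's intensity over time. The following numpy sketch uses a pure AR(1) fit on synthetic data, omitting the moving-average term of the full ARMA model; the thresholds are illustrative assumptions, not values from the document.

```python
import numpy as np

# Minimal sketch: classify a pixel as "dynamic texture" (liquid-like) vs.
# static body by fitting a per-pixel AR(1) model. This is a simplification
# of the pixel-wise ARMA model described above; thresholds are illustrative.
rng = np.random.default_rng(0)
T = 200
# Synthetic time series for two pixels: a near-constant body pixel and a
# water pixel whose intensity evolves as a noisy autoregressive process.
body = np.full(T, 0.5) + rng.normal(0, 0.002, T)
water = np.empty(T)
water[0] = 0.5
for t in range(1, T):
    water[t] = 0.9 * water[t - 1] + 0.05 * rng.normal()

def is_dynamic_texture(series: np.ndarray, var_thresh: float = 1e-4) -> bool:
    """A pixel is liquid-like if its temporal variance is non-negligible
    and a least-squares AR(1) fit shows strong temporal correlation."""
    if series.var() < var_thresh:
        return False  # essentially static -> vehicle body
    x, y = series[:-1], series[1:]
    a = np.dot(x - x.mean(), y - y.mean()) / np.dot(x - x.mean(), x - x.mean())
    return abs(a) > 0.5  # strong autoregressive structure -> dynamic texture
```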

According to a modified embodiment of the invention, the method comprises a step of detecting ground water in a driving direction and a step of automatically activating the wade level detection upon detected ground water in the driving direction. With the at least one camera being directed into the driving direction, detection of ground water is enabled. The ground water can typically be detected already well in advance of the vehicle, depending on a type of camera used and/or an orientation of the camera.

However, also other environment sensors of the vehicle can be used to determine ground water in the driving direction. Accordingly, the wade level detection can already be started in advance, so that the wade level detection is already up and running, when the vehicle enters the water. A manual interaction to start wade detection can be omitted.

According to a modified embodiment of the invention, the step of detecting a wade level based on the comparison comprises performing a subtraction of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, from the shape of the vehicle body from the at least one camera in a direction downwards. By performing the subtraction, it can be determined up to which height the water level reaches in the area of the camera. If the subtraction result is essentially zero, the current image is supposed to represent the shape of vehicle body as previously learned. Hence, no water is present around the vehicle. The higher the subtraction result, the bigger the difference between the current image and the shape of vehicle body as previously learned. Based on the subtraction, non-body parts can be easily eliminated for determining the wade level. A learning stage has to be robust to handle illumination variations and presence of reflections on the body part.
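The subtraction step can be sketched on a toy single-channel image patch; the image values and the threshold below are illustrative, not from the document.

```python
import numpy as np

# Sketch: estimate the wade line by subtracting the current camera image
# from the learned body appearance (toy 1-channel example; values are
# illustrative). Rows run from top of the body patch downwards.
learned_body = np.full((10, 4), 0.8)   # learned appearance of the body part
current = learned_body.copy()
current[6:, :] = 0.3                   # rows 6..9 occluded by liquid

diff = np.abs(current - learned_body)  # per-pixel subtraction
covered = (diff > 0.2).any(axis=1)     # rows that no longer match the body
# The first covered row marks the liquid surface; if no row is covered,
# the subtraction result is essentially zero and no wade condition exists.
wade_row = int(np.argmax(covered)) if covered.any() else None
```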

According to a modified embodiment of the invention, the step of detecting a wade level based on the comparison further comprises a step of modeling the image points remaining after the subtraction by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance). There are many variants of the Gaussian mixture model (GMM) available, which can be used to model the image points. Preferably, the step of modeling the image points remaining after the subtraction by K Gaussians comprises performing an adaptive mixture of Gaussians, where the K Gaussians are sorted based on the ratio ω_i/σ², which favours the least variance and therefore the more consistent Gaussians, and the top k Gaussians are chosen from the sorted order. Further preferred, a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used. According to Zivkovic's adaptive mixture of Gaussians, background subtraction is analyzed with a pixel-level approach. Recursive equations are used to constantly update the parameters but also to simultaneously select an appropriate number of components for each pixel.

According to a modified embodiment of the invention, the method comprises an additional step of tiling the image into various blocks, whereby the top k Gaussians are learnt separately for each block. Hence, for each block, a deviation can be calculated from the learnt model during training time. Training data can be artificially augmented with various noisy effects like reflection, illumination changes, etc. We use this method in particular because it includes an evolving Gaussian function, which can model temporal variability of the liquid surface.

A probability of a pixel with intensity I_t being modeled by these K Gaussians can be calculated as

P(I_t) = Σ_{i=1..K} ω_{i,t} · η(I_t, μ_{i,t}, σ_{i,t}²),

where η(I, μ, σ²) = (2πσ²)^(-1/2) · exp(-(I - μ)² / (2σ²)) denotes the Gaussian probability density.

Parameter updates are performed as follows in the case a component matches with I_t:

ω_{i,t} = (1 - α) ω_{i,t-1} + α,
μ_t = (1 - ρ) μ_{t-1} + ρ I_t,
σ_t² = (1 - ρ) σ_{t-1}² + ρ (I_t - μ_t)²,

where α is the learning rate and ρ = α · η(I_t, μ_{i,t-1}, σ_{i,t-1}²).

Parameter updates are performed as follows in the case components do not match with I_t:

ω_{i,t} = (1 - α) ω_{i,t-1},

while μ and σ remain unchanged.
According to a modified embodiment of the invention, the method comprises an additional step of performing a vehicle to vehicle warning upon detection of a pre-defined wade level. Hence, vehicles which do not support wade detection can be supplied with wade level information. In order to perform the vehicle to vehicle warning, the vehicle comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information. The communication device can be provided to communicate according to any suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others. Vehicle to vehicle warning enables non-intrusive wade detection or even wade level detection for the warned vehicle.

According to a modified embodiment of the invention, the method comprises an additional step of performing a dynamic online background subtraction for other vehicles. Accordingly, the at least one camera is used to determine whether other vehicles are submerged. A wade level can be estimated based on a known or estimated size of the other vehicle and by approximately performing a background subtraction to see how much of the vehicle body is occluded by the liquid.

According to a modified embodiment of the invention, the method comprises an additional step of performing a displaced volume detection of liquid displaced by the vehicle. Hence, an increase in e.g. the water level based on the vehicle entering or moving within the liquid can be considered. E.g. when a surface of the ground water is known, based on the dimensions of the vehicle, it can be easily determined how much a liquid level will rise based on the presence of the vehicle and its inherent liquid displacement. For small pits and large vehicles, the volume of displaced liquid is comparatively large, and the immersion of the vehicle increases rapidly. Thus, a predictive model can be added by using a particle filter, and camera-based estimation offers tracking and localization because of a wide field of view (FoV).
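A first-order sketch of the displaced-volume estimate, with purely illustrative pit and vehicle dimensions; in practice the rise itself further submerges the vehicle, so the estimate would be iterated or tracked, e.g. by the particle filter mentioned above.

```python
# Toy sketch of the displaced-volume correction: the liquid level in a
# bounded pit rises when the vehicle's submerged volume displaces liquid.
# All dimensions are illustrative, not taken from the document.
def level_rise(submerged_volume_m3: float, pit_area_m2: float,
               vehicle_footprint_m2: float) -> float:
    """Displaced liquid spreads over the free surface around the vehicle."""
    free_area = pit_area_m2 - vehicle_footprint_m2
    return submerged_volume_m3 / free_area

# Vehicle submerged 0.3 m deep with a 2 m x 4.5 m footprint in a 60 m2 pit:
rise = level_rise(0.3 * 2.0 * 4.5, 60.0, 2.0 * 4.5)  # about 0.053 m
```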

According to a modified embodiment of the invention, the vehicle comprises an ultrasonic distance sensor, and the method comprises an additional step of fusing the wade level detected based on the comparison and a wade level detected by the ultrasonic distance sensor. The fused information on the wade level increases the reliability of the wade detection and in particular of the wade level detection. The ultrasonic distance sensor can be employed as known in the art to determine the wade level. Fusion can be performed by a heterogeneous Bayesian model, as the data corresponding to the different sensors are very different and have different ranges.

These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter. Individual features disclosed in the embodiments can constitute alone or in combination an aspect of the present invention. Features of the different embodiments can be carried over from one embodiment to another embodiment.
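As a simple stand-in for the sensor fusion step above, the two wade level estimates can be combined by inverse-variance weighting; this is not the heterogeneous Bayesian model of the text but a common minimal baseline, and the sensor variances are illustrative assumptions.

```python
# Minimal sketch: fuse the camera-based and ultrasonic wade levels by
# inverse-variance (precision) weighting of two independent Gaussian
# estimates. Levels and variances below are illustrative.
def fuse(level_cam: float, var_cam: float,
         level_us: float, var_us: float) -> float:
    """Precision-weighted average of two independent estimates."""
    w_cam, w_us = 1.0 / var_cam, 1.0 / var_us
    return (w_cam * level_cam + w_us * level_us) / (w_cam + w_us)

fused = fuse(0.42, 0.01, 0.38, 0.04)  # camera weighted 4x more here
```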

In the drawings:

Fig. 1 shows a schematic view of a vehicle known in the art with a driving assistance system for detecting a wade condition using an ultrasonic sensor in a lateral view,

Fig. 2 shows a schematic view of a vehicle with a driving assistance system for detecting a wade condition using multiple cameras according to a first, preferred embodiment in a top view with additional camera views of the multiple cameras,

Fig. 3 shows a detailed schematic camera view of a left wing camera in accordance with Fig. 2, whereby the camera view is shown once with and once without wade condition, in accordance with the first embodiment,

Fig. 4 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view,

Fig. 5 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view,

Fig. 6 shows a schematic detailed camera view of a rear camera in accordance with Fig. 2, whereby the rear camera view is shown with individual blocks, in accordance with the first embodiment,

Fig. 7 shows a perspective camera view of a front camera in accordance with Fig. 2, whereby the front camera view shows a submerged vehicle, in accordance with the first embodiment, and

Fig. 8 shows a flow chart indicating a method for performing wade level detection with the vehicle and the driving assistance system according to the first embodiment.

Figure 2 shows a vehicle 110 with a driving assistance system 112 according to a first, preferred embodiment of the present invention.

The driving assistance system 112 comprises a processing unit 114 and a surround view camera system 116, 118, 120, 122. The surround view camera system 116, 118, 120, 122 comprises a front camera 116 covering a front direction of the vehicle 110, a rear camera 118 covering a rear direction of the vehicle 110, a right mirror camera 120 covering a right direction of the vehicle 110, and a left mirror camera 122 covering a left direction of the vehicle 110.

The cameras 116, 118, 120, 122 and the processing unit 114 are connected via a data bus connection 124.

Each camera 116, 118, 120, 122 has a field of view β, which can be seen e.g. in Fig. 4 or 5, and which covers part of a vehicle body 126 from the respective camera 116, 118, 120, 122 in a direction downwards, as can be seen in Fig. 2 as well as in Fig. 3.

Subsequently, a method for wade level detection in the vehicle 110 according to the first embodiment will be discussed. The wade level refers to a height of a liquid 130, typically water, around the vehicle 110. In particular, wade condition refers to the presence of the liquid 130 around the vehicle, and wade level refers to a height of a surface 132 of the liquid 130. The method will be discussed with reference to Fig. 8, which shows a flow chart of the inventive method. Apparently, some of the method steps can be performed in an order different to the order of the described embodiments.

The method starts with step S100, which refers to calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110. Calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 refers to generating an absolute reference for determining the wade level. The step of calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 is performed once prior to usage of the vehicle 110. Later on, step S100 can be repeated, e.g. due to changing driving conditions.

In step S110, a shape of the vehicle body 126 in a direction downwards from the respective camera 116, 118, 120, 122 is learned. This comprises performing self-learning of the shape of the vehicle body 126 from the cameras 116, 118, 120, 122 in a direction downwards. Since the vehicle body 126 is a static object, the shape is learned as background information during a training stage. Step S110 can be performed at essentially any time. Step S110 does not have to be performed continuously or every time the method is performed. The step of learning a shape of the vehicle body 126 in a direction downwards from the respective camera 116, 118, 120, 122 is performed once prior to usage of the vehicle 110 as initial training. Later on, the training can be continued.
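The background-learning training stage can be sketched as an exponential running mean over frames taken in dry conditions; the frame data and the learning rate are illustrative assumptions, not values from the document.

```python
import numpy as np

# Sketch of the training stage: the static body appearance is self-learned
# as an exponential running average over frames (a simple way to stay
# robust to mild illumination changes; all values are illustrative).
def learn_body_model(frames, lr: float = 0.1) -> np.ndarray:
    """Exponential running mean of the observed body appearance."""
    model = frames[0].astype(float)
    for frame in frames[1:]:
        model = (1 - lr) * model + lr * frame  # slowly absorb new observations
    return model

rng = np.random.default_rng(1)
clean = np.full((8, 8), 0.6)                     # "true" body appearance
frames = [clean + rng.normal(0, 0.02, clean.shape) for _ in range(50)]
model = learn_body_model(frames)                 # converges near 0.6
```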

In step S120, ground water in a driving direction is detected. Upon positive detection of ground water in the driving direction, the further wade level detection is automatically started. Detection of the ground water is performed using the camera 116, 118, 120, 122 facing in the driving direction. Most commonly, the front camera 116 is used to detect the ground water.

In step S130, the processing unit 114 starts receiving camera images from the cameras 116, 118, 120, 122, each of which covers the respective part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards.

In step S140, a part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is identified. In order to enable a detection of the surface 132, the liquid 130 is modeled as a dynamic texture evolving in space with a color prior. This comprises modeling the liquid 130 as a pixel-wise temporally evolving Auto Regressive Moving Average process. The Auto Regressive Moving Average process is also referred to as ARMA process. The ARMA process is used in statistical analysis of time series and provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto regression and the second for the moving average.

According to step S150, the part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is compared to the shape of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards. Hence, the vehicle body 126 is used as reference for the detection of a liquid level around the vehicle 110. The comparison is performed as a subtraction of the images, so that the remaining image content refers to a wade level. As can be seen in Fig. 3, an identical part 136 of the image provided by the camera 116, 118, 120, 122 is subtracted from the learned shape of the vehicle body 126, so that only differing parts remain.

Based on the comparison, the wade level is detected in step S160. Accordingly, the image points remaining after the subtraction are modeled by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance). In particular, an adaptive mixture of Gaussians is performed, where the K Gaussians are sorted based on the ratio ω_i/σ², which favours the least variance and therefore the more consistent Gaussians. The top k Gaussians are chosen from the sorted order. Furthermore, a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used. According to Zivkovic's adaptive mixture of Gaussians, background subtraction is analyzed with a pixel-level approach. Recursive equations are used to constantly update the parameters and also to simultaneously select an appropriate number of components for each pixel.

Accordingly, each image of each camera 116, 118, 120, 122 is tiled into various blocks 134, as can be seen by way of example in Fig. 6, whereby the top k Gaussians are learnt separately for each block 134. Hence, for each block 134, a deviation is calculated from the learnt model during training time. Training data is artificially augmented with various noisy effects like reflection, illumination changes, etc.

According to step S170, a displaced volume detection of liquid 130 displaced by the vehicle 110 is performed. Hence, an increase in the liquid level based on the vehicle 110 entering the liquid 130 and moving therein is determined. Based on a surface 132 of the ground water and the dimensions of the vehicle 110, it is determined how much the wade level rises based on the presence of the vehicle 110 and its displacement of liquid 130.

According to step S180, a dynamic online background subtraction for other vehicles is performed. Accordingly, the cameras 116, 118, 120, 122 are used to determine whether other vehicles 138 are submerged, as can be seen with respect to Fig. 7. A wade level can be estimated based on a known or estimated size of the other vehicle 138 and by approximately performing a background subtraction to see how much of the vehicle body is occluded by the liquid 130.

According to step S190, a vehicle to vehicle warning is performed upon detection of the wade level, i.e. the warning is performed when the wade level is above a pre-defined wade level. The driving assistance system 112 comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information. The communication device is provided to communicate according to a suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others.

Reference signs list

10 vehicle (state of the Art)

12 wing mirror (state of the Art)

14 ultrasonic sensor (state of the Art)

16 water (state of the Art)

18 surface (state of the Art)

20 contact sensor (state of the Art)

110 vehicle

112 driving assistance system

114 processing unit

116 front camera

118 rear camera

120 right mirror camera

122 left mirror camera

124 data bus connection

126 vehicle body

130 liquid

132 surface

134 block

136 identical part