

Title:
OBSTACLE DETECTION USING A CLASSIFIER TRAINED BY HORIZON-BASED LEARNING
Document Type and Number:
WIPO Patent Application WO/2019/093885
Kind Code:
A1
Abstract:
An obstacle detection system comprises a camera (301) configured to capture an image of an environment. A classifier (302) is configured to classify a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification. An obstacle detector (303) is configured to identify the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon. The obstacle detector (303) is configured to compare the certainty of the classification into below-horizon or above-horizon with a threshold. It identifies the portion of the image as the obstacle if the certainty of the classification is below the threshold, and identifies the portion of the image as a non-obstacle if the portion is classified into below-horizon or above-horizon with a certainty above the threshold.

Inventors:
DE CROON GUIDO (NL)
Application Number:
PCT/NL2018/050740
Publication Date:
May 16, 2019
Filing Date:
November 06, 2018
Assignee:
UNIV DELFT TECH (NL)
International Classes:
G06K9/00
Domestic Patent References:
WO2014072737A12014-05-15
Foreign References:
US20140314270A12014-10-23
US20160167226A12016-06-16
Other References:
CARRIO ADRIAN ET AL: "A real-time supervised learning approach for sky segmentation onboard unmanned aerial vehicles", 2016 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS (ICUAS), IEEE, 7 June 2016 (2016-06-07), pages 8 - 14, XP032917954, DOI: 10.1109/ICUAS.2016.7502586
DE CROON G C H E ET AL: "Sky Segmentation Approach to obstacle avoidance", AEROSPACE CONFERENCE, 2011 IEEE, IEEE, 5 March 2011 (2011-03-05), pages 1 - 16, XP031938114, ISBN: 978-1-4244-7350-2, DOI: 10.1109/AERO.2011.5747529
G. DE CROON; B. REMES; C. DEWAGTER; R. RUIJSINK: "Sky segmentation approach to obstacle avoidance", IEEE AEROSPACE CONFERENCE, Big Sky, 2011
Y. GAL; Z. GHAHRAMANI: "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2016, pages 1050 - 1059
L. BREIMAN: "Random forests", MACHINE LEARNING, vol. 45, no. 1, 2001, pages 5 - 32
T. MAENPAA: "The local binary pattern approach to texture analysis: extensions and applications", OULUN YLIOPISTO, 2003
K. I. LAWS: "Texture energy measures", PROC. IMAGE UNDERSTANDING WORKSHOP, 1979, pages 47 - 51
Attorney, Agent or Firm:
NEDERLANDSCH OCTROOIBUREAU (NL)
Claims:
CLAIMS:

1. An obstacle detection system, comprising

a camera (301) configured to capture an image of an environment;

a classifier (302) configured to classify a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification;

an obstacle detector (303) configured to identify the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon.

2. The obstacle detection system of claim 1, wherein the obstacle detector (303) is configured to:

compare the certainty of the classification into below-horizon or above-horizon with a threshold; and

identify the portion of the image as the obstacle if the certainty of the classification is below the threshold, and to identify the portion of the image as a non-obstacle if the portion is classified into below-horizon or above-horizon with a certainty above the threshold.

3. The obstacle detection system of claim 1, further comprising an autonomously moving object (300), wherein the object is configured to avoid an area associated with the portion of the image that has been identified as an obstacle.

4. The obstacle detection system of claim 1, further comprising a training unit (304) configured to train the classifier based on a plurality of input images and an indication of the horizon in each input image.

5. The obstacle detection system of claim 4, further comprising a sensor (305), wherein the training unit is configured to determine the horizon based on an output of the sensor.

6. The obstacle detection system of claim 1, wherein the classifier (302) is configured to recognize an image feature included in an image and to perform the classification of the portion of the image based on the recognized image feature.

7. The obstacle detection system of claim 5, wherein the classifier (302) is configured to output a low certainty of the classification for the portion of the image if the image feature has appeared for portions both below the horizon and above the horizon in different images that are processed by the training unit.

8. The obstacle detection system of claim 4, further comprising an autonomously moving object (300) wherein the autonomously moving object (300) is configured to move around while capturing the input images for the training unit.

9. The obstacle detection system of claim 8, wherein the autonomously moving object (300) comprises a sensor (305) to determine a roll/pitch to determine the horizon in the input images.

10. The obstacle detection system of claim 8, wherein the autonomously moving object (300) comprises a drone, wherein the drone is configured to move around at different heights and different rotation angles while capturing the input images for the training unit.

11. A method for detecting an object, the method comprising

capturing (401) an image of an environment using a camera;

classifying (402) a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification, using a classifier; and

identifying (403) the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon, using an obstacle detector.

12. A computer program product comprising computer code to cause a computer system to control to capture (401) an image of an environment using a camera, classify (402) a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification, using a classifier, and identify (403) the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon.

Description:
OBSTACLE DETECTION USING A CLASSIFIER TRAINED BY

HORIZON-BASED LEARNING

FIELD OF THE INVENTION

The invention relates to obstacle detection using horizon-based learning.

BACKGROUND OF THE INVENTION

Robots may only reach their potential if they can operate autonomously.

One of the most important autonomy capabilities is to detect obstacles, so that the robot can avoid the obstacles. Prior art robots have a capability of detecting obstacles, but the reliability of this capability is limited.

In robotics, optical flow is a known technique to detect obstacles. Optical flow is problematic as it requires both sufficient motion and sufficient texture, and is computationally expensive when dense estimates are required.

Another known obstacle detection method uses ground detection. Such an existing method typically cannot deal with a floor that has an abrupt change in visual appearance (color or texture), which limits the robot's usefulness.

G. de Croon, B. Remes, C. deWagter, and R. Ruijsink, Sky segmentation approach to obstacle avoidance, in IEEE Aerospace Conference, Big Sky, Montana, USA, 2011, discloses an approach that segments the image to distinguish between 'sky' and 'non-sky'. This idea is based on the fact that non-sky regions above the horizon line are obstacles to flying robots. De Croon et al. learned a decision tree based on a wide range of computer vision features for classifying pixels as sky or non-sky. This sky segmentation approach to obstacle avoidance allowed for very timely avoidance of far-away obstacles. This led to calm and timely obstacle avoidance actions. A solution for handling unknown environments (different lighting, different obstacles) was not addressed.

SUMMARY OF THE INVENTION

It would be advantageous to be able to detect obstacles with just a single, lightweight, energy-efficient sensor, such as a single camera. More generally, it would be advantageous to have a more reliable obstacle detection method and apparatus.

To address this concern, an obstacle detection system is provided, comprising a camera configured to capture an image of an environment; a classifier configured to classify a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification; and an obstacle detector configured to identify the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon.

The classifier that generates a certainty of a classification of a portion of an image into below-horizon or above-horizon was found to be suitable for obstacle detection. This is based on the insight that obstacles can typically occur both below and above the horizon, so that the classification into either above-horizon or below-horizon cannot be made with great certainty.
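
As one possible formalization of this insight (an illustration only, not prescribed by this disclosure), the certainty can be derived from the classifier's probability p that a portion lies above the horizon, for example via the binary entropy:

```latex
% Illustrative uncertainty measure: binary entropy of the
% above-horizon probability p output by the classifier.
% H(p) is maximal (1 bit) at p = 0.5, i.e. when the portion is
% equally likely to lie above or below the horizon (obstacle-like),
% and zero when the classification is unambiguous.
H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
```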

The obstacle detector may be configured to compare the certainty of the classification into below-horizon or above-horizon with a threshold; and identify the portion of the image as the obstacle if the certainty of the classification is below the threshold, and to identify the portion of the image as a non-obstacle if the portion is classified into below-horizon or above-horizon with a certainty above the threshold. This provides a suitable criterion to decide whether an object is detected.

The obstacle detection system may comprise an autonomously moving object, wherein the object is configured to avoid an area associated with the portion of the image that has been identified as an obstacle. This is an efficient obstacle avoidance mechanism of an autonomously moving object.

The obstacle detection system may comprise a training unit configured to train the classifier based on a plurality of input images and an indication of the horizon in each input image. This is a useful feature to improve the performance of the classifier using input images. This may also be used to further improve the performance of the classifier over time while using the images captured when detecting obstacles.

The obstacle detection system may comprise a sensor. In an embodiment the sensor can be an attitude sensor. Moreover, the training unit may be configured to determine the horizon based on an output of the sensor. The sensor can be useful, for example, to train the classifier without user indication of the horizon.

The classifier may be configured to recognize an image feature included in an image and to perform the classification of the portion of the image based on the recognized image feature. For example an image feature included in the portion itself may be used, or an image feature of the image around the portion. Image features may include, but are not limited to, texture and color. The classifier may be configured to output a low certainty of the classification for the portion of the image if the image feature has appeared for portions both below the horizon and above the horizon in different images that are processed by the training unit. This configuration may be the result of a training procedure, which may be conducted by the training unit. Portions that can appear on either side of the horizon in different images are associated with a low certainty of the classification. Also, these portions are likely to contain obstacles. Therefore, an obstacle may be assumed if the certainty of the classification is low.

The obstacle detection system may comprise an autonomously moving object wherein the autonomously moving object is configured to move around while capturing the input images for the training unit. This helps to generate sufficient variety in input images for the training unit, by varying the image content and/or the position of the horizon in different images.

The autonomously moving object may comprise a sensor to determine roll and pitch angles for determining the horizon in the input images, and may further comprise a sensor that determines height, which can be used by the training unit. The determined roll and pitch help to generate a ground truth for the horizon classification, without needing user interaction.

For example, the autonomously moving object may comprise a drone, wherein the drone may be configured to move around at different heights and different rotation angles while capturing the input images for the training unit.

According to another aspect, a method for detecting an object is provided. The method comprises

capturing an image of an environment using a camera;

classifying a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification, using a classifier; and

identifying the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon, using an obstacle detector.

According to another aspect, a computer program product is provided, comprising computer code. The computer code is configured to cause a computer system to control to capture an image of an environment using a camera, classify a portion of the image as below-horizon or above-horizon based on image information of the captured image, wherein the classifier is configured to output an indication of a certainty of the classification, using a classifier, and identify the portion of the image as an obstacle based on the certainty of the classification into below-horizon or above-horizon.

The person skilled in the art will understand that the features described above may be combined in any way deemed useful. Moreover, modifications and variations described in respect of the system may likewise be applied to the method and to the computer program product, and modifications and variations described in respect of the method may likewise be applied to the system and to the computer program product.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following, aspects of the invention will be elucidated by means of examples, with reference to the drawings. The drawings are diagrammatic and may not be drawn to scale.

Fig. 1 shows an illustration of a horizon and objects.

Fig. 2 shows another illustration of a horizon and objects.

Fig. 3 shows a block diagram of an obstacle detection system.

Fig. 4 shows a flowchart of a method of obstacle detection.

DETAILED DESCRIPTION OF EMBODIMENTS

Certain exemplary embodiments will be described in greater detail, with reference to the accompanying drawings.

The matters disclosed in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. Accordingly, it is apparent that the exemplary embodiments can be carried out without those specifically defined matters. Also, well-known operations or structures are not described in detail, since they would obscure the description with unnecessary detail.

Figs. 1 and 2 show, by means of example, the relation between the observer's eye height (camera 1) and the horizon line (the dashed line 2). Fig. 1 makes clear that an assumption of the method is that the appearance of the obstacle above and below the horizon is similar. In a theoretical case, the camera could be exactly aligned with the transition of one color and texture to another (e.g., on the exact separation point between the green 3 and brown 4 of the tree in Fig. 1). However, given a class of objects of slightly different sizes, and given the natural variation in the height of any flying robot, in practice this is extremely unlikely. The same kind of height variation will make the plant 5 in Fig. 2 (just touching the horizon line 2) appear as an obstacle as well. The height of the camera 1 is still very important though: what constitutes an obstacle at one height may not constitute an obstacle at another. In Fig. 2, for instance, the desk 6 in the indoor environment is not an obstacle when flying at the shown camera height, but will be an obstacle when flying considerably lower.

Additionally, the orange circle 7 around the camera 1 indicates a size of the robot in which the camera 1 is mounted: in theory, the concept may relate to a point-sized observer. However, in practice for most flying robots, the body 7 of the agent is small compared to the height variations during learning. Moreover, most ground-based obstacles will significantly extend below and above the horizon. As a consequence, the practical problems for a small-bodied robot are limited. Finally, overhanging obstacles such as the lamp 8 will be detected as an obstacle if they intersect the horizon line.

Fig. 3 shows a block diagram of a system 300 for obstacle detection. The system may be implemented as a computer system, for example using a control unit, such as a computer processor 311, in conjunction with a memory 310 that comprises instructions configured to cause the control unit to perform certain well-defined functions, such as method steps. Moreover, the memory 310 may store parameters of a classifier. The memory 310 may further comprise, for example, captured images and data obtained from a sensor 305. The system 300 may comprise a sensor 305, configured to detect parameters of the system that are relevant for horizon detection. For example, the sensor 305 may comprise an accelerometer, which allows the object's attitude, and hence (based on the geometry of the object, sensor, and camera) the horizon line in the camera image, to be determined. An output signal of the sensor 305 is provided to the control unit 311. Corresponding data may be stored in the memory 310 or processed directly to determine the horizon with respect to a captured image, for example. The system 300 may comprise a camera 301 for capturing images, which may be stored in the memory 310, and in which obstacles may be detected, under control of the control unit 311. The system 300 may be mounted, for example, on an autonomously moving object, such as a robot or a drone. Alternatively, the system 300 may be mounted on a non-autonomously movable object, such as a vehicle. In the latter case, for example, the system 300 may be coupled to a warning system to inform a user of an approaching obstacle. The system can also be used in conjunction with other obstacle detection systems.

The obstacle detection system 300 may comprise a classifier 302 configured to classify a portion of the image as below-horizon or above-horizon, based on image information of the captured image. The classifier 302 is configured to output an indication of a certainty of the classification. The classifier 302 may comprise, for example, computer code to analyze at least one captured image and associated parameters to recognize and distinguish certain image features that are characteristic for image portions above the horizon from image features that are characteristic for image portions below the horizon. The parameters may be determined by a training process, for example, to learn from examples.

The system 300 may comprise an obstacle detector 303 configured to identify a portion of the image as an obstacle based on the certainty of the classification of that portion, as classified by the classifier 302, into below-horizon or above-horizon. To that end, the output certainty of the classifier 302 is input to the obstacle detector 303.

In certain implementations, the obstacle detector 303 may be configured to compare the certainty of the classification into below-horizon or above-horizon with a threshold. The obstacle detector 303 may identify the portion of the image as an obstacle if the certainty of the classification of that image portion is below the threshold. On the other hand, the obstacle detector 303 may be configured to identify the portion of the image as a non-obstacle if the portion is classified into below-horizon or above-horizon with a certainty above the threshold.
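
For illustration only, the comparison performed by the obstacle detector could be sketched in a few lines of code. The threshold value and all names below are assumptions made for the sketch; the disclosure does not prescribe a particular value or programming language.

```python
# Sketch of the threshold test performed by the obstacle detector.
# THRESHOLD is an assumed value chosen for illustration.
THRESHOLD = 0.8

def is_obstacle(certainty: float, threshold: float = THRESHOLD) -> bool:
    """An image portion is flagged as an obstacle when the classifier could
    not assign it to below-horizon or above-horizon with a certainty at or
    above the threshold."""
    return certainty < threshold
```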

In an implementation of an autonomously moving object, the object may be configured to avoid an area associated with the portion of the image that has been identified as an obstacle. This feature may be implemented by computer code to program the control unit to control this obstacle avoidance. Once the obstacle has been detected by the obstacle detection system 300, obstacle avoidance may be implemented in a manner known in the art per se.

The obstacle detection system 300 may further comprise an optional training unit 304 configured to train the classifier 302, based on a plurality of input images and an indication of the horizon in each input image. The training unit 304 may, for example, combine data retrieved from the sensor 305 and images retrieved from the camera 301, to generate ground truth data to train the classifier 302. For example, the training unit 304 may calculate the position of the horizon in a captured image using the data from the sensor. In general the horizon will be a line in the image. Pixels above the horizon may be categorized above-horizon. Pixels below the horizon may be categorized below-horizon. This ground truth information may be used to train the classifier. In an alternative embodiment, the classifier 302 may be preconfigured, so that a training unit 304 is not necessary.
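
As a hedged sketch, the per-pixel ground truth described above could be generated as follows, given a per-column horizon y-coordinate computed from the sensor data. The function name and array conventions are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def label_pixels(height: int, width: int, horizon_y: np.ndarray) -> np.ndarray:
    """Ground-truth labels for training: 1 = above-horizon, 0 = below-horizon.
    `horizon_y` holds the horizon's y-coordinate per image column; image
    y-coordinates are assumed to increase downward, as is conventional."""
    rows = np.arange(height)[:, None]                    # shape (height, 1)
    return (rows < horizon_y[None, :]).astype(np.uint8)  # broadcast to (height, width)
```

These labels, paired with per-pixel image features, would form the self-supervised training set for the classifier 302.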

The training unit 304 may be configured to determine the horizon based on an output of the sensor. The sensor 305 may be any sensor that provides information relevant to calculation of the horizon. For example, known sensors such as magnetometers, gyroscopes, and accelerometers can be used in a state estimation filter to generate information about the roll and pitch angles of the system 300, in particular of the camera 301. From this, the system 300 can compute the horizon as follows: the roll and pitch angles of the system can be used to calculate those angles of the camera (when knowing their relative geometry). Knowing the properties of the camera (e.g., the image location of the principal axis, the angles of the field-of-view, the parameters needed for image undistortion) then makes it possible to project the horizon line into the image.
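
A minimal sketch of this projection for an ideal, undistorted pinhole camera looking roughly forward is given below. The sign conventions for roll and pitch depend on the IMU and camera frames, and the helper shown here is an assumption rather than part of the disclosure.

```python
import numpy as np

def horizon_row_per_column(roll: float, pitch: float,
                           fx: float, cx: float, cy: float,
                           width: int) -> np.ndarray:
    """Project the horizon line into the image of an ideal pinhole camera.
    Angles are in radians; fx, cx, cy are camera intrinsics. Returns the
    horizon's y-coordinate for each image column."""
    # A camera pitched up sees the horizon below the principal point by
    # approximately fx * tan(pitch); the sign depends on conventions.
    y_at_center = cy + fx * np.tan(pitch)
    # Roll rotates the horizon line about the principal point.
    xs = np.arange(width)
    return y_at_center + (xs - cx) * np.tan(roll)
```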

Additionally, other sensors such as a barometric pressure sensor, sonar, and/or messages from the Global Positioning System (GPS) can be used to generate information on the height of the system 300. This height can be used by the training unit to associate different heights with different types of obstacles.

The classifier 302 is configured to recognize an image feature included in an image and to perform the classification of the portion of the image based on the recognized image feature. Image features can include color, intensity, and/or texture. For example, the input to the classifier is the pixel data of the image. Alternatively, certain preprocessing operations may be performed on the pixel data of the image, to form pre-processed input to the classifier. This preprocessing may involve different types of features, such as manually designed features or automatically learned features (as in convolutional neural networks).

The classifier 302 may be configured to output a low certainty of the classification for the portion of the image if the image feature has appeared for portions both below the horizon and above the horizon in different images that are processed by the training unit. Image features of objects that appear above the horizon in some images and below the horizon in some other images, may be classified only with a reduced certainty. This is done by applying these images to the training unit 304, so that the classifier is trained by images showing objects in all positions where they can occur during use of the system.

This can be implemented, for example, in an autonomously moving object 315 by configuring the autonomously moving object 315, for example by means of the control unit 311 and suitable computer code in the memory 310, to move around while capturing the input images for the training unit 304. By moving the object 315, on which the camera 301 and sensor 305 are mounted, around the environment, any objects in the environment can be captured in all relevant locations above and/or below the horizon.

The autonomously moving object 315 may be a drone, for example, wherein the drone is configured to move around at different heights and different rotation angles while capturing the input images for the training unit. Alternatively, the autonomously moving object 315 may be a robot on wheels or feet or an autonomously driving vehicle, for example.

Fig. 4 shows a method for detecting an object. The method may start with step 401, by capturing an image of an environment using a camera. Then, a portion of the captured image may be selected in step 410 and classified in step 402, as being below-horizon or above-horizon, based on image information of the captured image. In doing so, the classifier outputs an indication of a certainty of the classification. After that, in step 403, the system decides whether the portion of the image is an obstacle based on the certainty of the classification into below-horizon or above-horizon. In step 411, it is decided if all desired portions of the image have been processed for obstacle detection. If no, the method proceeds from step 410 by selecting the next portion of the image. If yes, in step 412 it is determined if training is desired. If no training is desired, the method proceeds from step 401 with the next captured image, or terminates. If training is desired, the method proceeds at step 413.
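
The detection part of this flow could be sketched as a simple loop. The callable `classifier` returning a (label, certainty) pair and the threshold value are assumptions made for illustration.

```python
def detect_obstacles(image, portions, classifier, threshold=0.8):
    """Sketch of steps 410/402/403: classify each selected image portion
    and flag it as an obstacle when the classification certainty is low."""
    obstacles = []
    for portion in portions:                           # step 410
        label, certainty = classifier(image, portion)  # step 402
        if certainty < threshold:                      # step 403
            obstacles.append(portion)
    return obstacles
```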

In training step 413, sensor data is collected at the same time as an image is captured; the horizon line may be determined in the image based on the sensor data; and the image with the horizon line is used to train the classifier.

In an embodiment, when the object and the sensor have a fixed orientation, no sensor information is required for determining the horizon line. For example, for a driving robot in a factory with flat floors, the horizon line can always be in the center of the image.

It is observed that this training phase may be performed for each image during operation, for example, after the step of deciding whether there is an obstacle. This way, the performance of the classifier is improved continuously over time.

A computer program product may be implemented comprising computer code to cause a computer system to control to perform the above-described method.

The present disclosure provides a method that may allow, in certain embodiments, the detection of obstacles with a single camera. It builds upon the fact that if a camera looks straight ahead, things that are higher than the camera will be above the horizon line, and things that are lower than the camera will be below the horizon line. Obstacles can be defined as objects that are present above and below the horizon line, while "objects" that pose no threat of becoming an obstacle, such as the sky or the ground, will typically always be on the same side of the horizon line. The method may involve using the robot's sensors (such as accelerometers) to know the robot's attitude, and hence the horizon line. Knowing where the horizon line should be allows a machine learning algorithm to classify pixels in the image as being above or below the horizon line. The classifications, and the corresponding uncertainties, can be used to identify obstacles, since obstacles will be harder to classify than non-obstacles such as ground and sky.

After evaluating the classification uncertainty in an image, a post-processing step can be employed, which removes isolated or noisy obstacle detections. The post-processing can in this way facilitate subsequent processing necessary for obstacle avoidance. Another way to view this process is that the uncertainty per classification is not individually compared to a threshold, but in the context of spatially near features. Relevant techniques may be morphological operations or Markov random fields.
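
As one hedged example of such post-processing, a morphological opening removes small, isolated detections from a binary obstacle mask. The structuring-element size is an assumption; Markov random fields would be an alternative.

```python
import numpy as np
from scipy import ndimage

def clean_obstacle_mask(mask: np.ndarray, size: int = 3) -> np.ndarray:
    """Suppress isolated or noisy obstacle pixels with a morphological
    opening (erosion followed by dilation) over a size x size window."""
    structure = np.ones((size, size), dtype=bool)
    return ndimage.binary_opening(mask.astype(bool), structure=structure)
```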

In certain embodiments, the method allows a robot to quickly learn to recognize obstacles in its own environment, and gives a dense image of obstacle pixels. This can be used in various ways for obstacle avoidance. For example, one can assume that obstacles touch a ground plane and use the y-coordinates of obstacles to represent distances. Such a scheme is a computationally efficient and robust way of avoiding obstacles, especially suitable for resource-constrained robots. It also forms a useful extra visual cue for larger robots.
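
A sketch of this ground-plane heuristic follows. Under the stated assumption that obstacles touch the ground, a larger image y-coordinate (lower in the image) corresponds to a closer obstacle; the function and its conventions are illustrative only.

```python
import numpy as np

def nearest_obstacle_row(mask: np.ndarray) -> np.ndarray:
    """For each image column, return the lowest obstacle pixel (largest row
    index) as a proxy for distance; -1 where the column is obstacle-free.
    Assumes y increases downward and obstacles touch the ground plane."""
    height, _ = mask.shape
    rows = np.arange(height)[:, None] * mask.astype(bool)
    lowest = rows.max(axis=0)
    lowest[~mask.any(axis=0)] = -1
    return lowest
```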

A possible advantage of the obstacle detection techniques disclosed herein may be that the learning problem is relatively easy and can be done by the robot in its own environment. For this, it is advantageous if the robot has an estimate of its attitude.

The advantage of the proposed horizon approach to obstacle detection is that any robot with a sensor or measurement unit providing sufficient information to determine the horizon line, such as an Inertial Measurement Unit (IMU), can continuously learn by itself what objects intersect with the horizon line. Namely, the robot determines its attitude based on the IMU measurements, which in turn means that, given a known placement of the camera with respect to the IMU, the horizon line can be projected in the image. Hence, the robot can employ self-supervised learning in which it labels all pixels in the image with respect to being above or below the horizon line. The advantages of self-supervised learning may include: (1) it can happen in the environment in which the robot operates, so that there are only small discrepancies between the training and testing distributions, (2) supervised learning can be fast, and (3) a large amount of labeled examples will be available to the robot, enabling the optional use of machine learning methods with great representational complexity such as deep neural networks.

In certain embodiments, the robot's interest is not in the classification into above/below horizon itself, but in the uncertainty with which the classification can be performed.

In general, the uncertainty of a classification below or above the horizon serves to recognize obstacles. Of course, uncertainty can have multiple causes. For instance, a classifier can be uncertain about a classification because the observed features occur both above and below the horizon (indicating an obstacle). Uncertain classifications can also be caused by features that are very dissimilar from the features observed by the classifier until now (unknown features). If a classifier can discriminate between different types of uncertainty, then this can be used in the application of the proposed method. Uncertainty related to unknown features can for example trigger a curiosity response of the object, so that the robot can learn whether the features represent an obstacle or not.

Appearance features that appear both above and below the horizon line should have a higher uncertainty than features that appear on only one side. This means that a classifier has to be used that also outputs an (un)certainty value. Many such methods are available, including deep neural networks that can use dropout at test time, as described in, for example, Y. Gal and Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in International conference on machine learning, pages 1050-1059, 2016.
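
A hedged sketch of such test-time (Monte-Carlo) dropout, in the spirit of Gal and Ghahramani, is given below using PyTorch. The two-class softmax output over {below-horizon, above-horizon} and the number of samples are assumptions.

```python
import torch

def mc_dropout_certainty(model: torch.nn.Module, x: torch.Tensor,
                         n_samples: int = 20) -> torch.Tensor:
    """Average several stochastic forward passes with dropout kept active
    and use the mean maximum class probability as the certainty; values
    near 0.5 indicate obstacle-like, hard-to-classify input."""
    model.train()  # train mode keeps dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([model(x) for _ in range(n_samples)]).mean(dim=0)
    return probs.max(dim=-1).values
```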

Alternatively, another type of classifier can be used, such as a random forest classifier, as disclosed in L. Breiman, Random forests, Machine learning, 45(1):5-32, 2001. Per pixel, a simple feature vector may be constructed, concatenating hue, saturation, value (HSV) color values with texture features, such as a Local Binary Pattern (LBP) of radius 3 with 24 sampling points, as disclosed in T. Maenpaa, The local binary pattern approach to texture analysis: extensions and applications, Oulun yliopisto, 2003, and the 9 Laws masks, as disclosed in K. I. Laws, Texture energy measures, in Proc. Image understanding workshop, pages 47-51, 1979. The 'simplicity' of this implementation may actually be an advantage for both training speed and the computational effort during training and testing, when executed on computationally restricted flying robots. A sketch of such a feature construction is given after the concluding paragraph below.

In this disclosure, a horizon approach to obstacle detection has been described. Robots that know their attitude can learn in a self-supervised way the visual appearance of objects above and below the horizon. The approach is based on the hypothesis that obstacles, defined as objects that intersect with the horizon, will lead to more uncertain classifications than non-obstacles such as a flat ground plane or the sky. A preliminary validation of this hypothesis has been performed with the help of simple visual features, a random forest classifier, and several different data sets.
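
To conclude with the sketch referenced above: the per-pixel feature vector (HSV color plus LBP texture) and the random forest certainty estimate could look as follows. The Laws masks are omitted for brevity, and all names, parameter values, and the use of scikit-image/scikit-learn are assumptions made for illustration.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel features: HSV color concatenated with an LBP texture code
    (radius 3, 24 sampling points), roughly as described above."""
    hsv = rgb2hsv(rgb)                                      # (H, W, 3)
    gray = rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, P=24, R=3, method="uniform")
    return np.dstack([hsv, lbp[..., None]]).reshape(-1, 4)  # (H*W, 4)

# Illustrative training and certainty estimation; `features` and the 0/1
# horizon `labels` would come from the self-supervised ground truth.
# clf = RandomForestClassifier(n_estimators=50).fit(features, labels)
# certainty = clf.predict_proba(features).max(axis=1)  # low => obstacle-like
```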

Some or all aspects of the invention may be suitable for being implemented in the form of software, in particular a computer program product. The computer program product may comprise a computer program stored on a non-transitory computer-readable medium. Also, the computer program may be represented by a signal, such as an optic signal or an electromagnetic signal, carried by a transmission medium such as an optic fiber cable or the air. The computer program may partly or entirely have the form of source code, object code, or pseudo code, suitable for being executed by a computer system. For example, the code may be executable by one or more processors.

The examples and embodiments described herein serve to illustrate rather than limit the invention. The person skilled in the art will be able to design alternative embodiments without departing from the spirit and scope of the present disclosure, as defined by the appended claims and their equivalents. Reference signs placed in parentheses in the claims shall not be interpreted to limit the scope of the claims. Items described as separate entities in the claims or the description may be implemented as a single hardware or software item combining the features of the items described.