

Title:
VEHICLE PARKING AND OBSTACLE AVOIDANCE SYSTEM USING A SINGLE PANORAMIC CAMERA
Document Type and Number:
WIPO Patent Application WO/2011/087354
Kind Code:
A2
Abstract:
The present invention provides for an imaging apparatus, which is used in a novel visual processing device, which provides assistance in parking or obstacle avoidance to the vehicle user.

Inventors:
HON HOCK WOON (MY)
Application Number:
PCT/MY2010/000233
Publication Date:
July 21, 2011
Filing Date:
October 29, 2010
Assignee:
MIMOS BERHAD (MY)
HON HOCK WOON (MY)
Foreign References:
JP2000128031A (2000-05-09)
JP2005217883A (2005-08-11)
US20080133136A1 (2008-06-05)
US20080046150A1 (2008-02-21)
Attorney, Agent or Firm:
SIAW, Yean, Hwa, Timothy (7th Floor Wisma Hamzah-Kwong Hing, No., Leboh Ampang, Kuala Lumpur, MY)
Claims:

1. A vehicle assist system comprising:

at least one imaging device for capturing video signals of an area surrounding a vehicle;

a visual processing device for receiving the captured video signals from the imaging device; and

a visual display device for displaying the processed video signal from the visual processing device,

wherein

the video signals captured by the imaging device are mirror-reflected video signals captured by a single panoramic camera.

2. A system according to claim 1, wherein the visual processing device processes the video signal according to the following steps:

(a) the video signal received from the imaging device is fed into a frame extractor for extracting the video signal into a sequence of frames, and to a display unit for displaying real-time video of an area surrounding the vehicle;

(b) the output of the frame extractor is fed into:

i. optical flow determining means;

ii. background subtraction means;

iii. texture detection means;

iv. edge detection means; and

v. color detection means;

(c) the output of step (b)(i) is fed into a Direction Detector for comparing the direction and velocity of the vehicle with those of one or more obstacles;

(d) the output of step (b)(ii) is fed into a motion tracking means for determining the position of one or more obstacles;

(e) the output of step (d) is fed into a Hough transform for extracting the primitive shapes of the images processed at step (d);

(f) the output of step (b)(iii) is fed into a texture extraction means for determining texture content that is different from the rest of the image;

(g) the output of step (b)(iv) is fed into an edge detection means for determining the edge of one or more obstacles;

(h) the output of step (b)(v) is fed into a color detection means for determining the area with colour content different from that of the overall image; and

(i) the outputs of steps (f), (g) and (h) are fed into a road condition analyser for determining the presence of one or more static objects in the path of the vehicle's motion.

3. A visual processing device according to claim 2, wherein the output of step (c) determines the travel direction of an obstacle.

4. A visual processing device according to claim 2, wherein the output of step (d) determines the presence of a mobile obstacle.

5. A visual processing device according to claim 2, wherein the output of step (i) determines the presence of a static object.

6. A visual processing device according to claim 2, wherein the output of step (e) determines the lane marks.

7. A system according to claim 1, wherein the visual processing device uses the video signal to provide collision and obstacle avoidance means.

8. A system according to claim 1, wherein the visual processing device uses the video signal to provide parking assistant means.

Description:
Vehicle Parking And Obstacle Avoidance System Using A Single Panoramic Camera

Field of Invention

The present invention relates to an automated vehicle assist device. In particular, but not exclusively, the invention relates to a vehicle assist device that assists drivers in avoiding collisions and also when moving in and out of a parking bay.

Background of Invention

In recent years there has been a considerable increase in the number of vehicles on the road at any one time. It is quite common for a driver to encounter various types of obstacles when travelling on the road. The obstacles can take many forms, varying from a pothole to a fallen tree.

Furthermore, with the increase in the number of vehicles, there is a sharp increase in the need for space to park vehicles when not in use. Parking bays are available in many places, such as a supermarket parking area or a paid public parking area. With the increase in the number of vehicles, one has no choice but to face the constant challenges of traffic congestion, scarcity of parking space and an increasing number of vehicle-related accidents.

Most vehicles are equipped with sensors that enable them to detect the presence of an obstacle. However, the driver would not know what the obstacle is, as they are only guided by the sound of the sensor since no visual aid is provided. When the driver wishes to park the vehicle, the sensor is only able to indicate the presence of another vehicle or an obstacle such as a beam. It would not, however, be able to assist the driver in determining whether the vehicle is within the allocated parking space.

Therefore, there arises a need for a vehicle assist system capable of providing both audio and visual aid to the driver in indicating the presence of an obstacle or when parking the vehicle in a parking space.

Summary of Invention

It is an objective of the present invention to provide for a system which provides obstacle and collision avoidance means to a moving vehicle.

It is also an objective of the present invention to provide for a system, which provides assistance for parking a vehicle.

It is also an objective of the present invention to provide for a system, which provides driving assistance to a moving vehicle.

It is also an objective of the present invention to provide for a system which functions in both day and night conditions.

Description of Drawings

Figure 1 Arrangement of device on a car

Figure 2 Example of view modes

Figure 3 View of the visual display unit

Figure 4 Examples of parking types

Figure 5 Overall view of visual processing device

Detailed Description

The present invention will now be detailed with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The invention may, however, be embodied in other forms and should not be construed as being limited to the embodiments discussed herein; rather, these embodiments are provided so that this disclosure will convey the concept of the invention to those skilled in the art. In the following description of the present invention, known methods and functions are omitted as they would already be known to those skilled in the art.

The present invention comprises an imaging device and a visual processing device. The imaging device of the present invention is described in brief below and is described in further detail in co-owned and co-pending Malaysian application no. PI (to be furnished once made available). The imaging device comprises an image capturing apparatus, a wide-angle lens, a detachable reflective means, a detachable illumination means, vertical adjustment means and support means.

The imaging device is mounted detachably on a support means such as a camera holder or a desktop stand.

The entire imaging arrangement as can be seen in Figure 1 is mounted on a vehicle roof.

The video captured by the device is fed into the visual processing device for processing. The panoramic images are also displayed on a visual display unit, which is fixed to a surface inside the vehicle that is easily visible to the driver, such as the dashboard. The panoramic video can be viewed in two different modes. The modes of view are as illustrated in Figure 2.

Figure 2 (a) illustrates the surround panoramic view in a single image. In this mode of viewing, the display shows the view around the car in the sequence of front view, right view, rear view and left view. Figure 2 (b) illustrates the second mode of viewing, where the upper part of the display shows the front view of the car, whereas the bottom part of the display shows the rear view of the vehicle.

The view of the area surrounding the vehicle as displayed on the visual display unit can be interpreted to estimate the distance between an object and the vehicle. Figure 3 is a view of the visual display unit of the present invention. The dotted lines represent the reference lines, which indicate the distance from the center of the vehicle. The bottom-most line (Y axis) indicates the center of the vehicle. The height of the vehicle is preset; therefore, the height of the vehicle and the distance information are calibrated in the initial stage. A few points around the vehicle within the field of view of the device, at varying distances, are collected and then associated with the distance between the reference lines. The reference angle is calculated as the pixel distance from the vertical line divided by the total number of pixels, multiplied by 360 degrees. The system of the present invention can be used as an obstacle and collision avoidance assistant or a parking assistant, both uses of which are explained in detail below.
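By way of illustration only, the calibration and reference-angle calculation described above can be sketched in Python/NumPy as follows. This sketch is not part of the patent disclosure; the sample calibration points, function names and image width are assumptions.

```python
import numpy as np

# Assumed calibration data: pixel offsets from the bottom reference line paired
# with measured real-world distances in centimetres. In practice such points
# would be collected around the vehicle during the initial calibration stage.
CALIBRATION_POINTS = np.array([
    [10, 15.0],    # 10 pixels from the reference line -> 15 cm
    [15, 45.0],
    [25, 90.0],
    [40, 160.0],
])

def pixel_to_distance(pixel_offset: float) -> float:
    """Interpolate a real-world distance (cm) from a pixel offset using the
    calibration points collected at setup time."""
    return float(np.interp(pixel_offset,
                           CALIBRATION_POINTS[:, 0],
                           CALIBRATION_POINTS[:, 1]))

def reference_angle(pixel_distance_from_vertical: int, total_pixels: int) -> float:
    """Reference angle as described in the text: pixel distance from the
    vertical line divided by the total number of pixels, times 360 degrees."""
    return pixel_distance_from_vertical / total_pixels * 360.0

# Example: an object 15 pixels from the reference line and 128 pixels from the
# vertical line in an assumed 1024-pixel-wide panoramic frame.
print(pixel_to_distance(15))        # ~45.0 cm
print(reference_angle(128, 1024))   # 45.0 degrees
```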

Obstacles and Collision Avoidance

When a vehicle is on the move, it may be faced with moving or static obstacles. Examples of moving obstacles are other vehicles or animals, whereas examples of static objects are boulders, trees or other vehicles.

The visual processing device of the present invention serves as the detection and warning indicator in the event that one or more obstacles are detected within the field of view of the imaging device. The visual processing device provides a warning signal if an obstacle comes too close to the vehicle, i.e. comes within the predefined safe distance.

The edges of the ground-level surface, such as the road, are detected using the panoramic video. Edge detection is applied to obstacles that are detected within the field of view. The optical flow of the obstacles is calculated and compared with the optical flow of the vehicle of the present invention to calculate the distance and speed comparison information.

The system is set with a predefined normal distribution curve, which defines the acceptable range of deviation. The distribution curve is built through a machine-learning process, wherein information on various types of obstacles, both mobile and static, at various distances from the vehicle is fed into the machine-learning tool. The distribution curve obtained is then set as the reference curve. Any deviation from the standard deviation curve is considered a danger, and an alarm will be raised. Anything outside the single standard deviation is considered within the safety-driving zone, wherein the single standard deviation here refers to the fixed distance from the edge of the car. When the camera is fixed on top of the car, the view is fixed and the actual physical distance can be calibrated from the image; for example, 10 pixels away from the edge of the car can be equivalent to 15 cm in actual distance, while 15 pixels away from the edge of the car might represent 45 cm away from the car. These values can be calibrated.
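The worked example above (10 pixels to roughly 15 cm, 15 pixels to roughly 45 cm) suggests a simple lookup-and-threshold decision. The following Python sketch is only illustrative; the extra calibration points and the safe-distance threshold are assumptions rather than values from the patent.

```python
import numpy as np

# Pixel offsets from the edge of the car and their calibrated distances (cm).
# The first two pairs follow the worked example in the text; the rest, and the
# safety boundary, are assumed for illustration.
PIXELS = np.array([10, 15, 25, 40])
DISTANCES_CM = np.array([15.0, 45.0, 90.0, 160.0])
SAFE_DISTANCE_CM = 45.0   # assumed fixed distance standing in for the one-standard-deviation boundary

def check_obstacle(pixel_offset: float) -> str:
    """Convert the obstacle's pixel offset into a calibrated distance and
    raise an alarm when it lies inside the safety boundary."""
    distance_cm = float(np.interp(pixel_offset, PIXELS, DISTANCES_CM))
    return "ALARM" if distance_cm < SAFE_DISTANCE_CM else "safety-driving zone"

print(check_obstacle(10))   # ALARM (about 15 cm from the car edge)
print(check_obstacle(25))   # safety-driving zone (about 90 cm)
```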

Parking assistant

In the instances when the driver wishes to park the vehicle in a parking bay, very often the driver is faced with the difficulty of estimating the distance between the vehicles parked in front of or behind the available parking bay, as illustrated in Figure 4 (a), or the vehicles parked on either side of the vehicle, as illustrated in Figure 4 (b).

Therefore, the panoramic view provided by the imaging device of the present invention provides the user the means to detect the presence of another vehicle around the vehicle and also indicates whether the distance between the other vehicle(s) and the user's vehicle is safe. To achieve this, the vehicle's velocity and optical flow are calculated for each movement of the vehicle. The distance between the vehicle and any other vehicle is calculated in real time. The minimum distance is monitored constantly so that warning signs can be issued if the distance falls below the minimum distance predefined by the user. Furthermore, the system also assists the user to ensure that the car is parked within the defined parking lines. It also assists the user to determine if the parking space is sufficient to fit the vehicle. Edge detection is applied to detect the parking lines. Upon detecting the parallel parking lines, the system also determines the centerline of the parking bay. This is used to ensure that the center of the vehicle is aligned with the center of the parking bay. The system then calculates the width of the parking bay to determine if the vehicle will fit within the area between the parallel parking lines. As the driver moves the vehicle into the parking bay, the system issues a warning signal if the tires of the vehicle are placed over at least one of the parking lines.
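A possible realisation of the parking-line step is sketched below, using Canny edge detection and a probabilistic Hough transform as stand-ins for the edge detection described above. The thresholds and the assumed vehicle width in pixels are illustrative, not values from the patent.

```python
import cv2
import numpy as np

VEHICLE_WIDTH_PX = 220   # assumed vehicle width in image pixels

def find_parking_bay(frame_bgr):
    """Detect near-vertical parking lines, then return the bay centreline,
    width in pixels and whether the (assumed) vehicle width fits."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    # Keep roughly vertical segments and record their average x position.
    xs = []
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(x2 - x1) < abs(y2 - y1):
            xs.append((x1 + x2) / 2.0)
    if len(xs) < 2:
        return None
    left, right = min(xs), max(xs)
    width = right - left
    return {
        "centreline_x": (left + right) / 2.0,   # centre of the parking bay
        "bay_width_px": width,
        "vehicle_fits": width > VEHICLE_WIDTH_PX,
    }
```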

The overall architecture of the visual processing device is as illustrated in Figure 5. The visual processing device as claimed in the present invention will now be discussed in detail with reference to Figure 5.

The video obtained from the imaging device is fed into a frame extractor. The frame extractor extracts the video into a sequence of images/frames so that they can be processed (a minimal sketch of this stage is given after the list below). The output of the frame extractor is fed into five different blocks, i.e.

a. Optical flow block

b. Background subtraction block

c. Edge detector block

d. Texture detector block; and

e. Color detector block.
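A minimal sketch of the frame extraction stage, assuming an OpenCV-readable panoramic video source, is given here; the source path and generator structure are illustrative only.

```python
import cv2

def extract_frames(source="panorama.mp4"):
    """Yield individual frames from the panoramic video so that the five
    downstream blocks (optical flow, background subtraction, edge, texture,
    colour) can each process the same sequence of images."""
    capture = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame
    finally:
        capture.release()

# for frame in extract_frames():
#     pass  # feed 'frame' to the five processing blocks
```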

The output of the optical flow block is fed into the Direction Detector, where the direction and velocity of the obstacle are compared with the direction and velocity of the vehicle. In order to achieve this, the direction traveled by the vehicle is determined, then the angle and the vector of the two inputs, i.e. the vehicle and the obstacle, are taken into consideration in the compensation calculation. The compensation calculation here is a mathematical computation to determine the direction of the obstacle relative to the vehicle by using the velocity information of the vehicle and of the object. The output of the Direction Detector is fed into the relative velocity flow compensation block. The output of this block is D1, which is the summary of the travel direction of the object in the field of view.
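As a rough illustration of the direction detector, the sketch below compares the mean dense optical flow between two frames with an assumed vehicle motion vector; Farneback optical flow is used here as a stand-in for the unspecified optical flow means, and the angle returned is only a crude relative-direction estimate.

```python
import cv2
import numpy as np

def relative_direction(prev_gray, curr_gray, vehicle_vector):
    """Return the angle (degrees) between the mean scene flow and the
    vehicle's motion vector (an assumed input, e.g. from odometry)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)   # average (dx, dy) over the frame
    v = np.asarray(vehicle_vector, dtype=float)
    cos_angle = np.dot(mean_flow, v) / (
        np.linalg.norm(mean_flow) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```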

The output of the background subtraction block is fed into the motion tracking block and also the Hough transform block. The output of the motion-tracking block is then used to determine the co-ordinates of a moving obstacle. This information is available at D2. D1, D2 and D3 are output ports that provide the first-level output from the system for decision making. In the present invention the Hough transform is used to extract the primitive shapes of the image, e.g. boxes, lines etc. It provides information to B4 to determine if the information detected is a lane mark. The output of the Hough transform block is used to determine the position of the lane marks on the road. D4 provides the final lane mark information. The output of the frame extractor is fed into the edge detector block, where the edge information of any obstacle is extracted.
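The following sketch illustrates one possible form of the background subtraction, motion tracking and Hough transform chain, using OpenCV's MOG2 subtractor and probabilistic Hough transform as stand-ins; the parameters and the minimum-area threshold are assumptions.

```python
import cv2
import numpy as np

# Assumed background subtractor parameters.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def moving_obstacle_coordinates(frame_bgr, min_area=400):
    """Return centroid co-ordinates of moving regions (a D2-style output)."""
    mask = subtractor.apply(frame_bgr)
    mask = cv2.medianBlur(mask, 5)
    # [-2] keeps this working on both OpenCV 3.x and 4.x return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))
    return centroids

def lane_mark_candidates(frame_bgr):
    """Extract straight-line primitives with a Hough transform as lane-mark
    candidates (a D4-style output)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=60, maxLineGap=15)
    return [] if lines is None else [tuple(l) for l in lines[:, 0]]
```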

In the texture detector block the video signal obtained from the frame extractor is used to extract the texture content that is different from the rest of the image, i.e. singling out the outlier. In the present invention an outlier is an object that does not belong to the same group as the majority of objects. For example, when viewing a road, the texture of the road, which is a feature of the majority of the scene, will be similar wherever the road is viewed. But if there is an object, the texture of the object will differ from the texture of the road; the texture of the object is therefore the outlier. One way to differentiate how similar an object is to the others is to plot the distribution of all the objects collected. In this case, most of the road texture will fall within N standard deviations of the normal distribution, where N is set as 2 for the purpose of the current discussion. The decision is that any object whose similarity falls within two standard deviations under the normal distribution curve will be considered as road.
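The two-standard-deviation rule above can be illustrated with a simple patch-based sketch in which local variance stands in for the texture descriptor; the patch size and the choice of descriptor are assumptions, not the patent's own method.

```python
import numpy as np

def texture_outlier_mask(gray, patch=16, n_std=2.0):
    """Return a boolean grid marking patches whose texture deviates from the
    dominant (road) texture by more than n_std standard deviations."""
    h, w = gray.shape
    rows, cols = h // patch, w // patch
    scores = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = gray[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            scores[r, c] = block.var()          # crude texture descriptor
    mean, std = scores.mean(), scores.std()
    return np.abs(scores - mean) > n_std * std  # True = texture outlier
```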

In the color detector block the video signal obtained from the frame extractor is used to extract the area whose color content differs from the rest of the image.

The road condition analyzer block receives inputs from the texture detector block, color detector block and edge detection block. In this block the received inputs are combined and analyzed to determine if a static object is present in the line of movement of the vehicle.
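One way the combination could be realised is a simple vote over the three cue masks inside an assumed path region, as sketched below; the voting rule and the path region are assumptions, not the patent's exact logic.

```python
import numpy as np

def static_object_present(texture_mask, colour_mask, edge_mask,
                          path_region, min_votes=2):
    """Flag a static object when at least `min_votes` of the three cues fire
    inside the vehicle's path region (all inputs are boolean arrays of the
    same shape)."""
    votes = (texture_mask.astype(int) +
             colour_mask.astype(int) +
             edge_mask.astype(int))
    return bool(np.any((votes >= min_votes) & path_region))
```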

While a limited number of embodiments of the invention have been described, these embodiments are not intended to limit the scope of the invention as otherwise described and claimed herein. Those of ordinary skill in the art will recognize that variations and modifications from the described embodiments exist. Moreover, unless otherwise specified, the steps of the methods described herein are not limited to any particular order or sequence.