


Title:
A METHOD FOR VEHICLE RECOGNITION, MEASUREMENT OF RELATIVE SPEED AND DISTANCE WITH A SINGLE CAMERA
Document Type and Number:
WIPO Patent Application WO/2015/147764
Kind Code:
A1
Abstract:
The invention relates to a method that recognizes other vehicles driving ahead of the host vehicle and estimates the relative speed and relative distance of the closest vehicle ahead, using two-dimensional digital images captured in traffic by a single camera fixed to the vehicle's console.

Inventors:
KISA MUSTAFA (TR)
BOTSALI FATIH MEHMET (TR)
Application Number:
PCT/TR2015/000070
Publication Date:
October 01, 2015
Filing Date:
February 25, 2015
Assignee:
KISA MUSTAFA (TR)
BOTSALI FATIH MEHMET (TR)
International Classes:
G06V10/25
Domestic Patent References:
WO2010007392A1 (2010-01-21)
Foreign References:
US20120269398A1 (2012-10-25)
EP2416115A1 (2012-02-08)
DE102011055441A1 (2013-05-23)
JPH1096626A (1998-04-14)
CN101105893A (2008-01-16)
Other References:
E. BAS, THESIS STUDY, 2007
J.Y. CHANG; C.W. CHO, INTELLIGENT TRANSPORTATION SYSTEM (ITS), 2006
K. FURUKAWA; R. OKADA; TANGUCHI; K. ONOGUCHI, INNER CAR INTEGRATED OBSERVATION SYSTEM, 2004
MOON, USA CALIFORNIA JET PROPULSION LABORATORY, 1964
Attorney, Agent or Firm:
AKKAS, Ahmet (No:F-212, Konya, TR)
Claims:
C L A I M S

1. A method for vehicle recognition and relative speed and distance measurement with a single camera, comprising the following steps:

- Placing the camera that will capture the road images on the vehicle's front console,

- Transforming the acquired image into grayscale,

- Applying a Gaussian filter to the captured road image,

- Performing edge detection on the transformed image by applying thresholding and the Canny edge detection algorithm,

- Detecting candidate objects in the image that may be vehicles, using the horizontal and vertical lines obtained by applying the Hough algorithm,

- Framing the objects detected as vehicles on the screen, via the RDSE (Recognition, Distance and Speed Estimation) algorithm running in real time on the in-vehicle computer, to inform the driver.

2. The Recognition, Distance and Speed Estimation algorithm mentioned in Claim 1, characterized in that it calculates the distance to the vehicles ahead by taking the license plate height of the vehicle ahead as reference.

3. The method for vehicle recognition and relative speed and distance measurement with a single camera, characterized in that it calculates the relative speed between the host vehicle and the vehicle ahead by taking the license plate height of the vehicle ahead as reference in consecutive images taken by the camera at certain time intervals.
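The preprocessing chain of Claim 1 can be sketched in a few lines of numpy. This is a minimal illustration only: a Sobel-style gradient threshold stands in for the full Canny detector, and the Hough line extraction that would follow is omitted; function names and parameter values are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel(size=5, sigma=1.0):
    """Separable 1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Blur by convolving each row, then each column, with the 1-D kernel."""
    k = gaussian_kernel(size, sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)

def edge_map(img, thresh=50.0):
    """Binary edge map from thresholded gradient magnitude
    (a lightweight stand-in for the Canny step in the claim)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical central differences
    return np.hypot(gx, gy) > thresh
```

In the claimed pipeline, the resulting edge map would then feed a Hough transform to extract the horizontal and vertical lines used to hypothesize vehicle candidates.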

Description:
DESCRIPTION

A METHOD FOR VEHICLE RECOGNITION, MEASUREMENT OF RELATIVE

SPEED AND DISTANCE WITH A SINGLE CAMERA TECHNICAL FIELD

The invention relates to a method that recognizes other vehicles driving ahead of the host vehicle and estimates the relative speed and relative distance of the closest vehicle ahead, using two-dimensional digital images captured in traffic by a single camera fixed to the vehicle's console. The vehicle detection algorithm is based on detecting license plates in the image. The relative distance determination algorithm compares the height of the license plate in the image with its height in calibration images previously taken from known distances. The relative velocity determination algorithm determines the change in license plate height across consecutive images taken at certain intervals.
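Under an assumed pinhole-camera model, the apparent plate height scales inversely with distance, so the calibration described above reduces to a single scale constant. The following sketch illustrates that reading of the method; the function names and all numbers are illustrative assumptions.

```python
def fit_scale(calibration):
    """Average the constant k = h_px * d over calibration shots.
    calibration: list of (plate_height_px, known_distance_m) pairs."""
    ks = [h * d for h, d in calibration]
    return sum(ks) / len(ks)

def distance_m(plate_height_px, k):
    """Pinhole assumption: distance is inversely proportional to pixel height."""
    return k / plate_height_px

def relative_speed_mps(h1_px, h2_px, k, dt_s):
    """Relative speed from the plate-height change across two frames dt_s apart.
    Positive means the lead vehicle is pulling away."""
    return (distance_m(h2_px, k) - distance_m(h1_px, k)) / dt_s
```

For example, calibration shots giving 50 px at 10 m and 25 px at 20 m yield k = 500, so a 20 px plate implies a 25 m gap; if the plate shrinks from 25 px to 20 px in one second, the gap grew by 5 m in that second.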

PRIOR ART

Safety measures in today's vehicles are becoming more elaborate by the day, with the goal of warning drivers instantly about dangers and potential accidents while driving. Accordingly, it is predicted that future vehicles will commonly include active safety features such as detecting the vehicles driving ahead, warning the driver according to their distance and speed, or controlling the vehicle's speed. Yet there is no common standard in the automotive industry for detecting vehicles in traffic. For that reason, research on detecting vehicles in traffic is still pursued intensively, and many researchers study detection of vehicles in traffic using camera images.

There are various implementations using fixed cameras, moving cameras, stereo cameras and single cameras for recognizing vehicles in traffic and determining the relative speed and distance of the vehicle driving ahead with image processing techniques. The literature contains studies that estimate the relative speed of the vehicle ahead using moving stereo cameras, but no study that estimates speed using images from a single camera.

Studies that recognize vehicles in traffic using image processing techniques generally perform vehicle recognition and tracking. To estimate the distance from the camera to the vehicle ahead with image processing methods, constant measurements such as the length of road lines, line width, vehicle sizes, the vehicles' track width and the distance between two headlights are taken as reference, or objects of known dimensions in images taken while the vehicle is stationary are used.

In some studies, reverse projection is used to obtain a three-dimensional image from a two-dimensional one. Most methods that estimate the distance from the camera to the vehicle ahead by image processing can only use images taken in daylight or under sufficient lighting. Some methods cannot operate if side/road lanes are missing or corrupted, some require a very large image database for vehicle recognition, and some cannot operate if the vehicle casts no shadow on the road. Apart from image processing methods, sonar, radar and laser implementations are used for estimating the distance to the vehicle ahead.

In plate recognition systems, real-time recognition can be performed with fixed cameras or with cameras mounted on the vehicle; it can also be performed on pre-recorded video or photographic images. Most studies use images captured with fixed cameras. Although different approaches are used, the common feature of all plate recognition systems is first detecting the plate's position and then identifying its content. The most important difficulties in the recognition process are changes in the camera's viewpoint and unclear or obscured plate images (e.g., because of snow, mud or worn plate content).

The main purpose of lane detection systems is determining the borders of the road for safe driving and preventing the vehicle from being exposed to danger by drifting out of its lane involuntarily; for that purpose the system warns the driver or engages active safety measures. In the developed algorithms, lane borders are mainly determined using road lines, after which the relative position of the vehicle is determined.

The purpose of studies on vehicle detection, tracking and recognition with a fixed camera is to process the images, determine traffic density and violations, and provide feedback to the traffic signalization system on high-density roads and crossroads for traffic safety. Images are captured with single or multiple cameras placed at fixed points, and target images are obtained by removing the background scene from the real-time captured images. Some studies also use manufacturer recognition, model recognition, plate recognition and similar methods. Vehicle recognition studies with a fixed camera have achieved 88.5%-92% success.

Stereo camera images are generally used for obstacle recognition and distance estimation. In these studies, images are captured with a dual camera mounted on the vehicle console to provide stereo vision. The distance between the cameras and their focal lengths are the main factors in the success of the process. Distance estimation is performed by triangulation on the images captured with the stereo camera. One of the most important challenges in this implementation is removing the parts of the image outside the highway. Another challenge in real-time stereo implementations is the need to recalibrate in order to determine image scale whenever the camera position or angle changes; calibrating the camera is time-consuming and must be repeated frequently. Processing stereo images in real time also naturally lowers the processing speed. The success ratio in implementations using stereo cameras has reached 90%.
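The triangulation mentioned above follows the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the horizontal disparity of a matched point. A minimal sketch (the function name and the numbers in the example are illustrative):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Triangulated depth from a rectified stereo pair: Z = f * B / d.
    disparity_px is the horizontal pixel offset of a matched point
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or matching failure")
    return focal_px * baseline_m / disparity_px
```

This relation also explains the text's observation that the camera spacing and focal lengths dominate accuracy: for a given depth, a wider baseline or longer focal length produces a larger, more measurable disparity.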

In vehicle detection, tracking and recognition studies with a single moving camera, image processing systems are designed for active safety systems such as driver assistance, active speed control and lane departure warning; for recognizing vehicles in front, beside or behind; for recognizing vehicles and obstacles so that drivers notice potential dangers; and for recognizing the figures on traffic signs and inferring the traffic rules from the gathered data to increase traffic safety. These studies use the histogram analysis method, genetic algorithms, the ADAS algorithm, the AdaBoost learning algorithm, Gabor filters, the LSI algorithm, the Hough algorithm and the Hausdorff algorithm for image processing.

In studies where the camera is mounted on the vehicle, stationary objects appear to be in motion because the camera itself is moving, so the scene image changes constantly. This rules out the background removal procedure generally used with fixed cameras, and researchers therefore seek different methods for detecting moving objects. In the literature, images of vehicles from different angles, vehicle shadows on the road, images of vehicle tires, manufacturer logos and paint colors produced specifically for vehicles are used in detection and classification. J.Y. Chang and C.W. Cho (2006) offer a scene segmentation algorithm that recognizes objects, but to apply it the vehicle must be on an asphalt road, the images must be captured in daylight, the car's shadow must be visible on the road, and the vehicles to be detected must have a width/height ratio that changes periodically. Techniques implemented include removing the image regions outside the edge lines to detect potential vehicles, subtracting consecutive frames from each other (treating the previous frame as the scene for the next), converting the image to grayscale for thresholding, and narrowing the camera's field of view to block the capture of unwanted images. Vehicle recognition studies have reached 85%-95% accuracy.
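The consecutive-frame subtraction technique described above (treating the previous frame as the scene for the next) can be sketched as a simple thresholded absolute difference; the function name and threshold are illustrative assumptions.

```python
import numpy as np

def motion_mask(prev_frame, next_frame, thresh=25):
    """Frame differencing: keep only pixels whose intensity changed by
    more than thresh between the 'scene' (previous) frame and the next."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh
```

As the text notes, this only isolates moving objects cleanly when the camera is fixed; with a moving camera the whole scene shifts and nearly every pixel changes, which is why such methods break down on vehicle-mounted cameras.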

Very few of the studies in the literature on detecting and tracking the vehicles ahead have estimated the distance to the vehicle ahead from images taken by a single camera mounted on the vehicle. In estimating the distance between the vehicle ahead and the camera with image processing methods, constant dimensions such as the lengths of road lines, vehicle sizes, the width of the vehicle's projection and the distance between the vehicle's headlights are used; objects of known size in images captured while the vehicle is stationary are also taken as a basis.

Apart from image processing methods, sonar, radar and laser implementations are used in estimating the distance to the vehicle ahead.

1. Plate Recognition Systems

J.K. Chang, S. Ryoo and H. Lim (2013) introduced a real-time license plate recognition method, using images captured by a camera mounted on the vehicle, for tracking vehicles in traffic. The method, named "Real-time License Plate Recognition" (RLPR), targets detection of license plates in real time: first the license plate to be recognized is selected, then the selected plate is located, and vehicle tracking is performed by reading the plate.

G.S. Hsu, J.C. Chen and Y.Z. Chung (2013) introduced a license plate recognition system for three different purposes: access control in vehicle recognition applications, traffic law enforcement, and road patrolling. In that study, for each application, the relation between the image and the relevant variables (pan angle, tilt angle, width/height ratio, distance, average light value and projection gradient) is described over a characteristic range, and an application-oriented solution is suggested whose parameters are arranged according to the targeted application. A new classifier is used, together with a character segmentation based on MSER (Maximally Stable Extremal Regions) and a class consisting of empty (badly segmented) elements containing no character. To validate the proposed method, a vehicle license plate database of 2049 images taken from different angles (Application Oriented License Plate, AOLP) is offered to researchers for testing and benchmarking. The application-oriented method is stated to be more successful than the other methods. S. Lee, J. Gwak and M. Jeon (2013) developed an algorithm that recognizes license plates from video images. Recognition becomes harder in video images captured even at narrow angles; to overcome this, the algorithm uses image rectification based on symmetry. Although the proposed method gives good results, the authors state that the algorithm fails in some situations and make suggestions for overcoming the problem.

A. Mousa (2012) proposed a method for recognizing vehicle license plates under different environmental and meteorological conditions. In the proposed method, the images containing the plate are resized and converted to grayscale before filters are applied; edges are detected using the Canny edge detection algorithm and a Gaussian filter; and small objects that are not characters are removed by filtration.

2. Lane Control Systems

M.B. Paula and C.R. Jung (2013) proposed a real-time lane recognition and warning system; 78.07% success is reported under variable light conditions.

M. Caner Kurtul (2010), in his thesis, performed lane recognition and sign analysis simultaneously using the Hough algorithm. The study achieves high success in analyzing circular and triangular signs but not other shapes; overcast weather or insufficient light lowers the success of the method.

W. Zhu and colleagues (2008) proposed a study in which straight and curved lanes are detected and tracked in daylight and overcast weather with a lane detection and lane tracking algorithm. The performance of the proposed method is degraded in overcast weather.

Xiangjing An, Wu and Hangen He (2006) developed a system that warns of lane departure using a single camera mounted on the vehicle, producing warnings to prevent the vehicle from leaving its lane involuntarily. Shih-Shinh Huang and colleagues (2004) detect the road lines that bound the lane the vehicle drives in and detect the presence of the vehicle ahead within the bounded area; the stated success ratio of vehicle recognition is between 92% and 99%.

M. Bellino and colleagues (2005) performed lane detection and tracking using images captured with a camera within the scope of the SPARC project. The performance of the developed method degrades in overcast weather and insufficient light.

3. Vehicle Detection with Fixed Camera

M.F. Hashmi and A.G. Keskar (2012) proposed traffic flow tracking and traffic analysis based on processing images captured with a fixed camera. The method was developed for making statistical estimations at crossroads with high traffic density; its vehicle detection success ratio reaches 92%.

A.A. Alvarez and colleagues (2012) developed an image-processing-based method that can determine traffic density from images captured with a fixed camera.

With the method developed by S. Sivaraman and M.M. Trivedi (2012), vehicles at crossroads are detected by processing images captured with fixed cameras placed at the crossroads. Vehicle recognition success is between 88.5% and 90.2%.

C.C. Chiu, M.Y. Ku and C.Y. Wang (2010) proposed a method in which vehicle detection and tracking is performed with a fixed camera using scene removal. In the study, recognition relies on the width and height of the recognized vehicles, based on pixel counts at a constant distance.

Bing-Fei Wu and colleagues (2005) proposed an image processing method that provides real-time recognition of multiple cars; it detects and tracks vehicles using scene description with a lane mask.

4. Vehicle Detection and Distance Estimation by Stereo Camera Images

Y. Sung and colleagues (2012) proposed a method for detecting obstacles on and around the road and estimating the distance between the vehicle and the obstacle using stereo vision. The proposed method achieves more than 90% success in vehicle recognition.

D.F. Llorca and colleagues (2010) proposed a stereo-vision-based method for detecting pedestrians on highways. The "RTK-DGPS" algorithm is used in the proposed method, which is stated to give reliable results at 30 km/h, with reliability decreasing at higher speeds.

B. Kormann, A. Neve, G. Klinker and W. Stechele (2010) proposed a stereo-vision-based method using a third-degree model, in which vehicles and objects are detected on the highway.

K. Huh and colleagues (2008) proposed a method in which obstacles are recognized and the distance between the obstacle and the moving vehicle is estimated from images captured with a camera mounted on the vehicle. In the experiments, obstacle distance is estimated with 5% error up to 45 meters; beyond 70 meters the method makes serious errors in distance estimation.

G. Toulminet and colleagues (2006) proposed a vehicle detection method that uses stereo camera images from the moving vehicle, based on obstacle removal and monocular model analysis, with a stereo-image-based algorithm for vehicle detection and distance calculation. The system works in two steps. In step one, the 3-D features of the image are acquired by a stereo-vision-based algorithm and examined with an algorithm developed for detecting vertical objects belonging neither to the road nor to the background; horizontal and vertical edges are analyzed and the gray-level image is examined. The 3-D features are obtained from the analysis results and a symmetry operator. While driving, only one car is detected, and distance is reported as an interval with two-meter resolution.

M. Bertozzi and colleagues (2000) proposed a vehicle and pedestrian detection method based on image processing using stereo camera images; the distance to the vehicles ahead is estimated from the stereo images. In the study, vehicle shadows and images of non-vehicle objects reduced the success of the method.

Nedevschi and colleagues (2004) determined the distance to vehicles driving on the highway by processing stereo images captured with a high-resolution camera mounted on the vehicle; non-vehicle objects are also detected in the study. The purpose of the project, named "Advanced Driver Assistance Systems" (ADAS), is to prevent traffic accidents by recognizing the vehicles ahead.

5. Vehicle Detection from Images Captured by a Single Moving Camera

X. Li and X. Guo (2013) proposed an image-based method for recognizing the vehicles ahead from images captured with a single camera mounted on the vehicle. To detect vehicles in daylight, the image of the vehicle's shadow on the road is segmented with the histogram analysis method. Candidates are first detected by combining the vertical and horizontal edge lines and are then confirmed by a vehicle classifier that uses the gradient histogram and a support vector machine. To improve system performance, a Kalman filter is used to track the detected vehicles. The results showed that the proposed method adapts to different daylight conditions; in normal daylight the system detects correctly with 95.78% probability and errs with 1.97% probability.
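A one-dimensional constant-velocity Kalman filter of the kind used to smooth and track such detections can be sketched as follows. This is a generic textbook filter, not the authors' exact formulation; the state is [position, velocity] and only position is measured, with illustrative noise parameters.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Filter a sequence of 1-D position measurements with a
    constant-velocity model; returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we only measure position
    Q = q * np.eye(2)                        # process noise (assumed)
    R = np.array([[r]])                      # measurement noise (assumed)
    x = np.array([[measurements[0]], [0.0]]) # start at first measurement
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                                        # predict state
        P = F @ P @ F.T + Q                              # predict covariance
        y = np.array([[z]]) - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x = x + K @ y                                    # update state
        P = (np.eye(2) - K @ H) @ P                      # update covariance
        out.append(float(x[0, 0]))
    return out
```

Fed a steadily moving target, the estimate lags at first (the velocity is unknown) and then converges to the measured track, which is exactly the smoothing behavior a detector-plus-tracker pipeline relies on.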

C.F. Wu and colleagues (2013) developed a driver assistance system based on image processing. They placed a camera on the vehicle's left rear-view mirror and performed detection and distance estimation of vehicles approaching from behind. The captured image is converted to grayscale and the Lane-Based Transformation method is applied. A size examination is then applied to the candidate vehicle on the left side of the lane, checking its width, height and duration in the image; candidates are accepted or rejected as vehicles on that basis. The candidate detection method is a modified version of Mei's method; the developed method examines the angular area by mapping it onto a rectangular area. In that study, the RFNFN artificial intelligence algorithm was developed to make distance estimations by relating pixel values in the image to real distances. To train it, real distance values were entered together with images captured at distances from two meters up to thirty meters, in two-meter increments, between the candidate vehicle and the camera vehicle; these values are taken as reference, and distance estimation is done from the pixel values. In the experiments, estimation success decreases at short distances and increases as the distance grows.
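As an assumed, simplified stand-in for the RFNFN network, the same two-meter calibration table can be consumed with plain linear interpolation between calibration points. Everything here is illustrative: the 1/distance pixel-height model and the scale constant 1000 are invented for the sketch, and a learned network would replace `estimate_distance`.

```python
import numpy as np

def build_table():
    """Calibration table mirroring the study's setup: one measurement per
    known distance from 2 m to 30 m in 2 m steps. The pixel measurement is
    modeled here as 1000 / distance (an illustrative assumption)."""
    distances = np.arange(2.0, 32.0, 2.0)         # 2, 4, ..., 30 m
    pixel_heights = 1000.0 / distances
    # np.interp needs ascending x, so sort by pixel height
    return pixel_heights[::-1], distances[::-1]

def estimate_distance(pixel_height):
    """Linearly interpolate the calibration table at the observed pixel value."""
    px, d = build_table()
    return float(np.interp(pixel_height, px, d))
```

Like the RFNFN results reported above, such a table-based estimator is least reliable where the calibration curve is steepest relative to pixel noise, though for a 1/d model that is at long range rather than short.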

In the study by K. Kaplan, C. Kurtul and H.L. Akin (2012), a real-time traffic sign detection, tracking and classification method is proposed. To improve the different stages of sign detection and classification, an affine transformation matrix is injected into the genetic algorithm method, making the method immune to rotation of the traffic signs. Two different algorithms are used for classifying the signs: an artificial neural network and a support vector machine (SVM). The proposed method is stated to be one level above comparable methods in processing time. In the study, only circular and triangular signs can be recognized, with 87%-95% accuracy.

Y.C. Kuo, N.S. Pai and Y.F. Li (2011) performed lane detection, vehicle recognition and distance estimation by capturing images with a CMOS camera in RGB mode, using the Refined Vehicle Detection algorithm and a Sobel filter. To limit the processed image to the road, the lane lines must be detected; the right and left lane boundary lines were detected with the Sobel filter and also taken as the limits of the study. No detection was attempted outside the lane range, and candidate vehicles were assumed to lie within it. To remove noise, the crest line of a vehicle entering the image was taken as the upper limit, and the lateral line five meters in front of the camera vehicle as the lower limit; other effects outside the highway were disregarded. If the car's tire tracks (the vehicle's shadow) were visible in the image, a vehicle's presence was accepted. Taking those tracks as the lower limit, recognition was performed by framing the vehicle with horizontal and vertical lines via the Refined Vehicle Detection algorithm. The candidate car's horizontal length is calculated by subtracting the vertical corner limits from each other, and its height by multiplying the horizontal length by 0.8. The location of the tire tracks (the vehicle's shadow) and the camera parameters were used to estimate the longitudinal distance to the vehicle ahead.
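The framing rule just described (width taken from the vertical edge limits, height fixed at 0.8 times the width, anchored at the tire-track line) can be sketched directly; the function name and coordinate convention (y grows downward, as in image coordinates) are assumptions.

```python
def candidate_box(left_x, right_x, bottom_y):
    """Frame a candidate vehicle per the described rule: width from the
    vertical edge limits, height = 0.8 * width, anchored at the
    tire-track (shadow) line at bottom_y. Returns (x0, y0, x1, y1)."""
    width = right_x - left_x
    height = 0.8 * width
    return (left_x, bottom_y - height, right_x, bottom_y)
```

For a candidate with vertical edges at x = 100 and x = 200 and a shadow line at y = 400, this yields a 100 x 80 frame whose top edge sits at y = 320.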

In a study by H. Pazhoumand-dar and M. Yaghoobi (2013), a method is proposed for detecting, recognizing and tracking traffic signs. Thresholding is used for detection, the detected signs are classified by shape, and signs in consecutive frames are tracked according to a new similarity condition. Candidate signs that pass the similarity criteria are tracked and their classification features are listed; a support vector machine with a kernel function makes the final decision. Traffic sign detection reaches 96% accuracy, but the detection rate drops noticeably in low light and darkness.

Y. Kanzawa, H. Kobayashi, T. Ohkawa and T. Ito (2010) obtained high-resolution images, by signal processing, from low-resolution images of a distant vehicle taken with a single camera. The method performs a new high-resolution reconstruction of moving images based on frame composition, using moving images taken with a single camera mounted on the vehicle; a high-resolution image is obtained from many low-resolution images by signal processing methods. The results of the proposed method are confirmed against vehicle edge detection results, and real road situations are considered in the study.

A. Psyllos, C.N. Anagnostopoulos and E. Kayafas (2011) proposed a method for identifying a vehicle's manufacturer and model from images taken with a fixed camera. The method has eight modules: "License Plate Recognition", "Vehicle Front View Segmentation", "Color Recognition, Phase Similarity Calculation", "Vehicle Logo Segmentation", "Probabilistic Neural Network Recognizing the Manufacturer", "Vehicle SIFT (Scale Invariant Feature Transform)", "Fingerprint Calculation" and "Probabilistic Neural Network Recognizing the Car Model". The study achieved 85% accuracy, using artificial neural networks to identify the manufacturer and model of the car. RGB histogram analysis was used to strengthen trademark identification, and 90% accuracy was achieved in recognizing car color. Under different lighting conditions, 89.1% accuracy was achieved in identifying the license content and 96.5% in recognizing license plates.

M.Y. Fu and Y.S. Huang (2010) developed an Advanced Driving Assistance System (ADAS) method that helps drivers maintain road control and notice potential dangers by recognizing traffic signs and the rules they convey. The method, developed to increase traffic safety, uses image processing and artificial intelligence techniques: traffic signs are identified by their shapes and colors, the content of each sign is identified with the developed algorithm, and template matching, artificial neural networks and a support vector machine are used for classification.

D. Gao, W. Li, J. Duan and B. Zheng (2009) proposed a method that scans the road, detects vehicles and estimates distance. The captured images are processed with Sobel and kernel operators, and the Hough transformation and Susan algorithm are used to make the lane lines appear clearly. They developed an Improved Sobel Operator, modifying the Sobel operator to process the image according to the characteristics of the road lines and vehicles; with it, the edge lines of candidate cars and the lane lines appear clearly. To strengthen the algorithm in the preprocessing stage, unwanted content is cleaned out with an adaptive double-threshold method, making the approach applicable to different lighting and road situations. The vehicle's tire track and shadow are used to detect candidate vehicles; the lower boundary line used for distance estimation is extracted from the vehicle's shadow and tire-track boundaries by the Susan algorithm, and the vehicle is framed by connecting the lower boundary and the vertical edge lines. Distance estimation is performed by mapping the two-dimensional model of the vehicle, formed from the camera's focal distance and its height above the ground in the mounted vehicle, to the three-dimensional display space. The calculation of areas outside the lane, and of parts containing no vehicle track, could not be done in that study.

G.Y. Song, K.Y. Lee and J.W. Lee (2008) developed a method to detect the vehicles ahead and behind, comprising edge-based candidate detection and image-based candidate classification. In the edge-based candidate detection step, objects with candidate features are detected using vertical and horizontal lines and symmetry factors. In the second phase, the AdaBoost learning algorithm, applied to the captured images, decides whether the candidates are vehicles. In the candidate assignment process, the vehicle's upper and lower edges are found by detecting the left and right vertical edge lines in the image and then the contact point between the vehicle's tires and the road.

A. Koncar, H. Janßen and S. Halgamuge (2007) developed a hierarchical classifier for recognizing traffic signs. The study emphasizes that hierarchical classifiers have an important advantage over single-stage classifiers in classification accuracy and in the complexity of the classification features. The proposed method uses features gathered by Gabor wavelets to build similarity maps, dividing the classification space into smaller and more distinctive sets. It was stated that the proposed method gave better results than the k-means algorithm in detecting traffic signs.

In his 2007 thesis, E. Bas developed a traffic analysis system on two different video bases: a traffic observation application with a fixed camera, and a safety system that warns the driver using images from a camera mounted on the vehicle. In the fixed-camera traffic observation system, a Gaussian Mixture Model (GMM) based background removal method is used for detecting and tracking vehicles; to reduce computational complexity, an automatically subtracted road mask is used, and a new overlap algorithm using vehicle sizes is generated for vehicle tracking and counting. For the driver warning system, localization of the vehicle within the road lane, distance estimation between vehicles and a two-step, feature-based lane sign detection are performed. Two modes are defined depending on the tracking results: in-lane mode and passing mode. In in-lane mode, distance is estimated by comparing the pixels between the lanes in the image plane with the pixels of the line drawn by joining the vehicle's corners; with the developed algorithm, the estimation is stated to have a 1.25% error margin. The method, which relies on road lines, cannot give reliable results if the road lines are erased or the distance between the lines varies.

J. Y. Chang and C. W. Cho (2006) carried out the study named Intelligent Transportation System (ITS), which detects the vehicle closest to the camera-equipped vehicle and estimates its distance. In the study, they developed an image processing algorithm based on Adaptive Resonance Theory (ART), in which a fuzzy rule base was used for classification. In that algorithm, the captured image was divided into segments to detect the candidate vehicle; the highway, the sky and the scenery between them were segmented separately. Image segmentation was based on the neighborhood structure of the pixels in the image and on the pixel values. The first rule for detecting the presence of a candidate vehicle is accepting that the vehicle is on the highway. The second rule is that the vehicle must have a shadow. It was accepted that the height and the width of the vehicle lie between constant values. The shadow of the candidate vehicle was taken as its bottom edge line and was combined with the vertical lines for detection of the vehicle. For successful distance estimation, the camera angles must be adjusted properly. In the developed method, the camera parameters of the chosen vehicle (the camera's focal distance and the height of the camera from the ground in the mounted vehicle) are used for distance estimation by projecting the two-dimensional model of the vehicle onto a three-dimensional display space. The estimated distance was compared with the values from radar sensors; error margins between 0% and 11.07% were observed. Among the preconditions of the study, restricting the road type to highways only reduces the reliability of the study and is a drawback.

K. Furukawa, R. Okada, Tanguchi and K. Onoguchi (2004) developed an application for automobiles named "Inner Car Integrated Observation System". In the developed system, an image-processing LSI was used. In the proposed method there were three cameras: the first one was placed on the console to view the front, and the other two were placed so that they could view the rear right and rear left. The system identifies vehicles approaching from ahead and behind as obstacles. In that study, obstacles were identified by testing whether the motions of the horizontal line segments in consecutive images satisfied a ground plane constraint or a vehicle surface movement constraint. The developed algorithm was tried on a new LSI, which can detect the vehicles ahead or behind at a rate of 10-50 ms/frame. The efficiency of the developed method was confirmed by tests in different road situations.

U. Handmann and colleagues (2000) developed a new driver assistance system that uses image processing techniques. In the study, a CCD camera on the rear-view mirror and sensors on the left and right sides of the vehicle were used. In the system, the camera captures the images. As part of the study, identification of objects was targeted by fusing the image captured by the CCD camera with the signals received from the sensors using the developed algorithm. Object detection was performed in three stages: object detection, object tracking and object recognition. Object hypotheses were used to generate motion-sensitive simulations and to obtain detailed information about the objects in front of the vehicle.

BRIEF DESCRIPTION OF THE INVENTION

Vehicle recognition and relative distance and speed estimation are performed with the "Recognition, Distance and Speed Estimation" (RDSE) algorithm, which has been developed as part of this invention. In the RDSE algorithm, a Gaussian filter is applied to the digital image captured with the CCD camera to remove noise, then the image is converted to gray scale, and thresholding and the Canny edge detection algorithm are applied to the resulting image for segmentation of the parts that can be a vehicle.

To clear non-road objects from the captured digital image, the camera view angle is narrowed by removing strips of a specific width from both sides. After the edge detection process, the vertical and horizontal lines are detected by using the Hough algorithm, and the possibility of a vehicle's presence is evaluated in the rectangles formed by the detected lines. Candidate images are decided to be a vehicle or not by classifying them according to their pixels, their width/height ratio and whether they contain a license plate. RDSE is an algorithm that works in real time on a computer inside the car. It marks and frames the objects that have a possibility of being a vehicle in the road image on the screen. The developed method can estimate the relative distance of the closest vehicle ahead. The relative distance of the vehicle ahead is determined simply from the height, in pixels, of the license plate in the image: the height of the license plate in pixels in the captured image is compared with the height of the license plate in pixels in calibration images. The calibration images contain pictures of license plates taken from predetermined (known) relative distances from the camera. The license plate detection algorithm works like the vehicle detection algorithm, with the Hough algorithm: a license plate is detected by classification of the candidate rectangles in the main image determined by the Hough algorithm. Classification of the candidate images is made by using their width/height ratio and the size of the rectangles. The RDSE algorithm estimates the relative speed of the vehicle ahead by using the change in the height of the license plate (measured in pixels) in consecutive images taken at certain time intervals.
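The calibration-based distance lookup described above can be sketched as follows. This is an illustrative Python sketch, not the patented C++/OpenCV implementation; the calibration table values and the linear interpolation in plate height between calibration points are assumptions made for the sketch.

```python
# Illustrative sketch: estimate relative distance from the license plate
# height in pixels, using calibration images taken at known distances.
# The calibration values below are hypothetical.

# Hypothetical calibration table: (known distance in m, plate height in px).
CALIBRATION = [(1.0, 110.0), (2.0, 55.0), (5.0, 22.0), (10.0, 11.0)]

def estimate_distance(plate_height_px):
    """Estimate distance by comparing the observed plate height with
    the calibration table (linear interpolation in height)."""
    pts = sorted(CALIBRATION, key=lambda p: p[1])  # by height, ascending
    h = plate_height_px
    # Clamp heights outside the calibrated range.
    if h <= pts[0][1]:
        return pts[0][0]
    if h >= pts[-1][1]:
        return pts[-1][0]
    for (d_lo, h_lo), (d_hi, h_hi) in zip(pts, pts[1:]):
        if h_lo <= h <= h_hi:
            # Interpolate linearly between the two bracketing heights.
            t = (h - h_lo) / (h_hi - h_lo)
            return d_lo + t * (d_hi - d_lo)

print(estimate_distance(55.0))  # exactly a calibration point -> 2.0 m
```

In practice a denser calibration table (the patent records images at 1 m up to 90 m) would make the interpolation error smaller.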

MEANINGS OF THE FIGURES

Figure 1. Schematic View of the System

Figure 2. Working Principle of the System

Figure 3. An Example of Gaussian Filter Application to the Acquired Image

Figure 4. Calculating the Average Pixel Values in Edge Detection

Figure 5. An Example of Edge Detection Procedure

Figure 6. An Example of Edge Detection Filter Applied Camera Image

Figure 7. Generation of the Corners by Circled Coordinates

Figure 8. Application of Edge Detection Algorithm

Figure 9. Marking the Candidate Vehicle and License Plate

Figure 10. View of the Height of the License Plate

Figure 11. The Flow Chart of the Developed Software to Apply the Method

Figure 12. Results of the Distance Estimation While Vehicles are not Moving

Figure 13. Distance and Relative Speed Estimations for a Vehicle Which has a Constant Speed of 50 Km/h

Figure 14. Distance and Relative Speed Estimations for Random Vehicles in the Traffic

DETAILED DESCRIPTION OF THE INVENTION

The first implementations in the image processing field appeared at the beginning of the twentieth century, but the first modern implementation was carried out in 1964 at the Jet Propulsion Laboratory in California, USA, to correct, with computer techniques, the distortions and noise in the images sent from a spacecraft orbiting the Moon. That implementation was the basis of the image processing methods used on the images taken during the Mariner test flights to Mars, by the Surveyor space vehicle, by the Apollo space vehicle sent to the Moon and by some other space vehicles.

Image processing comprises all the phases of acquiring, forming, saving and processing an image of the real world. The work of shaping measured or saved digital images to suit a given purpose is called image processing.

Analog images must be converted into digital format to be processed in a computer. Converting an analog image into a digital image is called digitization. A digital image is made of units called pixels. In a digital image, a number between 0 and 255 is assigned to each pixel, describing the gray level at that point. A digital image can be regarded as a matrix whose elements are the gray values of the pixels in the image: each element of the matrix, in other words the numeric value of each pixel, is the gray level value of the corresponding point.

The quality of a digital image is closely related to the number of pixels and to the number of gray tone levels. When these parameter values increase, in other words when the number of pixels and the number of gray tone levels rise, the quality of the image gets higher, but the size of the image also gets bigger. Colored images can be stated as a combination of primary colors according to each pixel's color value. For example, in the RGB color space the primary colors are red, green and blue, and for each pixel the values of the three primary colors are determined separately.

During the digitization of analog images there will be losses, so the digitized result is of lower quality than the original image. Therefore, different algorithms, filters and techniques are developed to enhance and restore the images. With those methods, problems such as noise, blur and lack of light are removed.

In recent years, image processing implementations are commonly used for different purposes in areas such as astronomy, medicine, biology, archeology, industry, travel, traffic and automotive. Implementations range from clarifying electron microscope images to restoring the original image from holographic records by using computers and image processing methods. The primary uses of image processing methods in traffic are traffic safety and implementations in the automotive industry. In recent years, because of the rise in the vehicle population and the importance of traffic safety, the need for vehicle recognition and tracking systems that comprise image processing methods is also rising. Within this scope, safety implementations that recognize the license plate, avoid accidents, and detect lane changes and the driver's lack of attention or drowsiness form the basis of the studies.

The invention performs detection of candidate images that can be a vehicle, segmentation of the vehicle images by vehicle recognition, detection of the license plate's location, and estimation of the relative speed and the distance of the vehicle ahead to the camera, using the images captured by a single camera mounted on the vehicle console. The distinguishing features of the method are: the presence of a license plate is taken as the basis for detecting vehicles among the candidate images; the license plate height is taken as the reference in estimating the distance of the vehicle ahead from the camera; and the change in the size of the license plate in consecutive images is used to estimate the relative speed of the vehicle ahead.

In the implementation of the subject method, the image of the road is taken with a CCD camera mounted on the console of a vehicle driving at 10-60 km/h (Figure 1 and Figure 2). The taken image is first Gaussian filtered (Figure 3), the obtained image is transformed into gray scale (Figure 4), then edge detection is performed by applying thresholding and the Canny edge detection algorithm to that image (Figure 5 and Figure 6), and the candidate objects that can be a vehicle are detected in the image by using the vertical and horizontal lines obtained by applying the Hough algorithm (Figure 8). The presence of a license plate is a requirement for detecting vehicles among the candidate objects.
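The Gaussian filtering step can be illustrated with a minimal sketch. This is pure Python for illustration only; the patent's software uses C++ with OpenCV, and the kernel size and σ below are assumed values.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size x size Gaussian kernel."""
    half = size // 2
    k = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
          for j in range(-half, half + 1)]
         for i in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def convolve(img, kernel):
    """Convolve a grayscale image (list of lists) with the kernel,
    leaving the border pixels unchanged."""
    h, w = len(img), len(img[0])
    half = len(kernel) // 2
    out = [row[:] for row in img]
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y][x] = sum(kernel[i][j] * img[y + i - half][x + j - half]
                            for i in range(len(kernel))
                            for j in range(len(kernel)))
    return out

# Smoothing a single bright pixel (noise) spreads its energy over the
# neighbourhood, which is why isolated noise is attenuated before Canny.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 255
smoothed = convolve(img, gaussian_kernel())
```

Because the kernel is normalized to sum to 1, smoothing redistributes intensity without changing the overall brightness of the filtered region.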

The RDSE (Recognition, Distance and Speed Estimation) algorithm works in real time on the computer in the car, and it frames and marks the objects that can be a vehicle in the road image shown on the computer screen (Figure 9). The license plate height is taken as the reference in estimating the distance of the vehicles ahead from the camera (Figure 10), and the change in the size of the license plate in consecutive images is used to estimate the relative speed. License plates are selected from among the rectangles formed by the vertical and horizontal lines drawn with the Hough algorithm by classifying them according to their width/height ratio and their size.
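The relative speed estimation from consecutive plate heights can be sketched as follows. This is an illustrative Python sketch; the inverse-proportional (pinhole-style) distance model and the calibration constant are assumptions made for the sketch, not values from the patent.

```python
# Illustrative sketch: relative speed from the change in license plate
# height (in pixels) between two frames taken dt seconds apart.
# Assumes distance ~ K / plate_height_px, with K a hypothetical
# calibration constant (plate height in px times distance in m).

K = 110.0  # hypothetical calibration constant

def distance_m(plate_height_px):
    """Distance estimate from plate height under the assumed model."""
    return K / plate_height_px

def relative_speed_mps(h1_px, h2_px, dt_s):
    """Positive when the vehicle ahead is pulling away."""
    return (distance_m(h2_px) - distance_m(h1_px)) / dt_s

# Plate shrinks from 22 px to 11 px in 2 s: the gap grows from 5 m to 10 m.
print(relative_speed_mps(22.0, 11.0, 2.0))  # 2.5 m/s
```

Averaging the speed over several consecutive frame pairs would reduce the effect of pixel-level noise in the measured plate height.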

VEHICLE AND LICENSE PLATE RECOGNITION SYSTEMS

1. Display of the Moving Objects

Displaying is the representation of the true features of three-dimensional objects in a two-dimensional planar space. The true features of the displayed objects in the three-dimensional world can only be identified by using several image processing and analysis methods.

The first step in the processing of moving images is obtaining the image. For that, the image must be transferred from the real world to a storage unit or a film layer. That process is performed with cameras. Cameras comprise an image detector and a digitizer unit that converts the detected image into a digital image. If the image sensor does not convert the image to a digital image directly, the obtained analog image is converted to a digital image afterwards.

Viewing systems comprise several functional components, described as the light source, camera, sensor, processing unit and actuators (Jahne and others, 2000). The camera is a unit for collecting the radiation energy given off by objects. The lens catches part of the light coming from the object and focuses it on the viewing sensor, where the radiation energy is transformed into an image signal. Focusing in the camera is achieved by the physical movement of the lens towards or away from the viewing sensor. In a typical camera, the light entrance opening can be varied up to 300 times the smallest opening (Smith, 1999).

For image detection in electronic cameras, charge-coupled devices (CCD) are commonly used. CCD cameras are a kind of video camera: instead of film, a CCD chip is placed behind the lenses, which transforms the light intensity into electronic signals and transfers them directly to the computer. The CCD camera is preferred equipment because of its compactness, sensitivity, stability, low price and long service life. CCD cameras typically operate on a silicon plate a few millimeters in height. In that system, the image is read by successive parallel or serial transfers, to the output amplifier, of the electrons stored in the charge storages named "wells". The image data in the two-dimensional area of the CCD camera is converted to serial data in a special order. An image frame is taken, and after the storing process is completed in all the rows, all the rows are shifted in parallel to the next row up at the same time. Then all the rows are read starting from the first row. Thus, the rows are transferred one by one into the horizontal register, from which the charges are quickly transmitted serially to the charge detection amplifier (Smith, 1999).

The camera lens focuses only at one exact distance in the image. The in-focus spots form curve pairs that are almost spherical in three-dimensional space. Objects appear blurry if they are far from the exact focus surface.

When viewing objects with a camera, depth measurements can be performed in very different ways. A laser beam is used in laser-based measurements. Apart from that, measurements are performed by using the flight time of the emitted beam and the signals reflected back from the object.

A laser beam is sent to the areas to be measured, and the distance is calculated automatically from the beam's round-trip time multiplied by its speed (halved for the one-way distance). Coordinates are gathered by a coordinate determiner. In CCD cameras, the data is digitized after it is obtained. Since the laser moves linearly, it is ideal for smooth-surfaced plates, named straight surfaces.

CCD cameras catch the laser beam's light reflected from an object. The X, Y and Z coordinates of the laser line can be calculated trigonometrically.

More than 650 independent data points can be gathered on a single laser line, depending on the sensor and software options used. The scanned object is represented by those data points, named point clouds, which are made up of from a few hundred to millions of points. The camera is used in the calculation of the depth. The fixed digitization head in camera-based systems is placed 70-100 centimeters away from the targeted object.

During digitization, the edge patterns are projected onto and reflected from the part's surface, and those projections are recorded by the camera fixed in the measurement head. Three-dimensional coordinates are precisely calculated with the help of the digital image processor.

The digitization of the whole object is obtained by bringing several measurements together; sometimes more than one viewpoint, in other words more than one camera, has to be used.

The variables used in calculating depth with a single camera are:

z: distance of a point in the image at moment t,

Δz: distance the camera travels between t0 and t1.

2. License Plate Recognition Algorithms

2.1. Hough Transformation

With the Hough transformation, spatially extended patterns in a black-and-white image are vectorized by converting them to a compact parameter space. The transformation converts the detection problem in the image space into a local peak detection problem in the parameter space, which can be overcome more easily. Parameterizing lines by their slopes and intersection points is one way the Hough transformation algorithms detect straight lines. Straight lines are defined by the equality below.

y = mx + c

Every line in the (x,y) coordinate plane corresponds to a point in the (m,c) plane, and infinitely many lines may pass through a given point in the (x,y) plane. The gradients and intercepts of those lines compose a line in the (m,c) plane, and that line is defined by the second equality.

c = -mx + y

The (m,c) plane is divided into rectangular boxes (cells). For every black pixel in the (x,y) plane, the cells lying on the line given by the second equality are incremented. After drawing the second equality for every black pixel, and thus processing every pixel of the image space, the peak points in the transformation space describe the lines. When noise is considered, each peak above a determined threshold is used to generate a line described by the first equality. In practice, that line can be the resultant of many line segments in the same direction; therefore, the end points of those segments are found by tracking the predicted line's pixels in the original image.

The line width is also determined during the line tracking process by checking the width at each pixel. The Hough transformation method can detect lines in a noisy image, because the "m" and "c" values belonging to the points of broken and noisy lines in the original image are still expected to generate peaks on the (m,c) plane (Liu and others, 1999).

The simplest version of the algorithm can detect lines, but it can also be adapted to more complex shapes. The method has a long calculation time because it operates on every pixel at least once.
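The (m,c)-space voting procedure described above can be sketched as follows. This is an illustrative Python sketch with assumed quantization ranges for the slope and intercept; it is not the patent's implementation.

```python
from collections import Counter

def hough_mc(pixels, m_values, c_values):
    """Vote in a quantized (m, c) accumulator: for each black pixel (x, y)
    and each candidate slope m, increment the cell whose intercept
    c = y - m*x is closest to a quantized c value."""
    acc = Counter()
    for x, y in pixels:
        for m in m_values:
            c = y - m * x
            # Snap c to the nearest quantized intercept value.
            c_q = min(c_values, key=lambda cv: abs(cv - c))
            acc[(m, c_q)] += 1
    return acc

# Black pixels lying on the line y = 2x + 1, plus one noise pixel.
pixels = [(x, 2 * x + 1) for x in range(10)] + [(3, 9)]
m_values = [i * 0.5 for i in range(-8, 9)]   # candidate slopes -4.0 .. 4.0
c_values = list(range(-10, 11))              # candidate intercepts -10 .. 10

acc = hough_mc(pixels, m_values, c_values)
(best_m, best_c), votes = acc.most_common(1)[0]
print(best_m, best_c)  # the peak recovers m = 2.0, c = 1
```

The accumulator's peak collects one vote per collinear pixel, which is why the method tolerates the noise pixel: its votes are scattered over other cells.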

The Hough transformation algorithm has been used in our invention. The angles generated for the implementations at different distances have been named α, β and θ, and the Hough transformation algorithm is applied to each of them.

The algorithm is formulated as

y = ax + b

x·cos θ + y·sin θ − ρ = 0

2.2. Thinning Based Algorithms

Thinning-based methods are used in the medial axis point sampling sub-process to obtain the one-pixel-wide skeletal pixels before the line tracking sub-process takes place. Thinning, which may also be referred to in the literature as skeletonisation, skeletonising, core-line detection, medial axis transformation or symmetric axis transformation, is a process that applies morphological operations on the input raster image and outputs one-pixel-wide skeletons of the black pixel areas. The skeleton of a black area is the smallest set of pixels whose topological structure is identical to the original image shape; hence, it is much easier to operate on and analyze than the original image.

The thinning algorithms are of three groups: iterative boundary erosion, distance transform and adequate skeleton.

Iterative thinning methods employ the idea of iteratively removing the outside layer of pixels from the line object, until only the skeleton or medial axis remains.

The main procedure in this method is moving a 3 × 3 window over the image and applying a set of rules to mark the center of the window. On completion of each scan, all marked points are deleted. The scan is repeated until no more points can be removed.
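One well-known instance of this 3 × 3 window scheme is the Zhang-Suen algorithm; the sketch below is an illustrative Python version (the patent text does not name a specific thinning algorithm, so this choice is an assumption).

```python
def zhang_suen(img):
    """Iteratively thin a binary image (list of lists of 0/1) in place
    until no more pixels can be removed, then return it."""
    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel above the centre.
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)                          # black neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))          # 0 -> 1 transitions
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:   # delete after each sub-scan completes
                img[y][x] = 0
                changed = True
    return img

# A thick 3 x 7 bar inside a zero border thins towards a one-pixel stroke.
bar = [[0]*9] + [[0] + [1]*7 + [0] for _ in range(3)] + [[0]*9]
skeleton = zhang_suen([row[:] for row in bar])
```

The two rule sets (step 0 and step 1) alternate so that the object is eroded symmetrically from opposite sides, preserving connectivity.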

Distance transform and adequate skeleton algorithms define the distance transform of a binary image as replacing each pixel by a number indicating the minimum distance from that pixel to a white point. The distance between two points is defined by the number of pixels in the shortest four- connected chain between the points. This transform is calculated by evaluating a function sequentially in a raster scan over the image, followed by a second function in a reverse scan. Once the distance function has been calculated, a local maximum operation is used to find the skeleton. It has been shown to be the smallest set of points needed to reconstruct the image exactly.
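The two-pass raster computation of the 4-connected (city-block) distance transform described above can be sketched as follows (illustrative Python, with a large constant standing in for infinity):

```python
INF = 10**9  # stands in for "infinitely far from a white point"

def distance_transform(img):
    """City-block distance from each pixel to the nearest white (0) pixel,
    computed with one forward and one backward raster scan."""
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate from the neighbours above and to the left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y-1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x-1] + 1)
    # Backward pass: propagate from the neighbours below and to the right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y+1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x+1] + 1)
    return d

# Single white pixel at the top-left corner: distance at (y, x) is y + x.
img = [[0, 1, 1], [1, 1, 1], [1, 1, 1]]
d = distance_transform(img)
print(d)  # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```

After this step, a local maximum operation over `d` picks out the skeleton, as the text describes.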

The objective of thinning algorithms is to reduce the data volume so that only the topological shape of the image remains. The result usually requires further processing. Most thinning algorithms are capable of maintaining connectivity. However, the main disadvantages are high time complexity, loss of shape information (such as line width), distortions at junctions, and false and spurious branches.

Although these algorithms may be used in the vectorization of line drawings, their main application is in the domain of OCR, in which the image size is usually small and the line width is not critical.

Performance evaluations of thinning algorithms have been carried out. Different algorithms may feature good speed and fair connectivity but produce a poor-quality skeleton; that kind of algorithm can still be used for Optical Character Recognition.

The skeletons produced by the thinning procedure are still in bitmapped form and need to be vectorized for further processing. The one-pixel-wide skeletal pixels are followed and linked into a chain by a line tracking sub-process. Polygonalisation can then be applied to convert the pixel chain to a polyline that contains only the critical points (Liu and others, 1999).

2.3. Run Graph Based Methods

Run graph based methods scan the raster image in either the row or the column direction to calculate the run-length encoding. The runs are then analyzed to create a graph structure. The midpoints of the runs in a line-like area are polygonalized to form a point chain, which becomes an edge in the graph structure, and a non-line-like area becomes a node connecting the adjacent edges (Song and others, 2002). Run graph based methods are adequate for structural representation, efficient in line extraction and information retrieval, and also easy to process.

The procedure of constructing a run graph of an image is as follows.

The first step is to build both horizontal and vertical simple run graphs, which consist of only horizontal runs and vertical runs, respectively. Secondly, edges are built as sets of adjacent regular short runs. The remaining pieces of the image, encoded as lists of vertical runs and subruns, are the node areas. The line extraction procedure takes such a run graph as input. The shape of each node is then refined by a heuristic procedure that attempts to minimize the area of the node and maximize the lengths of the connected edges. The midpoints of the short runs in the edge areas are taken as the skeleton points, which further undergo polygonalisation (Liu and others, 1999).
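The run-length encoding and run-midpoint steps that underpin the run graph can be sketched as follows (an illustrative Python sketch of horizontal runs only; the graph-building heuristics are omitted):

```python
def row_runs(row):
    """Run-length encode one image row: return (start, end) index pairs
    of consecutive black (1) pixels."""
    runs, start = [], None
    for i, v in enumerate(row):
        if v == 1 and start is None:
            start = i
        elif v == 0 and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def run_midpoints(img):
    """Midpoints of the horizontal runs: the points that would be chained
    into skeleton point sequences along line-like areas."""
    return [(y, (s + e) / 2) for y, row in enumerate(img)
            for s, e in row_runs(row)]

img = [[0, 1, 1, 1, 0, 0, 1, 1],
       [0, 0, 1, 1, 1, 0, 0, 0]]
print(row_runs(img[0]))    # [(1, 3), (6, 7)]
print(run_midpoints(img))  # [(0, 2.0), (0, 6.5), (1, 3.0)]
```

A vertical pass over the columns would produce the complementary vertical runs mentioned in the text.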

2.4. Figure Based Algorithms

In figure-based algorithms, figures are first extracted from the raster images, and matchable figure pairs are detected to describe line-like areas. Medial axes, mostly represented by point chains, are created from those figure pairs (Song and others, 2002). Figure-based algorithms differ from the thinning-based algorithms in that they perform sampling and medial axis finding at the same time, whereas in thinning-based methods all the medial axes are found first and line tracking is performed afterwards. The edges can easily be extracted by edge extractor algorithms.

2.5. Mesh Pattern-Based Algorithms

The basic idea of the mesh pattern-based methods is to divide the entire image using a certain mesh, and to detect characteristic patterns by only checking the distribution of the black pixels on the border of each unit of the mesh. A control map for the image is then prepared using these patterns. Finally, the extraction of long straight-line segments is performed by analyzing the control map.

The image is divided into square meshes, which are defined by an equal proper mesh size, x. Each unit mesh is analyzed according to the pixels on the one-pixel wide border. It is then labeled according to its characteristic pattern, identified in comparison to a known one in the pattern database.

The image is then represented by a control map, in which each unit mesh in the original image is replaced with its characteristic pattern label. The proper mesh size should be larger than the maximum width of the line segments, but smaller than the minimum interspace between two line segments in the image; for line detection, the mesh size should also be smaller than the smallest line segment of the image. Dot segments may be missed during line tracking, which may be an advantage if the dots are noise, but a big disadvantage in other situations. (Liu and others, 1999)

2.6. Sparse Pixel-Based Methods

Sparse pixel-based methods were developed by Dori, inspired by the Orthogonal Zig-Zag (OZZ) method that was also devised by Dori (Dori and others, 1999).

The basic idea of OZZ is to track the course of a one-pixel-wide "beam of light", which turns orthogonally each time it hits the edge of the area covered by the black pixels. The midpoint of each run, which is the intersection of the light beam and the black area, is recorded. If a run is longer than a predefined threshold, the run stops there, an orthogonal run is made and its midpoint is recorded. This may happen when the tracked line is horizontal or vertical.

The Sparse Pixel-Based algorithm improves the OZZ method in the following ways:

• The general tracking procedure starts from a reliable starting medial axis point found by a special procedure for each black area.

• A general tracking procedure is used to handle all three cases of OZZ, i.e. horizontal, vertical and slant. Therefore, only one pass of screening is needed, and the combination of the two passes is avoided, making Sparse Pixel-Based algorithm faster than OZZ.

• A junction recovery procedure is applied wherever a junction is encountered during line tracking.

3. Detection of the License Plate Area

3.1. Transforming into a Gray Scale Image

In license plate recognition systems, colored images contain a lot of unnecessary detail and can prevent those systems from giving successful results. In the Turkish license plate system, license plates consist of black letters and numbers on a white background with a frame, so the colored parts of the picture or image can be regarded as unnecessary detail.

In studies involving Turkish license plates, the first step is transforming the colored image into gray form.
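The color-to-gray transformation can be sketched with the common luminance weighting. This is an illustrative Python sketch; the weights are the widely used ITU-R BT.601 coefficients, which are an assumption here, not values specified in the patent.

```python
def rgb_to_gray(img_rgb):
    """Convert an RGB image (list of rows of (r, g, b) tuples, 0-255)
    to gray levels using the common luminance weights."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b))
             for (r, g, b) in row]
            for row in img_rgb]

img = [[(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]]
print(rgb_to_gray(img))  # [[76, 150, 29, 255]]
```

The unequal weights reflect the eye's differing sensitivity to the three primaries, so green contributes most to the perceived brightness.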

3.2. Histogram Equalization

The gray tone variance is different in every image; for that reason, histogram equalization is used to provide homogeneity in the gray tone variation scale.

In the histogram equalization process, a uniform distribution is imposed on the image's cumulative gray level distribution, which equalizes the gray level variation (Huang and others, 1999).

Applying preprocessing to an image whose contrast has been increased is very important. Histogram equalization is used for increasing the contrast of the gray scale image in the computer.
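The cumulative-distribution remapping behind histogram equalization can be sketched as follows (illustrative pure Python, not the patent's implementation):

```python
def equalize(img, levels=256):
    """Histogram-equalize a grayscale image (list of lists, 0..levels-1)
    by remapping gray levels through the cumulative distribution."""
    flat = [v for row in img for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    # Cumulative distribution function of the gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    def remap(v):
        return round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(v) for v in row] for row in img]

# A low-contrast image (values 100..103) is stretched over 0..255.
img = [[100, 100, 101, 101],
       [102, 102, 103, 103]]
print(equalize(img))  # [[0, 0, 85, 85], [170, 170, 255, 255]]
```

The narrow band of input levels is spread over the full gray range, which is exactly the contrast increase the text refers to.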

3.3. Transforming into Black-White Mode

To reach the character information on the license plate more easily, the license plate must be separated from the surrounding objects in the image. For that, it must be determined which pixels belong to the foreground and which to the background. That process is called transforming into black-white mode (binarization).

Foreground and background separation is performed by determining a proper threshold value. The pixels whose values do not pass the threshold level fall into the background, and the ones whose values pass the threshold level fall into the foreground. The image is transformed into black-white mode by that classification.

The threshold level must be chosen very carefully to obtain a good black-white image. If a very high threshold value is selected, the resulting image has overlapping edges.
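The median-and-standard-deviation style of global thresholding mentioned in this section can be sketched as follows (illustrative Python; the tuning constant k is an assumption):

```python
import statistics

def binarize(img, k=0.5):
    """Binarize a grayscale image with a global threshold computed from
    the median and standard deviation of the pixel values.
    k is an assumed tuning constant, not a value from the patent."""
    flat = [v for row in img for v in row]
    thr = statistics.median(flat) + k * statistics.pstdev(flat)
    return [[1 if v > thr else 0 for v in row] for row in img]

# Bright "characters" (200) on a darker background (50) separate cleanly.
img = [[50, 50, 200, 50],
       [50, 200, 200, 50]]
print(binarize(img))  # [[0, 0, 1, 0], [0, 1, 1, 0]]
```

Computing the statistics over only the candidate plate area, as the text describes, makes the threshold robust against lighting differences elsewhere in the frame.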

There are many methods for the transformation into black-white mode, but the most commonly used among them is thresholding with the median and the standard deviation. That method is one of the simplest for separating the foreground and the background. In that method, the median value is calculated over the whole candidate license plate area.

3.4. Horizontal and Vertical Division

One of the methods for locating the license plate uses closed quadrangles. With that method, the location of the license plate is detected by using horizontal and vertical edge information. The image is transformed into black-white mode, then the edge information extraction procedure is applied. After that, horizontal and vertical lines are detected with the gradient tolerance determined by the Hough algorithm, and the most rectangular horizontal and vertical line groups are selected among the detected lines. That method is unsuccessful on images that do not include the edge information of the license plates; times of day or places with insufficient lighting prevent the success of that method (Chun and others, 1993).

3.5. Scaling of License Plate Area

Scaling of the license plate area is necessary for the character decomposition process. Scaling is implemented with interpolation, which is the estimation of values at points that were not measured, by using the obtained points.
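The interpolation step can be sketched in one dimension as follows (illustrative Python; linear interpolation is chosen as an assumption, since the patent does not specify the interpolation kind):

```python
def lerp_resample(row, new_len):
    """Resample a 1-D row of pixel values to new_len samples using
    linear interpolation between the known pixels."""
    old_len = len(row)
    if new_len == 1:
        return [row[0]]
    out = []
    for i in range(new_len):
        # Position of the new sample in the old coordinate system.
        pos = i * (old_len - 1) / (new_len - 1)
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

print(lerp_resample([0, 100], 5))  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

Applying the same resampling to rows and then to columns yields bilinear scaling of the whole plate area to a fixed size for character decomposition.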

4. License Plate Selection Algorithms

One of the hardest processes in license plate recognition is locating the license plate. There are different methods for locating it; edge detection and thresholding come first among those methods, called "algorithms". Another group of methods is based on the license plate's colour, shape and texture.

The Gabor filter is another method for locating the license plate. With the kernels of that filter, oriented in different directions, feature vectors can be obtained that are independent of rotation and scale. The Gabor filter responses acquired after filtering are used directly in locating the license plate; the areas that do not have license plate characteristics are suppressed after the Gabor filters are applied (Kahraman and others, 2003).

The Hough transformation is another method used; it can detect the border lines of the license plate. First, edge detection is applied, then thresholding depending on the average brightness of the image. The Hough transformation is applied to the whole image and the end points are detected in Hough space.

5. Running of the System

The image of the road is taken with a CCD camera mounted on the console of the test vehicle, which drives at a speed of 10-60 km/h. Software was developed in the C++ language to implement the proposed method. That software runs in real time and frames and marks the objects that can be a vehicle. The developed software uses the OpenCV function library, developed by Intel for real-time computer vision implementations and open to everyone's use.

The webcam is fixed to the front console of the test vehicle. With the assumption that the vehicles ahead will be parallel to the road axis, the webcam is mounted so that its lens plane is perpendicular to the road axis (its lens axis is parallel to the road axis). The reason for that is to obtain images as perpendicular as possible to the license plate, so that the license plate image of the vehicles ahead is captured in real size.

The test system consists of the vehicles to be tracked, the test vehicle carrying the camera that takes the images, and the computer which runs the software implementing the method. Schematic views of the test system are given in Figure 1 and Figure 2.

In the measurements, a Bosch DLE 150 type laser distance measurement device is used for validating the distance and relative speed values acquired with the proposed method.

5.1. Recordings for Validation

To prove that there is a relation between the license plate's height in the two-dimensional image and the distance between the test car and the tracked car, images were taken while both vehicles were not moving, for different values of the distance between the two cars (1 m, 2 m, 3 m, 4 m, 5 m, 7 m, 10 m, 15 m, 25 m, 35 m, 80 m, 90 m). In these tests, to validate the inter-vehicle distance values to be obtained by using the two-dimensional images, the distance was measured with a ruler and with the Bosch DLE 150 type lasermeter.

In the second phase, distance estimation between the vehicles was done with the taken images while the tracked vehicle's speed was constant at 50 km/h and the test vehicle's speed was changing between 10 and 60 km/h, to determine the software's efficiency in calculating the distance. In these recordings, the relative speed of the tracked vehicle was calculated according to the speed value read from the vehicle's speedometer.

These recordings were taken in real road situations, in daylight, in light rain and in overcast weather, for recognition of the vehicles ahead and estimation of their distance and relative speed. The developed method does not give trustworthy results in heavy rain because the windshield wipers cause abrupt changes in the images.

6. Detection of the Vehicles Ahead and Estimation of the Distance and Speed of Them

The number of vehicles on the highways is in a rising trend. That situation brings many problems, such as infrastructure deficiency, traffic accidents, parking space necessity, safety weakness etc.

Manufacturers, governments, consumers etc. care most about the reduction of traffic accidents. For that reason, in recent years important studies have been going on about the development of mechatronic systems which can avoid traffic accidents caused by distraction, sleep, misdetection etc.

The system developed within the scope of the invention, a method for recognition of the vehicle ahead and estimation of its distance to the camera and its relative speed, can contribute to driving safety precautions in automobiles and help prevent traffic accidents.

In the method related to the invention, the software developed for recognition of the vehicle ahead and estimation of its distance and speed runs in real time on the computer placed in the vehicle, and it frames and marks the objects on the computer's screen that can be a vehicle.

The distance of the vehicles ahead to the camera is estimated by taking the license plate's height as the reference, and for the relative speed estimation the change of the license plate's size in consecutive images is used. To apply the method, first the detection of the vehicles ahead is performed.

The developed software picks frames from the real-time images taken with the camera at certain intervals; then the picked images are transformed from the RGB colour space to grayscale.

To clean unnecessary objects on the side of the road out of the image, a cropping procedure is applied that removes about 10% from each outer side of the image (20% in total).

The image after the cropping procedure consists mainly of the image of the road ahead of the vehicle. Constant and unnecessary objects such as traffic signs, trees, advertisements etc. are cleaned out of the image by the cropping procedure.
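The cropping step can be sketched as follows (an illustrative stdlib-Python fragment, not the patent's C++ code; the function name and the fraction default are assumptions based on the 10%-per-side figure in the text):

```python
def crop_sides(img, fraction=0.10):
    """Drop `fraction` of the columns from each outer side of the
    image (rows are kept), so roadside objects leave the frame."""
    w = len(img[0])
    k = int(w * fraction)
    return [row[k:w - k] for row in img]
```

Applied to each frame before edge detection, this removes roughly 20% of the width in total, matching the description above.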

The Canny edge detection algorithm is used for detecting the edges in the image. The reason for using it in our method is that it is one of the edge detection algorithms most compatible with the Gaussian filter, and it is designed to optimize the signal-to-noise ratio (Canny, 1986). The algorithm is applied in four stages: softening (smoothing with a Gaussian filter), gradient calculation, compressing the points which are not maximum, and thresholding.

6.1. Softening

A softening procedure is applied to the image with the Gaussian filter. If I[i,j] is the original image, G[i,j;σ] is the Gaussian softening filter and σ is the standard deviation of the Gaussian filter (the softening level), the softened image obtained after the convolution of the original image with the G[i,j;σ] filter is denoted S[i,j] (Arslan, 2011). The filtered version of the taken image can be seen in Figure 3.
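A minimal sketch of Gaussian softening, assuming a separable kernel (an illustration of the technique, not the inventors' implementation):

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """1-D Gaussian weights normalized to sum to 1; because the 2-D
    Gaussian is separable, the image can be convolved row-wise and
    then column-wise with this kernel."""
    vals = [math.exp(-(i * i) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def smooth_row(row, kernel):
    """Convolve one image row with the kernel, clamping indices at
    the borders so the output has the same length as the input."""
    r = len(kernel) // 2
    n = len(row)
    return [sum(kernel[j + r] * row[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1))
            for i in range(n)]
```

Larger σ gives a stronger softening level, i.e. more blur before the gradient stage.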

6.2. Gradient Calculation

Gradient magnitude and direction are calculated by using finite-difference approximations to the partial derivatives. First, the partial derivatives of S[i,j] are obtained (Arslan, 2011).

These are written as:

P[i,j] = (S[i,j+1] - S[i,j] + S[i+1,j+1] - S[i+1,j]) / 2

Q[i,j] = (S[i,j] - S[i+1,j] + S[i,j+1] - S[i+1,j+1]) / 2

The partial derivatives with respect to x and y are calculated as the average of the finite differences over a 2x2 square of pixels. According to that, the magnitude of the gradient is:

M[i,j] = sqrt(P[i,j]^2 + Q[i,j]^2)

and its angle is:

θ[i,j] = arctan(Q[i,j], P[i,j])

6.3. Compressing the Points Which Are Not Maximum

In the gradient algorithm, edge pixels are detected by taking the gradient. According to the Canny algorithm, an edge point can be thought of as a point whose magnitude is a local maximum in the direction of the gradient vector. This is a very restrictive condition. This procedure is used for thinning the lines made of the edge pixels found by the thresholding method, and it is called compressing the points which are not maximum.

The N[i,j] view obtained after that procedure is given below:

N[i,j] = nms(M[i,j], θ[i,j])

At the points which are not accepted as local maximum points, the value is set to zero.
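A deliberately simplified sketch of non-maximum suppression along one direction (a full Canny implementation compares neighbours along the quantized gradient direction; this hedged fragment only uses the horizontal neighbours to show the thinning idea):

```python
def nms_row(mag):
    """Suppress non-maxima along a row of gradient magnitudes:
    a pixel keeps its value only if it is at least as large as both
    of its immediate neighbours; all other pixels become zero."""
    out = [0.0] * len(mag)
    for i in range(1, len(mag) - 1):
        if mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]:
            out[i] = mag[i]
    return out
```

The result is a thinned response in which wide gradient ridges collapse to single-pixel candidates for the thresholding stage.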

6.4. Thresholding

To find the edge pixels, a double-thresholding algorithm is used. The Canny edge detection algorithm is designed to optimize the signal-to-noise ratio. Even though a softening procedure is applied in the first step, some mistaken edge points caused by noise can still occur in the N[i,j] image in which the points that are not maximum have been compressed. Such mistaken edge points do not have much effect. To reduce them, a threshold level is applied to N[i,j] and all point values below that level are set to zero.

E[i,j] is an image with clarified edges, obtained by applying the thresholding procedure to the image whose non-maximum points have been compressed. Finding the proper threshold level is important in this method; usually it is found by trial and error. If the threshold level is too low, unwanted false edges occur in the E[i,j] image. If it is too high, some edges can disappear. For that reason, for a more efficient thresholding, two thresholds are used by applying the double-thresholding method.

In the double-thresholding method applied to N[i,j] to obtain the clarified image, the threshold levels are taken as T1 and T2. After that procedure the threshold images T1[i,j] and T2[i,j] are acquired. In the T2 image there are openings (gaps) in the lines made of edge pixels, but the number of wrong edges outside the lines is small. For that reason, the openings in the T2 edge lines are closed by using the T1 image, and the optimum correction is established.
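The double-thresholding idea can be sketched as hysteresis (an illustrative stdlib-Python fragment; t1 and t2 stand for the T1 and T2 levels, and the 4-neighbourhood choice is an assumption):

```python
def hysteresis(N, t1, t2):
    """Double thresholding: pixels with magnitude >= t2 are strong
    edges; pixels between t1 and t2 are kept only if they connect
    to a strong edge (4-neighbourhood), which closes the gaps in
    the high-threshold edge lines."""
    h, w = len(N), len(N[0])
    strong = [(i, j) for i in range(h) for j in range(w)
              if N[i][j] >= t2]
    edges = set(strong)
    stack = list(strong)
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if (0 <= a < h and 0 <= b < w
                    and (a, b) not in edges and N[a][b] >= t1):
                edges.add((a, b))       # weak pixel joins the edge
                stack.append((a, b))
    return edges
```

Weak pixels that never touch a strong pixel are discarded, which is how the false edges outside the lines are rejected.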

Edge detection is a method based on grouping the pixel levels in the image. In this method, first the pixels which have close RGB values are grouped and a single value is assigned to them. Thus, pixel groups are formed in the image. Then the sharp transitions between those pixel groups are identified as edges. The Canny edge detection algorithm is used for identifying the objects in the image. Edges which complete each other can identify an object (Figure 4).

As seen in Figure 4, the pixel values 4-5-7-6 (average 5.5) are identified as one group, and the pixel values 152-148-149 (average 149) as a different group. There is a substantial difference between the two group averages, and from that the presence of an edge between the groups is concluded.

In Figure 5, the transitions can clearly be seen in the edge-detected image. As seen in Figure 5, when edge detection is applied to the image, the edges around the child can be obtained clearly without being affected by the background. The success of obtaining edges with the edge detection procedure depends on sufficient lighting in the original image.

The Hough transformation is used for detecting the rectangular shapes formed by horizontal and vertical lines in the road image. The Hough transformation works by voting on how well the detected edges fit geometrical shapes.

The detection of the rectangular shaped edges by using Hough transformation can be summarized by the steps below:

• The edges are detected on the source image.

• The image is transformed to black-white mode with a threshold method.

• For each edge pixel, the possible shapes are voted on an accumulator matrix in which the polar-coordinate values of the geometrical shapes that could pass through that point are used.

• The shapes which have the highest accumulator values have a high probability of being present in the image, because they are the most voted shapes.

The found shapes can be printed on the image optionally.
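The voting steps above can be sketched for straight lines in the standard (θ, ρ) parameterization (a hedged stdlib-Python illustration; the accumulator resolution of 180 angle bins is an assumption, not a value from the patent):

```python
import math

def hough_lines(edge_points, n_theta=180):
    """Vote in (theta, rho) space: every edge pixel (x, y) votes for
    all lines rho = x*cos(theta) + y*sin(theta) that could pass
    through it. Peaks in the accumulator are the most likely lines."""
    acc = {}
    for (x, y) in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc
```

For pixels lying on one horizontal line, the cell at θ = 90° and ρ equal to the line's y-coordinate collects a vote from every pixel, which is why it stands out as a peak.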

The closed rectangles formed by the horizontal and vertical lines obtained by applying the Hough algorithm are accepted as candidate vehicle images and license plate images. Functions from the OpenCV library are used for the detection of candidate vehicle and license plate images.

The circled coordinates show corners in Figure 7. In that image, the line between the points (10,70) and (10,10) can be identified as a vertical line, the line between the points (10,70) and (70,70) as a horizontal line, and (10,10) as a corner point. The horizontal and vertical lines belonging to the candidate rectangles in the image merge at the corner points.

A corner detection method has to be used to detect the candidate rectangles formed by horizontal and vertical lines after the Canny edge detection algorithm is applied. The Harris and Stephens corner detection algorithm is one of the most widely used algorithms for corner detection.

The closed shapes that are nearly rectangular, determined based on the corner points detected by using the Harris and Stephens corner detection algorithm, are accepted as candidate vehicle and license plate images.

During the detection of horizontal and vertical lines, only the lines which make an angle of 180±15° (horizontal) or 90±15° (vertical) with the horizon are selected. The lines other than the selected horizontal and vertical ones are discarded. The lines placed at the top and the bottom of the candidate image are accepted as the border lines.
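The angle filter can be written down directly from those tolerances (a small illustrative sketch; the function names are hypothetical):

```python
def is_horizontal(angle_deg, tol=15):
    """A line counts as horizontal if it lies within `tol` degrees
    of 0/180 degrees with respect to the horizon."""
    a = angle_deg % 180
    return a <= tol or a >= 180 - tol

def is_vertical(angle_deg, tol=15):
    """A line counts as vertical if it lies within `tol` degrees
    of 90 degrees."""
    return abs(angle_deg % 180 - 90) <= tol
```

Lines failing both tests are the ones discarded before rectangle assembly.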

The obtained candidate license plate images are classified by using the standard license plate height/width ratios (1/4.5-1/5), and the license plate images are acquired as shown in Figure 9. The rectangles belonging to the candidate vehicles in the image are classified by their width/height ratio (0.6-1.2), and the vehicle images are obtained.
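The two ratio tests above can be combined into one classifier sketch (an illustration using the thresholds quoted in the text; the function name and return labels are assumptions):

```python
def classify_rect(w, h):
    """Classify a candidate rectangle by aspect ratio:
    height/width in [1/5, 1/4.5]  -> license plate candidate,
    width/height in [0.6, 1.2]    -> vehicle candidate,
    anything else                 -> rejected."""
    if w <= 0 or h <= 0:
        return None
    if 1 / 5 <= h / w <= 1 / 4.5:
        return "plate"
    if 0.6 <= w / h <= 1.2:
        return "vehicle"
    return None
```

For example, a Turkish 11x52 cm plate seen head-on has h/w ≈ 0.21, which falls inside the plate band.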

6.5. Relation between the License Plate's Height and the Distance

The method is designed to detect the distance of the vehicles ahead by using the license plate. In the method used, it is accepted that the distance of the vehicle ahead is inversely proportional to the license plate's height in the image taken with the calibrated camera. In the method, it is also possible to estimate the speed of the vehicle ahead from the change of the license plate's height in consecutive images over time.

In Turkey, vehicles have standardized license plate sizes. The standard license plate sizes are 11x52 cm and 21x32 cm according to Article 6, Item 2 of the Highway Traffic Regulation published in the Official Gazette No. 28240, dated 21 March 2012.

It is accepted that the camera taking the road image always has its focal axis parallel to the road surface and perpendicular to the license plate surface. For that reason, the height "h" is taken as the reference on the candidate vehicles' license plates (Figure 10). License plate sizes can vary from country to country, and they can be defined in the system so that it works according to the rules of the country in question.

In the developed method, the distance of the vehicle ahead is estimated with the formula below:

M = m · p / q

where:

q : the license plate's height in pixels in the image of the vehicle ahead taken during the road test,

p : the license plate's height in pixels in the image of the vehicle ahead taken during the calibration,

m : the measured distance of the vehicle ahead to the camera during the calibration [cm],

M : the estimated distance of the vehicle ahead to the camera [cm].
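Since the plate height in pixels is inversely proportional to distance, the calibration relation q·M = p·m gives M = m·p/q; a one-line sketch (illustrative, using the q, p, m definitions above):

```python
def estimate_distance(q, p, m):
    """Estimated distance M [cm] of the vehicle ahead.
    q: plate height in pixels in the road-test image,
    p: plate height in pixels in the calibration image,
    m: measured calibration distance [cm]."""
    return m * p / q
```

For example, if the plate was 40 px tall at a calibrated 500 cm, a 20 px plate implies the vehicle is about twice as far away.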

6.6. The Relation between the Change in License Plate's Height in Consecutive images and the Relative Speed

In the proposed method, the relative speed of the vehicle ahead is estimated with the formula below:

V = Δx / Δt

where:

Δt = t2 - t1 is the time difference between the two images taken at times t1 and t2;

Δx = M1 - M2 is the change in the distance between the vehicles over the Δt time period.

The estimated distances of the vehicle ahead at times t1 and t2 are, respectively:

M1 = m · p / q1 and M2 = m · p / q2
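Putting the two distance estimates together, V = (M1 - M2) / (t2 - t1); a hedged sketch with q1 and q2 the plate heights in pixels in the two consecutive frames:

```python
def relative_speed(q1, q2, p, m, t1, t2):
    """Relative speed of the vehicle ahead from two frames:
    M1 = m*p/q1 at time t1, M2 = m*p/q2 at time t2,
    V = (M1 - M2) / (t2 - t1). A positive V means the
    gap between the vehicles is closing."""
    M1 = m * p / q1
    M2 = m * p / q2
    return (M1 - M2) / (t2 - t1)
```

If the plate grows from 20 px to 25 px in one second (calibration: 40 px at 500 cm), the gap shrinks from 1000 cm to 800 cm, i.e. a closing speed of 200 cm/s.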

The flow chart of the software which is developed for implementing the proposed method is given in Figure 11.

7. Tests and Results

7.1. Verification Purposed Tests

To prove that there is a relation between the license plate's height in the two-dimensional image and the distance between the test car and the tracked car, images were taken while both vehicles were not moving, for different values of the distance between the two cars (1 m, 2 m, 3 m, 4 m, 5 m, 7 m, 10 m, 15 m, 25 m, 35 m, 80 m, 90 m). In these tests, to validate the inter-vehicle distance values to be obtained by using the two-dimensional images, the distance was measured with a ruler and with the Bosch DLE 150 type lasermeter.

The results obtained in the verification recordings are shown in Figure 12.

As seen in Figure 12, in the distance estimation with the proposed method, the error ratio varies in the range of 0.4%-4.96% as the distance between the two vehicles rises from 1 to 25 meters. As the distance rises, the error ratio rises too. The error ratio of the measurement with the lasermeter stays below 0.71% over the same distances.

The distance estimation with the proposed method has a 0.3%-4.25% higher error than the measurements with the lasermeter, but those ratios are within an acceptable range.

When the distance between the two vehicles is more than 25 meters, the error ratio exceeds 5%, and beyond 35 meters detection of the license plate is no longer possible. However, the vehicle can still be detected and framed.

7.2. Road Tests

Road tests were performed in various weather situations (rainy, sunny, overcast), in daylight and in real road conditions, for recognizing the vehicles ahead of the test car and estimating the distance to them. The developed method does not give trustworthy results in heavy rain because the windshield wipers cause abrupt changes in the images. During the tests, the test vehicle's speed was varied from 0 to 60 km/h, and the speed of the car was recorded by reading it from the speedometer. Before the measurements, the speedometer of the test car was verified by the authorized technical service, which stated that the test car has a -5% error ratio in the 0-100 km/h speed range.

First, distance and relative speed measurements of a vehicle ahead moving at constant speed were performed. Those tests were done to establish how successful the developed method is in detecting the distance and relative speed of the vehicle ahead. In the tests, the relative speed was known to the test staff because both the test car's and the followed car's speeds were known. The validity of the developed method was examined through the resulting error ratios in the speed estimation of the vehicle ahead.

In the second phase, tests were performed with the purpose of detecting random vehicles on the highway and estimating their distance and relative speed with the proposed method. The usability of the developed method was examined with the gathered results.

7.2.1. Distance and Relative Speed Estimation of a Vehicle in Traffic with Constant Speed

In the tests of estimating the distance and the relative speed of a vehicle ahead, the estimation was completed successfully. In these tests, the tracked vehicle kept a constant speed of 50 km/h with cruise control, and the tracking vehicle's speed ranged from 10 to 60 km/h.

The estimations of the distance and relative speed of the tracked vehicle with the developed system are given in Figure 13. As seen, the relative error ratio in the distance estimations between the two vehicles varies between 2% and 3.80%, which is an acceptable result. The relative error ratio increases when the distance between the two vehicles increases, but it does not change with the relative speed. The reason is thought to be that the distance estimation depends on the license plate's height: when the distance between the two vehicles increases, the license plate's height in pixels decreases, and owing to rounding errors the relative error in the distance estimation shows an increasing trend. On the other side, the relative error in the estimation of the relative speed between the vehicles depends on the distance between the vehicles.

7.2.2. Speed and Distance Estimation of Random Vehicles in Traffic

Distance and relative speed estimations were performed with the test vehicle for vehicles selected randomly from the traffic on intercity highways. Even though the vehicles were selected randomly, attention was paid to include a single vehicle, more than one vehicle, and pickups, trucks and automobiles during the tests. The tests were performed in daylight, in clear and overcast weather.

The distance and relative speed estimates obtained for the vehicle ahead during the tests are shown in Figure 14.

As seen in Figure 14, the relative error in the estimations of the distance between the vehicles depends on that distance. The reason is thought to be that the distance estimation depends on the license plate's height: when the distance between the two vehicles increases, the license plate's height in pixels decreases, and owing to rounding errors the relative error in the distance estimation shows an increasing trend. The relative error ratio in the distance estimations between the two vehicles varies between 1.86% and 4.90%, which is an acceptable result. As seen in Figure 12, if the distance between the vehicles is more than 35 meters, license plate detection becomes impossible.

In the proposed method of our invention, the following can be done using the images taken with a single camera mounted on the vehicle's console: detection of the candidate objects which can be a vehicle, segmentation of the vehicle images by recognizing the vehicles, locating the license plate, estimation of the distance of the vehicle ahead to the camera, and estimation of the relative speed of the vehicle ahead.

The difference of our method from other inventors' studies is that the presence of a license plate is taken as the basis for identifying vehicles among the candidate objects, the license plate size is taken as the reference for estimating the distance of the vehicle ahead to the camera, and the change of the license plate's size in consecutive images is used for estimating the relative speed.

Within the scope of the study, road images are taken with a CCD camera mounted on the console of a vehicle driving at 10-60 km/h in traffic. First, a Gaussian filter is applied to the taken image and the obtained image is transformed into grayscale; then edge detection is performed by applying thresholding and the Canny edge detection algorithm to that image, and the candidate objects that can be a vehicle are detected by using the horizontal and vertical lines acquired by applying the Hough algorithm. The presence of a license plate is taken as the rule for determining the vehicles among the candidate objects.

RDSE is an algorithm working in real time on a computer in the car. It marks and frames the objects which could be a vehicle in the road image on the screen. The license plate height is taken as the reference in estimating the distance of the vehicle ahead to the camera, and the change in the size of the license plate in consecutive images is used for estimating the relative speed of the vehicle ahead. License plate detection is performed similarly to vehicle detection: license plates are detected by classifying, according to their width/height ratios and sizes, the rectangles formed by horizontal and vertical lines after applying the Hough algorithm.