

Title:
IMAGE PROCESSING TECHNIQUES FOR A VIDEO BASED TRAFFIC MONITORING SYSTEM AND METHODS THEREFOR
Document Type and Number:
WIPO Patent Application WO/2001/033503
Kind Code:
A1
Abstract:
The present disclosure relates to a number of inventions directed, generally, to the application of image processing techniques to traffic data acquisition using video images. The inventions reside in a traffic monitoring system, the basic function of which is traffic data acquisition and incident detection. More specifically, the inventions concern the application of image processing techniques for the detection of vehicles from a sequence of video images, as well as the acquisition of traffic data and the detection of traffic incidents. In one aspect, the present invention provides a method of processing images received from a video based traffic monitoring system. In another aspect, the present invention is directed to a Region Of Interest (ROI) for detection of a moving vehicle, and a further aspect is directed to a method of detecting day or night status in a traffic monitoring system. The application of various algorithms to a video based traffic monitoring system is also considered inventive. Other inventive aspects of the present traffic monitoring system are outlined in the claims.

Inventors:
NG YEW LIAM (SG)
ANG KIM SIAH (SG)
CHONG CHEE CHUNG (SG)
GU MING KUN (SG)
Application Number:
PCT/SG1999/000115
Publication Date:
May 10, 2001
Filing Date:
November 03, 1999
Assignee:
CET TECHNOLOGIES PTE LTD (SG)
NG YEW LIAM (SG)
ANG KIM SIAH (SG)
CHONG CHEE CHUNG (SG)
GU MING KUN (SG)
International Classes:
G08G1/04; (IPC1-7): G06T7/00; G08G1/04
Domestic Patent References:
WO1996022588A11996-07-25
Foreign References:
US5761326A1998-06-02
EP0807914A11997-11-19
US5590217A1996-12-31
EP0403193A21990-12-19
EP0654774A11995-05-24
Attorney, Agent or Firm:
SHOOK LIN & BOK (1 Robinson Road #18-00 AIA Tower Singapore 2, SG)
Claims:
WHAT IS CLAIMED IS:
1. A method of processing images received from a video based traffic monitoring system, the method comprising the steps of: receiving input from at least one video source, storing at least a portion of the input, forming digital data by applying a digitization process to the input, analysing the data, including analysing the data for detection of a vehicle, associated predetermined parameters and/or analysing the data for detection of a predetermined incident, providing, as an output, information corresponding to the analysis step.
2. A method as claimed in claim 1, further including the step of retrieving the stored input in the event of the analysis detecting the incident.
3. A method as claimed in claim 1 or 2, wherein the stored input is stored in a compressed format.
4. A method as claimed in any one of claims 1 to 3, in which the input is received from 4 sources and is processed to be viewed as one video display.
5. A method as claimed in any one of claims 1 to 4, in which the incident is one of congestion, stopped vehicle or wrong way traffic.
6. A traffic monitoring system adapted to implement the method as claimed in any one of claims 1 to 5.
7. In a traffic monitoring system, a Region Of Interest (ROI) for detection of a moving vehicle, the ROI having: two sections, a profile-speed-zone (PSZ) and a vehicle-detection-window (VDW), the two sections being substantially aligned with a respective lane of traffic to be monitored, the PSZ being used for the extraction of vehicle speed if a vehicle is detected at the VDW, and the VDW being used for the detection of the presence of the vehicle on the window, the VDW partially overlapping the PSZ.
8. In a traffic monitoring system, a Region Of Interest (ROI) for detection of a stopped vehicle at a shoulder or chevron, the ROI consisting of a vehicle-detection-window (VDW), the VDW being used for the detection of the presence of the vehicle on the window.
9. A method of detecting day or night status in a traffic monitoring system, the method including the steps of: a. calculating parameters I_ave and V_sts within the ROI using Eqn. 1 and Eqn. 2, respectively, as described in the specification, b. determining whether any one of the conditions V_sts > V_TH OR (I_ave < I_TH AND V_sts < V_TH) is met, c. if any one of the conditions of step b is met, then the status is determined to be 'night', and if neither of the conditions of step b is met, then the status is determined to be 'day'.
10. A method of detecting the presence of a vehicle in a ROI in a traffic monitoring system, the method including the step of using weighted horizontal and vertical edge intensity information determined in accordance with Eqn. 7, as described in the specification.
11. A method as claimed in claim 10, further including the steps of: for all pixels (x, y) within the VDW, computing the pixel edge E(x, y) from the original pixel intensity I(x, y) using Eqns. 3, 4 and 5, as described in the specification, and obtaining the average edge density value of the VDW, E_VDW, using Eqn. 6, as described in the specification.
12. A method of continuous dynamic updating of the reference edge of the VDW to the actual edge density of the road surface, in a traffic monitoring system, the method including the step of updating by using Eqn. 8.
13. A method of reducing false detection of a vehicle due to reflection of the vehicle's headlight, in a traffic monitoring system, the method including : minimising edges attributed to headlight reflection by applying Eqn. 6, as described in the specification.
14. A method of reducing the visual impact of a shadow in the processing of images in a video based traffic monitoring system, the method including the steps of : determining at least one area of the image having a relatively constant intensity value, designating the area as a portion of a shadow area, removing visual impact of the area from the image.
15. A method as claimed in claim 14, in which, in the step of determining, the at least one area has a substantially homogenous texture.
16. A method as claimed in claim 14, in which the visual impact is removed by substantially equalising the contrast of the area with that of another area proximate the area.
17. A method as claimed in claim 14, further including the step of : applying equation 5 (as described in the specification) in order to further reduce the visual impact of the areas from the image.
18. A method of detecting a vehicle headlight in the processing of images from a video based traffic monitoring system, the method comprising the steps of: a. computing the accumulated intensity profile I_ACC(y) within the ROI, b. calculating the gradient G_H, using Eqn. 10, as described in the specification, from the accumulated intensity profile I_ACC(y), c. determining if a steep gradient is obtained at y = y1, where G_H(y1) > G_T, and, if so, then searching for the local peak of I_ACC(y) at y_max, and obtaining I_ACCmax and W_H as follows: to obtain I_ACCmax, assign y = y1, WHILE (I_ACC(y) < I_ACC(y+1)) increase y by 1, whereupon I_ACCmax = I_ACC(y) and y_max = y; and obtaining the width of the peak W_H for (I_ACC(y) > (I_ACCmax - K)), where K is a constant which defines the minimum intensity difference between the vehicle headlight and the background, and d. determining the presence of a vehicle by using Eqn. 11, as described in the specification.
19. A method of detecting a vehicle within a chevron area in a traffic monitoring system, the method including the steps of: applying a gray level co-occurrence matrix (GLCM) for the extraction of texture features of the ROI, calculating, using the GLCM, two texture measurements, namely angular second moment (ASM) and contrast (CON).
20. A method as claimed in claim 19, further including the step of: from all pixels (x, y) within the ROI, generating the gray level co-occurrence matrix (GLCM) using Eqn. 12, as described in the specification.
21. A method as claimed in claim 19 or 20, further including the step of: obtaining the input texture features ASM and CON for the ROI using Eqn. 13 and Eqn. 14, respectively, as described in the specification.
22. A method as claimed in claim 19, 20 or 21, further including the steps of: comparing the input texture features with background features (no vehicle) ASMB, CONB, and determining: IF (|ASMB - ASM| < ASM_Th AND |CONB - CON| < CON_Th) THEN vehicle not present, ELSE vehicle present.
23. A method as claimed in claim 22, further including the step of: if vehicle not present, updating the background features: ASMB = ASMB + (ASM - ASMB)/R_ASM, CONB = CONB + (CON - CONB)/R_CON.
24. A method of calculating vehicle speed in a traffic monitoring system, the method including the step of: using convolution between two edge-profiles for the extraction of the vehicle speed.
25. A method as claimed in claim 24, in which the two edge-profiles are obtained from consecutive frames from the system.
26. A method as claimed in claim 24 or 25, further including the step of: obtaining edge values E(x, y) for all pixels (x, y) within the PSZ using Eqns. 3, 4 and 5, as described in the specification.
27. A method as claimed in claim 24, 25 or 26, further including the step of: generating an edge-profile in accordance with Eqn. 15, as described in the specification.
28. A method as claimed in any one of claims 24 to 27, further including the step of : determining speed if state of VDW is Activate.
29. A method as claimed in any one of claims 24 to 28, wherein the convolution is performed for the functions E_AVE(y | frame=f) and E_AVE(y | frame=f-1) in accordance with Eqn. 16, as described in the specification.
30. A method as claimed in claim 29, further including: for all z, finding the maximum of C(z), C_max(z), such that the vehicle speed corresponds to z_max, where C(z_max) = C_max(z).
31. A method as claimed in claim 30, further including the step of: updating E_AVE(y | frame=f-1) for all y: E_AVE(y | frame=f-1) = E_AVE(y | frame=f).
32. A method as claimed in any one of claims 24 to 31, wherein an intensity profile of the ROI is used in place of the edge-profile.

33. A method of determining vehicle direction in a traffic monitoring system, the method including the step of: using the polarity sign of an offset obtained from the convolution method as claimed in any one of claims 24 to 32.

34. A method as claimed in claim 33, wherein a positive polarity indicates the vehicle is travelling in a direction of traffic flow, and wherein a negative polarity indicates the vehicle is travelling in a direction opposite to the direction of traffic flow.
35. A traffic monitoring system incorporating a method as claimed in any one of claims 1 to 5 or 11 to 34.
36. A traffic monitoring system incorporating a ROI as claimed in claim 7 or 8.
37. A traffic monitoring system as herein disclosed.
Description:
IMAGE PROCESSING TECHNIQUES FOR A VIDEO BASED TRAFFIC MONITORING SYSTEM AND METHODS THEREFOR

1. FIELD OF THE INVENTION
The present disclosure relates to a number of inventions directed, generally, to the application of image processing techniques to traffic data acquisition using video images. More specifically, it concerns the application of image processing techniques for the detection of vehicles from a sequence of video images, as well as the acquisition of traffic data and the detection of traffic incidents.

2. BACKGROUND OF THE INVENTION

2.1 IMAGE PROCESSING TECHNIQUES FOR TRAFFIC ANALYSIS
Fig. 1 shows the overview of the operation of a video-based traffic monitoring system. A camera mounted on a structure, such as a streetlight pole, looking over the traffic scene serves as the sensor device for capturing traffic images. The captured analogue video images are then transmitted to a processor which converts the analogue video into digital form. The digitized images are then processed and analyzed for the extraction of traffic information using image processing techniques.

The extracted information can then be transmitted to an external user, such as a traffic control center, for traffic monitoring/control.

Generally, the application of image processing techniques for a video-based traffic monitoring system can be divided into four stages:
1. Image acquisition
2. Digitization
3. Vehicle detection
4. Traffic parameter extraction
Stages 1 and 2 are basically the same for most of the existing video based traffic monitoring systems. The fundamental differences between individual systems are in stages 3 and 4.

During the vehicle detection process, the input video image is processed whereby the presence of a vehicle in the Region of Interest (ROI) is determined. The ROI can be a single pixel, a line of pixels or a cluster of pixels. During the traffic parameter extraction stage, traffic parameters are obtained by comparing the vehicle detection status of the ROI at different frames (time intervals).

2.2 VEHICLE DETECTION
The fundamental requirement of a video-based traffic monitoring system is the capability to detect the presence of a vehicle in the ROI. Most video-based traffic monitoring systems employ the background-differencing approach for vehicle detection. This is a process that detects vehicles by subtracting an input image from a background image created in advance. The background image is one in which only the road section is depicted and no vehicle appears; it serves as a reference.

2.2.1 Problem

2.2.1.1 Dynamic update of background scene
The basic requirement for using this method is that a background reference image be generated. The background image must also be constantly updated so as to reflect the dynamic changes in the ambient lighting condition of the road section, such as during the transition from day to night and vice-versa. Such variation of light intensity could cause the system to "false trigger" the presence of a vehicle.

However, the main problem when using the background-differencing approach is the difficulty in obtaining an updated background image if the road section is packed with heavy traffic or the lighting condition changes rapidly. The changes in lighting condition could be due to a passing cloud or the shadow of a nearby building structure caused by the change in altitude of the sun.

2.2.1.2 Moving shadow
Another problem of using the background differencing approach is that during a bright sunny day, a vehicle can cast a "moving" shadow onto the next lane, as shown in Fig. 2. This shadow may cause false detection on the affected lane.

2.2.1.3 Night detection (headlight reflection)
One other factor contributing to false detection, when using the background differencing approach, is the headlights of vehicles at night, as shown in Fig. 3.

2.2.1.4 Detection at Chevron
Detection of a vehicle is generally performed on the roadway where the vehicle is travelling. However, there are circumstances where detection of vehicles at locations other than the roadway is required, for example, detection of a stopped vehicle at a shoulder or chevron (a region consisting of white stripes, occurring mainly at the junction between entrances/exits and the expressway, as shown in Fig. 4). Detection of a vehicle at a shoulder can usually be performed using a similar technique as the detection of a vehicle on the roadway. The detection of a vehicle on the chevron, however, becomes problematic when using the conventional background differencing approach.

The difficulty in detection of a vehicle on the chevron area, as compared to a normal roadway region, is that the background is not homogeneous. When using the conventional background differencing technique, the input image is compared with a background image pixel-by-pixel within the ROI. The comparison output will be high if a vehicle is present. However, when the ROI is within the chevron area, which consists of black and white stripes, a slight movement of the camera will result in a high output even when no vehicle is actually present. When using the edge density information for the detection of vehicle within the chevron region, the detection becomes insensitive.

This is because the background edge density of the ROI is relatively high due to the black/white stripes, hence, it becomes difficult to distinguish the vehicle from the background based on the edge density.

2.2.2 Known Solution to Problem

2.2.2.1 Dynamic update of background scene
One solution for updating the background image is to look at different frames in the image sequence. In any one frame, parts of the road are covered by cars. As time goes on, the cars will move and reveal the covered road. If the sequence is long enough, a clear picture of the car-free road can be found. The background image is generated pixel by pixel. The intensity of each point is observed over several initialization frames. The intensity value that occurs most often can be chosen to be the background value at that point. Another approach is the interpolation (over several frames) method, which in effect takes the average value of each pixel over different frames.

The shortcoming of these two approaches, however, is that the process of selecting the most frequently occurring intensity value for each pixel (or the average value) over a sequence of frames can be computationally intensive if the sequence is long. If the sequence is short, it may be difficult to obtain enough background pixel intensity values in a congested traffic condition. Such dynamic update of the background scene is also not effective if the change of light intensity is too abrupt, such as the shadow cast by a moving cloud.

2.2.2.2 Night detection
When using the background differencing approach for the detection of vehicles at night, false detection could arise due to problems such as headlight reflection. To overcome such problems, a technique that has been adopted is to use the headlight as the indication of the presence of a vehicle. The direct approach of using this method is that a vehicle's headlight is detected if a group of pixels' intensity values is greater than that of the surrounding pixels by a threshold value. The problem with such a technique is that it is difficult to establish the threshold value separating the headlight intensity from the surrounding pixels, since the absolute intensity values of the headlight and the surrounding pixels can vary dynamically depending on the overall intensity of the road section. It is also computationally intensive to perform such a two-dimensional search in real time.

2.2.2.3 Day-Night-Transition
Since night detection employs a different process for the detection of vehicles from that of day detection, there is inevitably the requirement of automated switching from one detection process to another during the transition between day and night. The solution lies in the automatic detection of the day/night status of the traffic scene. However, this can be difficult since the transition between day and night, or vice versa, is gradual. Analyzing the overall average intensity value of the image, to distinguish between day and night, does not provide a reliable solution. This is because in a heavy traffic condition, the headlights of vehicles could significantly increase the overall intensity of the image. One way of avoiding the vehicle headlight is to select a detection region that lies "outside" the traffic lane. However, since the traffic scene is an uncontrolled outdoor environment, there is no assurance that the condition of the detection region remains unchanged over a long period of time.

2.3 TRAFFIC PARAMETERS EXTRACTION
During the parameter extraction stage, traffic parameters are extracted by comparing the vehicle detection status of the ROI at different image frames, i.e. at different time intervals. Traffic parameters, generally, can be divided into two types, traffic data and incidents. Depending on the method of parameter extraction employed, the basic traffic data generally includes vehicle count, speed, vehicle length, average occupancy and others. Using the basic traffic data, other data such as headway and density can be easily derived. Traffic incidents consist of congestion, stopped vehicle (on traffic lane or shoulder), wrong-direction traffic and others.

2.3.1 Known Solution and Problem
Existing methods for the extraction of traffic parameters generally include the window technique (or trip-line technique) and the tracking technique, as shown in Figs. 5 and 6, respectively.

2.3.1.1 Window technique and problem
Using the window technique, the ROI is usually defined as isolated sets of windows (rectangular boxes), as illustrated in Fig. 5. The basic function of each window is the detection of vehicles and hence counting the number of vehicles. In order to measure the vehicle speed, two windows are required. Obtaining the time taken for the vehicle to travel from one window to the other, and knowing the physical distance between the two, enables the system to determine the vehicle speed. Then the length of time for which the detected vehicle is present on one window, together with the vehicle speed, will yield the vehicle length. The advantage of the window technique is that it is computationally simple.

Error due to frame rate resolution
The disadvantage of the window technique is that its accuracy, for length and speed measurement, is affected by the resolution of the processing frame rate and the actual speed of the vehicle. In Fig. 7, vehicle A first activates window x at frame f. At frame f+n, of Fig. 8, vehicle A activates window y. To calculate the vehicle speed, it is assumed that the vehicle has travelled a distance of dw in the time period of n frames, dw being the physical distance between the two windows. However, due to the limited frame rate resolution, the actual distance which vehicle A has travelled is dv (compare Figs. 7 and 8). Therefore the error rate can be as much as (dv - dw)/dw. The bound of this error increases as the frame rate decreases.
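By way of illustration only, the following short Python sketch computes this worst-case bound; the frame rate, vehicle speed and window spacing used are hypothetical example values, not figures from the specification.

    # Worst-case speed error bound (dv - dw)/dw of the window technique.
    # All numeric values are hypothetical examples.
    frame_rate = 10.0   # processed frames per second (assumed)
    speed = 25.0        # true vehicle speed in m/s, about 90 km/h (assumed)
    dw = 10.0           # physical distance between windows x and y, metres (assumed)

    # Between consecutive processed frames the vehicle can travel up to
    # speed/frame_rate metres beyond dw before window y samples it.
    dv = dw + speed / frame_rate
    print((dv - dw) / dw)   # 0.25, i.e. up to a 25% speed/length error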

Error due to occlusion When using two windows for speed measurement, the distance between the two windows must be maximized in order to reduce the error due to frame rate resolution.

However, increasing the distance between the two windows will increase the possibility of occlusion at the window toward the upper part of the image. The occlusion can be illustrated as shown in Figs. 9 and 10, which show two successive frames of video images. These two figures also show the typical angle of camera view for traffic data extraction. Due to the error of perspective, vehicle B appears to be "joined" to vehicle A at frame f of Fig. 9; hence, window x will not be able to detect when vehicle B is present (at window x). At frame f+n (Fig. 10), however, window y can successfully detect vehicle B since the error of perspective is minimal at the lower extreme of the image. When window y is used as the counting "sensor", its counting error due to occlusion will be minimized. However, the accuracy of the vehicle speed measurement (and hence vehicle length) will be affected by the occlusion problem at window x. The occlusion will be even more apparent in the event of congestion.

2.3.1.2 Tracking technique and problem
When using the tracking technique, a search is first performed along a "tracking zone" ROI, as shown in Fig. 6. When a vehicle is detected, its location is determined.

This vehicle will then be tracked, along the tracking zone, in subsequent frames. By tracking the vehicle in each frame, with its location, the vehicle speed is measured. The vehicle length can be measured directly by detecting the front and end of the vehicle.

The advantage of using the tracking method is that it is theoretically more accurate than the window technique in terms of speed measurement. Since the exact location of the tracked vehicle is determined at each frame, the accuracy of its speed measurement is, therefore, not affected by the frame rate resolution. The disadvantage of the tracking method, as compared to the window technique, is that it is more computationally intensive. However, with the advance of computer processing power, this shortcoming is becoming less significant.

Error due to occlusion
For direct length measurement using the tracking technique, that is, by detecting the vehicle's front and end, the vehicle must be isolated from both preceding and succeeding vehicles for at least one frame. However, due to the angle of perspective, it may be difficult to isolate the vehicle from a succeeding vehicle, such as shown in Fig. 11 (vehicles A and B). In Fig. 12, though vehicle A can be isolated from B, its front is out of the camera field of view; hence its length cannot be determined.

3. SUMMARY OF THE INVENTION
In one aspect, the present invention provides a method of processing images received from a video based traffic monitoring system, the method comprising the steps of: receiving input from at least one video source, storing at least a portion of the input, forming digital data by applying a digitization process to the input, analysing the data, including analysing the data for detection of a vehicle, associated predetermined parameters and/or analysing the data for detection of a predetermined incident, providing, as an output, information corresponding to the analysis step.

Preferably, the method further includes the step of retrieving the stored input in the event of the analysis detecting the incident.

In another aspect, the present invention provides, in a traffic monitoring system, a Region Of Interest (ROI) for detection of a moving vehicle, the ROI having: two sections, a profile-speed-zone (PSZ) and a vehicle-detection-window (VDW), the two sections being substantially aligned with a respective lane of traffic to be monitored, the PSZ being used for the extraction of vehicle speed if a vehicle is detected at the VDW, and the VDW being used for the detection of the presence of the vehicle on the window, the VDW partially overlapping the PSZ.

In yet another aspect, there is provided, in a traffic monitoring system, a Region Of Interest (ROI) for detection of a stopped vehicle at shoulder or chevron, the ROI consisting of a vehicle-detection-window (VDW), the VDW being used for the detection of the presence of the vehicle on the window.

A further aspect is directed to a method of detecting day or night status in a traffic monitoring system, as set out in the claims.

Other inventive aspects of the present traffic monitoring system are outlined in the claims.

The present disclosure relates to a number of aspects of a traffic monitoring system. In particular, the inventive aspects employ various advanced image processing algorithms for a traffic monitoring system using video images. The basic function of the system is traffic data acquisition and incident detection. The present inventive aspects, generally, focus on the vehicle detection and traffic parameter extraction processes of the traffic monitoring system.

In essence, during the vehicle detection process, two different image processing techniques are employed for the detection of vehicles in the day and at night. For the day-detection, edge-density information is proposed to detect the presence of a vehicle within the ROI. The advantage of the proposed technique is that it allows the elimination of noise such as headlight reflection. A vehicle's shadow on the neighbouring lane can also be eliminated by taking into consideration the directional edge characteristic of the vehicle's shadow. Using edge-density information, the process becomes more robust under dynamic ambient lighting conditions. For the night-detection, the headlight detection approach is employed for the detection of vehicles. The intensity-profile approach is proposed for the detection of the vehicle headlight. Using this approach the system becomes more stable, and false detection due to headlight reflection is minimized. The other advantage of this approach is that it is less computationally intensive. To provide automatic switching of the detection algorithms between day and night, we combine the use of the average intensity value as well as the contrast level of the pixels' intensities within the ROI for the detection of day and night.

For the traffic parameter extraction stage, the inventive aspects focus on the acquisition of vehicle count, speed and length as well as time-occupancy for the traffic data extraction, since other traffic data such as density, headway and others can be easily derived from these basic traffic data. The traffic data is then used for the detection of various types of traffic incidents. In one aspect of the present invention, a combination of the window and tracking techniques is employed for the traffic parameter extraction.

Using this approach, measurement errors due to frame-rate resolution as well as occlusion are minimized.

The application of various algorithms to a video based traffic monitoring system is also considered inventive.

4. BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows the overview of the video-based traffic monitoring system.
Fig. 2 illustrates the moving shadow due to neighbouring vehicles.
Fig. 3 illustrates the headlight reflection of vehicles.
Fig. 4 illustrates the chevron area.
Fig. 5 illustrates the basic idea of the window technique.
Fig. 6 illustrates the basic idea of the tracking technique.
Fig. 7 illustrates the measurement error, of the window technique, due to frame-rate resolution - frame f.
Fig. 8 illustrates the measurement error, of the window technique, due to frame-rate resolution - frame f+n.
Fig. 9 illustrates the speed/length measurement error, of the window technique, due to occlusion - frame f.
Fig. 10 illustrates the speed/length measurement error, of the window technique, due to occlusion - frame f+n.
Fig. 11 illustrates the length measurement error, of the tracking technique, due to occlusion - frame f.
Fig. 12 illustrates the length measurement error, of the tracking technique, due to occlusion - frame f+n.
Fig. 13 is the schematic block diagram of the image processing process for the traffic monitoring system.

Fig. 14 shows the flow of the image processing process for the traffic monitoring system.

Fig. 15 shows the flow of the vehicle detection process

Fig. 16 illustrates the definition of the ROI adopted in the present invention.
Fig. 17 shows the ROI where the average intensity value and variance of pixels' intensities are obtained for the detection of the day/night status of the traffic scene.

Fig. 18 shows the intensity distribution functions of the ROI for three different traffic conditions.

Fig. 19 is the flow diagram for the vehicle-day-detection process.
Fig. 20 shows the effect of headlight reflection.
Fig. 21 illustrates the removal of vehicle headlight reflection using the edge density information.
Fig. 22 shows the effect of moving shadow due to the neighbouring vehicle.
Fig. 23 shows how moving shadow is reduced when only the edge density is used.
Fig. 24 shows how the edges due to the shadow's boundaries can be further reduced by using the weighted directional edge information.

Fig. 25 illustrates the distinct features of the headlight through the projection of the intensity profile of the ROI.

Fig. 26 is the flow diagram for the vehicle-night-detection process.
Fig. 27 is the flow diagram for the vehicle-chevron-detection process.
Fig. 28 is the flow diagram of the traffic parameters extraction process.
Fig. 29 is the flow diagram of the process to obtain vehicle speed using the technique of profile speed extraction.

Fig. 30 shows the generation of the edge profile of the profile-speed-zone at frame f.
Fig. 31 shows the generation of the edge profile of the profile-speed-zone at frame f+1.
Fig. 32 illustrates the extraction of the vehicle speed through the convolution process of the two edge-profiles obtained at consecutive frames.

5. DETAILED DESCRIPTION OF THE INVENTION
The following detailed description describes the invention, which is particularly well suited for traffic data extraction using video images under dynamic ambient lighting conditions. The description is divided into three sections. First, the overall system architecture, as well as the flow of the image processing process, of the invention will be described. In the second section, the vehicle detection process of the invention will be described in further detail. The traffic parameter extraction process will be described in the third section.

5.1 OVERALL SYSTEM ARCHITECTURE
Fig. 13 shows the schematic block diagram of the image processing process 1300 for the traffic monitoring system. The system is capable of processing up to four video inputs, hence providing simultaneous monitoring of four traffic sites. Video switching module 1302 is responsible for multiplexing between the four video inputs. During the digitization process 1303, the video signal is digitized for subsequent processing. At module 1305, the digitized image is processed for the detection of vehicles at the ROIs. Upon the detection of a vehicle at module 1307, traffic data is extracted based on the detection status at different image frames. The occurrence of a traffic incident can then be deduced from the extracted traffic data at module 1308. The extracted traffic parameters (traffic data and incidents) can then be output to an output module 1309.

At module 1304, the sequence of digitized images is compressed into smaller images and stored in a set of backup-image-memory. The backup-image-memory has a fixed memory size which can store a fixed number of, say n, images for each video input. The image memory is constantly being updated with the latest input image, such that at any one time the last n images of the video input are always stored in the backup-image-memory. The function of this backup-image module is such that when a traffic incident is detected, the backup process is interrupted, so that the backup images can then be retrieved for analysis and visual inspection of the traffic images prior to the occurrence of the incident.
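As a concrete illustration of this backup behaviour, the following minimal Python sketch models the fixed-size backup-image-memory; the class and method names are invented for the example and do not appear in the specification.

    from collections import deque

    class BackupImageMemory:
        # Minimal sketch of the backup-image-memory: a fixed-size buffer
        # holding the last n images, frozen when an incident is detected.
        def __init__(self, n):
            self.buffer = deque(maxlen=n)  # oldest image dropped automatically
            self.frozen = False

        def store(self, compressed_image):
            if not self.frozen:            # backup interrupted after an incident
                self.buffer.append(compressed_image)

        def on_incident(self):
            self.frozen = True             # preserve the images prior to the incident

        def retrieve(self):
            return list(self.buffer)       # images for analysis and inspection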

At module 1306, various traffic information such as traffic images, processed images, traffic parameters, etc. can be stored in the display memory for video output. One technical advantage of this feature is that it allows all four digitized images, from four different video sources, to be incorporated into one display video output, hence enabling four video input images to be transmitted via only one transmission line.

Fig. 14 illustrates the flow of the image processing process of the monitoring system, designated by the 1400-series reference numerals, with each step corresponding to the 1300-series modules described above.

5.2 VEHICLE DETECTION PROCESS
Due to the different background characteristics of the roadway and chevron region, as well as day and night conditions, it is difficult to perform vehicle detection for the different conditions using one detection technique. Three different vehicle detection techniques are adopted in the invention, namely, the vehicle-day-detection, the vehicle-night-detection and the vehicle-chevron-detection: one for the detection of vehicles on a normal roadway in the day, one for the normal roadway at night and the other for the detection of a stopped vehicle at the chevron area in both day and night.

Fig. 15 illustrates the flow of the vehicle detection process 1500. For a normal roadway, the vehicle detection process can be divided into two stages, day/night detection and vehicle detection. During the vehicle detection process for the roadway, the input image is first processed to determine the day/night status of the traffic scene 1502 at regular intervals. Next the image is processed for the vehicle presence status at the ROI using either the vehicle-day-detection 1505 or the vehicle-night-detection 1506 technique, depending on the status (day or night) of the traffic scene. For detection of a vehicle at the chevron area, the vehicle-chevron-detection technique is used 1503 for both day and night conditions.

5.2.1 Region of Interest - ROI
During the vehicle detection process, an ROI is defined for each location where traffic information is to be obtained. For the extraction of traffic parameters of a roadway, each ROI generally coincides with a traffic lane, as shown in Fig. 16. Each ROI consists of two regions, the profile-speed-zone PSZ and the vehicle-detection-window VDW, as illustrated in Fig. 16. The VDW overlaps the PSZ at the lower extreme. The function of the VDW is the detection of the presence of a vehicle on the window. The PSZ is used for the extraction of the vehicle speed if a vehicle is detected at the VDW. For detection of a stopped vehicle at a shoulder or chevron, the ROI consists of only the VDW.

5.2.2 Day/night detection 1502
The detection of the day/night status of the traffic scene is based on two image parameters, namely the average gray level intensity I_ave and the statistical variance of the pixels' intensities V_sts. These parameters are extracted from the pixels' intensities within the ROI. Fig. 17 shows a typical traffic scene during the night. As can be seen, during the night when there are vehicles on the road, the ROI will have a high variance of pixels' intensities due to the vehicle headlights and the dark background.

Fig. 18 shows three typical pixel intensity distribution functions for three traffic scene conditions. Function f3(g) of Fig. 18 shows a typical distribution of the pixel intensity for a night scene. The two maxima found in f3(g) are attributed mainly to the pixel intensities of vehicle headlights and the background. For a night scene where no vehicle is within the ROI, the distribution function resembles f2(g), where most of the pixels' intensities are low. f1(g) depicts the pixels' intensity distribution function of the ROI for a general day scene, where a centered maximum is found. To distinguish the different pixels' intensity distributions, two image parameters, namely the average gray level intensity I_ave and the statistical variance of the pixels' intensity V_sts, are measured. For pixels within the ROI, P_ROI(x, y), the two parameters are obtained as follows:

average intensity value: I_ave = ( Σ_(x,y) I_ROI(x, y) ) / N_ROI (1)

statistical variance: V_sts = ( Σ_(x,y) (I_ROI(x, y) - I_ave)^2 ) / N_ROI (2)

where I_ROI(x, y) is the intensity value of pixel P_ROI(x, y) within the ROI, and N_ROI is the total number of pixels within the ROI. In module 1502 of Fig. 15, the procedure for determining whether the input traffic scene is day or night is as follows:
1. Compute the two day/night detection parameters I_ave and V_sts within the ROI using Eqn. 1 and Eqn. 2, respectively.

2. IF V_sts > V_TH OR (I_ave < I_TH AND V_sts < V_TH), THEN label the status of the traffic scene as "NIGHT", ELSE label the status of the traffic scene as "DAY".
In step 2, if either one of the two conditions is fulfilled, then the status of the traffic scene is determined as night. The first condition, V_sts > V_TH, is met if the traffic scene has a high variance of pixel intensity within the ROI. This is likely to occur if vehicles are present within the ROI in a night scene. V_TH is a constant threshold value which dictates the minimum variance of the pixels' intensity of the ROI, with vehicle headlights, during the night. The second condition, I_ave < I_TH AND V_sts < V_TH, is met if the traffic scene has a low average and low variance of pixel intensity within the ROI. This condition is likely to be met if no vehicle is present within the ROI in a night scene. I_TH is a constant threshold value which dictates the maximum average intensity of the ROI, with no vehicle headlight, during the night. If neither of the two conditions in step 2 is met, this indicates that the traffic scene has a relatively higher I_ave and lower V_sts, which is the likely condition for a day traffic scene.
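A minimal sketch of this day/night decision, assuming the ROI is available as a 2-D array of gray levels and that the thresholds I_TH and V_TH have been calibrated for the site, might look as follows (Python with NumPy):

    import numpy as np

    def day_night_status(roi, i_th, v_th):
        # roi: 2-D array of gray-level intensities within the ROI.
        # i_th, v_th: the thresholds I_TH and V_TH (assumed calibrated values).
        i_ave = roi.mean()                    # Eqn. 1: average intensity
        v_sts = ((roi - i_ave) ** 2).mean()   # Eqn. 2: statistical variance
        if v_sts > v_th or (i_ave < i_th and v_sts < v_th):
            return "NIGHT"
        return "DAY"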

5.2.3 Vehicle-Day-Detection 1505
In module 1505 of Fig. 15, for the vehicle-day-detection, the edge density information of the input image is processed to distinguish the complex texture of a vehicle from the homogeneous texture of the road surface for the detection of the presence of vehicles. One significant advantage of using the edge density information for the detection of vehicles is that there is no requirement for a reference image (background scene), since the detection process is performed using only the edge information of the input image. Therefore, the problem of dynamic update of the reference image is eliminated. Consequently, no large memory space is needed for the storage of a reference image. Another advantage is that the edge density information of a traffic scene is less sensitive to abrupt changes of lighting condition such as the shadow of a passing cloud. This is because edge density is derived from the change of intensity values of neighbouring pixels within the same image, and there is no comparison to a reference image. A system using this approach is also very robust under different lighting conditions and changes of lighting condition such as the transition from day to night.

Fig. 19 shows the flow of the vehicle-day-detection process. From the input digitized image, the overall edge density E_VDW of the VDW is computed at module 1901. First, the pixel values of the horizontal and vertical directional edges (E_H(x, y), E_V(x, y)) of the VDW are extracted from the original pixel values I(x, y) using the Sobel technique [1]. E_H(x, y) and E_V(x, y) are obtained as follows:

E_H(x, y) = I(x, y) ⊗ S_H (3)

E_V(x, y) = I(x, y) ⊗ S_V (4)

where ⊗ denotes 2-D convolution, and S_H and S_V are the 3x3 Sobel masks for the extraction of the horizontal and vertical edges, respectively.

The two directional edges are then combined to generate the overall edge intensity E(x, y) at pixel (x, y):

E(x, y) = (1 - K) * E_H(x, y) + K * E_V(x, y) (5)

K is a constant value between 0 and 1. It is introduced here to give different weights to the horizontal and vertical components of the edges. Assigning K > 0.5 enables the system to further minimize the horizontal edges of the shadow.

The overall edge intensity E_VDW of the VDW is then obtained as follows:

for all pixels (x, y) within the VDW: IF (E(x, y) > E_T) THEN E_VDW = E_VDW + E(x, y) (6)

where E_T is the threshold for the elimination of edges attributed to noise such as headlight reflection.

In module 1903, E_VDW is compared with a reference value E_Ref_VDW, where E_Ref_VDW is the average edge intensity of the VDW when no vehicle is present. A vehicle is then detected based on the following condition:

IF (E_VDW > E_Ref_VDW + K_T) THEN vehicle present ELSE vehicle not present (7)

where K_T is a constant threshold. In an uncontrolled dynamic outdoor environment, the edge density of the background scene E_Ref_VDW varies significantly. The variation depends on several factors such as the type of road surface texture, the pixel resolution and the zooming factor of the camera. Therefore, it is not practical to define a constant value for E_Ref_VDW. In our invention, we adopt an adaptive approach to dynamically update the value of E_Ref_VDW based on the real-time image edge information. In the detection process, it is assumed that the road surface is relatively "smoother" than the texture of a vehicle. If a vehicle is not present, E_Ref_VDW can be dynamically updated based on the following:

IF (vehicle NOT present) {
IF (E_Ref_VDW > E_VDW) THEN E_Ref_VDW = E_Ref_VDW - (E_Ref_VDW - E_VDW)/R_up
ELSE IF (E_Ref_VDW < E_VDW AND E_VDW < E_Ref_VDW + K_T) THEN E_Ref_VDW = E_Ref_VDW + (E_VDW - E_Ref_VDW)/R_up
} (8)

where R_up is a constant which controls the rate of update. By initializing a relatively large value for E_Ref_VDW, the above technique can dynamically adjust E_Ref_VDW to the actual edge density of the road surface. Subsequently, this process will continuously adjust E_Ref_VDW to the actual road surface edge density.

The procedure for the use of the edge information to detect the presence of a vehicle, as well as the process for the dynamic update of the background edge density, is as follows:
1. For all pixels (x, y) within the VDW, compute the pixel edge E(x, y) from the original pixel intensity I(x, y) using Eqns. 3, 4 and 5.
2. Obtain the average edge density value of the VDW, E_VDW, using Eqn. 6.
3. Vehicle detection: compare E_VDW with the reference E_Ref_VDW for the detection of a vehicle using Eqn. 7.
4. Dynamically update E_Ref_VDW using Eqn. 8.
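The following Python/NumPy sketch illustrates steps 1 to 4 under stated assumptions: the Sobel masks are the standard ones from [1], and the values of k, e_t, k_t and r_up (standing for K, E_T, K_T and R_up) are placeholder examples to be tuned per installation, not values from the specification.

    import numpy as np
    from scipy.signal import convolve2d

    # Standard 3x3 Sobel masks S_H and S_V (Eqns. 3 and 4 use the Sobel technique [1]).
    S_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    S_V = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

    def vdw_edge_density(vdw, k=0.7, e_t=40.0):
        # Steps 1-2: weighted directional edge density of the VDW.
        # k > 0.5 suppresses the horizontal edges of shadows (Eqn. 5);
        # e_t is the noise threshold E_T of Eqn. 6. Both values are assumed.
        e_h = np.abs(convolve2d(vdw, S_H, mode="same"))  # Eqn. 3
        e_v = np.abs(convolve2d(vdw, S_V, mode="same"))  # Eqn. 4
        e = (1.0 - k) * e_h + k * e_v                    # Eqn. 5
        return e[e > e_t].sum()                          # Eqn. 6

    def detect_and_update(e_vdw, e_ref, k_t=500.0, r_up=16.0):
        # Steps 3-4: detection (Eqn. 7) and dynamic update of the
        # reference edge density E_Ref_VDW (Eqn. 8). Thresholds illustrative.
        if e_vdw > e_ref + k_t:
            return True, e_ref                           # vehicle present
        if e_ref > e_vdw:
            e_ref -= (e_ref - e_vdw) / r_up              # relax reference downwards
        elif e_vdw < e_ref + k_t:
            e_ref += (e_vdw - e_ref) / r_up              # track rising background
        return False, e_ref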

5.2.3.1 Vehicle headlight removal
When using the edge density approach, we are able to successfully minimize false detection of vehicles due to the reflection of vehicle headlights. This can be illustrated as shown in Fig. 20 and Fig. 21. Fig. 20 shows the image of a night traffic scene with prominent headlight reflection. However, when using the edge density information, as illustrated in Fig. 21, the reflected headlight is successfully eliminated. This is because the magnitude of an edge is proportional to the gradient of the intensity change between neighbouring pixels. Generally, the change in light intensity of the reflected headlight on the road surface is gradual, hence the magnitude of the edge is small. In the vehicle detection process, the edges attributed to the headlight reflection can be minimised using Eqn. 6.

5.2.3.2 Moving shadow removal
In the present invention, the detection technique employed is able to minimize the moving shadow due to a vehicle on the neighbouring lane. The elimination process is illustrated in Figs. 22 to 24. Fig. 22 shows a traffic scene with the moving shadow of a vehicle. In Fig. 23, most of the moving shadow has been eliminated. This is because the magnitude of an edge is proportional to the change of intensity value between neighbouring pixels. Since the intensity values of the pixels within the shadow are constant, their edge density values are, hence, minimized, except at the boundaries of the shadow where edges are present. To further reduce the remaining edges, more emphasis can be given to the vertical directional edges of the image using Eqn. 5. As shown in Fig. 24, the moving shadow is now successfully eliminated. This technique is also effective in minimizing the effect of stationary shadows of surrounding structures.

5.2.4 Vehicle-Night-Detection
In the invention, the presence of a vehicle in the night traffic scene is detected by detecting the vehicle headlight within the ROI. The presence of the vehicle headlight, in turn, is derived from the intensity profile of the ROI. The generation of the intensity profile, along the length of the traffic lane, is illustrated in Fig. 25. The intensity-profile function I_ACC(y), for each value of y (image row), is obtained by accumulating the total intensity value of the pixels at row y within the ROI. From the intensity profile function I_ACC(y), the sharp "peaks" attributed to the headlight can be clearly identified. The peak attributed to the headlight reflection, on the other hand, is much smoother. Using this characteristic as the headlight signature, the vehicle can be easily detected. The advantage of this technique is that the search for the headlight is performed in only one dimension, as compared to the direct approach which scans the ROI in both the horizontal and vertical directions. Another advantage is that, since the accumulated intensity value of each row of the ROI is used, the intensity profile generated is more stable and less susceptible to random noise. Two distinct parameters can be measured for the detection of the sharp peak which indicates the presence of a vehicle. The two parameters are the intensity gradient G_H and the "width" of the peak W_H. G_H is defined as:

G_H(y) = d I_ACC(y) / dy

For image processing, G_H can be approximated as follows:

G_H(y) = (I_ACC(y+S) - I_ACC(y)) / S (10)

where S = 1 is the pixel separation. W_H is the width of the "peak", which indicates the width of the headlight. The presence of a vehicle can then be detected based on the following:

IF (G_H(y) > G_T AND W_H(y) < W_T) THEN vehicle present (11)

where G_T and W_T are constant thresholds.

The procedure for the detection of a vehicle at night is as follows:
1. Compute the accumulated intensity profile I_ACC(y) within the ROI.
2. Calculate the gradient G_H, using Eqn. 10, from the accumulated intensity profile I_ACC(y).
3. If a steep gradient is obtained at y = y1, where G_H(y1) > G_T, then search for the local peak of I_ACC(y) at y_max, and obtain I_ACCmax and W_H:
To obtain I_ACCmax: assign y = y1, WHILE (I_ACC(y) < I_ACC(y+1)) increase y by 1, then I_ACCmax = I_ACC(y) and y_max = y.
Obtain the width of the peak W_H for (I_ACC(y) > (I_ACCmax - K)), where K is a constant which defines the minimum intensity difference between the vehicle headlight and the background.
4. The presence of a vehicle is detected using Eqn. 11.
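A minimal sketch of this night-detection procedure follows; the thresholds G_T, W_T and K are assumed, site-dependent values, and for simplicity the peak width here is measured over the whole profile rather than only around y_max.

    import numpy as np

    def detect_headlight(roi, g_t, w_t, k):
        # roi: 2-D array of gray levels within the ROI (rows along the lane).
        # g_t, w_t, k: thresholds G_T, W_T and K (assumed calibrated values).
        i_acc = roi.sum(axis=1)            # step 1: I_ACC(y), one value per row
        g_h = np.diff(i_acc)               # step 2: gradient, Eqn. 10 with S = 1
        for y1, g in enumerate(g_h):
            if g > g_t:                    # step 3: steep rising gradient at y1
                y = y1
                while y + 1 < len(i_acc) and i_acc[y] < i_acc[y + 1]:
                    y += 1                 # climb to the local peak I_ACCmax
                i_acc_max = i_acc[y]
                # width W_H: rows within K of the peak (whole-profile simplification)
                w_h = int(np.sum(i_acc > (i_acc_max - k)))
                if w_h < w_t:              # step 4: Eqn. 11
                    return True            # vehicle (headlight) present
        return False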

Fig. 26 shows the flow of the vehicle-night-detection process. In module 2601, the window-intensity-profile is generated. The presence of vehicle headlight is then detected by scanning through the profile 2603.

5.2.5 Chevron Vehicle Detection
In the present invention, texture measurement is used to characterize the features of the chevron region. Texture refers to the spatial variation of tonal elements as a function of scale. In the field of pattern recognition, various texture features can be computed statistically for the classification of images with distinct textural characteristics, since digital images of the same land cover class usually consist of a spatial arrangement of gray levels which are more homogeneous within than between land covers of different classes. The idea of using the texture information for the detection of a vehicle is to characterize the ROI, within the chevron area, using texture features, such that the texture of the ROI with a vehicle present can be distinguished from the unique texture of the ROI when no vehicle is present (the reference texture). As can be seen in Fig. 4, the chevron region consists of black and white stripes; therefore it can be characterized by its unique texture. In the present invention, the gray level co-occurrence matrix (GLCM) is employed for the extraction of the texture features of the ROI [2].

The computation of the textural measurements of the ROI using the GLCM approach involves two steps. First, the variations of intensities of the neighbouring pixels within the ROI are extracted using a co-occurrence matrix. This matrix contains the frequencies of any combination of gray levels occurring between pixel pairs separated by a specific distance and angular relationship within the window. The second step is to compute statistics from the GLCM to describe the spatial textural information according to the relative positions of the matrix elements. Various texture measurements can be computed from the co-occurrence matrix. In our invention, for the detection of a vehicle within the chevron area, two texture measurements, namely the angular second moment (ASM) and contrast (CON), are used. Let I_ROI(x, y) be the intensity function within the ROI defined at location (x, y), and let Q be the number of quantized intensity levels. P_ij represents the matrix entry denoting the number of occurrences of two neighbouring pixels within the region, one with intensity level i and the other with intensity level j. The two neighbouring pixels have to be separated by a displacement vector D.

P_ij = #{ ((x1, y1), (x2, y2)) : I_ROI(x1, y1) = i, I_ROI(x2, y2) = j, (x2, y2) = (x1, y1) + D } (12)

where # denotes the number of elements in the set. The two computation parameters, Q and D, are selected as: Q = 128; D: magnitude of D = 2, with vertical orientation. The texture measurements are obtained as follows:

ASM = Σ_i Σ_j (P_ij)^2 (13)

CON = Σ_i Σ_j (i - j)^2 * P_ij (14)

The texture measurements are then matched with the background texture measurements (ROI with no vehicle present). If the measured parameters are "similar" to the background texture measurements, then the state of the ROI is identified as vehicle not present. If the extracted features are "different" from the background features, then the state of the ROI is identified as vehicle present. The procedure used in the proposed system is as follows:
1. From all pixels (x, y) within the ROI, generate the gray level co-occurrence matrix (GLCM) using Eqn. 12.

2. Obtain the input texture features ASM and CON for the ROI using Eqn. 13 and Eqn. 14, respectively.

3. Compare the input texture features with the background features (no vehicle) ASMB, CONB:
IF (|ASMB - ASM| < ASM_Th AND |CONB - CON| < CON_Th) THEN vehicle not present ELSE vehicle present.
4. If vehicle not present, update the background features:
ASMB = ASMB + (ASM - ASMB)/R_ASM
CONB = CONB + (CON - CONB)/R_CON

ASM_Th and CON_Th are constant threshold values. R_ASM and R_CON are constant parameters which define the rate of update for the background features ASMB and CONB, respectively. Fig. 27 shows the flow of the image processing process for vehicle detection at the chevron area.
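By way of illustration, the following sketch builds the GLCM and the two texture features, then applies the comparison and update rules of steps 3 and 4. The quantization scheme (8-bit input assumed), the normalisation of the matrix to relative frequencies, and the single update rate shared by ASM and CON are assumptions of the example, not requirements of the specification.

    import numpy as np

    def glcm_features(roi, q=128, d=(2, 0)):
        # GLCM (Eqn. 12) for displacement D (2 pixels, vertical orientation),
        # then ASM and CON (Eqns. 13-14). 8-bit intensities assumed.
        levels = (roi.astype(np.int64) * q) // 256        # quantize to Q levels
        dy, dx = d
        a = levels[:levels.shape[0] - dy, :levels.shape[1] - dx]
        b = levels[dy:, dx:]                              # pixel displaced by D
        glcm = np.zeros((q, q))
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)        # count pixel pairs (Eqn. 12)
        p = glcm / glcm.sum()                             # normalisation (assumption)
        i, j = np.indices((q, q))
        asm = float(np.sum(p ** 2))                       # Eqn. 13: angular second moment
        con = float(np.sum(((i - j) ** 2) * p))           # Eqn. 14: contrast
        return asm, con

    def chevron_vehicle_present(asm, con, bg, asm_th, con_th, r=16.0):
        # Steps 3-4: compare against background features, update them slowly
        # when no vehicle is present. bg holds 'ASM' and 'CON'; a single
        # rate r stands in for both R_ASM and R_CON here.
        if abs(bg["ASM"] - asm) < asm_th and abs(bg["CON"] - con) < con_th:
            bg["ASM"] += (asm - bg["ASM"]) / r
            bg["CON"] += (con - bg["CON"]) / r
            return False                                  # vehicle not present
        return True                                       # vehicle present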

5.3 TRAFFIC PARAMETERS EXTRACTION
The extraction of traffic parameters can be separated into two parts, the extraction of traffic data and the detection of traffic incidents. Fig. 28 shows the flow of the traffic parameters extraction process. In module 2801, we classify the state of the VDW into one of four different states at each processing frame. These four states are Activate, De-activate, Active and Idle. When a vehicle reaches the VDW, the window will be in the Activate state. After the Activate mode, while the vehicle is still present in the succeeding frames, the VDW will be in the Active state. If the vehicle leaves the VDW, that is, if the preceding frame is Active and the vehicle is not present in the current frame, then the VDW is in the De-activate state. If a vehicle is not present in the preceding and current frames, the window is in the Idle state.

When the VDW is in the Activate state, that is, when a vehicle first activates the VDW, the vehicle counter is incremented 2806. The vehicle speed is then obtained using the profile-speed extraction technique 2807. While the VDW is in the Active mode, the number of frames for which the vehicle is present in the window, the present_frame_counter, is incremented, thereby determining the length of time for which the vehicle is present in the VDW. At 2808, when the vehicle leaves the VDW, the vehicle length is calculated from three parameters: present_frame_counter, vehicle_speed and frame_rate. Frame_rate is the number of processed frames per second for each video input. Together with the frame_rate, the present_frame_counter is also used to calculate the average time occupancy of the traffic lane for every interval of time.
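As an illustration of this four-state classification, a minimal sketch follows; the state names are as in the text, while the function signature and string encoding are invented for the example.

    def vdw_state(prev_state, vehicle_present):
        # One transition of the VDW state machine of module 2801.
        was_occupied = prev_state in ("Activate", "Active")
        if vehicle_present:
            return "Active" if was_occupied else "Activate"
        return "De-activate" if was_occupied else "Idle"

    # On Activate: increment the vehicle counter and measure speed (2806, 2807).
    # On De-activate (2808), e.g.:
    #   vehicle_length = vehicle_speed * present_frame_counter / frame_rate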

5.3.1 Profile-Speed-Extraction
Fig. 29 shows the flow of the profile-speed-extraction process. First, the edge-profile within the profile-speed-zone PSZ is generated 2901. Fig. 30 and Fig. 31 show the generation of the edge-profile functions within the PSZ for two consecutive frames f and f+1. Similar to the intensity-profile described in the section on night-detection, the edge-profile along the length of the PSZ is obtained from the average edge-density E_AVE(y) of each row of pixels. The edge-profile is used because it is more stable than the intensity profile, since it is not susceptible to variation of the ambient lighting condition.

If the VDW is in the Activate state, convolution is performed between the two edge-profile functions obtained from consecutive frames at module 2904. Fig. 32 shows the result of the convolution. At an offset distance dx from the origin, the convolution has a maximum peak, which can be translated to the distance which the vehicle has travelled from frame f to frame f+1. Knowing the frame rate and dx, the velocity of the vehicle can be obtained.
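A compact sketch of this convolution step (formalised in the procedure that follows) is given below, using NumPy's correlate as a stand-in for Eqn. 16. The conversion from the row offset z_max to a physical speed assumes a fixed metres-per-row calibration, which is a simplification (a real system must account for perspective), and the sign convention for the offset is illustrative.

    import numpy as np

    def profile_speed(e_prev, e_curr, frame_rate, metres_per_row):
        # e_prev, e_curr: 1-D edge-profiles E_AVE(y) of frames f-1 and f.
        # 'full' mode also yields negative offsets, whose sign indicates
        # wrong-way travel (see section 5.3.2).
        c = np.correlate(e_curr, e_prev, mode="full")     # C(z), Eqn. 16
        z_max = int(np.argmax(c)) - (len(e_prev) - 1)     # offset of the peak
        return z_max * metres_per_row * frame_rate        # rows/frame -> m/s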

The procedure for the speed extraction is as follows:
1. For all pixels (x, y) within the PSZ, obtain the edge values E(x, y) using Eqns. 3, 4 and 5.
2. Generate the edge-profile: for all y, for each row of pixels within the PSZ, obtain the average edge value of row y:

E_AVE(y | frame=f) = ( Σ_(x=1 to N(y)) E(x, y) ) / N(y) (15)

where N(y) is the number of pixels in row y of the PSZ.
3. If the state of the VDW is Activate, compute the speed:
3a: perform convolution for the functions E_AVE(y | frame=f) and E_AVE(y | frame=f-1):

C(z) = Σ_(all y) E_AVE(y | frame=f) * E_AVE(y-z | frame=f-1) (16)

3b: for all z, find the maximum of C(z), C_max(z): the vehicle speed corresponds to z_max, where C(z_max) = C_max(z).
4. Update E_AVE(y | frame=f-1): for all y: E_AVE(y | frame=f-1) = E_AVE(y | frame=f).

5.3.2 Incident Detection
The traffic incident is derived from the traffic data obtained. The types of incidents include congestion, stopped vehicle (on traffic lane or shoulder) and wrong way traffic. For the detection of congestion:

IF (speed < lower_speed_limit AND occupancy > upper_occupancy_limit) THEN traffic_incident = congestion
For the detection of a stopped vehicle:
IF (speed == 0 AND vehicle_stopped_time > stopped_time_limit) THEN traffic_incident = stopped vehicle
For the detection of wrong way traffic:
IF (velocity < 0) THEN traffic_incident = wrong way traffic
The detection of wrong way traffic is derived from the velocity (speed) obtained from the profile-speed extraction process. If a vehicle is travelling in the opposite direction, opposing the traffic flow direction, the convolution output of the profile-speed extraction process will have a negative offset dx. Therefore the sign of the offset can be used as an indication of the vehicle's direction.
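These three rules can be transcribed directly; in the sketch below, all limit parameters are site-specific calibration values and the function name is invented for the example. The wrong-way check is placed first so that a negative velocity is not misread as congestion.

    def classify_incident(velocity, occupancy, stopped_time,
                          lower_speed_limit, upper_occupancy_limit,
                          stopped_time_limit):
        # velocity: signed speed from the profile-speed extraction;
        # a negative value is the negative convolution offset dx.
        if velocity < 0:
            return "wrong way traffic"
        if velocity == 0 and stopped_time > stopped_time_limit:
            return "stopped vehicle"
        if velocity < lower_speed_limit and occupancy > upper_occupancy_limit:
            return "congestion"
        return None   # no incident detected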

References
[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Addison-Wesley Publishing Company, 1992.

[2] Danielle J. Marceau, Philip J. Howarth, Jean-Marie M. Dubois and Denis J. Gratton, "Evaluation of the Grey-Level Co-Occurrence Matrix Method For Land-Cover Classification Using SPOT Imagery", IEEE Transactions on Geoscience and Remote Sensing, Vol. 28, No. 4, Jul. 1990.