
Title:
VISUAL SENSOR FUSION AND DATA SHARING ACROSS CONNECTED VEHICLES FOR ACTIVE SAFETY
Document Type and Number:
WIPO Patent Application WO/2019/147569
Kind Code:
A1
Abstract:
By exchanging basic traffic messages among vehicles for safety applications, a significantly higher level of safety can be achieved when vehicles and designated infrastructure-locations share their sensor data. While cameras installed in one vehicle can provide visual information for mitigating many avoidable accidents, a new safety paradigm is envisioned where visual data captured by multiple vehicles are shared and fused for significantly more optimized active safety and driver assistance systems. The sharing of visual data is motivated by the fact that some critical visual views captured by one vehicle or by an infrastructure-location are not visible or captured by other vehicles in the same environment. Sharing such data in real-time provides an invaluable new level of awareness that can significantly enhance a driver-assistance, connected vehicle, and/or autonomous vehicle's safety-system.

Inventors:
RADHA HAYDER (US)
AL-QASSAB HOTHAIFA (US)
Application Number:
PCT/US2019/014547
Publication Date:
August 01, 2019
Filing Date:
January 22, 2019
Assignee:
UNIV MICHIGAN STATE (US)
International Classes:
G05D1/02; G06V10/764; G05D1/03; G05D3/12
Domestic Patent References:
WO2015134153A12015-09-11
WO2016099443A12016-06-23
Foreign References:
US20150356864A12015-12-10
US9679487B12017-06-13
Attorney, Agent or Firm:
MACINTYRE, Timothy D. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method for sharing data across vehicles for improved safety, comprising:

detecting, by a processor in a transmitting vehicle, an object in an image captured by an imaging device in the transmitting vehicle;

determining, by the processor in the transmitting vehicle, a first location of the object from the image, where the first location of the object is defined with respect to the transmitting vehicle;

sending the first location of the object from the transmitting vehicle via a dedicated short range communication link to a receiving vehicle;

receiving, by a processor in the receiving vehicle, the first location of the object from the transmitting vehicle;

determining, by the processor in the receiving vehicle, a vehicle location of the transmitting vehicle with respect to the receiving vehicle;

determining, by the processor in the receiving vehicle, a second location of the object using the first location and the vehicle location, where the second location is defined with respect to the receiving vehicle; and

implementing a safety measure in the receiving vehicle based on the second location of the object.

2. The method of claim 1 further comprises detecting an object in an image captured by an imaging device using a You Only Look Once (YOLO) object detection algorithm.

3. The method of claim 1 wherein determining a first location of the object further comprises calculating a distance to the object by

l = (f_c · R_h) / I_h

where the object is a person, f_c is the focal length of the imaging device, R_h is the actual height of the person and I_h is the height of the person in image pixels.

4. The method of claim 1 further comprises sending the first location of the object from the transmitting vehicle only if the distance between the object and the transmitting vehicle is less than a predefined threshold.

5. The method of claim 1 further comprises determining whether the transmitting vehicle and the receiving vehicle are traveling in the same direction and sending the first location of the object to the receiving vehicle in response to a determination that the transmitting vehicle and the receiving vehicle are traveling in the same direction.

6. The method of claim 1 further comprises computing at least one of a distance to collision with the object or a time to collision with the object and implementing a safety measure in the receiving vehicle in response to the at least one of the distance to collision or the time to collision being less than a threshold.

7. The method of claim 1 wherein implementing a safety measure includes one of issuing a warning about the object to a driver of the receiving vehicle, displaying the object to the driver of the receiving vehicle or automatically braking the receiving vehicle.

8. The method of claim 1 further comprises sending image data for the object from the transmitting vehicle via a secondary communication link to the receiving vehicle, where the secondary communication link differs from the dedicated short range communication link.

9. The method of claim 8 further comprises

capturing, by a camera disposed in the receiving vehicle, video of a scene;

receiving, by the processor in the receiving vehicle, the image data for the object from the transmitting vehicle;

fusing, by the processor in the receiving vehicle, the image data for the object into the video; and

presenting, by the processor in the receiving vehicle, the video with the image data fused therein to the driver of the receiving vehicle.

10. A method for detecting objects in a moving vehicle, comprising:

receiving, by a processor in a receiving vehicle, a first location of an object, where the first location is communicated via a wireless data link by a transmitting vehicle and the first location of the object is defined with respect to the transmitting vehicle;

determining, by the processor in the receiving vehicle, a vehicle location of the transmitting vehicle with respect to the receiving vehicle;

determining, by the processor in the receiving vehicle, a second location of the object using the first location and the vehicle location, where the second location is defined with respect to the receiving vehicle; and

implementing a safety measure in the receiving vehicle based on the second location of the object.

11. The method of claim 10 further comprises receiving, by the processor in the receiving vehicle, a distance between the transmitting vehicle and the receiving vehicle over a dedicated short range communication link.

12. The method of claim 10 further comprises computing at least one of a distance to collision with the object or a time to collision with the object and implementing a safety measure in the receiving vehicle in response to the at least one of the distance to collision or the time to collision being less than a threshold.

13. The method of claim 10 wherein implementing a safety measure includes one of issuing a warning about the object to a driver of the receiving vehicle, displaying the object to the driver of the receiving vehicle, or automatically braking the receiving vehicle.

14. The method of claim 10 further comprises receiving, by the processor of the receiving vehicle, image data for the object sent by the transmitting vehicle via a secondary communication link, where the secondary communication link differs from the wireless data link.

15. The method of claim 14 further comprises

capturing, by a camera disposed in the receiving vehicle, video of a scene;

fusing, by the processor in the receiving vehicle, the image data for the object into the video; and

presenting, by the processor in the receiving vehicle, the video with the image data fused therein to the driver of the receiving vehicle.

16. A collision avoidance system, comprising:

a first camera disposed in a transmitting vehicle;

a first image processor configured to receive image data from the first camera, where the first image processor operates to detect an object in the image data and to determine a first location for the object from the image data, where the first location is defined with respect to the transmitting vehicle;

a first transceiver interfaced with the first image processor that operates to send the first location for the object via a wireless communication link to a receiving vehicle;

a second transceiver disposed in the receiving vehicle and configured to receive the first location of the object from the transmitting vehicle;

a second image processor interfaced with the second transceiver, the second image processor operates to determine a vehicle location of the transmitting vehicle with respect to the receiving vehicle and determine a second location of the object using the first location and the vehicle location, where the second location is defined with respect to the receiving vehicle.

17. The collision avoidance system of claim 16 wherein the first image processor determines the first location of the object by calculating a distance to the object as follows

l = (f_c · R_h) / I_h

where the object is a person, f_c is the focal length of the first camera, R_h is the actual height of the person and I_h is the height of the person in image pixels.

18. The collision avoidance system of claim 17 wherein the first image processor sends the first location of the object from the transmitting vehicle only if the distance between the object and the transmitting vehicle is less than a predefined threshold.

19. The collision avoidance system of claim 17 wherein the first image processor determines whether the transmitting vehicle and the receiving vehicle are traveling in the same direction and sends the first location of the object to the receiving vehicle in response to a determination that the transmitting vehicle and the receiving vehicle are traveling in the same direction.

20. The collision avoidance system of claim 16 wherein the first location of the object is transmitted in accordance with the Dedicated Short-Range Communication (DSRC) protocol.

21. The collision avoidance system of claim 16 wherein the second image processor implements a safety measure in the receiving vehicle based on the second location of the object.

22. The collision avoidance system of claim 21 wherein the second image processor computes at least one of a distance to collision with the object or a time to collision with the object and implements a safety measure in the receiving vehicle in response to the at least one of the distance to collision or the time to collision being less than a threshold.

23. The collision avoidance system of claim 21 further comprises an automatic emergency braking system in the receiving vehicle, wherein the second image processor operates to automatically brake the receiving vehicle based on the second location of the object.

24. The collision avoidance system of claim 21 wherein the first transceiver sends image data for the object from the transmitting vehicle via a secondary communication link to the receiving vehicle, where the secondary communication link differs from the wireless communication link.

25. The collision avoidance system of claim 24 further comprises a second camera in the receiving vehicle, wherein the second image processor receives video from the second camera, fuses the image data for the object with the video, and presents the video with the image data to the driver of the receiving vehicle.

Description:
VISUAL SENSOR FUSION AND DATA SHARING

ACROSS CONNECTED VEHICLES FOR ACTIVE SAFETY

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/620,506, filed January 23, 2018. The entire disclosure of the above application is incorporated herein by reference.

FIELD

[0002] The present disclosure relates to visual sensor fusion and data sharing across vehicles for improved safety.

BACKGROUND

[0003] Vehicle-accident related fatalities, especially those caused by human error, exceed one million every year worldwide. In response to such statistics, a variety of safety measures have been proposed. In particular, in the United States, the US Department of Transportation (USDOT), in collaboration with state-level DOTs and experts nationwide, has pursued the development of the Dedicated Short-Range Communications (DSRC) technology and related standards, which are designed to significantly improve safety through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The USDOT pilot test program concluded that DSRC can reduce vehicle-related accidents significantly. The USDOT also issued a recommendation that DSRC technology should be mandated for all new light vehicles in the near future.

[0004] One important category of vehicle-related accidents involves pedestrian-vehicle collisions. In the US in 2015, the number of pedestrian fatalities caused by vehicle accidents was 5,376, a 23% increase from 2009. Pedestrian fatalities are one of the few categories that experienced an increase in the past few years. Furthermore, most pedestrian accidents happen in urban areas.

[0005] One of the many accident scenarios involving pedestrians is when a stopped vehicle occludes a crossing pedestrian from the view of other vehicles. A second, passing vehicle's driver only notices the presence of the crossing pedestrian once the pedestrian is within very close proximity to the second vehicle, as shown in Figure 1. In such a scenario, the passing vehicle's driver may fail to stop the vehicle in a timely manner due to the close proximity to the pedestrian, leading to a potential injury or even fatality for the pedestrian.

[0006] A variety of new vehicle models include an Advanced Driver Assistance System (ADAS) that helps prevent pedestrian and other forms of accidents. The success of such a system usually depends on the distance between the moving vehicle and the pedestrian and on the vehicle speed.

[0007] This section provides background information related to the present disclosure which is not necessarily prior art.

SUMMARY

[0008] This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.

[0009] A method is presented for sharing data across vehicles for improved safety. In a transmitting vehicle, the method includes: detecting an object in an image captured by an imaging device in a transmitting vehicle; determining a first location of the object from the image, where the first location of the object is defined with respect to the transmitting vehicle; sending the first location of the object from the transmitting vehicle via a dedicated short range communication link to a receiving vehicle. In the receiving vehicle, the method includes: receiving the first location of the object from the transmitting vehicle; determining a vehicle location of the transmitting vehicle with respect to the receiving vehicle; determining a second location of the object using the first location and the vehicle location, where the second location is defined with respect to the receiving vehicle; and implementing a safety measure in the receiving vehicle based on the second location of the object.

[0010] In one embodiment, the object is detected using the You Only Look Once (YOLO) object detection algorithm.

[0011] The first location of the object can further be determined by calculating a distance to the object by

l = (f_c · R_h) / I_h

where the object is a person, f_c is the focal length of the imaging device, R_h is the actual height of the person and I_h is the height of the person in image pixels. In some instances, the first location of the object is sent from the transmitting vehicle only if the distance between the object and the transmitting vehicle is less than a predefined threshold. In other instances, the first location of the object is sent from the transmitting vehicle to the receiving vehicle when the two vehicles are traveling in the same direction.

[0012] Example safety measures include, but are not limited to, issuing a warning about the object to a driver of the receiving vehicle, displaying the object to the driver of the receiving vehicle, or automatically braking the receiving vehicle.

[0013] In some embodiments, the method further includes capturing, by a camera disposed in the receiving vehicle, video of a scene; receiving the image data for the object from the transmitting vehicle; fusing the image data for the object into the video; and presenting the video with the image data fused therein to the driver of the receiving vehicle.

[0014] A collision avoidance system is also presented. The system includes: a first camera, a first image processor and a first transceiver disposed in a transmitting vehicle. The first image processor is configured to receive image data from the first camera and operates to detect an object in the image data and to determine a first location for the object from the image data, where the first location is defined with respect to the transmitting vehicle. The first transceiver is interfaced with the first image processor and sends the first location for the object via a wireless communication link to a receiving vehicle.

[0015] The system also includes a second transceiver and a second image processor in the receiving vehicle. The second transceiver is configured to receive the first location of the object from the transmitting vehicle. The second image processor is interfaced with the second transceiver, and operates to determine a vehicle location of the transmitting vehicle with respect to the receiving vehicle and to determine a second location of the object using the first location and the vehicle location, where the second location is defined with respect to the receiving vehicle. In some embodiments, the second image processor implements a safety measure in the receiving vehicle based on the second location of the object.

[0016] In an example embodiment, the first location of the object is transmitted from the transmitting vehicle to the receiving vehicle in accordance with Dedicated Short-range Communication (DSRC) protocol.

[0017] The transmitting vehicle may also send image data for the object via a secondary communication link that differs from the primary wireless communication link between the vehicles.

[0018] The collision avoidance system may further include an automatic emergency braking system in the receiving vehicle, wherein the second image processor operates to automatically brake the receiving vehicle based on the second location of the object.

[0019] In some embodiments, the receiving vehicle includes a camera, such that the second image processor receives video from this second camera, fuses the image data for the object with the video, and presents the video with the image data to the driver of the receiving vehicle.

[0020] Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

DRAWINGS

[0021] The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.

[0022] Figure 1 is a picture of an example pedestrian collision scenario;

[0023] Figure 2 is a diagram of a collision avoidance system;

[0024] Figure 3 is a flowchart illustrating an example process for sharing data by a transmitting vehicle;

[0025] Figure 4 is a flowchart illustrating an example process for fusing data by a receiving vehicle;

[0026] Figure 5 is a schematic of an example collision scenario;

[0027] Figure 6 is a diagram depicting pin hole model and image transpose calculations;

[0028] Figure 7 is a graph illustrating bandwidth between two DSRC units;

[0029] Figure 8 is a graph illustrating packet delay between two DSRC units;

[0030] Figure 9 is a graph illustrating various delay in the proposed collision avoidance system; and

[0031] Figures 10A-10F show an example of fused images presented to the driver by the collision avoidance system.

[0032] Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.

DETAILED DESCRIPTION

[0033] Example embodiments will now be described more fully with reference to the accompanying drawings.

[0034] Figure 2 depicts an example of a collision avoidance system 20. The collision avoidance system 20 is deployed across vehicles. In this example, the collision avoidance system 20 is operational between a transmitting vehicle 21 and a receiving vehicle 22. Each vehicle 21, 22 is equipped with an imaging device 23, an image processor 24, and a transceiver 25. The vehicles may also be equipped with other conventional vehicle subsystems, including but not limited to a vehicle navigation system with a display 26 as well as an automatic emergency braking system 27, such as the Pedestrian Collision Avoidance System (PCAS). More or fewer vehicles may be equipped in a similar manner and comprise part of the system. In some embodiments, designated infrastructure locations, such as signs, traffic signals, bridges, etc., can also be equipped in a similar manner and comprise part of the system 20.

[0035] In the example embodiment, the imaging device 23 is a camera integrated into a vehicle. The system can be extended to employ any sensor modality, including lidars, radars, ultrasonic sensors, etc. A more powerful system can be realized by the fusion of a multimodal-sensor system, such as any combination of cameras, lidars, radars, and/or ultrasonic sensors. For sensor modalities that generate a large amount of data, data compression could become necessary. Hence, in the case of visual sensors, video compression/decompression will be critical for achieving efficient communication among the vehicles and/or infrastructure. Any state-of-the-art video coding standard or technology, whether standalone or built into popular cameras, can be used.

[0036] In an example embodiment, the image processor 24 is an Nvidia Drive PX 2 processor. It should be understood that the logic for the control of image processor 24 can be implemented in hardware logic, software logic, or a combination of hardware and software logic. In this regard, image processor 24 can be or can include any of a digital signal processor (DSP), microprocessor, microcontroller, or other programmable device programmed with software implementing the methods described above. It should be understood that, alternatively, the image processor is or includes other logic devices, such as a Field Programmable Gate Array (FPGA), a complex programmable logic device (CPLD), or an application specific integrated circuit (ASIC). When it is stated that image processor 24 performs a function or is configured to perform a function, it should be understood that image processor 24 is configured to do so with appropriate logic (such as in software, logic devices, or a combination thereof).

[0037] In the example embodiment, the wireless network between vehicles is based on underlying DSRC transceivers 25 that adhere to the Intelligent Transportation Society of America (ITSA) and 802.11p WAVE standards, and which are certified by the US DOT. By default, DSRC equipment periodically sends Basic Safety Messages (BSMs). These messages contain vehicle status and application information. DSRC is merely illustrative of how a wireless data link may be established between vehicles, and other communication protocols fall within the broader scope of this disclosure.

[0038] Figure 3 illustrates an example process for sharing data by a transmitting vehicle. Image data is captured at 31 using an imaging device in the transmitting vehicle. Image data may be captured continuously, periodically or in response to a trigger signal. In the example embodiment, the imaging device is a camera although other types of imaging devices are contemplated by this disclosure.

[0039] Image data is then analyzed at 32 to detect and/or identify objects of interest, such as a pedestrian, another vehicle, or other potential hazards. In an example embodiment, objects are detected using a You Only Look Once (YOLO) object detection algorithm. For further details regarding YOLO object detection, reference may be had to "YOLO9000: Better, Faster, Stronger", arXiv:1612.08242, Dec. 2016, which is incorporated by reference. It is readily understood that other object detection methods also fall within the scope of this disclosure.
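For illustration, the sketch below shows how such a detector could be wired up with OpenCV's DNN module and off-the-shelf YOLO weights. This is not the patent's exact pipeline (which ran YOLO on dedicated automotive hardware); the file names, input size, and person class index are assumptions.

```python
# Minimal pedestrian-detection sketch using OpenCV's DNN module with YOLO
# weights. Illustrative only; file names and the class index are assumptions.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect_pedestrians(frame, conf_threshold=0.5):
    """Return [(x, y, w, h, confidence), ...] for detected persons."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            class_id = scores.argmax()
            conf = float(scores[class_id])
            # The person class index depends on the training set
            # (0 for COCO, 14 for VOC); 0 is assumed here.
            if class_id == 0 and conf >= conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh), conf))
    return boxes
```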

[0040] Next, a determination is made regarding whether to share data about the detected object with other vehicles. In this regard, the location of the object is determined at 33 from the image data. This first location of the object is defined with respect to the location of the transmitting vehicle. That is, the transmitting vehicle serves as the reference frame for this first location. Techniques for determining a distance to an object from the imaging data are readily known in the art. For example, when a vehicle detects a pedestrian crossing, it estimates the pedestrian distance l as follows:

l = (f_c · R_h) / I_h

where f_c is the focal length and R_h and I_h are the real pedestrian height in meters and the height in image pixels, respectively.
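A minimal sketch of this distance estimate follows, assuming the focal length is expressed in pixels and using an illustrative average pedestrian height (the disclosure does not specify the value of R_h used):

```python
def pedestrian_distance(f_c_px, person_px_height, real_height_m=1.7):
    """Pinhole-model distance estimate l = f_c * R_h / I_h.

    f_c_px           -- camera focal length in pixels
    person_px_height -- bounding-box height of the person in pixels (I_h)
    real_height_m    -- assumed real pedestrian height R_h (1.7 m is an
                        illustrative average, not a value from the patent)
    """
    return f_c_px * real_height_m / person_px_height

# Example: a 1.7 m pedestrian imaged 170 px tall by a camera with a
# 1000 px focal length is estimated to be 10 m away.
assert pedestrian_distance(1000, 170) == 10.0
```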

[0041] Two different criteria are applied before sharing object information, including its location, with nearby vehicles. First, a criterion may be applied to determine whether a nearby vehicle is a vehicle of interest (i.e., a vehicle to which the object information is to be sent), as indicated at 34. An example criterion is that object information should only be sent to vehicles located next to or behind the transmitting vehicle. Vehicles in front of the transmitting vehicle are not of interest and will not be sent object information. Other example criteria are that vehicles of interest should be traveling in the same direction as the transmitting vehicle and/or should be no more than two lanes away from the transmitting vehicle. Other types of vehicle criteria are contemplated by this disclosure.

[0042] Second, a criterion is applied to determine whether the object is of interest to the recipient vehicle, as indicated at 35. For example, only objects within a predefined distance (e.g., l < 50 meters) from the transmitting vehicle are deemed to be objects of interest. Objects falling outside of the predefined distance are not of interest and information about these objects will not be shared with other vehicles. Likewise, other types of object criteria are contemplated by this disclosure.
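The sketch below combines the two gating checks described in the preceding paragraphs. The heading tolerance, lane width, and the encoding of relative position are assumptions for illustration, not values from the disclosure:

```python
def should_share(obj_distance_m, rcv_rel_position_m, rcv_heading_deg,
                 own_heading_deg, max_obj_distance_m=50.0,
                 lane_width_m=3.7, max_lanes=2, heading_tol_deg=20.0):
    """Apply the two sharing criteria (thresholds assumed for illustration).

    rcv_rel_position_m -- (longitudinal, lateral) offset of the candidate
                          receiving vehicle; negative longitudinal = behind.
    """
    longitudinal, lateral = rcv_rel_position_m
    # Vehicle-of-interest criteria: beside or behind the transmitter,
    # same travel direction, no more than two lanes away.
    if longitudinal > 0:                      # in front: not of interest
        return False
    if abs(lateral) > max_lanes * lane_width_m:
        return False
    heading_diff = (rcv_heading_deg - own_heading_deg + 180) % 360 - 180
    if abs(heading_diff) > heading_tol_deg:
        return False
    # Object-of-interest criterion: within the predefined distance.
    return obj_distance_m < max_obj_distance_m
```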

[0043] For each vehicle of interest, object information is sent at 36 via a wireless data link from the transmitting vehicle to the vehicle of interest (i.e., the receiving vehicle). In an example embodiment, the wireless network is based on underlying DSRC transceivers that adhere to the Intelligent Transportation Society of America (ITSA) and 802.11p WAVE standards. In this case, object information is transmitted periodically using Basic Safety Messages (BSMs) over the DSRC link. Again, it is only necessary to send information for objects of interest.
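Actual BSMs follow the SAE J2735 ASN.1 encoding; the stand-in below merely illustrates the kind of object information the system would exchange, with assumed field names:

```python
# Illustrative object-information payload. Real DSRC BSMs use the SAE
# J2735 ASN.1 encoding; this JSON stand-in only shows the fields the
# system needs to exchange (field names are assumptions).
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SharedObjectInfo:
    sender_id: str
    timestamp: float        # seconds since epoch, for clock alignment
    obj_distance_m: float   # pedestrian distance l from the transmitter
    obj_angle_deg: float    # horizontal angle of the object (beta)
    sender_utm_e: float     # transmitter UTM easting, meters
    sender_utm_n: float     # transmitter UTM northing, meters

def encode(info: SharedObjectInfo) -> bytes:
    return json.dumps(asdict(info)).encode("utf-8")

msg = SharedObjectInfo("vehB", time.time(), 12.4, -8.0, 328712.0, 4718401.0)
payload = encode(msg)   # would ride in the periodic safety message
```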

[0044] Furthermore, image data for an object of interest (e.g., a video segment) is sent to the vehicle of interest. To do so, the transmitting vehicle establishes a secondary data connection between itself and the receiving vehicle. In one example, the transmitting vehicle may establish a TCP connection with the vehicle of interest. Rather than sending all of the captured image data, the transmitting vehicle can send only the data corresponding to the object of interest. For example, the transmitting vehicle sends the image data contained in a bounding box that frames the object, as designated by the object detection algorithm. Prior to sending the image data, the image data is preferably compressed, as indicated at 37. For example, the image data can be compressed using a compression algorithm such as Motion JPEG. Different types of compression methods fall within the broader aspects of this disclosure. In any case, the image data for the object is sent at 38 by the transmitting vehicle to the receiving vehicle. It is to be understood that only the relevant steps of the processing by the image processor 24 are discussed in relation to Figure 3; other software-implemented instructions may be needed to control and manage the overall operation of the system.
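A sketch of the secondary-link sender: crop the bounding box, compress each frame individually as a JPEG (the Motion JPEG approach described above), and push the result over a TCP socket. The host name, JPEG quality, and length-prefix framing are assumptions:

```python
# Secondary-link sender sketch: crop the detected object's bounding box,
# JPEG-compress it per frame, and stream it over TCP with a simple
# length-prefixed framing (framing and endpoint are assumptions).
import socket
import struct
import cv2

def send_roi(frame, box, sock):
    """box is (x, y, w, h) from the object detector."""
    x, y, w, h = box
    roi = frame[max(y, 0):y + h, max(x, 0):x + w]
    ok, jpeg = cv2.imencode(".jpg", roi, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        return
    data = jpeg.tobytes()
    sock.sendall(struct.pack("!I", len(data)) + data)  # 4-byte length prefix

# Hypothetical endpoint for the receiving vehicle's ROI listener.
sock = socket.create_connection(("receiving-vehicle.local", 5600))
```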

[0045] Figure 4 illustrates how shared data is processed by a receiving vehicle. Table 1 defines the variables that are used in system parameter calculations set forth below.

Table 1

The reported locations could be measured in any distance units. For example, they could be in meters, as used in the Universal Transverse Mercator (UTM) coordinate format. Also, the camera location is considered the vehicle reference location. If more than one pedestrian is detected, the same calculations can be performed for each pedestrian. Meanwhile, it is possible to combine two pedestrians who are adjacent or in close proximity as one pedestrian. Here, and for illustrative purposes only, the focus is on a single pedestrian crossing. Each vehicle has a Vehicle of Interest (VoI) list that includes all vehicles that may share useful information with the ego-vehicle.

[0046] Object information is received at 41 by the receiving vehicle. Object information received by the receiving vehicle may include a distance between the two vehicles. For example, the exchanged information may include a vertical distance and a horizontal distance between the vehicles. In this way, the receiving vehicle is able to determine the location of the transmitting vehicle in relation to itself. As noted above, this information may be periodically exchanged using messages sent over a DSRC link. Other types of wireless links could also be used by the vehicles.

[0047] Next, the location of the object is determined at 42 by the receiving vehicle. This location of the object is defined with respect to the location of the receiving vehicle. That is, the receiving vehicle serves as the reference frame for this second location of the object. In the example embodiment, this second location is derived using the first location of the object sent by the transmitting vehicle and the distance between the two vehicles, as will be further described below.
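Under the simplifying assumption that the two vehicles travel in the same direction (so their reference frames differ only by a translation), this derivation reduces to a vector addition, as the sketch below illustrates; a frame rotation term would be needed in the general case:

```python
def object_in_receiver_frame(obj_rel_tx, tx_rel_rx):
    """Translate the object's first location (relative to the transmitting
    vehicle) into the receiving vehicle's frame.

    obj_rel_tx -- (x, y) of the object relative to the transmitter, meters
    tx_rel_rx  -- (x, y) of the transmitter relative to the receiver,
                  i.e. the exchanged horizontal/vertical distances

    Assumes both vehicles travel in the same direction, so their frames
    differ only by a translation.
    """
    return (obj_rel_tx[0] + tx_rel_rx[0], obj_rel_tx[1] + tx_rel_rx[1])

# Example: pedestrian 2 m left and 10 m ahead of vehicle B, which is itself
# 3 m right and 15 m ahead of vehicle A -> 1 m right, 25 m ahead of A.
assert object_in_receiver_frame((-2.0, 10.0), (3.0, 15.0)) == (1.0, 25.0)
```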

[0048] From the location of the object, a safety concern can be evaluated at 43 by the receiving vehicle. In one embodiment, the receiving vehicle computes an expected collision point, D, between the object and the receiving vehicle, as seen in Figure 5. The receiving vehicle can also compute a distance to collision (DTC) and/or a time to collision (TTC) as follows:

DTC = distance from vehicle A to the expected collision point D

TTC = DTC / S_A

where S_A is the speed of vehicle A (e.g., in meters per second). These metrics are merely exemplary.
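A sketch of the DTC/TTC computation, together with the fixed-interval warning check described in the following paragraph; DTC is taken here as the straight-line distance to the object in the receiver's frame, a simplification of the Figure 5 geometry:

```python
import math

def dtc_ttc(obj_rel_rx, speed_mps):
    """Distance-to-collision and time-to-collision for the receiving vehicle.

    obj_rel_rx -- (x, y) object location in the receiver's frame, meters;
                  DTC is simplified to the straight-line distance here.
    speed_mps  -- receiving vehicle speed S_A in meters per second
    """
    dtc = math.hypot(*obj_rel_rx)
    ttc = dtc / speed_mps if speed_mps > 0 else float("inf")
    return dtc, ttc

# Warn a fixed interval (e.g., 5 seconds) before the anticipated collision.
dtc, ttc = dtc_ttc((1.0, 25.0), 13.9)   # roughly 50 kph
if ttc < 5.0:
    print(f"WARNING: pedestrian ahead, TTC {ttc:.1f} s")
```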

[0049] Based on the second location of the object, a safety measure can be implemented in the receiving vehicle, as indicated at 44. For example, assuming an expected collision point exists, a safety concern can be raised and a warning can be issued to the driver of the receiving vehicle. The warning can be issued at a fixed interval (e.g., 5 seconds) before an anticipated collision. The warning may be a visual, audible and/or haptic indicator. In response to a raised safety issue, the receiving vehicle may also implement an automated preventive measure, such as automatic braking of the vehicle.

[0050] Additionally, video for the detected object is received at 45 by the receiving vehicle. The received video can then be fused at 46 with the video captured by the receiving vehicle. Continuing with the example in Figure 5, the image of the obscured pedestrian can be integrated into the video captured by the receiving vehicle. One technique for fusing the data is set forth below.

[0051] After vehicle B receives a request for video streaming, vehicle B shares only the detected pedestrian region of the image, also called the Region of Interest (RoI). Before sending the RoI to vehicle A, the RoI is compressed into a video stream. When vehicle A receives the first image of the video stream, it has to determine whether the object is within the local camera's Horizontal Field of View (HFOV). Hence, the angle ∠α is calculated as shown in Figure 5:

∠α = arctan(r / d)

where

r = l · sin∠β + ΔX and d = l · cos∠β + ΔY

with ΔX and ΔY being the differences in coordinates between the two cameras' locations. Note that r might be negative if ∠β is negative; ∠β is estimated by vehicle B. A simple way to estimate an object's horizontal angle is by comparing the average horizontal pixel location of the object to the camera Horizontal Field of View (HFOV) as follows:

∠β = (1/2 − x̄/W) · HFOV

where x̄ is the average horizontal pixel location of the object and W is the image width in pixels. When ∠β is positive, the object is on the left side of the camera and vice versa. Now if ∠α is larger than the HFOV of vehicle A, only an audible warning is made to the driver.
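The angle computations above can be sketched as follows; the exact formulas in the original are not recoverable from the source text, so these implement the reconstructed geometry under the stated sign convention (positive angle = left of the camera axis):

```python
import math

def object_angle_beta(avg_px_x, image_width_px, hfov_deg):
    """Estimate the object's horizontal angle from its mean pixel column
    (positive = left of the camera axis, matching the convention above)."""
    return (0.5 - avg_px_x / image_width_px) * hfov_deg

def angle_alpha(l_m, beta_deg, dx_m, dy_m):
    """Angle of the shared object as seen from camera A, using the
    reconstructed geometry (an assumption, not the verbatim formula)."""
    b = math.radians(beta_deg)
    return math.degrees(math.atan2(l_m * math.sin(b) + dx_m,
                                   l_m * math.cos(b) + dy_m))

def within_hfov(alpha_deg, hfov_deg):
    # Half the HFOV per side is assumed; outside it, audible warning only.
    return abs(alpha_deg) <= hfov_deg / 2
```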

Otherwise the pedestrian image is transposed onto the local video stream image. As shown in Figure 6, using the camera pinhole model, the object is transferred from the camera B image plane to the camera A image plane as follows:

x_A = f_A · (X + ΔX) / (Z + ΔZ)
y_A = f_A · (Y + ΔY) / (Z + ΔZ)

ΔX, ΔY and ΔZ are the differences in coordinates between the two cameras' locations, which are similar to the variables shown in Figure 5. Both variables X and Y are estimated from camera B using:

X = x_B · Z / f_B
Y = y_B · Z / f_B

where (x_B, y_B) are the object's pixel coordinates in the camera B image plane, f_A and f_B are the focal lengths of the two cameras, and Z is taken as the estimated pedestrian distance l.
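A sketch of this image-plane transpose under the reconstructed pinhole relations, taking the object depth Z as the pedestrian distance l estimated by camera B:

```python
def transpose_to_camera_a(x_b, y_b, f_b, f_a, l_m, dx, dy, dz):
    """Transfer an object pixel from camera B's image plane to camera A's,
    using the pinhole model as reconstructed above (Z is taken as the
    estimated pedestrian distance l; all offsets in meters).
    """
    # Back-project from camera B's image plane to 3D in camera B's frame.
    X = x_b * l_m / f_b
    Y = y_b * l_m / f_b
    # Shift by the inter-camera offsets, then project into camera A.
    x_a = f_a * (X + dx) / (l_m + dz)
    y_a = f_a * (Y + dy) / (l_m + dz)
    return x_a, y_a
```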

After imposing the detected object on the camera A image, the fused image is presented to the driver at 47 on a display. The process is repeated until vehicle B stops sharing detected object information. To avoid sharing unnecessary information, vehicle B stops sharing detected object information when the object is no longer in front of the vehicle and visible to other vehicles. It is important to note that shared sensor information might be updated at different rates. As a result, time (clock) synchronization between the two vehicles is necessary. It is to be understood that only the relevant steps of the processing by the image processor 24 are discussed in relation to Figure 4; other software-implemented instructions may be needed to control and manage the overall operation of the system.

[0052] The experimental setup and results are now described for the example embodiment of the collision avoidance system 20. The experimental setup consists of two vehicles (an SUV and a sedan). In each vehicle, a Cohda MK5 DSRC transceiver (with Global Navigation Satellite System (GNSS)) and a dashboard camera (DashCam) are installed. Although the DSRC transceivers are equipped with GNSS, this embodiment opted to use a separate Real-Time Kinematic (RTK) GNSS because RTK-GNSS offers high-accuracy location estimates compared to the standalone GNSS used in the DSRC transceivers. In these experiments, the Emlid Reach RTK GNSS receiver is used, which is a low-cost off-the-shelf device. To store the collected data, all sensors on each vehicle are connected to a laptop that has the Robot Operating System (ROS) installed. The two vehicles' laptops are connected via the DSRC transceivers during data collection to synchronize the laptop clocks. In addition, a bandwidth test was conducted between the two vehicles to verify the available bandwidth and to emulate channel performance when conducting the experiment in the lab.

[0053] The RTK-GNSS output rate was set to its maximum of 5 Hz and the camera to 24 frames per second (FPS). The DSRC channel data rate was set to 6 Mbps. The experiment was conducted on the Michigan State University campus and surrounding areas, with speed limits ranging up to 55 kilometers per hour (kph). All of the experiments were conducted during daytime. In the first part, channel bandwidth measurements were collected while driving at speeds ranging between 0 and 55 kph, with the distance between the two vehicles' DSRC transceivers ranging from 5 to 100 meters. In the second part, a pedestrian pre-collision scenario was simulated and coordinated by a test team.

[0054] In the lab setup, two ROS-supported desktop PCs were used, connected with stationary DSRC transceivers. The distance between the two transceivers was fixed at 5 meters. To emulate the moving vehicle, based on the road test findings, a random delay of 5 to 15 milliseconds was added to the channel and the maximum channel bandwidth was set to 1.8 Mbps. Both PCs have a Core i7 processor, and one PC has an NVIDIA GTX 1080 Ti GPU. The GPU-capable PC represents vehicle B while the other PC represents vehicle A. The proposed system components were implemented as ROS nodes. The You Only Look Once (YOLO) object detection algorithm was used in the lab experiment, with the pedestrian detection model trained on the Visual Object Classes (VOC) data set. Also, Motion JPEG (MJPEG) was used as the video/image encoding/decoding technique.

[0055] Figures 7 and 8 show a sample of the DSRC bandwidth and packet delay test results, respectively. During these sample results, the distance between the two vehicles was 90 to 120 meters at a speed of 55 kph. The average bandwidth and delay were 2.85 Mbps and 34.5 ms, respectively. It was found that DSRC equipment can carry a high-quality video stream with minimal delay. Similar findings are reported in P. Gomes et al., "Making Vehicles Transparent Through V2V Video Streaming," IEEE Transactions on Intelligent Transportation Systems 13 (2012).

[0056] The YOLO object detection algorithm was able to process 8-10 FPS, which is considered acceptable. However, it is possible to achieve higher processing rates using automotive-oriented hardware. As discussed earlier, after a pedestrian is detected, the pedestrian's distance and angle are estimated. The Region of Interest (RoI) is extracted from the original image and sent to the video/image encoder. The MJPEG encoder compresses each image individually as a JPEG image. This compression method saves a significant amount of time compared to more advanced video compression techniques. The average compressed image size is 3.5 KB, which is much smaller than sharing the full image. For example, a high-quality H.264 video stream of 640x480 at 10 FPS requires 1.029 Mbps, while selective sharing at 10 FPS would need only 280 Kbps (3.5 KB × 8 bits × 10 FPS = 280 Kbps). However, the video streaming rate is limited to 5 Hz, similar to the GNSS update rate, to achieve the best accuracy. The pedestrian distance l and angle ∠β are sent at the detection rate, which is 8 to 10 Hz.

[0057] Figure 9 depicts the delay at every step of operation, where the overall delay is measured between two consecutive image fusions, including the display of the final fused image. The average overall delay is 200 ms, which matches the video sharing rate of 5 Hz, mainly due to the fact that the GNSS update rate is limited to 5 Hz. The average fusion process delay is 33 ms and includes the delay caused by calculation, fusion, and synchronization between remote and local data. Meanwhile, the average channel and object detection delays are 10 ms and 122 ms, respectively. The sum of the fusion, channel, and object detection delays is less than the overall delay, suggesting that the 200 ms delay is not a hard limit; it is possible to increase the information sharing rate by a) improving the object detection processing rate without decreasing detection accuracy and b) increasing the GNSS rate.

Table 2

[0058] Table 2 shows the calculations conducted during the pre-collision interaction, which lasted 2.4 seconds. During that interaction, the driver is warned about the pedestrian crossing. A sample of the fused images is shown in Figures 10A-10F.

[0059] Some portions of the above description present the techniques described herein in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or by functional names, without loss of generality.

[0060] Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[0061] Certain aspects of the described techniques include process steps and instructions described herein in the form of an algorithm. It should be noted that the described process steps and instructions could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.

[0062] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a tangible computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

[0063] The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatuses to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present disclosure is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

[0064] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.