Title:
METHOD AND SYSTEM FOR ONLINE PERFORMANCE MONITORING OF THE PERCEPTION SYSTEM OF ROAD VEHICLES
Document Type and Number:
WIPO Patent Application WO/2017/180394
Kind Code:
A1
Abstract:
Systems and methods are disclosed for monitoring the performance of a vehicle's environment perception sensor system while driving on freeways or inter-urban roads, with the help of other vehicles and their location and perception information received via V2V broadcasts. This enables each vehicle to be continuously aware of its current detection range, which is safety-critical information for all self-driving functions. This monitoring also reveals continuous sub-performance, in which case the vehicle owner needs to have the sensor system checked.

Inventors:
PEUSSA PERTTI (FI)
KUTILA MATTI (FI)
TARKIAINEN MIKKO (FI)
VIRTANEN ARI (FI)
Application Number:
PCT/US2017/026183
Publication Date:
October 19, 2017
Filing Date:
April 05, 2017
Assignee:
PCMS HOLDINGS INC (US)
International Classes:
G08G1/16; G01S7/497; G01S7/52
Foreign References:
US20120101704A1 (2012-04-26)
DE102012024959A1 (2014-06-26)
US9079587B1 (2015-07-14)
US20100198513A1 (2010-08-05)
US20150066412A1 (2015-03-05)
US7162339B2 (2007-01-09)
US20100235129A1 (2010-09-16)
DE102011017593A1 (2012-10-31)
DE102011112243A1 (2012-05-24)
Attorney, Agent or Firm:
STECK, Jeffrey Alan (US)
Claims:
CLAIMS

We claim:

1. A method comprising:

identifying a nearby set of vehicles to a first vehicle based on received wireless messages from a plurality of vehicles;

identifying an observed set of vehicles based on sensors of the first vehicle; and responsive to a determination that at least one vehicle in the nearby set is not in the observed set, performing at least one perception-deficit response action.

2. The method of claim 1, wherein the at least one perception-deficit response comprises presenting an alert to an occupant of the first vehicle.

3. The method of claim 1, wherein the at least one perception-deficit response comprises disengaging an autonomous function of the first vehicle.

4. The method of claim 1, wherein the at least one perception-deficit response comprises adjusting a driving parameter of the first vehicle due to reduced perception capability.

5. The method of claim 1, wherein the at least one perception-deficit response comprises alerting a human occupant to take over control of the first vehicle.

6. The method of claim 1, wherein the wireless messages received by the first vehicle comprise timestamped velocity, heading, and coordinates of a given nearby vehicle.

7. The method of claim 1, wherein the nearby set comprises vehicles from which wireless messages were received by the first vehicle and vehicles observed by those vehicles.

8. The method of claim 1, wherein the wireless messages received by the first vehicle comprise location information of a given nearby vehicle and location information for each vehicle observed by said nearby vehicle.

9. The method of claim 8, further comprising determining a detection certainty value for each of the nearby set of vehicles based on a number of vehicles which have observed said one of the nearby vehicles.

10. The method of claim 9, wherein determining the certainty value further comprises assigning a maximum certainty value when the number of vehicles which have observed a nearby vehicle is greater than or equal to 2.

11. The method of claim 9, wherein determining the certainty value further comprises assigning a middle certainty value when the number of vehicles which have observed a nearby vehicle is equal to 1.

12. The method of claim 9, wherein determining the certainty value further comprises assigning a minimum certainty value when the number of vehicles which have observed a nearby vehicle is equal to 0.

13. The method of claim 9, further comprising comparing the determined certainty values with sensor data of the first vehicle to generate an estimate of the first vehicle's current perception range.

14. The method of claim 1, wherein, for determining that at least one vehicle in the nearby set is not in the observed set, only nearby vehicles within a previously calculated perception range of the first vehicle are compared to the observed set.

15. The method of claim 1, further comprising broadcasting a request for detection data from the first vehicle via a V2V communication system of the first vehicle.

16. The method of claim 15, wherein the request is broadcast responsive to a determination by the first vehicle that a vehicle has entered a perception range of the first vehicle.

17. The method of claim 15, wherein the request is broadcast responsive to a determination by the first vehicle that a vehicle has left a perception range of the first vehicle.

18. A vehicle comprising a processor and a non-transitory storage medium, the storage medium storing instructions operative to perform functions comprising:

identifying a nearby set of vehicles to the vehicle based on received wireless messages from a plurality of vehicles;

identifying an observed set of vehicles based on sensors of the vehicle; and responsive to a determination that at least one vehicle in the nearby set is not in the observed set, performing at least one perception-deficit-response action.

Description:
METHOD AND SYSTEM FOR ONLINE PERFORMANCE MONITORING

OF THE PERCEPTION SYSTEM OF ROAD VEHICLES

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/321,457, filed April 12, 2016, entitled "METHOD AND SYSTEM FOR ONLINE PERFORMANCE MONITORING OF THE PERCEPTION SYSTEM OF ROAD VEHICLES," which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure generally relates to automotive perception sensors.

BACKGROUND

[0003] It is becoming increasingly important for the performance of the perception sensors to be regularly monitored and inspected, since more and more vehicle actions are based on automatic decisions, which are very much dependent on sensor readings. Sensor performance may degrade in cases such as: a head-on collision or other mechanical contact with a sensor area; installation of accessory equipment on the front bumper or on other sensor areas at the sides or rear; re-installing or replacing the bumper, grille, or other parts in the vicinity of a sensor; and/or the like.

[0004] Sensor performance degradation is expected to be an even more serious problem in the future, when more and more self-driving cars are expected to work 24/7 in various weather conditions. Continuous monitoring of sensor performance is needed during driving, and especially in situations such as when sensors become covered with snow, ice, or dirt; or when driving in heavy rain, snowstorms, hailstorms, dense fog, or dust.

[0005] As sensors become covered with snow, ice, or dirt, the received signal strength is reduced, and as a consequence the sensing distance or the fineness of detail is reduced. Some sensors may detect this condition in their self-diagnostic phase; others do not.

[0006] When driving in heavy rain, snowstorms, hailstorms, dense fog, or dust, an increased amount of the sensing signal is scattered in the air, and the received signal is weaker. As a consequence, the reliable sensing distance is reduced. Since scattering is stronger at optical wavelengths, some sensors, such as LIDAR and cameras, are more sensitive to adverse weather conditions. The self-diagnostics of sensors may have severe difficulties in picking up these conditions.

[0007] Many sensor manufacturers have a self-diagnostics functionality built into each sensor.

However, such functionality generally can only detect very obvious errors. For example, a radar can tell whether its internal power-on checks were successful, or a laser scanner can detect whether its front cover is totally blocked.

[0008] Sensor monitoring and calibration is described in, for example, US2015/0066412A1, US7162339B2, US2010/0235129A1, DE102011017593A1, and DE102011112243A1.

SUMMARY

[0009] It is becoming increasingly important that the performance of the perception sensors is regularly monitored, since more and more vehicle actions are based on automatic decisions, which are very much dependent on situational awareness given mainly by the sensor set of the vehicle.

[0010] Currently there are not many methods for online sensor performance checks. Using other vehicles as test targets, as disclosed herein, may have benefits over stationary objects at the roadside, including but not limited to: test targets are available even in very feature-poor environments; the vehicle perception system is already tuned to find and track such targets; they are well defined and detectable despite adverse conditions, such as heavy snowfall; many of them communicate, allowing an additional channel to assure detection; and they can appear 360° around the vehicle, throughout the detection range, and at varying velocities, meaning that they can be used to test several of the perception sensors of the vehicle. Additionally, velocity is useful in sensor performance assessment, especially for radars. Moving vehicles often have lights, which provides further information for sensor performance analysis, such as assessing the performance of a camera system.

[0011] In one embodiment, disclosed herein is a method comprising: identifying a nearby set of vehicles to a first vehicle based on received wireless messages from a plurality of vehicles; identifying an observed set of vehicles based on sensors of the first vehicle; and responsive to a determination that at least one vehicle in the nearby set is not in the observed set, performing at least one perception-deficit-response action.

[0012] In one embodiment, there is a system comprising a processor and a non-transitory storage medium, the storage medium storing instructions operative to perform functions such as, but not limited to, those set forth above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] A more detailed understanding may be had from the following description, presented by way of example in conjunction with the accompanying drawings, wherein:

[0014] FIG. 1 depicts an exemplary embodiment of the detection range of a vehicle having 360 degree perception capability.

[0015] FIG. 2 depicts an exemplary embodiment of the sensor detection range of a vehicle in ideal conditions and in adverse conditions.

[0016] FIG. 3 depicts an exemplary embodiment of the sensor detection range of a vehicle with a damaged right front sensor.

[0017] FIG. 4A depicts an exemplary embodiment of a first vehicle detecting a second vehicle, and the associated perceived information.

[0018] FIG. 4B depicts an exemplary embodiment of the second vehicle detecting the first vehicle, and the associated perceived information.

[0019] FIG. 5 depicts an exemplary embodiment of a typical traffic situation on a freeway, from the perspective of an egovehicle at a given moment in time.

[0020] FIG. 6 is a flow diagram of an exemplary embodiment of a method of sensor performance monitoring operation.

[0021] FIG. 7 depicts an exemplary embodiment of an architecture of data structures and information as used in the present disclosure.

[0022] FIG. 8 illustrates an embodiment of the communication process of sharing detections between vehicles.

[0023] FIG. 9 illustrates an exemplary embodiment of a perception capabilities database (forward looking perception system).

[0024] FIG. 10 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed in some embodiments.

[0025] FIG. 11 illustrates an exemplary network entity that may be employed in some embodiments.

DETAILED DESCRIPTION

[0026] A detailed description of illustrative embodiments will now be provided with reference to the various Figures. Although this description provides detailed examples of possible implementations, it should be noted that the provided details are intended to be by way of example and in no way limit the scope of the application.

[0027] Note that various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.

[0028] Modern vehicles have radars, cameras, laser scanners and ultrasonic perception sensors. They mainly provide the distance and angle to the nearest object(s), typically to surrounding vehicle(s). Most ADAS and V2V equipped cars have at least forward looking sensors. New vehicles are often equipped with a sensor system capable of 360° perception around the vehicle. FIG. 1 illustrates a simplified depiction of the detection (or perception) range 1007 of a vehicle 1005, which has 360° perception capability. In FIG. 1, the vehicle 1005 is heading to the right. In an exemplary scenario, if the vehicle is equipped with an adaptive cruise control (ACC) system, the forward looking distance may be about 200 meters. Confirmed objects 1010 may each have a distance 1015 and angle 1020 from the vehicle 1005.

[0029] It is becoming increasingly important that the performance of the perception sensors is constantly monitored, since more and more vehicle actions are based on automatic decisions, which are highly dependent on the vehicle's situational awareness, which in turn is highly dependent on sensor readings.

[0030] Especially when driving in adverse weather conditions, the sensor performance may vary with the weather. Therefore, sensor performance monitoring should be carried out frequently and automatically during normal driving.

[0031] FIG. 2 illustrates a simplified depiction of the sensor detection range of a vehicle 205 in ideal conditions (range 210), and in adverse conditions (range 215), such as dirt on the sensors. Since adverse weather conditions (e.g., heavy snow) or driving situations (e.g., oncoming car is raising dust in the air) can change rapidly, it is important to frequently monitor the performance of the perception system during driving.

[0032] The self-diagnostics of a perception sensor generally cannot alone detect sensor heading errors. Also, conditions such as rain, snow, fog, or the like are not easily detectable with the sensor alone. Larger sensor heading errors (such as those resulting from a possible bumper contact) or malfunctions can only be fixed by mechanical adjustment or repairs.

[0033] FIG. 3 illustrates a simplified depiction of the sensor detection range 310 of a vehicle 305 with a damaged right front sensor (or otherwise impaired sensor). While some objects 315 are detected and have their distance and angle relative to vehicle 305 confirmed, along a portion 320 the vehicle 305 is unable to detect anything. In an alternative example, the sensor view could be blocked, such as by a retrofitted fog light, or the like. In other examples, the sensor may be so dirty that it is practically nonfunctioning. In any case, the vehicle 305 is partly blind (in non-detection area 320) and should be inspected and repaired soon, because its functions, potentially including self-driving, are severely compromised.

[0034] V2V communication equipped vehicles may be repeatedly broadcasting information (such as their location, heading, velocity, etc.) to other vehicles in the area. In typical conditions, this message may carry 300 meters or more. This means that a particular vehicle can often be aware of other vehicles beyond its sensor detection range.

[0035] It is becoming increasingly important that the performance of the perception sensors is regularly monitored, since more and more vehicle actions are based on automatic decisions, which are very much dependent on situational awareness given mainly by the sensor set of the vehicle.

[0036] Currently there are not many methods for online sensor performance checks. Using other vehicles as test targets has a number of benefits over stationary objects at the roadside, including but not limited to: test targets are available even in very feature-poor environments; the vehicle perception system is already tuned to find and track such targets; they are well defined and detectable despite adverse conditions, such as heavy snowfall; many of them communicate, allowing an additional channel to assure detection; and they can appear 360° around the vehicle, throughout the detection range, and at varying velocities, meaning that they can be used to test all the perception sensors of the vehicle. Additionally, velocity is useful in sensor performance assessment, especially for radars. Moving vehicles often have lights, which provides further information for sensor performance analysis, such as assessing the performance of a camera system.

[0037] In an exemplary embodiment, two oncoming vehicles - both having perception capability - measure how far away and at what angle the other vehicle is, and share the information. This allows both vehicles to get an independent estimate of their respective perception ranges and angular correctness.

[0038] FIGS. 4A-4B illustrate simplified depictions of the interaction of two oncoming vehicles. A first vehicle 405 (referred to hereafter as the "egovehicle") detects a second vehicle 410 at a distance 422a and angle 424a (in the egovehicle's coordinate system, relative to direction of travel 420a) at a certain time. When the egovehicle 405 receives the second vehicle's detections (e.g., distance 422b and angle 424b in vehicle 410's coordinate system, relative to its direction of travel 420b), the egovehicle 405 can deduce how far its sensors were able to 'see' the second vehicle 410 at this point. Since the detections will likely take place at different times, the information exchange may also include a measurement time stamp, vehicle speed and heading (e.g., 407 and 412), and/or the like, to allow calculations to match the times of the measurements.
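To make the geometry concrete, the sketch below (with hypothetical helper names, not taken from the application) converts a (distance, angle) detection made in a reporting vehicle's local frame, together with that vehicle's broadcast position and heading, into a common frame; the egovehicle can then compare the reported detection against its own known position.

```python
import math

def detection_to_global(x: float, y: float, heading_deg: float,
                        distance: float, angle_deg: float) -> tuple[float, float]:
    """Convert a (distance, angle) detection made in a vehicle's local frame
    into coordinates of a common planar frame.

    (x, y)       -- reporting vehicle's broadcast position (e.g., metres in a
                    local ENU frame derived from its coordinates)
    heading_deg  -- reporting vehicle's heading, degrees from the +x axis
    distance     -- measured range to the detected object, metres
    angle_deg    -- measured bearing to the object, degrees relative to the
                    reporting vehicle's direction of travel
    """
    bearing = math.radians(heading_deg + angle_deg)
    return (x + distance * math.cos(bearing),
            y + distance * math.sin(bearing))

# Example: a vehicle at (300, 0) heading 180 degrees reports an object 295 m
# ahead at 0 degrees; that object should lie near an egovehicle at (5, 0).
print(detection_to_global(300.0, 0.0, 180.0, 295.0, 0.0))  # ~ (5.0, 0.0)
```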

[0039] In a more general case (see FIG. 5), the egovehicle broadcasts the request to share detected objects with all vehicles in the communication range. When this is done frequently enough - initiated by the egovehicle or some other vehicle in the area - the vehicles can maintain an up-to-date measure of what their actual detection range is.

[0040] In some embodiments of the present disclosure, the disclosed approach is applied only in freeways or inter-urban roads, as in dense cities the vehicles may often be so close to each other that it is not always clear which of the detected objects is which vehicle - especially in traffic jams. Further, in many instances it is more important to monitor the true range of the perception in situations involving highways, freeways, and inter-urban roads, for example due to the higher speeds used.

[0041] An exemplary embodiment of the present disclosure is set forth with two vehicles exchanging their perceived information, as in FIGS. 4A-4B and 5. Generally, several entities may be involved: the egovehicle, one or more other V2V equipped vehicles (including both self-driven and manually driven vehicles), and in some instances one or more non-V2V equipped vehicles.

[0042] In one embodiment, there is a Vehicle 0 (egovehicle), which may be self-driven or manually driven. In some embodiments, the egovehicle may further comprise: a time management module satisfying the requirements given in DSRC or C-ITS specifications; one or more environment perception sensors; a perception system, which monitors the perception sensors and keeps track of the detected objects; a V2V communication system including BSM (DSRC) or CAM (C-ITS) message reception; a module for location tracking of other vehicles based on received BSM/CAM messages; processing capability to match and quantify received perceptions quickly; and an updatable database for storing information about sensor perception capability of the egovehicle. Additional features may be included, or the recited features may be modified as known to one of ordinary skill in the art.

[0043] Other V2V equipped self-driven or manually driven vehicles may have similar functionalities to the egovehicle as described above. In some embodiments, those other vehicles may be provided with only: a time management module satisfying the requirements given in DSRC or C-ITS specifications; one or more environment perception sensors; a perception system, which monitors the perception sensors and keeps track of the detected objects; and a V2V communication system including BSM (DSRC) or CAM (C-ITS) message reception.

[0044] Non-V2V equipped vehicles need no specific features, as they are only used as potential common landmarks possibly tracked by the egovehicle and/or other V2V vehicles.

[0045] Generally, it is preferred that the detection range of the egovehicle is tested when other vehicles are about to enter or leave its perception range. This enables sensor performance testing near its limits. Since 250 meters is the typical upper limit of most perception sensors, an interurban road or highway with a 250-meter line-of-sight is ideal for this. However, the present disclosure is applicable with shorter line-of-sight as well, but may not test the upper limit of the sensor system.

[0046] FIG. 5 illustrates a simplified depiction of a typical traffic situation on a freeway (as illustrated in FIG. 5, there is a four-lane freeway scenario, however alternative scenarios such as urban 2 lane roads are also envisioned herein). An egovehicle 515 knows or receives the locations, heading, speed, and vehicle type of other V2V equipped vehicles within its communication range, and tracks some vehicles within its perception range. In some instances, there may also be vehicles capable of Visible Light Communication. Such communication is included among the forms of V2V communication in this disclosure.

[0047] In FIG. 5, there are a plurality of vehicles, where vehicles 520, 526, 534, 536, 540, 552, and 554 do not have V2V functionality, while vehicles 522, 524, 528, 530, 532, 538, 542, 544, 546, 548, and 550 do have V2V functionality. Also, egovehicle 515 has a currently assumed perception range 505, and a current awareness range via V2V communications 510.

[0048] In the moment depicted in FIG. 5, the egovehicle 515 cannot see vehicle 538 due to the bushes, nor vehicle 540 behind vehicle 536.

[0049] In one embodiment, the egovehicle 515 may identify a nearby set of vehicles based on received wireless messages from a plurality of vehicles (e.g., vehicles within the awareness range via V2V 510). The egovehicle 515 may then identify an observed set of vehicles based on its sensors (such as radar, LiDAR, optical sensors, camera, etc.) within a currently assumed perception range 505. In the embodiment of FIG. 5, egovehicle 515 may observe a set of vehicles including vehicles 536 and 542. Among the nearby set of vehicles is also vehicle 532, which has sent a wireless message including its location to egovehicle 515. In an embodiment, egovehicle 515 only evaluates vehicles in the nearby set which are within a previously calculated perception range 505 of the egovehicle. Here, because the egovehicle 515 "knows" that vehicle 532 is in the nearby set, and is within its perception range 505 but is not observed by the egovehicle's sensors, egovehicle 515 may determine that it has a perception deficit. Given this perception deficit of a vehicle in the nearby set not being observed by egovehicle 515, the egovehicle may perform at least one perception-deficit response action.
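A minimal sketch of this comparison, assuming both sets have been reduced to positions in the egovehicle's local frame with the egovehicle at the origin (the helper names and tolerance are illustrative, not from the application):

```python
def find_perception_deficits(nearby, observed, perception_range, tolerance=5.0):
    """Return nearby vehicles that fall inside the assumed perception range
    but are not matched by any sensor observation.

    nearby           -- {vehicle_id: (x, y)} positions reported via V2V
    observed         -- iterable of (x, y) positions from the egovehicle's sensors
    perception_range -- currently assumed perception radius, metres
    tolerance        -- max distance, metres, for a V2V report and a sensor
                        track to count as the same vehicle
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    missed = []
    for vid, pos in nearby.items():
        if dist(pos, (0.0, 0.0)) > perception_range:
            continue  # outside the range being monitored; ignore
        if not any(dist(pos, obs) <= tolerance for obs in observed):
            missed.append(vid)
    return missed

# A non-empty result suggests a perception deficit, after which the vehicle
# may perform a perception-deficit response action (alert, adjust driving, etc.).
```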

[0050] In some embodiments, the perception-deficit response action may comprise one or more of (but is not limited to): presenting an alert to an occupant of the vehicle; disengaging an autonomous functionality of the vehicle; adjusting a driving parameter of the vehicle, such as due to a reduced perception capability; alerting a human occupant to take over control of the vehicle; generating an alert that a sensor of the vehicle may be malfunctioning; and/or the like. Types of driving parameters may include speed of the vehicle, distance from adjacent vehicles, driving route of the vehicle, whether an autonomous or semi-autonomous vehicle may operate in an autonomous mode, and/or the like.

[0051] In another exemplary embodiment related to FIG. 5, the disclosed process may be as follows, and is also depicted as a general flow diagram in FIG. 6.

[0052] If the time from the last received Detection Data Response is longer than a predetermined value (605), and the egovehicle 515 has noticed a vehicle entering or leaving its perception range (610) (e.g., vehicle 544 in FIG. 5), the egovehicle 515 may broadcast a Detection Data Request via its V2V communication device (615), and then wait some period of time for responses (620). In the case of FIG. 5, the egovehicle 515 may broadcast the request to vehicles 522, 524, 528, 530, 532, 538, 542, 544, 546, 548, and 550, which are each V2V equipped vehicles within the V2V communications range 510 of the egovehicle 515. Each of these other vehicles may have similar functionality to the egovehicle.
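A simple illustration of this trigger logic might look as follows (the interval value and class name are assumptions, not taken from the disclosure):

```python
import time

class DetectionDataTrigger:
    """Decide when to broadcast a Detection Data Request (illustrative only)."""

    def __init__(self, min_interval_s: float = 1.0):
        self.min_interval_s = min_interval_s     # predetermined value (605)
        self.last_response_time = 0.0            # time of last received response

    def should_broadcast(self, vehicle_entered_or_left_range: bool) -> bool:
        # Broadcast only if responses are stale (605) and a vehicle has just
        # entered or left the perception range (610).
        stale = (time.time() - self.last_response_time) > self.min_interval_s
        return stale and vehicle_entered_or_left_range

    def note_response(self) -> None:
        self.last_response_time = time.time()
```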

[0053] In an exemplary embodiment, the message structure of a Detection Data Request may comprise, but is not limited to: a Command, such as "Return your detection and egomotion data"; (Angle, Distance, RelativeVelocity) triples of all objects detected and tracked by the egovehicle; a Last DetectionTime0, and the corresponding Velocity0, Heading0, Coordinates0 of the egovehicle; VechCategory0 (e.g., "moped", "motorcycle", "passengerCar", "bus", "lightTruck", "heavyTruck", "trailer", "specialVehicle", or "tram") of the egovehicle.

[0054] Each of the V2V equipped vehicles that can receive the broadcast may respond with a Detection Data Response broadcast. For example, in FIG. 5, the egovehicle may broadcast the request to vehicles 522, 524, 528, 530, 532, 538, 542, 544, 546, 548, and 550. The number of responding broadcasts may depend on the number of V2V vehicles in the vicinity of the egovehicle (e.g., V2V communications range 510 of egovehicle 515). For example, in the scenario of FIG. 5, the egovehicle would optimally receive a response from each of vehicles 522, 524, 528, 530, 532, 538, 542, 544, 546, 548, and 550.

[0055] In an exemplary embodiment, the message structure of a Detection Data Response may comprise, but is not limited to: a Command, such as "Returning my detection and egomotion data"; (Angle, Distance, RelativeVelocity) triples of all objects detected and tracked by the responding vehicle; a Last DetectionTimeX, and the corresponding VelocityX, HeadingX, CoordinatesX of the responding vehicle; VechCategoryX (e.g., "moped," "motorcycle," "passengerCar," "bus," "lightTruck," "heavyTruck," "trailer," "specialVehicle," or "tram") of the responding vehicle.
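One possible way to model these message payloads as plain records is sketched below; the field names follow the description above, but the types and grouping are assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

# (Angle, Distance, RelativeVelocity) triple for one detected and tracked object.
Detection = Tuple[float, float, float]

@dataclass
class DetectionDataMessage:
    """Common payload of a Detection Data Request / Response (illustrative)."""
    command: str                      # e.g. "Return your detection and egomotion data"
    detections: List[Detection]       # all objects detected and tracked by the sender
    detection_time: float             # last detection time stamp
    velocity: float                   # sender velocity at that time
    heading: float                    # sender heading at that time
    coordinates: Tuple[float, float]  # sender coordinates at that time
    vehicle_category: str             # e.g. "passengerCar", "heavyTruck", "tram"
```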

[0056] The egovehicle 515 may then calculate the locations of all the received objects at the time of its last detection (625). For this it uses the received detections together with the coordinates, heading, speed, and detection time found in each broadcast message.
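A minimal sketch of this time alignment, assuming constant speed and heading over the short interval between detections (the function name is illustrative):

```python
import math

def project_position(x, y, heading_deg, speed, t_from, t_to):
    """Dead-reckon a vehicle's position from time t_from to time t_to,
    assuming constant speed and heading over the (short) interval.
    Used here to align detections reported at slightly different times."""
    dt = t_to - t_from
    heading = math.radians(heading_deg)
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading))
```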

[0057] The egovehicle 515 may match the objects detected by the egovehicle's sensors with the refined object locations calculated above (630). Based on a mathematical analysis, each object detected by the egovehicle may be assigned a detection certainty value (635), which may be stored in a perception capability database (640) together with angle, distance, and relative velocity. For example, the certainty value may be at a maximum value if two or more other vehicles have detected the object (within an error tolerance) at the same location as the egovehicle. If only one other vehicle has detected the object at the same location as the egovehicle, the certainty may be halfway between the maximum value and a minimum value. If no other vehicle has responded indicating detection of the object, the certainty may be the minimum value. Based on these factors, the egovehicle may generate a best estimate of the current perception range (645). In some embodiments, the estimate of the perception range may utilize several detection certainties generated over a period of time. For example, in one embodiment, the estimated perception range may modify the manufacturer's given sensor coverage area so that the range is shape-scaled such that the farthest object detected by the egovehicle and also detected by two other vehicles (e.g., at maximum confidence) just fits within the shape-scaled perception range. In other embodiments, for example, the estimated perception range may be evaluated for specific sets of objects, such as groupings by vehicle category, and/or the like.
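One possible realisation of the shape scaling mentioned above, simplifying the manufacturer's coverage shape to a single nominal range value (an assumption for illustration):

```python
def estimate_perception_range(nominal_range: float,
                              max_confidence_ranges: list[float]) -> float:
    """Scale the nominal (manufacturer-given) range so that the farthest
    detection confirmed at maximum certainty just fits inside it.

    nominal_range          -- manufacturer's stated detection range, metres
    max_confidence_ranges  -- ranges of egovehicle detections that were also
                              confirmed by two or more other vehicles
    """
    if not max_confidence_ranges:
        return nominal_range  # no confirmed detections; keep the prior estimate
    farthest = max(max_confidence_ranges)
    return min(nominal_range, farthest)
```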

[0058] In one embodiment, the egovehicle may generate a point cloud using the egovehicle's detections and time-calibrated detections received from other vehicles. In one embodiment, the egovehicle may calculate a best-fit point to represent each cloud, and determine how many detection points fall within an error tolerance of the best-fit point for each cloud. For example, if two or more received points and the egovehicle detection are within the error tolerance, the detection distance may receive the maximum certainty value. If only one received point and the egovehicle detection are within the tolerance, the detection distance may receive a middle certainty value. If only the egovehicle detection is within the tolerance, the detection distance may receive a minimum certainty value.
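The per-object certainty assignment described here might be sketched as follows, using the cloud centroid as the best-fit point (the tolerance and certainty values are placeholders, not taken from the application):

```python
def certainty_for_cloud(ego_point, received_points, tolerance=2.0,
                        values=(0.0, 0.5, 1.0)):
    """Assign a detection-certainty value to one object.

    ego_point       -- (x, y) of the egovehicle's detection
    received_points -- list of (x, y) time-calibrated detections received
                       from other vehicles for the same object
    tolerance       -- error tolerance, metres, around the best-fit point
    values          -- (minimum, middle, maximum) certainty values
    """
    cloud = [ego_point] + list(received_points)
    # Best-fit point: here simply the centroid of the cloud.
    cx = sum(p[0] for p in cloud) / len(cloud)
    cy = sum(p[1] for p in cloud) / len(cloud)

    def within(p):
        return ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 <= tolerance

    confirming = sum(1 for p in received_points if within(p))
    if confirming >= 2:
        return values[2]  # maximum certainty
    if confirming == 1:
        return values[1]  # middle certainty
    return values[0]      # minimum certainty
```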

[0059] In some embodiments, the received detections may be used to diagnose the sensor system. For example, a large number of objects detected by other vehicles but never by the egovehicle within a certain detection sector may indicate limitations of the egovehicle's sensors (e.g., a blocked sensor, malfunctioning sensor, range-limited sensor, etc.), such as shown in FIG. 3. As such, the egovehicle continuously tracks its perception capability, which can be utilized in various safety features. Safety features may include, but are not limited to, features such as requesting the driver to take over control of the vehicle due to substandard perception capability, indicating a sensor re-calibration need, or indicating a serious malfunction in the sensor system.

[0060] In some embodiments, if the current perception range is an iteratively updated estimation, then temporary inability of the egovehicle to track particular objects (e.g., vehicles 538 and 540 in FIG. 5) may not indicate limited sensors, provided a threshold time since the egovehicle's last detection of the object has not been exceeded (e.g., to allow time for a passing vehicle, passing a cluster of trees, etc.).

[0061] FIG. 7 depicts an exemplary embodiment of an architecture of data structures and information as used in some embodiments. As shown, the system 702 of the egovehicle (Vehicle 0) may maintain a list of tracked objects 705, maintain structured data 710, record information related to the egomotion of the egovehicle 715 (e.g., heading, velocity, coordinates, vehicle category, etc.), utilize a communication module for V2V and/or other communications 720, utilize a clock/timing module 725, maintain a database of current perception capabilities 730, and iteratively refine those perception capabilities through perception refinement 735 based on its own sensor readings and data responses received from other vehicles through communications 740, and/or the like.

[0062] Other vehicles (such as vehicles 1, 2, 3, N) may be expected to have systems (702b, 702n) having comparable features to the egovehicle, such as lists of tracked objects (705b, 705n), structured data (710b, 710n), record information (715b, 715n), communications (720b, 720n), clocks (725b, 725n), and/or the like.

[0063] The egovehicle may then make this updated perception range (or any other current perception capabilities) available to all self-driving and/or safety applications present in the egovehicle (650). In some embodiments, such as for a self-driven egovehicle, if the updated perception range is below a predefined threshold permitted for the present speed of the self-driven egovehicle, a system safety function may determine that the driver must take over control of the vehicle from a self-driving functionality.

[0064] In some embodiments, the egovehicle may delete from the perception database any material which is older than a predetermined value (655) (e.g., delete any object that has not been detected for 2 seconds; keep the 25 newest high-certainty detections and delete the oldest after a new one has been received; etc.).
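The pruning rules mentioned above might be sketched as follows (field names and default limits are illustrative assumptions):

```python
def prune_perception_db(entries, now, max_age_s=2.0, keep_newest=25):
    """Drop over-age readings from a perception-capability database.

    entries     -- list of dicts with at least a 'time' key (detection time, s)
    now         -- current time, s
    max_age_s   -- discard anything not detected within this window
    keep_newest -- keep at most this many of the newest remaining entries
    """
    fresh = [e for e in entries if now - e["time"] <= max_age_s]
    fresh.sort(key=lambda e: e["time"], reverse=True)
    return fresh[:keep_newest]
```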

[0065] FIG. 8 illustrates an embodiment of the communication process of sharing detections between vehicles. In embodiments using broadcasted messages, all V2V equipped vehicles will receive the current perceptions of V2V equipped vehicles in the vicinity. This data enables maximum situational awareness among all V2V equipped vehicles, as potentially hidden objects are likely to be identified thanks to overlapping perception ranges from various angles among the plurality of vehicles. For example, egovehicle 805 may broadcast a perception data request message 810 (including its own detection data) to other vehicles 807. Egovehicle 805 may then enter a waiting period 815 in which it may receive detection data responses 820a, 820b, ..., 820n from any or all other vehicles 807. The detection data responses 820a, 820b, ..., 820n each include detection data from the transmitting other vehicle. After the end of the waiting period 815, egovehicle 805 may perform various operations 830 (such as those discussed above in relation to FIG. 6), including but not limited to matching detections and updating the perception capability database.

[0066] The present disclosure may also be adapted to additional situations, including but not limited to the following.

[0067] In one embodiment, instead of vehicles measuring Distance and Angle readings asynchronously, a new broadcast message may be defined, which triggers perceptions simultaneously in all the V2V equipped vehicles within a geographical area defined in the triggering message (e.g., within 250 meters of the egovehicle). In such embodiments, the time stamps and heading information could be omitted from messages, since all Distance, Angle and Velocity relate to the 'snapshot' time, and there is no need for the egovehicle to correct or refine the received response data for time differences between sensor readings for various vehicles.

[0068] In one embodiment, if the 'Current perception capabilities' parameters are logged with time and vehicle coordinates, the sensor performance as a function of weather conditions may be estimated for that vehicle make and model. This information could be used instead of an updated perception capabilities database in cases where the egovehicle drives for long periods without any other vehicles around, but with weather information available.

[0069] In some embodiments, the changes in the detection range may also be transmitted to other vehicles as digital data that complements weather info of the area. Since different car makes and models will have different sensors installed, the 'weather info' should be in a normative form (e.g., what is the performance degradation in various wavelengths and frequencies). Therefore, the consequences of weather on sensor performance of each vehicle may be estimated at the sensor level, which requires dealing with the functions below the perception system level.

[0070] In some embodiments, a vehicle estimates the sensor system's performance without any Detection Data Request by comparing the distance and angle in the list of tracked objects with the location of the matching vehicle in the dynamic data storage. The result will have lower confidence levels, since in this method the algorithm cannot utilize the detections of other vehicles to confirm the object.

[0071] Real World Examples. Some causes of sensor performance degradation create a constant degradation, which can be repaired only by redirecting the sensor(s), removing the hindering obstacle(s), and possibly calibrating the sensors after the fix. Examples include mechanical contact with a sensor area; installation of accessory equipment on a front bumper or on other sensor areas at the sides or rear; re-installing or replacing the bumper, grille, or other parts at sensor areas; a rough carwash; or the like.

[0072] Some cases may cause performance to degrade gradually, for example as the sensors gradually become covered with snow, ice, dirt, etc. This may be especially true if the degradation is caused by gradually accumulating dirt. Snow and ice can thaw away, and therefore the sensor performance may recover at least partially in warmer conditions (partially, because in road conditions snow and/or ice is generally accompanied by dirt). If the sensor is behind the windshield in an area cleaned by wipers, it is more protected from blockage by snow, ice, dirt, or the like, unless the windshield wipers fail. The same applies if the sensor is behind a similarly cleaned area, such as within the headlights (in the case of headlight washers and/or wipers).

[0073] In the case of driving in heavy rain, snowstorms, hailstorms, dense fog, dust, or the like, certain densities of rain, snow, hail, fog, or dust may cause multiple incorrect readings, or shorten the reliable sensing distance, especially with optical sensors. Typically, a higher density weather condition causes greater sensor performance degradation. The weather conditions may often vary, which further means that the sensor performance changes often. Therefore, the vehicle should be able to carry out sensor performance checks frequently, in order to understand how much it can rely on its sensors at any given time.

[0074] FIG. 9 illustrates an exemplary embodiment of a perception capabilities database (forward looking perception system). As can be seen in FIG. 9, an egovehicle database may record the real-time or actual perception capabilities of the egovehicle 905. For example, despite a maximum detection range 940 given by the manufacturer, the methods disclosed herein may determine the actual detection range of the egovehicle's sensors, including detection ranges based on the class of the object detected. As shown in FIG. 9, the methods disclosed herein may result in the egovehicle 905 determining functional detection ranges for different classes of vehicles, such as motorcycles (functional range 910), cars (functional range 920), and trucks (functional range 930). In some embodiments, such ranges may be determined by the maximum ranges at which different classes of vehicles have been detected.
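A simplified, illustrative version of such a per-class capability database might look like this (class names and API are assumptions):

```python
from collections import defaultdict

class PerceptionCapabilityDB:
    """Track the farthest confirmed detection range per vehicle class
    (a simplified, illustrative version of the database in FIG. 9)."""

    def __init__(self, manufacturer_range: float):
        self.manufacturer_range = manufacturer_range
        self._ranges = defaultdict(float)  # vehicle class -> farthest range seen

    def record_detection(self, vehicle_class: str, distance: float) -> None:
        if distance > self._ranges[vehicle_class]:
            self._ranges[vehicle_class] = distance

    def functional_range(self, vehicle_class: str) -> float:
        # Never report more than the manufacturer's stated maximum.
        return min(self._ranges[vehicle_class], self.manufacturer_range)

db = PerceptionCapabilityDB(manufacturer_range=250.0)
db.record_detection("passengerCar", 180.0)
db.record_detection("heavyTruck", 230.0)
print(db.functional_range("passengerCar"))  # 180.0
```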

[0075] Additional Embodiments. In one embodiment, there is a method of adjusting vehicle operation responsive to a detected deficit in perception sensing, comprising: receiving, at a first vehicle, from one or more vehicles of a plurality of vehicles within a predefined range of the first vehicle, information regarding the location of the one or more vehicles; comparing the information regarding the location of each of the one or more vehicles with information derived from sensors of the first vehicle; responsive to a determination that at least one vehicle of the plurality of vehicles for which the received information regarding location of the vehicle corresponds to a location within a predetermined sensing region around the first vehicle is not indicated to be at that location by the information derived from sensors of the first vehicle: adjusting the function of at least one driving function of the first vehicle. The method may include wherein adjusting the function comprises at least one of: disengaging the function, slowing the vehicle, or alerting a human occupant to take over control of the vehicle. The method may include wherein the information received at the first vehicle is received by a V2V communication system. The method may include wherein the V2V communication system comprises a WTRU. The method may include wherein the first vehicle and at least one of the one or more vehicles have V2V functionality.

[0076] In an embodiment, there is a vehicle comprising a processor and a non-transitory storage medium, the storage medium storing instructions operative to perform functions comprising: receiving, at a first vehicle, from one or more vehicles of a plurality of vehicles within a predefined range of the first vehicle, information regarding the location of the one or more vehicles; comparing the information regarding the location of each of the one or more vehicles with information derived from sensors of the first vehicle; responsive to a determination that at least one vehicle of the plurality of vehicles for which the received information regarding location of the vehicle corresponds to a location within a predetermined sensing region around the first vehicle is not indicated to be at that location by the information derived from sensors of the first vehicle: adjusting the function of at least one driving function of the first vehicle.

[0077] In an embodiment, there is a method comprising: receiving, at an autonomous vehicle, a message indicating a first location estimate of a nearby vehicle; operating at least one sensor of the autonomous vehicle to generate a second location estimate of the nearby vehicle; comparing the first location estimate with the second location estimate to determine whether a difference between the first and second location estimates exceeds a threshold; in response to a determination that a difference between the first and second location estimates exceeds the threshold, initiating a transition from an autonomous driving mode to a manual driving mode. The method may also include wherein the threshold is an absolute distance threshold. The method may also include wherein the threshold is a percentage distance threshold. The method may also include wherein the message indicating the first location estimate of the nearby vehicle is received from the nearby vehicle. The method may also include wherein the message indicating the first location estimate of the nearby vehicle is received from a vehicle other than the nearby vehicle.
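A minimal sketch of the threshold comparison described above; interpreting the percentage threshold as a percentage of the sensed range is an assumption made here for illustration:

```python
def estimates_disagree(reported, sensed, abs_threshold=None, pct_threshold=None):
    """Return True if two location estimates of the same nearby vehicle differ
    by more than a configured threshold: absolute metres, or a percentage of
    the sensed range (an assumed interpretation). Exactly one threshold is
    expected to be set."""
    dx = reported[0] - sensed[0]
    dy = reported[1] - sensed[1]
    diff = (dx * dx + dy * dy) ** 0.5
    if abs_threshold is not None:
        return diff > abs_threshold
    sensed_range = (sensed[0] ** 2 + sensed[1] ** 2) ** 0.5
    return diff > (pct_threshold / 100.0) * sensed_range
```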

[0078] In an embodiment, there is an autonomous vehicle comprising a processor and a non- transitory storage medium, the storage medium storing instructions operative to perform functions comprising: receiving, at an autonomous vehicle, a message indicating a first location estimate of a nearby vehicle; operating at least one sensor of the autonomous vehicle to generate a second location estimate of the nearby vehicle; comparing the first location estimate with the second location estimate to determine whether a difference between the first and second location estimates exceeds a threshold; in response to a determination that a difference between the first and second location estimates exceeds the threshold, initiating a transition from a self-driving mode to a manual driving mode.

[0079] In an embodiment, there is a method comprising: sending a detection data request from a first vehicle; receiving at least a first response to the detection data request from at least a second vehicle; determining the location of a plurality of vehicles within a detection range of the first vehicle based on sensor data from at least a first sensor measured by the first vehicle and the at least first response to the detection data request; determining a certainty value for each of the plurality of vehicles based on a comparison of the sensor data gathered by the first vehicle and the at least first response to the detection data request; and responsive to a determination that a range for one or more vehicles detected through the at least first sensor and a range determined from the at least first response is different: determining that a functional range of the at least first sensor is restricted. The method may further comprise providing an alert that an autonomous driving mode of the first vehicle cannot be used. The method may further comprise disabling an autonomous mode of the first vehicle. The method may further comprise, after sending the detection data request, waiting for a defined waiting period for responses from the at least second vehicle. The method may include wherein the detection data request is sent by a V2V communication system. The method may include wherein the V2V communication system comprises a WTRU. The method may include wherein the first vehicle and the at least second vehicle have V2V functionality. The method may include wherein determining the certainty value for each of the plurality of nearby vehicles further comprises: determining a number of vehicles which have detected one of the plurality of nearby vehicles; and assigning a certainty value to detection data related to said one of the plurality of nearby vehicles, wherein the certainty value is based on the number of vehicles which detected said one of the plurality of nearby vehicles. The method may include wherein assigning the certainty value further comprises assigning a maximum certainty value when the number of vehicles which have detected one of the plurality of vehicles is greater than or equal to 2. The method may include wherein assigning the certainty value further comprises assigning a middle value when the number of vehicles which have detected one of the plurality of vehicles is equal to 1. The method may include wherein assigning the certainty value further comprises assigning a minimum value when the number of vehicles which have detected one of the plurality of vehicles is equal to 0. The method may include wherein determining whether vehicles have detected the same vehicle includes an error tolerance factor. The method may further comprise comparing the certainty value with the sensor data of the first vehicle to generate an estimate of the first vehicle's current perception range. The method may include wherein if the current perception range is below a predefined threshold for a current speed of the first vehicle, requiring a driver of the first vehicle to resume control from a self-driving functionality of the first vehicle.

[0080] In an embodiment, there is a method of sensor performance monitoring for a self- driving vehicle, comprising: broadcasting a detection data request; waiting for at least a first response from at least one nearby vehicle; calculating locations of all received objects at the time of a last self-driving vehicle perception time; matching objects with the calculated locations; generating a certainty value for each object; storing the objects in a perception capability database of the self-driving vehicle; generating an estimate of the self-driving vehicle's current perception range; and making the estimated current perception range available to all self-driving and safety applications onboard the self-driving vehicle. The method may further comprise deleting over-age readings from the perception capability database. The method may further comprise, prior to broadcasting, determining whether a most recent detected data response is older than a defined update time window. The method may further comprise, prior to broadcasting, determining whether a vehicle is entering or leaving the perception range of the self-driving vehicle.

[0081] In an embodiment, there is a method for estimating and broadcasting range of perception in a first vehicle, comprising: sending a detection data request; determining the location of all vehicles within a communication range based on the response messages; determining a certainty value for each vehicle based on a comparison of the first vehicle's measurements and received measurements; and responsive to a determination that an object's range detected through perception sensors and said object's range determined using received V2V messages is different: determining that the range of the perception sensor is restricted; and alerting the user that the autonomous mode cannot be used.

[0082] In an embodiment, there is a system comprising a processor and a non-transitory storage medium, the storage medium storing instructions operative to perform functions comprising: sending a detection data request from a first vehicle; receiving at least a first response to the detection data request from at least a second vehicle; determining the location of a plurality of vehicles within a detection range of the first vehicle based on sensor data from at least a first sensor measured by the first vehicle and the at least first response to the detection data request; determining a certainty value for each of the plurality of vehicles based on a comparison of the sensor data gathered by the first vehicle and the at least first response to the detection data request; and responsive to a determination that a range for one or more vehicles detected through the at least first sensor and a range determined from the at least first response is different: determining that a functional range of the at least first sensor is restricted.

[0083] In an embodiment, there is a self-driving vehicle comprising a processor and a non- transitory storage medium, the storage medium storing instructions operative to perform functions comprising: broadcasting a detection data request; waiting for at least a first response from at least one nearby vehicle; calculating locations of all received objects at the time of a last self-driving vehicle perception time; matching objects with the calculated locations; generating a certainty value for each object; storing the objects in a perception capability database of the self-driving vehicle; generating an estimate of the self-driving vehicle's current perception range; and making the estimated current perception range available to all self-driving and safety applications onboard the self-driving vehicle.

[0084] In an embodiment, there is a vehicle comprising a processor and a non-transitory storage medium, the storage medium storing instructions operative to perform functions comprising: sending a detection data request; determining the location of all vehicles within a communication range based on the response messages; determining a certainty value for each vehicle based on a comparison of the first vehicle's measurements and received measurements; and responsive to a determination that an object's range detected through perception sensors and said object's range determined using received V2V messages is different: determining that the range of the perception sensor is restricted; and alerting the user that the autonomous mode cannot be used.

[0085] Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.

[0086] FIG. 10 is a system diagram of an exemplary WTRU 102, which may be employed in embodiments described herein. As shown in FIG. 10, the WTRU 102 may include a processor 118, a communication interface 119 including a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and sensors 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.

[0087] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 10 depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.

[0088] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.

[0089] In addition, although the transmit/receive element 122 is depicted in FIG. 10 as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.

[0090] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.

[0091] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).

[0092] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. As examples, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.

[0093] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.

[0094] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.

[0095] FIG. 11 depicts an exemplary network entity 190 that may be used in embodiments of the present disclosure. As depicted in FIG. 11, network entity 190 includes a communication interface 192, a processor 194, and non-transitory data storage 196, all of which are communicatively linked by a bus, network, or other communication path 198.

[0096] Communication interface 192 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 192 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 192 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 192 may be equipped at a scale and with a configuration appropriate for acting on the network side— as opposed to the client side— of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 192 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.

[0097] Processor 194 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.

[0098] Data storage 196 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used. As depicted in FIG. 11, data storage 196 contains program instructions 197 executable by processor 194 for carrying out various combinations of the various network-entity functions described herein.

[0099] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.